Azure Website 101: Scaling in Azure

Earlier I discussed the types of scaling (vertical scaling, horizontal scaling) that are available to us. In this article I am going to focus on horizontal scaling, more specifically how I can scale my website out from the Azure portal to handle extra spikes and scale it back down in the off season to minimize the overall cost.

The first prerequisite of scaling is to upgrade the pricing tier of the target website. The lowest pricing tier that lets me scale my web app is the Basic tier, which means I can't use the scaling features if my web app is running in either the Free or Shared tier.

Upgrading the pricing tier is easy. All I have to do is open the Settings blade for the web app, click App Service Plan, and choose a Pricing tier from the new blade that opens.

Azure_Website_101_Scaling_in_Azure_01

The following table shows the maximum instance count available (at the time of this writing) for each tier.

Pricing Tier | Maximum Instance Count
Free | N/A
Shared | N/A
Basic | 3
Standard | 10
Premium | 20

Once my web app is running under any of these tiers (Basic, Standard or Premium) I can click the Scale option in the Settings blade, which opens the Scale blade. From there I can slide the Instances bar from left to right to increase the number of instances. Once I am done with this setting I have to hit the Save button.

The figure below shows that 3 instances are currently running for this web app and that I have asked to increase the instance count to 7.

Azure_Website_101_Scaling_in_Azure_02

It's good that Azure lets me change the instance count manually. But sometimes manual scaling (setting the instance count by hand) does not make sense for a given situation, and auto scaling is more suitable.

Auto Scaling : Auto scaling increases or decreases the instance count automatically according to the conditions you define. During demand spikes it increases the instance count to maintain performance, and when there is not enough load it decreases the instance count to reduce overall cost. Applications that have stable demand patterns, or that experience hourly, daily, or weekly spikes, are the scenarios where auto scaling should be implemented.

The following visualization shows the auto scaling feature in action. When there are not enough incoming requests, only one instance serves the clients.

Azure Website 101_AutoScale_01

But if requests from clients suddenly increase and a single instance is not able to process them all smoothly, Azure will increase the instance count so responses stay smooth. This is what auto scaling is!

Azure Website 101_AutoScale_02

Configuring an auto scale rule is as easy as the manual scaling we just did. First I have to open the Scale blade and choose CPU Percentage from the SCALE BY drop-down. Then I have to move the two sliders to define the minimum and maximum number of instances and the Target Range for CPU. Azure then scales out when average CPU stays above the upper bound of that range and scales back in when it falls below the lower bound, always staying within the instance limits I set.

Azure_Website_101_Scaling_in_Azure_03

If I want, I can also schedule when the rules apply (always, recurrence, or fixed date) and set additional performance rules so that auto scaling works exactly the way I define.

Azure_Website_101_Scaling_in_Azure_04

The following figure shows the options from which I can set such rules.

Azure_Website_101_Scaling_in_Azure_05

Note :

Even a site running a single instance in Premium mode benefits from high availability and greater robustness. Increasing the Premium instance count (scaling out) provides greater performance and fault tolerance.

Summary :

Scaling is one of the key features of the cloud, and the Azure team has made it highly customizable to fit business needs. As you can see, from the portal itself you are allowed a maximum of 20 instances, which is enough for most businesses. If your business requires more than 20 instances, you can either call the Microsoft support center or contact your account manager at Microsoft directly to have more instances allocated.

Good Reads :

All about Scaling

One of the key aspects that made the cloud popular is scalability, meaning you can increase or decrease your resources at any given time. Now, if you are new to the cloud, the question that might bother you is why we would want an application with fluctuating resources rather than fixed resources. The answer is simple: to save MONEY!

There is no definite number for how much we can save by moving a solution to the cloud, but a quick example gives a good sense of how much we actually waste when we stick with an on-premises solution. Let's start with a traditional scenario (S.S.C result publication): a huge number of requests hits the website during result publication time. To handle these extra requests you could add CPU cores and RAM, but such an expansion is hugely expensive. It also makes little sense to spend that amount of money only to handle spikes that last just two or three days a year.

Another good example is a line-of-business (LOB) application that is expected to be online only during business hours. Cloud providers usually charge for their services by the hour. Since only 64-65 hours of availability are needed per week, we could avoid paying for at least 103 hours (24*7 - 65 hours) each week just by removing the extra instances. In fact, if the application is designed properly, removing all compute nodes/instances for certain time periods is possible.

Azure_Website_101_Scaling_01

The above figure shows how wrong you would be in the long run if you set up your infrastructure based only on an analysis report covering a week or so.

There are mainly two types of scaling.

  1. Vertical Scaling
  2. Horizontal Scaling

Vertical Scaling (aka Scaling Up) :

Adding extra hardware like processing power, RAM or storage to your existing system is called vertical scaling. The best part of this type of scaling is that your data stays in one place and you don't have the hassle of managing multiple instances. The downsides are that it is not cost effective and that you have to deal with the configuration manually.

Horizontal Scaling  (aka Scaling Out) :

In the cloud you handle extra spikes by increasing the number of instances instead of buying new servers. The cloud provider ensures your data and apps are immediately available from all instances. When the load on the server is low, you can scale down by removing the extra instances. Thus, you only ever have the infrastructure you need and pay no more than you need to. This type of scaling is called horizontal scaling.

Azure_Website_101_Scaling_02

I like horizontal scaling because I don't want to waste even a single penny, and in the cloud it is much simpler to exploit extra capacity: I just scale in either direction to fit my needs. This minimizes the overall cost because after releasing a resource I do not have to pay for it beyond the current rental period, until I need to spin up a new instance again in the future.

The following figure shows a side by side comparison between vertical and horizontal scaling –

Aspect | Vertical Scaling | Horizontal Scaling
Physical limitation | Resources of a single host | Resources of a cluster
Cost of migration | High | Low
Additional software licenses | Low | High
Upgrade downtime | High | Low
Other concerns | No coordination overhead | Needs load balancing and a gateway

Heads Up :

When scaling, proper attention has to be paid to managing user session state.

Summary :

Every web application has a capacity limit, and its performance rises or falls with the number of visitors. So, to ensure good, stable performance for visitors we have to scale our server capacity in and out regularly. Adding new infrastructure is one way to handle this; increasing or decreasing the number of instances is another.


Azure Website 101: Stream logging

In software development, you will never anticipate every error while developing. There are times when code has been in place for days, weeks, months, even years without any issue and all of a sudden it crashes badly. You open up your project to glance at the code and there you notice the comment you left years ago: "When I wrote this, only God and I understood what I was doing. Now, God only knows." I know this makes you panic. This is where logging comes into play. A logging system constantly keeps an eye on every interaction between visitors and your website. If something unusual happens it keeps track of the issue so that you can do further analysis. That is why experts suggest that when something goes wrong, the first things a developer should check are the log files and the log database.

Enough talk! Let's get into the main business. I know I can add tracing statements in the following way:

System.Diagnostics.Trace.TraceInformation("Trace statement");
System.Diagnostics.Trace.TraceWarning("Warning statement");
System.Diagnostics.Trace.TraceError("Error statement");
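
For context, here is a minimal sketch of where such trace calls typically live in application code (the OrderService class and its members are hypothetical):

using System;
using System.Diagnostics;

public class OrderService
{
    public void PlaceOrder(int orderId)
    {
        // Shows up as "Information" once Application Logging is enabled
        Trace.TraceInformation("Placing order {0}", orderId);
        try
        {
            // ... the actual order processing would go here ...
        }
        catch (Exception ex)
        {
            // Shows up as "Error"; the portal's level filter decides what gets stored
            Trace.TraceError("Order {0} failed: {1}", orderId, ex);
            throw;
        }
    }
}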

During the development phase I can check all this trace information for my development site right in the Output window of Visual Studio. But now I want to see this information for my production site. What I need to do is enable some configuration in the Azure portal. Open up the Settings blade for your website and click Diagnostics logs.

Figure : Diagnostic Logs

Clicking the Diagnostics logs option will open up a new blade where I need to enable Application Logging.

From here I can also filter which levels of log messages (information, warning or error) I would like to have stored for future investigation.

Along with the application log, this panel also allows me to store the following types of log messages if I want.

  • Web Server Logging : This option lets me trace HTTP transaction information. I can use it to check which browsers clients mainly use, how long an action took to execute, and even the client IP address.
  • Detailed Error Logging : This lets me log detailed information for any error that occurs.
  • Failed Request Tracing : Enabling this gives me detailed information on failed requests. The log includes a trace of the IIS components as well, which can be useful in identifying the error.

Note :

Under the hood this service automatically logs deployment information when you publish content to your site. This gives you the flexibility to implement log tracing for your custom deployment scripts.

So far, I have set up all the configuration required to store my desired log information. Now, let's imagine something really bad happened and I need to download the log files. This is quite easy: I can go back to the Logs blade to check my FTP credentials.

Figure : Logs blade

Once I have the credentials in hand I can open up my favorite FTP client and download all the log files that were generated for my site. But as we saw earlier, Azure gives me the flexibility to store different types of log information, so it makes sense that all the log files are not stored in the same directory.

The directory structure that the logs are stored in is as follows:

  • Application logs : Navigate to /LogFiles/Application/ to download the application-specific logs. This folder may contain one or more text files.
  • Web Server logs : Web server log information can be downloaded from /LogFiles/http/RawLogs.
  • Detailed Error logs : Navigate to /LogFiles/DetailedErrors/ to download detailed error logs. This folder contains one or more .html files.
  • Failed Request traces : Navigate to /LogFiles/W3SVC#########/ to download failed request information. This folder contains one XSL file and one or more XML files. Make sure to download the XSL file into the same directory as the XML files; this ensures human-readable formatting of the data when viewed in Internet Explorer.
  • Deployment logs : Deployment logs can be downloaded from /LogFiles/Git.

Having the log information in flat files is good, but I would also like to watch it in real time. What I can do is open the Tools blade by clicking the Tools option.

Azure_Website_101_Logging_04

Clicking Log Stream will open up a new blade that shows live log information for the site. From here I can check both the application log and the web server log.

Figure : Live Log streaming

If you don't log errors, warnings and debug information, how will you know what went wrong when the site goes down? Send the information in emails, and also log to a file in case the emails don't go through. That way you still have an email if the file system is full, or a log file if the mail server is down. Proper error handling, in turn, lets the application deal with failures gracefully and display sensible error messages.
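
A minimal sketch of that belt-and-braces idea (the file path, SMTP host and addresses are placeholders you would swap for your own):

using System;
using System.Diagnostics;
using System.Net.Mail;

public static class FailureReporter
{
    public static void Configure()
    {
        // Also write trace output to a file under the web app's persistent LogFiles folder
        Trace.Listeners.Add(new TextWriterTraceListener(@"D:\home\LogFiles\my-app.log"));
        Trace.AutoFlush = true;
    }

    public static void Report(Exception ex)
    {
        // Goes to the application log and to the file listener registered above
        Trace.TraceError("Unhandled failure: {0}", ex);
        try
        {
            // Placeholder SMTP settings; the email still goes out if the file system is full
            using (var smtp = new SmtpClient("smtp.example.com"))
            {
                smtp.Send("alerts@example.com", "ops@example.com", "Site error", ex.ToString());
            }
        }
        catch (SmtpException)
        {
            // Mail server down: the file and application log above still have the details
        }
    }
}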

Wrapping Up:

From my experience, I can say a logging system becomes most helpful once the web app actually hits the road. It helps catch unexpected exceptions by collecting detailed information about situations you will never be able to reproduce in a debugger.


Azure Website 101: Traffic Routing

We often get requirements from clients asking us to develop a promotional campaign that they believe can help them capture the market. In many cases they design such a campaign based on a hypothesis and a market analysis report. Sometimes the hypothesis works; sometimes it fails badly. One way to avoid such failure is to test the hypothesis with real users. A/B testing (also known as split testing) does exactly that: you compare two campaigns by showing the two variants (let's call them A and B) to similar visitors at the same time. The one that produces the higher success rate determines which campaign you should stick with.

Azure_Website_101_Traffic_Routing_01

As you can imagine, this kind of setup (traffic routing based on a custom rule) normally requires a lot of work. Azure offers the feature out of the box. If your site is hosted as PaaS, log in to your Azure dashboard, tweak some settings and you are done! Azure will automatically load balance a percentage of the traffic going to your site between production and your designated slot, based on your configuration. In Azure this feature used to be called "Testing in Production"; the name has since changed slightly to "Traffic Routing".

Let's say our client came up with two campaign offers and would like to figure out which one creates a positive impact on users. They would like to route 10% of the traffic to the first campaign and another 10% to the second campaign. The remaining 80% of the traffic would go to the old site.

To set this up, first log in to your Azure account and select the website whose visitors you would like to redirect. Clicking the Settings tab will open the Settings blade as shown in the following figure.

Figure : Settings Blade

Click Traffic Routing to open up a new blade where you can configure the traffic. Choose a deployment slot and set its traffic percentage.

Figure : Traffic Routing Blade

Configure all the slots from this window. You can also create a new slot by clicking the Add Slot option at the top of the blade.

Figure : Traffic configuring for different slot

Here, we have set up 10% traffic each for Campaign-Offer-1 and Campaign-Offer-2; the remaining 80% of the traffic will go to the actual site. Then we can hook up Application Insights, New Relic or some other event/diagnostics system to measure the difference in user reaction between the two campaigns. The variant that produces the most positive response will replace the old site.
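
If you want your telemetry to record which slot actually served a request, one option is to read the routing cookie and tag your events with it. A minimal sketch, assuming the x-ms-routing-name cookie that slot traffic routing uses (verify the name against your setup):

using System.Web;

public static class SlotTracker
{
    // Returns the deployment slot that handled the current request.
    // Assumption: slot traffic routing marks routed visitors with an "x-ms-routing-name" cookie;
    // when the cookie is missing, the request was served by the production slot.
    public static string CurrentSlot()
    {
        HttpCookie cookie = HttpContext.Current.Request.Cookies["x-ms-routing-name"];
        return cookie == null ? "production" : cookie.Value;
    }
}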

Heads Up:

  • To ensure a seamless experience for your users, make sure you write enough code to manage their authentication and session information across slots.
  • If you need to run stress tests, keep in mind that slots share the same resources as your production site, so such a test could affect production. In that case, host the test version separately (for example, in its own App Service plan).

Wrapping Up:

By default 100% of the traffic goes to the Production slot, but there can be situations where you want to redirect some of your customers to a different version of your site and find out whether the new changes have a positive impact. Situations like that are easily handled with the feature named "Traffic Routing".


Azure Website 101: Endpoint Monitoring

To run a successful B2C business, an organization must stay in touch with its customers, and one of the best ways to do that is to build a website. That's why every B2C organization wants to make sure its site is accessible from all around the world 24×7. Now, how will you make sure that your client's website is up and available not just from one location but from different countries and geolocations as well?

If you said "Ping" then you are absolutely right. You have to ping the site from different places to check whether your website is returning "200" as its status code. You have a range of options, from free to paid services. System Center Global Service Monitor in System Center 2012 Operations Manager is another good option for monitoring the availability, performance and reliability of your website.

But if you are using Azure then you are lucky to have everything right at home; most features you might need are probably already there. Website endpoint monitoring is one of them. Endpoint monitoring lets you monitor the availability of HTTP or HTTPS endpoints from geo-distributed locations.

Heads Up :

The only caveat is that endpoint monitoring is only available for instances running in Reserved mode!

To get started, visit the old portal first, choose the website you are interested in, go to the configuration tab and scroll down to the monitoring section.

The monitoring section allows you to add up to two URLs for monitoring. Add a friendly name for each URL and select the locations around the world from which you wish to monitor your site's availability. Each of the provided URLs can be pinged from up to 3 test locations. After you have saved the configuration, the web site's URL will be tested periodically (every 5 minutes) from each of the configured locations.

To see the results of the tests, select your website in the new portal and you will see a nice visualization on the dashboard.

Azure_Website_101_Endpoint_Monitoring_02

Availability is monitored using HTTP response codes, and response time. A monitoring test fails if the HTTP response code is greater than or equal to 400 or if the response takes more than 30 seconds. An endpoint is considered available if its monitoring tests succeed from all the specified locations.
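
Since a test fails on any status code of 400 or higher, the URL you monitor can be a lightweight health-check action that deliberately fails when a dependency is unhealthy. A minimal ASP.NET MVC sketch (HealthController and CheckDatabase are hypothetical names):

using System.Net;
using System.Web.Mvc;

public class HealthController : Controller
{
    // Point endpoint monitoring at this action, e.g. http://mysite.azurewebsites.net/health/ping
    public ActionResult Ping()
    {
        if (!CheckDatabase())
        {
            // 503 is >= 400, so the monitoring test from each location will fail
            return new HttpStatusCodeResult(HttpStatusCode.ServiceUnavailable);
        }
        return Content("OK"); // 200 keeps the endpoint marked as available
    }

    private bool CheckDatabase()
    {
        // Placeholder: a real check would open a connection and run a trivial query
        return true;
    }
}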

Note :

You can see the last five tests from each location but can’t currently see any history past that.

If you want, you can also configure which metrics from the various endpoints are shown in the chart.

Azure_Website_101_Endpoint_Monitoring_03

The dashboard periodically updates the monitoring results, and you can drill into them for further investigation. Having a live view of your ping results is good, but it is not feasible to watch this dashboard 24×7. You are probably asking, "What good is it if it can't notify me of a serious failure?"

Well, in the world of Azure you are blessed with plenty of features. You can certainly create a rule that sends you a notification when a certain threshold is crossed. For example, you can add a rule to email you when the uptime crosses a percentage you set, or when your visitors receive a certain HTTP status code as a response.

To do so, click the Add alert option from the Metric blade.

Azure_Website_101_Endpoint_Monitoring_04

Clicking the Add alert option will open up a new blade where you can set the limits (with a condition) and the email address at which you expect to receive the alert notification.

Azure_Website_101_Endpoint_Monitoring_05

Summary:

Endpoint monitoring is one of the cool features you should implement to make sure your site is accessible from around the world, and if it goes down for any reason you get an alert notification by email as soon as possible so you can take steps to fix the production site.


Azure Website 101: Enhance the security of your WebApp

When you type a website URL in the browser and hit Enter, your browser starts rendering that site for you. Most people think requesting a site only involves fetching the appropriate page from the server and rendering the information with the proper style. But that is not actually the case; in fact a lot of things happen behind the scenes in the first 500 milliseconds that you probably never think of. Let me explain it a bit.

Azure_Website_1010_Enhance_Security_01

Figure : HTTP Request-Response diagram

Whenever a request is made from a browser, the browser also sends HTTP headers to the web server along with the HTTP request. Some of these headers are useful for the server because they carry information needed to handle that particular request. For example, with each HTTP request the browser sends a User-Agent header; from it the server can tell which browser the client used to make the request, which version it is running, and so on. On the other hand, along with the requested content the server also returns a few HTTP headers, some of which carry important information such as the content type (how the response is going to be rendered) and how long the response should remain cached (Cache-Control).

Some of the HTTP headers returned by the server are not required for rendering the site's content. In fact some of these headers (Server, X-Powered-By, X-AspNet-Version) can open a security hole. Now let's consider the disaster scenario: what would happen if someone knew a vulnerability in a particular web server and also the exact ASP.NET version it runs? With that combination they could exploit such sites completely.

And for this very reason the Internet Engineering Task Force (IETF) has the following to say (RFC 2068):

Revealing the specific software version of the server may allow the server machine to become more vulnerable to attacks against software that is known to contain security holes. Implementers SHOULD make the Server header field a configurable option.

Along with the security threat these headers carry with each response, they also have a tiny impact on performance. A rough calculation shows that their inclusion adds around 100 bytes to each HTTP response. That is such a small value that you might not think it worth considering, but think again: what happens if your server is receiving millions of requests? At 100 bytes per response, a million responses waste roughly 100 MB of bandwidth. How much performance benefit are you actually giving up?

Note:

Along with the security threat it exposes, you are wasting around 100 bytes with every SINGLE response! The IIS Lockdown guidance also recommends turning these headers off.

Now that you are convinced to remove these extra HTTP headers that travel with every response, let's see how to remove them. But first, a quick glance at the headers in question:

  • Server : Specifies the web server version.
  • X-Powered-By : Indicates that the website is "powered by ASP.NET."
  • X-AspNet-Version : Specifies the version of ASP.NET in use.
Removing Server Header :

To remove this header, open up Web.Config and add the snippet shown below.
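
The Server header is stamped by IIS itself, so it is a little trickier than the other two. One common approach (a sketch that assumes the URL Rewrite module available on Azure Websites) is an outbound rewrite rule that blanks the header's value:

<system.webServer>
  <rewrite>
    <outboundRules>
      <!-- Blanks the Server header on every response (requires the URL Rewrite module) -->
      <rule name="RemoveServerHeader">
        <match serverVariable="RESPONSE_Server" pattern=".+" />
        <action type="Rewrite" value="" />
      </rule>
    </outboundRules>
  </rewrite>
</system.webServer>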

Removing the X-Powered-By Header :
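
This header can be removed with a plain customHeaders entry in Web.Config, for example:

<system.webServer>
  <httpProtocol>
    <customHeaders>
      <!-- Stops IIS from advertising "X-Powered-By: ASP.NET" -->
      <remove name="X-Powered-By" />
    </customHeaders>
  </httpProtocol>
</system.webServer>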

Removing the X-AspNet-Version Header :
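
The ASP.NET version header is controlled by the httpRuntime element in Web.Config:

<system.web>
  <!-- enableVersionHeader="false" suppresses the X-AspNet-Version header -->
  <httpRuntime enableVersionHeader="false" />
</system.web>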

Summary :

From a thousand feet, the seemingly innocuous data our server transmits looks of no use to anyone, but when we dig a little deeper we find these chunks of information suddenly become quite useful to evildoers. Additionally, by removing them we take one more small step toward better server performance.


Azure Website 101: KUDU, the secret door!

A good number of people prefer an IaaS (Infrastructure as a Service) solution over PaaS (Platform as a Service), and the reason they usually give is that they can RDP into the server to make changes to their website. On the other hand, choosing an IaaS solution means extra work, such as updating the OS and its patches and maintaining the network infrastructure, falls on the developer's end. To me that is just a waste of your time. I like PaaS solutions (Microsoft Azure Websites (MAWS) is a PaaS) because they let me focus on my project rather than the infrastructure my solution runs on. But what if I would still like a little more control over my websites once they are deployed in MAWS?

The solution is to use the secret door of Azure Websites: the KUDU debug console!

Tell me more about that secret door (KUDU) :

The KUDU console is a tool that gives you both command-line and file-browser access to your site, from which you can get information about the server running your MAWS. Additionally, you can execute PowerShell scripts to do a variety of work, from creating a brand new website to running a scheduler/WebJob.

Did You Know?

Previously, KUDU was only used for local Git deployment.

The image below shows the KUDU dashboard. It contains several tools (Debug Console, Diagnostic Dump, Log Stream, etc.) that you can use to manage, monitor and debug your site.

Where is the secret door?

As the title says, it's a secret door, so you won't see it unless you type the magic word in your browser's URL. Simply insert scm into the URL of your website, between your site name and the azurewebsites.net domain, like this: {yoursite}.scm.azurewebsites.net. For example, if your site is testsite.azurewebsites.net then to reach the secret world you have to hit testsite.scm.azurewebsites.net.

As soon as you try to pass through the secret door, the doorkeeper will ask for credentials. The console runs over HTTPS and is protected by your deployment credentials. Once you enter your Microsoft account credentials (unless you are already signed in), you reach the Kudu dashboard of your site.

Azure_Website_101_KuduConsole_01

Figure : Kudu Dashboard

Now that we have access to the KUDU dashboard (the secret door to an Azure website), the next thing we would like to do is figure out how much power and control it gives us over the site.

As you can see, the dashboard tabs such as Environment, Debug console and Process Explorer have pretty self-explanatory titles. Let's discuss them one by one.

Environment

The Environment page helps you see what your website "sees" in terms of the environment it is currently running on. While the "System Info" section helps you find out the details of the server, other sections like AppSettings and Environment variables show their respective information.

NOTE:

While there is a Connection strings section, it may not be exactly what your site sees as connection strings.

Debug Console

The KUDU console offers you the best of both worlds. Depending on your preference you can use either the CMD or the PowerShell terminal to run arbitrary external commands and scripts. The thing I like most about this page is that the console window and the UI portion (the tree view of the folder structure) work together: the "current directory" is synced between the file explorer and the terminal for ease of use.

Figure : KUDU Debug Console

In fact you can use this as a replacement for an FTP client! Both the UI and the terminal let you browse the file hierarchy, and from there you can download, edit or delete a particular file (or whole folders). It even gives you the flexibility to drag & drop files to upload them to the appropriate location. You can check this for a good reference.

A good example of a scenario where this feature comes in handy: imagine you are in the middle of a journey and your client calls you and asks for a hot fix, some minor change on the production site. You don't have your development laptop with you, but you do have access to a browser! In that case you can make basic, minor fixes right away through the secret door I introduced here (i.e. the Kudu Debug Console).

Heads UP:

Keep in mind that you are editing the live site here, so be CAREFUL and don't forget to apply the same change to the source in your repository later. Also note that if you change the Web.Config file your site will restart, which means all the user sessions (of visitors currently connected to your site) will be dumped with the restart!

Some of you are probably planning to tell me about the good practices we should follow. Yes, I know, best practices say "don't do this!" But don't you think that in the middle of a journey, with no access to your development machine and the site down, it is just amazing to have direct access to your site from nothing more than a browser?

Process Explorer

Process Explorer shows the processes that are currently running. Clicking the Properties button will show the details of that particular w3wp instance.

Figure : Kudu Process Explorer

This pops up a new window where you can find the running threads, attached handles and environment variables. In addition, you can also download memory dumps for further analysis.

Azure_Website_101_KuduConsole_04

Figure : Inside w3wp.exe process

Tools

Other tools that ship with the KUDU dashboard are Diagnostic dumps, Log stream and Web hooks. Log stream is particularly helpful when you want to watch live log messages (the ones you send through System.Diagnostics.Trace.WriteLine) popping up right on your Kudu dashboard.
