Windows Azure: Configuration

Are you ready for the next portion of information about Azure? I hope you are. Today I will focus on the configuration options of Windows Azure, where and how you can set up your Azure application and how to turn logging on and off. Also, I will share some recommendations about how to configure Kentico CMS 5.5 R2 and what the situation will look like in 6.0.

Configuration options

Let’s start in a more general way with a description of how Windows Azure works internally. You already know from my first post about the existence of the Windows Azure fabric controller, which manages your roles for you. If we omit the details, it works this way:
  1. You create a package on your development machine and upload it to the cloud via the portal (you can of course use other options but let’s say you choose the portal — it doesn’t matter right now). You also upload the service configuration file there.
  2. The package is saved as a “Golden image”. The fabric controller uses this package for creating VMs (virtual machines) with instances of your roles according to your configuration.
  3. The fabric controller monitors the health of your instances so that it can start a new instance if one of them suffers a failure.
  4. The built-in load balancer receives requests and forwards them to the instances of your roles.

In other words, your job is to prepare the package and deploy it to Azure; everything else is done automatically for you. This means that you only care about your application, not about the underlying software (e.g. the operating system) or hardware. This is one of the benefits of cloud computing, by the way. But it also determines what you can and cannot influence. For example, if you need to use Windows XP because you are using some library which doesn’t support newer OS versions, you cannot move your application to Azure.
In my previous post I wrote that most of the settings are stored in two new configuration files. Now let’s take a deeper look at what you can find there.

Set number of instances

You can influence the number of instances in the service configuration file (ServiceConfiguration.cscfg). You can add the Instances tag directly under your role, for example:

<Role name="CMSApp">
    <Instances count="2" />
</Role>

This will set 2 instances for the CMSApp role.
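For context, a complete ServiceConfiguration.cscfg containing this setting looks roughly like the following sketch (the serviceName value is just an illustrative example):

```xml
<?xml version="1.0" encoding="utf-8"?>
<ServiceConfiguration serviceName="KenticoCMS"
    xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration">
  <Role name="CMSApp">
    <Instances count="2" />
    <ConfigurationSettings />
  </Role>
</ServiceConfiguration>
```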

Input and internal endpoints 

Every role can have endpoints, but they are optional. Endpoints define communication channels and you can set them in the service definition file (ServiceDefinition.csdef). There are two types of endpoints. Input endpoints define a public interface for the given role and are exposed via the load balancer to the outside world. When you send a request to this type of endpoint, it is forwarded to any of the role’s instances.
Internal endpoints address an exact instance of the specified role. However, you can use this type of endpoint only inside the Azure datacenter, not from the outside world. Their purpose is internal communication between roles and especially between instances.
A good example of using internal endpoints is our web farm module. For those of you who don’t know how it works, I’ll give a little reminder here. The web farm module is designed for multi-server environments like Windows Azure. It synchronizes cached data between servers (instances in the Azure world). Each server is defined by a name and a URL. When one server changes some cached data, it creates a task for the other servers and contacts them via their defined URLs in order to process these tasks. But if an instance sent the request to an input endpoint, the request would go through the load balancer and only one instance would receive it, so in this case internal endpoints have to be used. This functionality isn’t implemented in Kentico CMS 5.5 R2 but it will be part of version 6.0.
Example of endpoint settings:

<WebRole name="CMSApp">
    <Endpoints>
        <InputEndpoint name="HttpIn" protocol="http" port="80" localPort="80" />
        <InternalEndpoint name="InternalHttpIn" protocol="http" port="8080" />
    </Endpoints>
</WebRole>

In this example the CMSApp role has two endpoints, one of each type. The input type uses http and the public access port is 80. If you create the Azure project using SDK 1.3 or newer, you can also specify the localPort. This attribute determines on which port your application actually runs. From the Kentico CMS perspective, it is important to set port and localPort to the same value because localPort is used to generate absolute URLs in our CMS.

Local storage

I haven’t talked about the local storage feature yet. In addition to the Windows Azure storage services, you can store files on the hard drive of every instance. This drive is a standard NTFS drive, so you can access this storage with the System.IO API. The bad news is that there is no backup for instance drives (you already know why from the beginning of this post) and this type of storage is not durable. It should be used only as a cache or as temporary storage, which is its intended purpose. You can set up local storage in the service definition file:

<WebRole name="CMSApp">
    <LocalResources>
        <LocalStorage name="CMSTemp" cleanOnRoleRecycle="true" />
    </LocalResources>
</WebRole>

This creates local storage named CMSTemp for all instances of the CMSApp role. The cleanOnRoleRecycle attribute determines if the storage should be erased when a role is recycled (restarted).
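If you also want to cap how much disk space the storage can take, you can add the sizeInMB attribute; a sketch (the 1000 MB value is just an illustrative example):

```xml
<LocalResources>
    <LocalStorage name="CMSTemp" cleanOnRoleRecycle="true" sizeInMB="1000" />
</LocalResources>
```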

Where the role will run

In newer versions of the SDK (1.3 and newer) there is an option to set where you want to run your role. The first option, which is the only one in older versions of the SDK, is Hosted Web Core. In this case your application runs in a special lightweight IIS mode where only the core modules are loaded. You can find more information about this option here.
The second option is to run under full IIS. In this case you can use all capabilities of IIS, for example run multiple sites within one Azure web role. This article explains the main advantages and differences.
When you create a new Azure project, the default behavior is to run under full IIS. Let’s look at the following part of the service definition file:

<WebRole name="CMSApp">
    <Sites>
        <Site name="Web">
            <Bindings>
                <Binding name="HttpIn" endpointName="HttpIn" />
            </Bindings>
        </Site>
    </Sites>
</WebRole>

The Sites element decides whether the application runs under full IIS or not, so in this case the CMSApp role runs under full IIS. To run under Hosted Web Core, you need to comment out or delete this section.
Running applications under full IIS is great, but unfortunately there are some issues with it. Some of them are described here. Also, if you run Kentico CMS in Visual Studio under full IIS, the application doesn’t start in most cases. This problem is not caused by our CMS; it is a known bug in the SDK. More information about this issue can be found in the Azure deployment guide.

Custom application settings

You can also add your own settings to the configuration files. The service definition file is used to declare these settings:

<WebRole name="CMSApp">
    <ConfigurationSettings>
        <Setting name="CMSConnectionString" />
    </ConfigurationSettings>
</WebRole>

The example above shows how to define a setting for the CMSApp role called CMSConnectionString. The value must be set in the service configuration file:

<Role name="CMSApp">
    <ConfigurationSettings>
        <Setting name="CMSConnectionString" value="" />
    </ConfigurationSettings>
</Role>

As you can see, the only difference between the definition and the configuration is the value attribute. Remember, if you want to use your own setting, you have to specify it in both files, otherwise the site will not start. The best practice here is to declare all settings which you may want to use in the service definition file and set blank values for them in the service configuration file. Then, when you start using a setting, you don’t need to redeploy the whole solution, because the service configuration is placed outside of the package and can be updated separately.
These are the most commonly used settings. My intention here was to explain a few important settings rather than give a list of all of them. You can also change these settings in the properties of the role, but from my experience, editing the configuration files manually is faster.

Cloud environment configuration

Now you know how to set up the application, but what about the cloud infrastructure? You have multiple choices here:
  1. Portal – definitely the easiest solution and the recommended way for beginners. The portal must also be used for several operations: signing up for beta programs, uploading management certificates and viewing bills. Portal authentication is based on a Live ID and it’s recommended to have only one person with credentials for it.
  2. PowerShell or the Azure management console – the ideal way for developers. It doesn’t require credentials for the portal, management certificates are used for authorization, and tasks can usually be automated.
  3. Directly via the Service Management API – I don’t recommend using the REST management API directly. In most cases you don’t need to perform management tasks directly in your code. And if you do, you can use the csmanage sample, which already implements the REST API, so you can use it to build your own management logic.

Web.config file vs. service configuration file

As I already said above, the service configuration file is a single file which can be updated without redeployment. Because of this, it is more appropriate to store application settings in the service configuration file. But this approach also has a few disadvantages. The biggest one from my point of view is the missing support for encryption of configuration settings, which could be a security issue if you store the connection string there.
That is why we have decided to support both. With Kentico CMS 6.0, you will be able to store settings in the web.config file and also in the service configuration file. When an application setting is needed, Kentico CMS first looks into the web.config, and if it isn’t there, it tries to read the setting from the service configuration file.
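The lookup order described above can be sketched in pseudocode (the names here are illustrative, not the actual Kentico API):

```
value = webConfig.appSettings[key]
if value is missing and running in Azure:
    value = serviceConfiguration.getSetting(key)
return value
```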

Logging in Windows Azure

When developing, one of the main reasons for using the Azure emulator instead of the real cloud directly is debugging. By debugging I mean starting the application in Debug mode, where you can use breakpoints, watches, the call stack and all the other cool features. You simply can’t do that with your cloud role. Fortunately, you have other options. First of all, with the Ultimate edition of Visual Studio 2010, you can use IntelliTrace to debug. More information on this topic is here. Another option is to connect to your role via remote desktop. Here you can find a step-by-step tutorial on how to do it. These methods are very useful, especially while you are developing. But once you deploy your app into the production environment, it’s time to use good old logging.
Logging in Windows Azure works the following way:
  1. You set up logging in your code by setting the DiagnosticMonitorConfiguration object (more on that below).
  2. Logs are transferred to Azure storage — you can choose if you want to transfer periodically or on demand.
  3. You can download or view logs from Azure storage.
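Before any transfer can happen, the diagnostics storage account has to be set in the service configuration file. Assuming the SDK 1.3+ Diagnostics plugin, the setting looks like this (UseDevelopmentStorage=true points to the local storage emulator; in production you would use a real storage account connection string):

```xml
<Role name="CMSApp">
    <ConfigurationSettings>
        <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString"
                 value="UseDevelopmentStorage=true" />
    </ConfigurationSettings>
</Role>
```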
When you want to set up logging, you first need to get the default configuration using the following line of code:

var config = DiagnosticMonitor.GetDefaultInitialConfiguration();

Then, you can collect the following information (each has a code example):
  • Performance counters
  • Infrastructure logs
  • Windows events
  • Trace logs
  • Custom directories
  • IIS logs
  • Complete crash dumps
The code below shows how to collect each type of information:

// Performance counters
PerformanceCounterConfiguration procTimeConfig = new PerformanceCounterConfiguration();
procTimeConfig.CounterSpecifier = @"\Processor(*)\% Processor Time";
procTimeConfig.SampleRate = TimeSpan.FromSeconds(5.0);
config.PerformanceCounters.DataSources.Add(procTimeConfig);
config.PerformanceCounters.ScheduledTransferPeriod = TimeSpan.FromMinutes(1);

// Infrastructure logs
config.DiagnosticInfrastructureLogs.ScheduledTransferLogLevelFilter = LogLevel.Verbose;
config.DiagnosticInfrastructureLogs.ScheduledTransferPeriod = TimeSpan.FromMinutes(1);

// Windows events
config.WindowsEventLog.ScheduledTransferLogLevelFilter = LogLevel.Verbose;
config.WindowsEventLog.ScheduledTransferPeriod = TimeSpan.FromMinutes(1);

// Trace logs
config.Logs.ScheduledTransferLogLevelFilter = LogLevel.Verbose;
config.Logs.ScheduledTransferPeriod = TimeSpan.FromMinutes(1);

// Custom directories (IIS logs are included in the default directory configuration)
config.Directories.ScheduledTransferPeriod = TimeSpan.FromMinutes(1);

// Complete crash dumps
CrashDumps.EnableCollection(true);

// Apply the configuration and start the diagnostic monitor
DiagnosticMonitor.Start("Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString", config);

Failed request tracing for IIS is configured in the web.config file rather than in code:

<system.webServer>
    <tracing>
        <traceFailedRequests>
            <add path="Default.aspx">
                <traceAreas>
                    <add provider="ASP" verbosity="Verbose" />
                    <add provider="ASPNET" areas="Infrastructure,Module,Page,AppServices" verbosity="Verbose" />
                    <add provider="ISAPI Extension" verbosity="Verbose" />
                    <add provider="WWW Server" areas="Authentication,Security,Filter,StaticFile,CGI,Compression,Cache,RequestNotifications,Module" verbosity="Verbose" />
                </traceAreas>
                <failureDefinitions statusCodes="200-599" />
            </add>
        </traceFailedRequests>
    </tracing>
</system.webServer>

Recommended settings for Kentico CMS

There are some recommended values for both Azure and Kentico CMS which should be set in order to run Kentico CMS on Azure with the best performance and without any problems. These setting values follow from the Kentico CMS architecture, the Azure model and the current limitations of running Kentico CMS on Azure.

Azure recommended settings

  • All of Kentico CMS runs under one web role and this web role should have only one instance. This is a current limitation of Kentico CMS 5.5 R2.
  • All endpoints should have the same port and localPort values.
  • The smart search, web analytics and media libraries modules need a storage service, Azure drive and local storage.

Kentico CMS 5.5 R2 recommended settings

  • Modules should store data into the database.
  • The debug and event log modules must not store information in files.
  • For smart search, web analytics and media libraries, the storage account name and key must be set in the configuration file.
  • WebDAV support should be disabled.

Kentico CMS 6.0 recommended settings

  • All endpoints should have the same port and localPort values.
  • All other settings are up to you.
The purpose of this listing is just to give you an idea of what you need to set up. An exact step-by-step manual of how to do it is part of the Azure deployment guide.

Huh, we are at the end of today’s post. I hope it was useful to you. In the next part of this series, we will look at one of the most important requirements for a Windows Azure application – statelessness. Stay tuned.

Dominik Pinter

I'm a fan of cloud computing (primarily Windows Azure) and I really like to dig into web application security. My blog is focused on everything related to Kentico, .NET Framework, Cloud platforms and web application security.