Accessing Custom IIS 7 Configuration at Runtime

I’ve been working on a series of posts and Webcasts in my “spare time” directed at some of the areas in which I am most frequently asked questions. I am currently developing topics dealing with IIS configuration. It seems logical that this would bring me a great deal of questions, considering that IIS 7.0 uses a completely different configuration system than previous versions did. That said, I wanted to follow up my Webcast (“Extending IIS Configuration”) with an article, and another Webcast, dealing with accessing custom configuration data at runtime.

In the webcast, I described how to extend an existing IIS configuration system object. I want to talk a little more about that. In this blog post I will take on four topics:

  1. Give you some warnings that were not provided in the Webcast regarding extending native IIS configuration
  2. Show you how to access your extended configuration data at runtime in a managed-code HTTP module
  3. Describe how to improve the example used in the Webcast by moving the data into a custom configuration section
  4. Show you how to access your custom configuration section at runtime in a managed-code HTTP module

Extending Native Configuration Elements

Introduction

I talked with Carlos today to discuss the validity of extending native configuration elements for our own purposes. While there are certainly valid reasons to do this, it doesn’t come without risk to your application. Any time you modify a piece of data that was created by another party — Microsoft or otherwise — you run the risk of having your data overwritten or corrupting the data that the other party needs to use. That said, let’s revisit the sample that is frequently given in many articles and was reused in my Webcast.

Warnings

In our Webcast, we discussed the idea that you could extend a native configuration element simply by creating a new configuration element schema and placing it in the %windir%\system32\inetsrv\config\schema directory. For our example, we created a file named SiteOwner_schema.xml and put the following code into it:

<configSchema>
    <sectionSchema name="system.applicationHost/sites">
        <attribute name="ownerName" type="string" />
        <attribute name="ownerEmail" type="string" />
        <attribute name="ownerPhone" type="string" />
    </sectionSchema>
</configSchema>

All we have done is matched the same section schema path as the existing “system.applicationHost/sites” sectionSchema element in our IIS_schema.xml file. We then added schema information for our three new attributes.

This simple step allows us to immediately start adding attributes to our site configuration, and to access that data programmatically, through PowerShell, and through the AppCmd utility (a quick sketch of the programmatic route follows below). However, you should be aware of a few dangers before you do something of this nature.
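Here is a minimal sketch of setting the new attributes from managed code with Microsoft.Web.Administration. The site name and owner values are placeholders, and it assumes the schema file above is already in place:

using System;
using Microsoft.Web.Administration;

class SetSiteOwner
{
    static void Main()
    {
        using (ServerManager manager = new ServerManager())
        {
            // "Default Web Site" is a placeholder; use any site on your server
            Site site = manager.Sites["Default Web Site"];
            site.SetAttributeValue("ownerName", "Jane Doe");
            site.SetAttributeValue("ownerEmail", "jane@example.com");
            site.SetAttributeValue("ownerPhone", "555-121-2121");

            // Write the change back to applicationHost.config
            manager.CommitChanges();
        }
    }
}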

The first problem is that many of our native configuration elements have associated “default” configuration elements. For instance, the <site/> element has an associated <siteDefaults/> element. When modifying the configuration schema for <site/> , you should also modify the <siteDefaults/> configuration schema to match. That said, you might consider changing the above sample configuration schema as follows:

<configSchema>
    <sectionSchema name="system.applicationHost/sites">
        <attribute name="ownerName" type="string" />
        <attribute name="ownerEmail" type="string" />
        <attribute name="ownerPhone" type="string" />
        <element name="siteDefaults">
            <attribute name="ownerName" defaultValue="undefined" type="string" />
            <attribute name="ownerEmail" defaultValue="undefined" type="string" />
            <attribute name="ownerPhone" defaultValue="undefined" type="string" />
        </element>
    </sectionSchema>
</configSchema>

As you can see, we have now added a matching <siteDefaults/> schema definition alongside our changes to the <site/> element. This isn’t the end of the story, however. While changing this might have seemed simple, modifying other elements becomes much more complex. Consider the fact that when you modify the schema for <application/> elements, there are several places where <applicationDefaults/> are kept. You will find <applicationDefaults/> under both the sites collection definition and under each associated site element definition. If you modify the application schema definition, you must be sure to edit the schema for both of the applicationDefaults definitions. This gets even more complex if you try to modify the <virtualDirectoryDefaults/> definition. You’ll find that defined three times in the native configuration: under the sites collection definition, under the application element definition, and under the site element definition itself.

You can see how quickly your changes can grow in complexity. Ensuring that you make matching changes everywhere they are needed becomes all the more risky.

The second problem is that you may not be able to rely on a third party to keep their schema definition in line with your expectations. From one minor revision to the next, a configuration schema you depend on can change before your very eyes. This becomes a huge issue. Your future installation may need to make many dependency and version checks to decide which schema document to install, and your module may even need to make checks in case the user upgrades the third-party component without installing the upgraded schema. This can get very tricky, to say the least.

The third problem we can run into when we take a dependency on existing configuration is that external code may not behave the way we want it to. For instance, there is no guarantee that utilities and code written to handle those elements will know what to do if they encounter data they don’t expect. Sure, we can have IIS validate our changes through schema, but there is no guarantee that the components which handle those elements can take on the challenge of handling the extended data. There is even a good chance that you could cause an Access Violation (read: crash) if the data is not handled properly. There is also a good chance that the components which normally handle those elements may overwrite your custom data.

So before we go on to show you how to access this extended configuration element data, please be warned that it may not be the best solution to your problem. Later in this article, I will describe how to modify our example to give us the same functionality without altering the schema definition of the existing elements.

Accessing Extended Native Configuration Elements at Runtime

Introduction

With the warnings having been fairly posted, I’ll show you quickly how to access your custom configuration data at runtime from an HTTP module. Once you have saved your schema file to the appropriate directory, it’s time to write the HTTP module. The module is going to be very simple. We need to create a module that uses the PreSendRequestHeaders event. In this event we will get our “system.applicationHost/sites” configuration section, loop through the configured sites looking for the site on which the current HTTP request is executing, and then write HTTP headers out to the client. Just to make sure that our sample is working, we’ll wrap up by adding JavaScript to an HTML page that requests the headers from the Web site and displays them in a message box.

Getting Started

For this sample, I used Visual Studio 2008, but you can use plain old notepad and command-line compilation if you are feeling particularly froggy.

Visual Studio 2008 Walk-through

The following walk-through will create an HTTP module that can display the site owner details from the configuration system.

  1. Click on the File | New | Project… menu item.
  2. In the “Project types” tree on the left, drill down to the C# | Windows tree element.
  3. In the “Templates” selector on the right, select the “Class Library” template.
  4. Change the Name of the project to “SiteOwner”.
  5. Change the Location to “c:\samples”.
  6. Verify that the “Create directory for solution” checkbox is checked.
  7. Click OK.
  8. Click on the Project | Add Reference… menu item.
  9. Under the .NET tab, select the component with the Component Name of “System.Web”.
  10. Click OK.
  11. Rename the “Class1.cs” file in the Solution Explorer to “SiteOwnerHeaderModule.cs”.
  12. Paste the following code into the “SiteOwnerHeaderModule.cs” file:
    using System;
    using System.Web;
    using System.Web.Hosting;
    using Microsoft.Web.Administration;
    using System.Collections.Specialized;

    namespace SiteOwner
    {
        public class SiteOwnerHeaderModule : IHttpModule
        {
            public void Dispose() { }

            public void Init(HttpApplication context)
            {
                context.PreSendRequestHeaders += new EventHandler(OnPreSendRequestHeaders);
            }

            void OnPreSendRequestHeaders(object sender, EventArgs e)
            {
                // Get the site collection
                ConfigurationSection section =
                    WebConfigurationManager.GetSection("system.applicationHost/sites");
                ConfigurationElementCollection cfgSites = section.GetCollection();
                HttpApplication app = (HttpApplication)sender;
                HttpContext context = app.Context;

                // Loop through the sites to find the site of
                // the currently executing request
                foreach (ConfigurationElement cfgSite in cfgSites)
                {
                    if (HostingEnvironment.SiteName == (string)cfgSite.GetAttributeValue("name"))
                    {
                        // This is the correct site; get the site owner attributes
                        SiteOwner siteOwner = new SiteOwner(cfgSite);
                        NameValueCollection headers = context.Response.Headers;
                        headers.Add("Site-Owner-Name", siteOwner.Name);
                        headers.Add("Site-Owner-Email", siteOwner.Email);
                        headers.Add("Site-Owner-Phone", siteOwner.Phone);
                        break;
                    }
                }
            }
        }

        class SiteOwner
        {
            public string Name { get; set; }
            public string Email { get; set; }
            public string Phone { get; set; }

            public SiteOwner(ConfigurationElement site)
            {
                this.Name = site.GetAttributeValue("ownerName") as string;
                this.Email = site.GetAttributeValue("ownerEmail") as string;
                this.Phone = site.GetAttributeValue("ownerPhone") as string;
            }
        }
    }

  13. Click the File | Save menu item.
  14. Right-click the SiteOwner project in the Solution Explorer window and select Properties from the menu
  15. Click the Signing tab
  16. Check the Sign the assembly checkbox
  17. Select “<new>” from the Choose a strong name key file dropdown box
  18. Type “KeyFile” in the Key file name box
  19. Uncheck the checkbox labeled Protect my key file with a password
  20. Click OK
  21. Click the File | Save SiteOwner menu item
  22. Click the Build | Build SiteOwner menu item.

Notepad.exe and Command-line Walk-through

The following walk-through will create an HTTP module that can display the site owner details from the configuration system. This walk-through assumes that you have the Microsoft .NET Framework SDK or Windows SDK installed. These include the C# compilers needed to build. For more help on building with csc.exe, see “Command-line Building With csc.exe”.

  1. Paste the code from step 12 of the Visual Studio 2008 walk-through into a new Notepad instance
  2. Click File | Save As
  3. Browse to the “C:\samples” folder if it exists; create the folder if it does not
  4. Type “SiteOwnerHeaderModule.cs” in the File name box
  5. Click the Save button
  6. Open a new SDK Command Prompt (Windows SDK) or Visual Studio Command Prompt (Visual Studio 2008).
  7. At the command prompt type:
    csc.exe /target:module /out:SiteOwner.netmodule /debug /r:System.Web.dll /r:%windir%\system32\inetsrv\Microsoft.Web.Administration.dll SiteOwnerHeaderModule.cs
  8. Press ENTER then type:
    sn.exe -k keyfile.snk
  9. Press ENTER then type:
    al.exe /out:SiteOwner.dll SiteOwner.netmodule /keyfile:keyfile.snk

Deploying Your HTTP Module

Once you have built your assembly using either Visual Studio 2008 or csc.exe, it’s time to deploy it to your system. This sample assumes that you don’t want to register this component separately for each Web site, so we’ll install it as a default module on the system.

Registering the Assembly in the GAC

To do this, we first need to copy the assembly file(s) to the target system and register them in the Global Assembly Cache. Using your preferred method to copy the files, copy SiteOwner.dll (and SiteOwner.netmodule if you used command-line compilation) to the %windir%\system32\inetsrv directory on the target installation machine. Next, open up a command prompt and type:

gacutil -i %windir%\system32\inetsrv\SiteOwner.dll

Now you’ll need to get a copy of the “Public Key Token” for the assembly so that you can install this “strong-named assembly” module in IIS. To do this, open up your %windir%\assembly directory in Windows Explorer. Scroll down to the SiteOwner “file” in that window and copy the “Public Key Token”. You can do this easily by right-clicking the “file” and selecting Properties from the menu. Double-click the Public Key Token to select the entire contents, then right-click the highlighted text and select Copy from the menu.
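Alternatively, if the SDK tools are on your path, the Strong Name utility can print the token directly; something like:

sn.exe -T %windir%\system32\inetsrv\SiteOwner.dll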

[Image: Windows Explorer assembly view showing the SiteOwner assembly’s Public Key Token in the Properties dialog]

Installing the Module as a Default Module in IIS 7.0

Now that you have your assembly registered and your Public Key Token copied, it’s time to add this module to all the Web sites on your IIS 7.0 server. To accomplish this, open up your %windir%\system32\inetsrv\config\applicationHost.config file in notepad. Scroll down near the bottom of the file to the default location path that looks something like this:

<location path="" overrideMode="Allow">

Inside of that element scroll down to the <modules> element under the <system.webServer> section. You should find a number of modules already listed. To add yours, simply scroll to the bottom of the list and type the following:

<add name="SiteOwner" type="SiteOwner.SiteOwnerHeaderModule, SiteOwner, Version=1.0.0.0, Culture=neutral, PublicKeyToken=" />

Once you have typed this line in, you can paste your Public Key Token in right after “PublicKeyToken=” in the line you just typed. Save the applicationHost.config file.

Testing the Module

Choose a Web site on your server to test your application. If you want to run this module for files other than ASP.NET files, you’ll need to choose a Web site whose application pool is running in Integrated Pipeline mode. For our sample, we are going to make an HTML file to test our code. Our HTML file will contain some JavaScript that will get the headers from the server and display them in a message box. We could use a number of diagnostic tools to do this same work, but this will serve our purposes.

Open up Notepad and paste the following markup into it:

<html>
<body>
<script language="JavaScript" type="text/javascript">
<!--
var req = new ActiveXObject('Microsoft.XMLHTTP');
req.open('HEAD', location.href, false);
req.send();
alert(req.getAllResponseHeaders());
//-->
</script>
</body>
</html>

Save the file to the physical root of the Web site you wish to test. Type in the corresponding URL to your test file in your web browser. Your result should be a JavaScript alert that looks something like the following:

[Image: JavaScript alert listing the response headers, including the Site-Owner headers]

You’ll notice in my results that I have already set the site’s ownerName attribute (see the Webcast titled “Extending IIS Configuration” to learn how).

Improving our Sample

Introduction

As I stated earlier in this post, this demonstrates something that is definitely possible, but somewhat risky to do. We now want to improve our sample to limit our dependencies on native and/or third-party configuration schema definitions, and prevent us from encroaching on another module’s configuration territory. In concept, what we will do is move our configuration data to its own configuration section and use the site’s ID to allow us to map the data from the site collection to our configuration section.

Modifying the Schema

First, you’ll want to remove the custom site owner configuration data from the site nodes. Otherwise, once we modify the schema, the configuration will not validate. Once you’ve removed that data, open the SiteOwner_schema.xml file in notepad and change it to the following:

<configSchema>
    <sectionSchema name="system.applicationHost/siteOwner">
        <collection addElement="site">
            <attribute name="id" type="uint" required="true" isUniqueKey="true" />
            <attribute name="ownerName" type="string" />
            <attribute name="ownerEmail" type="string" />
            <attribute name="ownerPhone" type="string" />
        </collection>
    </sectionSchema>
</configSchema>

Save the file in the same location as you did previously. Now we need to tell the applicationHost.config file about our new section schema. To do this, we add a new section declaration under the system.applicationHost sectionGroup in the applicationHost.config file.

<sectionGroup name="system.applicationHost">
    ...
    <section name="siteOwner" allowDefinition="AppHostOnly" overrideModeDefault="Deny" />
    ...
</sectionGroup>

Next we need to modify our HTTP module to read from the new configuration section:

using System;
using System.Web;
using System.Web.Hosting;
using Microsoft.Web.Administration;
using System.Collections.Specialized;

namespace SiteOwner
{
    public class SiteOwnerHeaderModule : IHttpModule
    {
        public void Dispose() { }

        public void Init(HttpApplication context)
        {
            context.PreSendRequestHeaders += new EventHandler(OnPreSendRequestHeaders);
        }

        void OnPreSendRequestHeaders(object sender, EventArgs e)
        {
            // Get the site collection
            ConfigurationSection section =
                WebConfigurationManager.GetSection("system.applicationHost/sites");
            ConfigurationElementCollection cfgSites = section.GetCollection();
            HttpApplication app = (HttpApplication)sender;
            HttpContext context = app.Context;

            // Loop through the sites to find the site of
            // the currently executing request
            foreach (ConfigurationElement cfgSite in cfgSites)
            {
                if (HostingEnvironment.SiteName == (string)cfgSite.GetAttributeValue("name"))
                {
                    // This is the correct site; find the associated
                    // site owner record in our custom config section
                    SiteOwner siteOwner = SiteOwner.Find((long)cfgSite.GetAttributeValue("id"));
                    if (siteOwner != null)
                    {
                        // Add the headers if we found a matching
                        // site owner record configured
                        NameValueCollection headers = context.Response.Headers;
                        headers.Add("Site-Owner-Name", siteOwner.Name);
                        headers.Add("Site-Owner-Email", siteOwner.Email);
                        headers.Add("Site-Owner-Phone", siteOwner.Phone);
                    }
                    break;
                }
            }
        }
    }

    class SiteOwner
    {
        public string Name { get; set; }
        public string Email { get; set; }
        public string Phone { get; set; }

        public SiteOwner(ConfigurationElement owner)
        {
            this.Name = owner.GetAttributeValue("ownerName") as string;
            this.Email = owner.GetAttributeValue("ownerEmail") as string;
            this.Phone = owner.GetAttributeValue("ownerPhone") as string;
        }

        public static SiteOwner Find(long siteId)
        {
            // Find the matching siteOwner record for this siteId
            ConfigurationSection section =
                WebConfigurationManager.GetSection("system.applicationHost/siteOwner");
            ConfigurationElementCollection cfgSiteOwners = section.GetCollection();
            foreach (ConfigurationElement cfgOwner in cfgSiteOwners)
            {
                if (siteId == (long)cfgOwner.GetAttributeValue("id"))
                {
                    return new SiteOwner(cfgOwner);
                }
            }
            return null;
        }
    }
}

Not much has changed compared to the last version. The difference, apart from a few code comments, is that I’ve modified the OnPreSendRequestHeaders function to get the current site ID and pass it into the static SiteOwner.Find function. Then I modified the SiteOwner.Find function to read from our new configuration section. Please note that production code would likely have SiteOwner extend ConfigurationElement, and a special ConfigurationSection would be developed. Go ahead and compile and deploy this assembly the same way you did the previous version. Make sure that you run gacutil on the new assembly once it is copied. Also, since we are running against a static HTML file, you might need to run iisreset.exe from a command line in order to clear the cache on the server.

Once you have deployed your assembly and the schema file, it’s time to test the application. If we refresh our Web page, we will get a JavaScript alert without any site owner headers listed:

[Image: JavaScript alert showing response headers without any Site-Owner entries]

This is because we haven’t created any site owner configuration entries yet. So let’s add a record directly to our config. I added the following configuration directly after the <sites> node in the applicationHost.config file:

<siteOwner>
    <site id="1" ownerName="Tobin Titus" ownerEmail="tobint@microsoft.com" ownerPhone="555-121-2121" />
</siteOwner>

Save your file and refresh your Web browser. You should now see your site owner configuration data.

[Image: JavaScript alert now listing the Site-Owner-Name, Site-Owner-Email, and Site-Owner-Phone headers]
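If you’d rather inspect the raw configuration than trust the browser, AppCmd should be able to read the new section back as well; something along these lines (I’m assuming the standard list config syntax here):

%windir%\system32\inetsrv\appcmd.exe list config /section:system.applicationHost/siteOwner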

Conclusions

In this blog post, we’ve walked through extending existing IIS configuration objects and accessing the custom data at runtime through an HTTP module. We’ve also talked about the risks of that approach and have demonstrated a better one. The latter approach gives us the same ability to customize our IIS 7.0 configuration system with a much smaller dependency on the native configuration schema. We have little to worry about from schema changes, we don’t have to worry about other modules or utilities stomping on our custom data, and most importantly, we are far less likely to cause any Access Violations.

Many thanks go to Carlos once again for his help in putting this post together as well as for his CodeColorizer plug-in for Live Writer. High-fives and cookies in his office!

Reading IIS.NET Blogs (or any RSS) with Powershell

Being a member of the IIS team, I often find myself checking blog posts to see what the members of the product team are blogging about.  However, since Powershell came out, I find myself doing more and more of my work in scripts. It’s a bit annoying to have to jump out of Powershell to go read blog posts.  As such, I’ve written a few quick scripts to help me read IIS.NET from my pretty blue shell. For those of you who are already familiar with Powershell and don’t want to read the long blog post, you can download my blog script from the DownloadCENTER: http://www.iis.net/downloads/default.aspx?tabid=34&g=6&i=1387

Setting up your Powershell environment


To start, I’ve written a few supporting functions in my profile.  These functions help me keep my scripts organized and, since I change my scripts quite often, help me sign them as well.
First off, if you haven’t created your own certificate for signing code, please go back and take a look at my first Powershell blog post, which gives you the details on how to do this.
Next, we need to add a few things to your Powershell profile.  To open your Powershell profile from within Powershell, type: 

PS > notepad $profile 

First, I add a function to allow us to easily sign our scripts (assuming you have created a cert to sign them with):

## Sign a file
##-------------------------------------------
function global:Sign-Script ( [string] $file )
{
     $cert = @(Get-ChildItem cert:\CurrentUser\My -codesigning)[0]
     Set-AuthenticodeSignature $file $cert
}
set-alias -name sign -value sign-script

The next function is used to help me organize things. I have several scripts for various work environments.  I like to organize them by function. So, I keep my IIS scripts in an “IIS” directory, my common scripts in a “common” directory, and so on.  Inside each of my script directories, I keep a “load.ps1” script that I can use to initialize any of my work environments.  Lastly, I create a Powershell drive that matches the work environment name so I can get to my scripts easily. The function below does all the work for me.

## Create a Drive
##-------------------------------------------
function global:New-Drive([string]$alias)
{
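     ## NOTE: $global:profhome is assumed to be defined earlier in the
     ## profile (for example: $global:profhome = split-path $profile)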
     $path = (join-path -path $global:profhome -childpath $alias)
     if( !(Test-Path $path ) )
     {
          ## Create the drive's directory if it doesn't exist
          new-item -path $global:profhome -name $alias -type directory
     }
     else
     {
          ## Execute the load script for this drive if one exists
          $loadscript = (join-path -path $path -childpath "load.ps1")
          if( Test-Path $loadscript)
          {
               $load = &$loadscript
          }
     }
     # Create the drive
     new-Psdrive -name $alias -scope global -Psprovider FileSystem -root $path
}

Within my profile, I simply call this function and pass in an alias. When the function executes, it will create a directory with the alias name if it doesn’t exist already. If the directory does exist, it will check for the load.ps1 file inside that path and execute it. Lastly, it will create a Powershell drive. I have the following calls added to my profile below:

## Custom PS Drives
##-------------------------------------------
New-Drive -alias "common"
New-Drive -alias "iis"

Go ahead and save your profile now and type these commands: 

PS > Set-ExecutionPolicy Unrestricted
PS > &$profile
PS > Sign $profile
PS > Set-ExecutionPolicy AllSigned

 
The first command sets Powershell into unrestricted mode. This is because we need to execute the profile script and it hasn’t been signed yet.  The next command executes the profile. The third command uses the “sign” function that our profile script loaded. Since our profile is now signed, we can set our execution policy back to AllSigned. AllSigned means that Powershell will execute scripts as long as they are signed. 

From this point on, we can make changes to our profile and simply call our sign function again before we close our Powershell instance. The next instance of powershell that is opened will have our changes.  

Creating / Using Blog Functionality


Now that we have our environment set up, let’s get to the blogging part.  If you’ve set up your environment right, you can execute the following command:

PS > cd iis: 

This command will put you in the iis scripts directory.  Next, create a new blogs script by typing: 

PS > notepad blogs.ps1 

You’ll be prompted whether you want to create the file. Go ahead and say yes.  Next, paste the following into Notepad and save it:

## Sets up all custom feeds from feeds.txt
##---------------------------------------------------
function global:Import-Feed
{
     if( $global:RssFeeds -eq $null )
     {
          $global:RssFeeds = @{};
     }
     ## Assign through the indexer rather than Add() so re-running the
     ## script doesn't throw on duplicate keys
     $RssFeeds["iisblogs"] = "http://blogs.iis.net/rawmainfeed.aspx";
     $RssFeeds["iisdownloads"] = "http://www.iis.net/DownloadCENTER/all/rss.aspx";
}
Import-Feed ## Call Import-Feed so we are ready to go

## Gets a feed or lists available feeds
##---------------------------------------------------
function global:Get-Feed( [string] $name )
{
     if( $RssFeeds.ContainsKey( $name ) )
     {
          return $RssFeeds[$name];
     }
     else
     {
          Write-Host "The path requested does not exist";
          Write-Output $RssFeeds;
     }
}

## Gets IIS Blogs
##---------------------------------------------------
function global:Get-Blog([int]$index, [int]$last, [int]$first, [int]$open)
{
     $url = (Get-Feed iisblogs)
     return (Get-RSS $url $index $last $first $open)
}

## Gets a specific blog
##---------------------------------------------------
function global:Get-AuthorBlog([string]$creator)
{
     Get-Blog | Where-Object {$_.creator -eq $creator}
}

## Gets Downloads from IIS
##---------------------------------------------------
function global:Get-Download([int]$index, [int]$last, [int]$first, [int]$open)
{
     $url = (Get-Feed iisdownloads)
     return (Get-RSS $url $index $last $first $open)
}

## Gets a generic RSS Feed
##---------------------------------------------------
function global:Get-RSS([string]$url, [int]$index, [int]$last, [int]$first, [int]$open)
{
     $feed = [xml](new-object System.Net.WebClient).DownloadString($url)
     if($index)
     {
          return $feed.rss.channel.item[$index]
     }
     if($open)
     {
          $ieaddr = $env:programfiles + "\Internet Explorer\iexplore.exe"
          return &(get-item $ieaddr) $feed.rss.channel.item[$open].link
     }
     if($last)
     {
          return ($feed.rss.channel.item | Select -last $last)
     }
     if($first)
     {
          return ($feed.rss.channel.item | Select -first $first)
     }
     return $feed.rss.channel.item
}

Once you’ve saved this file, close it.  We need to sign this script and execute it by typing: 

PS IIS:> sign blogs.ps1
PS IIS:> ./blogs.ps1 

Now let’s start reading.

  • Read all Blogs
PS iis:> Get-Blog 
  • Read the last five blog posts
PS iis:> Get-Blog -last 5 
  • Read the first five blog posts
PS iis:> Get-Blog -first 5 
  • Read the 8th blog post
PS iis:> Get-Blog -index 8 
  • Open the 12th blog post in Internet Explorer
PS iis:> Get-Blog -open 12 
  • Read all blog posts by Bill Staples
PS iis:> Get-AuthorBlog bills 
  • Read all items in DownloadCENTER
PS iis:> Get-Download 
  • Get titles of all items in DownloadCENTER
PS iis:> Get-Download | Select Title 

Of course, all the laws of Powershell still apply, so I can still do fun stuff like listing only the blog titles from my blog.

PS iis:> Get-AuthorBlog TobinTitus | Select Title 

I can do the same with the raw blog output:

PS iis:> Get-Blog -last 5 | Select pubDate, Creator, Title 

Happy reading. 

[Image: IIS.NET blog posts listed in a Powershell console]

OT: Cat and Mouse

For those of you looking for IIS information, this blog post is not for you. For those of you that just like general computer information, read on and enjoy.

The computer mouse has come a long way over the years.  When Doug Engelbart invented the computer mouse, it was rather crude looking but was nonetheless very revolutionary.  Engelbart showed his invention to the world in 1968 during a presentation now known as “The Mother of All Demos.”  The mouse that Engelbart demoed was an electromechanical device that leveraged a large sphere to turn x and y coordinate counters as the device was rolled across a surface. Each click of the counter would tell the computer how far the mouse had “traveled”.  For several years the mouse kept the same basic principle. We improved on the original idea and replaced the large sphere with a small rubbery ball that turned x and y gear wheels, often with an additional diagonal gear wheel. The diagonal indicator was used to help correct the cursor movement if the mouse was rotated or tilted. The rubber helped the ball move across slick surfaces when a mouse pad just wasn’t cutting it. The downside to this rubbery surface was that pet owners ended up with a lot of cat (or dog) fur rolling into the mouse.  You would often have to open the bottom portion of the mouse and clean the hair and other debris out to make your pointing device work efficiently again.  Playing Doom or Quake with a junked-up mouse was an instant indication of a n00b that needed serious pwning.

Roll forward to the present day and we find optical mice taking the electromechanical device’s place. The optical approach solves a lot of the problems associated with older mice. For one, the new mice don’t have the rolling dowel-like rollers (counters) that can get gunked up anymore.  A rubbery ball is not picking up every piece of debris and yanking it into the mouse cavity as though it were a time-capsule for desktop debris.  So, why does your mouse still freak out when a piece of fur gets trapped under your optical mouse? 

The answer is pretty interesting and I’m sure will be solved with the next iteration of mouse invention. Many optical mice are created using a camera or an optoelectric sensor, an optical processor to compare images taken by the camera/sensor, and an LED  that illuminates the surface under your mouse.  The camera/sensor takes ~1500 picture samples a second!  The pictures are small (usually 10-20 square pixels) and gray-scale (usually registering fewer than 100 shades).  The optical processor examines and compares the picture samples to determine the relative position of the mouse to its previous position.

Now introduce your cat’s hair to the equation.  The cat hair gets entangled in the small cavity of the mouse where the optical sensor lives. As the mouse rolls across the desk, static builds up and flips the hair around wildly. As your mouse snaps those thousands of pictures, the hair position is captured and the optical processor gets confused by the sudden movement of the cat hair in the picture comparisons.  You can move your mouse slowly to the right, but if the hair is flipped around within the mouse cavity, the processor will think that you have jerked the mouse to the left or sharply downward.  If your cat is watching the computer screen as mine often does, these sharp movements may cause the cat to attack your monitor — truly creating an interesting game of cat and mouse.

IIS 7 Logging UI For Vista – Download Now

As many of you already know, the management console for IIS 7.0 on Windows Vista does not have a UI for logging.  Since this was a pain point for several customers, I decided to test out the extensibility APIs by creating a logging UI module. 

I’ve posted a preview version of my logging UI on the newly opened IIS Download Center. I will be releasing a few updates throughout the week with changes. The module also contains the source code for my UI module under the Microsoft Permissive License. This code will also be updated in future releases.

You can find the download at: http://www.iis.net/downloads/default.aspx?tabid=34&i=1328&g=6

If you have any questions, please feel free to contact me through my blog.

Ups and Downs of the past month

It’s sometimes hard to maintain a completely technical blog, particularly when you have long absences from one post to the next. You feel you have to explain yourself to your readership each time you take more than a week or two between posts. This post is no different. Despite having a ton of stuff to blog about, I haven’t posted since December. Much of this has to do with regular holiday planning, but much more has happened. This past month has had some major emotional ups and downs for me, and I’m still a little mixed up.

This post will have a great deal of personal information in it, and much of it has nothing to do with IIS, but it should give you some insight into Microsoft if you are interested in that sort of thing.

If you’ve been paying attention to the weather in the Pacific Northwest, you know that on December 14th, we had a major windstorm that knocked out power to over 1 million customers. The bad news is that on December 15th, I had a flight scheduled to go visit my family on the west coast. United Airlines cancelled several flights, which put their check-in line in complete disarray. My flight wasn’t cancelled, but no thanks to United Airlines, I was not able to board my flight, and no other flights could be found for me along with my two cats. My trip was cancelled.

Since I was still in town, I helped put the finishing touches on a “Think Week” paper I had been writing along with two other employees here at Microsoft. Despite the power outages, we were able to make it into one of the buildings at Microsoft and submit our paper by the deadline. It was an interesting experience to submit a paper that every full time employee of the company can read and comment on, including Bill Gates.

I also took the opportunity to work on writing a UI module for IIS in that time. The module was actually finished, but after consulting with one of the developers, I decided to modify the sample and I haven’t had time to clean it up and submit it yet. More on that later.

On December 28th, I got a phone call saying that my grandfather had passed away. My family has always been important to me. My grandparents hold a special place in my heart because they gave me a lot of my determination. My grandfather was a tail-gunner in World War II and had seen more inventions and re-inventions in his lifetime than I could fathom. He always chuckled when I came home to visit and told him about this great “new” thing in technology that would change the world unlike anything else ever had. I didn’t get the joke then, but I do now. I don’t mean to downplay the importance of technology. But to put it in perspective, we are not the first people to change the world and we will certainly not be the last. I’ll miss my grandfather terribly and there are no words to describe how this has changed my world.

It was good, however, to go back home for my grandfather’s funeral. I got to help my dad clean up and set up his workshop. I also got to see one of my best friends, whom I’ve known since I was 8 years old. I hadn’t seen him in the 10 months since I moved out west to work for Microsoft. I returned from my grandfather’s funeral on the 8th and have tried to get 100% back into the swing of things. I really hadn’t been able to focus on work the way that I usually do until Thursday. I was finally making some headway on a few projects at work.

Before I continue, I need to give some of you some background on me. 20 years ago, when I was in 5th grade, I taught myself Microsoft BASIC and Atari Assembler. I remember telling my parents back then that I was going to work for Microsoft one day. My mom told me to finish my homework first. Of course, I have since finished my homework, and 20 years later, here I am working for Microsoft and submitting a paper to the very man that started the company. A few weeks went by and we received a LOT of feedback about our paper, but none of it was from Mr. Gates. We sort of expected that. Bill doesn’t respond to many papers in a year.

However, the other day I got a barely coherent message on my cell phone. It was from one of the co-authors of the Think Week paper: an excited message about Bill Gates reading our paper. I tried to connect to the VPN to go read the feedback but had some issues because my home network was, shall we say, “in flux”. It drove me nuts that there was feedback from the very man we addressed the paper to, but I couldn’t read it. I jumped in the shower and then rushed into work to read it personally. During the entire ride to work, I forced myself to watch my speed carefully. However, my heart was racing so fast that I think it pushed 10 extra pounds of blood into my right foot. I got to work, quickly opened up the Think Week site, and scrolled to our paper. When I read the comment, I couldn’t help but become giddy. The feedback was favorable and verbose — about a page and a half. I was elated, and sitting here today, I’m still in shock about the entire thing.

I have to say that I’m extremely thankful for the opportunities we have inside our company. Inside the company, you can tackle any problem that you want. You simply need to apply your efforts in an area, and you’ll likely get the support of your managers, team, and friends. I know there are many people out there who like to point out negative issues inside Microsoft. Some anonymous blogs out there take a rather candid look at company issues and seem to err on the side of complaining. My experience inside the company so far has been spectacular. It helps that I knew people inside the company before I moved out here. That said, there are many opportunities inside the company to make recommendations and get involved. I couldn’t be happier about that.

So, while this was not a technical post, I felt I owed it to you all to explain where I have been and why I haven’t been blogging (again). Thanks for putting up with me.

Extending IIS 7 APIs through PowerShell (Part II)

In my previous post, I showed you how easy it was to leverage your knowledge of the IIS 7 managed SDK in Windows PowerShell.  We loaded the IIS 7 managed assemblies and then traversed the object model to display site information and stop application pools.  While this in itself was pretty cool, I don’t think I quite got my point across about how powerful IIS 7 and PowerShell are together. As such, I wanted to show you some more fun things to do with PowerShell in the name of easy IIS 7 administration.

First, our examples still required a great deal of typing and piping and filtering.  Let’s modify our profile script from my previous post by adding at least one new global variable that will give us access to the ServerManager without much typing.  Add the following line to your profile script:

new-variable iismgr -value (New-Object Microsoft.Web.Administration.ServerManager) -scope "global"

(if you don’t have a profile script yet, go back to my previous post to learn how to create one).

Note: If you signed your script before, you’ll have to do it again after modifying it.

Open a new instance of PowerShell and now you can access the site collection just by typing:

PS C:\> $iismgr.Sites

That’s considerably less typing than our previous examples.  But let’s not stop there.  What happens if I want to search the site collection? PowerShell has some fun syntax for this as well. I simply pipe the output of my SiteCollection to a “Where-Object” cmdlet and then specify what site I’m looking for:

$iismgr.Sites | Where-Object {$_.Name -match "Default*"}

This is still quite a bit of typing when all I really want to do is find the default website. You may ask, “Wouldn’t it be easier if we could just add a ‘Find’ method to the SiteCollection object?” Well, I’m glad you asked *cough*! Next, we are going to do just that! Open up another instance of notepad and add the following XML to it:


<?xml version="1.0" encoding="utf-8" ?>
<!-- *******************************************************************
Copyright (c) Microsoft Corporation.  All rights reserved.
 
THIS SAMPLE CODE AND INFORMATION IS PROVIDED "AS IS" WITHOUT WARRANTY 
OF ANY KIND,WHETHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO 
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND/OR FITNESS FOR A PARTICULAR
PURPOSE. IF THIS CODE AND INFORMATION IS MODIFIED, THE ENTIRE RISK OF USE
OR RESULTS IN CONNECTION WITH THE USE OF THIS CODE AND INFORMATION 
REMAINS WITH THE USER.
******************************************************************** -->
<Types>
  <Type>
    <Name>Microsoft.Web.Administration.SiteCollection</Name>
      <Members>
        <ScriptMethod>
          <Name>Find</Name>
          <Script>
          $rtr = "";
          if ( $args[0] ) 
          { $name = $args[0];
            $rtr = ((New-Object Microsoft.Web.Administration.ServerManager).Sites | 
                     Where-Object {$_.Name -match $name}) 
          } 
          else 
          { 
            $rtr = "No sites found." 
          };
          $rtr
          </Script>
        </ScriptMethod>
      </Members>
  </Type>
</Types>

What we’ve done is indicate that we want to add a scripted method to the Microsoft.Web.Administration.SiteCollection object. In our case, I’ve added a “Find” method by using the same cmdlet pipeline that we typed before to search the site collection. The difference is, this time I use the $args array variable to check for a parameter and use it if one is available. Now, save this file into your %windir%\system32\WindowsPowerShell\v1.0 directory as “iis.types.ps1xml”. Once you’ve saved the file, sign it the same way you signed your profile script. Keep in mind that these xml files contain code, so signing your xml is required to keep your PowerShell experience a secure one. Now, open your profile script (again) and add the following lines to the end:


new-variable iissites -value (New-Object Microsoft.Web.Administration.ServerManager).Sites -scope "global"
new-variable iisapppools -value (New-Object Microsoft.Web.Administration.ServerManager).ApplicationPools -scope "global"
update-typedata -append (join-path -path $PSHome -childPath "iis.types.ps1xml")

Note: Once again, you’ll have to re-sign this profile if your execution policy requires signed scripts.

Notice that I added two more variables: $iissites and $iisapppools. These variables allow me to access the site collection and application pool collection with a simple keyword. Let’s try them out in PowerShell. Make sure you open a new instance of PowerShell so your profile script and xml type data are loaded properly. Once your new instance of PowerShell is open, type the following:

PS C:\> $iissites.Find("^Default*")

PowerShell will do all the work for you and you have MUCH less typing to do.

Another alternative to using xml files is to simply create a function and add it to your profile. For instance, we can create a function called “findsite” that provides the same functionality as our previous example. Either type the following command into PowerShell or add it to your profile script:

PS C:\> function findsite { $name=$args[0]; ((New-Object Microsoft.Web.Administration.ServerManager).Sites | Where-Object {$_.Name -match $name}); }

Now you can search for a site using the following syntax:

PS C:\> findsite default*

Whatever way we choose to extend Microsoft.Web.Administration and/or PowerShell, we can use our output as we did before:

PS C:\> (findsite default*).Id

The previous line should display the Id of the default web site. We can also stop the website:

PS C:\> (findsite default*).Stop()

We can keep taking this to extremes and truncate every operation that we perform on a semi-regular basis. These scripts are not one-offs. Each script function we create or append to an existing object model can be reused and piped as input to another function. The possibilities are endless and completely customizable to your needs.
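For example, assuming the $iisapppools variable defined in the profile above, this one-liner recycles every application pool that is currently started:

PS C:\> $iisapppools | Where-Object {$_.State -eq "Started"} | ForEach-Object {$_.Recycle()}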

Adding performance

Some of you may have found the title to this blog post as amusing as I do. Throughout my career, I’ve been called into many a meeting asking that I “add” performance to a complete or nearly-complete product. No matter how many scowls I got, I could never resist joking about just adding the IPerformance interface. “And, while you are at it, just add the IScalable one too”, I would quip. (OK, I know some of you are doing searches for these interfaces — don’t bother). Laugh as hard as I may, I’m embarrassed to say that I am trying to do this very thing now.

A few weeks ago, I started playing around with a small sample that shows off some of the fun features of IIS 7. As I started coding, I added more features to the sample. Scope creep is a no-no, but this was something I was doing on my own time so I didn’t have a problem with it. However, the sample got so large that I had to actually stop and design a public API to support it. By the time I was done, my “sample” had a set of providers, a common API, localization support, utilities, and most notably, performance issues.

Last year on my personal blog, I talked about “engineering for usability”. In that post, I declared that the simplest design is sometimes (and probably most often) the best approach. That said, performance should have been considered very early on, and the sample should have been kept simple. Performance, as many of you know, is not something you add as an afterthought. Starting my sample the way I did, I never considered code performance to be an issue.

Now I’m forced to decide: do I scale back my sample knowing full well that it doesn’t have a security model or provide easy extensibility, or do I redesign the sample with the current feature set and design for performance up front? I’m leaning toward the latter despite reciting “keep it simple, stupid” in my head over and over again.

Where can I find <insert feature here> in IIS 7.0?

Question: Where can I find Windows Authentication in IIS 7.0?

I’ve seen this question or questions like this asked numerous times so I thought it would make a nice quick — and hopefully useful — blog post.

In previous versions of IIS, administrators were able to enable or disable features on their servers by simply checking or unchecking a box in the IIS management console.  However, when an administrator unchecked a feature in IIS, the feature still existed on the machine and was even loaded.  Unchecking a box allowed the administrator to disable a feature — but it didn’t physically remove anything from the machine.

This all changes with IIS 7.0. Currently, when you install IIS 7.0 on Windows Vista (or Longhorn Server for that matter), you must select the features that you want installed. Each feature in IIS 7.0 is componentized into its own module library. The features you select at installation, and ONLY the features you select, will be copied to disk under the “%windir%\system32\inetsrv” directory. If you do not install a feature, it simply won’t exist on your machine. This obviously makes the server more robust and performant at the same time — providing a high amount of customization as well as security.

Another thing to note is that once a feature module has been installed to the proper directory, it must also be added to the IIS configuration system. For more information on how to install and uninstall modules, see: Getting Started with Modules.

If you are having a hard time getting WindowsAuthentication, DynamicCompression, or some other feature of IIS to work, a good place to start looking is your installation. Make sure you’ve installed the features you need for your purposes.
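For example, on Windows Vista you should be able to add a missing feature from the command line with the package manager. The feature name below is the Vista update name for Windows Authentication; names vary by feature, so treat this as a sketch:

start /w pkgmgr /iu:IIS-WindowsAuthentication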

Steve Wozniak … at Microsoft?

I know it may sound very strange coming from a Microsoft employee, but last Friday I found myself in awe as I sat directly in front of Steve Wozniak while he gave a presentation … on the Microsoft campus.

I hung on the man’s every word as he tried to fit a story into the short time he had here. The same spirit that drove me into this industry at the ripe old age of 9 is what drove him into the engineering field very early in life. For those of you that don’t know, Mr. Wozniak is largely known for co-founding Apple Computer with Steve Jobs. It is amazing that Steve came to the Microsoft campus to talk to us, and even better that he didn’t pull any punches in his talk. He had a few nicely phrased jabs at Microsoft that were, in my opinion, probably deserved. I won’t repeat them, but you can get a general sense of what Steve thinks about Microsoft by looking at his blog. He’s a very gracious guy, though, and it was so very cool to meet him. What was even more amazing was how many people from Microsoft crammed themselves into this small room to listen to him. There wasn’t a single piece of carpet left to stand or sit on.

I’ve run into several celebrities and “big names” in my life, but I’m never really star-struck unless I meet a geek with a well-deserved reputation. This was definitely one of those times. I took some pictures of the event, and once I get a chance to pull them off of my camera, I’ll post them here.

Thanks for stopping by Microsoft, Mr Wozniak. It was an honor to meet you.

Unsafe thread safety

As I stated in my last post, for the past two days I’ve been sitting in Jeffrey Richter’s threading class. The class is nearing its end and I can’t say that a lot of new concepts have been taught. Another student and I have decided that the class should have been renamed “Threading Basics”. That’s not a knock on Richter’s teaching skill or the content of the class.  It just goes to show that if the architecture of a product is right, the threading code should be extremely simple to use.

However, what I love about this class is that it reminds me how much I enjoy this topic.  The theoretically perfect architecture doesn’t exist. Additionally, many times the developer has little-to-no ability to push back on a bad architecture. It’s in these cases that you must use your bag of concurrency tricks to work out of the hole the architect(s) put you in.  That sounds easy enough, but what if the architecture included – *gasp* – “less-than-optimal” decisions in the actual .NET framework classes?  What if those decisions were made in the very methods that are supposed to help you with these synchronization problems?  These problems perplexed me when I first encountered them and I never really thought to blog about them (I was actually just scared I was doing something wrong).  Taking this class gave me the perfect excuse to bring the topic up.

Take a look at the following pattern in C++:

class CConcurrencySample
{
public:
    CConcurrencySample( ) 
    {   // Initialize a critical section on class construction
        InitializeCriticalSection( &m_criticalSection );
    };
    ~CConcurrencySample( ) 
    {   // Destroy the critical section on class destruction
        DeleteCriticalSection( &m_criticalSection );
    };

    // This method is over-simplified on purpose
    ////////////////////////////////////////////////////////////
    BOOL SafeMethod( )
    {
        // Claim ownership of the critical section
        if( ! TryEnterCriticalSection( &m_criticalSection ) )
        {
            return FALSE; 
        }

        ////////////////////////////////////////////////
        // Thread-safe code goes here
        ////////////////////////////////////////////////

        // Release ownership of the critical section
        LeaveCriticalSection( &m_criticalSection );

        return TRUE;
    };
private:
    CRITICAL_SECTION m_criticalSection;
};

Anyone that’s done any work in threading recognizes this pattern. We’ve declared a class named CConcurrencySample. When any code instantiates an instance of CConcurrencySample, the constructor initializes the private CRITICAL_SECTION instance in the object. When any code destroys an instance of CConcurrencySample, the CRITICAL_SECTION is also deleted. With this pattern, we know that all members have access to the critical section and can claim ownership of it at any point. For my purposes, I’ve created a method named SafeMethod that takes the critical section lock when it is called and releases the lock when it leaves the method. The good part about this pattern is that each instance has a way to lock SafeMethod. The downside is that if a client doesn’t need to call SafeMethod, the CRITICAL_SECTION is created, initialized, and destroyed without need — uselessly taking up memory (24 bytes on a 32-bit OS) and processor cycles.

The CLR implemented this pattern and even expanded on it. They also tried to fix the waste incurred with non-use of the critical section. The CLR implementation works as follows. Each System.Object created by the CLR contains a sync block index (4 bytes on a 32bit OS) that is defaulted to -1. If a critical section is entered within the object, the CLR adds the object’s sync block to an array and sets the object’s sync block index to the position in the array. With me so far? So how does one enter a critical section in .NET? Using the Monitor class, of course. The following example emulates the same functionality as the previous C++ sample by using C#.

using System.Threading;

public class CConcurrencySample
{
   bool SafeMethod( )
   {
      // Claim ownership of the critical section; bail out if
      // another thread already owns it
      if( !Monitor.TryEnter( this ) )
      {
          return false;
      }

      ////////////////////////////////////////////////
      // Thread-safe code goes here
      ////////////////////////////////////////////////

      // Release ownership of the critical section
      Monitor.Exit( this );

      return true;
    }
}

Monitor.Enter and Monitor.Exit are supposed to provide functionality similar to entering and leaving a critical section (respectively). Since every System.Object has its own sync block, we just need to pass our current object to the Monitor.Enter and Monitor.Exit methods. This performs the lock I described earlier by setting the sync block index. Sounds great, but what’s the difference between the C++ example and the C# pattern? What issue can you see in the .NET framework implementation of this pattern that you don’t see in the C++ sample I provided?

Give up?

The answer is simple. In the C++ sample, the lock object (the CRITICAL_SECTION field) is private. This means that no external clients can lock my object. My object controls the locks. In the .NET implementation, Monitor.Enter can take ANY object. ANY caller can lock on ANY object. External clients locking on your object’s sync block can cause a deadlock.
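To make the failure mode concrete, here is a hypothetical snippet of calling code (not from the original samples) that abuses the C# class above:

CConcurrencySample sample = new CConcurrencySample();

// Any external client is free to take the very lock SafeMethod relies on:
lock (sample)
{
    // While this lock is held, sample.SafeMethod() returns false on every
    // other thread. Had SafeMethod used Monitor.Enter instead of TryEnter,
    // those threads would block until this external lock was released.
}

The fix is to lock on something the outside world can never see: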

public class CConcurrencySample2
{
    // A private lock object: external callers can never take this lock.
    // Initialize it inline; lazily creating it inside SafeMethod with a
    // null check would itself be a race condition.
    private readonly Object m_lock = new Object( );

    bool SafeMethod ( )
    {
        if ( ! Monitor.TryEnter( m_lock ) )
        {
            return false;
        }

        ////////////////////////////////////////////////
        // Thread-safe code goes here
        ////////////////////////////////////////////////

        Monitor.Exit( m_lock );

        return true;
    }
}

With this approach, we are declaring a plain, privately-held object in the class. Be careful when using this approach that you don’t use a value type for your private lock. Value types passed to a Monitor are boxed each time a Monitor method is called, and a new sync block is used — effectively making the lock useless.
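To illustrate the boxing pitfall, here is a short, hypothetical counter-example written for this post; it compiles, but the Monitor calls never cooperate:

using System.Threading;

public class CBrokenSample
{
    private int m_badLock = 0;   // a value type used as a lock: don't do this

    bool BrokenMethod( )
    {
        // m_badLock is boxed into a brand-new object on every call, so no
        // two callers ever contend on the same lock...
        if ( ! Monitor.TryEnter( m_badLock ) )
        {
            return false;
        }

        // ...and this call throws SynchronizationLockException, because it
        // boxes m_badLock again and tries to exit a lock that was never
        // taken on that new object.
        Monitor.Exit( m_badLock );

        return true;
    }
}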

So what does this have to do with IIS? Nothing just yet. But I plan to cover some asynchronous web execution patterns in future posts and I figured this would be a great place to start.

This information is pretty old, but sitting in class I realized it might not be completely obvious to everyone just yet. If you were someone in-the-know about this information since WAY back in 2002, please forgive the repetition.