Category Archives: Technical

These posts are technical in nature. The casual reader may not care about these posts. These are for my fellow geeks.

IIS 7.0 Error Support

I know that most of you reading the blogs on this site have already seen how much handier the error messages in IIS 7.0 are compared to those in previous versions. This weekend, I was playing with BlogEngine.NET and thought I’d put it on my personal site (tobint.com) to try it out. I downloaded the software and tried setting it up on my server. After following the brief instructions, I hit an error:

HTTP Error 500.22 – Internal Server Error
An ASP.NET setting has been detected that does not apply in Integrated managed pipeline mode.

In previous versions of IIS, this might have been all that I had to go on. I’d then search one or several search engines for that exact error and read through blog posts, forums, knowledge base articles, and FAQs before I found what I needed. In IIS 7.0, I found the friendly error page awaiting me:

Error 500.22

I looked at the “Things you can try” and picked the first “fix” on the list. Since my application was not in the Default Web Site, I ran the appcmd for my application in “tobint.com/”.

AppCmd

 As you can see, the error page correctly identified that my configuration needed to be migrated, and gave me the steps I needed to migrate it effectively. No searching the web. No trying to trace the problem down. No turning on tracing for ASP.NET pages. 
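For reference, the suggested fix boiled down to a single command (the path “tobint.com/” here is specific to my setup; substitute your own site or application path):

%windir%\system32\inetsrv\appcmd migrate config "tobint.com/"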

I love this feature. It’s a huge time saver and I don’t think it gets bragged on enough.

Accessing Custom IIS 7 Configuration at Runtime

I’ve been working on a series of posts and Webcasts in my “spare time” directed at some of the areas in which I am most frequently asked questions. I am currently working on developing topics dealing with IIS configuration. It seems logical that this would bring me a great deal of questions considering that IIS 7.0 is using a completely different configuration system than it did in previous versions. That said, I wanted to follow up my Webcast (“Extending IIS Configuration”) with an article and another Webcast dealing with accessing custom configuration data at runtime.

In the webcast, I described how to extend an existing IIS configuration system object. I want to talk a little more about that. In this blog post I will take on four topics:

  1. Give you some warnings that were not provided in the Webcast regarding extending native IIS configuration
  2. Show you how to access your extended configuration data at runtime in a managed-code HTTP module
  3. Describe how to improve the example used in the Webcast by moving it into a custom configuration section
  4. Show you how to access your custom configuration section at runtime in a managed-code HTTP module

Extending Native Configuration Elements

Introduction

I talked with Carlos today to discuss the validity of extending native configuration elements for our own purposes. While there are certainly valid reasons to do this, it doesn’t come without a risk to your application. Any time that you modify a piece of data that was created by another party — Microsoft or otherwise — you run the risk of having your data overwritten or corrupting the data that the other party needs to use. That said, let’s revisit the sample that is frequently given in many articles and was reused in my Webcast.

Warnings

In our Webcast, we discussed the idea that you could extend a native configuration element simply by creating a new configuration element schema and placing it in the %windir%\system32\inetsrv\config\schema directory. For our example, we created a file named SiteOwner_schema.xml and put the following code into it:

<configSchema>
    <sectionSchema name="system.applicationHost/sites">
        <attribute name="ownerName" type="string" />
        <attribute name="ownerEmail" type="string" />
        <attribute name="ownerPhone" type="string" />
    </sectionSchema>
</configSchema>

All we have done is matched the same section schema path as the existing “system.applicationHost/sites” sectionSchema element in our IIS_schema.xml file. We then added schema information for our three new attributes.

This simple step allows us to immediately start adding attributes to our site configuration, and to access that data programmatically, through PowerShell, and through the AppCmd utility. However, you should be aware of a few dangers before you do something of this nature.
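As a quick illustration before the warnings: with the schema file in place, AppCmd can read and write the new attributes immediately (“Default Web Site” and the owner value below are just placeholders):

appcmd set site "Default Web Site" /ownerName:"Tobin Titus"
appcmd list site "Default Web Site" /text:ownerName

Because AppCmd validates against the schema, a typo in the attribute name produces an error instead of silently writing bad configuration.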

The first problem is that many of our native configuration elements have associated “default” configuration elements. For instance, the <site/> element has an associated <siteDefaults/> element. When modifying the configuration schema for <site/>, you should also modify the <siteDefaults/> configuration schema to match. That said, you might consider changing the above sample configuration schema as follows:

<configSchema>
    <sectionSchema name="system.applicationHost/sites">
        <attribute name="ownerName" type="string" />
        <attribute name="ownerEmail" type="string" />
        <attribute name="ownerPhone" type="string" />
        <element name="siteDefaults">
            <attribute name="ownerName" defaultValue="undefined" type="string" />
            <attribute name="ownerEmail" defaultValue="undefined" type="string" />
            <attribute name="ownerPhone" defaultValue="undefined" type="string" />
        </element>
    </sectionSchema>
</configSchema>

As you can see, we have now added a matching <siteDefaults/> definition to go with our changes to the <site/> element. This isn’t the end of the story, however. While this change might have seemed simple, modifying other elements becomes much more complex. Consider the fact that when you modify the schema for <application/> elements, there are several places where <applicationDefaults/> are kept. You will find <applicationDefaults/> under both the sites collection definition and under the site element definition. If you modify the application schema definition, you must be sure to edit the schema for both of the applicationDefaults definitions. This gets even more complex if you try to modify the <virtualDirectoryDefaults/> definition. You’ll find that defined three times in the native configuration: under the sites collection definition, under the application element definition, and under the site element definition itself.

You can see how you can quickly start adding complexity to your changes. Making sure that you make changes everywhere you need to becomes all the more risky.
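To make that duplication concrete, here is a simplified sketch of where those default definitions live in IIS_schema.xml (abbreviated; not the literal schema contents):

<sectionSchema name="system.applicationHost/sites">
    <collection addElement="site">
        <element name="applicationDefaults" />
        <element name="virtualDirectoryDefaults" />
        <collection addElement="application">
            <element name="virtualDirectoryDefaults" />
        </collection>
    </collection>
    <element name="applicationDefaults" />
    <element name="virtualDirectoryDefaults" />
</sectionSchema>

Every one of those default definitions would need your custom attributes added to keep the schema consistent.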

The second problem is that you may not be able to rely on a third party to keep their schema definition in line with your expectations. From one minor revision to the next, a configuration schema you depend on can change before your very eyes. This becomes a huge issue. Your future installation may need to make many dependency and version checks to decide which schema document to install, and your module may even need to make checks in case the user upgrades the third-party component without installing the upgraded schema. This can get very tricky, to say the least.

The third problem we can run into when we take a dependency on existing configuration is that external code may not behave the way we want it to. For instance, there is no guarantee that utilities and code written to handle those elements will know what to do when they encounter data they don’t expect. Sure, we can have IIS validate our changes through schema, but there is no guarantee that the components which handle those elements can take on the challenge of handling the extended data. There is even a good chance that you could cause an Access Violation (read: crash) if the data is not handled properly. There is also a good chance that the components which normally handle those elements may overwrite your custom data.

So before we go on to show you how to access this extended configuration element data, please be warned that it may not be the best solution to your problem. Later in this article, I will describe how to modify our example to give us the same functionality without altering the schema definition of the existing elements.

Accessing Extended Native Configuration Elements at Runtime

Introduction

With the warnings having been fairly posted, I’ll show you quickly how to access your custom configuration data at runtime from an HTTP module. Once you have saved your schema file to the appropriate directory, it’s time to write the HTTP module. That module is going to be very simple. We need to create a module that handles the PreSendRequestHeaders event. In this event we will get our “system.applicationHost/sites” configuration section, loop through the configured sites looking for the site on which the current HTTP request is executing, and then write HTTP headers out to the client. Just to make sure that our sample is working, we’ll wrap up by adding JavaScript to an HTML page that requests the headers from the Web site and displays them in a message box.

Getting Started

For this sample, I used Visual Studio 2008, but you can use plain old Notepad and command-line compilation if you are feeling particularly froggy.

Visual Studio 2008 Walk-through

The following walk-through will create an HTTP module that can display the site owner details from the configuration system.

  1. Click the File | New | Project… menu item
  2. In the “Project types” tree on the left, drill down to the C# | Windows tree element
  3. In the “Templates” selector on the right, select the “Class Library” template
  4. Change the Name of the project to “SiteOwner”
  5. Change the Location to “c:\samples”
  6. Verify that the “Create directory for solution” checkbox is checked
  7. Click OK
  8. Click the Project | Add Reference… menu item
  9. Under the .NET tab, select the component with the Component Name of “System.Web”
  10. Click OK
  11. Rename the “Class1.cs” file in the Solution Explorer to “SiteOwnerHeaderModule.cs”
  12. Paste the following code into the “SiteOwnerHeaderModule.cs” file:
    using System;
    using System.Web;
    using System.Web.Hosting;
    using Microsoft.Web.Administration;
    using System.Collections.Specialized;

    namespace SiteOwner
    {
        public class SiteOwnerHeaderModule : IHttpModule
        {
            public void Dispose() { }

            public void Init(HttpApplication context)
            {
                context.PreSendRequestHeaders += new EventHandler(OnPreSendRequestHeaders);
            }

            void OnPreSendRequestHeaders(object sender, EventArgs e)
            {
                // Get the site collection
                ConfigurationSection section =
                    WebConfigurationManager.GetSection("system.applicationHost/sites");
                ConfigurationElementCollection cfgSites = section.GetCollection();
                HttpApplication app = (HttpApplication)sender;
                HttpContext context = app.Context;

                // Loop through the sites to find the site of
                // the currently executing request
                foreach (ConfigurationElement cfgSite in cfgSites)
                {
                    if (HostingEnvironment.SiteName == (string)cfgSite.GetAttributeValue("name"))
                    {
                        // This is the correct site; get the site owner attributes
                        SiteOwner siteOwner = new SiteOwner(cfgSite);
                        NameValueCollection headers = context.Response.Headers;
                        headers.Add("Site-Owner-Name", siteOwner.Name);
                        headers.Add("Site-Owner-Email", siteOwner.Email);
                        headers.Add("Site-Owner-Phone", siteOwner.Phone);
                        break;
                    }
                }
            }
        }

        class SiteOwner
        {
            public string Name { get; set; }
            public string Email { get; set; }
            public string Phone { get; set; }

            public SiteOwner(ConfigurationElement site)
            {
                this.Name = site.GetAttributeValue("ownerName") as string;
                this.Email = site.GetAttributeValue("ownerEmail") as string;
                this.Phone = site.GetAttributeValue("ownerPhone") as string;
            }
        }
    }

  13. Click the File | Save menu item.
  14. Right-click the SiteOwner project in the Solution Explorer window and select Properties from the menu
  15. Click the Signing tab
  16. Check the Sign the assembly checkbox
  17. Select “<new>” from the Choose a strong name key file dropdown box
  18. Type “KeyFile” in the Key file name box
  19. Uncheck the checkbox labeled Protect my key file with a password
  20. Click OK
  21. Click the File | Save SiteOwner menu item
  22. Click the Build | Build SiteOwner menu item.

Notepad.exe and Command-line Walk-through

The following walk-through will create an HTTP module that can display the site owner details from the configuration system. This walk-through assumes that you have the Microsoft .NET Framework SDK or Windows SDK installed. These include the C# compiler needed to build. For more help on building with csc.exe, see “Command-line Building With csc.exe”.

  1. Paste the code from step 12 of the Visual Studio 2008 walk-through into a new Notepad instance
  2. Click File | Save As
  3. Browse to the “C:\samples” folder if it exists; create the folder if it does not
  4. Type “SiteOwnerHeaderModule.cs” in the File name box
  5. Click the Save button
  6. Open a new SDK Command Prompt (Windows SDK) or Visual Studio Command Prompt (Visual Studio 2008).
  7. At the command prompt type:
    csc.exe /target:module /out:SiteOwner.netmodule /debug /r:System.Web.dll /r:%windir%\system32\inetsrv\Microsoft.Web.Administration.dll SiteOwnerHeaderModule.cs
  8. Press ENTER then type:
    sn.exe -k keyfile.snk
  9. Press ENTER then type:
    al.exe /out:SiteOwner.dll SiteOwner.netmodule /keyfile:keyfile.snk

Deploying Your HTTP Module

Once you have built your assembly using either Visual Studio 2008 or csc.exe, it’s time to deploy it to your system. This sample assumes that you don’t want all of your customers to have to install this same component to each Web site, so we’ll install it as a default module on the system.

Registering the Assembly in the GAC

To do this, we first need to copy the assembly file(s) to the target system and register them in the Global Assembly Cache. Using your preferred method to copy the files, copy SiteOwner.dll (and SiteOwner.netmodule if you used command-line compilation) to the %windir%\system32\inetsrv directory on the target installation machine. Next, open up a command prompt and type:

gacutil -i %windir%\system32\inetsrv\SiteOwner.dll

Now you’ll need to get a copy of the “Public Key Token” for the assembly so that you can install this strong-named assembly module in IIS. To do this, open your %windir%\assembly directory in Windows Explorer. Scroll down to the SiteOwner “file” in that window and copy the “Public Key Token”. You can do this easily by right-clicking the “file” and selecting Properties from the menu. Double-click the Public Key Token to select the entire contents, then right-click the highlighted text and select Copy from the menu.

image
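If you would rather skip Windows Explorer, the Strong Name tool that ships with the SDK can print the token directly (shown here against the copy in inetsrv; adjust the path if you keep the assembly elsewhere):

sn.exe -T %windir%\system32\inetsrv\SiteOwner.dll

The value printed after “Public key token is” is the string you’ll need in the next step.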

Installing the Module as a Default Module in IIS 7.0

Now that you have your assembly registered and your Public Key Token copied, it’s time to add this module to all the Web sites on your IIS 7.0 server. To accomplish this, open your %windir%\system32\inetsrv\config\applicationHost.config file in Notepad. Scroll down near the bottom of the file to the default location path that looks something like this:

<location path="" overrideMode="Allow">

Inside of that element scroll down to the <modules> element under the <system.webServer> section. You should find a number of modules already listed. To add yours, simply scroll to the bottom of the list and type the following:

<add name="SiteOwner" type="SiteOwner.SiteOwnerHeaderModule, SiteOwner, Version=1.0.0.0, Culture=neutral, PublicKeyToken=" />

Once you have typed this line in, you can paste your Public Key Token in right after “PublicKeyToken=” in the line you just typed. Save the applicationHost.config file.
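Once the file is saved, it’s worth double-checking that IIS can see the new entry. One quick way (a convenience sketch; findstr simply filters the output) is:

%windir%\system32\inetsrv\appcmd list modules | findstr SiteOwner

If nothing comes back, re-check the type name and the Public Key Token in the <add> line you just typed.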

Testing the Module

Choose a Web site on your server to test your application. If you want to run this module for files other than ASP.NET files, you’ll need to choose a Web site running in Integrated Pipeline. For our sample, we are going to make an HTML file to test our code. Our HTML file will contain some JavaScript that will get the headers from the server and display them in a message box. We could use a number of diagnostic tools to do this same work, but this will serve our purposes.

Open up Notepad and paste the following markup into it:

<html>
<body>
<script language="JavaScript" type="text/javascript">
<!--
var req = new ActiveXObject('Microsoft.XMLHTTP');
req.open('HEAD', location.href, false);
req.send();
alert(req.getAllResponseHeaders());
//-->
</script>
</body>
</html>

Save the file to the physical root of the Web site you wish to test. Type in the corresponding URL to your test file in your web browser. Your result should be a JavaScript alert that looks something like the following:

image 

You’ll notice in my results that I have already set the site’s ownerName attribute (see the Webcast titled “Extending IIS Configuration” to see how).

Improving our Sample

Introduction

As I stated earlier in this post, this demonstrates something that is definitely possible, but somewhat risky to do. We now want to improve our sample to limit our dependencies on native and/or third-party configuration schema definitions, and prevent us from encroaching on another module’s configuration territory. In concept, what we will do is move our configuration data to its own configuration section and use the site’s ID to allow us to map the data from the site collection to our configuration section.

Modifying the Schema

First, you’ll want to remove the custom site owner configuration data from the site nodes. Otherwise, once we modify the schema, the configuration will not validate. Once you’ve removed that data open the SiteOwner_schema.xml file in notepad and change it to the following:

<configSchema>
    <sectionSchema name="system.applicationHost/siteOwner">
        <collection addElement="site">
            <attribute name="id" type="uint" required="true" isUniqueKey="true" />
            <attribute name="ownerName" type="string" />
            <attribute name="ownerEmail" type="string" />
            <attribute name="ownerPhone" type="string" />
        </collection>
    </sectionSchema>
</configSchema>

Save the file in the same location as you did previously. Now we need to tell the applicationHost.config file about our new section schema. To do this, we add a new section declaration underneath the system.applicationHost sectionGroup in the applicationHost.config file.

<sectionGroup name="system.applicationHost">
    ...
    <section name="siteOwner" allowDefinition="AppHostOnly" overrideModeDefault="Deny" />
    ...
</sectionGroup>

Next we need to modify our HTTP module to read from the new configuration section:

using System;
using System.Web;
using System.Web.Hosting;
using Microsoft.Web.Administration;
using System.Collections.Specialized;

namespace SiteOwner
{
    public class SiteOwnerHeaderModule : IHttpModule
    {
        public void Dispose() { }

        public void Init(HttpApplication context)
        {
            context.PreSendRequestHeaders += new EventHandler(OnPreSendRequestHeaders);
        }

        void OnPreSendRequestHeaders(object sender, EventArgs e)
        {
            // Get the site collection
            ConfigurationSection section =
                WebConfigurationManager.GetSection("system.applicationHost/sites");
            ConfigurationElementCollection cfgSites = section.GetCollection();
            HttpApplication app = (HttpApplication)sender;
            HttpContext context = app.Context;

            // Loop through the sites to find the site of
            // the currently executing request
            foreach (ConfigurationElement cfgSite in cfgSites)
            {
                if (HostingEnvironment.SiteName == (string)cfgSite.GetAttributeValue("name"))
                {
                    // This is the correct site; find the associated
                    // site owner record in our custom config section
                    SiteOwner siteOwner = SiteOwner.Find((long)cfgSite.GetAttributeValue("id"));
                    if (siteOwner != null)
                    {
                        // Add the headers if we found a matching
                        // site owner record configured
                        NameValueCollection headers = context.Response.Headers;
                        headers.Add("Site-Owner-Name", siteOwner.Name);
                        headers.Add("Site-Owner-Email", siteOwner.Email);
                        headers.Add("Site-Owner-Phone", siteOwner.Phone);
                    }
                    break;
                }
            }
        }
    }

    class SiteOwner
    {
        public string Name { get; set; }
        public string Email { get; set; }
        public string Phone { get; set; }

        public SiteOwner(ConfigurationElement owner)
        {
            this.Name = owner.GetAttributeValue("ownerName") as string;
            this.Email = owner.GetAttributeValue("ownerEmail") as string;
            this.Phone = owner.GetAttributeValue("ownerPhone") as string;
        }

        public static SiteOwner Find(long siteId)
        {
            // Find the matching siteOwner record for this siteId
            ConfigurationSection section =
                WebConfigurationManager.GetSection("system.applicationHost/siteOwner");
            ConfigurationElementCollection cfgSiteOwners = section.GetCollection();
            foreach (ConfigurationElement cfgOwner in cfgSiteOwners)
            {
                if (siteId == (long)cfgOwner.GetAttributeValue("id"))
                {
                    return new SiteOwner(cfgOwner);
                }
            }
            return null;
        }
    }
}

 

Not much has changed here compared with the last version. The difference, apart from a few code comments, is that I’ve modified the OnPreSendRequestHeaders function to get the current site ID and pass it into the static SiteOwner.Find function. I then modified the SiteOwner.Find function to read from our new configuration section. Please note that production code would likely make SiteOwner extend ConfigurationElement, and a special ConfigurationSection would be developed. Go ahead and compile and deploy this assembly the same way you did the previous version. Make sure that you run gacutil on the new assembly once it is copied. Also, since we are running against a static HTML file, you might need to run iisreset.exe from a command line in order to clear the cache on the server.

Once you have deployed your assembly and the schema file, it’s time to test the application. If we refresh our Web page, we will get a JavaScript alert without any site owner headers listed:

image

This is because we haven’t created any site owner configuration entries yet. So let’s add a record directly to our config. I added the following configuration directly after the <sites> node in the applicationHost.config file:

<siteOwner>
    <site id="1" ownerName="Tobin Titus" ownerEmail="tobint@microsoft.com" ownerPhone="555-121-2121" />
</siteOwner>

Save your file and refresh your Web browser. You should now see your site owner configuration data.

image
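As an aside, editing applicationHost.config by hand isn’t the only option. The same kind of record should be addable with AppCmd’s collection syntax (a sketch; the id and owner values below are placeholders):

appcmd set config -section:system.applicationHost/siteOwner /+"[id='2',ownerName='Jane Doe',ownerEmail='jane@example.com',ownerPhone='555-343-4343']"

The /+ prefix appends an element to a collection. Because our schema marks id as a unique key, trying to add a duplicate id will fail with a validation error.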

Conclusions

In this blog post, we’ve walked through extending existing IIS configuration objects and accessing the custom data at runtime through an HTTP module. We’ve also talked about the risks of this approach and have demonstrated a better one. The better approach gives us the same ability to customize our IIS 7.0 configuration system, but with a much smaller dependency on the native configuration schema. We have little worry about schema changes, we don’t have to worry about other modules or utilities stomping on our custom data, and most importantly, we are far less likely to cause any Access Violations with the latter approach.

Many thanks go to Carlos once again for his help in putting this post together as well as for his CodeColorizer plug-in for Live Writer. High-fives and cookies in his office!

Reading IIS.NET Blogs (or any RSS) with PowerShell

Being a member of the IIS team, I often find myself checking blog posts to see what the members of the product team are blogging about. However, since PowerShell came out, I find myself doing more and more work in my scripts. It’s a bit annoying to have to jump out of PowerShell to go read blog posts. As such, I’ve written a few quick scripts to help me read IIS.NET from my pretty blue shell. For those of you who are already familiar with PowerShell and don’t want to read the long blog post, you can download my blog script from the DownloadCENTER: http://www.iis.net/downloads/default.aspx?tabid=34&g=6&i=1387

Setting up your PowerShell environment


To start, I’ve written a few supporting functions in my profile. These functions help me keep my scripts organized and, since I change my scripts quite often, help me sign them as well.
First off, if you haven’t created your own certificate for signing code, please go back and take a look at my first PowerShell blog post, which gives you the details on how to do this.
Next, we need to add a few things to your PowerShell profile. To open your PowerShell profile from within PowerShell, type:

PS > notepad $profile 
First, I add a function to allow us to easily sign our scripts (assuming you have created a cert to sign them with):

## Sign a file
##-------------------------------------------
function global:Sign-Script ( [string] $file )
{
     $cert = @(Get-ChildItem cert:\CurrentUser\My -codesigning)[0]
     Set-AuthenticodeSignature $file $cert
}
set-alias -name sign -value sign-script

The next function is used to help me organize things. I have several scripts for various work environments, and I like to organize them by function. So, I keep my IIS scripts in an “IIS” directory, my common scripts in a “common” directory, and so on. Inside each of my script directories, I keep a “load.ps1” script that I can use to initialize any of my work environments. Lastly, I create a PowerShell drive that matches the work environment name so I can get to my scripts easily. The function below does all the work for me.

## Create a Drive
##-------------------------------------------
function global:New-Drive([string]$alias)
{
     $path = (join-path -path $global:profhome -childpath $alias)
     if( !(Test-Path $path ) )
     {
          ## Create the drive's directory if it doesn't exist
          new-item -path $global:profhome -name $alias -type directory
     }
     else
     {
          ## Execute the load script for this drive if one exists
          $loadscript = (join-path -path $path -childpath "load.ps1")
          if( Test-Path $loadscript)
          {
               $load = &$loadscript
          }
     }
     # Create the drive
     new-Psdrive -name $alias -scope global -Psprovider FileSystem -root $path
}

Within my profile, I simply call this function and pass in an alias. When the function executes, it will create a directory with the alias name if one doesn’t exist already. If the directory does exist, it will check for the load.ps1 file inside that path and execute it. Lastly, it will create a PowerShell drive. I have the following calls added to my profile:

## Custom PS Drives
##-------------------------------------------
New-Drive -alias "common"
New-Drive -alias "iis"

Go ahead and save your profile now and type these commands: 

PS > Set-ExecutionPolicy Unrestricted
PS > &$profile
PS > Sign $profile
PS > Set-ExecutionPolicy AllSigned

 
The first command sets PowerShell into unrestricted mode. This is because we need to execute the profile script and it hasn’t been signed yet. The next command executes the profile. The third command uses the “sign” function that our profile script loaded. Since our profile is now signed, we can set our execution policy back to AllSigned. AllSigned means that PowerShell will execute scripts as long as they are signed.

From this point on, we can make changes to our profile and simply call our sign function again before we close our PowerShell instance. The next instance of PowerShell that is opened will have our changes.
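If you ever want to confirm a script’s signature before opening a new shell, Get-AuthenticodeSignature will tell you where you stand:

PS > Get-AuthenticodeSignature $profile

A Status of “Valid” means AllSigned mode will run the script; “HashMismatch” means the file changed after it was signed and needs to be re-signed.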

Creating / Using Blog Functionality


Now that we have our environment set up, let’s get to the blogging part. If you’ve set up your environment right, you can execute the following command:

PS > cd iis: 

This command will put you in the iis scripts directory.  Next, create a new blogs script by typing: 

PS > notepad blogs.ps1 

You’ll be prompted if you want to create the file. Go ahead and say yes. Next, paste the following into Notepad and save it:
  

## Sets up all custom feeds from feeds.txt
##---------------------------------------------------
function global:Import-Feed
{
     if( $global:RssFeeds -eq $null )
     {
          $global:RssFeeds = @{};
     }
     $RssFeeds.Add( "iisblogs", "http://blogs.iis.net/rawmainfeed.aspx" );
     $RssFeeds.Add( "iisdownloads", "http://www.iis.net/DownloadCENTER/all/rss.aspx" );
}
Import-Feed ## Call Import-Feed so we are ready to go
## Gets a feed or lists available feeds
##---------------------------------------------------
function global:Get-Feed( [string] $name )
{
     if( $RssFeeds.ContainsKey( $name ) )
     {
          return $RssFeeds[$name];
     }
     else
     {
          Write-Host "The feed requested does not exist";
          Write-Output $RssFeeds;
     }
}
## Gets IIS Blogs
##---------------------------------------------------
function global:Get-Blog([int]$index, [int]$last, [int]$first, [int]$open)
{
     $url = (Get-Feed iisblogs)
     return (Get-RSS $url $index $last $first $open)
}
## Gets a specific blog
##---------------------------------------------------
function global:Get-AuthorBlog([string]$creator)
{
     Get-Blog | Where-Object {$_.creator -eq $creator}
}
## Gets Downloads from IIS
##---------------------------------------------------
function global:Get-Download([int]$index, [int]$last, [int]$first, [int]$open)
{
     $url = (Get-Feed iisdownloads)
     return (Get-RSS $url $index $last $first $open)
}
## Gets a generic RSS Feed
##---------------------------------------------------
function global:Get-RSS([string]$url, [int]$index, [int]$last, [int]$first, [int]$open)
{
     $feed = [xml](new-object System.Net.WebClient).DownloadString($url)
     if($index)
     {
          return $feed.rss.channel.item[$index]
     }
     if($open)
     {
          $ieaddr = $env:programfiles + "\Internet Explorer\iexplore.exe"
          return &(get-item $ieaddr) $feed.rss.channel.item[$open].link
     }
     if($last)
     {
          return ($feed.rss.channel.item | Select -last $last)
     }
     if($first)
     {
          return ($feed.rss.channel.item | Select -first $first)
     }
     return $feed.rss.channel.item
}

Once you’ve saved this file, close it.  We need to sign this script and execute it by typing: 

PS IIS:> sign blogs.ps1
PS IIS:> ./blogs.ps1 

Now let’s start reading.

  • Read all Blogs
PS iis:> Get-Blog 
  • Read the last five blog posts
PS iis:> Get-Blog -last 5 
  • Read the first five blog posts
PS iis:> Get-Blog -first 5 
  • Read the 8th blog post
PS iis:> Get-Blog -index 8 
  • Open the 12th blog post in Internet Explorer
PS iis:> Get-Blog -open 12 
  • Read all blog posts by Bill Staples
PS iis:> Get-AuthorBlog bills 
  • Read all items in DownloadCENTER
PS iis:> Get-Download 
  • Get titles of all items in DownloadCENTER
PS iis:> Get-Download | Select Title 

Of course, all the laws of PowerShell still apply, so I can still do fun stuff like listing only the blog titles from my blog.

PS iis:> Get-AuthorBlog TobinTitus | Select Title 

I can do the same with the raw blog output:

PS iis:> Get-Blog -last 5 | Select pubDate, Creator, Title 

Happy reading. 

IIS.NET Blogs in Powershell

OT: Cat and Mouse

For those of you looking for IIS information, this blog post is not for you. For those of you that just like general computer information, read on and enjoy.

The computer mouse has come a long way over the years.  When Doug Engelbart invented the computer mouse, it was rather crude looking but was nonetheless very revolutionary.  Engelbart showed his invention to the world in 1968 during a presentation now known as “The Mother of All Demos.”  The mouse that Engelbart demoed was an electromechanical device that leveraged a large sphere to turn x and y coordinate counters as the device was rolled across a surface. Each click of the counter would tell the computer how far the mouse had “traveled”.  For several years the mouse kept the same basic principle. We improved on the original idea and replaced the large sphere with a small rubbery ball that turned x and y gear wheels, often with an additional diagonal gear wheel. The diagonal indicator was used to help correct the cursor movement if the mouse was rotated or tilted. The rubber helped the ball move across slick surfaces when a mouse pad just wasn’t cutting it. The downside to this rubbery surface was that pet owners ended up with a lot of cat (or dog) fur rolling into the mouse.  You would often have to open the bottom portion of the mouse and clean the hair and other debris out to make your pointing device work efficiently again.  Playing Doom or Quake with a junked-up mouse was an instant indication of a n00b that needed serious pwning.

Roll forward to the present day and we find optical mice taking the electromechanical device’s place. The optical approach solves a lot of the problems associated with older mice. For one, the new mice don’t have the rolling dowel-like rollers (counters) that can get gunked up anymore.  A rubbery ball is not picking up every piece of debris and yanking it into the mouse cavity as though it were a time-capsule for desktop debris.  So, why does your mouse still freak out when a piece of fur gets trapped under your optical mouse? 

The answer is pretty interesting and I’m sure will be solved with the next iteration of mouse invention. Many optical mice are created using a camera or an optoelectric sensor, an optical processor to compare images taken by the camera/sensor, and an LED  that illuminates the surface under your mouse.  The camera/sensor takes ~1500 picture samples a second!  The pictures are small (usually 10-20 square pixels) and gray-scale (usually registering fewer than 100 shades).  The optical processor examines and compares the picture samples to determine the relative position of the mouse to its previous position.

Now introduce your cat’s hair to the equation.  The cat hair gets entangled in the small cavity of the mouse where the optical sensor lives. As the mouse rolls across the desk, static builds up and flips the hair around wildly. As your mouse snaps those thousands of pictures, the hair position is captured and the optical processor gets confused by the sudden movement of the cat hair in the picture comparisons.  You can move your mouse slowly to the right, but if the hair is flipped around within the mouse cavity, the processor will think that you have jerked the mouse to the left or sharply downward.  If your cat is watching the computer screen as mine often does, these sharp movements may cause the cat to attack your monitor — truly creating an interesting game of cat and mouse.
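Out of curiosity, the compare-successive-frames idea can be sketched in a few lines of Python. This is a toy illustration only; real sensor firmware is far more sophisticated, and every name below is made up:

```python
# Toy version of what the optical processor does: compare two tiny
# grayscale "frames" at every candidate offset and pick the offset
# whose pixels line up best (smallest sum of absolute differences).

def shift(frame, dx, dy, fill=0):
    """Return a copy of `frame` moved by (dx, dy), padded with `fill`."""
    h, w = len(frame), len(frame[0])
    out = [[fill] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if 0 <= x - dx < w and 0 <= y - dy < h:
                out[y][x] = frame[y - dy][x - dx]
    return out

def estimate_motion(prev, curr, max_shift=2):
    """Brute-force block matching: find the (dx, dy) that best maps prev onto curr."""
    h, w = len(prev), len(prev[0])
    best = None
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            cand = shift(prev, dx, dy)
            err = sum(abs(cand[y][x] - curr[y][x])
                      for y in range(h) for x in range(w))
            if best is None or err < best[0]:
                best = (err, dx, dy)
    return best[1], best[2]

frame1 = [[0, 0, 0, 0, 0],
          [0, 9, 7, 0, 0],
          [0, 5, 8, 0, 0],
          [0, 0, 0, 0, 0],
          [0, 0, 0, 0, 0]]
frame2 = shift(frame1, 1, 0)            # the "mouse" slid one pixel right
print(estimate_motion(frame1, frame2))  # prints (1, 0)
```

A stray hair breaks this scheme because it changes pixels between frames independently of the mouse’s true motion, so the best-matching offset no longer reflects where the mouse actually went.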

IIS 7 Logging UI For Vista – Download Now

As many of you already know, the management console for IIS 7.0 on Windows Vista does not have a UI for logging.  Since this was a pain point for several customers, I decided to test out the extensibility APIs by creating a logging UI module. 

I’ve posted a preview version of my logging UI on the newly opened IIS Download Center. I will be releasing a few updates throughout the week with changes. The module also contains the source code for my UI module under the Microsoft Permissive License. This code will also be updated in future releases.

You can find the download at: http://www.iis.net/downloads/default.aspx?tabid=34&i=1328&g=6

If you have any questions, please feel free to contact me through my blog.

Extending IIS 7 APIs through PowerShell (Part II)

In my previous post, I showed you how easy it was to leverage your knowledge of the IIS 7 managed SDK in Windows PowerShell.  We loaded the IIS 7 managed assemblies and then traversed the object model to display site information and stop application pools.  While this in itself was pretty cool, I don’t think I quite got my point across about how powerful IIS 7 and PowerShell are together. As such, I wanted to show you some more fun things to do with PowerShell in the name of easy IIS 7 administration.

First, our examples still required a great deal of typing and piping and filtering.  Let’s modify our profile script from my previous post by adding at least one new global variable that will give us access to the ServerManager without much typing.  Add the following line to your profile script from my previous post.

new-variable iismgr -value (New-Object Microsoft.Web.Administration.ServerManager) -scope "global"

(if you don’t have a profile script yet, go back to my previous post to learn how to create one).

Note: If you signed your script before, you’ll have to do it again after modifying the script

Open a new instance of PowerShell and now you can access the site collection just by typing:

PS C:> $iismgr.Sites

 That’s considerably smaller than our previous examples.  But let’s not stop there.  What happens if I want to search the site collection? PowerShell has some fun syntax for this as well. I simply pipe the output of my SiteCollection to a “Where-Object” cmdlet and then specify what site I’m looking for:

$iismgr.Sites | Where-Object {$_.Name -match "Default*"}

This is still quite a bit of typing when all I really want to do is find the default website. You may ask “Wouldn’t it be easier if we could just add a “Find” method to the SiteCollection object?” Well I’m glad you asked *cough*! Next, we are going to do just that! Open up another instance of notepad and add the following XML to it:


<?xml version="1.0" encoding="utf-8" ?>
<!-- *******************************************************************
Copyright (c) Microsoft Corporation.  All rights reserved.
 
THIS SAMPLE CODE AND INFORMATION IS PROVIDED "AS IS" WITHOUT WARRANTY 
OF ANY KIND,WHETHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO 
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND/OR FITNESS FOR A PARTICULAR
PURPOSE. IF THIS CODE AND INFORMATION IS MODIFIED, THE ENTIRE RISK OF USE
OR RESULTS IN CONNECTION WITH THE USE OF THIS CODE AND INFORMATION 
REMAINS WITH THE USER.
******************************************************************** -->
<Types>
  <Type>
    <Name>Microsoft.Web.Administration.SiteCollection</Name>
      <Members>
        <ScriptMethod>
          <Name>Find</Name>
          <Script>
          $rtr = "";
          if ( $args[0] ) 
          { $name = $args[0];
            $rtr = ((New-Object Microsoft.Web.Administration.ServerManager).Sites | 
                     Where-Object {$_.Name -match $name}) 
          } 
          else 
          { 
            $rtr = "No sites found." 
          };
          $rtr
          </Script>
        </ScriptMethod>
      </Members>
  </Type>
</Types>

What we’ve done is identify that we want to add a scripted method to the Microsoft.Web.Administration.SiteCollection object. In our case, I’ve added a “Find” method by using the same cmdlet that we typed before to search the site collection. The difference is, this time I use the $args array variable to check for a parameter and use it if one is available. Now, save this file into your %windir%\System32\WindowsPowerShell\v1.0 directory as “iis.types.ps1xml”. Once you’ve saved the file, sign it the same way you signed your profile script. Keep in mind that these XML files contain code, so signing your XML is required to keep your PowerShell experience a secure one. Now, open your profile script (again) and add the following lines to the end:


new-variable iissites -value (New-Object Microsoft.Web.Administration.ServerManager).Sites -scope "global"
new-variable iisapppools -value (New-Object Microsoft.Web.Administration.ServerManager).ApplicationPools -scope "global"
update-typedata -append (join-path -path $PSHome -childPath "iis.types.ps1xml")

Note: Once again, you’ll have to re-sign this profile if your execution policy requires signed scripts.

Notice that I added two more variables: $iissites and $iisapppools. These variables allow me to access the site collection and application pool collection with a simple keyword. Let’s try them out in PowerShell. Make sure you open a new instance of PowerShell so your profile script and XML type data are loaded properly. Once your new instance of PowerShell is open, type the following:

PS C:> $iissites.Find("^Default*")

PowerShell will do all the work for you and you have MUCH less typing to do.

Another alternative to using xml files is to simply create a function and add it to your profile. For instance, we can create a function called “findsite” that provides the same functionality as our previous example. Either type the following command into PowerShell or add it to your profile script:

PS C:> function findsite { $name=$args[0]; ((New-Object Microsoft.Web.Administration.ServerManager).Sites | Where-Object {$_.Name -match $name}) }

Now you can search for a site using the following syntax:

PS C:> findsite default*

Whatever way we choose to extend Microsoft.Web.Administration and/or PowerShell, we can use our output as we did before:

PS C:> (findsite default*).Id

The previous line should display the Id of the default web site. We can also stop the website:

PS C:> (findsite default*).Stop()

We can keep taking this to extremes and truncate every operation that we perform on a semi-regular basis. These scripts are not one-offs. Each script function we create or append to an existing object model can be reused and piped as input to another function. The possibilities are endless and completely customizable to your needs.

Adding performance

Some of you may have found the title to this blog post as amusing as I do. Throughout my career, I’ve been called into many a meeting asking that I “add” performance to a complete or nearly-complete product. No matter how many scowls I got, I could never resist joking about just adding the IPerformance interface. “And, while you are at it, just add the IScalable one too”, I would quip. (OK, I know some of you are doing searches for these interfaces — don’t bother). Laugh as hard as I may, I’m embarrassed to say that I am trying to do this very thing now.

A few weeks ago, I started playing around with a small sample that shows off some of the fun features of IIS 7. As I started coding, I added more features to the sample. Scope creep is a no-no, but this was something I was doing on my own time so I didn’t have a problem with it. However, the sample got so large that I had to actually stop and design a public API to support it. By the time I was done, my “sample” had a set of providers, a common API, localization support, utilities, and most notably, performance issues.

Last year on my personal blog, I talked about “engineering for usability“. In that post, I declared that the simplest design is sometimes (and probably most often) the best approach. That said, performance should have been considered very early on, and the sample should have been kept simple. Performance, as many of you know, is not something you add as an afterthought. Starting my sample the way I did, I never considered code performance to be an issue.

Now I’m forced to decide: do I scale back my sample knowing full well that it doesn’t have a security model or provide easy extensibility, or do I redesign the sample with the current feature set and design for performance up front? I’m leaning toward the latter despite reciting “keep it simple, stupid” in my head over and over again.

Where can I find <feature> in IIS 7.0?

Question: Where can I find Windows Authentication in IIS 7.0?

I’ve seen this question or questions like this asked numerous times so I thought it would make a nice quick — and hopefully useful — blog post.

In previous versions of IIS, administrators were able to enable or disable features on their servers by simply checking or unchecking a box in the IIS management console.  However, when an administrator unchecked a feature in IIS, the feature still existed on the machine and was even loaded.  Unchecking a box allowed the administrator to disable a feature — but it didn’t physically remove anything from the machine.

This all changes with IIS 7.0. Currently, when you install IIS 7.0 on Windows Vista (or Longhorn Server, for that matter), you must select the features that you want installed. Each feature in IIS 7.0 is componentized into its own module library. The features you select at installation, and ONLY the features you select, will be copied to disk under the “%windir%\System32\inetsrv” directory. If you do not install a feature, it simply won’t exist on your machine. This obviously makes the server more robust and performant at the same time, providing a high degree of customization as well as security.

Another thing to note is that once a feature module has been installed to the proper directory, it must also be added to the IIS configuration system. For more information on how to install and uninstall modules, see: Getting Started with Modules.

If you are having a hard time getting WindowsAuthentication, DynamicCompression, or some other feature of IIS to work, a good place to start looking is your installation. Make sure you’ve installed the features you need for your purposes.

Unsafe thread safety

As I stated in my last post, for the past two days I’ve been sitting in Jeffrey Richter’s threading class. The class is near the end and I can’t say that a lot of new concepts have been taught. Another student and I have decided that the class should have been renamed, “Threading Basics”. That’s not to say anything of Richter’s teaching skill or the content of the class.  It just goes to show that if the architecture of a product is right, the threading code should be extremely simple to use.

However, what I love about this class is that it reminds me how much I enjoy this topic.  The theory of the perfect architecture doesn’t exist. Additionally, many times the developer has little-to-no ability to push back on a bad architecture. It’s in these cases that you must use your bag of concurrency tricks to work your way out of the hole the architect(s) put you in.  That sounds easy enough, but what if the architecture included – *gasp* – “less-than-optimal” decisions in the actual .NET Framework classes?  What if those decisions were made in the very methods that are supposed to help you with these synchronization problems?  These problems perplexed me when I first encountered them and I never really thought to blog about them (I was actually just scared I was doing something wrong).  Taking this class gave me the perfect excuse to bring the topic up.

Take a look at the following pattern in C++:

class CConcurrencySample
{
public:
    CConcurrencySample( ) 
    {   // Initialize a critical section on class construction
        InitializeCriticalSection( &m_criticalSection );
    };
    ~CConcurrencySample( ) 
    {   // Destroy the critical section on class destruction
        DeleteCriticalSection( &m_criticalSection );
    };

    // This method is over-simplified on purpose
    ////////////////////////////////////////////////////////////
    BOOL SafeMethod( )
    {
        // Claim ownership of the critical section
        if( ! TryEnterCriticalSection( &m_criticalSection ) )
        {
            return FALSE; 
        }

        ////////////////////////////////////////////////
        // Thread-safe code goes here
        ////////////////////////////////////////////////

        // Release ownership of the critical section
        LeaveCriticalSection( &m_criticalSection );

        return TRUE;
    };
private:
    CRITICAL_SECTION m_criticalSection;
};

Anyone that’s done any work in threading recognizes this pattern. We’ve declared a class named CConcurrencySample. When any code instantiates an instance of CConcurrencySample, the constructor initializes the private CRITICAL_SECTION instance in the object. When any code destroys an instance of CConcurrencySample, the CRITICAL_SECTION is also deleted. With this pattern, we know that all members have access to the critical section and can claim ownership of it at any point. For my purposes, I’ve created a method named SafeMethod that takes the critical section lock when it is called, and releases the lock when it leaves the method. The good part about this pattern is that each instance has a way to lock SafeMethod. The downside is that if a client doesn’t need to call SafeMethod, the CRITICAL_SECTION is created, initialized, and destroyed without need, uselessly taking up memory (24 bytes on a 32-bit OS) and processor cycles.

The CLR implemented this pattern and even expanded on it. They also tried to fix the waste incurred with non-use of the critical section. The CLR implementation works as follows. Each System.Object created by the CLR contains a sync block index (4 bytes on a 32bit OS) that is defaulted to -1. If a critical section is entered within the object, the CLR adds the object’s sync block to an array and sets the object’s sync block index to the position in the array. With me so far? So how does one enter a critical section in .NET? Using the Monitor class, of course. The following example emulates the same functionality as the previous C++ sample by using C#.

public class CConcurrencySample
{
   bool SafeMethod( )
   {
      // Claim ownership of the critical section
      if( ! Monitor.TryEnter( this ) )
      {
          return false;
      }

      ////////////////////////////////////////////////
      // Thread-safe code goes here
      ////////////////////////////////////////////////
    		
      // Release ownership of the critical section
      Monitor.Exit( this );

      return true;
    }
}

Monitor.Enter and Monitor.Exit are supposed to provide functionality similar to entering and leaving a critical section (respectively). Since every System.Object has its own SyncBlock, we just need to pass our current object to the Monitor.Enter and Monitor.Exit methods. This performs the lock I described earlier by setting the sync block index. Sounds great, but what’s the difference between that C++ example and the C# pattern? What issue can you see in the .NET Framework implementation of this pattern that you don’t see in the C++ sample I provided?

Give up?

The answer is simple. In the C++ sample, the lock object (the CRITICAL_SECTION field) is private. This means that no external clients can lock my object. My object controls the locks. In the .NET implementation, Monitor.Enter can take ANY object. ANY caller can lock on ANY object. External clients locking on your object’s SyncBlock can cause a deadlock.
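To see the hazard concretely, here is a minimal sketch in Python rather than C#. The class and method names are mine, and `threading.Lock` merely stands in for a SyncBlock: when the lock an object uses internally is reachable by callers, which is exactly what locking on `this` exposes, any caller can take that lock and shut the object’s own “safe” methods out.

```python
# Python sketch of the hazard: the object's lock is publicly reachable,
# the moral equivalent of calling Monitor.Enter(this) in .NET.
# (Class/method names are illustrative, not from any real API.)

import threading

class Sample:
    def __init__(self):
        self.lock = threading.Lock()   # public: any caller can grab it

    def safe_method(self):
        # Analogue of TryEnterCriticalSection: fail instead of blocking.
        if not self.lock.acquire(blocking=False):
            return False
        try:
            pass  # thread-safe work would go here
        finally:
            self.lock.release()
        return True

s = Sample()
print(s.safe_method())   # prints True: nothing holds the lock yet

s.lock.acquire()         # external code locks *our* object...
print(s.safe_method())   # prints False: we are locked out of our own method
s.lock.release()
```

Keeping the lock in a field that callers never touch removes this failure mode entirely.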

    public class CConcurrencySample2
    {
        // Create the lock object once, up front: lazily creating it inside
        // SafeMethod would itself be a race condition.
        private readonly Object m_lock = new Object( );

        bool SafeMethod ( )
        {
            if ( ! Monitor.TryEnter( m_lock ) )
            {
                return false;
            }

            ////////////////////////////////////////////////
            // Thread-safe code goes here
            ////////////////////////////////////////////////

            Monitor.Exit( m_lock );

            return true;
        }
    }

With this approach, we are declaring a plain, privately-held object in the class. Be careful when using this approach that you don’t use a value type for your private lock. A value type passed to a Monitor is boxed each time a Monitor method is called, so a new SyncBlock is used every time, effectively making the lock useless.
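The boxing pitfall can be mimicked in Python. This is a hedged toy, not the CLR mechanism itself: if each locking call operates on a different object, as happens when a value type is boxed anew for every Monitor call, no mutual exclusion takes place.

```python
# Python analogue of the boxing pitfall (threading.Lock stands in for a
# SyncBlock; this is an illustration, not the CLR mechanism itself).

import threading

# Two distinct lock objects, like two boxed copies of one value type:
first_box = threading.Lock()
second_box = threading.Lock()
first_box.acquire()
print(second_box.acquire(blocking=False))   # prints True: nothing was excluded

# One shared reference-type lock, like the private m_lock field:
shared = threading.Lock()
shared.acquire()
print(shared.acquire(blocking=False))       # prints False: real mutual exclusion
```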

So what does this have to do with IIS? Nothing just yet. But I plan to cover some asynchronous web execution patterns in future posts and I figured this would be a great place to start.

This information is pretty old, but sitting in class I realized it might not be completely obvious to everyone just yet. If you were someone in-the-know about this information since WAY back in 2002, please forgive the repetition.

Accessing IIS 7 APIs through PowerShell (Part I)

I’ve caught the PowerShell bug. In between stints with my ever-expanding code samples, I play with PowerShell a lot.  I thought I’d share a quick example of how to load Microsoft.Web.Administration.dll and use it to perform some basic tasks.

Note: I’m running these samples on Windows Vista RTM, but I have no reason to believe this will not work with the PowerShell release candidates for the Vista RC* builds that are available now.

So let’s get started.

First, PowerShell has no idea where Microsoft.Web.Administration.dll is, so you have to tell it how to load it. Anyone who has written code to dynamically load an assembly should be familiar with this syntax.  Type the following command:

PS C:> [System.Reflection.Assembly]::LoadFrom( "C:\Windows\System32\inetsrv\Microsoft.Web.Administration.dll" )

The path to your assembly may change depending on your install.  I’ll show you later how to use environment variables to calculate the correct path.  In the meantime, the output of the line above will display something like the following:

GAC  Version    Location
---  -------    --------
True v2.0.50727 C:\Windows\assembly\GAC_MSIL\Microsoft.Web.Administration\7.0.0.0__31bf3856ad364e35\Microsoft . . .

Once the assembly is loaded you can use PowerShell’s “New-Object” command to create a ServerManager object that is defined in Microsoft.Web.Administration.

PS C:> (New-Object Microsoft.Web.Administration.ServerManager)

This doesn’t give you much except the list of properties the ServerManager exposes:

ApplicationDefaults      : Microsoft.Web.Administration.ApplicationDefaults
ApplicationPoolDefaults  :
ApplicationPools         :
SiteDefaults             : Microsoft.Web.Administration.SiteDefaults
Sites                    : {Default Web Site}
VirtualDirectoryDefaults : Microsoft.Web.Administration.VirtualDirectoryDefaults
WorkerProcesses          : {}

To get more detail, you need to use the properties and methods of the ServerManager object to drill down and get the information we want. The ServerManager provides access to all of the sites on your machine through a SiteCollection object. This SiteCollection is made available through the “Sites” property of the ServerManager. 

PS C:> (New-Object Microsoft.Web.Administration.ServerManager).Sites

Which will produce a list view of all the sites and their associated property names/values.

ApplicationDefaults        : Microsoft.Web.Administration.ApplicationDefaults
Applications               : {DefaultAppPool, Classic .NET AppPool}
Bindings                   : {}
Id                         : 1
Limits                     : Microsoft.Web.Administration.SiteLimits
LogFile                    : Microsoft.Web.Administration.SiteLogFile
Name                       : Default Web Site
ServerAutoStart            : True
State                      : Started
TraceFailedRequestsLogging : Microsoft.Web.Administration.SiteTraceFailedRequestsLogging
VirtualDirectoryDefaults   : Microsoft.Web.Administration.VirtualDirectoryDefaults
ElementTagName             : site
IsLocallyStored            : True
RawAttributes              : {name, id, serverAutoStart}
ApplicationDefaults        : Microsoft.Web.Administration.ApplicationDefaults
Applications               : {DefaultAppPool}
Bindings                   : {}
Id                         : 2
Limits                     : Microsoft.Web.Administration.SiteLimits
LogFile                    : Microsoft.Web.Administration.SiteLogFile
Name                       : Test Web Site 1
ServerAutoStart            : False
State                      : Stopped
TraceFailedRequestsLogging : Microsoft.Web.Administration.SiteTraceFailedRequestsLogging
VirtualDirectoryDefaults   : Microsoft.Web.Administration.VirtualDirectoryDefaults
ElementTagName             : site
IsLocallyStored            : True
RawAttributes              : {name, id, serverAutoStart}

Of course, this isn’t the easiest view to read, so let’s list just the site names by piping our site list to the “ForEach-Object” cmdlet in PowerShell:

 PS C:> (New-Object Microsoft.Web.Administration.ServerManager).Sites | ForEach-Object {$_.Name}

This looks much more concise:

Default Web Site
Test Web Site 1

We could also use the Select-Object syntax to query the list into a table format:

 PS C:> (New-Object Microsoft.Web.Administration.ServerManager).Sites | Select Id, Name
        Id Name
        -- ----
         1 Default Web Site
         2 Test Web Site 1

Now let’s use PowerShell to manage application pools. We can fit several commands on one line by using semi-colons.  The following command line is actually four different operations: storing the application pool collection in a variable, displaying the name and runtime state of the first application pool, stopping the first application pool, then displaying the name and state again.

PS C:> $pools=(New-Object Microsoft.Web.Administration.ServerManager).ApplicationPools; $pools.Item(0) | Select Name, State;$pools.Item(0).Stop(); $pools.Item(0) | Select Name, State

Running this sample should display the following:

Name                                         State
----                                         -----
DefaultAppPool                               Started
Stopped
DefaultAppPool                               Stopped

This is nice, but we can do this already with appcmd.exe, right? Well, to some extent.  With appcmd.exe we don’t get the PowerShell features that let us format the output data to our liking. Also, as a developer, I find it much easier to use the API syntax I’m already familiar with than to remember appcmd.exe syntax.  Furthermore, PowerShell allows us to use WMI alongside our managed code calls, and unlike appcmd.exe, you can extend PowerShell with custom cmdlets. PowerShell also gives you the ability to easily manage multiple servers from one command prompt on one machine.  Watch the PowerShell/IIS 7 interview on Channel9 if you want to see this remote administration in action.

One last thing that PowerShell brings to the table is the ability to “spot-weld” our object models (as Scott Hanselman quipped). You can create, modify, and extend type data and formatting to your heart’s desire.  For more information on this, check out the documentation included with the PowerShell install.

So, I would be remiss in this post if I didn’t try to make your PowerShell / IIS 7.0 life easier.  As such, I’ve created a profile script that loads all the IIS 7.0 managed assemblies for you.  The script is simple and contains more  echo commands than actual working script lines.

To install this script run the following command inside PowerShell:

PS C:> if ( test-path $profile ) { echo "Path exists." } else { new-item -path $profile -itemtype file -force }; notepad $profile

This will create a profile path for you if you don’t already have one, then open up your profile in notepad.  If you haven’t added anything to the file, it will obviously display an empty file.  Paste the following in notepad when it opens:

echo "Microsoft IIS 7.0 Environment Loader"
echo "Copyright (C) 2006 Microsoft Corporation. All rights reserved."
echo "  Loading IIS 7.0 Managed Assemblies"

$inetsrvDir = (join-path -path $env:windir -childPath "system32\inetsrv")
Get-ChildItem -Path (join-path -path $inetsrvDir -childPath "Microsoft*.dll") | ForEach-Object {[System.Reflection.Assembly]::LoadFrom( (join-path -path $inetsrvDir -childPath $_.Name)) }

echo "  Assemblies loaded."

Now, save the profile and close notepad.  You will likely have to sign this script or change your script execution policy to something very weak to make this script run properly (obviously I’m not recommending the latter). To find out more about signing scripts, type “get-help about_signing” in PowerShell. The instructions to create a self-signed certificate found in that help file are as follows:

In an SDK Command Prompt window, run the following commands.
The first command creates a local certificate authority for your computer.
The second command generates a personal certificate from the certificate authority:

makecert -n "CN=PowerShell Local Certificate Root" -a sha1 `
   -eku 1.3.6.1.5.5.7.3.3 -r -sv root.pvk root.cer `
   -ss Root -sr localMachine
makecert -pe -n "CN=PowerShell User" -ss MY -a sha1 `
   -eku 1.3.6.1.5.5.7.3.3 -iv root.pvk -ic root.cer

MakeCert will prompt you for a private key password.

Go ahead and make a certificate for yourself following those instructions. To sign your profile, within PowerShell type the following:

PS C:> Set-AuthenticodeSignature $profile @(get-childitem cert:\CurrentUser\My -codesigning)[0]

So far, you’ve created a certificate and signed your script. Now, you will have to change your script execution policy down at least one level from the default.  The default doesn’t allow scripts at all.  To get scripts to execute, at the minimum you’ll have to set it to “AllSigned” to allow only signed scripts to execute.  In this mode, each time you execute a script from a new publisher, you’ll be asked what level of trust to assign to the publisher (unless you respond to the prompt to “Always Run” or “Never Run” scripts from that publisher).

Do you want to run software from this untrusted publisher?
File C:\Users\TobinT\Documents\WindowsPowerShell\Microsoft.PowerShell_profile.ps1 is published by CN=PowerShell User
and is not trusted on your system. Only run scripts from trusted publishers.
[V] Never run  [D] Do not run  [R] Run once  [A] Always run  [?] Help (default is "D"):

Now, each new instance of PowerShell that you run will automatically load the IIS 7.0 managed assemblies.  I know it seems like a great deal of work, but it really isn’t once you’ve made a few rounds inside PowerShell. Consider that you only have to create the script once, and then you have the full range of the managed IIS 7.0 SDK at your fingertips inside PowerShell.

If you have problems, feel free to leave comments and I’ll do my best to help you.