Author Archives: Tobint

IIS 7 Logging UI For Vista – Download Now

As many of you already know, the management console for IIS 7.0 on Windows Vista does not have a UI for logging.  Since this was a pain point for several customers, I decided to test out the extensibility APIs by creating a logging UI module. 

I’ve posted a preview version of my logging UI on the newly opened IIS Download Center. I will be releasing a few updates throughout the week. The download also includes the source code for the UI module under the Microsoft Permissive License. This code will be updated in future releases as well.

You can find the download at: http://www.iis.net/downloads/default.aspx?tabid=34&i=1328&g=6

If you have any questions, please feel free to contact me through my blog.

Ups and Downs of the past month

It’s sometimes hard to maintain a completely technical blog, particularly when you have long absences from one post to the next. You feel you have to explain yourself to your readership each time you take more than a week or two between posts. This is no different. Despite having a ton of stuff to blog about, I haven’t posted since December. Much of this has to do with regular holiday planning, but much more has happened. This past month has had some major ups and downs for me emotionally, and I’m still a little mixed up.

This post will have a great deal of personal information in it, and much of it has nothing to do with IIS, but it should give you some insight into Microsoft if you are interested in that sort of thing.

If you’ve been paying attention to the weather in the Pacific Northwest, you know that on December 14th, we had a major windstorm that knocked out power to over 1 million customers. The bad news is that on December 15th, I had a flight scheduled to go visit my family on the west coast. United Airlines cancelled several flights, which put their check-in line in complete disarray. My flight wasn’t cancelled, but no thanks to United Airlines, I was not able to board my flight, and no other flights could be found for me along with my two cats. My trip was cancelled.

Since I was still in town, I helped put the finishing touches on a “Think Week” paper I had been writing along with two other employees here at Microsoft. Despite the power outages, we were able to make it into one of the buildings at Microsoft and submit our paper by the deadline. It was an interesting experience to submit a paper that every full time employee of the company can read and comment on, including Bill Gates.

I also took the opportunity to work on writing a UI module for IIS in that time. The module was actually finished, but after consulting with one of the developers, I decided to modify the sample and I haven’t had time to clean it up and submit it yet. More on that later.

On December 28th, I got a phone call saying that my grandfather had passed away. My family has always been important to me. My grandparents hold a special place in my heart because they gave me a lot of my determination. My grandfather was a tail-gunner in World War II and had seen more inventions and re-inventions in his lifetime than I could fathom. He always chuckled when I came home to visit and told him about this great “new” thing in technology that would change the world unlike anything else ever had. I didn’t get the joke then, but I do now. I don’t mean to downplay the importance of technology, but to put it in perspective: we are not the first people to change the world and we will certainly not be the last. I’ll miss my grandfather terribly and there are no words to describe how this has changed my world.

It was good, however, to go back home for my grandfather’s funeral. I got to help my dad clean up and set up his workshop. I also got to see one of my best friends, whom I’ve known since I was 8 years old; I hadn’t seen him in the 10 months since I moved out west to work for Microsoft. I returned from my grandfather’s funeral on the 8th and have tried to get 100% back into the swing of things. I really hadn’t been able to focus on work the way that I usually do until Thursday, when I finally started making some headway on a few projects at work.

Before I continue, I need to give some of you some background on me. 20 years ago, when I was in 5th grade, I taught myself Microsoft BASIC and Atari Assembler. I remember telling my parents back then that I was going to work for Microsoft one day. My mom told me to finish my homework first. Of course, I did finish my homework, and 20 years later, here I am working for Microsoft and submitting a paper to the very man that started the company. A few weeks went by and we received a LOT of feedback about our paper, but none of it was from Mr. Gates. We sort of expected that. Bill doesn’t respond to many papers in a year.

However, the other day I got a barely coherent message on my cell phone. It was from one of the co-authors of the Think Week paper, excitedly announcing that Bill Gates had read our paper. I tried to connect to the VPN to go read the feedback but had some issues because my home network was, shall we say, “in flux”. It drove me nuts that there was feedback from the very man that we addressed the paper to, but I couldn’t read it. I jumped in the shower and then rushed into work to read it personally. During the entire ride to work, I forced myself to watch my speed carefully. However, my heart was racing so fast that I think it pushed 10 extra pounds of blood into my right foot. I got to work, quickly opened up the Think Week site, and scrolled to our paper. When I read the comment, I couldn’t help but become giddy. The feedback was favorable and verbose — about a page and a half. I was elated, and sitting here today, I’m still in shock about the entire thing.

I have to say that I’m extremely thankful for the opportunities we have inside our company. Inside the company, you can tackle any problem that you want. You simply need to apply your efforts in an area, and you’ll likely get the support of your managers, team, and friends. I know there are many people out there who like to point out some negative issues inside Microsoft. Some anonymous blogs out there take a rather candid look at company issues and seem to err on the side of complaining. My experience inside the company so far has been spectacular. It helps that I knew people inside the company before I moved out here. That said, there are many opportunities inside the company to make recommendations and get involved. I couldn’t be happier about that.

So, while this was not a technical post, I felt I owed it to you all to explain where I have been and why I haven’t been blogging (again). Thanks for putting up with me.

Extending IIS 7 APIs through PowerShell (Part II)

In my previous post, I showed you how easy it was to leverage your knowledge of the IIS 7 managed SDK in Windows PowerShell.  We loaded the IIS 7 managed assemblies and then traversed the object model to display site information and stop application pools.  While this in itself was pretty cool, I don’t think I quite got my point across about how powerful IIS 7 and PowerShell are together. As such, I wanted to show you some more fun things to do with PowerShell in the name of easy IIS 7 administration.

First, our examples still required a great deal of typing, piping, and filtering.  Let’s modify the profile script from my previous post by adding a new global variable that gives us access to the ServerManager without much typing.  Add the following line to your profile script:

new-variable iismgr -value (New-Object Microsoft.Web.Administration.ServerManager) -scope "global"

(If you don’t have a profile script yet, go back to my previous post to learn how to create one.)

Note: If you signed your script before, you’ll have to sign it again after modifying it.
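If you do need to re-sign, the step can be scripted. Here is a minimal sketch, assuming a code-signing certificate is already installed in your personal certificate store (if your store layout differs, adjust the path accordingly):

```powershell
# Grab the first code-signing certificate from the current user's store
$cert = @(Get-ChildItem cert:\CurrentUser\My -CodeSigningCert)[0]

# Re-sign the modified profile script
Set-AuthenticodeSignature -FilePath $profile -Certificate $cert
```

Running this after every edit keeps the signature in sync with the script contents.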

Open a new instance of PowerShell and now you can access the site collection just by typing:

PS C:\> $iismgr.Sites

That’s considerably shorter than our previous examples.  But let’s not stop there.  What happens if I want to search the site collection? PowerShell has some fun syntax for this as well. I simply pipe the output of my SiteCollection to the “Where-Object” cmdlet and then specify what site I’m looking for:

$iismgr.Sites | Where-Object {$_.Name -match "Default*"}

This is still quite a bit of typing when all I really want to do is find the default website. You may ask, “Wouldn’t it be easier if we could just add a ‘Find’ method to the SiteCollection object?” Well, I’m glad you asked *cough*! Next, we are going to do just that! Open up an instance of Notepad and add the following XML to it:


<?xml version="1.0" encoding="utf-8" ?>
<!-- *******************************************************************
Copyright (c) Microsoft Corporation.  All rights reserved.
 
THIS SAMPLE CODE AND INFORMATION IS PROVIDED "AS IS" WITHOUT WARRANTY 
OF ANY KIND,WHETHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO 
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND/OR FITNESS FOR A PARTICULAR
PURPOSE. IF THIS CODE AND INFORMATION IS MODIFIED, THE ENTIRE RISK OF USE
OR RESULTS IN CONNECTION WITH THE USE OF THIS CODE AND INFORMATION 
REMAINS WITH THE USER.
******************************************************************** -->
<Types>
  <Type>
    <Name>Microsoft.Web.Administration.SiteCollection</Name>
      <Members>
        <ScriptMethod>
          <Name>Find</Name>
          <Script>
          $rtr = "";
          if ( $args[0] ) 
          { $name = $args[0];
            $rtr = ((New-Object Microsoft.Web.Administration.ServerManager).Sites | 
                     Where-Object {$_.Name -match $name}) 
          } 
          else 
          { 
            $rtr = "No sites found." 
          };
          $rtr
          </Script>
        </ScriptMethod>
      </Members>
  </Type>
</Types>

What we’ve done is identify that we want to add a scripted method to the Microsoft.Web.Administration.SiteCollection object. In our case, I’ve added a “Find” method by using the same cmdlet that we typed before to search the site collection. The difference is, this time I use the $args array variable to check for a parameter and use it if one is available. Now, save this file into your %windir%\system32\WindowsPowerShell\v1.0 directory as “iis.types.ps1xml”. Once you’ve saved the file, sign it the same way you signed your profile script. Keep in mind that these XML files contain code, so signing your XML is required to keep your PowerShell experience a secure one. Now, open your profile script (again) and add the following lines to the end:


new-variable iissites -value (New-Object Microsoft.Web.Administration.ServerManager).Sites -scope "global"
new-variable iisapppools -value (New-Object Microsoft.Web.Administration.ServerManager).ApplicationPools -scope "global"
update-typedata -append (join-path -path $PSHome -childPath "iis.types.ps1xml")

Note: Once again, you’ll have to re-sign this profile if your execution policy requires signed scripts.

Notice that I added two more variables: $iissites and $iisapppools. These variables allow me to access the site collection and application pool collection with a simple keyword. Let’s try them out in PowerShell. Make sure you open a new instance of PowerShell so your profile script and XML type data are loaded properly. Once your new instance of PowerShell is open, type the following:

PS C:\> $iissites.Find("^Default*")

PowerShell will do all the work for you and you have MUCH less typing to do.

Another alternative to using xml files is to simply create a function and add it to your profile. For instance, we can create a function called “findsite” that provides the same functionality as our previous example. Either type the following command into PowerShell or add it to your profile script:

PS C:\> function findsite { $name=$args[0]; ((New-Object Microsoft.Web.Administration.ServerManager).Sites | Where-Object {$_.Name -match $name}) }

Now you can search for a site using the following syntax:

PS C:\> findsite default*

Whatever way we choose to extend Microsoft.Web.Administration and/or PowerShell, we can use our output as we did before:

PS C:\> (findsite default*).Id

The previous line should display the Id of the default web site. We can also stop the website:

PS C:\> (findsite default*).Stop()

We can keep taking this to extremes and truncate every operation that we perform on a semi-regular basis. These scripts are not one-offs. Each script function we create or append to an existing object model can be reused and piped as input to another function. The possibilities are endless and completely customizable to your needs.
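As a quick illustration of that reuse, here is a hypothetical “stopsite” function (the name is mine, not part of any API) built on the same pattern as findsite, which stops every site whose name matches a pattern:

```powershell
# Hypothetical example: stop every site whose name matches the given pattern
function stopsite
{
    $name = $args[0]
    (New-Object Microsoft.Web.Administration.ServerManager).Sites |
        Where-Object { $_.Name -match $name } |
        ForEach-Object { $_.Stop() }
}
```

Typing `stopsite default` would then stop the Default Web Site, replacing the one-liner we used earlier.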

Adding performance

Some of you may find the title of this blog post as amusing as I do. Throughout my career, I’ve been called into many a meeting asking that I “add” performance to a complete or nearly-complete product. No matter how many scowls I got, I could never resist joking about just adding the IPerformance interface. “And, while you are at it, just add the IScalable one too”, I would quip. (OK, I know some of you are doing searches for these interfaces — don’t bother.) Laugh as hard as I may, I’m embarrassed to say that I am trying to do this very thing now.

A few weeks ago, I started playing around with a small sample that shows off some of the fun features of IIS 7. As I started coding, I added more features to the sample. Scope creep is a no-no, but this was something I was doing on my own time so I didn’t have a problem with it. However, the sample got so large that I had to actually stop and design a public API to support it. By the time I was done, my “sample” had a set of providers, a common API, localization support, utilities, and most notably, performance issues.

Last year on my personal blog, I talked about “engineering for usability”. In that post, I declared that the simplest design is sometimes (and probably most often) the best approach. Even so, performance should have been considered very early on, and the sample should have been kept simple. Performance, as many of you know, is not something you add as an afterthought. Starting my sample the way I did, I never considered code performance to be an issue.

Now I’m forced to decide: do I scale back my sample knowing full well that it doesn’t have a security model or provide easy extensibility, or do I redesign the sample with the current feature set and design for performance up front? I’m leaning toward the latter despite reciting “keep it simple, stupid” in my head over and over again.

Where can I find … in IIS 7.0?

Question: Where can I find Windows Authentication in IIS 7.0?

I’ve seen this question or questions like this asked numerous times so I thought it would make a nice quick — and hopefully useful — blog post.

In previous versions of IIS, administrators were able to enable or disable features on their servers by simply checking or unchecking a box in the IIS management console.  However, when an administrator unchecked a feature in IIS, the feature still existed on the machine and was even loaded.  Unchecking a box allowed the administrator to disable a feature — but it didn’t physically remove anything from the machine.

This all changes with IIS 7.0. Currently, when you install IIS 7.0 on Windows Vista (or Longhorn Server, for that matter), you must select the features that you want installed. Each feature in IIS 7.0 is componentized into its own module library. The features you select at installation, and ONLY the features you select, will be copied to disk under the “%windir%\system32\inetsrv” directory. If you do not install a feature, it simply won’t exist on your machine. This obviously makes the server more robust and performant at the same time, providing a high degree of customization as well as security.

Another thing to note is that once a feature module has been installed to the proper directory, it must also be added to the IIS configuration system. For more information on how to install and uninstall modules, see: Getting Started with Modules.

If you are having a hard time getting WindowsAuthentication, DynamicCompression, or some other feature of IIS to work, a good place to start looking is your installation. Make sure you’ve installed the features you need for your purposes.
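As a first sanity check, you can simply look for the feature’s module binary on disk. A sketch, assuming the default install location and that the Windows authentication module lives in authsspi.dll (check your own inetsrv directory for the exact file names on your build):

```powershell
# Returns True if the Windows authentication module binary is on disk
test-path (join-path $env:windir "system32\inetsrv\authsspi.dll")
```

If this returns False, the feature was never installed, and no amount of configuration will make it work.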

Steve Wozniak … at Microsoft?

I know it may sound very strange coming from a Microsoft employee, but last Friday I found myself in awe, sitting directly in front of Steve Wozniak while he gave a presentation … on the Microsoft campus.

I hung on the man’s every word as he tried to fit his story into the short time he had here. The same spirit that drove me into this industry at the ripe old age of 9 is what drove him into the engineering field very early in life. For those of you that don’t know, Mr. Wozniak is largely known for co-founding Apple Computer with Steve Jobs. It is amazing that Steve came to the Microsoft campus to talk to us, and even better that he didn’t pull any punches in his talk. He had a few nicely phrased jabs at Microsoft that were, in my opinion, probably deserved. I won’t repeat them, but you can get a general sense of what Steve thinks about Microsoft by looking at his blog. He’s a very gracious guy, though, and it was very cool to meet him. What was even more amazing was how many people from Microsoft crammed themselves into this small room to listen to him. There wasn’t a single piece of carpet left to stand or sit on.

I’ve run into several celebrities and “big names” in my life, but I’m never really star-struck until I meet a geek with a well-deserved reputation. This was definitely one of those times. I took some pictures of the event, and once I get a chance to pull them off of my camera, I’ll post them here.

Thanks for stopping by Microsoft, Mr Wozniak. It was an honor to meet you.

Unsafe thread safety

As I stated in my last post, for the past two days I’ve been sitting in Jeffrey Richter’s threading class. The class is near the end and I can’t say that a lot of new concepts have been taught. Another student and I have decided that the class should have been renamed, “Threading Basics”. That’s not to say anything of Richter’s teaching skill or the content of the class.  It just goes to show that if the architecture of a product is right, the threading code should be extremely simple to use.

However, what I love about this class is that it reminds me how much I enjoy this topic.  The perfect architecture exists only in theory. Additionally, many times the developer has little-to-no ability to push back on a bad architecture. It’s in these cases that you must use your bag of concurrency tricks to work out of the hole the architect(s) put you in.  That sounds easy enough, but what if the architecture included – *gasp* – “less-than-optimal” decisions in the actual .NET Framework classes?  What if those decisions were made in the very methods that are supposed to help you with these synchronization problems?  These problems perplexed me when I first encountered them, and I never really thought to blog about them (I was actually just scared I was doing something wrong).  Taking this class gave me the perfect excuse to bring the topic up.

Take a look at the following pattern in C++:

class CConcurrencySample
{
public:
    CConcurrencySample( ) 
    {   // Initialize a critical section on class construction
        InitializeCriticalSection( &m_criticalSection );
    };
    ~CConcurrencySample( ) 
    {   // Destroy the critical section on class destruction
        DeleteCriticalSection( &m_criticalSection );
    };

    // This method is over-simplified on purpose
    ////////////////////////////////////////////////////////////
    BOOL SafeMethod( )
    {
        // Claim ownership of the critical section
        if( ! TryEnterCriticalSection( &m_criticalSection ) )
        {
            return FALSE; 
        }

        ////////////////////////////////////////////////
        // Thread-safe code goes here
        ////////////////////////////////////////////////

        // Release ownership of the critical section
        LeaveCriticalSection( &m_criticalSection );

        return TRUE;
    };
private:
    CRITICAL_SECTION m_criticalSection;
};

Anyone that’s done any work in threading recognizes this pattern. We’ve declared a class named CConcurrencySample. When any code instantiates an instance of CConcurrencySample, the constructor initializes the private CRITICAL_SECTION member of the object. When any code destroys an instance of CConcurrencySample, the CRITICAL_SECTION is also deleted. With this pattern, we know that all members have access to the critical section and can claim ownership of it at any point. For my purposes, I’ve created a method named SafeMethod that takes the critical section lock when it is called and releases the lock before it leaves the method. The good part about this pattern is that each instance has a way to lock SafeMethod. The downside is that if the client never calls SafeMethod, the CRITICAL_SECTION is still created, initialized, and destroyed without need — uselessly taking up memory (24 bytes on a 32-bit OS) and processor cycles.

The CLR implements this pattern and even expands on it, while also trying to fix the waste incurred when the critical section goes unused. The CLR implementation works as follows. Each System.Object created by the CLR contains a sync block index (4 bytes on a 32-bit OS) that defaults to -1. If a critical section is entered on the object, the CLR adds a sync block to an array and sets the object’s sync block index to its position in that array. With me so far? So how does one enter a critical section in .NET? Using the Monitor class, of course. The following example emulates the same functionality as the previous C++ sample using C#.

public class CConcurrencySample
{
   bool SafeMethod( )
   {
      // Claim ownership of the critical section
      if( ! Monitor.TryEnter( this ) )
      {
          return false;
      }

      ////////////////////////////////////////////////
      // Thread-safe code goes here
      ////////////////////////////////////////////////
    		
      // Release ownership of the critical section
      Monitor.Exit( this );

      return true;
    }
}

Monitor.Enter and Monitor.Exit are supposed to provide functionality similar to entering and leaving a critical section (respectively). Since every System.Object has its own sync block, we just need to pass our current object to the Monitor.Enter and Monitor.Exit methods. This performs the lock I described earlier by setting the sync block index. Sounds great, but what’s the difference between that C++ example and the C# pattern? What issue can you see in the .NET Framework implementation of this pattern that you don’t see in the C++ sample I provided?

Give up?

The answer is simple. In the C++ sample, the lock object (the CRITICAL_SECTION field) is private. This means that no external clients can lock my object; my object controls the locks. In the .NET implementation, Monitor.Enter can take ANY object, so ANY caller can lock on ANY object. External clients locking on your object’s sync block can cause a deadlock.

    public class CConcurrencySample2
    {
        private readonly Object m_lock = new Object( );

        bool SafeMethod ( )
        {
            // Creating the lock object up front avoids a race: lazily
            // initializing it inside this method could let two threads
            // each create (and lock on) a different object.
            if ( ! Monitor.TryEnter( m_lock ) )
            {
                return false;
            }

            ////////////////////////////////////////////////
            // Thread-safe code goes here
            ////////////////////////////////////////////////

            Monitor.Exit( m_lock );

            return true;
        }
    }

With this approach, we declare a plain, privately-held object in the class and lock on it instead of on the instance itself. Be careful when using this approach that you don’t use a value type for your private lock. A value type passed to a Monitor method will be boxed each time the method is called, and each boxed copy gets its own sync block — effectively making the lock useless.

So what does this have to do with IIS? Nothing just yet. But I plan to cover some asynchronous web execution patterns in future posts and I figured this would be a great place to start.

This information is pretty old, but sitting in class I realized it might not be completely obvious to everyone just yet. If you were someone in-the-know about this information since WAY back in 2002, please forgive the repetition.

Back in business…

I haven’t blogged for some time now.  This has been due in large part to a heavy workload, close deadlines, and the fact that I was alone in that workload.  Over the past few weeks, I’ve been able to get my head above water.  While our open position on the team is still “open”, we’ve filled our contractor position. Not only have we “filled” it, we’ve actually brought back one of our old contractors, who is more than capable.  He is definitely helping to alleviate my workload already.  I’ve finished my Vista RTM handoffs, and that has taken off some more pressure.  I’ve also completed my first review at Microsoft and, while I definitely see much room for improvement in the process, I was pretty pleased with the outcome.

All three of these events have helped me free up time to start blogging again.  In fact, this newfound freedom has given me some time to start taking some classes at Microsoft. As we speak, I’m typing this blog post up during a break in a class on managed code threading.  Those of you that know me may be saying, “Didn’t you write books on threading? Why would you sit in a class on that very topic?”  Well, I’m attending for two reasons. The first is that the class is being taught by Jeffrey Richter.  No matter how much you think you know about anything, I guarantee you that Jeffrey Richter can make you feel like a “n00b”. OK, there may be a small percentage of you out there that know more about obscure printer driver hacks, but even there, I’d defer to Mr. Richter.  If you ever get a chance to sit in on one of Wintellect‘s classes, I recommend you take advantage of that opportunity.  If you can’t afford it, I’d recommend you read the many books published by Wintellect employees.  The second reason I’m sitting in this class is that I think threading is increasingly important. When I co-authored my first book on this topic, I believed that the multi-core and multi-processor industry would grow by leaps and bounds, making threading knowledge extremely valuable.  This is proving true as Intel has just announced that they will have 80-core processors by 2011.  If you don’t know how to use multi-threading techniques PROPERLY, I highly suggest you start learning.  Despite my involvement in three books on the topic of threading, Richter’s class, in my opinion, is one of the best ways to get solid, current multi-threading advice today.

I hope you’ll forgive the silence on my blog from the past few months.  I also hope you’ll come back often and trust me to provide you with some relevant articles on a more regular basis.

Accessing IIS 7 APIs through PowerShell (Part I)

I’ve caught the PowerShell bug. In between stints with my ever-expanding code samples, I play with PowerShell a lot.  I thought I’d share a quick example of how to load Microsoft.Web.Administration.dll and use it to perform some basic tasks.

Note: I’m running these samples on Windows Vista RTM, but I have no reason to believe this will not work with the PowerShell release candidates that are available now for the Vista RC* builds.

So let’s get started.

First, PowerShell has no idea where Microsoft.Web.Administration.dll is, so you have to tell it how to load the assembly. Anyone who has written code to dynamically load an assembly should be familiar with this syntax.  Type the following command:

PS C:\> [System.Reflection.Assembly]::LoadFrom( "C:\windows\system32\inetsrv\Microsoft.Web.Administration.dll" )

The path to your assembly may change depending on your install.  I’ll show you later how to use environment variables to calculate the correct path.  In the meantime, the output of the line above should display something like the following:

GAC  Version    Location
---  -------    --------
True v2.0.50727 C:\Windows\assembly\GAC_MSIL\Microsoft.Web.Administration\7.0.0.0__31bf3856ad364e35\Microsoft . . .
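Here is the environment-variable approach mentioned above, as a sketch: compute the path from %windir% so the command works regardless of which drive Windows is installed on.

```powershell
# Build the assembly path from the WINDIR environment variable
# instead of hard-coding the drive letter
$mwa = join-path $env:windir "system32\inetsrv\Microsoft.Web.Administration.dll"
[System.Reflection.Assembly]::LoadFrom( $mwa )
```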

Once the assembly is loaded you can use PowerShell’s “New-Object” command to create a ServerManager object that is defined in Microsoft.Web.Administration.

PS C:\> (New-Object Microsoft.Web.Administration.ServerManager)

This doesn’t give you much except the list of properties the ServerManager exposes:

ApplicationDefaults      : Microsoft.Web.Administration.ApplicationDefaults
ApplicationPoolDefaults  :
ApplicationPools         :
SiteDefaults             : Microsoft.Web.Administration.SiteDefaults
Sites                    : {Default Web Site}
VirtualDirectoryDefaults : Microsoft.Web.Administration.VirtualDirectoryDefaults
WorkerProcesses          : {}

To get more detail, you need to use the properties and methods of the ServerManager object to drill down and get the information we want. The ServerManager provides access to all of the sites on your machine through a SiteCollection object. This SiteCollection is made available through the “Sites” property of the ServerManager. 

PS C:\> (New-Object Microsoft.Web.Administration.ServerManager).Sites

Which will produce a list view of all the sites and their associated property names/values.

ApplicationDefaults        : Microsoft.Web.Administration.ApplicationDefaults
Applications               : {DefaultAppPool, Classic .NET AppPool}
Bindings                   : {}
Id                         : 1
Limits                     : Microsoft.Web.Administration.SiteLimits
LogFile                    : Microsoft.Web.Administration.SiteLogFile
Name                       : Default Web Site
ServerAutoStart            : True
State                      : Started
TraceFailedRequestsLogging : Microsoft.Web.Administration.SiteTraceFailedRequestsLogging
VirtualDirectoryDefaults   : Microsoft.Web.Administration.VirtualDirectoryDefaults
ElementTagName             : site
IsLocallyStored            : True
RawAttributes              : {name, id, serverAutoStart}
ApplicationDefaults        : Microsoft.Web.Administration.ApplicationDefaults
Applications               : {DefaultAppPool}
Bindings                   : {}
Id                         : 2
Limits                     : Microsoft.Web.Administration.SiteLimits
LogFile                    : Microsoft.Web.Administration.SiteLogFile
Name                       : Test Web Site 1
ServerAutoStart            : False
State                      : Stopped
TraceFailedRequestsLogging : Microsoft.Web.Administration.SiteTraceFailedRequestsLogging
VirtualDirectoryDefaults   : Microsoft.Web.Administration.VirtualDirectoryDefaults
ElementTagName             : site
IsLocallyStored            : True
RawAttributes              : {name, id, serverAutoStart}

Of course, this isn’t the easiest view to read, so let’s just list the site names by piping our site list to the “ForEach-Object” cmdlet:

PS C:\> (New-Object Microsoft.Web.Administration.ServerManager).Sites | ForEach-Object {$_.Name}

This looks much more concise:

Default Web Site
Test Web Site 1

We could also use the Select-Object syntax to query the list into a table format:

PS C:\> (New-Object Microsoft.Web.Administration.ServerManager).Sites | Select Id, Name
        Id Name
        -- ----
         1 Default Web Site
         2 Test Web Site 1

Now let’s use PowerShell to manage application pools. We can fit several commands on one line by using semi-colons.  The following command line is actually four different operations: storing the application pool collection in a variable, displaying the name and runtime status of the first application pool, stopping the first application pool, and then displaying the name and status again.

PS C:\> $pools=(New-Object Microsoft.Web.Administration.ServerManager).ApplicationPools; $pools.Item(0) | Select Name, State; $pools.Item(0).Stop(); $pools.Item(0) | Select Name, State

Running this sample should display the following:

Name                                         State
----                                         -----
DefaultAppPool                               Started
Stopped
DefaultAppPool                               Stopped

This is nice, but we can do this already with appcmd.exe, right? Well, to some extent.  With appcmd.exe we don’t get the PowerShell features that let us format the output data to our liking. Also, as a developer, I find it much easier to use the API syntax I’m already familiar with than to remember appcmd.exe syntax.  Furthermore, PowerShell allows us to use WMI alongside our managed-code calls, and unlike appcmd.exe, PowerShell and its cmdlets can be extended. PowerShell also gives you the ability to easily manage multiple servers from one command prompt on one machine.  Watch the PowerShell/IIS 7 interview on Channel9 if you want to see this remote administration in action.
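As a quick sketch of mixing WMI into the same session, here’s how you might list sites through the IIS 7.0 WMI provider instead of the managed API (this assumes the WMI provider is installed; it exposes the root\WebAdministration namespace):

PS C:\> Get-WmiObject -Namespace "root\WebAdministration" -Class Site | ForEach-Object {$_.Name}

Because both the WMI objects and the Microsoft.Web.Administration objects are just .NET objects to PowerShell, you can pipe either one through the same Select and ForEach-Object commands shown above.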

One last thing that PowerShell brings to the table is the ability to “spot-weld” our object models (as Scott Hanselman quipped). You can create, modify, and extend type data and formatting to your heart’s desire.  For more information on this, check out the documentation installed with PowerShell.
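As a hedged example of that spot-welding, you could bolt a computed property onto each site object with Add-Member (the AppCount property name here is just something I made up for illustration):

PS C:\> (New-Object Microsoft.Web.Administration.ServerManager).Sites | ForEach-Object { Add-Member -InputObject $_ -MemberType ScriptProperty -Name AppCount -Value {$this.Applications.Count} -PassThru } | Select Name, AppCount

Each site now reports how many applications it hosts, right alongside its name, even though the Microsoft.Web.Administration developers never defined such a property.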

So, I would be remiss in this post if I didn’t try to make your PowerShell / IIS 7.0 life easier.  As such, I’ve created a profile script that loads all of the IIS 7.0 managed assemblies for you.  The script is simple and contains more echo commands than actual working script lines.

To install this script run the following command inside PowerShell:

PS C:\> if ( test-path $profile ) { echo "Path exists." } else { new-item -path $profile -itemtype file -force }; notepad $profile

This will create a profile file for you if you don’t already have one, then open your profile in Notepad.  If you haven’t added anything to the file yet, it will display an empty file.  Paste the following into Notepad when it opens:

echo "Microsoft IIS 7.0 Environment Loader"
echo "Copyright (C) 2006 Microsoft Corporation. All rights reserved."
echo "  Loading IIS 7.0 Managed Assemblies"

$inetsrvDir = (join-path -path $env:windir -childPath "system32\inetsrv")
Get-ChildItem -Path (join-path -path $inetsrvDir -childPath "Microsoft*.dll") | ForEach-Object {[System.Reflection.Assembly]::LoadFrom( (join-path -path $inetsrvDir -childPath $_.Name)) }

echo "  Assemblies loaded."

Now, save the profile and close Notepad.  You will likely have to sign this script, or change your script execution policy to something very weak, to make this script run properly (obviously I’m not recommending the latter). To find out more about signing scripts, type “get-help about_signing” in PowerShell. The instructions in that help topic for creating a self-signed certificate are as follows:

In an SDK Command Prompt window, run the following commands. The first command creates a local certificate authority for your computer, and the second generates a personal certificate from that certificate authority:

makecert -n "CN=PowerShell Local Certificate Root" -a sha1 `
   -eku 1.3.6.1.5.5.7.3.3 -r -sv root.pvk root.cer `
   -ss Root -sr localMachine
makecert -pe -n "CN=PowerShell User" -ss MY -a sha1 `
   -eku 1.3.6.1.5.5.7.3.3 -iv root.pvk -ic root.cer

MakeCert will prompt you for a private key password.

Go ahead and make a certificate for yourself following those instructions. Then, to sign your profile, type the following within PowerShell:

PS C:\> Set-AuthenticodeSignature $profile @(Get-ChildItem cert:\CurrentUser\My -codesigning)[0]

So far, you’ve created a certificate and signed your script. Now you’ll have to lower your script execution policy at least one level from the default, which doesn’t allow scripts at all.  At minimum, you’ll have to set it to “AllSigned” to allow only signed scripts to execute.  In this mode, each time you execute a script from a new publisher, you’ll be asked what level of trust to assign to that publisher (unless you respond to the prompt with “Always Run” or “Never Run” for that publisher):

Do you want to run software from this untrusted publisher?
File C:\Users\TobinT\Documents\WindowsPowerShell\Microsoft.PowerShell_profile.ps1 is published by CN=PowerShell User
and is not trusted on your system. Only run scripts from trusted publishers.
[V] Never run  [D] Do not run  [R] Run once  [A] Always run  [?] Help (default is "D"):
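If you haven’t changed the policy yet, the switch itself is a one-liner, and Get-ExecutionPolicy confirms the change took effect (Set-ExecutionPolicy writes to the registry under HKLM, so on Vista you’ll need to run PowerShell elevated):

PS C:\> Set-ExecutionPolicy AllSigned
PS C:\> Get-ExecutionPolicy
AllSigned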

Now, each new instance of PowerShell that you run will automatically load the IIS 7.0 managed assemblies.  I know it seems like a great deal of work, but it really isn’t once you’ve made a few rounds inside PowerShell. Consider that you only have to create the script once, and then you have the full range of the managed IIS 7.0 SDK at your fingertips inside PowerShell.

If you have problems, feel free to leave comments and I’ll do my best to help you.

Why am I smiling?

I moved to Redmond just over four months ago.  In the time I have been here, my rental car was side-swiped, my truck was broken into, my headlight and bumper were damaged by someone in our own parking garage, and someone stole my copy of “Professional Visual C++/CLI” from my office today (clearly someone missed the “corporate values” talk at New Employee Orientation). My relocation to the great Pacific Northwest has been less than smooth or swift; I’m still waiting for the insurance company to assess and pay my claim for the furniture the movers damaged.  I’m the only PW in my group, and we’ve been unable to find anyone who can fill the shoes for our open position. I have deadlines looming, with tons of work to do and not enough time to do it all by myself.  Bill Gates has announced he is reducing his role here, both Windows and Office have announced schedule changes, and Microsoft’s stock has dropped over four dollars since I arrived on campus.

So why am I smiling?

In four months’ time I’ve learned so much.  I’ve been able to look at the technologies we will implement in the future before most people even know they are in the pipeline. I sit in on meetings and get to give real feedback that can influence products used by more people than I could have ever imagined.  I have taken over ownership of an internal tool our team uses and have written a few of my own.  I’ve gathered customer feedback and helped several customers personally, or put them in touch with others who could help them. The amount of responsibility piled on me is less a burden and more a compliment, in my opinion.  Who puts that amount of pressure on someone they feel can’t handle it?

Apart from all the benefits provided by Microsoft there are other reasons I’m happy to be working here. I’m in a technological heaven.  The people are brilliant and open-minded (except when it comes to “Red State” ideas, but give me time — I’m still working on it).  I pass those same brilliant people in the halls every day.  If I have a question about something, I can go hit our Global Address Book and track down the person who owns the feature to discuss the matter with them personally. 

I also get to see the company make huge changes in the way it delivers software.  With the industry changing so quickly, it’s awesome to see a company of this size roll with the punches and adapt.

It’s hard to explain why I’m so happy to work here. The only thing I can say is that you can tell that the majority of people working here love working here and finding new ways to make customers happy.  That reason alone is enough to make me love working at Microsoft.

Enjoy the weekend!