Category Archives: Technical

These posts are technical in nature. The casual reader may not care about these posts. These are for my fellow geeks.

Fiddler Script for Accept-Language Testing

It’s been a while since I’ve blogged about anything so it’s only fitting that I post about doing something fairly trivial. I say it’s trivial because Fiddler makes my job so gosh-darn easy!

I often find myself testing that web sites work in various other languages. In many cases, changing the Accept-Language header for a site will do the trick, and luckily Fiddler’s custom rules let you do exactly that. Out of the box, the software offers Japanese as an option under the Rules menu.

That said, simply testing one language isn’t usually enough. I’d find myself hitting CTRL+R to open the CustomRules.js file, editing the Accept-Language and waiting for another day before I’d need to do this again.

I got tired of this today and removed the Japanese-only option from Fiddler entirely. I replaced this code:

    // Cause Fiddler to override the Accept-Language header 
    public static RulesOption("Request &Japanese Content")
    var m_Japanese: boolean = false;

With this code:

    // Cause Fiddler to override the Accept-Language header with one of the defined values
    public static RulesString("&Accept-Languages", true) 
    RulesStringValue(0,"None", "")
    RulesStringValue(1,"German", "de")
    RulesStringValue(2,"English", "en")
    RulesStringValue(3,"Spanish", "es")
    RulesStringValue(4,"French", "fr")
    RulesStringValue(5,"Italian", "it")
    RulesStringValue(6,"Japanese", "ja")
    RulesStringValue(7,"Korean", "ko")
    RulesStringValue(8,"Dutch ", "nl")
    RulesStringValue(9,"Polish", "pl")
    RulesStringValue(10,"Portuguese (Brazil)", "pt-br")
    RulesStringValue(11,"Russian", "ru")
    RulesStringValue(12,"Swedish ", "sv")
    RulesStringValue(13,"Chinese (PRC)", "zh-cn")
    RulesStringValue(14,"Chinese (Taiwan)", "zh-tw")
    RulesStringValue(15,"en-us, fr-fr", "en-us, fr-fr")
    RulesStringValue(16,"ru-mo, de-at, zh-tw, fr", "ru-mo;q=1,de-at;q=0.9,zh-tw;q=0.8,fr;q=0.6")
    RulesStringValue(17,"&Custom...", "%CUSTOM%")
    public static var sAL: String = null;

And later in the file, I replaced this code:

if (m_Japanese) {
    oSession.oRequest["Accept-Language"] = "ja";
}

with this code:

// Accept-Language Overrides
if (null != sAL) {
    oSession.oRequest["Accept-Language"] = sAL;
}

Now every time I open Fiddler, I can just select the language I want to use.
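As an aside, the q-values in that last custom entry weight languages by relative preference. This isn't Fiddler script, just a hedged sketch of how a receiving server might rank the languages in such a header (`parseAcceptLanguage` is a made-up name, not a real API):

```javascript
// Parse an Accept-Language header into {tag, q} pairs and sort them by
// preference. A missing q-value defaults to 1 per the HTTP spec.
function parseAcceptLanguage(header) {
  return header.split(',')
    .map(function (part) {
      var pieces = part.trim().split(';q=');
      return {
        tag: pieces[0].trim(),
        q: pieces.length > 1 ? parseFloat(pieces[1]) : 1
      };
    })
    .sort(function (a, b) { return b.q - a.q; });
}
```

Feeding it the custom value from the rules above (`"ru-mo;q=1,de-at;q=0.9,zh-tw;q=0.8,fr;q=0.6"`) ranks `ru-mo` first and `fr` last, which is exactly what the server will see.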


Kinect for Windows SDK

I read the other day that we released a new version of the Kinect for Windows SDK that is supported on Windows 8. As busy as I’ve been working on IE10, I’ve been a bit of a slacker when it comes to trying new things, so I decided it was time to dive in and take a quick peek at what I could do with Kinect.

In a matter of a few minutes, I managed to play around a bit with the depth image sensors and the color image sensors. I also mucked around a bit with the elevation and accelerometer data. I’m not going to get fancy with this post so hang onto your hat and follow along. If you’re the one random guy that stumbles on this post and actually reads it and has questions, feel free to ask. I mean, I won’t know how to answer it, but you can always ask!

Here goes.


There isn’t much sense in doing anything if you haven’t purchased a Kinect for Windows device already. I am very specific in saying Kinect for Windows: while there are folks out there who will show you how to use a Kinect for Xbox 360 instead, I wouldn’t recommend it. Go get yourself one of these devices and we’ll all wait patiently until you come back.

[tapping foot]


Download and Install the Kinect for Windows SDK. Make sure you get the latest one from October 8th, 2012.

Plug your device into your computer and make sure to plug the power into the device. Plug and play is your friend here as there is nothing to install to make the device available.

Start Coding

Setting up your Form

  1. Open up Visual Studio 2012
  2. Go to File | New Project
  3. In the dialog, select C# | Windows | Windows Forms Application (obviously you can do this in other languages, but I’m using C# here)
  4. Right-click the References tree node in Solution Explorer and select Add Reference
  5. Press Ctrl+E, type “Kinect”, check the “Microsoft.Kinect” assembly and click OK
  6. Add two PictureBox controls and a TrackBar control to Form1
  7. Set the TrackBar Maximum to 27 and the Minimum to -27. These are the range values for the camera elevation angle.
  8. Add a Timer control to the Form and set the interval to 1000ms (1 second)
  9. Double click Form1 to go to Code view


Add a couple of using statements at the top of the form:

using System.Drawing.Imaging;
using Microsoft.Kinect;
using System.Runtime.InteropServices;

Define a variable to reference the Kinect sensor inside the Form class. I also added some variables to hold my sensor data:

public partial class Form1 : Form
{
    KinectSensor sensor;

    // Depth sensor readings
    int depthStreamHeight = 0;
    int depthStreamWidth = 0;
    int depthStreamFramePixelDataLength = 0;

    // Color sensor readings
    int colorStreamHeight = 0;
    int colorStreamWidth = 0;
    int colorStreamFrameBytesPerPixel = 0;
    int colorStreamFramePixelDataLength = 0;

Set the reference to your sensor. We’ll be lazy here and just reference the first in the array. Then make sure to enable the depth stream and color stream. Here’s the code inside my form load handler:

private void Form1_Load(object sender, EventArgs e)
{
    sensor = KinectSensor.KinectSensors[0];
    sensor.DepthStream.Enable();
    sensor.ColorStream.Enable();
    sensor.DepthFrameReady += sensor_DepthFrameReady;
    sensor.ColorFrameReady += sensor_ColorFrameReady;
    sensor.Start();
}

You’ll notice that I also hooked up two event handlers here. These handlers fire when a frame is ready for rendering or other processing. In our case, we’re just going to muck around with the data and dump it into the picture boxes. After we’ve set up our handlers, we go ahead and call Start() on the sensor.

So now we need to create our handlers. In our handlers, we are going to:

  1. Attempt to open the frame
  2. Read the stream data for height, width, pixel length and bytes per pixel
  3. Convert the raw data to a bitmap
  4. Assign that bitmap to the picture control

Here’s my code for the sensor_DepthFrameReady handler:

private void sensor_DepthFrameReady(object sender,
    DepthImageFrameReadyEventArgs e)
{
    DepthImageFrame imageFrame = e.OpenDepthImageFrame();
    if (imageFrame != null)
    {
        depthStreamHeight = sensor.DepthStream.FrameHeight;
        depthStreamWidth = sensor.DepthStream.FrameWidth;
        depthStreamFramePixelDataLength = sensor.DepthStream.FramePixelDataLength;
        pictureBox2.Image = GetDepthImage(imageFrame);
    }
}

The GetDepthImage function is doing the conversion work for us. This is the code that does the conversion from the raw image frame data to a bitmap (btw, if you know an easier way to do this, please chime in. I’m not a graphics dev).

Bitmap GetDepthImage(DepthImageFrame frame)
{
    // create a buffer and copy the frame to it
    short[] pixels = new short[sensor.DepthStream.FramePixelDataLength];
    frame.CopyPixelDataTo(pixels);

    // create a bitmap as big as the stream data. Note the pixel format.
    Bitmap bmp = new Bitmap(depthStreamWidth, depthStreamHeight,
            PixelFormat.Format16bppRgb555);

    // Create a BitmapData object, size it appropriately, and lock the data
    BitmapData bmpdata = bmp.LockBits(
            new Rectangle(0, 0, depthStreamWidth, depthStreamHeight),
            ImageLockMode.WriteOnly, bmp.PixelFormat);

    // Wave your magic wand: copy the data and unlock the bitmap
    IntPtr ptr = bmpdata.Scan0;
    Marshal.Copy(pixels, 0, ptr, depthStreamWidth * depthStreamHeight);
    bmp.UnlockBits(bmpdata);

    // return the bitmap
    return bmp;
}

It’s important to note the pixel format in this function because, when we create a similar function for the color frame, we will use a different format. You’ll want to make sure you use the right format depending on what data you are getting.

Here’s my code for the sensor_ColorFrameReady handler:

void sensor_ColorFrameReady(object sender, ColorImageFrameReadyEventArgs e)
{
    ColorImageFrame imageFrame = e.OpenColorImageFrame();
    if (imageFrame != null)
    {
        colorStreamHeight = sensor.ColorStream.FrameHeight;
        colorStreamWidth = sensor.ColorStream.FrameWidth;
        colorStreamFramePixelDataLength = sensor.ColorStream.FramePixelDataLength;
        colorStreamFrameBytesPerPixel = sensor.ColorStream.FrameBytesPerPixel;
        pictureBox1.Image = GetColorImage(imageFrame);
    }
}

Here’s the code for the GetColorImage function:

Bitmap GetColorImage(ColorImageFrame frame)
{
    // Create a buffer and stuff the frame pixels into it
    byte[] pixels = new byte[colorStreamFramePixelDataLength];
    frame.CopyPixelDataTo(pixels);

    // Create your bitmap. Notice the PixelFormat change!
    Bitmap bmp = new Bitmap(
            colorStreamWidth, colorStreamHeight,
            PixelFormat.Format32bppRgb);

    // Create a BitmapData object and lock
    BitmapData bmpdata = bmp.LockBits(
            new Rectangle(0, 0, colorStreamWidth, colorStreamHeight),
            ImageLockMode.WriteOnly, bmp.PixelFormat);

    // Wave your magic wand again and copy the data. Notice this time
    // we multiply by the bytes per pixel? This is important to get the right
    // data size copied.
    IntPtr ptr = bmpdata.Scan0;
    Marshal.Copy(pixels, 0, ptr,
            colorStreamWidth * colorStreamHeight * colorStreamFrameBytesPerPixel);
    bmp.UnlockBits(bmpdata);

    // Return the bitmap
    return bmp;
}

If all went well, you should now be able to run your application and see something like the following screenshot:

Kinect For Windows Sample - ScreenShot


Next, we want to add some control over the elevation of the Kinect. Of course, the Kinect has physical limits, so it can only move so far. In this post, we’ll just work with the X component of the motion, using the TrackBar control we added to the form at the beginning of this post.

First, a little caveat. We don’t want to move the elevation of the Kinect very often. In fact, the API documentation for Kinect warns that moving it too often will wear out the tilt mechanism. For that reason, we don’t want to move the Kinect on every event received from changing the TrackBar. Instead, we’ll update a variable when the TrackBar changes, and use a timer that fires every second (added earlier in the project) to do the actual validation of the data and updating of the elevation.

So, add the following declarations to your form:
public partial class Form1 : Form
{
    KinectSensor sensor;
    int lastElevation = 0;
    int elevation = 0;

At the very bottom of your Form_Load event, tell the timer to start:

timer1.Start();


Now, create an event handler for your trackBar’s ValueChanged event as follows:

private void trackBar1_ValueChanged(object sender, EventArgs e) {
    var value = trackBar1.Value;
    var reading = sensor.AccelerometerGetCurrentReading().X;
    if (value > (sensor.MaxElevationAngle + reading)) {
        value = sensor.MaxElevationAngle;
    } else if (value < (sensor.MinElevationAngle + reading)) {
        value = sensor.MinElevationAngle;
    }
    elevation = value;
}

What we are doing here is first reading the current value of the accelerometer’s X component. We do this because it can have an effect on how far the sensor can move. Next, we validate that the requested angle, adjusted by the reading, isn’t greater than the maximum or less than the minimum values allowed (currently 27 and -27, respectively). When we change the trackBar, it updates our elevation variable. Now we need to do something with that variable.

Create a Tick event handler for the timer control you added. Add the following code:

private void timer1_Tick(object sender, EventArgs e)
{
    if (lastElevation != elevation)
    {
        sensor.ElevationAngle = elevation;
        lastElevation = elevation;
    }
}

Again, we are just checking whether the last elevation is different from the current one. If it is, we go ahead and move the sensor and record the new position.

All this should get you to a state where you can change the sensor elevation.

There’s a lot more fun to be had, but it took me longer to write this post than it did to write the code. I suspect you can take it from here faster than you could read my post. Happy coding!

Cross-browser event hookup made easy

I seem to be on a roll. I’m coding and might as well be sharing some of what I’m working on while I do it. In that sense, here goes another small bit of code that I’ll be adding to jsSimple on github.

We all know how to hook up events cross-browser. We have both attachEvent and addEventListener, each with its own syntax, and we’ve all written this code a million times. Let’s just make it simple. Here’s how to call this:

Event.add(element, 'click', callback, false);
// .. 
Event.remove(element, 'click', callback, false);

The remove call looks like you’re passing around the same data, so I added a return value to “add” that captures the event details in a single object you can later pass to the remove function as follows:

var evt = Event.add(element, 'click', callback, false);
// .. 
Event.remove(evt);
So, without further delay, here is the object definition:

/*!
 * jsSimple v0.0.1
 * Copyright (c) Tobin Titus
 * Available under the BSD and MIT licenses:
 */
var EventObj = function (target, type, listener, capture) {
  return {
    'target' : target,
    'type' : type,
    'listener' : listener,
    'capture' : capture
  };
};

var Event = (function() {
  'use strict';
  function add (target, type, listener, capture) {
    capture = capture || false;
    if (target.addEventListener) {
      target.addEventListener(type, listener, capture);
    } else if (target.attachEvent) {
      target.attachEvent("on" + type, listener);
    }
    return new EventObj(target, type, listener, capture);
  }

  function remove(object, type, listener, capture) {
    var target;
    if (object && !type && !listener && !capture) {
      type = object.type;
      listener = object.listener;
      capture = object.capture;
      target = object.target;
    } else {
      capture = capture || false;
      target = object;
    }
    if (target.removeEventListener) {
      target.removeEventListener(type, listener, capture);
    } else if (target.detachEvent) {
      target.detachEvent("on" + type, listener);
    }
  }

  return {
    'add' : add,
    'remove' : remove
  };
}());

You can see this all in action on jsFiddle.

JavaScript animation timing made easy (requestAnimationFrame, cancelRequestAnimationFrame)

I often find myself writing a lot of the same code from project to project. One of those pieces of code is around using requestAnimationFrame and cancelRequestAnimationFrame. I wrote a quick and dirty animation object that allows me to register all of my rendering functions, start the animation and cancel it. Here’s an example of how to use this object:

function render () {
   // Do your rendering work here
}

Animation.add(render);
Animation.start();
You can register several rendering routines before Animation has started, and even after it’s running you can add new ones on the fly. This is useful in a number of instances where objects have their own render routines and you want to add them at different times.

I put in a few workarounds. The typical hack to fall back to setTimeout/clearTimeout is in there. There is also a workaround for the missing mozCancelRequestAnimationFrame and for the fact that mozRequestAnimationFrame doesn’t return an integer per the spec.

So here’s the code for the Animation object:

/*!
 * jsSimple v0.0.1
 * Copyright (c) Tobin Titus
 * Available under the BSD and MIT licenses:
 */
var Animation = (function() {
  "use strict";
  var interval, moz, renderers = [];

  function __mozFix (val) {
    /* Mozilla fails to return an integer per the specification */
    if (!val) {
      moz = true;
      val = Math.ceil(Math.random() * 10000);
    }
    return val;
  }

  function __renderAll() {
    var r, length;

    // execute all renderers
    length = renderers.length;
    for (r = 0; r < length; ++r) {
      renderers[r]();
    }

    // keep going as long as we haven't been stopped
    if (interval) {
      interval = __mozFix(window.requestAnimationFrame(__renderAll));
    }
  }

  function __registerAnimationEventHandlers() {
    var pre, prefixes = ['webkit', 'moz', 'o', 'ms'];
    // check vendor prefixes
    for (pre = prefixes.length - 1;
         pre >= 0 && !window.requestAnimationFrame; --pre) {
      window.requestAnimationFrame = window[prefixes[pre]
            + 'RequestAnimationFrame'];
      window.cancelAnimationFrame = window[prefixes[pre]
            + 'CancelAnimationFrame']
            || window[prefixes[pre] + 'CancelRequestAnimationFrame'];
    }

    // downlevel support via setTimeout / clearTimeout
    if (!window.requestAnimationFrame) {
      window.requestAnimationFrame = function(callback) {
        // could likely be an adaptive timeout
        return window.setTimeout(callback, 16.7);
      };
    }

    if (!window.cancelAnimationFrame) {
      window.cancelAnimationFrame = function(id) {
        window.clearTimeout(id);
      };
    }
  }
  __registerAnimationEventHandlers();

  function add(renderer) {
    if (renderer) {
      renderers.push(renderer);
    }
  }

  function isRunning() {
    return !!interval;
  }

  function start() {
    interval = __mozFix(window.requestAnimationFrame(__renderAll));
  }

  function stop() {
    if (!moz) {
      window.cancelAnimationFrame(interval);
    }
    interval = null;
  }

  return {
    "add": add,
    "isRunning": isRunning,
    "start": start,
    "stop": stop
  };
}());

I’ve put this code on jsFiddle to test out.

I’ve also put this on github as jsSimple. I’ll likely add a few other bits of code to this as I move along.

CSS: Opacity vs Alpha (via rgba and hsla)

Once again, I was working on my new blog theme and ran into the need to set the opacity of a background div without affecting its children. Browser support for opacity is spotty and slightly unpredictable, and setting opacity via CSS affects the children of the transparent element. You can get around this by using an rgba background color and specifying the alpha channel.

background-color: rgba(24, 117, 187, .5)

This specifies a background color of red: 24, green: 117, blue: 187 and an alpha of 50% transparency. I put up an example of this on jsFiddle here:

Here’s a side-by-side example of the differences between opacity and rgba.

Figure demonstrates opacity vs alpha

The div on the left has an opacity of .5 set on its container. For the div on the right, I’ve set the background color with an alpha of .5 using rgba. Notice that the sample on the left has affected the transparency of the child div, but the sample on the right has not.

You can also set colors using hsla (hue, saturation, lightness, alpha).

background-color: hsla(206, 77%, 41%, .5)
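If you’re wondering how that hsla value relates to the earlier rgba(24, 117, 187, .5), the conversion between the two color spaces is mechanical. Here’s a small sketch (rgbToHsl is my own helper, not part of any CSS API):

```javascript
// Convert an RGB triple (0-255 channels) to HSL (degrees, %, %),
// rounding to whole numbers. Grayscale colors get h = 0, s = 0.
function rgbToHsl(r, g, b) {
  r /= 255; g /= 255; b /= 255;
  var max = Math.max(r, g, b), min = Math.min(r, g, b);
  var l = (max + min) / 2;
  var h = 0, s = 0;
  if (max !== min) {
    var d = max - min;
    s = d / (1 - Math.abs(2 * l - 1));
    switch (max) {
      case r: h = 60 * (((g - b) / d) % 6); break;
      case g: h = 60 * ((b - r) / d + 2); break;
      case b: h = 60 * ((r - g) / d + 4); break;
    }
    if (h < 0) { h += 360; }
  }
  return [Math.round(h), Math.round(s * 100), Math.round(l * 100)];
}

// The blue used above: rgb(24, 117, 187) → roughly hsl(206, 77%, 41%)
```

Either notation gives you the same color; the alpha channel works identically in both.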

Of course, this only works in browsers released in the last few years:

  • Safari 3.1 in March 2008
  • Firefox 3 in June 2008
  • Chrome 4.0 in January 2010
  • Internet Explorer 9 in March 2011

Detecting CSS3 Transition Support

I’m in the process of developing a new blog theme for myself. I was trying to use CSS3 transitions, but I wanted to fall back to JavaScript if transitions weren’t supported. I really wanted to avoid Modernizr if I could, so I needed a quick and dirty test for detecting them. Here’s what I came up with:

var Detect = (function () { 
  function cssTransitions () { 
    var div = document.createElement("div"); 
    var p, ext, pre = ["ms", "O", "Webkit", "Moz"]; 
    for (p in pre) {
      if ([pre[p] + "Transition"] !== undefined) {
        ext = pre[p]; 
      }
    }
    div = null; 
    return ext; 
  }
  return { 
    "cssTransitions" : cssTransitions 
  };
}());
It’s simple and does the trick. Simply check the return of Detect.cssTransitions().

if (!Detect.cssTransitions()) {
   // Do JavaScript fallback here
}
You can also get the extension used in the current user-agent back from this method (ms, O, Moz, or Webkit).

Powershell: Selecting files that don’t contain specified content

I had a quick task today that required that I search a couple hundred directories and look in specific files to see if they contained a certain piece of code. All I wanted was the list of files that did not. PowerShell to the rescue!

I’ll break down each of the calls I used and provide the full command-line at the end for anyone interested.

First, we need to recurse through all the files in a specific directory. This is simple enough using Get-ChildItem.

Get-ChildItem -include [paths,and,filenames,go,here] -recurse

Next, I needed to loop through each of those files using ForEach-Object:

ForEach-Object { … }

Next, I need to get the content of each of those files. That’s done using Get-Content:

Get-Content [file]

I then need to be able to determine if the contents of the file contains the string I’m looking for. This is easy enough with Select-String:

Select-String -pattern [pattern]

That’s pretty much all the commands you need to know — we just need to put them together now.

PS> Get-ChildItem -include *.html,*.xhtml -recurse |
    ForEach-Object { if( !( select-string -pattern "pattern" -path $_.FullName) ) 
                     { $_.FullName}}

Community Question: Synching Remote Locations

I’ve been thinking about a problem that I’ve solved before. I’ve solved it a million ways but I’ve never really thought about which approach is best. I’ve decided to request some input on the subject. I’d appreciate your thoughts.

The Scenario

You’re a business with multiple locations and a single website. You want to be able to let customers log into your site and order something from one of your locations, but you want to give the customer immediate confirmation that their order was received by the store location, not just that the website received the order.

That means you only want to let the user know they can rest at ease knowing their order has been received by the people directly responsible for holding/delivering/shipping your product.


The scenario above could describe any number of businesses. This could be a retail business: you may want to make sure the store knows to hold a piece of inventory for a customer who is on their way to get it (think Best Buy’s “in-store pickup”). It could also be something much more loosely connected, even across legal boundaries, as when OpenTable lets you make reservations at any number of restaurants for open time slots.

Solution Approaches

There are obviously a number of ways to solve this problem and the application may very well dictate the solution you pick. There are three main approaches, and within those approaches, there are several technical ways to solve the problem.

Centralized Approach

In the OpenTable application, you could, for instance, require that all reservations be made through OpenTable’s website. In that case, even if someone called a restaurant, the reservation would need to go through the website.

The downside to this is what happens when the internet connection between the restaurant and the website is interrupted. People calling the restaurant wouldn’t be able to make reservations while people on the web still could. Alternatively, the restaurant and the web would fall out of sync, and more reservations would be made than are available.

Distributed/Brokered Approach

This approach says that each location is responsible for its own allocation of inventory/product/time slots. The website simply acts as a broker and sends orders to the location. This means the website must collect the order details and get them to the store location. There are two problems with this approach.

First, the communication problem from the centralized approach is still there. In the reservation scenario, you can’t simply store the request at the server and send it when communications return; the reservation may no longer be available. Ditto the inventory issue: if that elusive Tickle Me Elmo doll is in stock, you want the customer to be able to buy it and secure the inventory for in-store pickup.

The second issue is whether you want a push mechanism from the website to the remote location, or a pull mechanism that polls for details on a regular basis. Because the inventories are dispersed, some synchronization also needs to occur. You need to make sure that if you say an item is in inventory, it is. If it’s purchased at the store, the website needs to reflect that. If it’s purchased or reserved on the website, the item or reservation needs to be updated at the location.

Delegated Approach

There is another approach which is a bit less risky. This approach delegates a certain amount of guaranteed inventory from a remote store location to the web. This lets you treat the web as its own store, with its own inventory, and simply sync with the remote locations as you can. This is a problem for retail locations, however, because you have to maintain two different inventory statuses: in inventory, but allocated for in-store pickup. It’s not as big a deal for things like restaurant reservations because those aren’t physical entities that require inventory shuffling. There are definitely benefits to this approach, but it wouldn’t work for something like, say, grocery delivery.

My Solution

The solution I’m inclined toward is the distributed approach. When someone submits a request, it is queued up and an attempt is made to deliver it to the remote location. When the remote location receives the order, it returns a confirmation, which is then relayed asynchronously to the user (either through polled HTTP requests or other means; Twilio, for instance). If the location cannot be reached, the site simply returns an error to the user with instructions on how to contact the store directly. This is a fairly common approach.
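To make that flow concrete, here’s a rough sketch in JavaScript. Every name in it (createOrderBroker, sendToLocation, submitOrder) is invented for illustration; a real system would sit on a durable queue and a real transport with retries:

```javascript
// A toy broker: queue the order, attempt delivery to the store location,
// and hand the caller either the store's confirmation or instructions to
// contact the store directly.
function createOrderBroker(sendToLocation) {
  var queue = [];
  return {
    submitOrder: function (order, onResult) {
      queue.push(order);
      sendToLocation(order, function (err, confirmation) {
        // done with this order whether delivery succeeded or not
        queue.splice(queue.indexOf(order), 1);
        if (err) {
          onResult({ ok: false,
                     message: "Couldn't reach the store; please call them directly." });
        } else {
          onResult({ ok: true, confirmation: confirmation });
        }
      });
    },
    pendingCount: function () { return queue.length; }
  };
}
```

Here sendToLocation just models the two outcomes (confirmation or failure); the interesting design question remains what it should do in between, such as how long to retry before giving up and falling back to the error path.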

Questions to the Community

So now that I’ve laid out my thinking, I’d like to ask some questions to you all. If you feel so inclined, please feel free to post your responses in the comments. Thanks very much in advance.

  1. Am I missing any approaches above? Is there another approach I haven’t considered?
  2. Are there any downsides that I’m missing?

I’m looking forward to hearing from you.

Taking on New Challenges: Joining the IE Team

How I Started

I’ve been into computers since I was a kid. The story is long, but I essentially started writing software in the 5th grade at North Elementary School in East Liverpool, Ohio. We had a quick introduction to computers. Our teacher discussed high-level and low-level languages and then introduced us to BASIC programming. I wrote my first program:

10 PRINT "TOBY"
20 GOTO 10

(Aside: For those that don’t know, I begrudgingly admit I went by “Toby” back in those days).

My eyes filled with wonder as my name scrolled down the screen. I immediately looked at the next instructions, and the next, and the next, and the next. My dad picked up an ATARI 800XL for the house. I spent many a full day at home working on that computer, painstakingly typing in Microsoft ATARI BASIC and ATARI Assembler instructions from programming books and watching with wonder as the screen flashed exactly what I told it to do and the four-voice, 3.5-octave sound screamed at me. I’ll spare you the details, but those were the beginnings of my fascination with computers. Some years later I had, for a number of reasons, stopped programming on the ATARI. When it was time to go to college, my parents got me a Packard Bell 486 DX2 66 computer. I went to college for Graphic Design, not computer science. However, when I found BASIC on the computer, I suddenly found myself hanging out with the C.S. folks more than my own classmates.

The Wonder of the Web

After three semesters of college, I ran out of cash, and my choice of Graphic Design over computer science made it hard for me to want to go back. I moved to Greenville, SC in 1994 and started playing with my Packard Bell. I mostly used it to talk on BBSs and occasionally AOL when I could afford the time (for those of you that don’t know/remember back that far, you used to get charged by the minute for AOL use). I spent a bit of time writing small desktop applications with my copy of Visual Basic 3, patching software to get it to do what I wanted, or adding photos of myself inside my favorite games to my friends’ amazement. But I also started playing with HTML, something that had not yet been standardized, but that I wanted to be a part of. I’ll never forget my first webpage. It had a picture of Alicia Silverstone on it, the “chick from that Aerosmith video”. The thing I liked about the web is that it combined my love for graphic design with my need for technical understanding.

I’ll again spare you the details and skip forward a few years. I was working for a consulting company called Metro Information Services as a contractor for AIMCO, at the time the largest property management REIT in the country. I worked on several web-based intranet applications in the 1999–2000 timeframe, getting to use new things like XML Data Islands, DHTML behaviors and XMLHttpRequest. This allowed me to create applications that appeared more responsive and did work behind the scenes on users’ behalf. I fell in love with this technology in Internet Explorer and probably overused it. With just HTML and JavaScript, I felt back then that most applications would eventually become web-based, with very little need for ActiveX or Java fillers. I knew that the server was still an important part too, so I continued to work on enterprise middleware and database development. But I knew that the web, both server and client, was where I was meant to work.

Moving to Microsoft

Skipping forward to 2006, I had worked contract after contract, making a lot of money but starting to get bored and burned out. I was going to take a year off work and “play” with the technologies I loved most. Then Microsoft called. They had an opportunity to work with the teams that built ASP.NET and IIS. It seemed like the best opportunity, so I took it and joined as a Programming Writer for IIS. You could summarize this as simply a “documentation writer” position if you didn’t know any better. That said, I came to realize that this job required you to be both a great communicator and a great programmer: you needed to be able to analyze API code as written and describe how to use it without any help. I loved my job when I first started. However, the continued pressure to document every exposed method, no matter how trivial, began to weigh heavily on me. I loved the web, and this seemed like the perfect position for me, until you find yourself writing documentation such as “The IAppHostSectionDefinition::Name Property Gets the name of the current configuration section definition.”

I had a growing desire to fix MSDN. As a developer, there were so many things I felt I could help MSDN with that I thought I couldn’t go wrong. I loved working with the MSDN and TechNet teams. I helped redesign the home page based on customer feedback, the metrics we collected, and the overall direction the company was heading. I started the MSDN News program, which allowed us to aggregate content from various places inside and outside the company and disseminate it across multiple locations. I designed the MSDN Insiders program. At every turn, however, I just felt like I was building someone else’s vision rather than making the impact I wanted to. Strangely enough, I also wasn’t getting to use the latest, greatest technology to build MSDN. I had a strong desire to build HTML5 sites and mobile sites and to incorporate features like pinning. It’s not that I didn’t believe in what was being built, but my passion for it wasn’t there.

Introduction to IE

I started diving into HTML5, CSS and JavaScript pretty heavily again. Inspired by the type of work that Rey Bango was doing, I wanted to reestablish my passion for the web. I began talking to people about this passion and paying more attention to what the community was saying about standards, development challenges, and, of course, browsers. As I read more books, I noticed that IE was rarely mentioned, and when it was, it was in a bad light. Some of this was perception, but some of it was reputation earned long ago on the backs of still-lingering down-level versions of the product. I became angry when I saw a cartoon being passed around Twitter. The cartoon, which many of you may have seen, was of three boys dressed in browser costumes. A Chrome-costumed boy had a Firefox-costumed boy in a headlock, and they were clearly battling it out, each shouting the name of his product with a determined look on his face. In the corner was a boy in baby-blue pajamas with a baby-blue helmet on. The helmet, of course, had an “e” on it to indicate Internet Explorer. This boy wasn’t battling at all. He was eating paste! This made me mad because I knew that the IE team was actually listening to feedback and had made considerable investments in battling back since IE6. I knew we still had our problems in IE, but we certainly didn’t deserve the paste-eating image.

I sent a well-thought-out email to the VP of IE and expressed my frustration with not only the cartoon, but the idea that we weren’t even considered a player by many people. I expected to get a pink slip for this “career-limiting email”. How often have you heard about someone telling a VP that their product needed work and being rewarded for it? Not often, in my experience. However, I was surprised to see an email come back from the VP fairly quickly. At the end of my email I had asked, “what can we, as employees, do to help with these problems?” The answer for this VP was pretty simple – “come and fix it.” At the VP’s request, I met with a man from IE. He listened to what I was passionate about and what I felt needed fixing. He asked me if I had worked with HTML5 canvas yet, to which I told him I had not. He asked me to go play with it for a couple of weeks or a month and let him know if I liked it and what I thought about working with it for a while. I wrote a few quick demos in a few days with HTML5 and shot them over to him. They weren’t great, so I expected him to hate them. I hadn’t optimized my code. I wasn’t using standards. I wasn’t doing all the things someone interested in performance should do.

We met again for coffee and he listed some of these things on the board as we chatted. At the end of the conversation he said, “these could be your commitments.” Long story short, after a few more discussions I was given an offer to join the IE performance team. I’ve got my work cut out for me. The people I met with in IE are top-notch. They know their stuff inside and out. They know not only the code base and how it works, but how it should work. I’m told it’s going to take me about a year just to ramp up and be slightly effective. These folks have been working on the product for several years on average. I’m humbled to be able to work with them. I doubt I’ll ever be as smart as they are, but I’m hoping to make my small impact on the product.

I was thankful to work with amazing people in MSDN. I’m thankful for this opportunity to work with great people again in IE. I’m excited about this move and the work ahead – something I desperately need to feel again about the work that I do. I’m excited about not only making IE better, but making the web better along the way for all of us.

My Request to Each of You

I’m going to be looking for feedback on IE. I don’t mind brutally honest feedback. If you’ve got it, I can take it. I really don’t know what work will look like in IE entirely or exactly what I’ll be able to do with most of the feedback – particularly so soon after joining the team. However, I’ll always be looking for feedback and will do my best to feed that back into the appropriate people inside the team.

Thanks for indulging me as I reminisced.

[Update] Just for clarity’s sake, I’m not sure how transparent I can be about what I’m doing or what the team is doing. You’ll never hear a new announcement about features or release dates or anything else coming from this blog. That’s really up to the team to communicate on official blogs. I just want to set expectations correctly.

Building a Better Web Application: Part 3

Note:  This is part 3 of a series of posts on building better web applications. If you haven’t done so already, please read parts 1 and 2:


Where We Left Off:

In part one of this blog post series, I explained a four-step process to shape the direction of your solution:

  • Define the problem you are solving
  • Define the purpose your application will serve in solving the problem
  • Identify the audience you will be a hero to
  • Identify the functionality your application will have

In part two, I explained how to gain a significant amount of advantage for your applications by working on a loose methodology involving:

  • Innovation – Brainstorm every idea you can think of to solve each piece of functionality.
  • Emulation – Analyze competitors and innovative web sites to determine how they solved the same pieces of functionality.
  • Integration – Combine your innovation and emulation findings to create the set of workflows needed to support your functionality.
  • Authentication – Verify the validity of your workflows by documenting them.

In this part, I’m going to help you define your success. No, seriously. Doing this will help you refine your problem statement and make smart investment decisions for your startup as it matures. Failure to take the time to work on step 8 WILL result in a failed startup – either because your customers aren’t successful and you won’t know how to correct it, or because investors who don’t understand the problem as well as you do will define a metric that doesn’t make sense for your business model, and then judge you by it! Stick with me here. I know I’m long-winded.

Step 8: Define Success


“What gets measured gets managed” — Peter Drucker

It’s true that you can do the work I’ve presented to you in the previous posts, set yourself up beautifully for a project, and then fail miserably. Time and time again, we’ve seen new companies or products make a big splash with marketing and an influx of curiosity from customers, only to quickly fade away. Any of you still using Friendster?

What is it that causes a company with what seems like a great idea to fall apart so quickly? In my view, it’s a failure to define for yourself what success means, and then to do everything you can to become successful by your own standards.

Now, many of you may be ready to throw in the towel on this post, saying “Oh no, here comes the mission statement and the meaningless metrics.” We ALL have reason to hate metrics. Time and again, we’ve been given a not-so-clear goal of improving some random metric because someone decided it would determine whether a project was successful. To your dismay, the metric didn’t improve, so the stakeholder of your project dumped money into a campaign to help improve it. Your metric improved, but your project failed or was dumped anyway. No wonder developers hate measuring things and loathe the topic any time it comes up. Can’t we just follow the same mysterious voice Kevin Costner did in “Field of Dreams” – “if you build it, they will come,” right? Well, sure. Honestly, if you’ve done everything right to this point, you’re going to attract attention and people WILL come. However, even if you do solve a problem for someone, your ability to track how well you are solving that problem, and to learn from each success or failure your customers have in using your product, is what will keep your project on track.

Let me put it to you this way:

Would you give up time with your friends and loved ones and invest your life savings in a project that was just going to attract people who looked at it and then walked away? There are very rare circumstances where that’s a good idea, and a startup usually isn’t in a position to meander that long. I think many of you would agree it would just be foolish. Yet what is it that many of you want to track the minute your web application goes live? Say it with me: page views!

“Page Views” and Other Roads to Ruin

I’m sure that somewhere in Silicon Valley there is a homeless shelter filled with failed entrepreneurs who shiver any time you mention the phrase “page view”. The page view metric is today’s equivalent of the window shopper. Window shoppers don’t make your business successful. Hell, people walking into your store don’t make your brick-and-mortar shop a success by almost any standard. You won’t stay in business long if you just try to improve the foot traffic in your store. Yet marketing organizations in big companies are judged successful if they can simply drive up page views on your site.

If you’ve set up your project using the method I’ve described in the previous posts, and you decide that you’re going to stop at nothing to drive up page views, you might as well just start handing me your money. No seriously, there’s a donate link right … here. By the time you are done reading this post, you’ll understand that page views and other related metrics are really just symptoms of a well- or poorly-performing application or web project.

Refining your “problem” by defining the metric

So if not page views, what should you measure? To answer that, let me step away from software and give you another example – weight loss – and then we will come back to your application.

According to BusinessWeek, Americans spent $40 billion a year on weight-loss programs in 2008. Today, AOL Small Businesses claims that number is now $60 billion spent on products and services of various types. It’s all a gimmick. Jenny Craig plasters up images of dieters like Carrie Fisher, who lost 30 pounds in 18 weeks. Weight Watchers, a company with the absolutely wrong metric right in its name, boasts that it is rated the #1 best plan for weight loss. These companies are set up to help customers solve a problem – their weight. Seems noble. It makes a good business industry, obviously. Yet it also leaves many people stuck in yo-yo diets because they set their expectations and focus on the wrong measure of success – weight. So what’s wrong with measuring weight? That’s the goal for most of us, right? Not really. I challenged myself to 21 days of exercise and proper diet just to kick-start my own personal fitness plan (blog post forthcoming). The first couple of days, my weight just shed off. After that, I had some days where I weighed more than I did the days prior. That would have been frustrating if my metric for success was weight. Dieters on the Jenny Craig and Weight Watchers plans would have concluded that the program just wasn’t working. However, I also had the sense to track several other metrics, such as:

  • how much I moved during the day
  • what I ate and the food choices I made
  • how much exercise I got

Sure, I do track weight. But weight is a symptom of solving the problem. The real problem was that I was doing all of the things above wrong: I wasn’t moving enough during the day, I wasn’t eating right, and I wasn’t getting exercise. An amazing thing started happening the day I began tracking those metrics instead of just measuring my weight – I felt successful with every act, not just when I stepped on a scale to see if my weight had dropped. By valuing the right metrics, I get instant gratification when I make the right food choice, go for a short walk just to keep my steps up for the day, or burn calories on a bike ride. The thing that Weight Watchers and Jenny Craig are missing here is that they are solving a problem for customers – just the wrong one. Their metric only lets their customers see success once a week, when they step on a scale – and often too slowly to keep the customer interested. My metrics let me see success early and often. You can see that by defining what success means, I actually have a way to refine and focus my problem statement, improve my business model, and make customers more successful on a regular basis.

So What’s Being Measured?

So now let’s get back to the world of web applications. Page views are not a metric to concentrate on improving. Increased PVs are just a symptom of a web application that is solving a problem correctly for customers.

Many businesses incorrectly assume they can game the system by spending money to gain more views. This is a trick to fool investors into putting their money in your company: they see your upward tick of page views and conclude that your app must be doing well. You’d never catch Dave McClure or Garry Tan making that mistake. A good investor can tell whether a business is wisely spending its money by asking what metrics it is tracking. If page views are it, then they know the business is just gaming the system. The business might even try to throw in a “conversion rate” or other traffic-centric metric. Those too are easy to spot as useless because, as stated above, what gets measured gets managed. You can tell a lot about how a company is managed by what it measures. If the metrics are centered around customer success, that company is destined for good things. If the metrics are centered around a one-sided win for the company, you can be sure that company is hoping for an early windfall and an exit strategy. If nothing is being measured, that company is in deep need of help!

If you want to be successful, you MUST DEFINE SUCCESS — for the customer, for yourself, and for the potential investor. You have to dig deeper into your problem and determine what metrics will give you more insight into determining if your customers are successful at solving their problem with your solution.

Let’s pull our example from the previous two blog posts into play here. My fake application is a content curation system. The problem, if you recall, is as follows: “The content that is most relevant to a developer’s evolving needs is buried in a pile of useless material and is never discoverable when they need it.” My purpose is: “to curate the best developer content targeted at an individual’s needs and to deliver it to the customer how they want and when they want it.” My audience consists of professional and experienced developers who build standards-compliant web applications and are passionate about software development. The functionality that I determined was important was: low-barrier community contribution, rewarding community interaction, compulsive browsability, personalizable content, and diverse delivery.

To me, the metrics for this application seem pretty clear, but this is an acquired skill that didn’t come easily for me as a developer. What would you choose to measure to determine success? Go ahead and jot some ideas down. I’ll wait.

Seriously go ahead.

OK, so what’s being measured? If what you jotted down had anything to do with how successfully customers used the functionality that you determined was important to solving the problem, then you can give yourself a pat on the back. All of your metrics should indicate whether customers are successful with your functionality. The best place to start would be to go back to your Evernote (or other planning system of your choosing) and look at your functionality notes. Remember that we spec’d out “low-barrier community contribution” and determined one workflow to be:

  • User posts a link to Twitter from a cell phone, using the hash tag #CR8ME (“curate me”).
  • User receives a response tweet instructing them to connect their Twitter account to the app.
  • User clicks the link, authorizes the app, and the new post is added to the curation queue.

Your metrics should then include things like:

  • Number of customers using the #cr8me tag to initiate the sign-up process
    • % who are sent an authorize link but do not click it
    • % who click the authorize link
    • % who click the authorize link but do not authorize the app
  • Number of customers using the #cr8me tag to add new items to the queue
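To make those funnel percentages concrete, here is a minimal sketch (not from the original post) of how they might be computed from a raw event log. The event names, sample data, and the `usersAt` helper are all hypothetical – the point is simply that each metric is a count of distinct customers reaching a step in the workflow:

```javascript
// Hypothetical event log: one record per customer action in the
// #cr8me sign-up funnel. Event names are illustrative only.
const events = [
  { user: "a", type: "hashtag_signup" },
  { user: "a", type: "auth_link_sent" },
  { user: "a", type: "auth_link_clicked" },
  { user: "a", type: "authorized" },
  { user: "b", type: "hashtag_signup" },
  { user: "b", type: "auth_link_sent" },
  { user: "c", type: "hashtag_signup" },
  { user: "c", type: "auth_link_sent" },
  { user: "c", type: "auth_link_clicked" },
];

// Count the distinct users who reached a given step.
function usersAt(type) {
  return new Set(events.filter(e => e.type === type).map(e => e.user)).size;
}

const sent = usersAt("auth_link_sent");
const clicked = usersAt("auth_link_clicked");
const authorized = usersAt("authorized");

// The workflow metrics from the list above, as rounded percentages.
const funnel = {
  signups: usersAt("hashtag_signup"),
  pctSentNoClick: Math.round(100 * (sent - clicked) / sent),
  pctClicked: Math.round(100 * clicked / sent),
  pctClickedNoAuth: Math.round(100 * (clicked - authorized) / clicked),
};

console.log(funnel);
```

Whatever analytics package you use, the idea is the same: derive each number from the workflow step it measures, rather than from raw traffic.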

Why would you do this for every feature? Because if you truly believe that “low-barrier community contribution” is vital to the success of your project, and a month after launch you determine that people aren’t using your solution to that problem, you can manage that however you feel is best. You could supplement or replace the feature with additional functionality. You could determine that people aren’t aware of the feature and market it better. The point is, you can now manage the things that are relevant to the solution you have defined.


What Doesn’t Get Measured

“What doesn’t get measured?” is a question that is rarely asked. More data is always good, right? Wrong. There are times when it’s better not to collect data at all. Collecting data that means nothing allows someone to later look at numbers that have no relevance to your project and start rat-holing on them. For instance, many web analytics packages let you track things like “cost per end action”. This is great if you are actually running a marketing campaign, have defined end actions, and want to determine the effectiveness of your campaign. If you haven’t done those things, but are collecting and reporting that metric just because you can, an investor, stakeholder, or customer can look at it and say “well, they can’t be that good because their CPA is really bad”. Additionally, going back to the adage “what gets measured gets managed”, you’ll find yourself trying to improve a metric that has no meaning to you. It will waste your time and resources. If you don’t need to track it but do anyway, you or your investors will say “well, maybe I should be running a marketing campaign so we can track this better.” Then you’ll spend countless hours working on a plan, a process, and a marketing campaign just to improve those numbers. All the while, those resources could have been better spent tracking and improving metrics that are relevant to the problem you are trying to solve and the functionality you determined solves that problem.

If it ain’t planned, don’t track it!

Document your Metrics

Document all of your metrics, and the reasoning behind them, in your project document system (again, I’m using Evernote, but you might just have hand-written paper and a file folder or two). For me, in my Evernote, I added the metrics under each workflow, such as in the following screenshot:





Measuring the success of a web application is not a cookie-cutter process. If you haven’t defined success for your application before you start coding, you’ll likely find yourself trying to get meaningful data when it’s too late – like when your business is failing, or an investor comes calling and wants more information. With any initiative in life, it’s always good to know what the end game is to keep you focused on your vision and to help refine your problem statement.

  • track workflow metrics – Your workflows are tied to functionality, which in turn is tied to your problem statement. So it makes sense to start by looking at the workflows you created for each piece of functionality and determining what data is needed to track your success or failure.
  • set your limits — Determine in advance what values or trends you’ll accept as “success” and what values mean you need to regroup.
  • avoid mindless collection – Don’t track what can’t be tied back to a problem-solving functional workflow, or to a process or marketing program that supports your current business plan. Collecting “just in case” will waste resources and time, and may cost you in the future.
  • document your metrics – Keep track of what metrics you intend to collect and document them with the rest of your project. Once again, showing an investor or other stakeholder what you are doing to solve the problem, and how you are tracking success with your approach gives you credibility and your stakeholder gains confidence.
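As a small illustration of “set your limits”, the check itself can be trivial once the thresholds are written down in advance. The metric names and floor values below are hypothetical, but the shape is the point: agreed-upon limits on one side, measured values on the other, and a mechanical comparison that tells you where to regroup:

```javascript
// Hypothetical success thresholds ("set your limits"), defined in advance.
const limits = {
  pctClickedAuthLink: { min: 50 },  // at least half of sent links get clicked
  newItemsPerWeek:    { min: 100 }, // community contribution volume
};

// This week's measured values (illustrative numbers only).
const measured = {
  pctClickedAuthLink: 67,
  newItemsPerWeek: 42,
};

// Return every metric that fell below its agreed floor, so the team
// knows exactly where to regroup instead of reacting to raw page views.
function metricsToRegroup(limits, measured) {
  return Object.keys(limits).filter(k => measured[k] < limits[k].min);
}

console.log(metricsToRegroup(limits, measured)); // → ["newItemsPerWeek"]
```

Because the limits were decided before launch, a miss here is a prompt to revisit the functionality, not a surprise delivered by an investor.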

I’m “counting” on your feedback (ba-dum dum). I intend to revise this post and keep it updated based on community feedback so please give some! Thanks for reading.

I would appreciate if you would tweet, “like” or otherwise recommend this post if you feel it has been helpful. I’m not tracking these as metrics for success, but it’s always nice to know when someone finds your work helpful. Thanks again for reading!
