Category Archives: Technical

These posts are technical in nature. The casual reader may not care about these posts. These are for my fellow geeks.

Visual Studio 2005 Community Launch Tools

This year, I was asked to be the INETA community launch champion for my local user group. Essentially, the job is to present at least two topics related to Visual Studio 2005, SQL Server 2005, and/or BizTalk 2006. I was honored to be picked for this role and have done my best to provide the best possible experience for those who come to listen. This month’s topic is “Managing the Software Development Life Cycle with Visual Studio 2005 Team System”. WOW! Here’s the blurb that the UGs sent out on this topic:

“Today’s software projects have one consistent trait – they fail. They fail to meet budgets. They fail to meet deadlines. In many instances, they fail to even make it to implementation. In 2000, only a fraction of software projects succeeded. That rate did not get much better in 2004. Industry demands such as more complex business requirements, government regulations, and standardization of components will make success all the more challenging. In this presentation, we’ll explore how you can utilize the new tools found in Visual Studio Team Services to increase your own success track record. This presentation will show how requirements can be gathered early, managed, modified, tracked, and reported on all the way through the software development life cycle. Come see the powerful new tools that are provided for architects, developers, testers, project managers, business analysts, and even project stake holders!”

While it’s always a good thing to prepare for a meeting, I realized while sitting in my hotel room that there are quite a few tools that I use to give presentations. I thought I might post pictures and information about the tools I use so that others who are just getting started in their presentation careers can get a glimpse into their future.

The Laptop

It’s an HP Pavilion zd8000 series laptop that has been customized for the best performance I can get out of it. It has 2 GB of RAM, a 100 GB hard drive, a LightScribe DVD burner, and the like. I love this laptop because it has 4 USB ports (5 if you count the HP USB digital drive port), FireWire, built-in wireless, Bluetooth, a 5-in-1 media reader, built-in speakers, and the list goes on. It even has a media remote control (meant for use with Windows Media Center Edition) that works very well for remotely moving forward and backward in PowerPoint slides. It’s a veritable Swiss Army knife of laptops, and the kicker, of course, is the 17″ widescreen LCD.

notebook

The External Hard Drive

I use a Maxtor 300 GB external hard drive to keep all of my VPC images on. This serves as both a repository and a backup for my Virtual Server (or Virtual PC) images. Having your VHD image on a hard drive other than your system drive is essential for performance. This drive is particularly useful because I can use either FireWire or USB to connect it to any system. Using Maxtor’s software, I can also use this device to automatically back up files from any of my systems — and literally at the touch of a button!

external hard drive

The Bluetooth Headset

On occasion, depending on the room setup, I can use this Motorola headset in place of a mobile mic. I pair the headset with my laptop and use it as a mic that outputs to the speakers at the podium. If I bend the podium mic toward the speakers, I have a virtual walking mic. Obviously, if a mobile mic is available, I use that instead for better clarity.

bluetooth headset

The Tablet PC

You may be asking why I use both a laptop and this new Gateway Tablet PC. Actually, it makes great sense if you ask me. I can set up the tablet as a sort of teleprompter during my presentations. The tablet holds my demo scripts and walkthroughs (in case I lose a bolt during the presentation and need to remember where I am). Also, I didn’t buy the top-of-the-line tablet. While the 14″ widescreen LCD makes this item look expensive, I only paid $1300 for it — and that’s standard pricing. It only has 512 MB of RAM, but that’s all I need for my tablet PC needs — particularly when doing presentations. This tablet has a directional, mouse-like input device on the left-hand side that allows me to easily scroll up and down in a document without using the stylus.

tablet PC

The Thumbdrive

While I do have wireless, Bluetooth, and infrared connections that can share data, I prefer to use the thumbdrive for quick transport between the laptop and the tablet when necessary. I also carry a copy of the presentation materials on it in case someone asks me for them. Then it’s as easy as plugging the thumbdrive into their machine and letting them drag the files onto their desktop.

thumbdrive

The Smartphone

I use the smartphone to help keep my timing in presentations. I leave the phone clipped to my belt and set up vibrating reminders to tell me when I should be at a certain point in my presentation. If I’m not there, I can speed things up. If I’m off to the races, I can slow things down and take a few extra questions when needed. I wouldn’t recommend buying one of these if you just want it for these purposes. I just happened to already have this phone, so it’s what I use.

SMT5600

I love my tools. It’s taken me a little while to learn what works and what doesn’t. I would imagine what works for one person won’t necessarily work for another. Let me know what you think.

Hackers Attack via Chinese Web Sites

I don’t know if anyone caught the Washington Post story a few days ago titled “Hackers Attack via Chinese Web Sites”. It seems to have slipped past everyone in the news. Of course, the WP has become so disreputable and biased that it shouldn’t surprise me that no one paid attention. However, we were warned well in advance, so it should be no surprise.

This raises a question, however. The government has to be, without a doubt, one of the largest consumers of computer goods and services. While I am not an advocate of increased tax spending, this area could use some. Perhaps it’s time to take a different approach with that spending. We have tried many things over the years: Internet war games, hiring our own elite forces, and even creating more laws and policies. These are certainly deterrents, but security in depth is the key here.

Any good football team has a good offense, a good defense, and a different game plan depending on who the opponent is. We have the offense now (as mentioned above), and we have ‘some’ defense as well. Laws do us no good when dealing with hackers in foreign countries. So what is the answer? I will not purport to have that ‘nail in the coffin’ answer to cyber terrorism, and anyone who claims they do is selling you a bill of goods (and they will typically have the abbreviation “Sen.” or “Rep.” in front of their name). However, there are a few other things we need to explore. One mark of a great football team is the ability to surprise and misdirect the opponent. Making the opponent attack in the wrong direction has often led to victory in some of the best games I have seen played. In IT, the misdirection can be supplied with honeypots and misinformation. We can take a page from the tabloids and start putting out information that sounds correct and feasible but is nothing more than fodder for the masses. For instance, sending out communications that will most likely be intercepted to “expose” a weakness that is actually a strength can set an attacker up for a huge failure.

This is also not a new concept in conventional warfare. Many of you may remember the movie “The Patriot”, a loose depiction of the Revolutionary War battle at Cowpens, SC, which made use of a “double envelopment” strategy: essentially, a perceived weakness was used to entice the enemy into a trap. Obviously, this was not the first use of the strategy either, but it is highly notable due to the movie’s popularity. Honeypots and misinformation are highly useful in this same context. We strengthen what we now understand to be a weakness, and then taunt the enemy with that weakness again. While the enemy attacks, we have a better chance of pinpointing their location, and perhaps sending them a nice drone-delivered “ACK” to their received packets.

The football analogy works to some degree when trying to put together a cyber security policy. However, we do not “play” against one enemy at a time. We play every team out there — known and unknown. This is why defense is the most important aspect of our policy. Our defense needs to be highly educated, state-of-the-art, and driven. We have no way of knowing who is going to attack, or when. There is no way we can be prepared for every possible attack. However, we can at least provide some misdirection while we shore up our defense and plan our counter-attacks.

Engineering for Reliability: Learning from a chair

The other day, I posted an article about Engineering for Usability. I hadn’t intended to make a series of this, but it may very well turn out that way, as today’s post is about engineering for reliability. Who knows what future posts may hold.

I was sitting in a doctor’s office the other day, waiting very patiently to be called in. I fidgeted as I always do. Leaning forward, leaning sideways, flopping around like an idiot, just trying to reconcile myself with the fact that I didn’t have a laptop or anything else in front of me to do. But then it sort of hit me. I was abusing that chair to death, and it was taking it.

Let’s think about the first sentence of this blog post. I said that I was sitting in the doctor’s office. I’ll bet not one person reading this said “Oh man, what kind of chair was it?” or “Oh my goodness, what if it breaks?” or “Holy cow, that guy is really rolling the dice there, isn’t he?”. All of you most likely pictured someone just plopped down on a chair. No one was really fixated on the idea that the chair could fall apart or that it wasn’t an appropriate height. The reason behind this is that, in general, we tend to assume that a chair will work when we use it. When was the last time you tested a chair to see if it would hold your weight? I’m sure under the right circumstances you just might — like if the chair looked visibly weak or damaged — but once again, the general consensus is that it’s going to work out for you.

When was the last time you could rest this easy with the software we use on our desktops? Do we ever hear someone talk about a piece of software crashing and say, “Wow, that’s unusual”? No; in fact, many software vendors design their software to handle crashing more gracefully. Microsoft has implemented the ability to report errors that occur (with a prompt to the user, of course). We have event logs, log files, dump files, and the like. As developers, we sometimes spend more time preparing for our software to crash than we spend making sure our software doesn’t crash to begin with. But while we should spend a great deal of time testing and thinking through our applications — making sure they can work — that isn’t even half of what is required to develop a reliable software system. There are things that are out of our control, such as network connectivity, power failures, hard drive crashes, and other hardware failures.

For these very reasons, desktop software most likely never will be as reliable as a chair — it’s too dependent on too many things in the environment. We have to depend on outside help such as uninterruptible power supplies, RAID controllers, and backup network connections. These are not typically all found in a home system, and even if they are, you can’t make them prerequisites for installing your software, because your available user base would shrink considerably. The chair, on the other hand, has everything it needs on hand to appease its user base. Despite the fact that there are a million chair varieties, they all come with their own support system and don’t depend on anything but gravity to work right (there’s that gravity constant again).

When it comes to desktop software, and even small business applications, we are forced to plan for and handle as many failures as we can foresee. We have to accept the fact that outside influences such as memory corruption and I/O errors may cause our application to misbehave. Obviously, with enterprise software, the guys with the big bucks pay us to implement systems with hardware failovers; double, triple, or quadruple redundancy; and the like. We can make most of those applications truly reliable, nearly as reliable as a chair. However, until we see Dell shipping every system out the door with a UPS, backup and restore operations, RAID, and free connectivity to multiple providers, prepare your applications well to handle these errors.

So what can you catch? What can you do to make the user trust your application in spite of these obvious physical problems?

  1. Call it as you see it when a failure occurs. Make sure, before you crash, that you point the finger at the culprit: “An I/O error has occurred; shutting down to prevent data loss.” or “This application was shut down unexpectedly. Would you like to restore?” are common dialogs that you see in highly reliable desktop applications such as Microsoft Word. Which brings us to our second point.
  2. Provide data recovery after failure. This typically means that you have to save state in your application often. One of the most famous uses of this was already noted above. Microsoft Word can recover from an error by presenting you with a recovered document as of the last autosave. Obviously, you don’t want to automatically save to the file that was opened. Microsoft Word creates a temporary file with a similar name to the opened file (placing a “~” at the beginning of the name and setting the file attributes to hidden). That way, when the application crashes and someone attempts to open the document again, Word can recognize that changes were made to the file. Office then knows to alert you that you can recover the changes to the document you are trying to open.
  3. Provide failover storage during periods of communications disconnection. When you cannot communicate with a remote database, consider writing changes to a local storage repository and allow those changes to be uploaded when communications are restored. This allows the user to trust that they can continue working in the application and that their changes won’t be lost or have to be retyped should communications go down. This store-and-forward style of communication is even seen at lower layers of the OSI model in routers, switches, and firewalls. Packets are received, stored, and then routing is attempted. If the routing fails, the attempt can be made again because the packet was stored in the device.
  4. Restore network communications automatically. Take note of applications like ICQ, MSN, Yahoo, and the like. They can detect when an internet connection is available and automatically reconnect to their respective services when communications are restored. Don’t expect the user to reconnect every time. This can get annoying if a network connection is having particular difficulty.
  5. Provide logging and feedback capabilities so you can determine what errors occur and how the user experience is impacted. Not every user will use them; in fact, most will opt out. But when users are willing to take the time to fill you in on what’s happening, you should take it. Consider it free QA. Make sure you aren’t just collecting this information, though. Be sure to respond appropriately with hotfixes and service packs that address the issues you find. Let the users know that you cared enough about their input to make sure it didn’t happen again.
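The autosave-and-recover pattern from point 2 can be sketched in a few lines. The sketch below is in Python rather than .NET to keep it language-neutral, and it only loosely mimics Word's behavior: the "~"-prefixed sibling file, the function names, and the write-then-rename trick are my own assumptions, not Word's actual implementation.

```python
import os

def autosave_path(doc_path):
    """Hidden sibling file used for periodic autosaves (Word-style '~' prefix)."""
    folder, name = os.path.split(doc_path)
    return os.path.join(folder, "~" + name)

def autosave(doc_path, contents):
    """Called on a timer while the user edits; write-then-rename so a crash
    mid-save never leaves a half-written recovery file."""
    tmp = autosave_path(doc_path) + ".tmp"
    with open(tmp, "w") as f:
        f.write(contents)
    os.replace(tmp, autosave_path(doc_path))  # atomic rename

def open_document(doc_path):
    """On open, offer the autosaved copy if a previous session left one behind.
    Returns (contents, recovered_from_crash)."""
    recovery = autosave_path(doc_path)
    if os.path.exists(recovery):
        with open(recovery) as f:
            recovered = f.read()
        os.remove(recovery)
        return recovered, True
    with open(doc_path) as f:
        return f.read(), False

def close_document(doc_path):
    """A clean shutdown removes the autosave, so no recovery is offered next time."""
    recovery = autosave_path(doc_path)
    if os.path.exists(recovery):
        os.remove(recovery)
```

A real implementation would also compare timestamps and ask the user before discarding either copy, but the core idea is just this: the presence of a leftover autosave file is itself the crash signal.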

This certainly isn’t a comprehensive list, but it should be enough to get you started. You’ll never be able to meet the reliability standard of a chair with software, but you should be able to instill confidence in your application’s users. Let them know that they can “sit back and relax” knowing that you, your application, and their chair will be there for them when they expect it to be.

Engineering for usability: Vending machines

Today, I used a vending machine. Anyone who has met me will surmise that this was obviously not my first attempt to purchase an unhealthy snack from such a contraption. In fact, not only have I used vending machines excessively, I’ve written code for kiosks that use the same currency- and coin-accepting hardware as vending machines. Suffice it to say, this long paragraph was meant to proclaim my self-appointed title of “Certified Vending Machine Professional” (CVMP).


However, with all of my years of experience, today was different. Today, I approached the vending machine tired from lack of sleep the past month, distraught over having to get my cat a permanent tracheotomy, cranky from a back problem, and distracted by tons of work-related issues running through my head. I carefully stood in front of the machine contemplating what I would get this time — as though the product selection would have changed since the last time I looked. Well, I am supposed to be on a diet, but due to the massive amount of life issues I’m having at the moment, I decide a “healthier” vending machine snack of pretzels will suffice this time. “D10, 65 cents, OK,” I think to myself as I begin to press the numbers out on the keypad: “D, One, Zero — CRAP!” As it turns out, D-10 expected me to press D and then press a key specifically set to 10 — not 1 then 0 as I was thinking. I knew this. I’ve used enough vending machines in my life to know how they work. But today, I failed the test. I immediately began to blame the vending machine and started re-engineering a smarter vending machine in my head.

Here were the problems I saw with the current design.


  • The keypad was placed at the bottom of the machine, forcing me to bend my already sore back over to press the numbers. I’m not exactly tall, but those numbers were way the @#$* down there!

  • The vending machine made me bend down even further to retrieve my calorie-clad selection from a bin.

  • The item-numbering scheme was flawed, with that D10-vs-D1 conundrum.

  • The selection sucked

  • They only accepted cash or coins, which I often don’t carry.

The “let’s design something cool” personality in me immediately started thinking about solutions to this problem.


  • Put a camera on the vending machine that recognizes the height of the customer. The vending machine should immediately raise or lower the keypad based on the user’s height.

  • Place the items on a selection belt where the item is dispensed into a bin and delivered to the user at a height that’s reasonable — again, based on the height of the user requesting the product.

  • At the minimum, remove single-digit selections and replace them with all double-digit numbers — force the user to press the full number sequence. Better yet, prompt the user to press “submit” when they are done entering their item number. Don’t automatically dispense the product based on the first match of a selection.

  • Provide a digital feedback center where you could request that the vending machine carry your favorite flavor of Twinkie.

  • Place a credit card slot on the machine
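The third fix above — requiring an explicit “submit” instead of dispensing on the first match — is easy to sketch. This is a hypothetical Python model with made-up item codes, not any real machine’s firmware:

```python
# Hypothetical item catalog: code -> (name, price in cents)
CATALOG = {
    "D1":  ("gum", 50),
    "D10": ("pretzels", 65),
}

class Keypad:
    """Buffers keypresses and dispenses only on an explicit submit.

    Dispensing on the first catalog match would make "D10" unreachable,
    because "D1" matches as soon as the first two keys are pressed.
    """
    def __init__(self, catalog):
        self.catalog = catalog
        self.buffer = ""

    def press(self, key):
        self.buffer += key  # never dispense mid-entry

    def submit(self):
        selection, self.buffer = self.buffer, ""
        return self.catalog.get(selection)  # None means invalid code

# A naive machine, by contrast, dispenses on the first match:
def naive_press_sequence(catalog, keys):
    entered = ""
    for key in keys:
        entered += key
        if entered in catalog:
            return catalog[entered]  # "D10" can never be reached
    return None
```

Pressing D-1-0 on the naive machine drops a pack of gum the instant the “1” lands, while the buffered keypad happily waits for the submit key. The same prefix-ambiguity reasoning shows up anywhere input is matched incrementally, from phone menus to command parsers.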

But then the “practical architect” personality interjected with complaints about these solutions. First off, let’s think about this. Who is the customer of a vending machine? Was it me? Surprisingly, no. The vending machine “customer” is the company that bought it with the intent to make a profit from it. Sure, I may use the vending machine, but ultimately, it’s the vendor that wants the mechanism that accepts payment and dispenses product. We unwittingly provide our services as testers of the product and give feedback on it (based on our use or non-use of the product). That whole argument is reserved for another post, though. Blindly accepting my argument that the vendor is the customer of a vending machine, your next question is: what are the vendor’s requirements for the machine?


Using a loose set of those spiffy PASS MADE criteria from the MSF exam (70-300) for your MCSD, let’s take a look at those requirements.



  • Provide a fast transaction from which a user can request a product and get out: a quick response to the user’s request for a product. (Performance)

  • Provide a usable interface from which to conduct unmanned transactions. (Accessibility)

  • Provide a safe way to keep money received from the transactions. (Security)

  • Provide the ability to easily add more product to the machine — for those of you that don’t know this, they actually can add additional slots next to the vending machine without adding additional currency acceptors. (Scalability)

  • Provide a mechanism that works nearly every time, with little intervention required by the vendor. (Maintainability)

  • Provide 24/7 access to any would-be customers. (Availability)

  • Allow these items to be shipped easily, without fear of breaking during transit or movement from one location to another. (Deployability)

  • Allow alternate methods of payment, exchange of hardware, and various types of product to be dispensed. (Extensibility)

So let’s apply these characteristics to my “solutions” from above. We’ll find out these were not solutions at all, but fixes to perceived problems from a single-minded personality. My Architect (as I’ll refer to that voice in my head) says this:


The product isn’t on a belt because the time between asking for the product and obtaining it could be much longer (Performance). The keypads were not arbitrarily placed at waist level to tick me off; they were deliberately placed there to provide easier access to those who are height-challenged or otherwise require accommodation (Accessibility). The credit card reader isn’t placed on the machine because it would be too easy to intercept. These vending machines can be placed anywhere, and sometimes in really poor locations (rest areas, for instance). While having a vending machine broken into would be bad, having credit card data compromised from one of these locations would be worse (Security).

Placing the items on a belt would make it harder to provide additional items within the physical range of motion of the device. It would be harder to add more product to the machine this way — and much more expensive (Scalability). Those devices would also have a large number of moving parts that could break. Putting the keypad on a moving device, as well as the product, would reduce the useful life of a vending machine while raising its cost — bad for any investment. It would also be susceptible to significant breakage and massive amounts of upkeep (Maintainability). Putting these items in a bad area would be an even worse move. They would be much more expensive to replace and fix if someone vandalized them — much more, that is, in relation to a simple spring-driven product dispenser that relies on gravity to get the product to its destination. The parts would also have a much higher dead-on-arrival rate than normal vending machines, causing massive replacement shipping and delivery costs (Deployability). This is a stretch, but by placing additional overhead on these machines, you actually reduce the available room for additional hardware. You could be restricted to a smaller area, and some of the additional hardware might interfere with other pieces of hardware, causing more issues than it solved (Extensibility).


Availability, therefore, seems to be the only area that wouldn’t have been negatively impacted by my decisions for a vending machine. As software engineers, we need to make sure we aren’t over-engineering our products. Many times the “cool” solution isn’t the right solution. For instance, the delivery mechanism, while annoying for anyone who is tall or has a back problem, is fairly consistent. I mean, who has a better system than gravity? Last I saw, it was a constant! Sure, occasionally product gets stuck in the machine before gravity can do its part, but I guarantee the mechanical delivery system would be much more prone to mistakes than the simple “twist and drop” method of most vending machines. While I may have potentially solved “my” problems, I’ve caused many more for my real customer — the vendor. In actuality, my redesigned vending machine would have cost more money to purchase and maintain, caused dependability problems due to mechanical failure, and most likely annoyed everyone more than “New Coke” ever did.


Yes, it’s great to solve your problems in cool ways, but make sure the problems you are solving are for the right person and actually advance your product design before adding more features than are necessary.

Charleston Code Camp: Saturday September 17th

OK guys and gals. Some pretty big names are rolling into Charleston on Saturday, September 17th to present, free of charge, all those fancy topics we love to yap about so much. This is Charleston’s first code camp. Chris Williams tells me that enrollment is low. What would it take to persuade a few more folks to take a nice weekend trip to a college/beach town, watch some cool technology in a demonstration and go have a few drinks?

Check out the sessions and the speakers. If you don’t find anything you are interested in, come anyway and enjoy the beach with your fellow geeks! Please register quickly!

Escape From Yesterworld!

Since I ended up on the INETA Community Launch Team, I guess I should start promoting a few things about Visual Studio .NET 2005 and SQL Server 2005. After all, it can’t ALL be about JAXASS (Javascript And XML Accessing Services Simply), can it? Let me start by pointing you to this new tidbit from Microsoft. It’s an awesome play on some well-known comic movies of the past called Escape From Yesterworld. Prepare to spend a little time there, but be prepared to laugh in the process. Someone sank a LOT of time into this promotional video/site, so the least you can do is click on the main page and check it out.

Does standardizing technology stifle creativity?

I’ve been talking with a few people recently about COmega, C# 3.0, C# 4.0, and the like. On top of that, I try to get my hands on anything “new” that I can, well before it becomes of interest to the general public. I keep getting asked why I like to sit on the razor’s edge (or at least pretend to). I usually tell them it’s because it’s the only way that I can continue to remain creative in this field. The rest of this post articulates my rationale behind this idea.


I’m definitely a child of the PC computing industry. I’ve grown up around computers. I’ve seen one innovation after another. I have to say the most amazing times in my programming life have been spent hunched over a computer all night, hacking out a new way to solve a problem. Even later in my professional career, during the high-tech late ’90s, I had a blast playing with XML data islands before the first example hit the list server (wow, does anyone remember learning new concepts on list servers? Yikes, I feel old). Whatever the case, what was interesting was that everyone was learning something new, because everyone was solving technology problems on their own; sometimes for better and sometimes for worse. But every programmer had to learn to be creative to solve problems.


In recent years, the big push is to standardize technologies at the earliest possible moment — locking most programmers into one way to solve a problem. Failing to follow the standard usually lands a host of glaring architect eyes on you. You can bet that if you do come up with a new way to do something, someone is already trying to create a standard to the contrary; again, most times for the better and sometimes not. Sometimes those standards, at least initially, fall far short of being a good solution for more than a handful of people (i.e., WSE). But as more people jump on board, those standards morph into something more usable (i.e., WS-* added to WSE-n). Sooner or later, design-time components are amassed by third-party companies around these standards like a bad mildew stain. We are all sort of stuck using those tools and those standards until Microsoft decides to devour that component industry and write its own wrappers around the standard.


There are only a handful of people who then have any say on the standard and the self-proclaimed “right way” to do things. Sure, these standards do let us focus more on solving business problems instead of technology problems, but who wants to do that? We can sit back in our spare time and “play” with our own solutions, but when you get back to work in the morning, it’s back to WSE-2, RSS, AJAX, and various other patterns and practices.


I do understand the benefits, so don’t get me wrong. I’m much happier to have a tool generate my WSDL, at least as a starting point, than to write it myself. I also get to look at some other cool technologies because I can look past some of the questions that the standards answer for us. However, with all the real benefits that standardization brings us, there are times when I miss having the freedom to innovate on my own without first looking at how everyone else does it.


That is the treaty that the business folks have signed with us geeks — we get to play with their expensive toys and data centers but only if they get to make us “more productive” with standards bodies.

Throw-away projects in .NET

There are tons of new features in Whidbey. Enough so that I feel the learning curve going from 1.1 to 2.0 can easily be 50% of the original effort of learning .NET (if not more). You can spend your time learning generics, predicates, partial classes, reliability contracts for threading, and, well, you name it. However, sometimes it’s the little features that mean so much. VS.NET 2005 provides a little-known feature that allows you to create projects, run them, test them, and so on, but unlike what you might be currently used to, when you close the project, you can make the whole thing go away. In other words, you can open VS.NET to test a theory, run the project, do your debugging, and then dump the project so it isn’t wasting file space or virus-scanner time. To enable this feature, go to Tools | Options to display the Options dialog. In this dialog, choose the Projects and Solutions tree node. Uncheck the “Save new projects when created” checkbox. Apply the changes.


Now, when you close VS.NET with an open/active project, it will ask you if you want to save it. If you click No, the folders are completely gone. Isn’t that awesome!?


Well, I liked it anyway.

Atari 800XL – Old school fun

The other day, I posted an article about not having any gadgets to play with anymore.  So as I was sitting here debating what to buy, I started searching ebay and somehow came across an old Atari 800XL – Still in the box! This was the first computer I ever owned. My parents gave mine away several years and I haven’t been the same since. I wrote some of my first text-based games in 5th grade on that computer. In any case, I purchased the Atari and won it at a very hefty sum for what it was. I then went a little nuts and bought the assembler editor for it, Atari BASIC (my old programming language), and a few other extras. In case you are wondering, yes, those cartridges will fit into an Atari 2600. They wont do much though in the game console. I have to see if I can get a 1010 casette drive or a 1050 disk drive. I wouldn’t mind the 1030 printer either. I started surfing for some information on how to program these things, because its been 20 years since I’ve seen them. Low and behold, I hit the jackpot at www.atariarchives.org. This site is unbelievable. Tons of books on Atari programming online at no charge! They even had one of the books I had as a kid: Atari Player-Missle Graphics in BASIC. I don’t remember what the other book was, but I’ll continue digging when I’m more coherant. 
This is great stuff to play with, particularly if you are trying to learn the internals of a computer and how to program at the machine, assembler, or high-level language levels. Because this is a primitive 8-bit system, it's much easier to learn how to manipulate the machine to do what you want. You can then scale that knowledge out to 16-, 32-, or 64-bit systems. I cannot wait for this item to ship, and I'm dying to get it on my desk at work.
So maybe this isn't the high-tech purchase I imagined as a first swat against the scourge of gadgetlessness, but it sure is going to be a very nice one for me. I promise to buy something more meaningful soon.

Overrides vs Shadows keywords in VB.NET

When most folks come from a VB6 background (and only VB6), they tend not to quite grasp object-oriented programming right off the bat. It's one of the reasons inheritance topics are fairly common on the Visual Basic .NET forums and newsgroups. Today was no different. I woke up and browsed the MSDN forums to find another such question. The user was asking what the difference is between an Overridable method being overridden with the Overrides keyword and an ordinary method being hidden with the Shadows keyword.

I decided to write a quick article about this and post it for future reference.

First off, let's define the similarities between these two keywords. Both Shadows and Overrides decorate our methods so we can redefine functionality provided by a base class. So if I create a BaseClass, and then create a SubClass that inherits from BaseClass, it's possible that I might want to redefine how the base and sub classes behave and interact with one another. While I can use either of these keywords, they behave quite differently.

Let's jump right into it by writing ourselves a base class. This class will have two methods: one overridable, and one ordinary.

' BaseClass implements an overridable and an ordinary method
Public Class BaseClass
    Public Overridable Sub OverridableSub()
        Console.WriteLine("   BaseClass.OverridableSub() called")
    End Sub
    Public Sub OrdinarySub()
        Console.WriteLine("   BaseClass.OrdinarySub() called")
    End Sub
End Class

When you see the Overridable keyword in a piece of Visual Basic code, it is an indicator that the designer of the class intended or expected that it would be subclassed at some point, and that OverridableSub was a likely candidate for functionality redefinition. You can also deduce that the designer didn't intend for OrdinarySub to be overridden.

For a bit of advanced education, let's look at the IL produced by these methods. First off, OrdinarySub looks like any other subroutine we've seen in IL before.

.method public instance void OrdinarySub() cil managed

We have a public instance method. Look what happens, however, with our OverridableSub method signature.

.method public newslot virtual instance void OverridableSub() cil managed

Notice we have two additional method characteristics: virtual and newslot. The virtual characteristic should be familiar to most C# developers. It indicates that the method can be overridden; it should now be obvious to VB.NET developers that this corresponds to the Overridable modifier. The second characteristic, newslot, indicates that the method always gets a new slot on the object's vtable. We'll get into this more later.

So if we want to test our BaseClass, we might write a piece of code something like this in our Main method:

Module Module1
    Sub Main()
        ' Create an instance of the base class and call it directly
        Console.WriteLine("+ Calling BaseClass Methods")
        Console.WriteLine("--------------------------------------------------")
        Dim bc As New BaseClass
        bc.OverridableSub()
        bc.OrdinarySub()
        Console.ReadLine()
    End Sub
End Module

And of course, executing this code, we get the expected results:

+ Calling BaseClass Methods
--------------------------------------------------
   BaseClass.OverridableSub() called
   BaseClass.OrdinarySub() called

As you'll notice, it states that we executed both of our methods against the BaseClass instance. Sounds great. Now let's say we want to create a class that inherits from BaseClass:

' SubClass redefines the BaseClass methods using the Overrides and Shadows modifiers
Public Class SubClass
    Inherits BaseClass

    Public Overrides Sub OverridableSub()
        Console.WriteLine("   SubClass.OverridableSub() called")
    End Sub
    Public Shadows Sub OrdinarySub()
        Console.WriteLine("   SubClass.OrdinarySub() called")
    End Sub
End Class

To redefine both of these methods, and stop the compiler from barking at us, we needed to use the Overrides keyword on OverridableSub and the Shadows keyword on OrdinarySub. This is obviously because of the way we implemented these methods in the base class. Shadows is a fairly appropriate word, because what we are doing is putting the original method (OrdinarySub) in the proverbial shadow of our new method. Our new method stands on top of the old one and executes any time we call against a direct instance of SubClass. Let's expand our Main method to execute against both SubClass and BaseClass to see what the difference is:

Module Module1
    Sub Main()
        ' Create an instance of the base class and call it directly
        Console.WriteLine("+ Calling BaseClass Methods")
        Console.WriteLine("--------------------------------------------------")
        Dim bc As New BaseClass
        bc.OverridableSub()
        bc.OrdinarySub()
        Console.WriteLine()

        ' Create an instance of the sub class and call it directly
        Console.WriteLine("+ Calling SubClass Methods ")
        Console.WriteLine("--------------------------------------------------")
        Dim sc As New SubClass
        sc.OverridableSub()
        sc.OrdinarySub()
        Console.WriteLine()
        Console.ReadLine()
    End Sub
End Module 

It follows that, because we've redefined both of our methods, our output looks as follows:

+ Calling BaseClass Methods
--------------------------------------------------
   BaseClass.OverridableSub() called
   BaseClass.OrdinarySub() called

+ Calling SubClass Methods
--------------------------------------------------
   SubClass.OverridableSub() called
   SubClass.OrdinarySub() called

We are now executing code against a BaseClass instance, and we can tell by the output that we are calling the base class methods. We are also executing code against a SubClass instance, and we can tell that this is executing our new functionality because the output states SubClass.OverridableSub() and SubClass.OrdinarySub(), not BaseClass.OverridableSub() and BaseClass.OrdinarySub().

OK, so we still haven't seen a difference. Both of these methods really accomplished the same thing, didn't they? We had base class behavior, and both methods replaced that behavior with our newly created methods, right? True enough, but let's look at the consequential differences when we pass our reference around in different ways.

We see example after example in the framework where a method takes a parameter as a general type, but we pass in a more specific type. For instance, if a parameter is typed as XmlNode, we can pass in XmlElements, XmlAttributes, and so on. The list goes on. Why can we do this? Because, generally speaking, if I'm an XmlElement, I am also an XmlNode. An XmlElement is a more specific implementation of an XmlNode, so while the XmlElement has more functionality, it still has the base functionality provided by its inherited type, XmlNode. So what happens if we create methods that take BaseClass as a parameter and we pass in an instance of SubClass, as follows?

' Calls the OverridableSub method of the instance passed to it
Public Sub CallOverridableSub(ByVal instance As BaseClass)
    instance.OverridableSub()
End Sub

' Calls the OrdinarySub method of the instance passed to it
Public Sub CallOrdinarySub(ByVal instance As BaseClass)
    instance.OrdinarySub()
End Sub
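As an aside, the general-type/specific-type idea above is easy to see with the framework's own XML types. Here is a minimal sketch (the XML string and module name are just made up for illustration): a method accepts the general XmlNode type, and we hand it the more specific XmlElement and XmlAttribute.

```vbnet
Imports System.Xml

Module XmlNodeDemo
    ' Accepts the general XmlNode type...
    Public Sub PrintNodeName(ByVal node As XmlNode)
        Console.WriteLine(node.Name)
    End Sub

    Public Sub Main()
        Dim doc As New XmlDocument()
        doc.LoadXml("<root id='1'/>")
        ' ...but callers can pass the more specific derived types
        PrintNodeName(doc.DocumentElement)                ' an XmlElement: prints "root"
        PrintNodeName(doc.DocumentElement.Attributes(0)) ' an XmlAttribute: prints "id"
    End Sub
End Module
```

The same substitution rule is exactly what lets us pass a SubClass instance to a parameter typed as BaseClass below.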

Notice that we have two methods that both take an instance of BaseClass, not the more specific SubClass. Additionally, we call the OverridableSub() method in one of these methods and OrdinarySub() in the other. Let's add more code to our Main method to execute these methods, passing in an instance of SubClass. The full code for our example now looks like this:

Module Module1
    Sub Main()
        ' Create an instance of the base class and call it directly
        Console.WriteLine("+ Calling BaseClass Methods")
        Console.WriteLine("--------------------------------------------------")
        Dim bc As New BaseClass
        bc.OverridableSub()
        bc.OrdinarySub()
        Console.WriteLine()

        ' Create an instance of the sub class and call it directly
        Console.WriteLine("+ Calling SubClass Methods ")
        Console.WriteLine("--------------------------------------------------")
        Dim sc As New SubClass
        sc.OverridableSub()
        sc.OrdinarySub()
        Console.WriteLine()

        ' Pass the SubClass instance to a method passed as a BaseClass reference
        Console.WriteLine("+ Calling SubClass Methods passed as BaseClass")
        Console.WriteLine("--------------------------------------------------")
        CallOverridableSub(sc)
        CallOrdinarySub(sc)
        Console.WriteLine()
        Console.ReadLine()
    End Sub

    ' Calls the OverridableSub method of the instance passed to it
    Public Sub CallOverridableSub(ByVal instance As BaseClass)
        instance.OverridableSub()
    End Sub

    ' Calls the OrdinarySub method of the instance passed to it
    Public Sub CallOrdinarySub(ByVal instance As BaseClass)
        instance.OrdinarySub()
    End Sub
End Module


' BaseClass implements an overridable and an ordinary method
Public Class BaseClass
    Public Overridable Sub OverridableSub()
        Console.WriteLine("   BaseClass.OverridableSub() called")
    End Sub
    Public Sub OrdinarySub()
        Console.WriteLine("   BaseClass.OrdinarySub() called")
    End Sub
End Class

' SubClass redefines the BaseClass methods using the Overrides and Shadows modifiers
Public Class SubClass
    Inherits BaseClass

    Public Overrides Sub OverridableSub()
        Console.WriteLine("   SubClass.OverridableSub() called")
    End Sub
    Public Shadows Sub OrdinarySub()
        Console.WriteLine("   SubClass.OrdinarySub() called")
    End Sub
End Class

What happens when we execute our code now?

+ Calling BaseClass Methods
--------------------------------------------------
   BaseClass.OverridableSub() called
   BaseClass.OrdinarySub() called

+ Calling SubClass Methods
--------------------------------------------------
   SubClass.OverridableSub() called
   SubClass.OrdinarySub() called

+ Calling SubClass Methods passed as BaseClass
--------------------------------------------------
   SubClass.OverridableSub() called
   BaseClass.OrdinarySub() called

Notice that we are passing the same instance into both methods, and both methods accept a BaseClass, not a SubClass, parameter. Yet our results are completely different. For OverridableSub, the SubClass method is still called even though we passed the instance in as a BaseClass reference. However, for OrdinarySub, which used the Shadows modifier, we get results from BaseClass. That is because of some fundamentals of object-oriented programming. When we override behavior, it stays overridden no matter what type of reference we pass around (with some exceptions made during type casting). However, when we hide a method with the Shadows modifier, the old functionality is only "lurking in the shadows", not completely overridden. Remember this when trying to redefine functionality that wasn't designed to be overridden. Your Shadows modifier may help you out when calling against your SubClass, but not when executing against a BaseClass reference.
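One last experiment drives the point home. A shadowed method binds based on the compile-time type of the reference, so simply casting our SubClass instance changes which OrdinarySub runs. This sketch reuses the BaseClass and SubClass definitions from the listing above and could be dropped into Main:

```vbnet
Dim sc As New SubClass()

' Assigning to a BaseClass variable changes the reference type, not the object
Dim asBase As BaseClass = sc
asBase.OverridableSub()   ' SubClass.OverridableSub(): virtual dispatch follows the object
asBase.OrdinarySub()      ' BaseClass.OrdinarySub():  shadowing follows the reference type

' Cast back to SubClass and the shadowing method is chosen again
DirectCast(asBase, SubClass).OrdinarySub()   ' SubClass.OrdinarySub()
```

No new objects are created anywhere in this snippet; only the declared type of the reference changes, and that alone is enough to flip which shadowed method the compiler binds to.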