
Sarah Chronicles

Copyright (C)2008-2009 Sarah Chana Mocke. You may republish this material as long as explicit credit is given to the author.
Practical Architecture at TechEd Online
Franco Rother, a Solution Specialist at Microsoft, and I discuss practical architecture in a video recording for Microsoft TechEd Africa 2009.

Click here to view

The Good, the Bad and the Ugly of Architecture (Part 1)

Recently, at TechEd Africa 2009, I spoke to a rather large audience about a topic I entitled ‘The Good, the Bad and the Ugly of Architecture’.

It was essentially a redelivery of material created by Miha Kralj (upon whom much credit is graciously bestowed), but with a local spin and additional material and examples relevant to our local geography.

I find the concept of patterns (predictable, recurring events) in architecture quite intriguing, but it is also one of the most difficult elements of architecture to sustain when we add the external pressures that I spoke about in terms of ‘anti-patterns’ in my presentation.

It’s quite ridiculous really. Architecture is usually developed by Architects who all have different backgrounds, who are intuitive and forward thinking in nature, and whose roles, to which the title refers, differ depending on the IT organization within which they find themselves.

Abstract thinking and architectural vision are not usually valued as highly in organizations as technical detail skills may be. The reason is usually that technical detail people create something tangible, and often this is perceived to be from nothing. While they may pull the proverbial rabbit from the hat, we often forget who defined the hat… and the rabbit.

In essence, an Architect is what an Architect does!

Architect is what Architect does 

Architects often come in two extremes, and from these two extremes anti-patterns are allowed to surface. These are:

  1. The crowd pleaser. The architect who works the room, has the fancy title, has lunch with the right guys and is always working on more than one absolutely strategic thing at once. They generally produce lots of stuff, but that stuff is rarely used and, if it is used, it is rarely useful or helpful.
  2. Practical, technical, cynical guy. The type that’s always ready to sow the seeds of destruction into any work anyone else does, moves straight to implementation from a high-level design of his own making and doesn’t know what all the fluffy stuff is about.

Between those two extremes there are, naturally, many good and bad examples of architects. It’s hard to identify them, but we can start looking at processes and results to help identify them in retrospect. The bad process stuff and the bad output stuff can be considered anti-patterns, and as I step through them it will become clear to you if you’re seeing them within yourself or your organization. The idea is to walk the fine line between the extremes and present something useful.

The three basic roles of an architect can be summarized as follows:

  1. Requirements gathering
  2. Requirements analysis
  3. Creating an implementable design (not depth technical, but something engineers can use)

In this post I will provide some anti-pattern extremes for each of the above, in the hope you can identify and avoid these in your organization.

Requirements gathering:

Anti-pattern: Cognitive Dissonance

The typical "we know it all, we know what they want, it’s just a cookie cutter re-delivery kind of approach”. Maybe even, “I don’t use it, so why would they need it”

I’ve encountered so many architects in my many years of consulting who love quoting the old mantra, “we only use 10% of the functionality in Office”. Who is this “we” to which they refer?

The answer for Office specifically lies in SQM (pronounced “squim”) data. Software Quality Metrics allow Microsoft to understand which features are actually used in a product. It’s interesting to see what we lose if we cut back to 10% of the features. The most commonly used feature in Office is Cut, followed by Paste, but if we cut out the 90% of lesser-used features we lose Word Art. Do you know anyone in your company who uses Word Art? Would it be important to them? How about pivot tables?

Architectural cognitive dissonance such as the above can lead to technology decisions that result in inappropriate tools for the staff who need them most.

I must add, before I proceed, that there may be instances where the types of technologies below apply, but they’re not one-size-fits-all.

The “10% of the features” thinking often leads to acquisition of the cheapest tools offering a “similar” (but often substantially reduced) feature set. For example, before generating an Office productivity architecture that bases the business on a web-based platform, an architect should know that browser limitations exist which not even HTML5 can address. Do we adequately understand the three ways to work within a browser, namely:

  1. The EXEC command, in which apps call the browser control e.g. a text editor.
  2. JavaScript – applications within the browser driving the browser
  3. Push to server (server-based processing)

All three have limitations that need to be factored in when choosing to go from the extreme of a full suite installed locally to being 100% browser dependent.

Perhaps, as an architect, it would be better to establish a range of solutions.

This is one super simple example to illustrate the point. Imagine how many complex examples there are. Understanding the actual requirements, rather than assuming what they are or basing them on your own experiences of the work day, is critical in determining an appropriate solution.

For every anti-pattern, there is an opposite anti-pattern. In this case it’s the Obedient Butler.

Opposite Anti-pattern – The Obedient Butler

This is the, “yes sir, yes sir, three bags full sir,” anti-pattern. Basically if they ask for it we can do it.

“You want to check your home alarm status from Notepad with a simple click of the File menu?”

“You want the cappuccino machine to make kosher pork sausages?”

NO PROBLEM.

When we agree to everything, we forget that we’re gathering requirements and that those requirements need analysis before we agree. Being an obedient butler means you’re a little light on the analysis in your eagerness to please.

I’ve seen IT companies do that constantly. A typical example: Company X wishes to migrate their aging, *nix-based email system to Microsoft Exchange. Everything looks okay… “I mean, how hard can *nix mail be, right?”

So the IT company says they can do it without any problems. Then they find it’s Xenix Mail running on a file system they don’t understand and cannot connect to, running a protocol that isn’t POP or IMAP and using a directory service they don’t understand.

NO PROBLEM. “We’ll do it,” while thinking quietly that there must be a freeware tool on the web or that we can just write a script.

Never mind the kosher pork sausages.

Next anti-pattern please….

Anti-pattern: Napkin Doodle

This one has really developed…

That is, from the smoky restaurant table, where it was written on the white side of the shiny tin foil found in old cigarette boxes, to the clinical environment of the smokeless restaurant napkin. How times have changed! At least architecture no longer involves getting my long blonde hair full of smoke!

Basically it goes like this:

Dude, “I need to put together a collaboration platform for our company.”

Architect, “What do you want to do?”

Dude, “Email obviously, oh… and document sharing, and IM”

Architect, “Do you have a WAN?”

Dude, “Yes”

Architect, “Uh oh, but we can solve it, here’s what you need to do”

Napkin Doodle 

The problem is obvious. Requirements gathering may be detailed, or may be minimal (my guess, given the setting, is that they are minimal in this instance), but the analysis is terrible! Architecture is a skill. Architects form the analysis bridge between business requirements and business analysts on one side and the engineering side of the business on the other. MOSS doesn’t mean much to an engineer, so they might just end up doing a standard deployment rather than engineering it to meet useful criteria for the organization. Ensuring the requirements are adequately listed and captured is a requirement of any architectural analysis.

The next phase after gathering of requirements is analysis. Without adequate gathering of requirements and an adequate understanding of what is needed, analysis will fail and any further outputs and dependencies will just be exponentially worse.

Assuming the requirements are light without actually testing them to any degree is a big mistake. It also does a huge disservice to the role and perceptions of architects in general and demonstrates an arrogance that should be the sole preserve of fresh out of college developers!

Opposite Anti-pattern: Documentation Overkill

Creating architecture can be complex, but that doesn’t usually mean taking every letter in the phone book, sorting them in alphabetical order and then attempting to recreate the complete works of Shakespeare. Tomes are best left to people who are excited by writing and reading them.

My 'favourite architectures' are those that look at things from every conceivable angle, whether it is necessary or useful or not! They are not often created on the basis of adequate requirements gathering, but rather on the underpinnings of analyst reports, magazine opinions and whatever every product that might fit into the relevant category has to offer.

This is why, in my time as an Architect at Microsoft, I have often had to respond to RFPs for solutions that ask questions such as those below. The list below represents a minute subset of an RFP that had nearly 60 pages of questions just like these:

  1. Does your product support Active Directory?
  2. Does your product support LDAP?
  3. Does your product support NIS?
  4. Does your product support eDir?

These exact questions came to me from a customer who utilizes the following directories:

  1. Windows Server 2003 Active Directory
  2. CA Top Secret*

I have to assume that before a customer releases a complex and detailed technical RFP they must at least have had some architecture in place… and they did. It was enormous. It didn’t take into account what was actually in place in the organization, or what might be. It just included every single thing that could possibly ever need to exist in an identity management system.

Can you imagine the 18 months of trade shows, conferences and information gathering that went on to finally produce that RFP? Surely gathering the requirements correctly and then creating an architecture accordingly is more time-efficient and practical. Ultimately the idea is to get to a tangible result, rather than creating a huge additional workload for all those downstream and an indeterminate result.

A good architect might have created something more specific. So let’s revise the questions to what they could have been if the architecture had been correct (again a limited subset, and assuming AD is the primary authentication and authorization engine).

  1. Does your product make use of Active Directory for authentication, authorization or both?
  2. Does your product make use of Kerberos for authentication to Active Directory? If not, what protocol does it use?

That’s all for now. In my next post I will cover some of the anti-patterns found in the analysis stage.

This post is governed under the site terms of use and by the Creative Commons Attribution-NonCommercial-NoDerivs license. Original work of Sarah Chana Mocke. You may republish this work as long as explicit credit is given to the author.

All images in this post are Copyright ©2009 Sarah Chana Mocke

*Top Secret is a registered trademark of CA Inc. All other products are registered trademarks of their owners.

Making Audio a Podcast on my Zune

First of all, I have to say I’m back! Hopefully my future posts will be a lot more technical than this one.

News:

The beta of Hyper-V Management Pack for System Center Operations Manager has been made available.

Send email to MPCC (click the link) to ask for a registration code in order to be able to get hold of the beta on http://connect.microsoft.com

This has been a personal bugbear of mine for a while. I use my Zune in certain ways and it hasn’t really catered for my needs very well; until now, that is!

The functions I most use on the Zune are playlists, shuffle (for songs) and podcasts. They all work really well, but I’m always stuck with a dilemma. Often I download classes and other audio material from the Internet that do not come in the form of podcasts. Usually when I copy those to the Zune they end up in the music section, and then as I operate the song shuffle mode a “tune” will start playing that is actually some news report or class!

I wanted a way to access those on demand without having them mixed up in my music library. My favourite time is workaround time :)

Obviously I could take the files and host them on an Intranet site and create podcasts for each category of file, but that seemed like too much work for such a simple thing. This method is much simpler, but not as pretty.

Fortunately the solution was pretty simple in this instance. I learned from hunting around on the web that all that is necessary is to change a simple property on each file and set the “genre” to podcast.

After a bit of tinkering it all worked out well, but there are some gotchas to be aware of.

  1. Modify the file properties before you copy them into a storage location the Zune software scans. Once you’ve copied the files there, the Zune software seems to randomly reset the properties on files, leading to some pretty unexpected results at times.
  2. The podcast name is taken from the Album property of the file. If you have multiple files that need to fit into the same podcast section then set the album name to the same value for all of them.
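
If you have a lot of files to retag, editing the properties one file at a time in Explorer gets tedious. Below is a minimal, hypothetical sketch of scripting the same change through the Windows Property System (the shell API behind the Explorer Details pane on Vista and later). The property keys, the command-line shape and the exact headers and libraries are my own assumptions rather than anything that ships with the Zune software, so treat it purely as a starting point.

/*
 * Hypothetical sketch only: batch-set the Genre and Album properties on audio
 * files *before* copying them into a folder the Zune software monitors.
 * Uses the Windows Property System (Vista and later); the exact headers and
 * import libraries may need tweaking for your SDK. Assumed build line:
 *   cl settag.c /link shell32.lib propsys.lib ole32.lib shlwapi.lib
 */
#define COBJMACROS
#include <windows.h>
#include <initguid.h>     /* so the PKEY_* and IID_* constants get defined */
#include <shobjidl.h>     /* SHGetPropertyStoreFromParsingName */
#include <propsys.h>      /* IPropertyStore */
#include <propvarutil.h>  /* InitPropVariantFromString */
#include <propkey.h>      /* PKEY_Music_Genre, PKEY_Music_AlbumTitle */
#include <stdio.h>

static HRESULT SetStringProperty(IPropertyStore *store, REFPROPERTYKEY key, PCWSTR value)
{
    PROPVARIANT pv;
    HRESULT hr = InitPropVariantFromString(value, &pv);
    if (SUCCEEDED(hr))
    {
        hr = IPropertyStore_SetValue(store, key, &pv);
        PropVariantClear(&pv);
    }
    return hr;
}

int wmain(int argc, wchar_t **argv)
{
    int i;

    if (argc < 3)
    {
        wprintf(L"Usage: settag <podcast/album name> <file1> [file2 ...]\n");
        return 1;
    }

    CoInitializeEx(NULL, COINIT_APARTMENTTHREADED);

    for (i = 2; i < argc; i++)
    {
        IPropertyStore *store = NULL;
        HRESULT hr = SHGetPropertyStoreFromParsingName(
            argv[i], NULL, GPS_READWRITE, &IID_IPropertyStore, (void **)&store);

        if (SUCCEEDED(hr))
        {
            /* Genre "Podcast" is what moves the file out of the music library */
            SetStringProperty(store, &PKEY_Music_Genre, L"Podcast");
            /* The Album property becomes the podcast name, so keep it identical
               for every file that belongs to the same podcast (gotcha 2 above) */
            SetStringProperty(store, &PKEY_Music_AlbumTitle, argv[1]);
            IPropertyStore_Commit(store);
            IPropertyStore_Release(store);
            wprintf(L"Tagged: %ls\n", argv[i]);
        }
        else
        {
            wprintf(L"Could not open %ls (hr=0x%08lx)\n", argv[i], hr);
        }
    }

    CoUninitialize();
    return 0;
}

Run it as, say, settag "My News Reports" file1.mp3 file2.mp3 before copying the files into the monitored folder; gotcha 1 above still applies.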

File Properties

It works brilliantly. I still need to figure out how to get the sort order of the files correct under each podcast, and also how to apply a logo to the podcast.

Deep Zoom Composer Technology Preview early experiences

Deep Zoom Composer helps mere mortals take advantage of the deep zoom capabilities in SilverLight 2. Obviously it has more detailed uses for developers taking advantage of SilverLight 2, but it's fun for the average user too.

In testing the product I built a really simple deep zoom with two high-resolution photos and then posted them to the Microsoft Windows Live PhotoZoom site. You could save them as SilverLight files if you wished to, but in this case I've chosen to publish them online and then link to them off a web page.

It all worked rather well, although there is a 4 billion pixel limit in the technology preview that did drive me a little insane! It meant resizing some pictures to get more than a single zoom function to work, which I did not enjoy. I would have preferred to be able to resize the pictures within the tool rather than worry about using a second tool to perform such a simple function.

In addition, a preview function within the tool would also have been helpful, rather than having to export to see the result.

Regardless, the tool is quite intuitive and I produced a result quite quickly.

In order to follow the steps I've provided below you will need:

  1. Two or three images of your own
  2. Deep Zoom Composer, which you can download at http://www.microsoft.com/downloads/details.aspx?FamilyID=457b17b7-52bf-4bda-87a3-fa8a4673f8bf&DisplayLang=en
  3. A Windows Live PhotoZoom account which can be created at http://photozoom.mslivelabs.com/
  4. SilverLight 2 installed in your browser, in order to test the outcomes. Although it is installed automatically when you access SilverLight 2 content, you may want to preinstall it by visiting http://www.microsoft.com/SILVERLIGHT/

Building a simple deep zoom

After all of these are installed, open Deep Zoom Composer and choose to create a new project.

You'll notice that you're presented with a blank page and a very basic toolbar at the top of the screen:

 

You'll want to import some pictures first. Click Add image on the far right of the toolbar and multi-select a few.

For the purposes of this blog I selected two.

Once they are imported, click Compose on the toolbar. This is the main area of the application and the place you will create your deep zooms.

In this example I have two photographs of Table Mountain in Cape Town, South Africa, that I will be using. My intention is to have the viewer zoom into the upper right of the mountain (where the cable car station is located) and then find a photograph located there that is a close-up of that area of the mountain.

The first step is simply to drag the first photo onto the canvas.

I dragged the image of the whole of Table Mountain onto the canvas, as pictured below.

 

At the bottom of the Deep Zoom Composer, another toolbar is visible. In this instance we will make use of the Arrow (to select), the Hand (to move the canvas around) and the Magnifying Glass (to zoom in on the picture).

In this instance I will zoom in on the upper right of Table Mountain, by selecting the magnifying glass and repeatedly zooming until I've achieved sufficient granularity to paste the next picture into a relevant location without it being immediately noticeable to the viewer when the image is zoomed out. Use the hand to move the picture around when zooming, if your target area moves off the canvas.

I then proceeded to drag the close-up picture into the zoomed-in location, as pictured below. You'll notice I've taken the close-up picture and dragged it onto the location of the cable car station on the upper right of the mountain. The picture in the background is the zoomed-in version of the original picture of Table Mountain.

 

To see how the initial result will appear to a user, click the Fit to Screen button, next to the magnifying glass.

In my example the picture appeared as below.

 

Believe it or not, you're good to go. We can now export the result.

Select Export on the toolbar.

In this example I'm publishing to the PhotoZoom website. In order to do that you must create an account before attempting to log in via Deep Zoom Composer. I will be creating a new album called Table Mountain.

Once logged into PhotoZoom, I filled out the new album name, "Table Mountain" and clicked Select Album Cover. The latter step is not entirely necessary, but helps to add further context to the content of the album.

I chose to leave the image format as JPG, but you can also use PNG (which is a lossless compression format). In my example my photos probably aren't worth using the extra storage, but I have left the JPEG quality at 95% anyway, which is pretty high.

 

All that's left to do is click, "Upload" and then view the results at Windows Live PhotoZoom.

I rather like the PhotoZoom site, because it provides an iframe link to allow one to insert their albums directly into their own web pages elsewhere. That functionality works exceptionally well, although I did find I needed to set the width and height manually on some pages to ensure that it worked properly on pages with DHTML layers.

 

Overall, and especially for a technology preview, this all works rather well. I'm so impressed, and it's a real wow factor thing to add to personal homepages and blogs :)

The end result is shown below. Zoom in on the upper right of the mountain, and eventually the next photo will appear in full resolution.

The MCA Board Review Unravelled

Two weeks ago I joined the board that reviews candidates for certification as Microsoft Certified Architects: Infrastructure and Microsoft Certified Architects: Solutions.

I also attended a very insightful talk about the MCA Experience that was delivered by George Cerbone from Microsoft Learning.

In compiling my experiences on the board, both in reviewing and interviewing candidates and in deliberating their certification outcomes, the experiences I gained from achieving the certification myself and the lessons learned from George’s presentation, I thought it would be wise and useful to publish some additional material for architects wishing to attain the certification.

As a primer, I suggest you read my previous post, which provides some insight into the actual process. This post is located on my old blog at MCA certification process and details my personal experiences at being run through the process and achieving certification. It contains many insights regarding the process and should be considered additional content to be absorbed over and above this post.

Preparation:

In preparing for the review it is critical you realize your time and the board’s time is valuable. The board does not get paid for their work, and typically take time out of their own working day to certify the candidates. It’s a labour of love, and we’d really like to pass you.

So really expend the energy you need to choose the best possible case study that demonstrates all of the competencies required of you as well as the architecture skills you’re expected to demonstrate. The seven competencies you will be rated on are extremely important, namely:

  1. Leadership,
  2. Communication,
  3. Organizational dynamics,
  4. Strategy,
  5. Process and tactics,
  6. Technology breadth,
  7. Technology depth.

Many candidates fall down because they’re less focused on those listed in 1-5 and almost seem to make the assumption that if they’re successful in 6 and 7, that the rest will come naturally. This is patently not true. It is critical you are able to demonstrate leadership in the teams you work in (either positional leadership e.g. team lead or as an influencer e.g. architect <—> CIO and CEO conversations), that you are able to communicate effectively in articulating your approach, solutions and strategy to a wide audience, that you are able to deal with conflict and that you take a strategic (rather than break/fix or tactical) approach in the work that you do. You will need to be able to demonstrate these skills.

The best place, I feel, that you can demonstrate these is in your case study. If you’ve done it before in the real world, it is very likely you’ll be comfortable dealing with difficult situations in the Q&A. Don’t ignore them and don’t let them derail you.

One of the most obvious areas of communication that people seem to ignore is the frameworks and constructs required of an architect. It is pointless having ideas in one’s head if they can’t be translated into a format that other architects or engineers can understand. Typical examples of this are people missing the ability to demonstrate skills in UML or any other format for describing process and flow. It is incumbent on the candidate to demonstrate that they have the means to adequately describe the architectural components in a format that can be easily understood by other architects and members of a solution delivery team. In failing to do so the candidate is actually demonstrating that they have no methodology for gathering and articulating these details. Put something in your case study that will provide actual evidence that you do this and that you know what you’re doing. Under no circumstances include material of this nature in your case study unless you were intimately involved in the creation of the material.

Remember, the most important part of your case study, other than it being a project or solution of enterprise scale and technically relevant (no 1957 case study of supercomputing in Area 51), is evidence, evidence, evidence, and something that benefited the business, not just IT! Don’t pick the coolest project you were involved in; pick the case study which demonstrates responsibilities and accountabilities you carried and architecture strategies and deliverables you created.

At a basic level, you must articulate the business problem you were addressing, the business requirements that were defined, how success was measured, the organizational dynamics you encountered, how you derived and refined an architecture and how you drove that architecture into the business. Naturally each of these will require some artifact you created as a proof point of the architecture work you have performed. It is strongly recommended that you follow the template provided to ensure you’ve covered all areas appropriately. It is there to make your life simpler.

In your area of breadth knowledge, take the time to understand the other solutions that are out there. The board does not expect you to know each and every competing solution, but they would like to see evidence that you can compare and evaluate solutions for fit in a number of scenarios. Try to remember that even though this is a Microsoft Certified Architect certification it is actually technology agnostic. People from Microsoft’s competitors that are not nearly depth experts on Microsoft technologies have been certified and some sit on the board. You might even encounter one.

Lastly, reading up about a subject is unlikely to see you through the process. You’re far more likely to succeed with the few things you actually know intimately than a general awareness of stuff in general.

As a general note, proofread your submission a million times if necessary. Before each board review the material is read thoroughly by the board members. The material submitted helps the board members form an initial impression of the individual as well as formulate the questions they may ask. Spelling mistakes, grammatical errors, incomplete paragraphs and bad structure may all lead the board members to wonder what type of material is being produced for your own organizations and customers.

You don’t know what you don’t know:

After you’ve completed your documentation you’re ready for the board. In my experiences so far the presentations have gone reasonably well. It’s the question and answer sessions that do not go so well if the candidate is not suitably prepared! See above!

Typically when things have gone wrong it’s because the candidate has misrepresented the scope of their work and is in essence trying to paint a picture of their involvement in the project that is far greater than their actual involvement was.

Be aware that it is extremely likely that the board will consist of highly competent subject matter experts that will easily be able to assess your level of competence in your claimed depth, breadth and vertical areas of knowledge. They will not ask unreasonable questions, but they are likely to ask you to draw diagrams of your architecture, run through how you got to the architecture you did, discuss the critical conversations you had with various stakeholders in the business and IT departments and will check your knowledge in your breadth and depth technology areas. Many of the members of the board will have performed those duties numerous times and also have interviewed prospective staff for employment at their own organizations. The breadth expertise on the board will extend beyond Microsoft’s technologies too. For example, my breadth expertise extends to many areas, but as an example extends across Active Directory, Novell eDirectory, OpenLDAP, IBM Directory Server and many Identity and Access Management solutions. People who claim directories and security as breadth areas are pretty likely to be well tested by me, despite my being an employee at Microsoft!

So, speak about what you know. Be competent in the areas expected of you and… be very aware of falling into a trap of claiming depth knowledge in a subject area where you only have breadth knowledge. It is always better to say, “I don’t know,” than to fall into a pattern of wasting the time you have at your disposal to prove you’re architect material.

Before you came into the room you were already under pressure from the mind games you had been playing with yourself. You will come into the review with a great deal of uncertainty and one of the most common outcomes is that you will try to prove you know everything! Rather recognize the pressure you’re under, accept that it’s there and focus only on what needs to be done. It’s normal to feel that way, but you need to consider that in conversations at your company and with customers you will also be under pressure, and the board needs to ensure that pressure will not make your judgement questionable. The questions will be fair and they are not designed to trick you. All they will be doing is probing your expertise. It is never a good idea to get too defensive or to become aggressive. These can often be perceived as arrogance. The board knows very well the pressures that you’re experiencing. We’re your peer industry architects and have been through the same process ourselves.

A fantastic hint at what is to come can be found in the InfoPath/XML self-assessment form you complete as part of the application process. Take some lessons from that. Figure out where you can improve and even consider delaying the board review until such time as you think you have covered the areas you consider important.

Also, realize the board is also human and can make mistakes. You can never be sure what type of question they may ask, but if you think the approach they’re taking in solving a breadth or depth problem is less suitable than one you think may work, then be prepared to say so convincingly, i.e. with evidence, expected outcomes and benefits. In my own review I did not like a scalability question that was thrown at me, so I asked clarifying questions in order to understand what they were trying to achieve and then gave the board an alternate solution I felt was more appropriate. I’m not quite sure how they took it, but they did pass me! I’ll bet they found my approach fascinating and that they learned from it.

Don’t do it because it sounds cool:

Sure, being an MCA sounds like an important certification. It is; but it’s important for a specific type of skill, it is not the be-all and end-all of certification! As an example, Microsoft also provides Microsoft Certified Master (MCM) certifications for specific technology domains. The MCM is essentially the equivalent of an MCA. Whereas the MCA is a certification for the architecture domain of knowledge, the MCM certification for Messaging encompasses the messaging knowledge domain, and MCM: Messaging candidates will be expected to be able to architect messaging solutions based on Microsoft products (email, scheduling, unified communications technologies) and have depth technical knowledge of the products. The MCM is just as hard to attain and is meant for people who have their career sights set on being depth technology experts.

Pick the certification that suits your skills and career goals. Both the MCA and MCM certifications are advanced-level certifications that require deep skill and years of experience in their knowledge domains. It is very rare that someone would be competent at both because, at the risk of sounding absurd, the types of people that chase them are typically different characters altogether.

Lastly, don’t assume because your title is Architect in your current organization that you’re a natural fit for the certification. The certification is an industry benchmark, not an intra-company grade scale or peer benchmark. Check the competencies, check what work you’ve done and be sure that those map to what is expected of you.

This post is governed under the site terms of use and by the Creative Commons Attribution-NonCommercial-NoDerivs license. Original work of Natasha Anne Mocke. You may republish this work as long as explicit credit is given to the author.

The Mystery of Hyper-V's Limit Processor Functionality? (Part 3 - Officially)

I found out a bit more from the product group. It seems some legacy operating systems, like Windows NT 4.0, bugcheck if they perform a CPUID and have more than three leaves returned. The checkbox is there to allow us to try and get them working. So I was on the right track after all. I still think this functionality does limit what functions of the processor can be detected, and I suggest you leave it off if you can.

It is important to note that such legacy operating systems are not necessarily supported by Hyper-V.

For more information about how Microsoft defines which operating systems are supported on Hyper-V, check out the Windows Server Virtualization Blog at http://blogs.technet.com/virtualization

And in case you didn't know yet! Hyper-V RC1 was released to Windows Update last week!

The Mystery of Hyper-V's Limit Processor Functionality? (Part 2 - Final)

In my previous post I discussed how I went about trying to determine the differences in processor functionality provided by Hyper-V's Limit Processor Functionality (LPF) checkbox. You probably want to read through that in order to get the necessary background to understand this final instalment.

In this post I discuss how you can determine:

  1. If your operating system is running on a hypervisor,
  2. The processor feature differences presented for an operating system running directly on hardware versus a parent partition operating system on a hypervisor,
  3. The processor feature differences presented for a child partition operating system running without LPF set versus one that does.

In essence I ran a number of tools and found some minor discrepancies in the results I received from each. The basic premise I followed was to run them on Windows Vista x86, Windows Server 2008 (Parent partition with the Hyper-V RC0 role enabled) and then in a child partition running Windows XP SP3 with the LPF checkbox enabled and disabled.

I felt I was getting nowhere using the tools, and couldn't arbitrate between the results because I had no view of what they were doing to determine the processor information they reported. So I wrote a tool, and in the process learned a lot!

In order to do that I had to determine a way to find out the processor identification and the features that it supported. Fortunately both Intel and AMD (and I assume other processor manufacturers that provide x86 and x64 support) provide an instruction called CPUID to do this. "Brilliant", I thought, "I'll just use that and find out everything I need to know." And so I did!

I used a combination of C and Assembly to write a simple command line program that I could run in all three environments to haul out and detect the information I required. I did not go too in-depth, but did manage to find out some really interesting bits of information.

As with the last post, I'm only going to focus on the differences I found between the various environments.

The first comparison, below, provides the differences found when running Windows Vista x86 directly on the hardware versus Windows Server 2008 with the Hyper-V role enabled. It makes for interesting reading!

image

The first and most notable difference that presents itself is the number of CPUID functions (leaves) that are presented in each environment. There are 10 when Windows runs on the bare metal and only 6 with the Hyper-V hypervisor enabled. These leaves are important, because they report the lists of processor capabilities, and calling CPUID with EAX set to 0 will tell an operating system how many leaves it can query in order to determine the processor functionality. Effectively this already limits the set of processor functions that the parent partition can determine versus an operating system running directly on the hardware without a hypervisor. The missing features relate to direct cache access and performance monitoring capabilities. Although not that interesting for the purposes of this blog entry, they are missing when running on a hypervisor.
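
If you want to reproduce that first check yourself, a compiler intrinsic is sufficient; no hand-written assembly is required. Here's a minimal illustrative sketch (an assumed reconstruction using the MSVC __cpuid intrinsic, not my original C and Assembly tool) that prints the highest standard CPUID function and the vendor string; the 10-versus-6 difference above is exactly this value.

/* Minimal sketch: print the highest standard CPUID function and the CPU
 * vendor string. Assumes the MSVC __cpuid intrinsic (32- or 64-bit builds). */
#include <stdio.h>
#include <string.h>
#include <intrin.h>

int main(void)
{
    int regs[4];              /* regs[0]=EAX, regs[1]=EBX, regs[2]=ECX, regs[3]=EDX */
    char vendor[13] = {0};

    __cpuid(regs, 0);         /* EAX=0: highest standard function + vendor string   */

    /* The vendor string comes back in EBX, EDX, ECX (in that order) */
    memcpy(vendor + 0, &regs[1], 4);
    memcpy(vendor + 4, &regs[3], 4);
    memcpy(vendor + 8, &regs[2], 4);

    printf("Highest standard CPUID function: 0x%08X\n", (unsigned)regs[0]);
    printf("Vendor string: %s\n", vendor);
    return 0;
}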

What is more interesting are the startling differences that present themselves when the feature flags (the bits that report what the processor's capabilities are) are queried. Although it is common sense, it remains interesting that the processor reports that it supports Virtual Machine eXtensions (VMX) when Vista is run directly on the hardware, but does not do so when run on a hypervisor. Presumably this prevents a hypervisor from running in a child partition, because the child operating system will not detect the processor capability.

I found it interesting that MONITOR/MWAIT is supported on the hardware, and that SYSCALL/SYSRET is only present when the operating system is run on a hypervisor. AMD documentation describes SYSCALL/SYSRET as follows:

"SYSCALL and SYSRET are instructions used for low-latency system calls and returns in operating systems with a flat memory model and no segmentation. These instructions have been optimised by reducing the number of checks and memory references that are normally made so that a call or return takes less than one-fourth the number of internal clock cycles when compared to the current CALL/RET instruction method."

I assume that SYSCALL/SYSRET are enabled when VMX is active to help speed up the performance of the child partitions.

MONITOR and MWAIT instructions are described by Intel as follows:

"The MWAIT instruction is designed to operate with the MONITOR instruction. The two instructions allow the definition of an address at which to ‘wait’ (MONITOR) and an instruction that causes a predefined ‘implementation-dependent-optimised operation’ to commence at the ‘wait’ address (MWAIT). The execution of MWAIT is a hint to the processor that it can enter an implementation-dependent-optimised state while waiting for an event or a store operation to the address range armed by the preceding MONITOR instruction in program flow."

In researching this topic I came across an interesting set of documentation called Hypervisor Virtual Processor Execution at http://msdn.microsoft.com/en-us/library/bb969750.aspx. It's a bit of a pity I had to do so much work of my own just to discover the documentation, but at the same time I learned quite a lot of new information!

You'll see in the screen shot above that I had no definition for a feature called Bit 31 (bit 31 of the ECX feature flags returned by CPUID function 1). Bit 31 was not documented in the Intel CPUID documentation, but it is set on systems that have a hypervisor running! This is a great way for applications and operating systems to determine if they are running on the hardware directly or on a hypervisor!

Before visiting the Limit Processor Functionality differences further I ended up segueing to find out more about this function. I did some further research and discovered that Microsoft provides a new set of values at 0x40000000, which return the processor identification, vendor, features and minor and major release details of the hypervisor. What a discovery! I modified my code slightly to include querying the range and it returned the following when run:

CPUID (40000000):
           Vendor string: Microsoft Hv

That was kind of cool, because now not only could I determine if I was running on a hypervisor, but I could also find out who the vendor of the hypervisor was.
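
Tying those two discoveries together, here is another small illustrative sketch (again an assumed MSVC __cpuid reconstruction rather than my original tool): it tests the undocumented bit 31 flag and, only if the flag is set, reads the hypervisor vendor string from function 0x40000000.

/* Sketch: detect a hypervisor via CPUID and, if one is present, print its
 * vendor. Bit 31 of ECX from standard function 1 is the "hypervisor present"
 * flag; function 0x40000000 then returns the hypervisor vendor string in
 * EBX, ECX, EDX ("Microsoft Hv" for Hyper-V). MSVC __cpuid intrinsic assumed. */
#include <stdio.h>
#include <string.h>
#include <intrin.h>

int main(void)
{
    int regs[4];
    char hvVendor[13] = {0};

    __cpuid(regs, 1);                              /* standard feature flags         */
    if (!((unsigned)regs[2] & 0x80000000u))        /* ECX bit 31: hypervisor present */
    {
        printf("No hypervisor detected - running directly on hardware.\n");
        return 0;
    }

    __cpuid(regs, 0x40000000);                     /* hypervisor identification leaf */
    memcpy(hvVendor + 0, &regs[1], 4);             /* EBX */
    memcpy(hvVendor + 4, &regs[2], 4);             /* ECX */
    memcpy(hvVendor + 8, &regs[3], 4);             /* EDX */

    printf("Hypervisor detected. Vendor string: %s\n", hvVendor);
    return 0;
}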

Finally I needed to determine the differences for child partition operating systems running with the LPF setting disabled and enabled. The resulting differences are presented below:

image

At this point I was more than a little disappointed, so I decided to look at the Intel CPUID documentation once more, and see what features could be defined by CPUID(3) to CPUID(6) that may help explain what could be different. For those that are technical, my "Register Index" above is actually the largest standard function number returned when I call CPUID with EAX set to 0.

Function 3 provides the Processor Serial Number. This was only provided in the Pentium III, and so does not really explain any key differences that could have been caused by enabling LPF.

Function 4 provides deterministic cache parameters. This is a little more interesting because a BIOS (yes, even the BIOS for Hyper-V's partitions!) will use this to determine the number of processor cores in a specific processor package. If you look at the results I provided for SiSoft Sandra Lite in my previous blog post, "The Mystery of Hyper-V's Limit Processor Functionality? (Part 1)", you will see the results differ when LPF is enabled or disabled. This could help explain why.
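
For completeness, here's a hedged sketch of how the core count can be pulled out of function 4. It assumes the MSVC __cpuid/__cpuidex intrinsics so that sub-leaf 0 can be selected explicitly, and it only applies when function 4 is actually reported, which, as the comparison above shows, is no longer the case once LPF is enabled.

/* Sketch: derive the core count per physical package from CPUID function 4,
 * sub-leaf 0 (deterministic cache parameters). EAX[31:26] + 1 is the maximum
 * number of addressable processor core IDs in the package. Only meaningful
 * when the highest standard function is at least 4. */
#include <stdio.h>
#include <intrin.h>

int main(void)
{
    int regs[4];

    __cpuid(regs, 0);
    if (regs[0] < 4)
    {
        printf("CPUID function 4 not available (highest standard function: %d).\n", regs[0]);
        return 0;
    }

    __cpuidex(regs, 4, 0);   /* function 4, sub-leaf 0 */
    printf("Max addressable core IDs per package: %u\n",
           (((unsigned)regs[0] >> 26) & 0x3F) + 1);
    return 0;
}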

Function 5 provides further information about MONITOR/MWAIT support. It's an interesting function not to have provided when LPF is enabled, because limited MWAIT support can be provided to a child partition, but obviously not with LPF enabled.

At http://msdn.microsoft.com/en-us/library/bb969743.aspx it says, "Partitions that possess the CpuPowerManagement privilege can use MWAIT to set the logical processor’s C-state if support for the instruction is present in hardware". If my research is correct this could only be true if the partition possesses the CpuPowerManagement privilege and LPF is not enabled.

Function 6 provides details about the Digital Thermal Sensor. Intel Core 2 Duo processors have a newer Digital Thermal Sensor than older processors. This is provided to allow the system to determine the processor temperature for each core and adjust clock speed and voltage. Systems can slow the processor to reduce operating temperature. I'm not particularly sure how this is useful in a child partition or why it should not be present in an environment where LPF is enabled, but there it is.

So, in an LPF environment it appears Deterministic Cache Parameters are not present, MONITOR/MWAIT functionality can never be used and the digital thermal sensor information is not available.

Lastly, according to SiSoft Sandra the Maximum Physical and Virtual Address space for a child partition without LPF enabled versus one where it is enabled is 40-bit and 48-bit versus 36-bit and 32-bit respectively. This would indicate an LPF enabled child partition can address far less memory than child partitions that do not have it set.
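
SiSoft Sandra presumably derives those address-width numbers from extended CPUID function 0x80000008, which reports the supported physical address bits in EAX[7:0] and the linear (virtual) address bits in EAX[15:8]. A final hedged sketch in the same style, checking first that the leaf is reported at all:

/* Sketch: report the physical and linear (virtual) address widths the
 * processor claims to support. Extended function 0x80000008 returns the
 * physical address bits in EAX[7:0] and the linear address bits in EAX[15:8];
 * check 0x80000000 first to confirm the leaf is reported at all. */
#include <stdio.h>
#include <intrin.h>

int main(void)
{
    int regs[4];

    __cpuid(regs, 0x80000000);                 /* highest extended function */
    if ((unsigned)regs[0] < 0x80000008u)
    {
        printf("CPUID 0x80000008 is not reported by this (virtual) processor.\n");
        return 0;
    }

    __cpuid(regs, 0x80000008);
    printf("Physical address bits: %u\n", (unsigned)(regs[0] & 0xFF));
    printf("Linear address bits:   %u\n", (unsigned)((regs[0] >> 8) & 0xFF));
    return 0;
}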

I'm sure there is more to this. If and when I find out more information I'll be sure to share it.

This post is governed under the site terms of use and by the Creative Commons Attribution-NonCommercial-NoDerivs license. Original work of Natasha Anne Mocke. You may republish this work as long as explicit credit is given to the author.

The Mystery of Hyper-V's Limit Processor Functionality? (Part 1)

Recently I became rather intrigued with Hyper-V's Limit Processor Functionality (LPF) function. One little checkbox became such an obsession that I started wasting hours of my time trying to find out exactly what it does. The dialogue says, "Limit processor functionality to run an older operating system such as Windows NT on this virtual machine".

That sounds pretty plausible, because the newer multiple-core processors with hardware-accelerated virtualization were not around in the days when these older operating systems were current.

image

Simple, I thought. I'd find a selection of system information tools and run those with the checkbox flagged and without. I'd then compare the processor details and get the answer. So I chose three popular tools I thought would help me out. They were Everest, SiSoft Sandra and CPU-Z. They actually ended up adding to my confusion!

To understand the results, it is important to understand what platforms I was testing on.

In order to test a base OS without a hypervisor, I ran Vista 32-bit and tested the CPU results on that.

I then ran tests on the parent partition for Windows Server 2008 with the Hyper-V role enabled, and then on two child partitions running Windows XP with the Limit Processor Functionality checkbox on and off. I used XP SP2 because it is not enlightened and is not aware of VMBus. I also tried Windows XP SP3 with the Integration Components for Hyper-V, but that made no difference to the result sets, so the results I'm providing apply to any version of Windows XP. The child partitions were only provided with a single CPU, even though I have a Core 2 Duo processor, because Hyper-V only provides multiprocessor support for Windows Server 2003 and 2008 child partitions. The tools gave the same results for the Windows Server 2008 parent partition and the Windows XP child partition without LPF set, so the former is not shown.

So here's the breakdown in tabular format for CPU-Z v1.44.3. I've only listed what was different, and not each item in the tool. That could take forever!

CPU-Z          | Windows Vista SP1            | Windows XP Child - Default settings | Windows XP Child - LPF Enabled
Processor Name | Intel Mobile Core 2 Duo 7100 | Intel Core 2 Duo                    | Intel Core 2
Code Name      | Merom                        | Conroe                              | <blank>
Package        | Socket P478                  | Socket 775 LGA                      | Socket 775 LGA
Cores          | 2                            | 1                                   | 1
Threads        | 2                            | 1                                   | 1

At face value, CPU-Z showed differences, and these differences encouraged me. I did expect to see some differences in the Instructions line, which shows the processor instruction sets, but did not. They all listed: MMX, SSE, SSE2, SSE3, SSSE3, EM64T.

Thinking I was on to something I decided to look at other tools with a view to finding out a deeper level of information to truly explain what the LPF function does.

I tried the freeware version of SiSoft Sandra Lite XIIc v2008.1.12.34 next, and that's where things started getting interesting! Instead of enlightening me, it just left me more confused! The results it gave were as follows (again, just those that differ are shown).

SiSoft Sandra Lite XIIc               | Windows Vista SP1  | Windows XP Child - Default settings | Windows XP Child - LPF Enabled
Type                                  | Mobile, Dual-Core  | Dual-Core                           | <blank>
Cores per processor                   | 2                  | 2                                   | 1
Threads per core                      | 1                  | 1                                   | 2
Package                               | FC µPGA (Socket P) | FC LGA775                           | FC LGA775
Maximum Physical / Virtual Addressing | 36-bit / 48-bit    | 40-bit / 48-bit                     | 36-bit / 32-bit
HTT - Hyper-Threading Technology      | No                 | No                                  | Yes
VMX - Virtual Machine extension       | Yes                | No                                  | No

The results left me confused. SiSoft Sandra had picked up some results that conflicted with what I had seen in CPU-Z. I was expecting the same results and more, but now what I had was two tools in conflict on thread capability, and also a Hyper-Threading result I really wasn't expecting.

I decided to look further and find a tool that would help me close the gap. Basically I thought if two tools were in agreement out of the three, then those results were possibly the correct results.

So I made use of an evaluation version of Everest Ultimate Edition v4.20.1170. The results were starting to gain more clarity, and at the same time this tool started helping me understand exactly what was going on. Unfortunately its results really only provided me with a starting point, but wow, what a starting point it was!

Everest Ultimate Edition                | Windows Vista SP1                                          | Windows XP Child - Default settings        | Windows XP Child - LPF Enabled
CPU Type                                | Mobile DualCore Intel Core 2 Duo T7100, 1782 MHz (9 x 198) | Mobile DualCore Intel Core 2 Duo, 1800 MHz | Mobile Intel Core 2 Duo, 1800 MHz
Motherboard Name                        | Dell Latitude D630                                         | Microsoft Virtual Machine                  | Microsoft Virtual Machine
CPU Type                                | Mobile DualCore Intel Core 2 Duo T7100                     | Mobile DualCore Intel Core 2 Duo           | Mobile Intel Core 2 Duo
Motherboard Chipset                     | Intel Crestline-GM GM965                                   | Intel 82440BX/ZX                           | Intel 82440BX/ZX
HTT / CMP Units                         | 0 / 2                                                      | 0 / 0                                      | 0 / 0
MONITOR / MWAIT Instruction             | Supported                                                  | Not Supported                              | Not Supported
SYSCALL / SYSRET Instruction            | Not Supported                                              | Supported                                  | Supported
Virtual Machine Extensions (Vanderpool) | Supported                                                  | Not Supported                              | Not Supported
Hyper-Threading Technology (HTT)        | Not Supported                                              | Not Supported                              | Not Supported
CPUID (0)                               | 0000000A-756E6547-6C65746E-49656E69                        | 00000006-756E6547-6C65746E-49656E69        | 00000002-756E6547-6C65746E-49656E69
CPUID (80000000)                        | 80000004-00000000-00000000-00000000                        | 80000008-00000000-00000000-00000000        | 80000008-00000000-00000000-00000000

I was starting to see some consistency in the results, and had discounted the conflicting Hyper-Threading (HTT) results that SiSoft Sandra Lite gave, deciding that HTT was not actually supported. I still didn't know why the anomaly presented itself, and it took a lot of research and puzzling through things to find out why!

As it turned out, the last two rows of the Everest results, plus some other registers I've not listed from its result set started to point me to the reasons why...

For now, I knew there were differences in the way an LPF virtual machine views a processor, versus one that does not have LPF enabled.

That's it for now.

In Part 2, the final part, I'll take a deeper dive into CPUID and also help you determine whether an operating system is running on a hypervisor or not. Remember, that applies to the parent partition too, where the motherboard name is not actually presented as a virtual machine. It is possible to tell if you're running in a virtual machine pretty easily, but finding out if you're running in a hypervisor is actually even simpler if you know how...

This post is governed under the site terms of use and by the Creative Commons Attribution-NonCommercial-NoDerivs license. Original work of Natasha Anne Mocke. You may republish this work as long as explicit credit is given to the author.

Hyper-V MMC for Vista SP1 is released to RC

Just a quick post today.

Good news! The RC for the Hyper-V MMC has finally been released. It's designed to work on Vista x86 and x64 SP1. SP1 is required.

Check out the Windows Virtualization Team Blog at http://blogs.technet.com/virtualization for more information

Vista x64 Edition: http://www.microsoft.com/downloads/details.aspx?FamilyId=450931F5-EBEC-4C0B-95BD-E3BA19D296B1&displaylang=en

Vista x86 Edition: http://www.microsoft.com/downloads/details.aspx?FamilyId=BC3D09CC-3752-4934-B84C-905E78BE50A1&displaylang=en

 

Using the Office Live Workspace Beta

My latest, most favourite tool on the web has to be Office Live Workspace Beta. It's an interesting extension to Microsoft Office that effectively allows one to create a free SharePoint repository for Word, Excel and PowerPoint documents. It also provides add-in tools for Microsoft Word, Excel and PowerPoint to allow you to interact directly with your workspace, just as you would with Office SharePoint Server on an enterprise network.

You can also share your workspace with others and allow them to view, edit and comment on your documents and also collaborate on documents by accessing a shared screen with you.

Office Live Workspace beta works with Windows XP, Windows Server 2003 and Windows Vista using either Internet Explorer 6 or 7, or Firefox 2.0. If you're using Apple OS X 10.2.x you can access Live Workspace with Firefox 2.0 too. I tested it with Windows Vista x64 and both Internet Explorer 7.0 and Firefox 3.0 Beta 4. In fact some of these screen shots were done with Internet Explorer 7.0 and others with Firefox 3.0. I challenge you to try and determine which are which. I think Microsoft has done an excellent job of getting all the functionality to work in both.

To get started, visit http://officeliveworkspacecommunity.com, which is also redirected from http://workspace.officelive.com, a far shorter and simpler URL :)

The first thing you'll notice is that it's been selected as a finalist in CNET's Webware 100 Productivity award for Web 2.0 applications and services. If you feel it is a worthy winner then vote for it at http://www.webware.com/html/ww/100/2008/prod.html.

That aside, you will need to sign up with a Windows Live ID. It is a Live Workspace after all, and that's got to mean you need a Live ID.

You then get to sign-up free of charge, which is always compelling in and of itself :)

Sign-up

Once you've signed up and signed in you will be presented with a screen similar to the following. Obviously your workspace will be empty. Mine has a few files stored in it already.

image

You will be able to create new documents, create workspaces and add documents right away. It is possible to add multiple documents at once, either by selecting the files directly or through drag and drop. I used drag and drop and it worked very well. You can also install the Office add-in tools, which extend your Microsoft Office applications with menu commands for your Office Live Workspace, enabling your Office applications to work directly with your workspace. It's a pretty decent feature to have and it makes working with the workspace far more efficient and seamless.

I do, however, wish the add-ins extended to Outlook as well. As an example, I can create a list of contacts in my Live Workspace, but I can only do so by typing them in or importing them from a file. What I'd have greatly preferred is to be able to upload them directly from Outlook, just like services such as Plaxo and LinkedIn allow. It would have been a really effective way to upload all of my contacts and share them with others that I choose to invite to my workspace.

Installing the Office Add-In tools is really easy. Simply click the button on the toolbar and follow the prompts. If you're using Windows Vista it is important to read the information it provides at the end. There are updates you can install on Vista that will enhance your Office Live Workspace performance.

Office Live Addin

Office Live Workspace Update for Windows Vista

After having installed the Add-Ins you will find the Office menus will have new additions in Word, Excel and PowerPoint. These allow you to save and open files directly to and from your Live Workspace respectively.

Word Save to live - Sign In

Fortunately you are able to choose to cache your password in the dialogue presented, otherwise I think the Live Workspace offering may be a little less compelling. I understand the need for security, and also that being prompted for a password constantly is a small price to pay for being secure, but let's not pretend it would not be annoying :)

Word Save to live - Sign In Dialog

Finally you will be presented with a dialogue allowing you to open or save files to your workspace. In the example below I've chosen to open a document. You will see how the workspace appears in the open dialogue and how similar it appears in that dialogue to the actual site I've shown earlier in this post.

Word Save to live - Save

If you select a document and choose to share it by choosing the appropriate option on the toolbar, as indicated below, you will then receive a screen allowing you to specify the various options for sharing.

Select Share

The screen that follows has an extensive number of options. The best thing you can do is experiment with these. In a nutshell, you can choose people who get to edit the document, Editors, and people who can view the document, i.e. Viewers. You can either type in the email addresses, or you can choose them from a list if you've added them to your Windows Live Address Book. I would have enjoyed better integration with my local Outlook client as well, because even as a home or small/home business user I'm more likely to store all my contact information there for synchronization with my phone and integration with other Web 2.0 services I may use. Still, I think this is a significant start in the right direction.

Share Document

There are also some other compelling functions in the Share Documents screen. You are able to save versions of your document and recall them. This allows users to collaborate without the worries inherent in trying to recover previous versions that email collaboration or direct access to a file share often presents.

You can also open a comments pane and make remarks about the current version of the document without having to place them within the document.

Sharing your screen requires the installation of Microsoft SharedView beta. It is only 3.2MB in size, and as such could hardly be considered an onerous download, even for bandwidth-constrained Internet geographies such as mine. SharedView allows up to 15 people in different locations to see what's on your screen. Naturally all of those people will require a Microsoft Live ID to access the environment.

SharedView

The SharedView functionality will be relatively familiar to those who have used Microsoft LiveMeeting before. It does have some differences, especially in that up to 15 people can participate at once. You can either run SharedView from the Start bar in Windows or you can invoke it directly from within your Workspace by selecting Share Screen from the Share button on the toolbar. When SharedView is run, the desktop real estate and OS window area is reduced and a toolbar is placed at the top of the screen.

SharedView Toolbar

It provides some interesting functionality over and above what you would expect, such as being able to invite participants and block people from contacting you. The interesting functionality includes being able to provide handouts, so you can ensure people have the right materials before collaborating. These handouts can essentially be any type of file, and as such SharedView smartly reminds users to respect copyrights, but does so without the usual nagging application-modal dialogue boxes we're used to. The Share button allows you to share just about any application, not just Live Workspaces. All in all it's pretty intuitive and appears to work well enough.

I did notice a screen flicker followed by a rather familiar task bar notification, but that's probably a small price to pay for the functionality.

Vista Basic

All in all, this offering from Microsoft is simple to use and a far more effective way of sharing documents with others than the usual browse-and-upload functionality provided by so many sites. It is compelling for me because it is so tightly integrated with Microsoft Office, and also because it uses Live IDs, which are quite ubiquitous in the world I inhabit.

This post is governed under the site terms of use and by the Creative Commons Attribution-NonCommercial-NoDerivs license. Original work of Natasha Anne Mocke. You may republish this work as long as explicit credit is given to the author.

Installing Hyper-V RC0

So things are starting to get more than a little exciting. RC0 is finally here, and pretty much bang on schedule!

Not so long ago I posted about installing Windows Server Virtualization on Windows Server 2008 RC0, now I'm posting about installing Hyper-V RC0 on Windows Server 2008 RTM. That's a lot of progress in a short time. It really looks like Hyper-V is going to ship on time!

Before we start it is important to bear a few things in mind:

  • All virtual machines/child partitions created in the beta need to be recreated.
  • VHDs can be migrated, except those that include the beta integration components for Windows Server 2003.
  • Hyper-V RC0 will only install on Windows Server 2008 RTM, not on pre-release versions of the operating system.
  • There are new integration components to be installed from VMGuest.ISO for supported operating systems prior to Windows Server 2008. This includes Windows Vista SP1.
  • There is a QFE to install in child partitions running Windows Server 2008.
  • There will be an upgrade path from Hyper-V RC to RTM. Let's hope this is the last time we will be asked to recreate child partitions.
  • Upgrading of saved states and online snapshots will not be supported. You will need to delete saved states and merge snapshots before performing upgrades (the sketch after this list shows a quick way to spot them).
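
As a quick sanity check before upgrading, a short script can flag the files that typically correspond to saved states and snapshots. The sketch below is Python; it assumes the usual Hyper-V file extensions (.vsv/.bin for saved-state data and .avhd for snapshot differencing disks), the folder path is a hypothetical placeholder, and it only reports what it finds rather than deleting or merging anything.

    import os

    # Hedged sketch: walk a Hyper-V data folder and flag files that usually
    # indicate saved states (.vsv/.bin) or snapshot differencing disks (.avhd),
    # both of which need to be cleaned up before upgrading from the beta.
    VM_ROOT = r"D:\Hyper-V"               # hypothetical path - adjust for your system
    SAVED_STATE_EXTS = {".vsv", ".bin"}   # assumed saved-state file extensions
    SNAPSHOT_EXTS = {".avhd"}             # assumed snapshot differencing disks

    def scan(root):
        saved, snaps = [], []
        for dirpath, _dirs, files in os.walk(root):
            for name in files:
                ext = os.path.splitext(name)[1].lower()
                full = os.path.join(dirpath, name)
                if ext in SAVED_STATE_EXTS:
                    saved.append(full)
                elif ext in SNAPSHOT_EXTS:
                    snaps.append(full)
        return saved, snaps

    if __name__ == "__main__":
        saved, snaps = scan(VM_ROOT)
        for f in saved:
            print("Saved state to delete before upgrading:", f)
        for f in snaps:
            print("Snapshot to merge before upgrading:", f)
        if not saved and not snaps:
            print("No saved states or snapshots found under", VM_ROOT)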

So on to the installation...

It is important to bear in mind the following regarding the hardware you wish to use for Hyper-V:

  1. It only works on an x64 based system with Intel VT or AMD-V extensions. That means you need to be able to configure hardware assisted virtualization in the BIOS! If you don't have the option and you know from the processor vendor's web site that your processor can do it, start hunting for a BIOS upgrade.

    On my Dell D630 laptop the option in the BIOS is located at POST Behavior/Virtualization - Enabled

    It is also important to note that on some systems you may need to completely disconnect the power from your system after you've saved the BIOS settings. So if you're experiencing problems with the setup steps below, it may be worth a try. Some BIOS/CPU combinations do not reset correctly. Remember to remove the battery too if it's a notebook system.
  2. You have to enable Data Execution Prevention in the BIOS. On AMD systems it's usually called the No Execute (NX) bit, and on Intel systems it's usually known as Execute Disable (XD). A small sketch after this list shows one way to query these settings from within Windows.

    On my Dell D630 laptop the option in the BIOS is located at Security/CPU XD Support - Enabled.
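
For what it's worth, you can also ask Windows itself about a couple of these settings. The sketch below is a minimal Python/ctypes example using the Win32 IsProcessorFeaturePresent call; the constant values come from winnt.h, and PF_VIRT_FIRMWARE_ENABLED was only added in later Windows releases, so on older builds it may return 0 even when virtualization is enabled in the BIOS. Treat that result as "unknown" rather than "disabled".

    import ctypes

    # Hedged sketch: query processor features via the Win32 API. Constant
    # values are from winnt.h; PF_VIRT_FIRMWARE_ENABLED is not recognized by
    # older Windows versions, which simply return 0 for unknown features.
    PF_NX_ENABLED = 12                # DEP / No-Execute is enabled
    PF_VIRT_FIRMWARE_ENABLED = 21     # hardware virtualization enabled in firmware

    kernel32 = ctypes.windll.kernel32

    def feature_present(feature):
        return bool(kernel32.IsProcessorFeaturePresent(feature))

    if __name__ == "__main__":
        print("NX / DEP enabled:              ", feature_present(PF_NX_ENABLED))
        print("Virtualization in firmware (?):", feature_present(PF_VIRT_FIRMWARE_ENABLED))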

So finally after all that, I got around to installing Windows Server 2008 x64.

Windows Server 2008 actually has a pre-release of Hyper-V built into it. This is not RC0. Although Hyper-V RC0 will be available on Windows Update pretty soon, you can visit http://www.microsoft.com/hyper-v and download the bits right now!

HyperV Website

The next steps were simple. I opened Server Manager, went to the Roles Summary, and selected Add Roles.

Hyper-V was in the list! I simply selected it, and it began its configuration.

Select Server Roles Old Dialog

Next, it prompted me for the appropriate network interface card(s) to use for my virtual machines.

And then finally, after a reboot, it proceeded to complete the installation and presented me with a results screen. Because my system was not connected to any network (it's a test system, so why bother!), it warned me that Windows automatic updating was not enabled. It also gave me two informational messages: one stating that "This is a pre-release version of Hyper-V", and another telling me that Hyper-V was installed.

image

To configure the environment and create a virtual machine all that was necessary was to access the management console via the server manager or the Start Menu.

image

After loading the Hyper-V Manager, I had to select my system name and then, on the right-hand side, choose New/Virtual Machine. After a brief moment the New Virtual Machine Wizard appeared. It follows the same process as all the wizards we're used to.

In the first dialogue you get some "Before You Begin" information to read, which you can also disable for future running of the wizard.

Thereafter you are asked for the name of the virtual machine. You can also use the default folder for the virtual machine or create your own.

image

The third dialogue simply asks how much memory you would like to allocate to the virtual machine.

The fourth dialogue asks if you want the virtual machine to be connected to the network, and if so, which network card to send the traffic through.

Then the fun starts. The fifth dialogue asks for the name of the virtual hard disk file, the location (again) and also the size. You can also use an existing hard disk, or attach a virtual disk later. Obviously using an existing hard disk has performance benefits.

You're then prompted for operating system details. This dialogue is interesting, as it is different from Virtual Server 2005 R2 or Virtual PC. It doesn't ask you which operating system! It just gives you the following options:

  • Install an operating system at a later time
  • Install an operating system from a bootable CD/DVD-ROM (you can also point to an image file)
  • Install an operating system from a bootable floppy disk (this is handy, but requires that the floppy disk be either a real device or a virtual floppy disk image, .vfd).
  • Install an operating system from a network-based installation server

In my case I chose to install from a bootable DVD-ROM. I also tested the .vfd format. Unfortunately most disk images I have are in .img format. After a bit of looking around I came across some software called WinImage that can load .img files and save them as .vfd files. The software is available at http://www.winimage.com. It is shareware, but well worth the money.
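
For what it's worth, if (and only if) an .img file is already a raw sector dump of a 1.44 MB floppy, a .vfd is essentially the same bytes under a different extension. The minimal Python sketch below copies the image and pads it to the full floppy size; anything in a compressed or proprietary format still needs a tool like WinImage, and the file names used here are hypothetical.

    import shutil

    FLOPPY_SIZE = 1474560  # 1.44 MB: 80 tracks x 2 heads x 18 sectors x 512 bytes

    def img_to_vfd(src, dst):
        # Hedged sketch: only valid for raw sector images; pads short images
        # with zeros so the result is exactly one standard floppy in size.
        shutil.copyfile(src, dst)
        with open(dst, "r+b") as f:
            f.seek(0, 2)          # move to the end of the copied image
            size = f.tell()
            if size > FLOPPY_SIZE:
                raise ValueError("%s is larger than a 1.44 MB floppy image" % src)
            if size < FLOPPY_SIZE:
                f.write(b"\x00" * (FLOPPY_SIZE - size))

    if __name__ == "__main__":
        img_to_vfd("bootdisk.img", "bootdisk.vfd")   # hypothetical file names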

The last screen in the wizard is a summary screen. It also provides a check box to allow you to start the virtual machine once the wizard is finished.

And that's it!

I was returned to the Hyper-V Manager and right-clicked my virtual machine and chose start. Then nothing happened!

I had to right click the machine name again, and this time chose Connect... That opened a terminal server type session to the virtual machine and I was able to work within it.
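
Everything above goes through the Hyper-V Manager GUI, but the role also exposes a WMI provider in the root\virtualization namespace, so simple inventory tasks can be scripted. Below is a hedged Python sketch that just lists virtual machines and their state; it assumes the third-party wmi package (which sits on top of pywin32), and the EnabledState values are the ones I'd expect for running/off, so check them against the Msvm_ComputerSystem documentation for your build before relying on this.

    import wmi  # third-party package (pip install wmi); relies on pywin32

    STATE_NAMES = {2: "Running", 3: "Off"}  # assumed EnabledState values

    def list_virtual_machines():
        # Hedged sketch: VMs appear as Msvm_ComputerSystem instances in the
        # root\virtualization namespace; the physical host shows up there too,
        # so filter on the Caption to keep only the guests.
        conn = wmi.WMI(namespace=r"root\virtualization")
        for system in conn.Msvm_ComputerSystem():
            if system.Caption != "Virtual Machine":
                continue  # skip the host itself
            state = STATE_NAMES.get(system.EnabledState, str(system.EnabledState))
            print("%-30s %s" % (system.ElementName, state))

    if __name__ == "__main__":
        list_virtual_machines()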

image

Just for fun I thought I'd try a somewhat "unenlightened" operating system. The result is below.

image

As a final note, the default key combination to release the mouse cursor from being trapped in the virtual machine is CTRL+ALT+LEFT ARROW. On my Dell D630 with an Intel 965 chipset and video adaptor this just happens to be the key combination that rotates the entire screen. I had to disable the video adaptor hot-keys functionality or change the key combination for Hyper-V to get my system working as I would have expected.

This post is governed under the site terms of use and by the Creative Commons Attribution-NonCommercial-NoDerivs license. Original work of Natasha Anne Mocke. You may republish this work as long as explicit credit is given to the author.

Practical Best Practices for Infrastructure Projects

In the many years I've been consulting as a Lead Technical Consultant, Architect and Engagement Manager there are a number of things I've learned from the school of hard knocks. Many things are obvious, and some less so. We can often find methodologies and material about how to run projects effectively, but rarely have I seen anything that discusses infrastructure design and implementation that includes information about how customers and consulting organizations can work together more effectively. This is a rather troubling situation, as many large enterprises work with external organizations to get projects delivered.

In this entry I'll discuss some of the main areas of concern I encounter in trying to enable my customers to operate more effectively. I'm focusing on the practical issues I encounter, rather than rehashing project management disciplines and methodologies. While those are important there are better places to read about them than on this blog.

1. Understand what an infrastructure project is

Sounds obvious, doesn't it? But most of the time it simply is not so. When I speak to customers, and even to my peers at the company I work for, an infrastructure project is usually some kind of storage, file and print, OS deployment, software distribution or messaging project.

Rarely do people consider that infrastructure, more often than not, provides the non-functional elements of an application development project. Some of my customers get it right, but most do not. Developers are not infrastructure people. They are more concerned with making it work, rather than being focused on redundancy, high-availability or operational environments. And rightly so! The problem is many people expect that developers can run SETUP and get things going, so they don't bother initiating an infrastructure project or sub-project in application development projects. This often leads to fantastic application development work not translating to success because the infrastructure belt and braces are simply not there to deploy and run the application successfully.

2. There is no such thing as a perfect RFP

Many of my customers create requests for proposals (RFPs). Although this is an exciting time in their lives, and involves lots of research and due diligence, the actual project delivery that results from the response they choose to ordain all too often fails miserably. Most of the time it's because of simple things that could easily be corrected in their approach.

Firstly, try to remember that when you are creating an RFP you are asking an external entity to walk down a road with you. In partnering on any project of significant complexity it is better to understand what this might mean.

  • Be realistic with your deadlines. You might believe you can put pressure on external entities to help you deliver on your goals, but if you're doing so because you didn't put adequate planning in place, or because you procrastinated, it is unlikely anyone you partner with is going to be able to deliver in six months what you originally had two years to deliver.
  • Give out as much information as possible. Many of my customers host a single RFP briefing session. These are normally one to two hours long, and the customer tends to talk through most of that time, paying only lip service to the questions the responding organizations may wish to ask. I think the question window needs to be longer; consider who benefits from that the most. Why would a customer not want an organization to be genuinely interested in getting the best result for them? Clarifying questions are critical to providing a response that meets the actual needs.
  • Be sensitive to competitive advantage. Consulting organizations thrive on their IP. It's the secret sauce that gives them their competitive advantage. Often the very questions that make the most sense to ask are simply not asked, because the customer that issues the RFP insists on sharing all questions and answers with all parties. I recognize that some questions are obvious, such as, "how many workstations are affected by this project," and the customer may not want to answer those multiple times, but others are not. Consider that you may want to be engaging with an organization that is really prepared to think through the problems and resolve them, rather than one that grabs at IP and provides the cheapest bid. It's a tough balance to strike, but perhaps consider allowing people to ask questions in a way that doesn't throw their competitive advantage in IP out of the window. Giving them the opportunity to ask up to five questions that will receive confidential answers may help out a bit.
  • Be prepared for detailed discovery. RFP responses only go so far, and they're only as good as the questions asked and the respondent's understanding of your organization. They can't possibly be perfect, and in many instances should not be considered to be binding obligations. At best you'll have a good feel that their technology will fit and that the chosen organization will walk down the road with you and deliver what you need. Let the chosen respondent do a detailed discovery after they are chosen, in order that they can provide more specifics with a more detailed budget. That way there will be no surprises half way through the project and you're more likely to succeed.
  • Do a proof-of-concept if you're battling to choose. Don't be afraid to ask your short-list of companies to do a proof-of-concept (PoC) in your environment. Often the companies that are most likely to deliver will be baying for the opportunity. In doing so you will gain more confidence in choosing the correct supplier. Of course there is a caveat: you will need to plan and budget appropriately for your RFP. There are 'hidden' costs to an RFP you may not be thinking about. If a PoC is to be run in your environment you will probably need space to do so, plus connectivity, power and systems in place that can represent your environment to the degree that you can be confident the PoC is not a smoke-and-mirrors demonstration. If you're going to do it, do it properly.
  • Price is important, but... don't expect a Bugatti Veyron for the price of a Volkswagen Golf. Furthermore, don't expect a Golf to function like a Veyron! Price is always important, but be very conscious of marrying your executives' requirements and perceptions with the solution you have chosen. If you promised them the world a few years ago, and then issued an RFP just before your delivery deadline, it is highly unlikely you will have the budget or time you had then to deliver the Veyron you're still selling them now. The best result at the best price only comes from proper planning and a distinct lack of procrastination. You might believe you can put pressure on suppliers, but it is unlikely what you get will ever fit one hundred percent.

3. Be Realistic

Now that sounds silly. It is a project, so of course it has to be realistic. Unfortunately this is often not the case. There are a few things we think are simple at the outset of a project that end up derailing it, and most especially the project schedule.

  • Understand the real risks. Don't outsource problems that cannot be resolved internally. Most often the problems I'm referring to here relate to internal politics, budget, management boundaries and business complexities. Put simply, if you don't have buy-in from key stakeholders for your project and you don't have the budget to do it, it is unlikely you'll find an external organization that can navigate the business complexities and also be cheap enough to solve your budget woes. Good consulting organizations can often help with one, but that will generally come at the expense of the other. Good resources are simply not cheap, but they're often more politically astute.
  • Understand your IT Rhythm of the Business. Most large businesses have an IT lifecycle they adhere to. This includes budget and planning cycles, upgrade cycles, change request windows and freeze periods. It is critical that organizations consider these time periods when they start setting project deadlines. Do it up front. Don't suddenly be surprised by an external organization challenging your deadlines because you didn't plan up front. It really is not their fault.
  • Understand what you bring to the project. Your chosen supplier will be very dependent on internal staff contributions from your organization. This may be something as simple as input into workshops or document sign-offs, or more complex like working day to day with your project management office, depending on your company for procurement or for decisions that materially affect the project. In all cases these are critical dependencies for moving to the next phase of the project delivery. You will need to ensure the right people in your organization view the project with the right level of priority in order that you meet your project success criteria. It is far harder for external organizations to work with companies whose project sponsor is incapable of driving the right behaviour internally.
  • Avoid asking for extras before the job is done. The 'how about we do this and that' conversation really needs to occur when all parties are satisfied the initial project objectives are going to be met. I often sit with customers who suddenly see the potential of what they're implementing and then want more from it straight away. It's an exciting time, and it's wonderful to see the enthusiasm, but it's also important to remember what has been contracted and that the additional work you're requesting translates to project scope creep. Scope creep often entails more time, more budget and more resources. Scope creep discussions also often delay work in progress. Be careful, and expect to pay more at a minimum. Responsible consulting organizations will often point this out, but it rarely reaches a willing ear.

4. Architecture is good, but

An enterprise architecture is extremely important; I'm not suggesting otherwise. Often I encounter architects who create a desired-state architecture for their organizations and then expect that it can be delivered within the business's constraints. Businesses have budgets and deadlines and require a return on their investments. Except in the case of legislative compliance, there is usually a requirement to compromise between the desired state and the business constraints.

In implementing new technology, it is usually important to understand that it is positioned at a point in your roadmap to reaching a desired state. It is unlikely it will be the perfect fit. It will usually be the best fit for what is possible within the business constraints. Understand that and you're most of the way towards setting yourself realistic expectations of what can be achieved in the project you're undertaking.

5. Avoid Snake Oil

A huge problem in actually setting realistic project success factors is Marketechture. Often the hardball sales techniques employed by vendors and external organizations will look fantastic in a presentation or brochure, but may not translate to action or actual delivery as anticipated. We can all promise you the Earth with a "little" development work, but how many organizations can realistically deliver that? Take a careful look at what will actually work well for your architecture, get the technical and project people in your organization together with the vendor and external organization, and do the research you can to try and make the best decision. Snake Oil rarely cures your ills.

6. Governance is not just a pain in the neck

Put governance in place for your projects, not just because it's a nice to have, but because it is critical to project success. As there is a lot of material about project governance available I'm not going into any detail on the subject here, save to say:

  • Put real decision makers into your governance structure. It is no use placing ineffectual or unempowered team members into decision-making bodies on the project. This will just result in delays and frustration. Rather, schedule decision-making body meetings appropriately in order to ensure the decision makers can be in the room when decisions are needed.
  • Insist on detailed project management and budget tracking. Sounds simple enough, doesn't it? Yet it doesn't happen as often as you would think. Here's a hint: a spreadsheet of servers to be replaced or upgraded does not constitute a project plan; it's part of an implementation plan. Many of my customers work with external organizations and magically run out of budget at about eighty percent of delivery. It's usually because the budget wasn't being tracked effectively. Tracking the budget properly will help you determine whether more is needed well in advance of it becoming an emergency. It will also help ensure you meet your goals on time and within budget. Disciplined and well-trained project managers will do this as a matter of course if they're given the time. Ensure you support them in doing so.

7. Test the real deliverables

It's an obvious and short point, but a critical one. Spend time testing what you're expecting to get, not simply testing a product for what it should deliver. For example, testing an email system to see if it can send email from one user to another is probably a waste of time. Rather, test it to see if it can send the types of emails you wish to send, within the time frames you desire, on the bandwidth you have available. Test your customizations too.
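
To make that concrete, here is a hedged Python sketch of the kind of test I mean: instead of proving that user A can mail user B, it times the submission of a message that is representative of what your business actually sends. The host name, addresses and payload size are all hypothetical placeholders, and a fuller test would also poll the destination mailbox to measure end-to-end delivery time.

    import smtplib
    import time
    from email.message import EmailMessage

    SMTP_HOST = "mail.example.internal"                 # hypothetical server
    SENDER = "loadtest-sender@example.internal"         # hypothetical sender
    RECIPIENT = "loadtest-recipient@example.internal"   # hypothetical recipient
    PAYLOAD_BYTES = 5 * 1024 * 1024                     # e.g. a typical 5 MB attachment

    def timed_send():
        # Build a message roughly the size and shape of real business mail.
        msg = EmailMessage()
        msg["From"] = SENDER
        msg["To"] = RECIPIENT
        msg["Subject"] = "Representative test message"
        msg.set_content("Timed delivery test.")
        msg.add_attachment(b"\x00" * PAYLOAD_BYTES,
                           maintype="application", subtype="octet-stream",
                           filename="payload.bin")
        start = time.time()
        with smtplib.SMTP(SMTP_HOST) as smtp:
            smtp.send_message(msg)
        return time.time() - start

    if __name__ == "__main__":
        print("Submission took %.1f seconds" % timed_send())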

Many companies will pay lip service to their project delivery methodology and bang on about test leads and test cases, but many will also test only the most rudimentary things in order to fulfil the test promises they made. This results in expenditure, in terms of both schedule and budget, that does not help you achieve the desired results for your organization.

Always allow your users to create the test criteria and participate in the actual testing. Never outsource those.

8. Failure to launch

Just as projects are about to be implemented, the failure occurs. It's great to have a perfect design, and have passed all the testing, but implementation disciplines and operational effectiveness are just as critical for your project's success. Once again, there is a lot of material about these disciplines out there. All I'm doing is highlighting those issues I encounter frequently.

  • Procurement needs to be done early. Be realistic though. Until a hardware design is completed you can't expect proper specifications for what is required. Put the hardware design tasks as close to the beginning of the project as possible and create a finish-to-start task that kicks off procurement as soon as the design is complete. Be realistic about your deadlines in this regard. Hardware procurement time frames and hardware shortages often cause delivery delays in the project.
  • Operations is part of the project. Operations is not something you tack on at the end of the project. That team should be an intrinsic part of the project from the outset. There is nothing like a "that will never work because the procedure for that is..." comment to derail the best-laid plans. The more input you have and the more knowledge you disseminate to the operations team within your project, the more likely you are to succeed. Furthermore, having the operations team in your project will help you understand what it will cost to train or acquire the skills necessary to operate your solution.
  • Real skill doesn't come from "knowledge transfer". Although on-the-job training and hand-holding are useful, all they really do is help operations staff become relatively proficient at the subset of the solution you're showing them. For staff who need to support the solution there is no substitute for formal training, certification, and participation in creating the test cases and performing the actual testing.
  • Operations staff are people too. Spend some time considering them when you're making a product choice. Although it might seem like a brilliant idea to implement a REXX-based application on IBM OS/2, you might have a hard time finding skills to maintain the solution or persuading operations staff to learn it because it's hardly a compelling step in their career. Attracting and retaining good operations staff to your organization is critical to keeping your environment running. In the short-term it may seem like you're pandering to their needs, but in the longer term it's actually an important consideration in maintaining your investment in the solution and its long-term viability.

I realize some of these points seem obvious, and others a little silly, but consider this: if they're so obvious and so silly, then why do I constantly encounter these issues at so many of my customers?

This post is governed under the site terms of use and by the Creative Commons Attribution-NonCommercial-NoDerivs license. Original work of Natasha Anne Mocke. You may republish this work as long as explicit credit is given to the author.

W2K8 RC0 - Windows Server Virtualization Installation Experiences

So the day finally dawned, and RC0 was released. That same day I got word it would be okay to blog and show screenshots of Windows Server Virtualization.

So I went and bought a new laptop first because mine didn't quite make the grade.

While I was waiting for the RC0 download to finish, I thought I'd install and check out Windows Vista x64. I liked it so much I decided to keep it. And that's where the trouble started!

I thought I'd just eval it, so I wiped the whole hard disk, made one partition and installed it. Because Windows Vista x64 can't share a partition with Windows Server 2008, I was left with a dilemma. So on to my first problem in getting Windows Server Virtualization working: I had to resize the partition so I could create another! Simple, you might think, but try finding a tool out there that does it. If you're like me and you know this was your own stupid mistake, you'd rather reinstall everything than buy a commercial tool (if you can find one that works on x64 Windows!).

After much hunting around I recalled that Linux had some tools to do it, so I found an openSUSE disc, booted it and edited the partition table. Fabulous! I then had two primary partitions, which was exactly what was needed for me to install Windows Server 2008 without destroying my beloved Vista x64 installation :)

It is important to bear in mind the following regarding the hardware you wish to use for Windows Server Virtualization (WSV):

  1. It only works on an x64 based system with Intel VT or AMD-V extensions. That means you need to be able to configure hardware assisted virtualization in the BIOS! If you don't have the option and you know from the processor vendor's web site that your processor can do it, start hunting for a BIOS upgrade.

    On my Dell D630 laptop the option in the BIOS is located at POST Behavior/Virtualization - Enabled

    It is also important to note that on some systems you may need to completely disconnect the power from your system after you've saved the BIOS settings. So if you're experiencing problems with the setup steps below, it may be worth a try. Some BIOS/CPU combinations do not reset correctly. Remember to remove the battery too if it's a notebook system.
  2. You have to enable Data Execution Prevention in the BIOS. On AMD systems it's usually called the No Execute (NX) bit, and on Intel systems it's usually known as Execute Disable (XD).

    On my Dell D630 laptop the option in the BIOS is located at Security/CPU XD Support - Enabled.

So finally after all that, I got around to installing Windows Server 2008 x64 RC0. RC0 is important, as previous builds did not include the functionality at all.

After a successful installation I explored %systemroot%\wsv. In that directory there were two .MSU files called

  1. Windows6.0-KB939853-x64
  2. Windows6.0-KB939854-x64

WSVDir

I simply ran those in the order listed.
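
If you prefer to script it, the same two packages can be installed unattended by driving wusa.exe (the Windows Update Standalone Installer) in the listed order. The sketch below is Python and assumes the files carry a .msu extension on disk and that installing silently with /quiet /norestart is acceptable in your environment; a reboot may still be needed afterwards.

    import os
    import subprocess

    WSV_DIR = os.path.join(os.environ["SYSTEMROOT"], "wsv")
    PACKAGES = [
        "Windows6.0-KB939853-x64.msu",   # assumed on-disk file names
        "Windows6.0-KB939854-x64.msu",
    ]

    for package in PACKAGES:
        path = os.path.join(WSV_DIR, package)
        print("Installing", path)
        # wusa exits non-zero on failure; check=True surfaces that as an error
        subprocess.run(["wusa.exe", path, "/quiet", "/norestart"], check=True)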

And that was it really. I was good to go.

The next steps were simple. I opened Server Manager, went to the Roles Summary, and selected Add Roles.

Windows Server Virtualization was in the list! I simply selected it, and it began its configuration.

AddRoles

Next, it prompted me for the appropriate network interface card(s) to use for my virtual machines.

AddRolesNIC 

And then finally, after a reboot, it proceeded to complete the installation and presented me with a results screen. Because my system was not connected to any network (it's a test system, so why bother!), it warned me that Windows automatic updating was not enabled. It also gave me two informational messages: one stating that "This is a pre-release version of Windows Server Virtualization", and another telling me that WSV was installed.

Results

To configure the environment and create a virtual machine all that was necessary was to access the management console via the server manager or the Start Menu.

After loading the Virtualization Management Console, I had to select my system name and then, on the right-hand side, choose New/Virtual Machine. After a brief moment the New Virtual Machine Wizard appeared. It follows the same process as all the wizards we're used to.

In the first dialog you get some "Before You Begin" information to read, which you can also disable for future running of the wizard.

Thereafter you are asked for the name of the virtual machine. You can also use the default folder for the virtual machine or create your own.

WizName

The third dialog simply asks how much memory you would like to allocate to the virtual machine.

The fourth dialog asks if you want the virtual machine to be connected to the network, and if so, which network card to send the traffic through.

Then the fun starts. The fifth dialog asks for the name of the virtual hard disk file, the location (again) and also the size. You can also use an existing hard disk, or attach a virtual disk later. Obviously using an existing hard disk has performance benefits.

You're then prompted for operating system details. This dialog is interesting, as it is different from Virtual Server 2005 R2 or Virtual PC. It doesn't ask you which operating system! It just gives you the following options:

  • Install an operating system at a later time
  • Install an operating system from a bootable CD/DVD-ROM (you can also point to an image file)
  • Install an operating system from bootable floppy disk
  • Install an operating system from a network-based installation server

In my case I chose to install from a bootable DVD-ROM.

The last screen in the wizard is a summary screen. It also provides a check box to allow you to start the virtual machine once the wizard is finished.

And that's it!

I was returned to the Virtualization Management Console and right-clicked my virtual machine and chose start. Then nothing happened!

WSVMgmt

I had to right click the machine name again, and this time chose Connect... That opened a terminal server type session to the virtual machine and I was able to work within it.

WS2K08

Microsoft Certified Architect Certification Process

So here it is, my first blog entry, finally!

Last week I got certified as a Microsoft Certified Architect: Infrastructure. It's quite an honor, especially considering just how different it is to other exam-based certifications out there.

In the case of MCA the process is a lot more rigorous and there are no exams.

As taken from http://www.microsoft.com/learning/mcp/architect/default.mspx: "The Microsoft Certified Architect (MCA) programs identify top industry experts in IT architecture. Microsoft Certified Architects (MCAs) have proven experience with delivering solutions and can communicate effectively with business, architecture, and technology professionals. These professionals have three or more years of advanced IT architecture experience; possess strong technical and leadership skills; and form a tight, supportive community. Candidates are required to pass a rigorous review board conducted by a panel of experts. The MCA credential was built by and for industry architects."

I had to write a case study on a project I had performed an architect role in. That's not a deep technical role by the way. If you look at the description above you'll see strong leadership skills are required too. There is also a lot more to it than that. The board that reviews candidates actually looks for specific competencies and many of them are not technology related at all.

Take a look at http://www.microsoft.com/learning/mcp/architect/archcompetencies/default.mspx for more details.

These competencies include:

  1. Leadership
  2. Communication
  3. Organizational dynamics
  4. Strategy
  5. Process and tactics
  6. Technology breadth
  7. Technology depth

It's pretty pointless to try to get the certification without understanding how to communicate with the various stakeholders in a business, including:

  • CIOs
  • CEOs
  • Architects
  • Operations Staff
  • Project and Program Managers
  • Technical staff

You will also need to demonstrate how you can work effectively in an organization mired in politics and embedded bureaucracy. It's a pretty difficult challenge, and you really need to have faced it multiple times, on projects of a scope where you communicate with all the stakeholders in a business, before you can be effective at putting together a case study.

So unlike MCSE and MCSD, this is really not a technical certification. It is, a little, in that you're expected to know what you're talking about and to be able to act effectively in a technical environment, but it's also not, because it's looking for a wealth of business and project expertise you just don't get by being a deeply technical person and nothing else. There is absolutely nothing wrong with being deeply technical; many people make a fortune doing so. This is just not the certification you should be chasing if that is what you are.

The board review process is pretty disarming. As much as you prepare and prepare, nothing really prepares you for facing your peer MCAs and being reviewed. They are looking for specific things and expect you to demonstrate them. Some things you might not expect to be questioned on, but that are there, include:

  • Solution Development Lifecycles
  • Operations (frameworks such as TQM, MOF, CMMI, CoBIT and ITIL, as well as thinking about operations as part of your design cycle)
  • Architecture frameworks such as TOGAF, WSSRA and Zachman

After putting together a case study and CV/resume and completing a self-analysis of your competencies, you wait a little while as the MCA team reviews your submissions and determines if you're a candidate for the board review. The MCA process can stop at any time, and this is effectively the first point at which it can!

After you're selected, you're informed of your review date, venue and time. You are then expected to prepare a 30-minute presentation on your case study. You will need to ensure your presentation covers your solution, your specific role in it (they're less interested in the project than in what you specifically did, but you still need to know the project details for the Q&A later), how you delivered the solution, and what decisions were made and why, while also demonstrating the competencies they are looking for.

They cut you off at exactly 30 minutes, so make sure you prepare your presentation properly and dry-run it multiple times! In my case I finished with 3 seconds to spare!

As a side note, but a rather important one, pick a case study for a project you know well. Pick something recent where you worked with as many of the stakeholders I listed as possible. Ensure the project you choose is not overly complex, because otherwise you're just not going to get the solution and the project dynamics across in 30 minutes. You do need to choose something interesting and challenging, but there is a fine line and you need to ensure you do not cross it. In my case I used a sub-project of a far larger project. I spent a couple of minutes defining the total project and then ensured they understood which piece of the overall project I was going to focus on.

The board then proceeds to ask questions, mostly related to the case study, for 40 minutes. Each board reviewer has 10 minutes to ask whatever they like about your case study. You will need to remain cool, calm and collected, and most importantly of all, you had better say you don't know something if you don't. They use precision questioning techniques, and you can pretty much get yourself into a whole tangled web if you don't.

After the 40 minutes of questions, which is quite tiring, you have a five-minute break while they strategize on what they'll be asking in the next 40-minute question round. If you're like me you'll walk out a little dazed and confused, and completely unsure how you're doing. They are pretty poker-faced, so you will not get much of a read on anything.

After that it's a free-for-all. Each board member has 10 minutes again, but they ask pretty much anything they like. So if they feel they'd like to drill into a competency or a deep technical area, they do! It's even more daunting than the first round of questions and makes you feel like an even bigger idiot than before.

After that's over you have five minutes to say anything you like. The one other colleague of mine who was accredited did a sales job on why he thought he was an architect. In my case I just said thanks and made some remarks about how I thought they could improve the room layout! Yeah, I'm nuts. In my final five-minute opportunity to ensure I got certified, that's what I chose!

Anyway, I survived. Whether you pass or fail they give you amazing feedback from the process that can only serve to help you become an even better architect than ever.

A key learning for me was that the board members actually do want you to pass. They're looking for specific things and you must demonstrate your capability in those areas, but nothing they asked me was absurd or obtuse. They didn't try to pick holes in me. They were just interested in uncovering my skill areas, and I think they did it very professionally.

The certification is tough to get, but that's also part of its attraction. You cannot just learn some technology super well and get it. You need a great deal of experience, and to have faced many different situations, to be able to achieve it. To me it's a breath of fresh air in an industry that is saturated with exams and boot camps pumping out inexperienced people. In a way the standard certifications are there to provide employers with a minimum set of standards for employing competent people. The architect certification is a master certification and demonstrates experience and expertise, as well as a number of soft skills and competencies. It's something you have to work for, not just learn for.

I hope these mutterings help someone, somewhere, do well in their MCA review process. If they did, drop me a note and let me know. I'd love to know I helped someone out there.

This post is governed under the site terms of use and by the Creative Commons Attribution-NonCommercial-NoDerivs license. Original work of Natasha Anne Mocke. You may republish this work as long as explicit credit is given to the author.