OSD – Live from the Field: My ‘Best Practices’ for Operating System Deployment

Working with technology often requires a certain focus, and that focus tends to rest almost entirely on the technical side of things.

When working with tools such as the Microsoft Deployment Toolkit (MDT) or System Center Configuration Manager (SCCM), you are already building, scripting and testing; the people working with such tools are mostly technical people. And what I have learned since I started working with MDT almost four years ago and SCCM almost three years ago, is that sometimes you need to get out of your (technical) zone.

The zone I was working in meant trying to find solutions from a purely technical standpoint: solving everything with scripts, command lines and so on.

This is why today I’m not writing a technical blog, but one that hopes to bring the logical and more sensible considerations to the table when it comes to Operating System Deployment. In the presentation I gave for my Microsoft Certified Trainer course, which was about Operating System Deployment, I collected some ‘best practices’ that I’d like to share, based on my own experiences.

For the TL;DR people among us, here are my ‘Best Practices’:

  1. Does the current state of the company’s infrastructure play a role?
  2. What release cycle do you want to maintain for your reference image?
  3. What content is worthwhile putting in your reference image?
  4. Copying raw data is much faster than installation time
  5. Microsoft Deployment Toolkit and System Center Configuration Manager (when it comes to Operating System Deployment) are tools for deployment, not the solution to World Peace!

1. Does the current state of the company’s infrastructure play a role?

At one of the projects where I was involved in creating an Operating System Deployment solution, network capacity was a big constraint on using conventional solutions to distribute the product that was built. The company had an SCCM infrastructure for 22,000 clients, based on an SCCM topology with one central site, three primary sites (one for each large region: EMEA, APAC and AMAC) and approximately 120 distribution points all around the world.

Since we had to support 25 hardware models and Windows 7 Enterprise with seven additional embedded Windows and Office language packs, the total size of the product exceeded 32 GB. That is far more than SCCM 2007 R2 SP2 supports, which led us to a few challenges:

  • SCCM vs. MDT
  • Offline distribution instead of online, to relieve the network
  • Other limitations in SCCM

Since SCCM operates very differently from MDT when it comes to distributing content, and taking into account that in such a large organization you have to deal with other departments that use the same infrastructure and share the same SCCM environment, you might want to reconsider whether it is wise to build your reference image with SCCM!

For example, when we first built our reference image on the company’s infrastructure using SCCM and Hyper-V, it sometimes took over 14 hours to run the entire task sequence and an additional 10 hours to capture the image (told you the network was a big constraint). So after some time I opted to build the reference image with MDT instead, since it has fewer dependencies on the production network and can be done in any environment, bringing the build-and-capture time back from 24 hours to 8 hours. A big win.

We decided to use this approach going forward: build the reference image in MDT and then import it into SCCM for distribution, which I believe is also the best practice Microsoft advises.

Then the next obstacle arose: distribution. The network wasn’t capable of staging 22,000 clients, so we had to use offline media, a functionality that is present in both MDT and SCCM. And where MDT has, as far as I know, no constraints, SCCM 2007 has the following constraints:

Constraint 1

USB sticks need to be NTFS formatted on a Windows 7/Server 2008 system, after which the media needs to be created from a Windows XP/Server 2003 console. Ridiculous, right? Try to create standalone media on a Windows 7/Server 2008 system: during the creation of the standalone media, the SCCM 2007 console will format your USB stick right back to FAT32, which can’t handle (image) files larger than 4 GB!
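
For reference, the NTFS formatting step looks roughly like this; a minimal PoSh sketch driving diskpart, where disk number 1 is only an assumption for the example (always verify with ‘list disk’ first, because ‘clean’ wipes the selected disk):

    # Minimal sketch: format a USB stick as NTFS by feeding diskpart a script.
    # CAUTION: 'clean' wipes the selected disk. Disk number 1 is an assumed
    # example; verify yours with 'list disk' before running this.
    $dp = 'select disk 1',
          'clean',
          'create partition primary',
          'active',
          'format fs=ntfs quick label=OSDMEDIA',
          'assign'
    $dp | Out-File -Encoding ascii "$env:TEMP\ntfs-usb.txt"
    diskpart /s "$env:TEMP\ntfs-usb.txt"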

Constraint 2

USB sticks used for standalone media with SCCM 2007 may not exceed 32 GB in size; SCCM simply can’t handle USB media larger than that. Just a fact, nothing you can do about it. Or is it? In this case, after consulting the stakeholders of the project, we opted to use ‘dummy’ driver packages: empty folders containing only a text file stating the name of a hardware model, so we could replace the contents of each folder with the actual driver content that needed to be there. It is the least desirable option, since it requires manual operations, and wherever people are involved, mistakes are made. Could we have solved this any other way? Yes, by creating multiple USB products: stick 1 would support hardware from A to M and stick 2 hardware from N to Z. Also not very efficient.
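
To give an idea of how little those dummy packages actually contained, here is a minimal sketch of how such a structure could be generated; the root path and hardware model names are made up for illustration:

    # Minimal sketch: create 'dummy' driver package folders that hold nothing
    # but a placeholder text file naming the hardware model. The root path and
    # model names are hypothetical examples, not the ones from the project.
    $root   = 'D:\DriverPackages'
    $models = 'Model-A-Laptop', 'Model-B-Desktop', 'Model-C-Tablet'

    foreach ($model in $models) {
        $folder = Join-Path $root $model
        New-Item -ItemType Directory -Path $folder -Force | Out-Null
        # The text file only documents which model the folder belongs to; the
        # real driver content replaces it manually before the media is used.
        Set-Content -Path (Join-Path $folder 'readme.txt') -Value $model
    }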

Constraint 3

Task Sequence XML files may not exceed a certain size in SCCM 2007, 2012, 2012 SP1 and 2012 R2; again, just a fact, nothing we can do about it. Does this also apply to MDT? Not as far as I know.

This is something that needs to be tested. On the other hand, if your task sequences are that large, you might want to reconsider whether there are things that can be consolidated or simplified by using Task Sequence Variables, or, if you can’t avoid certain steps, by scripting some additional actions in VBScript or PoSh.
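
To illustrate the Task Sequence Variable route: both MDT and SCCM expose their variables through the Microsoft.SMS.TSEnvironment COM object, which is only available while a task sequence is actually running. A minimal sketch (OSDComputerName is a built-in variable; MyCustomVar is a made-up example):

    # Minimal sketch: read and set Task Sequence Variables from PowerShell.
    # The COM object below only exists inside a running task sequence.
    $tsenv = New-Object -ComObject Microsoft.SMS.TSEnvironment

    # Read a built-in variable (the computer name used during deployment).
    $name = $tsenv.Value('OSDComputerName')

    # Set a custom variable ('MyCustomVar' is a hypothetical example) so later
    # steps can branch on one condition instead of duplicating step groups.
    $tsenv.Value('MyCustomVar') = 'SomeValue'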

I think I have proven my point already: sometimes you just need an open and clear view of the situation, and you need to ask yourself: “is this really the way to go?”

2. What release cycle do you want to maintain for your reference image?

Aaahhh, point 2. This one can be a bit shorter (I hope).

Release cycles, release management: all large (mostly Enterprise) companies have processes tied to the releases of their IT operation. Windows updates, general application updates, new releases of an operating system build, and so on. Chances are you need to conform to a release calendar, freeze windows, etc.

These things don’t have to be an obstacle. Imagine the release calendar offers quarterly windows to release an operating system update, or half-yearly ones. That is enough time to develop and test the upcoming build, and also enough time to decide which content is worth putting into the reference image. Which brings me to point 3.

3. What content is worthwhile putting in your reference image?

A picture says more than a thousand words, so here we go:

Figure 1.1: OSD components stack

Pizza Time!

Building your image isa like making a pizza! (Super Mario accent)

Making a pizza is about having good dough, sauce, cheese and other delicious ingredients. When you have made the pizza to your taste, you put it in the oven, and when the pizza is finished, you sprinkle some Italian herbs and other toppings over it, which finishes your pizza off.

In relation to my previous point: which content is worth putting into your reference image? Imagine you have decided to embed Adobe Reader into your image. As we all know, Adobe Reader, just like Adobe Flash, Adobe Shockwave, Microsoft Silverlight and Oracle Java, is a generic application with a rapid release cycle; almost every month an update for one of these applications is released. Would it be wise to put these applications in the image? Not immediately. But hey, if there is any reason why it should be in there, that’s a decision for you to make.

Figure 1.2: OSD Concept Build and Deploy

Build & Deploy

So back to my pizza making: the same thing applies when creating a reference image. The Windows operating system forms the dough. The generic middleware applications everyone uses act as the sauce and cheese. The Office suite forms the mushrooms, and then the image goes into the oven for a little sysprep and capture 😉 And when the reference pizza is ready, you install a little Adobe Reader on top!

Never mind, I’m crazy about pizza! 😀

4. Copying raw data is much faster than installation time

In relation to points 2 and 3: sometimes it can be convenient to put certain data into your reference image, for example to decrease deployment time. Regardless of whether you install your clients over the network or via USB media, copying raw data is much faster than installing applications from a distribution point or USB media.

Imagine you have a generic business application with a huge front-end, a tool that everyone uses in an Enterprise environment. I’m just gonna shout something crazy: “SAP Gui”, a massive application which, with a little bit of bad luck, takes up more than 500 MB of disk space when installed. What do you think is faster? Installing this application during the deployment of your target computers, so you can sit and wait until all DLLs and other files are registered and configured on the system, or embedding the application in the image? Exactemundo!!!
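
As a minimal sketch of the ‘raw data’ route: pre-stage the application’s source files onto the local disk during deployment with robocopy, so the actual installation becomes a quick local operation. The share and folder names here are hypothetical:

    # Minimal sketch: pre-stage application sources locally with robocopy
    # instead of installing over the network during deployment. The share and
    # target paths are hypothetical examples.
    $source = '\\deployserver\apps$\SAPGui'
    $target = 'C:\StagedApps\SAPGui'

    # /E copies all subfolders (including empty ones), /R and /W keep retries
    # short, /NP keeps per-file progress noise out of the deployment log.
    robocopy $source $target /E /R:2 /W:5 /NP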

5. Microsoft Deployment Toolkit and System Center Configuration Manager (when it comes to Operating System Deployment) are tools for deployment, not the solution to World Peace!

Last but certainly not least, and I think even my most important rule, my piece of advice: deployment tools are deployment tools, not the solution to World Peace. I once did a project for a customer, building an Operating System Deployment solution, and he asked me:

“Hey Rens, can we also enable Remote Desktop Protocol with MDT?” I said: yes we can, I have the PoSh command right here. “OK, great.” Next up he asked me: “Hey Rens, can we also configure the firewall rule for the Remote Desktop Protocol?” I reacted: yes we can, I have the PoSh command right here. “OK, great.” Next question: “Hey Rens, can you also set the wallpaper to pink?” I reacted: yes we can, I have the script right here!
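
For completeness, the kind of PoSh I had ‘right here’ looks something like this; a minimal sketch for the Windows 7 era, using the standard registry value and the built-in firewall rule group (run elevated):

    # Minimal sketch: enable Remote Desktop and open its firewall rule group.
    # Standard Windows 7-era commands; run from an elevated PowerShell.

    # Allow incoming Remote Desktop connections (0 = allow, 1 = deny).
    Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\Terminal Server' `
                     -Name 'fDenyTSConnections' -Value 0

    # Enable the built-in firewall rule group for Remote Desktop.
    netsh advfirewall firewall set rule group="remote desktop" new enable=Yes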

Suddenly I was configuring all kinds of stuff in MDT that had nothing to do with Operating System Deployment. So I told the customer that these things would be better managed with Group Policy Objects.

The lesson to be learned from this is that you need to use tools for what they are intended, and never let yourself be tempted into losing sight of what you are doing, how much it affects the big picture, and what impact it has on the situation, the environment, etc.

Using these tools is not only about having the technical skills; it’s also about using common sense, gut feeling, intuition and your experience in the field of Operating System Deployment to make the right and necessary decisions and to perform your work to the best of your abilities!

So the next time you are stuck on a technical decision or issue, take one step back, step out of the technical zone, leave the OSD tunnel and try to look at it from a different point of view, with a fresh perspective, to do what’s right and make the right decision.

There is no right and no wrong, only what’s best and what’s better!

If you have any ‘best practices’ to share, please feel free to contribute in the comment section!

Thanks for reading 🙂

6 thoughts on “OSD – Live from the Field: My ‘Best Practices’ for Operating System Deployment”

  1. Diagg

    Hi Rens,
    In my opinion, point 3 is more deeply linked to point 2 than you think: not adding Adobe and the like to your reference image seems natural due to the intense release schedule of those companies, BUT, as you said in point 2, if you update your master every six months, what is the point? In the next six months you’ll pick up the latest stuff from Adobe, and if the security team advises more regular updates for those tools, they will be managed via SCCM [Package/Application/WSUS/whatever…] outside the lifecycle of the reference image.
    One thing scared me: a 32 GB image with all the language packs!! That’s really too big for proper management! I’ve done an OSD solution for an international company, and the way we worked was: use a neutral Windows and Office for the reference images, and allow each entity to inject their preferred language (for disconnected/USB media) or install it dynamically via SCCM.
    Thanks for this blog post; this kind of design guidance is really appreciated.

    1. Rens Hollanders

      Hi,

      I completely agree with you that 32 GB is too much for proper management. In this case the WIM file itself was only 14 GB; other complementary software and driver packs made up the rest.

      Also, in normal release management procedures it is indeed more efficient and desirable to distribute that sort of software, as you mentioned, with SCCM or any other software distribution tool.

      I thank you for your comment; it is much appreciated that you find this blog useful!
      Thanks

      Kind regards,

      Rens

  2. Matt Balzan

    Hi Rens,

    In your swim lane concept, your drivers come after the OS image is deployed, not before.
    🙂
    Matt

    1. Rens Hollanders

      Hi Matt,

      Not sure what you mean. Drivers are injected prior to the copy and setup phase of the OS, not after.

      Cheers! Rens

