
The debate still rages on: Hosted Shared or Hosted Virtual desktops? This is probably one of the most common questions I hear. So which is it? Hosted Shared or Hosted Virtual?

Guess what? The answer, for once, isn't "It Depends". The answer is "Yes": you will most likely need them both. For those of you who aren't sure what the difference is, it is pretty straightforward:

  1. Hosted Shared Desktops: A published desktop on XenApp. Users get a desktop interface, which can look like Windows 7. However, that desktop is actually being shared by every user on the server. Although we can configure restrictions and redirections to allow users to have a smaller impact on each other, there is still a risk. Many users to one desktop.
  2. Hosted Virtual Desktops: A Windows 7/XP desktop running as a virtual machine where a single user connects remotely. One user’s desktop is not impacted by another user’s desktop configurations. Think of this as one user to one desktop. There are many flavors for the hosted virtual desktop model (existing, installed, pooled, dedicated and streamed), but they are all located within the data center.

The big reason why people want to figure out if they need a hosted virtual desktop is because of scalability, which equates to money. I can get 100-200 concurrent users on a Hosted Shared Desktop model and 50-100 concurrent users on a Hosted Virtual Desktop model with the same hardware. Seems like a no brainer, go with Hosted Shared Desktop.
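To see what that scalability gap means in server count, here is a back-of-the-envelope sketch in Python. The 1,000-user population is illustrative, and the per-server figures are the conservative ends of the ranges quoted above, not benchmarks:

```python
# Rough server-count comparison for the two desktop models, using the
# conservative ends of the concurrency ranges quoted in the text.
import math

def servers_needed(total_users, users_per_server):
    """Round up: a partially full server is still a whole server."""
    return math.ceil(total_users / users_per_server)

HOSTED_SHARED = 100   # 100-200 concurrent users per server
HOSTED_VIRTUAL = 50   # 50-100 concurrent users per server

users = 1000
print(servers_needed(users, HOSTED_SHARED))   # 10 servers
print(servers_needed(users, HOSTED_VIRTUAL))  # 20 servers
```

Same hardware, twice the servers: that is the money argument for Hosted Shared, before user requirements enter the picture.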

Unfortunately, it isn’t an either/or answer. It is more of a 70% one way, 20% another way, and 10% a third way.

To do it right, you have to start by understanding your users. Citrix calls it User Segmentation Analysis, but it essentially means gathering information about your user groups to understand what they need to do their jobs. If you do this, you will see that the decision isn't nearly as difficult as you expect. Here are a few examples and how I would align the groups with the most appropriate virtual desktop model (I'm mostly looking at the application aspect, but we would also want to look at user location, mobility and endpoints):

Group 1: Users are mostly within one or two applications all day. This application is the main line-of-business application, and their performance is based on speed and accuracy. Recommendation: Hosted Shared Desktop

Group 2: Users have a core set of applications they require to do their jobs. Oftentimes, these users must be able to modify system-level settings like environment variables, or install their own applications. Recommendation: Hosted Virtual Desktop (Dedicated)

Group 3: Users focus on content creation utilizing Microsoft Office and Adobe Photoshop. The users also browse for content and graphics online via a browser. Recommendation: Hosted Shared Desktop

Group 4: Users utilize a few applications that consume significant amounts of CPU resources when doing certain activities (video rendering or code compiling). Recommendation: Hosted Virtual Desktop (Streamed to Blade)

Group 5: Users require admin-level privileges for certain applications. Recommendation: Hosted Virtual Desktop (Pooled)
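The mapping above can be sketched as a toy decision function. The trait names are my own labels for the criteria described in each group, not Citrix terminology:

```python
# Toy version of the user-segmentation mapping: given a group's traits,
# pick the most appropriate desktop model. Trait names are illustrative.
def recommend(installs_apps=False, admin_rights=False, heavy_compute=False):
    if heavy_compute:
        return "Hosted Virtual Desktop (Streamed to Blade)"  # Group 4
    if installs_apps:
        return "Hosted Virtual Desktop (Dedicated)"          # Group 2
    if admin_rights:
        return "Hosted Virtual Desktop (Pooled)"             # Group 5
    return "Hosted Shared Desktop"                           # Groups 1, 3

print(recommend())                    # most scalable default
print(recommend(installs_apps=True))  # dedicated desktop
```

Note the ordering: the most scalable model is the default, and a group only escalates to a costlier model when a requirement forces it, which is exactly the design approach described below.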

I prefer to start with the most scalable solution first, as long as it meets the user requirements. That is the key point… user requirements. Many users need additional functionality or capabilities that are not suitable for the hosted shared desktop model. Once you reach these users, you need to figure out how to provide them with the most appropriate desktop.

In the end, there is a balancing act that goes into the design. If I have a small group of users that can utilize 3 different models, and 1 of the models is already in place, then it only makes sense to have those users use that model. It simplifies the infrastructure and makes it easier to manage.

Daniel – Lead Architect – Worldwide Consulting Solutions
Twitter: @djfeller
XenDesktop Design Handbook
Blog: Virtualize My Desktop
Facebook: Ask the Architect
Email Questions: Ask The Architect

Have you thought about charging your "customers" for the IT services you provide? I bet you have, and I have thought about that model for quite some time.
The promise of cloud computing, virtualization, usage metering, and IT as a Service often spawns thoughts of billing the end customer, i.e. the business units in a corporation. This is a world where super-flexible infrastructure can flip the switch on applications, server workloads, entire desktops and user accounts in a heartbeat.
Niel Nicholaisen writes about the topic in this article.
Let me add a few of my own thoughts:
• IT departments can count on (or hope for) a small percentage of a company’s annual revenues as a budget for capex and opex. IT is asked to provide literally the entire workspace and infrastructure for all users and often has to do more with less compared to the previous year. In the healthcare industry, that number stands at roughly 3% of revenues in the US and only about 2% in Europe.
• IT departments often get frustrated, because they have to provide expensive and complicated applications to a handful of users that chew up a large portion of resources and expenditures to do so.
• With the dawn of desktop and broader application virtualization, IT departments are tempted to charge for their services on a per-user or per-application basis: $30 per month for a desktop, $20 per month for Internet access, $5 per month for anti-virus, etc.
• The model is obviously tempting: it discourages the use of complex and expensive applications, brings the true cost of computing back to the business, and holds the promise of an IT budget that grows linearly with the services provided.
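As a sketch of how such a per-user chargeback would be computed, using only the example prices above (an illustration, not a real rate card):

```python
# Illustrative per-user monthly chargeback using the example line items
# from the text. Real rate cards would be far more granular.
MONTHLY_RATES = {
    "desktop": 30.00,
    "internet_access": 20.00,
    "anti_virus": 5.00,
}

def monthly_bill(services, rates=MONTHLY_RATES):
    """Sum the monthly charge for the services a user consumes."""
    return sum(rates[s] for s in services)

basic = monthly_bill(["desktop", "internet_access", "anti_virus"])
print(f"${basic:.2f}")  # $55.00 for this basic bundle
```

Add a couple more line items (email, backup, support) and you quickly reach the "$70 per month or so for basic services" figure discussed below.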

However, as Niel points out, this can alienate the users. First of all, as a user I may find that I get really shoddy service for the $70 per month or so for basic services. As a business, I don't have the choice to go get my Internet access or email service from someplace else. Sometimes (as a business) I think I can, and I may go to a cloud-based email service or attempt to buy my own backup service, but all of that comes at the cost of increasing complexity and introducing expensive integration points.
Keep in mind that IT is just another corporate service. I am not getting charged for payroll processing, legal support, marketing support, etc. Larger companies tend to cross charge for internal consulting services and sometimes for recruiting activities, but that’s pretty much it.

So, here is my recommendation for IT: Go ahead and charge your business units. Be aware of the pushback this may generate. In order to prevent backlash, do the following:
• Be the best in the industry. That’s right. Users will be tempted to compare the service you are providing (at the price you are charging) to consumer-grade services that are available online and that are provided by much larger organizations with better economies of scale. The expectation for the quality of your service goes up as you start charging for it.
• Virtualize applications and desktops. This will not only centralize the data, but make cost more transparent and predictable. If you do this right, you can reduce costs. If you don’t, you can end up driving up your costs, so choose wisely.
• Consider using third party, cloud based services for certain types of apps. Just because you managed something in-house in the past, doesn’t mean that this is the best modality going forward. CRM and web hosting services are examples of apps that have been pushed (or elevated) to the cloud for a while now in the industry.
• Monitor your resource use and utilization to get a grip on the human cost of environment support. The smaller your organization, the more difficult this is going to be. After all, you can’t hire a fraction of a SQL Administrator.
• Ensure that you explain (via your executives) that you have much higher data availability and reliability standards to meet than any publicly available service and that the company is required to provide the services internally to maintain control and ensure compliance.
• Consider implementing a “Bring Your Own Computer” model. We’ve had it at my employer for a while and it’s great. I own the endpoint, and I can manage my computer just fine, thank you very much. I can now have my own desktop, anti-virus, and other consumer grade services to dabble around and get a corporate Windows 7 image (a virtual desktop) from IT with the key apps I need to do my work.
• Expect to get charged by your accountants for the support they may need to lend to you as part of this process.

Questions? Comments? Let me know what you think and how you have been managing the cost of providing IT services.

Florian Becker
Twitter: @florianbecker
Virtualization Pulse: Tech Target Blog
Ask the Architect – Everything Healthcare

In another blog, I discussed Windows 7 services that you might wish to disable when going down the path of desktop virtualization. In this article, I'm focusing on the registry modifications you will want to make to optimize Windows 7 for virtual desktops. I've broken them down into Recommended configurations, Standard Mode configurations (for Provisioning services), and Optional configurations.

As I learn more from upcoming Windows 7 implementations, I’ll be updating the following tables, so it might be worthwhile to stay updated with RSS or subscribe via Email. Now, for the good stuff…

Recommended Configurations

The following registry changes are recommended for all deployment scenarios and would almost always be desirable in a Windows 7 hosted VM-based VDI desktop implementation:

Registry modifications are shown in REG format:

• Disable Last Access Timestamp (Optimizer: Yes):
  [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\FileSystem] "NtfsDisableLastAccessUpdate"=dword:00000001
• Disable Large Send Offload (Optimizer: No):
  [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\BNNS\Parameters]
• Disable TCP/IP Offload (Optimizer: No):
  [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters]
• Increase Service Startup Timeout (Optimizer: No):
  [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control] "ServicesPipeTimeout"=dword:0002bf20
• Hide Hard Error Messages (Optimizer: No):
  [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Windows] "ErrorMode"=dword:00000002
• Disable CIFS Change Notifications (Optimizer: No):
  [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\Explorer]
• Disable Logon Screensaver (Optimizer: No):
  [HKEY_USERS\.DEFAULT\Control Panel\Desktop]

Note: The Optimizer column indicates whether this registry change is included in the XenConvert Optimizer tool that is installed with the Provisioning Services target device software.
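If you keep these settings as data, a small script can emit them as a reviewable .reg file for import with regedit. This sketch covers only the entries above that list explicit values; treat it as illustrative plumbing, not a complete or authoritative optimization set:

```python
# Emit a subset of the recommended settings as a Windows .reg file so
# they can be reviewed and imported with regedit. Only the rows with
# explicit DWORD values from the table are included here.
RECOMMENDED = {
    r"HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\FileSystem": {
        "NtfsDisableLastAccessUpdate": 0x00000001,
    },
    r"HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control": {
        "ServicesPipeTimeout": 0x0002BF20,
    },
    r"HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Windows": {
        "ErrorMode": 0x00000002,
    },
}

def to_reg(settings):
    """Render a {key: {value_name: dword}} mapping in REG format."""
    lines = ["Windows Registry Editor Version 5.00", ""]
    for key, values in settings.items():
        lines.append(f"[{key}]")
        for name, dword in values.items():
            lines.append(f'"{name}"=dword:{dword:08x}')
        lines.append("")
    return "\n".join(lines)

print(to_reg(RECOMMENDED))
```

Generating the file rather than hand-editing it keeps the golden image repeatable, and the .reg file itself becomes the change record.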

Standard Mode Recommended Configurations

The next set of registry changes are recommended for images deployed using standard mode vDisk images with Citrix Provisioning services. Standard mode images are unique in that they are restored to the original state at each reboot, deleting any newly written or modified data. In this scenario, certain processes are no longer efficient. These configurations may also apply when deploying persistent images and in many cases should be implemented in addition to the changes recommended in the preceding section.

Registry modifications are shown in REG format:

• Disable Clear Page File at Shutdown (Optimizer: Yes):
  [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management]
• Disable Offline Files (Optimizer: Yes):
  [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\NetCache]
• Disable Background Defragmentation (Optimizer: Yes):
  [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Dfrg\BootOptimizeFunction] "Enable"="N"
• Disable Background Layout Service (Optimizer: Yes):
  [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\OptimalLayout]
• Disable Bug Check Memory Dump (Optimizer: Yes):
  [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\CrashControl]
• Disable System Restore (Optimizer: Yes):
  [Software\Policies\Microsoft\Windows NT\SystemRestore] "DisableSR"=dword:00000001
• Disable Hibernation (Optimizer: Yes):
  [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Power] "Heuristics"=hex:05,00,00,00,00,01,00,00,00,00,00,00,00,00,00,00,3f,42,0f,00
• Disable Memory Dumps (Optimizer: Yes):
  [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\CrashControl] "CrashDumpEnabled"=dword:00000000 "LogEvent"=dword:00000000 "SendAlert"=dword:00000000
• Disable Machine Account Password Changes (Optimizer: Yes):
  [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Netlogon\Parameters]
• Redirect Event Logs (Optimizer: No): Set the appropriate path based on the environment.
  [HKLM\SYSTEM\CurrentControlSet\Services\Eventlog\Application]
• Reduce Event Log Size to 64K (Optimizer: Yes):
  [HKLM\SYSTEM\CurrentControlSet\Services\Eventlog\Application]


Optional Configurations

This last set of machine-based registry changes is optional regardless of whether the image is deployed as a persistent or standard image. In many cases, the following configurations should be implemented; however, these configurations should be analyzed for suitability to each unique environment.

Registry modifications are shown in REG format:

• Disable Move to Recycle Bin:
  Justification: Although the recycle bin will be deleted on subsequent reboots, disabling this service altogether might pose a risk in that users will not be able to recover files during their session. Although this setting is part of the optimizer, it might be advantageous not to disable the Recycle Bin.
  [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\BitBucket]

Note: These are only recommendations; you should implement them at your own risk.

Remember, you can stay current with this and other Windows 7 virtual desktop recommendations via the Virtualize My Desktop – Windows 7 site.

Lead Architect – Worldwide Consulting Solutions
Follow Me on twitter: @djfeller
My Blog: Virtualize My Desktop
Questions, then email Ask The Architect

Speed. More speed. And to get more speed with desktop virtualization, we hear more and more about how important IOPS are to supporting the virtual desktop. Not enough IOPS means slowness. No speed. I've had a few blogs about it and plan to have a few more. What I wanted to talk about was an interesting discussion I recently had with 3 Senior Architects within Citrix Consulting (Doug Demskis, Dan Allen and Nick Rintalan). These are 3 smart guys whom I talk to fairly regularly, and the discussions get quite interesting.

This particular discussion was no different. We were talking about the importance of IOPS, RAID configs, and spindle speeds with regard to an enterprise's SAN infrastructure. (Deciding whether you are going to use a SAN for your virtual desktops is a completely different discussion that I've had before and Brian Madden had more recently.) But for the sake of this article, let's say you've decided "Yes, I will use my SAN." If your organization already has an enterprise SAN solution, chances are that the solution has controllers with plenty of cache. Does this make the IOPS discussion a moot point? If we simply use an IOPS calculator (at least the ones I've seen) and do not take into account the caching capabilities of the SAN controllers, won't we over-provision our virtual desktop environment and end up wasting more money and resources?

Many of us who are familiar with XenDesktop know that changes made to the golden disk image, when delivered via Provisioning services, are stored in a PVS Write Cache. From numerous tests and implementations, we know that 80-90% of the IO activity from a virtual desktop will be writes. If we configure the SAN controllers to be 75% write (assuming we have battery-backed write cache controllers), we allow the controllers to allocate more cache for write operations, thus helping to offload the write IO to the disk, which raises the number of effective IOPS the storage infrastructure can support. Think of the controller's caching capabilities as a large buffer for our disks. If our disks can only support so many write operations, the controller cache stores the writes until the disk is able to commit them to the platter. This cache allows the infrastructure to keep moving forward with new operations even though the previous operations have not been written to disk yet. They are all buffered. Just remember, we aren't reducing the total number of IO operations; we are just buffering them with the controller cache.

Think about it another way. If we encounter a storm where each user will require 10MB of write operations and the storage controller has a 4GB cache, that one controller can support 400+ simultaneous users for this particular storm, and we haven’t even talked about the disk IOPS yet!!!  With this scenario, wouldn’t a single disk spindle be able to support this particular storm because the controller is buffering everything? And what’s also interesting is those write operations are being flushed to disk continuously so the number of users the controller will be able to support would be much, much higher.
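The arithmetic in that storm example checks out with a one-liner (integer division, and deliberately ignoring the flushes that free cache mid-storm, which is why the real number would be higher):

```python
# How many simultaneous users a controller's write cache can absorb
# before the disks have to catch up, per the storm example in the text.
def users_absorbed(cache_bytes, write_bytes_per_user):
    return cache_bytes // write_bytes_per_user

GB = 1024 ** 3
MB = 1024 ** 2

# 4 GB cache, 10 MB of writes per user during the storm
print(users_absorbed(4 * GB, 10 * MB))  # 409 users from cache alone
```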

So if we have cache on our controllers, which most SAN controllers I’ve seen lately have, are we over designing the storage infrastructure by only focusing on IOPS?  (this is assuming you are using SAN and not local disks on your hypervisor which I talk about a lot as well).  Just remember that those write operations must eventually get written to disk. So if we know what our controller cache is capable of, and we know the amount of storage required for a particular storm (logon, boot, logoff, etc), can’t we support more users (and I mean a lot more users) on the SAN?

What do you think?

Daniel – Lead Architect – Worldwide Consulting Solutions
Follow Me on twitter: @djfeller
My Blog: Virtualize My Desktop
Questions, then email Ask The Architect

It almost sounds like I’m talking about personal finances. You better plan your cache appropriately or you will run out. I’m not talking about money; I’m talking about system memory (although if you plan poorly we will quickly be talking about money).

It comes down to this… system cache is a powerful feature allowing a server to service requests extremely fast because instead of accessing disks, blocks of data are retrieved from RAM. Provisioning services relies on fast access to the blocks within the disk image (vDisk) to stream to the target devices. The faster the requests are serviced, the faster the target will receive. Allocating the largest possible size for the system cache should allow Provisioning services to store more of the vDisk into RAM as opposed to going to the physical disk.

Not planning system cache appropriately is the 8th mistake made when deploying virtual desktops:

10. Not calculating user bandwidth requirements

9.   Not considering the user profile

8.   Lack of Application Virtualization Strategy

7.   Improper Resource Allocation

6.   Protection from Anti-Virus

5.   Managing the incoming storm

4.   Not Optimizing the Desktop Image

Unfortunately, many environments are not configured optimally. Simply adding RAM to a Provisioning services server is not enough; the system must be configured appropriately.

Operating System
The operating system plays a large role in how large the system cache can become.

    * Windows Server 2003/2008 x32: 960 MB

    * Windows Server 2003/2008/2008 R2 x64: 1 TB

Because the 64 bit operating system can have a larger system cache, a larger portion of the vDisk can be stored in RAM, which is recommended.

Windows 2008 is recommended over Windows 2003 because of the improvements in the memory manager subsystem.
RAM
8-32GB of RAM is recommended.

The more RAM allocated to the server, the larger the system cache can become, and a larger cache means vDisk reads will be faster. If you have more vDisks, you will need more RAM. A quick estimate is to plan for 2GB of RAM/cache for each vDisk you will host. If you want more details, then I recommend the great article Advanced Memory and Storage Considerations for Provisioning Services, created by Dan Allen (Sr. Architect at Citrix). It goes into the details of how Windows deals with cache.
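Applying the 2GB-per-vDisk rule of thumb, a rough sizing helper might look like this (the 4GB base for the operating system itself is my own placeholder, not a figure from this article):

```python
# Rough PVS server RAM estimate: 2 GB of cache per hosted vDisk (rule of
# thumb from the text) plus a base allowance for the OS (assumed here).
def pvs_ram_estimate_gb(vdisk_count, per_vdisk_gb=2, os_base_gb=4):
    return os_base_gb + vdisk_count * per_vdisk_gb

print(pvs_ram_estimate_gb(6))   # 16 GB for six vDisks
print(pvs_ram_estimate_gb(12))  # 28 GB for twelve
```

This stays inside the 8-32GB range recommended above for realistic vDisk counts; past that, consider splitting vDisks across PVS servers.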
vDisk Storage
The vDisk can be stored on just about any type of storage (iSCSI, Fiber, local, NFS, CIFS, etc). However, there are a few instances where the storage selected will have an impact on how the Provisioning services server’s operating system caches the vDisk blocks.

1. Network Drive: If the Provisioning services server sees the vDisk drive as a network drive via a UNC path, the server will not cache the file.

2. CIFS Share: If the storage infrastructure is a network CIFS share, Provisioning services will not cache the vDisk in memory.
Optimizations
In Windows Server 2003, large system cache must be enabled by configuring the server's performance options.
In Windows Server 2008, this setting is not required due to the enhancements in the memory allocation system. Windows 2008 utilizes dynamic kernel memory assignment that reallocates portions of memory on the fly, while previous versions had these values hard-set during startup. As Windows 2008 requires more system cache, the operating system will allocate it dynamically.


Daniel – Lead Architect – Worldwide Consulting Solutions
Follow Me on twitter: @djfeller
My Blog: Virtualize My Desktop
Questions, then email Ask The Architect

For those of you who missed the June 18th TechTalk on the design for a 20,000 user environment: you missed out. Well, not really. Luckily, we recorded the presentation so you can watch it whenever you desire. As you know, the webinar was based on a reference design for a 70,000 user school district. Links to the materials are as follows:

In addition to the materials, we also had some really great questions during the webinar, which I’ve answered below:

  1. Yes
  2. No
  3. Maybe
  4. No Way
  5. Of Course
  6. Possibly
  7. You are Crazy!!!
  8. E=mc²

Just kidding, the questions you are probably more interested in are as follows:

Q: Can you go over a little more on the RAM requirements that you need for the virtual desktops?

A: Sure. The scalability tests conducted were able to run Windows 7 with 768MB of RAM. However, in an actual real-world implementation, you will most likely need more than that. First, break your users down into different categories (Light, Normal and Power). As you move up through these categories, the users will require more RAM. Based on the student population, high school users will use more apps than middle school or elementary users, so we need to allocate more RAM for those groups.
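A hypothetical RAM plan along those lines: the 768MB figure is the test floor quoted above, while the Normal/Power allocations and the per-VM hypervisor overhead are illustrative assumptions, not tested values:

```python
# Hypothetical host RAM plan for a mix of user categories. Only the
# 768 MB Light figure comes from the scalability tests; the rest are
# illustrative planning assumptions.
RAM_MB = {"Light": 768, "Normal": 1024, "Power": 1536}

def host_ram_gb(user_counts, overhead_mb_per_vm=128):
    """Total RAM (GB) for a user mix, with per-VM hypervisor overhead."""
    total_mb = sum(
        count * (RAM_MB[category] + overhead_mb_per_vm)
        for category, count in user_counts.items()
    )
    return total_mb / 1024

print(host_ram_gb({"Light": 40, "Normal": 20, "Power": 10}))  # 73.75 GB
```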
Q: Great webinar. Did you experience latency problems using streamed applications?

A: If you mean in the launching of applications, yes. Streamed applications will take longer to start than installed applications. What we did to help shorten the time (still not the same) was to move the RadeCache for the streamed applications to the virtual desktop's D: drive, which is persistent across reboots. That means the application cache stays put and is reused for subsequent launches. Of course, it still isn't as fast as installed, because streamed applications also validate that the application cache is correct.
Q: How did you design HA for PXE services?

A: You will need redundancy in your DHCP environment. Unfortunately, DHCP can only give you 1 address for the TFTP server. If we integrate NetScaler VPX into the environment, we can perform load balancing for the TFTP server. So with 1 IP address we automatically balance across X number of TFTP servers. And NetScaler is smart enough to know when a TFTP server is down, so it will not forward requests onto it.
Q: Virtualization of PVS: what is the workload cutover between virtual and physical PVS? I guess 2,000 desktops = physical PVS, and around 40 XenApp servers = virtual PVS is OK? Can you comment? Thanks.

A: No good answer, unfortunately. Can you virtualize PVS? Yes. Do we recommend it? Not for large, enterprise deployments. Most people agree that server virtualization makes sense for workloads that do not fully consume a system's resources. PVS does not fit, because it will utilize your NIC to the fullest, so putting PVS on the hypervisor offers little benefit. However, for small deployments of around 200 virtual desktops, you won't fully utilize the NIC and might be able to consolidate some servers.
Q: How can you figure out the network overhead in your environment when using PVS streaming to XenDesktop hosts?

A: Testing

  PVS is bursty: you only get traffic when you need more of the vDisk. When you do your pilot, you can look at how much network traffic PVS generates for a XenDesktop virtual desktop. Some guidelines are as follows: a 1 Gbps NIC should be able to support 500 virtual desktop streams. Booting Windows 7 requires roughly 200-230 MB of data across the wire. As you use Office, 300 more MB is transferred (assuming you are using PowerPoint, Word and Excel). Once that data has been transferred, utilization drops until you need more of the vDisk. Although it is bursty, you need enough capacity (bandwidth) to support your storms (boot, logon, logoff).
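Those guidelines can be turned into a quick worst-case estimate of how long a boot storm keeps the wire busy. This assumes ideal NIC throughput and no protocol overhead, so real numbers will be worse:

```python
# Worst-case serial estimate of boot-storm streaming time on one NIC,
# using the per-desktop data figures quoted in the answer above
# (~230 MB for a Windows 7 boot, ~300 MB more for the Office apps).
def boot_storm_seconds(desktops, boot_mb=230, office_mb=300, nic_gbps=1.0):
    total_bits = desktops * (boot_mb + office_mb) * 8 * 10**6
    return total_bits / (nic_gbps * 10**9)

print(round(boot_storm_seconds(500), 1))  # 2120.0 s (~35 min) for 500 boots
```

In practice the burstiness spreads this out: desktops don't all boot in the same instant, and once the working set of the vDisk has been streamed, utilization drops off.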
Q: what was the plan for fault tolerance on the virtual desktops if using local storage?

A: Users log onto another virtual desktop. If we assume the desktops are throwaway machines, then there should be nothing relevant for the user on them. Their files, data, and personalization should be on a network share and not on the virtual desktop. If the server fails, a user just connects to a new virtual desktop.
Q: What kind of impact do techs like WAAS and Riverbed have on this type of network traffic?

A: Not certain. The challenge these technologies will have is that HDX is encrypted. In order for these devices to do caching/compression, they must be able to decrypt the traffic. Once the optimization solution can read the HDX packets, it can compress not only within a single session, but also perform cross-session compression for users within the same office or school. But then, before the packets are placed on the WAN, the traffic must be re-encrypted to protect the data.
Q: You actually assumed no dial-up?!? Nice neighborhood.

A: There will be dial-up, but just not from the schools. Students can dial into their ISPs or connect via DSL or cable modem and get access to the environment remotely. This means we will want to optimize the HDX protocol for low-bandwidth situations, like those dial-up users.
Q: What RAID type for the Virtual Desktop servers?

A: RAID 10 (1+0). We have 8 spindles on each hypervisor that will support the virtual desktops. This means we do not require a SAN, because 8 spindles should provide enough IOPS to support the expected virtual desktop load. RAID 10 gives us fault tolerance, but without the huge write penalties we would get with RAID 5.
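The RAID trade-off can be quantified with the standard front-end IOPS formula, where each write costs 2 back-end IOs on RAID 10 but 4 on RAID 5. The 175 IOPS per 15k spindle is a common planning figure, not from this article, and the 85% write ratio is taken from the 80-90% PVS write profile discussed earlier:

```python
# Standard front-end IOPS calculation showing why RAID 10 beats RAID 5
# for a write-heavy virtual desktop workload. Write penalty: 2 back-end
# IOs per write on RAID 10, 4 on RAID 5.
def frontend_iops(spindles, iops_per_spindle, write_ratio, write_penalty):
    raw = spindles * iops_per_spindle
    return raw / ((1 - write_ratio) + write_ratio * write_penalty)

# 8 spindles, 175 IOPS each (assumed 15k figure), 85% writes
print(round(frontend_iops(8, 175, 0.85, 2)))  # 757 IOPS on RAID 10
print(round(frontend_iops(8, 175, 0.85, 4)))  # 394 IOPS on RAID 5
```

At this write ratio, RAID 10 delivers nearly twice the usable IOPS from the same 8 spindles, which is the "huge write penalty" avoided above.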

Building a virtual desktop is simply a matter of installing the Windows operating system, right? Slow down... although this will work, it won't give you the best performance and scalability. One of the items that many people forget is to optimize the base operating system. This is the 7th mistake out of the top 10 mistakes made with virtual desktops:

10. Not calculating user bandwidth requirements

9.   Not considering the user profile

8.   Lack of Application Virtualization Strategy

7.   Improper Resource Allocation

6.  Protection from Anti-Virus

5.  Managing the incoming storm

Most people spend time creating a customized standard operating environment for their desktop operating systems.  This often involves specific location settings, default application settings, and desktop descriptions.  However, when delivering an operating system into a virtual desktop, many organizations do not go far enough to optimize the desktop for the virtualized environment.  Whether the desktop is a hosted VM-based VDI desktop, a local streamed desktop or a hosted shared desktop, certain optimizations allow the hardware to focus on user-related tasks as opposed to extraneous system-related tasks. The following are examples of virtual desktop optimizations:

  • Disable Last Access Timestamp: Each time a file is accessed within an operating system, a time stamp is updated to identify when that file was last accessed. Booting up an operating system accesses hundreds or thousands of files, all of which must be updated. Each action requires disk and CPU time that would be better used for user-related tasks. Also, if Provisioning services is used to deliver the desktop image, those changes are removed when the desktop is rebooted.
  • Disable Screen Saver: Utilizing a graphical screen saver consumes precious memory and CPU cycles when the user is not even using the desktop. Those processes should be freed and used by other users.  If screen savers are required for security purposes, then simply blanking the screen should be invoked as this does not impact the memory and CPU consumption.
  • Disable Unneeded Features: Windows 7 contains many valuable components like Media Center, Windows DVD Maker, Tablet PC Components, and Games. These applications are memory, CPU and graphics intensive, and they are often not required in most organizations. If these components are made available to users, they will be used. It is advisable to remove unneeded components before deploying the first images.

These are only a few recommendations, but it is obvious that optimizations have a major impact on the virtual desktop environment. I've started building a list of optimizations for virtual Windows 7 desktops, which can be found in the Windows 7 section of the Virtualize My Desktop site. If you are looking to optimize Windows XP, then you can find that in the Windows XP Optimization document.

Stay tuned for more.

Daniel – Lead Architect – Worldwide Consulting Solutions
Follow Me on twitter: @djfeller
My Blog: Virtualize My Desktop
Questions, then email Ask The Architect

All I want is a list of documents that will help me design my XenDesktop environment.  Who else wants the same thing?  I bet many of you are saying “Yes, Me too!!”  That’s great and everything but how do you know when a new white paper is released that relates to XenDesktop design?  Do you keep your own personal library of white papers for XenDesktop design?  And even more, how do you keep informed when updates are made to previously released white papers?

I’ve got a special treat for you, the NEW XenDesktop Design Handbook.  Instead of trying to create a 1,000 page document that discusses all of the different design options and best practices, we are creating a kit for XenDesktop architects.  In the kit you will find some goodies:

  • Reference Architectures
  • Reference Designs
  • Implementation Guides
  • Planning Guides

This is just the start.  If you subscribe to the kit, you will be able to receive notifications when updates are made to the Design Handbook. We are in the process of developing many new best practice documents focused on different design areas that you won’t want to miss.  Interested yet? Then how about I give you the link to the NEW XenDesktop Design Handbook (you must log on to MyCitrix).

Daniel – Lead Architect – Worldwide Consulting Solutions
Twitter: @djfeller
Blog: Virtualize My Desktop
Questions? Email Ask The Architect

For those of you who didn’t know, last week was BriForum and I was able to attend as a speaker and as an attendee.  I think it was a great event, and I believe it was the largest one ever, so congratulations to Brian, Gabe and the TechTarget team.
What did I learn last week?  I learned 10 things, which ironically fits nicely into this blog. Without wasting more of your time, here are the Top 10 Things I Learned At BriForum all for your enjoyment

10.Lou Malnati’s is great pizza and only 2 blocks from the hotel. Is there anything better than Chicago Deep Dish pizza?

9. During the keynote, it became clear that many people plan to go down the Windows 7 route, but hardly anyone has done it yet. Nothing really Earth-shattering, but it proves the point that Windows 7 will be a major force by 2012. For now, your best bet is to start planning and get ready for your migration, because it will take time.

8. The Citrix employees have a good sense of humor.  One session joked about the number of consoles XenDesktop has.  It was even stated that maybe Citrix needs a console to manage consoles or that secretly Citrix collects consoles.  Every Citrix employee in the room, including myself, was laughing pretty hard.

7. There is more to HDX/ICA than the protocol.  Citrix has spent 15+ years optimizing it. I knew many of these items already, but I still learned a few more things, like how scrolling is optimized.  For example, if you scroll in Excel vertically, horizontally, or diagonally, the screen data isn’t sent again; the endpoint is simply told to shift position.  Cool.

6. Anyone doing a Windows 7 64-bit migration had better have a Plan B for when apps fail to function.  The most common option for organizations, according to Shawn Bass, is to leverage XenApp.

5. Cloudbursting your XenApp environment into the cloud is possible, as demonstrated by Rick Dehlinger and Jim Moyle, but no cloud provider was able to meet all 7 of their requirements for enterprise deployment. SoftLayer performed best, meeting 6.

4. Profiles were a major focus (big surprise).  One talk focused on the differences between profile streaming and profile segmentation as ways to optimize the user profile.  In essence, profile segmentation requires application knowledge, while profile streaming does not. However, profile segmentation also allows users to migrate their settings to Windows 7. It seemed to me that although profile segmentation requires more work, the value is greater.

3. Profile redirection is oftentimes a good thing, until we focus on the AppData folder.  If we redirect that folder, we optimize logon but might make applications slower. So do you start with the fastest logon and take the hit in application performance, or vice versa?  Taking a logon hit seems better, as it only happens once, whereas application performance issues might make the application unusable.  The best option is to pick one approach, then use a profile solution to optimize further.

2. If users get something great, they will oftentimes accept missing functionality. The perfect example is the iPad and multitasking.  The same can be said for virtual desktops: if the experience is better than their traditional desktop, users might accept missing functionality. And maybe it isn’t experience; maybe it is availability, functionality, speed, etc.  What can you give your users?

1. Most desktops are not mission critical, and they do not require expensive storage. Steve Greenberg even asked why, when we do desktop virtualization, people all of a sudden believe the desktop is critical, when the traditional desktop is treated as garbage and disposable. I have no idea, and it is a great question. That is why Paul Wilson and I have been speaking about using local storage instead of SAN storage for your virtual desktops.  It is also why we typically don’t see people implementing live migration for their virtual desktops.

There are plenty of other interesting points from BriForum and many interesting sessions. I know I’ll be spending more time watching the recordings from the sessions I couldn’t attend and re-watching a few of the sessions I could attend.  But overall, it was a great week and nice to hear other perspectives.

See you in 2011

Daniel – Lead Architect – Worldwide Consulting Solutions
Follow Me on twitter: @djfeller
My Blog: Virtualize My Desktop
Questions, then email Ask The Architect

What would you say if I were to tell you that migrating to a virtual desktop is no different than migrating to Windows 7?  I’m being serious.  Migrating a user to a virtual desktop has many similarities to migrating a user to Windows 7 on a traditional desktop.  With a Windows 7 migration, we are concerned with hardware, operating system, applications, personalization, and more.  With a virtual desktop migration, we are focused on hardware, operating system, applications, personalization, and more. Same focus areas. Interesting.
Of course, there are some differences. For example, regardless of the path you are taking, most organizations will create their “Corporate Desktop Image”. At its core, the standard desktop image would have similar configurations: removing games, disabling Media Center, adding anti-virus software, etc. This would be done whether Windows 7 were on a traditional desktop or on a virtual desktop.  But on the virtual desktop, we will likely do more, such as:

  • Disable unused Windows services
  • Modify the behavior of the defragmentation subsystem
  • Disable the Background Layout Service after it has executed once
  • Clean and optimize the image before deployment
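To make the list above concrete, here is a minimal batch sketch of how such tweaks might be applied during image preparation, assuming a Windows 7 image. The specific service name, scheduled-task path, and registry value shown are common examples from optimization guidance of that era, not an official Citrix list, so verify each one against your own build before sealing the image.

```shell
@echo off
rem Illustrative Windows 7 image-preparation tweaks (run from an elevated prompt).
rem Example targets only; validate every change against your own image.

rem Disable an unused Windows service (example: Superfetch, service name SysMain)
sc config "SysMain" start= disabled

rem Stop the scheduled defragmentation task from running inside the VM
schtasks /Change /TN "\Microsoft\Windows\Defrag\ScheduledDefrag" /Disable

rem Prevent the Background Layout Service from re-running after its first pass
reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\OptimalLayout" /v EnableAutoLayout /t REG_DWORD /d 0 /f
```

The final bullet, cleaning and optimizing the image, is typically a separate sealing step performed with a tool of your choice, so it is left out of this sketch.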

Not only that, but with a virtualized Windows 7 environment we will also modify certain aspects to provide greater responsiveness for the user. These are important in certain FlexCast models where the user is not sitting in front of the Windows 7 desktop.  What optimizations am I talking about?  How about the shadow of your mouse?  Did you even know your mouse had a shadow?  Honestly, I didn’t.  But simply disabling this little feature can provide greater responsiveness for the user.

These are the lessons I’ve been gathering from our Windows 7 deployments, and they will help you on your way to Windows 7.  They are also what I am excited to discuss during my BriForum session next week in Chicago.  I hope to see you there.  If you can’t make it, we will have to continue the Windows 7 migration discussion on this community site or on the Virtualize My Desktop site, where I’ll have a Windows 7 Migration Resource Center covering lessons learned, tips/tricks, and best practices.

Lead Architect – Worldwide Consulting Solutions
Follow Me on twitter: @djfeller
My Blog: Virtualize My Desktop
Questions, then email Ask The Architect
Facebook Fan Page: Ask The Architect