Category Archives: Citrix


Automate updating OS ISOs with OSDBuilder and MDT

Now that we’ve started down this “MDT” train, there’s no getting off until we discuss some of the more important parts. If you haven’t read the previous posts you can find them here:

Post 1: https://verticalagetechnologies.com/index.php/2020/03/04/citrix-microsoft-mdt-powershell-tools-the-beginning

Post 2: https://verticalagetechnologies.com/index.php/2020/03/16/building-multiple-citrix-images-with-one-microsoft-mdt-task-sequence

Today we will focus on automating the procedure for always having an up-to-date Windows OS ISO. This is especially important for automating master image builds because we can’t remove Windows Updates from the task sequence. Windows Updates can add hours to your build, so decoupling them from the process saves considerable time.

For those not familiar with OSDBuilder, it’s an excellent PowerShell module created by David Segura that lets you service your Windows operating systems in an ‘offline’ fashion. Say you have a Server 2016 ISO that’s four years old. With OSDBuilder, you can ‘Import’ that ISO and apply all the necessary cumulative updates in an automated, offline fashion, so the next time you install Server 2016, it’s completely updated.

For me, this process looks something like this:

  1. ‘First Run’ PowerShell script (only run once)
  2. Patch Tuesday arrives
  3. Run ‘Install/Update OSDBuilder’ PowerShell script (scheduled task)
  4. Run ‘Ongoing Updates’ PowerShell script (scheduled task)
  5. Run MDT Task Sequence
  6. Rinse and Repeat (Steps 2-6)
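
Steps 3 and 4 can be wired up as Windows scheduled tasks. A minimal sketch of registering step 3 follows; the script path, task name, and trigger are assumptions, not part of the original scripts, so adjust them to your environment:

```powershell
# Sketch: register step 3 ('Install/Update OSDBuilder') as a scheduled task.
# The script path and task name below are hypothetical. Step 4 would be
# registered the same way with its own script.
$action  = New-ScheduledTaskAction -Execute 'powershell.exe' `
    -Argument '-NoProfile -ExecutionPolicy Bypass -File "C:\Scripts\Install-UpdateOSDBuilder.ps1"'

# A simple weekly trigger; Patch Tuesday floats, so pick a day/time that
# reliably lands after it, or adjust the schedule to taste.
$trigger = New-ScheduledTaskTrigger -Weekly -DaysOfWeek Wednesday -At 2am

Register-ScheduledTask -TaskName 'OSDBuilder-ModuleUpdate' `
    -Action $action -Trigger $trigger -RunLevel Highest
```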

As you can see, you have a fully automated cycle that spits out a new #Citrix / #VMware master image with a fully updated OS.

Note: OSDBuilder follows the WSUS release schedule, not Windows Update. Read more here: https://www.osdeploy.com/blog/microsoft-update-releases.

I’ll break this blog into 3 sections: Install/Update Module, First Run, and Ongoing Updates. A big thank you to Julian for setting up the base code.

Install/Update Module

This section installs or updates the necessary PowerShell modules. If a module doesn’t exist at all, it is installed; if a newer version is found, the latest is installed. It’s broken into two module sections: OSDBuilder, the main module, and OSDSUS, the module responsible for tracking Windows updates.
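
The per-module decision the script makes can be sketched as a small helper. `Get-ModuleAction` is a hypothetical name; in the real script the installed version comes from the module folder on disk and the gallery version from `Find-Module`:

```powershell
# Sketch of the install-or-update decision made for each module
# (OSDBuilder and OSDSUS). Get-ModuleAction is a hypothetical helper.
function Get-ModuleAction {
    param(
        [version]$InstalledVersion,   # $null when the module is absent
        [version]$GalleryVersion      # latest version in the PowerShell Gallery
    )
    if (-not $InstalledVersion)                { return 'Install'  }
    if ($InstalledVersion -lt $GalleryVersion) { return 'Update'   }
    return 'UpToDate'
}

Get-ModuleAction -InstalledVersion '19.1.1' -GalleryVersion '19.2.0'   # Update
```

Casting to `[version]` matters: comparing version folder names as strings would sort "19.10.1" before "19.2.0".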

Latest release notes:

OSDBuilder – https://osdbuilder.osdeploy.com/release

#==========================================================================
#
# Automating Reference Image with OSDBuilder
#
# AUTHOR: Julian Mooren (https://citrixguyblog.com)
# DATE  : 03.18.2019
#
# PowerShell Template by Dennis Span (http://dennisspan.com)
#
#==========================================================================

# define Error handling
# note: do not change these values
$global:ErrorActionPreference = "Stop"
if($verbose){ $global:VerbosePreference = "Continue" }

# FUNCTION DS_WriteLog
#==========================================================================
Function DS_WriteLog {
    <#
        .SYNOPSIS
        Write text to this script's log file
        .DESCRIPTION
        Write text to this script's log file
        .PARAMETER InformationType
        This parameter contains the information type prefix. Possible prefixes and information types are:
            I = Information
            S = Success
            W = Warning
            E = Error
            - = No status
        .PARAMETER Text
        This parameter contains the text (the line) you want to write to the log file. If text in the parameter is omitted, an empty line is written.
        .PARAMETER LogFile
        This parameter contains the full path, the file name and file extension to the log file (e.g. C:\Logs\MyApps\MylogFile.log)
        .EXAMPLE
        DS_WriteLog -InformationType "I" -Text "Copy files to C:\Temp" -LogFile "C:\Logs\MylogFile.log"
        Writes a line containing information to the log file
        .EXAMPLE
        DS_WriteLog -InformationType "E" -Text "An error occurred trying to copy files to C:\Temp (error: $($Error[0]))" -LogFile "C:\Logs\MylogFile.log"
        Writes a line containing error information to the log file
        .EXAMPLE
        DS_WriteLog -InformationType "-" -Text "" -LogFile "C:\Logs\MylogFile.log"
        Writes an empty line to the log file
    #>
    [CmdletBinding()]
    Param( 
        [Parameter(Mandatory=$true, Position = 0)][ValidateSet("I","S","W","E","-",IgnoreCase = $True)][String]$InformationType,
        [Parameter(Mandatory=$true, Position = 1)][AllowEmptyString()][String]$Text,
        [Parameter(Mandatory=$true, Position = 2)][AllowEmptyString()][String]$LogFile
    )
 
    begin {
    }
 
    process {
     $DateTime = (Get-Date -format dd-MM-yyyy) + " " + (Get-Date -format HH:mm:ss)
 
        if ( $Text -eq "" ) {
            Add-Content $LogFile -value ("") # Write an empty line
        } Else {
         Add-Content $LogFile -value ($DateTime + " " + $InformationType.ToUpper() + " - " + $Text)
        }
    }
 
    end {
    }
}
#==========================================================================

################
# Main section #
################

# Custom variables 
$BaseLogDir = "E:\Logs"                               
$PackageName = "OSDBuilder"

# Global variables
$date = Get-Date -Format yyyy-MM-dd-HHmm
$StartDir = $PSScriptRoot # the directory path of the script currently being executed
$LogDir = (Join-Path $BaseLogDir $PackageName).Replace(" ","_")
$LogFileName = "$PackageName-$date.log"
$LogFile = Join-path $LogDir $LogFileName


# Create the log directory if it does not exist
if (!(Test-Path $LogDir)) { New-Item -Path $LogDir -ItemType directory | Out-Null }

# Create new log file (overwrite existing one)
New-Item $LogFile -ItemType "file" -force | Out-Null

# ---------------------------------------------------------------------------------------------------------------------------

DS_WriteLog "I" "START SCRIPT - $PackageName" $LogFile
DS_WriteLog "-" "" $LogFile

#################################################
# Update OSDBuilder PowerShell Module           #
#################################################

DS_WriteLog "I" "Looking for installed OSDBuilder Module..." $LogFile

try {
      $Version =  Get-ChildItem -Path "C:\Program Files\WindowsPowerShell\Modules\OSDBuilder" | Sort-Object LastAccessTime -Descending | Select-Object -First 1
      DS_WriteLog "S" "OSDBuilder Module is installed - Version: $Version" $LogFile
     } catch {
              DS_WriteLog "E" "An error occurred while looking for the OSDBuilder PowerShell Module (error: $($error[0]))" $LogFile
              Exit 1
             }

DS_WriteLog "I" "Checking for newer OSDBuilder Module in the PowerShell Gallery..." $LogFile

try {
      $NewBuild = Find-Module -Name OSDBuilder
      DS_WriteLog "S" "The newest OSDBuilder Module is Version: $($NewBuild.Version)" $LogFile
     } catch {
              DS_WriteLog "E" "An error occurred while looking for the OSDBuilder PowerShell Module (error: $($error[0]))" $LogFile
              Exit 1
             }

if ([version]$Version.Name -lt $NewBuild.Version) {
    try {
        DS_WriteLog "I" "Update is available. Update in progress..." $LogFile
        OSDBuilder -Update
        DS_WriteLog "S" "OSDBuilder updated successfully to Version: $($NewBuild.Version)" $LogFile
    } catch {
        DS_WriteLog "E" "An error occurred while updating the OSDBuilder Module (error: $($error[0]))" $LogFile
        Exit 1
    }
} else {
    DS_WriteLog "I" "Newest OSDBuilder is already installed." $LogFile
}

DS_WriteLog "I" "Trying to Import the OSDBuilder Module..." $LogFile

try {
        Import-Module -Name OSDBuilder -Force
        DS_WriteLog "S" "OSDBuilder Module imported successfully" $LogFile
     }  catch {
              DS_WriteLog "E" "An error occurred while importing the OSDBuilder Module (error: $($error[0]))" $LogFile
              Exit 1
             }

DS_WriteLog "-" "" $LogFile

#################################################
# Update OSDSUS PowerShell Module               #
#################################################

DS_WriteLog "I" "Looking for installed OSDSUS Module..." $LogFile


try {
      $Version =  Get-ChildItem -Path "C:\Program Files\WindowsPowerShell\Modules\OSDSUS" | Sort-Object LastAccessTime -Descending | Select-Object -First 1
      DS_WriteLog "S" "OSDSUS Module is installed - Version: $Version" $LogFile
     } catch {
              DS_WriteLog "E" "An error occurred while looking for the OSDSUS PowerShell Module (error: $($error[0]))" $LogFile
              Exit 1
             }

DS_WriteLog "I" "Checking for newer OSDSUS Module in the PowerShell Gallery..." $LogFile


try {
      $NewBuild = Find-Module -Name OSDSUS
      DS_WriteLog "S" "The newest OSDSUS Module is Version: $($NewBuild.Version)" $LogFile
     } catch {
              DS_WriteLog "E" "An error occurred while looking for the OSDSUS PowerShell Module (error: $($error[0]))" $LogFile
              Exit 1
             }



if ([version]$Version.Name -lt $NewBuild.Version) {
    try {
        DS_WriteLog "I" "Update is available. Update in progress..." $LogFile
        Update-OSDSUS
        DS_WriteLog "S" "OSDSUS updated successfully to Version: $($NewBuild.Version)" $LogFile
    } catch {
        DS_WriteLog "E" "An error occurred while updating the OSDSUS Module (error: $($error[0]))" $LogFile
        Exit 1
    }
} else {
    DS_WriteLog "I" "Newest OSDSUS is already installed." $LogFile
}


DS_WriteLog "I" "Trying to Import the OSDSUS Module..." $LogFile


try {
        Import-Module -Name OSDSUS -Force
        DS_WriteLog "S" "OSDSUS Module imported successfully" $LogFile
     }  catch {
              DS_WriteLog "E" "An error occurred while importing the OSDSUS Module (error: $($error[0]))" $LogFile
              Exit 1
             }


DS_WriteLog "-" "" $LogFile


# ---------------------------------------------------------------------------------------------------------------------------


DS_WriteLog "-" "" $LogFile
DS_WriteLog "I" "End of script" $LogFile

Get-Content $LogFile -Verbose

First Run

This section covers how to get started by importing your OS media into OSDBuilder. It only needs to be run once.

OSDBuilder

Here is what happens:

  • Install/Update OSDbuilder
  • Import Server 2016 ISO into OSDBuilder
    • lives in the ‘OSImport’ directory
  • Update OS with latest updates
    • lives in the ‘OSMedia’ directory

The next time you run OSDBuilder, you don’t have to import the media. You can simply run the commands in the ‘Ongoing Updates’ section below. OSDBuilder will look inside the ‘OSMedia’ folder, check for any updates since the last time it was run, and apply the delta.

This code focuses on Server 2016; however, the same code applies to other operating systems.

# https://citrixguyblog.com/2019/03/19/osdbuilder-reference-image-on-steroids/

Import-Module -Name OSDBuilder

# Needs to be set for the rest of this to work
Get-OSBuilder -SetPath "E:\OSDBuilder"

# Create mount point for Windows Server 2016 iso
& subst F: \\Fileshare-p1\Files$\Citrix\Microsoft\2016-ISO

# Select index 2 (Standard w/Desktop Experience), go right to the updates, skip the ogv menu
Import-OSMedia -Verbose -ImageIndex 2 -SkipGrid

# Now we update the media
# the -SkipComponentCleanup argument is critical for us - without it, the PVS Target Device driver will break if installed during a task sequence 
Get-OSMedia | Sort ModifiedTime -Descending | Select -First 1 | Update-OSMedia -Download -Execute -SkipComponentCleanup

# With the updates complete, we create a build task
New-OSBuildTask -TaskName 'Task' -CustomName 'Test' -EnableNetFx3 -EnableFeature

# Cleanup mount point for 2016 iso
& subst F: /D

Note: You’ll want to change the paths to match your needs. The Get-OSBuilder -SetPath "E:\OSDBuilder" line sets where OSDBuilder creates its directory/file/folder structure; in my case I put it on another drive. You’ll also want to change the path to your ISO location. And take note of -SkipComponentCleanup: you’ll need it if you are using Citrix Provisioning Services.

Ongoing Updates

The ‘Ongoing Updates’ section gets and applies Windows Updates to the most recently updated ‘OSMedia’ directory, meaning you only apply the delta since the previous run. If you want to apply updates from scratch, that’s configurable as well.

#==========================================================================
#
# Automating Reference Image with OSDBuilder
#
# AUTHOR: Julian Mooren (https://citrixguyblog.com)
# DATE  : 03.18.2019
#
# PowerShell Template by Dennis Span (http://dennisspan.com)
#
#==========================================================================


# define Error handling
# note: do not change these values
$global:ErrorActionPreference = "Stop"
if($verbose){ $global:VerbosePreference = "Continue" }


# FUNCTION DS_WriteLog
#==========================================================================
Function DS_WriteLog {
    <#
        .SYNOPSIS
        Write text to this script's log file
        .DESCRIPTION
        Write text to this script's log file
        .PARAMETER InformationType
        This parameter contains the information type prefix. Possible prefixes and information types are:
            I = Information
            S = Success
            W = Warning
            E = Error
            - = No status
        .PARAMETER Text
        This parameter contains the text (the line) you want to write to the log file. If text in the parameter is omitted, an empty line is written.
        .PARAMETER LogFile
        This parameter contains the full path, the file name and file extension to the log file (e.g. C:\Logs\MyApps\MylogFile.log)
        .EXAMPLE
        DS_WriteLog -InformationType "I" -Text "Copy files to C:\Temp" -LogFile "C:\Logs\MylogFile.log"
        Writes a line containing information to the log file
        .EXAMPLE
        DS_WriteLog -InformationType "E" -Text "An error occurred trying to copy files to C:\Temp (error: $($Error[0]))" -LogFile "C:\Logs\MylogFile.log"
        Writes a line containing error information to the log file
        .EXAMPLE
        DS_WriteLog -InformationType "-" -Text "" -LogFile "C:\Logs\MylogFile.log"
        Writes an empty line to the log file
    #>
    [CmdletBinding()]
    Param( 
        [Parameter(Mandatory=$true, Position = 0)][ValidateSet("I","S","W","E","-",IgnoreCase = $True)][String]$InformationType,
        [Parameter(Mandatory=$true, Position = 1)][AllowEmptyString()][String]$Text,
        [Parameter(Mandatory=$true, Position = 2)][AllowEmptyString()][String]$LogFile
    )
 
    begin {
    }
 
    process {
     $DateTime = (Get-Date -format dd-MM-yyyy) + " " + (Get-Date -format HH:mm:ss)
 
        if ( $Text -eq "" ) {
            Add-Content $LogFile -value ("") # Write an empty line
        } Else {
         Add-Content $LogFile -value ($DateTime + " " + $InformationType.ToUpper() + " - " + $Text)
        }
    }
 
    end {
    }
}
#==========================================================================

################
# Main section #
################

# Custom variables 
$BaseLogDir = "E:\Logs"                               
$PackageName = "OSDBuilder"

# OSDBuilder variables 
$OSDBuilderDir = "E:\OSDBuilder"    
$TaskName = "CitrixVDA2"  # Use the task name without the "OSBuild" prefix or file extension - Example: "OSBuild Build-031819.json" --> "Build-031819"

#MDT variables
$MDTShare = "E:\DeploymentShare"


# Global variables
$date = Get-Date -Format yyyy-MM-dd-HHmm
$StartDir = $PSScriptRoot # the directory path of the script currently being executed
$LogDir = (Join-Path $BaseLogDir $PackageName).Replace(" ","_")
$LogFileName = "$PackageName-$date.log"
$LogFile = Join-path $LogDir $LogFileName


# Create the log directory if it does not exist
if (!(Test-Path $LogDir)) { New-Item -Path $LogDir -ItemType directory | Out-Null }

# Create new log file (overwrite existing one)
New-Item $LogFile -ItemType "file" -force | Out-Null


# ---------------------------------------------------------------------------------------------------------------------------


DS_WriteLog "I" "START SCRIPT - $PackageName" $LogFile
DS_WriteLog "-" "" $LogFile


#################################################
# Update of the OS-Media                        #
#################################################

DS_WriteLog "I" "Starting Update of OS-Media" $LogFile

try {
        $StartDTM = (Get-Date)
        Get-OSBuilder -SetPath $OSDBuilderDir
        DS_WriteLog "S" "Set OSDBuider Path to $OSDBuilderDir" $LogFile
        $OSMediaSource = Get-ChildItem -Path "$OSDBuilderDir\OSMedia" | Sort-Object LastAccessTime -Descending | Select-Object -First 1
        Update-OSMedia -Name $($OSMediaSource.Name) -Download -Execute -SkipComponentCleanup
        $EndDTM = (Get-Date)
        DS_WriteLog "S" "Update-OSMedia completed successfully" $LogFile
        DS_WriteLog "I" "Elapsed Time: $(($EndDTM-$StartDTM).TotalMinutes) Minutes" $LogFile
     }  catch {
              DS_WriteLog "E" "An error occurred while updating the OS-Media (error: $($error[0]))" $LogFile
              Exit 1
             }


#################################################
# Creation of the New-OSBuild                   #
#################################################

DS_WriteLog "I" "Creating New-OSBuild" $LogFile

try {
        $StartDTM = (Get-Date)
        New-OSBuild -ByTaskName $TaskName -Execute -SkipComponentCleanup
        $EndDTM = (Get-Date)  
        DS_WriteLog "S" "OS-Media Creation for Task $TaskName completed successfully" $LogFile
        DS_WriteLog "I" "Elapsed Time: $(($EndDTM-$StartDTM).TotalMinutes) Minutes" $LogFile
     }  catch {
              DS_WriteLog "E" "An error occurred while creating the OS-Media (error: $($error[0]))" $LogFile
              Exit 1
             }


#################################################
# Import the OS-Media to the MDT-Share          #
#################################################

DS_WriteLog "I" "Searching for OS-Build Source Directory" $LogFile

try {
        $OSBuildSource = Get-ChildItem -Path "$OSDBuilderDir\OSBuilds" | Sort-Object LastAccessTime -Descending | Select-Object -First 1
        DS_WriteLog "S" "Found the latest OS-Build directory - $($OSBuildSource.FullName) " $LogFile
     }  catch {
              DS_WriteLog "E" "An error occurred while searching the latest OS-Build directory (error: $($error[0]))" $LogFile
              Exit 1
             }


DS_WriteLog "I" "Importing Microsoft Deployment Toolkit PowerShell Module" $LogFile

try {
        Import-Module "C:\Program Files\Microsoft Deployment Toolkit\Bin\MicrosoftDeploymentToolkit.psd1"
        DS_WriteLog "S" "MDT PS Module got imported successfully" $LogFile
     }  catch {
              DS_WriteLog "E" "An error occurred while importing the MDT PowerShell Module (error: $($error[0]))" $LogFile
              Exit 1
             }


DS_WriteLog "I" "Adding MDT Drive" $LogFile


try {
        New-PSDrive -Name "DS001" -PSProvider "MDTProvider" -Root $MDTShare -Description "MDT Deployment Share"
        DS_WriteLog "S" "Created MDT Drive" $LogFile
     }  catch {
              DS_WriteLog "E" "An error occurred while creating the MDT Drive (error: $($error[0]))" $LogFile
              Exit 1
             }


DS_WriteLog "I" "Importing OS-Build to MDT" $LogFile

try {
        $date = Get-Date -Format yyyy-MM-dd-HHmm
        New-Item -Path "DS001:\Operating Systems\2016\OSDBuilder-$date" -ItemType "Directory"
        Import-MDTOperatingSystem -Path "DS001:\Operating Systems\2016\OSDBuilder-$date" -SourcePath "$($OSBuildSource.FullName)\OS" -DestinationFolder "OSDBuilder-2016-$date"
        DS_WriteLog "S" "Imported latest OS-Build" $LogFile
     }  catch {
              DS_WriteLog "E" "An error occurred while importing the OS-Build (error: $($error[0]))" $LogFile
              Exit 1
             }



try {
        Remove-PSDrive -Name "DS001"
        DS_WriteLog "S" "Removed the MDT Drive" $LogFile
     }  catch {
              DS_WriteLog "E" "An error occurred while removing the MDT Drive (error: $($error[0]))" $LogFile
              Exit 1
             }




# ---------------------------------------------------------------------------------------------------------------------------


DS_WriteLog "-" "" $LogFile
DS_WriteLog "I" "End of script" $LogFile

You may have to edit these variables based upon your configuration.

Note that the ‘TaskName’ action is done in the ‘New-OSBuild’ section. This part can install any roles, features, or languages you may want to include that you don’t want to perform in your task sequence. If you just need Windows Updates, you can remove that whole ‘New-OSBuild’ section.
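
If you do trim it down, the updates-only flow reduces to roughly the OSDBuilder calls already shown above (a sketch, reusing the paths and the -SkipComponentCleanup switch from the scripts in this post):

```powershell
# Sketch: updates-only run with the New-OSBuild step removed. Paths and
# switches are the ones used in the scripts above; the logging and MDT
# import steps would wrap around this exactly as before.
Import-Module -Name OSDBuilder
Get-OSBuilder -SetPath 'E:\OSDBuilder'
Get-OSMedia | Sort-Object ModifiedTime -Descending | Select-Object -First 1 |
    Update-OSMedia -Download -Execute -SkipComponentCleanup
```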

Also, be sure to modify the MDT import directory to match how you’re structuring your operating systems.

Mine looks something like this:

If you need to check out the logs to diagnose or troubleshoot issues, everything gets logged to the ‘log directory’ you specify.

Summary

There you have it. You now have a way to automate updating your Windows OS media using OSDBuilder, which then gets imported into your MDT environment.

In the next post I’ll go over the task sequence and how I’ve set up most of my installs.



Building multiple Citrix Images with one Microsoft MDT task sequence

Continuing the discussion of automating the Citrix image build process with MDT: one small problem is that there’s never just one image to do. You probably have an image for a couple of different lines of business, another image for those three developers, an administrator image, and maybe a few more images for other miscellaneous reasons.

At one point, the team I consulted for called their Citrix desktop the ‘Unicorn Desktop’. We thought we could get away with having just one image. Very quickly we were reeled back to reality. While we keep striving to maintain just one image, we are always faced with scenarios that keep adding more to the list. Technologies like App-V, App-V Scheduler, PowerShell, ControlUp, and other automation vendors can help decrease the number of images you maintain, but inevitably my guess is you’ll always have more than one.

If you’re using MDT/SCCM, you’ll want to consolidate your efforts as much as possible. This means we’ll at least strive to have only one task sequence, the ‘Unicorn Task Sequence’ we’ll call it. So how does this look?

First, let’s start by examining the customsettings.ini.

[Settings]
Priority=Init,Image,Default
Properties=MyCustomProperty,BIOSVersion,SMBIOSBIOSVersion,ProductVersion,ComputerSerialNumber,MAC,OSVER

[Init]
ComputerSerialNumber=#Right("%SerialNumber%",7)#
MAC=#Replace("%MacAddress001%",":","")# 

[Image]
Subsection=%MAC%

[000000001234]
OSDComputername=CTX16LOB01
OSVER=16
TaskSequenceID=Prod

[000000001111]
OSDComputername=CTX16LOB2
OSVER=16
TaskSequenceID=test

[223344556688]
OSDComputername=CTX12DEV01
OSVER=12
TaskSequenceID=Steve-Shortened

[Default]
_SMSTSORGNAME=AAA
OSInstall=Y
DoNotCreateExtraPartition=YES
UserDataLocation=AUTO
TimeZoneName=Central Standard Time

SLShare=\\monitor.domain.com\Logs$
ScanStateArgs=/ue:*\* /ui:AAA\*
HideShell=YES
ApplyGPOPack=NO

SkipAdminPassword=YES
AdminPassword=supersecret
SkipApplications=YES
SkipBDDWelcome=YES
SkipBitLocker=YES
SkipComputerBackup=YES
SkipComputerName=YES
SkipDeploymentType=YES
SkipDomainMembership=YES
SkipUserData=YES
SkipFinalSummary=YES
SkipLocaleSelection=YES
SkipPackageDisplay=YES
SkipProductKey=YES
SkipRoles=YES
SkipSummary=YES
SkipTaskSequence=YES
SkipTimeZone=YES

DeploymentType=NEWCOMPUTER
SkipCapture=YES

;FinishAction=REBOOT

EventService=https://monitor.domain.com:9800

The [Settings] section tells MDT how to traverse the different sections below.


When the MDT process starts, it begins with the ‘Init’ section, then moves on to ‘Image’, then ‘Default’. The ‘Properties=’ line defines variables that are used in the different sections.

In our case, the important variable is MAC, which holds the machine’s MAC address.

When the machine boots, MDT populates the MAC variable with the machine’s MAC address, stripped of colons. Example: MAC=001122334455
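
CustomSettings.ini evaluates the #...# expressions with VBScript; here are the equivalent transforms, shown in PowerShell purely for illustration (the serial number and MAC address values are made up):

```powershell
# Equivalents of the CustomSettings.ini expressions:
#   ComputerSerialNumber=#Right("%SerialNumber%",7)#
#   MAC=#Replace("%MacAddress001%",":","")#
$serial = 'VMware-4221234'      # hypothetical serial number
$mac    = '00:00:00:00:12:34'   # hypothetical MAC address

$serial.Substring($serial.Length - 7)   # last 7 characters, like Right(...,7)
$mac.Replace(':','')                    # colons stripped: 000000001234
```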

After the ‘Init’ section, MDT moves on to ‘Image’.

From here, MDT goes to the subsection named by the MAC variable. Subsections are defined with brackets, e.g. [subsection], and can contain anything. In our case, the subsections are named after the MAC address and define a couple of other variables. If the machine you are using has a MAC address of ‘000000001234’, MDT will process that subsection.

Here is what the actual VMs look like on your hypervisor of choice.

Now that we are in the subsection [000000001234], we define a few variables: OSDComputername, OSVER, and TaskSequenceID. OSDComputername is what we will name the computer in Windows; this should match the VM name as well. OSVER defines our operating system version and gets called later in the actual task sequence. TaskSequenceID is the ID of the task sequence to use; you can get this from the MDT console. You can see in the examples that we call different task sequences depending on which machine (MAC) we are using. Now, I know the title of this blog says ‘one’ task sequence, and that is true for our production runs. However, we do have other task sequences defined, for example when we want to run a shortened version, or when we want to skip the OS install and just test one or two application installs.

After the specific MAC subsection is processed, MDT moves on to the [Default] section. This section defines lots of settings for how you want MDT to operate, alert, and prompt. I won’t go into each one in detail here, as plenty of websites already do; these are simply the settings I use for our production MDT environment.

By the time the [Default] section is done, the task sequence you named in ‘TaskSequenceID’ should be starting. Once it gets to the ‘Install’ part, we create separate groups/folders for our operating systems. We then add a condition checking whether the OSVER variable equals the value we chose in the customsettings.ini. (We could also key off the machine name here too.)

On the ‘Server 2016’ group, if OSVER equals ‘16’, it installs Server 2016. On the ‘Server 2012R2’ group, we change the OSVER value to ‘12’. After the OS is installed, the task sequence just moves down the list. ‘State Restore’ is the TS section that contains most of your image content, such as application installs, registry settings, custom PowerShell scripts, etc.

Notice we have a ‘Commands’ group. This contains steps such as creating log directories, setting the time, and disabling UAC, IPv6, System Restore, the page file, and other related settings. ‘Patches’ and ‘PreReqs’ are groups you may need to apply specific patches your OS/WIM doesn’t already have, perhaps an out-of-band Windows update or something similar. The ‘Global Installs’ group contains applications that are installed no matter which image we are deploying; this is where I put C++ redistributables, Office, Silverlight, Chrome, Firefox, Adobe Reader, BIS-F, etc. ‘Roles and Features’ contains unique Windows roles and features we want installed. ‘Citrix Installs’ contains all the necessary Citrix installs: Receiver, VDA, Connection Quality Indicator, PVS Target Device, etc.

The three blurred groups are the specific images you are creating. Maybe the first one is ‘Admin Image’, ‘Line of Business 1’, or ‘Developers’. In my case we’ll call it ‘Admin Image’. This image contains the necessary tools to administer the environment, such as ADUC, Citrix Studio, and all the rest of the necessary consoles.

As you can see above, I’m creating another condition: if the computer name (the OSDComputername variable) is like ‘CTX%ADM%’. So if the VM/machine name were ‘CTXVDA-ADM001’, MDT would execute this portion of the task sequence.
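
MDT conditions use SQL-style % wildcards, while PowerShell’s -like uses *; for illustration, the same membership check in PowerShell:

```powershell
# MDT's "like 'CTX%ADM%'" condition, expressed with PowerShell's -like
# operator (* replaces the SQL-style % wildcard):
'CTXVDA-ADM001' -like 'CTX*ADM*'   # True  - this VM runs the Admin group
'CTX16LOB01'    -like 'CTX*ADM*'   # False - the group is skipped
```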

The ‘Final Installs’ group contains any final activities. This is actually where I put my AV install, miscellaneous security fixes, compliance requirements, etc.

The ‘Seal’ group contains the final domain join, MDT logging and notifications/alerts, and the BIS-F seal.

I hope you can now see how you can have a bunch of ‘shell’ VMs named according to which image/task sequence you want to execute. With this approach we can start simultaneous machine builds without stepping on any toes. You can size the VMs to whatever CPU/memory you like; I size them pretty big because I want this process to go quickly, and since they aren’t all on at once, I’m not too concerned about resources.

If you are just getting started, I would encourage you to head over to https://xenappblog.com/ and check out the ‘Automation Framework’. Eric does an awesome job of simplifying the install and giving you the foundation you need to take this whole process from nothing to a production task sequence with little effort.

In future blog posts I’d like to go into a little more detail on the Task Sequence itself, how I create the OS WIM (OSDbuilder), Different types of alerting/monitoring I’m performing, and some other tips and tricks that you might find useful.



Citrix + Microsoft MDT + Powershell + Tools: The Beginning.

Building golden images, master images, reference images (whatever you want to call them) is a necessity in today’s world. All too often I see Citrix customers using persistent VMs built manually: sometimes 20-50 of the same VM, going through the same manual clicks for upgrades, the same manual motions for software pushes, Windows updates, etc. No more, please!

Citrix PVS and MCS have helped pave the way for non-persistent VMs. <insert tangent> I’d really like to see this approach take over in the infrastructure role world too. Think stateless VMs for Citrix ADC, Delivery Controllers, StoreFront, and other infrastructure roles. <end tangent> As soon as a persistent machine is built, your guess is as good as mine as to what’s changed on it from day to day. Maybe someone ran a script that changed something, maybe a bad Windows update came in, maybe SCCM accidentally pushed a bunch of software to it; the list goes on.

MDT isn’t the only product for building golden images. You can certainly use SCCM, Puppet, Terraform, Ansible, Chocolatey, Jenkins, and so on. In my opinion, MDT is one of the easiest and most Windows-friendly options, but it’s nice to have other tools in the arsenal to solve whatever use case comes up, like using HashiCorp Packer to create the VM and auto-boot it into the MDT task sequence.

Over the next few months I’ll be sharing some blog posts regarding tips, tricks, gotchas, “how I did it” type examples of ways I’m leveraging MDT.



Logon Simulators – Vendor Options and Breakdowns


Logon simulators should be part of every production and test deployment. They take away the anxiety of worrying about session launch failures, access problems, and back-end issues. In this video presentation I go over a handful of the vendor options that exist today. We also drill down into the pros and cons, as well as actions to make your simulations a success.

Full Video Presentation: https://www.youtube.com/watch?v=ZcxEg5mcQYk


During the presentation I go over various problems that exist today. These problems range from external issues outside our control, to access and back-end issues, all the way to reactive behaviors.

I then go into how Logon Simulators can solve those problems.

These are the various different components that make up a Logon Simulator Product. Each product can have some or all of these components.

I drill down into 5 different vendor product offerings (Citrix, ControlUp, eG Innovations, Goliath Technologies, and Login PI), explaining a bit of the architecture, setup/install, configuration, and reporting for each.

We then get into a feature comparison chart of all the top features.

Lastly, I talk about some best practices on how you can make sure your Logon Simulators are a success.

To view the full presentation, click here: https://www.youtube.com/watch?v=ZcxEg5mcQYk



Citrix Active-Active Datacenters with App Layering and App-V

Citrix Active Active Datacenter with App Layering and App-V (May, 2018)

 

Citrix Synergy is an annual conference where Citrix, Partners, Vendors, and Customers come together to socialize, present, inform, and train everyone on current announcements, new marketing campaigns, certifications, road maps, and general new initiatives.

This year I had the privilege of hosting a ‘Fireside Chat’ with my colleague @Craig_Harmel.  We presented “SYN717: Active-active datacenters with App Layering and App-V”.

In this presentation we talked about how a large healthcare company used Citrix App Layering for agile app delivery while maintaining the up time users expect during its move to a Citrix-only desktop delivery model.  The driving factor in the company’s move to active-active Citrix deployment was an initiative designed to enable remote work for 40% of its workforce. The session discussed how this company overcame challenges including setting up GSLB while meeting CMS security compliance standards and managing layered images, user profiles and virtual apps between datacenters to deliver a fault-tolerant system with high availability.

We covered a wide array of discussion points and software involving App-V, FSLogix, NVIDIA GRID, AppSense (Ivanti), App Layering (Unidesk), Microsoft GPOs, file shares, etc…

If you’re interested in watching a ‘cam’ version of the presentation, it’s located here – SYN 717 Active-Active DC with App Layering and AppV.  Also feel free to review the slide deck – https://verticalagetechnologies.sharefile.com/d-sebd2ec172d2494ab

This was truly a remarkable experience and I’m so grateful to be given the opportunity to speak at the most important #Citrix conference.

Note:  I’d also recommend watching this related video – Citrix Synergy TV – SYN236 – Best practices in multisite scenarios

                                             

Steven Noel, Citrix Architect, Vertical Age Technologies, LLC

Craig Harmel, Sr. Consulting Engineer, ITP


How to Become a Citrix Jedi

 

Presenting at the last MyCUGC event was truly an honor.  See my full blog on “How to Become a Citrix Jedi” located on the MyCUGC website – https://www.mycugc.org/blogs/cugc-blogs/2018/03/12/how-to-be-a-citrix-jedi



A Citrix Active/Active Deployment – Part 4

The last couple weeks have been a hoot.  Here’s the progress that’s been made.

  • XenServer hosts have all been setup (running 7.1)
  • Storage (FC) and NAS have been setup
  • All the VMs have been built, joined to the domain, patched, and prepped.
  • The FMA Site has been configured and settings have been replicated (see ‘delivery controller‘ section)
  • PVS Site has been setup
  • Storefront has been setup
  • Netscaler SDX/VPX have been setup

As you can see we pretty much now have the same components/VMs/Roles in each datacenter.  Interestingly, a new Citrix blog post was just released on Site design: ‘Better than Ever, XenApp/XenDesktop Site Design (v2017)’.  It summarizes some pros and cons of one-site and two-site designs.  We are still moving forward with the two-site approach, as it gives us the best flexibility, high availability, and stability.

The MyCUGC site has an interesting conversation covering different active/active and active/passive scenarios.  It’s worth a read: https://www.mycugc.org/p/fo/st/per=10&sort=0&thread=2096&p=1

In this post I’d like to focus on the Storefront configuration.  For the initial deployment I created two Storefront VMs, installed Storefront, and joined them to my existing Storefront server group.  This way all the existing settings propagate to the new VMs.  I went this route because the Stores need to have identical names and, since the rest of the settings are pretty much the same, propagation does the work for me.  Here’s an article (https://docs.citrix.com/en-us/storefront/3/sf-plan/sf-plan-ha.html) that describes the caveats.  After the propagation, I removed both new VMs from the ‘storefront group’.

We now have a couple things to address:

  1. Aggregation of resources
    1. Export, Edit, and Import of Subscription database file
  2. Multiple Netscaler Gateway per Store
  3. Subscription Synchronization
  4. STA

Aggregation of Resources

The aggregation of resources means Storefront will ‘sort’ through the published apps/desktops in each Site/Farm and present them as a single resource.  For instance, say both Sites have a published desktop called ‘Administrator Desktop’.  Without aggregating the two Sites/Farms you will see two separate icons, one from each Site.  These icons are usually denoted by the name with (1) or (2) appended (Ex: ‘Administrator Desktop (1)’).

What we want to do is aggregate these resources so the user is only presented with one icon, ‘Administrator Desktop’.  In an active/active scenario when User1 clicks on the published resource it launches in Site1.  When User2 clicks on the published resource it launches in Site2, and so on and so forth.

Citrix CTA, George Spiers (@JGSpiers), has a great article on this – http://www.jgspiers.com/storefront-high-availability-optimal-routing/.  He explains what each checkbox does and how it affects the aggregation of resources.  One thing to note is that some default functionality has changed in the ‘Assign Controllers to the User Groups’ area.

Old:

This talks about the order of the Sites to be used in a ‘Failover’ fashion

New:

This talks about the order of the Sites to be used in a ‘random order’.  This second screenshot is how I have my Stores configured.

I’m using the Storefront release that ships with 7.15 LTSR, and from what I can tell George was using Storefront 3.8.  Entering both Sites in this location gives a true balance of resources.

For the aggregation of resources I’m just using the ‘Load balanced’ configuration, as I still want to be able to publish unique resources at each datacenter.  I’m guessing this is the more common option, and it provides more flexibility for publishing specific apps in a specific datacenter, as well as for troubleshooting application launches at both datacenters.

XD7.6 is at Datacenter#1 and CTXONE is at Datacenter#2.  Since XA6.5 is being decommissioned soon, we will not be configuring it in an HA/GSLB fashion.

Export, Edit, and Import of Subscription database file

Basically what happens here is: if you enable Storefront aggregation on an existing Storefront server, the subscriptions get tagged with ‘DefaultAggregationGroup.\’ instead of the ‘Site/Farm’ name.  So what you have to do is:

  1. Before you enable aggregation, export the subscription database file – https://support.citrix.com/article/CTX139343?_ga=2.209223634.1931658170.1508160390-868616312.1493145342
    1.  Add-PSSnapin Citrix.DeliveryServices.SubscriptionsManagement.Commands
    2. Export-DSStoreSubscriptions -StoreName StoreName -FilePath DataFile
  2. Edit the file by doing find/replace
    1. Find/Replace entry for the sitename
      1. Find: SITE-NAME.       (this is your XA/XD 7.x Site Name)
      2. Replace: ‘DefaultAggregationGroup.\’
    2. Using Notepad++ do a Find/Replace on all lines for
      1. find: \$[A-Z][0-9]*(-[0-9]*|[0-9]*)
        1. What this does is remove any of the $**** entries that are used.  You mostly see these in 7.x Sites.
      2. Replace: (leave this empty so the match is removed)
  3. Enable aggregation
  4. Import the new file using the commands from the website above
    1. Import-DSStoreSubscriptions -StoreName StoreName -FilePath FilePath
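The export/edit/import steps above can be scripted end to end.  Here’s a rough PowerShell sketch; the store name, file path, and Site name are placeholders, so adjust for your environment:

```powershell
# Sketch: export subscriptions, retag them for aggregation, then re-import.
# 'Store', the file path, and the Site name are placeholders.
Add-PSSnapin Citrix.DeliveryServices.SubscriptionsManagement.Commands

$store    = 'Store'
$dataFile = 'C:\Temp\subscriptions.txt'
$siteName = 'SITE-NAME'   # your XA/XD 7.x Site name

# Step 1: export the subscription database (do this BEFORE enabling aggregation)
Export-DSStoreSubscriptions -StoreName $store -FilePath $dataFile

# Step 2: retag Site-prefixed entries and strip the $A1-2 style suffixes
(Get-Content $dataFile) | ForEach-Object {
    $_ -replace [regex]::Escape("$siteName."), 'DefaultAggregationGroup.\' `
       -replace '\$[A-Z][0-9]*(-[0-9]*|[0-9]*)', ''
} | Set-Content $dataFile

# Step 3: enable aggregation in the Storefront console, then re-import
Import-DSStoreSubscriptions -StoreName $store -FilePath $dataFile
```

Run it on one server in the Storefront group; the snap-in ships with Storefront.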

Your subscriptions should now be working as normal.

Before:

Notice I also have a 6.5 farm in my storefront group.  Since this farm isn’t being aggregated, the tags don’t change at all, so you shouldn’t have to worry about any of those entries.

After:

Multiple Netscaler Gateway per Store

You basically have two different sets of Netscalers trying to access the same store using the same URL.  So how does Storefront know which Netscaler you are coming from to perform the authentication?  For this you need to configure the ‘VIP’ and callback on the specific gateway entry in Storefront.  Carl Stalhood has some good documentation on this here – http://www.carlstalhood.com/storefront-config-for-netscaler-gateway/#gslb

Subscription Synchronization

Tips from @CStalhood – http://www.carlstalhood.com/storefront-subscriptions/:

  • The store names must be identical in each StoreFront server group.
  • When adding farms (Manage Delivery Controllers) to StoreFront, make sure the farm names are identical in each StoreFront cluster (server group).
  • Load balance TCP 808 for each StoreFront cluster. Use the same VIP you created for SSL Load Balancing of StoreFront. Each datacenter has its own VIP.
  • Run the PowerShell commands detailed at Configure subscription synchronization at Citrix Docs. When adding the remote cluster, enter the TCP 808 Load Balancing VIP in the other datacenter. Run these commands on both StoreFront clusters.
  • Don’t forget to add the StoreFront server computer accounts to the local group CitrixSubscriptionSyncUsers on each StoreFront server.
  1. I basically performed the PowerShell commands on each side.  I used a start-time difference of 5 minutes and the same interval of 10 minutes.  So at Datacenter#1 the synchronization kicks off at 5:00AM and then runs again every 10 minutes; at Datacenter#2 it kicks off at 5:05AM and then runs again every 10 minutes.  Note: you can add multiple stores to synchronize.  Also, we don’t need to add ‘Store2’ to this list.  Instead of adding multiple stores here, we will be pointing ‘Store2’ to ‘Store1’s datastore subscription file.
  2. To address having multiple subscription stores per store, we can have each store share a common ‘datastore location’.  This way users accessing both Stores will share a common subscription location.  We are already addressing the synchronization between Storefront groups at each Datacenter.  Follow this article to share a common location – http://docs.citrix.com/en-us/storefront/3-12/configure-manage-stores/configure-two-stores-share-datastore.html
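The staggered schedule described above looks roughly like this on each Storefront server group.  The cmdlet names come from Citrix’s ‘Configure subscription synchronization’ doc; the cluster name, VIP address, store name, and times are examples:

```powershell
# Sketch: configure pull synchronization from the other datacenter's
# Storefront server group. Run the equivalent on BOTH clusters, each
# pointing at the other's TCP 808 load-balancing VIP.
& "$env:ProgramFiles\Citrix\Receiver StoreFront\Scripts\ImportModules.ps1"

Add-DSSubscriptionsRemoteSyncCluster -clusterName 'DC2' `
    -clusterAddress 'sf-sync-dc2.example.local'   # TCP 808 LB VIP in Datacenter#2

Add-DSSubscriptionsRemoteSyncStore -clusterName 'DC2' -storeName 'Store'

# Datacenter#1 starts at 5:00AM and repeats every 10 minutes;
# on the Datacenter#2 cluster you'd use -startTime '05:05' for the offset.
Add-DSSubscriptionsSyncSchedule -scheduleName 'SyncFromDC2' -startTime '05:00' -repeatMinutes 10
```

Don’t forget the CitrixSubscriptionSyncUsers local-group step from the bullet list above; the sync silently fails without it.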

STA Servers

Since Datacenter#1 Netscalers could be up while the Datacenter#2 XA Site is down, you need to make sure to add the STAs on all Gateway VIPs on the Netscaler and all Netscaler Gateway entries on the Storefront stores.  I’ve seen some diagrams where you create a GSLB or LB VIP for the STA servers.  That is also an option.

For this setup, I simply added 4 STA entries (1 for each delivery controller) to the configuration locations.



A Citrix Active/Active Deployment Part 3

I’m at the point now where Datacenter#2 has the networking, storage, and hosts ready for me to use.  That means it’s VM building time.  You pretty much need the identical components you have for your existing site.  Remember the key focus here is high availability and reliability.  So if one datacenter goes down, the other will be able to take over.  This means we need the same VMs with the same roles.

  • 2x – Storefront Servers
  • 2x – Delivery Controllers
  • 2x – Director Servers
  • 2x – SQL ‘Always On’ Servers
  • 2x – PVS Servers
  • 1x – Licensing Server

Licensing: Usually in Active/Active scenarios you would deploy a Licensing VM and allocate/purchase the same amount of Citrix licenses as the first site.  Since this wasn’t budgeted we will be using a manual failover type of approach when it comes to the licensing server. We’ll be sending both Datacenters to one licensing server.  If that site fails or the primary licensing VM fails, we will manually switch the Citrix Sites to use the backup licensing server.  So technically we will be building a VM, installing the Citrix license role, but not allocating any licenses to it.  This should pass any ‘audit’.  Note:  You technically have 30 days to get the license server up before functionality goes away.  That should be enough time to backup/restore, build new, or whatever.  This backup licensing server will most likely be used in an ‘Ohh SH!…’ moment.

Here is a blurb about ‘Grace Periods’ from Citrix docs – https://docs.citrix.com/en-us/licensing/11-14/technical-overview.html

Grace periods

If product servers lose communication with the License Server, the users and the products are protected by a grace period. The grace period allows the product servers to continue operations as if they were still in communication with the License Server. After the Citrix product checks out a startup license, the product and the License Server exchange “heartbeat” messages every five minutes. The heartbeat indicates to each that they are still up and running. If the product and the License Server don’t send or receive heartbeats, the product lapses into the licensing grace period and licenses itself through cached information.

Citrix sets the grace period. It is typically 30 days but can vary depending upon the product. The Windows Event Log, and other in-product messages, indicate if the product has entered the grace period, the number of hours remaining in the grace period. If the grace period runs out, the product stops accepting connections. After communication is re-established between the product and the License Server, the grace period is reset.

The grace period takes place only if the product has successfully communicated with the License Server at least once.

Grace period example – two sites, both using the same License Server

The connection between Site 1 and the License Server goes down causing Site 1 to go into the grace period, continuing operation and making connections. For concurrent licenses, they can connect up to the maximum concurrent licenses installed. For user/device licenses, they have unlimited connections. When Site 1 reestablishes communication with the License Server, connections are reconciled and no new connections are allowed until they are within normal license limits. Site2 is unaffected and operates as normal.

If the License Server goes down, both sites go into the grace period. Each site allows up to the maximum number of licenses installed. As above, the user/device licenses have no limit.

Storefront:  There are some key considerations when building out the storefront servers with multi-site design in mind.  Please read ‘StoreFront high availability and multi-site configuration’  – https://docs.citrix.com/en-us/storefront/3/sf-plan/sf-plan-ha.html.   Once we get everything setup, I’ll expand upon aggregating sites.  Stay tuned for a future blog post on those specifics.

When you decide whether to set up highly available multi-site configurations for your stores, consider the following requirements and restrictions.

  • Desktops and applications must have the same name and path on each server to be aggregated. In addition, the properties of aggregated resources, such as names and icons, must be the same. If this is not the case, users could see the properties of their resources change when Citrix Receiver enumerates the available resources.
  • Assigned desktops, both pre-assigned and assigned-on-first-use, should not be aggregated. Ensure that Delivery Groups providing such desktops do not have the same name and path in sites that you configure for aggregation.
  • App Controller applications cannot be aggregated.
  • Primary deployments in the same equivalent deployment set must be identical. StoreFront only enumerates and displays to users the resources from the first available primary deployment in a set, since it is assumed that each deployment provides exactly the same resources. Configure separate equivalent deployment sets for deployments that differ even slightly in the resources they provide.
  • If you configure synchronization of users’ application subscriptions between stores on separate StoreFront deployments, the stores must have the same name in each server group. In addition, both server groups must reside within the Active Directory domain containing your users’ accounts or within a domain that has a trust relationship with the user accounts domain.
  • StoreFront only provides access to backup deployments for disaster recovery when all the primary sites in the equivalent deployment set are unavailable. If a backup deployment is shared between multiple equivalent deployment sets, all the primary sites in each of the sets must be unavailable before users can access the disaster recovery resources.

What I did was build my new VMs, install Storefront, and join them to my existing ‘Storefront group’.  This way, everything would have the same names, setup, and configurations.  Afterwards, I disjoined them from the ‘group’ and added only the Storefront servers at Datacenter#2 to their own ‘storefront group’.

Director Servers:  For director you will need to follow the Multi-Site instructions – https://support.citrix.com/article/CTX136165 . The overall goal for this will be to have Director at each Datacenter be configured for ‘Multi-Site configuration’.  We will use GSLB to send users to a LB VIP at either Datacenter.

Provisioning Services: We will treat PVS as separate entities, so build out PVS like you normally would from scratch.  Since we use Unidesk/App Layering, we will set up a 2nd ‘app layering connector’ that points to the Datacenter#2 PVS servers.  That said, I don’t literally have to publish the template at both locations.  ‘Publishing’ compiles all the different layers, which takes time, so instead we could set up a PowerShell sync/transfer script to copy the VHD to the second PVS store.
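A minimal sketch of such a vDisk copy script (the store paths are placeholders; /XO makes robocopy skip files that haven’t changed since the last run):

```powershell
# Sketch: copy published vDisk files from the DC1 PVS store to DC2.
# Paths are placeholders. /XO skips files older than the destination
# copy; /R:2 /W:5 keeps retries short over the WAN link.
$source = '\\pvs-dc1\Store'
$dest   = '\\pvs-dc2\Store'

robocopy $source $dest *.vhd *.avhd *.pvp /XO /R:2 /W:5 /NP /LOG+:C:\Logs\vdisk-sync.log

# Robocopy exit codes 0-7 indicate success; 8 and above mean failures
if ($LASTEXITCODE -ge 8) { Write-Error "vDisk sync reported failures ($LASTEXITCODE)" }
```

You’d still need to import/assign the vDisk on the Datacenter#2 PVS servers after the copy.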

SQL ‘Always On’:  Since we will be deploying a Site at each Datacenter, technically each site should have its own SQL servers.  Deploy how you would normally deploy in a standard one-site configuration.

Delivery Controllers: At this point just build your Site as you normally would.  In another post I’ll go through our options for how both sites will talk to Storefront, and which sets of icons will be available based on settings.  Initially, I’ll be using my homeboy @ryan_c_butler’s replication script to export my current Citrix Site’s information to the Datacenter#2 Citrix Site.  This can also be run as a scheduled task to keep both environments in sync.
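If you want the replication to run unattended, registering it as a scheduled task is straightforward.  The script path, time, and service account below are placeholders:

```powershell
# Sketch: run the Site replication script nightly via Task Scheduler.
# The path, time, and account are placeholders.
$action  = New-ScheduledTaskAction -Execute 'powershell.exe' `
    -Argument '-NoProfile -ExecutionPolicy Bypass -File C:\Scripts\Sync-CitrixSite.ps1'
$trigger = New-ScheduledTaskTrigger -Daily -At '2:00AM'

Register-ScheduledTask -TaskName 'Sync Citrix Site to DC2' `
    -Action $action -Trigger $trigger `
    -User 'DOMAIN\svc_ctxsync' -Password (Read-Host 'Service account password')
```

The account needs rights in both Citrix Sites for the export/import to succeed.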

Over the next week or so I’ll be verifying normal functionality of the site, storefront, netscaler, etc…  After that, I’ll be getting into the Storefront and GSLB configurations.

Stay tuned….

Previous Posts:

Part1 – Introduction – http://verticalagetechnologies.com/index.php/2017/04/11/a-citrix-activeactive-deployment-introduction

Part 2 – http://verticalagetechnologies.com/index.php/2017/06/06/a-citrix-activeactive-deployment-part-2



Citrix App Layering – With Extra Frosting

I’ve had the pleasure of using Citrix App Layering for the past few weeks.  I have to say it’s pretty addictive.  Immediately after using the product I could see its potential.  What started out as a POC/Demo turned into a layering frenzy.  First the OS layer, then the Platform layer, then an App Layer.  Now we have over 40 App Layers, 4 Platform layers, and 2 OS layers, all mixed into 10 different images.  This product is so simple to spin up too.

A few of us on twitter were recently discussing how powerful the product is and some ideas we would like to see implemented in future versions.  I wanted to write this to help let Citrix know we love the product, but want to see more!

For those that want some more information on the basics here are a couple websites to get you going:

During this post I’d like to talk about the ‘Frosting’.  These are the extra features and possible opportunities I’d like to see Citrix add to the product in the future.  Before anyone starts getting down on me, just know that I’m an optimist and a full believer in exploring new ideas to make something better.  Like all good managers will tell you, the day your employees stop coming to you with new ideas is the day you need new employees.  Also, you can’t get what you don’t ask for.  So here goes…

Extra Frosting Ideas:

  1. Folders – Currently there is no way to categorize any layers.  This is fine if you only have a handful of layers, but if your organization is like mine, you’ll soon have 100+ layers.  You do have the option to ‘search’, but us Citrix Admins like our folders (Appcenter > Studio).
  2. Elastic Layers for ALL – Right now the only way you can provide elastic layers is to deploy an ‘App Layering Image’.  For those that want to try your first elastic layer, you’ll need to deploy 1 OS layer, 1 Platform layer, and 1 Elastic image.  Citrix should be able to make this simpler and let us deploy an ‘agent’ to a standalone VM/physical PC.  This is a pretty big one IMO.  When admins are demoing/POC’ing this product they are going to compare it to App-V, AppVolumes, etc…  Right away the product will fail to meet their needs under the ‘virtualizing apps’ way of thinking.  This product does image management first and app virtualization second.  Citrix will need to improve the flexibility of Elastic Layers to truly compete with other vendors in this space.
  3. Provisioning Time – This is probably the number 1 complaint I hear.  The whole process takes too long, and most of the wait comes from ‘publishing the image’, whether it’s to PVS or a hypervisor.  If you’re on XenServer/Hyper-V there aren’t as many conversions, since the VM disk itself is VHD.  However, if you run VMware there are conversions that need to happen to go from VMDK to VHD.  If you want to test a new version, be prepared to wait 30 minutes.  Office isn’t activating?  New version and wait 30 minutes.  Forgot something?  New version and wait 30 minutes.  The more layers you have, the larger the image and the longer you’ll have to wait.  This goes for adding versions too, not just publishing.  When you add a new version it takes a copy of the locally stored OS layer and spins up a new disk on the hypervisor of choice.  This all takes time.  Any way of streamlining or speeding this up would greatly benefit us admins.
  4. Testing Layers – I think of this one like PVS streaming to server.  We need the ability to test layered versions quickly, without having to wait 30 minutes to publish an image.  Now I don’t need this to be ‘production ready’, I just want something that I can test quickly to verify functionality.
  5. Where’s VHDX – the images that App Layering publishes are all VHD, unless you’re provisioning to VMware.  Over the last two years there have been so many enhancements to use VHDX that I feel Citrix needs to get this incorporated.
  6. DR/HA – Currently Citrix/Unidesk recommends that you back up your ELM (virtual appliance) and CIFS location like you normally would any other device.  Their viewpoint is that if ELM goes down, you aren’t really down: elastic layers will still be available, you just won’t be able to make changes to elastic layers or any other layers.  To me this thinking won’t cut it in the enterprise world.  There are teams dedicated to virtualizing apps and managing images.  A missed day could mean a zero-day patch doesn’t get deployed.  There is a ‘layer sync script’ for VMware appliances – https://www.unidesk.com/forum/scripts-and-utilities/unidesk-layersync#comment-42535.  However, I’m currently using XenServer.  I would love to see something built into the product to manage my high availability for me.
  7. Mix/Match OS layers –  This would give us the ability to use the same layer on multiple OS versions.  While I 100% get why they took away this option, I just would rather they let us choose.  If you want to put a checkbox with a disclaimer that it’s not supported on a per layer basis, fine.
  8. Automation/API – Us admins need the ability to script, automate, and control actions.  It would be nice to be able to script updating Chrome in a new layered version.  That’s just the tip of the iceberg, you could pretty much script all individual parts of the published image (windows updates, patches, new DLLs, etc…).  Think MDT on steroids.
  9. Notifications/Alerting – I like knowing whats going on and being able to fine tune the alerts.  Maybe i want to be alerted when the layer is done sealing, or maybe i want to be notified when the layer is done publishing.  Maybe it’s an email or a popup in the task bar.  I just want notification options 😉

I just want to reiterate, I’m loving this product so far.  The ease of use and power to manage images is amazing.  I have no doubt I’ll continue to use it in the future.  These are just a few things I’d love to see in future versions.

Let me know if you have your own ideas to better App Layering.



A Citrix Active/Active Deployment Part 2

Since Part 1 has been published we have learned a little bit more about the overall design of the environment.  Let’s recap the initial requirements.

  • Management has instructed the team to build a Highly Available and Stable environment that spans multiple data centers.
  • Each technology component must be separate in each datacenter, which means no sharing of Citrix resources for the infrastructure-type roles (Broker, Storefront, etc…).
  • We need to be able to test changes in One Datacenter, while not affecting the other datacenter.

Netscalers:  We will have 2 SDX Netscalers at each datacenter.  On each SDX, there will be 1 VPX that’s in an HA pair with a VPX on the 2nd SDX (that houses the Netscaler Gateway, GSLB configuration, Load Balancers, etc…).  Similar to the screenshot below.

Storage and Hosts: There will be hosts and storage placed at each datacenter.  A combination of ESXi and XenServer will be used for the hypervisors.  ESXi will house the infrastructure roles (storefront, delivery controller, PVS servers, etc…).  XenServer will house the session VMs (“Friends don’t let friends get vTax’ed”).  We will have Fibre Channel-connected Dell Compellent storage at each datacenter, providing the storage for our VMs, PVS Stores, etc…  User files will reside in one datacenter and will be replicated to the 2nd datacenter as ‘read only’.  Should the primary DC go down, the secondary would take over.  This means no matter which datacenter a user logs into, they will access their redirected folders, My Documents, etc… in the 1st datacenter.  Note: this is a limit of our storage platform, which follows master/slave replication rules.  This may change over time.

XA/XD Sites:  We chose to deploy two separate sites, one in each DC.  Based on Carl Stalhood’s article (screenshot below) and Chris Gilbert’s article, we could technically get away with deploying 1 site, however we chose to separate this into two due to failure domains.  This will allow us to completely test Site upgrades/updates, PVS image updates, etc… without affecting both datacenters.

Provisioning Services: Each datacenter will have 2 PVS servers for HA.  Each side will connect to its respective SMB share, local to that DC.  We will most likely use a PowerShell script to copy vDisks across.

Now that we have a better picture of the design, the hosts and storage are setup, and a basic picture of the configuration we’ll begin building out the VMs in the 2nd datacenter.  This will look pretty much identical to the first datacenter.

  • 2x Delivery Controllers
  • 2x SQL servers (Always On)
  • 2x PVS Servers
  • 2x Storefront Servers
  • 2x Director Servers
  • 1x Licensing Server

Once I have these VMs built and installed, along with the Netscaler SDX/VPX, I’ll dive a bit deeper on configuring the ‘multi-site’ components such as Delivery Controller, Storefront, and Netscaler configuration.

In the meantime we have some fine-tuning/hardening to do around syncing: Site settings, Storefront subscriptions/stores, and PVS images.  We also need to come up with a game plan for a multi-site Ivanti/AppSense setup.

Stay tuned for Part 3.

________________________

Part 1 – Introduction

Part 2 (this blog)

Part 3

