Update Rollup 5 for System Center 2012 R2 Virtual Machine Manager

This article describes the issues that are fixed in Update Rollup 5 for Microsoft System Center 2012 R2 Virtual Machine Manager. This update rollup contains two updates for System Center 2012 R2 Virtual Machine Manager. One update is for servers, and the other update is for the Administrator Console.

Issues that are fixed in this update rollup

  • When you use the Virtual Machine Manager UI to enable a virtual machine replica, no error is reported. However, the equivalent Windows PowerShell operation does throw an error.
  • Virtual Machine Manager services crash when there is a discrepancy between Windows Server Update Services (WSUS) and the Virtual Machine Manager baseline. If an update is cleaned up from WSUS, the Virtual Machine Manager service crashes when it tries to approve that update, because the resulting ObjectNotFound exception is not handled correctly.
  • The current logic defaults a storage pool to the “fixed” provisioning type whenever the pool supports only one provisioning type. This is incorrect. When AreMultipleProvisioningTypesSupported is false, Virtual Machine Manager marks THIN as unsupported and proceeds with the FIXED type, so logical unit number (LUN) creation fails on storage pools that support only THIN provisioning.
  • To create site-to-site connectivity in current versions, customers have to use IPsec S2S tunnels. S2S GRE tunnels are now enabled to use bandwidth more efficiently.
  • When Virtual Machine Manager is running a job, it holds the lock, and the job cannot be canceled. If you try to cancel the job, the job is still shown as “running.” Any further refresh of the host or the cluster produces a failure notice that states that the lock is held by the running Virtual Machine Manager job.
  • The power savings daily and monthly performance data is not aggregated correctly. As a result, power savings for the month are shown as zero (0) in the Administrator Console. The hourly performance data is reported correctly by the cmdlet.
  • Virtual Machine Manager server setup is updated to install the latest DacFx (SQL Server 2014) for the SQL Server application host.
  • Virtual Machine Manager UI console crashes intermittently with the following exception:
    System.ServiceModel.CommunicationException in OptimizeHostsAction
  • All “Hyper-V Recovery Manager”-related strings have been updated to “Azure Site Recovery.” This is the new product name.
  • Migrating an unprotected virtual machine to a protected virtual machine on the same host currently shows the transfer as the SAN type. This is incorrect. Virtual Machine Manager should show the transfer type as network only when the virtual machine is off, and as virtual machine and storage migration (VSM) when the virtual machine is running, whether it is on the same host or on a different host.
  • Customers who run Virtual Machine Manager at scale experience long load times for the job history.
  • The stored procedure dbo.prc_WLC_GetVmInstanceIdByObjectId fails if the VMId column is empty in any row of the tbl_WLC_VMInstance table. Affected customers will not be able to set up disaster recovery for their virtual machines. Typically, this occurs when the customer has a virtual machine that was created and then upgraded to System Center 2012 Virtual Machine Manager SP1. In this case, enabling protection is blocked for all virtual machines, not just for specific virtual machines.
  • A new LUN that is created on an EMC array pool tries to use an old SMLunId that was previously generated. Therefore, RG creation with multiple LUNs fails, and you receive the following error message:
    26587: SMLunId is re-used for newly created LUN
  • The storage array is unnecessarily locked while data is collected during a provider rescan. The lock is now applied only while data is refreshed.
  • The EnableRG operation for NetApp fails when two providers are used in a fully discoverable model.
  • Creation of a virtual machine to the recovery cloud after failover has occurred on the recovery site fails, and you are told that the cloud doesn’t support protection. Virtual machine migration to RG is blocked in this scenario.
  • If the virtual machine was refreshed, the Administrator Console blocks shutdown of the virtual machine.
  • When test failover (TFO) is completed, the snapshot LUNs are removed from the backend (NetApp). However, Virtual Machine Manager still shows them. Only a provider rescan (not refresh) removes both the LUNs and the pool.
  • Currently, Virtual Machine Manager has only the “NETAPP LUN” option added into the host MPIO devices when we add the host into Virtual Machine Manager. With this update, “NETAPP LUN C-Mode” is added into the host MPIO devices as another option.
  • The System Center Operations Manager object property IsClustered for HostVolume is displayed in the UI without an associated value.
  • When the system is under load, Virtual Machine Customization operations report the following error:
    609: Virtual Machine Manager cannot detect a heartbeat from the specified virtual machine

    The creation of the virtual machine (with customizations) actually succeeds. However, Virtual Machine Manager puts the virtual machine in a failed state because of this job failure. The user can safely dismiss the failure to bring the virtual machine back. However, the user may assume that something went wrong and re-create the virtual machine unnecessarily.

  • Currently, users have to manually update the DHCP extension on all hosts after update rollup installation. This is now automated. After the DHCP extension in the Virtual Machine Manager server’s installation folder is replaced with the latest version, Virtual Machine Manager automatically checks the DHCP extension version on all hosts. If a host has an older version of the DHCP extension, the agent version status is displayed as “DHCP extension needs to be updated” in host properties on the Status page. The user runs the update agent and updates the DHCP extension on the Hyper-V host in the same way as for the Virtual Machine Manager agent. Also, if the VSwitch is a logical switch, the status is shown in “logical switch compliance.” The user can remediate the logical switch; this also updates the DHCP extension on the host.
  • Sometimes the creation of the unattend.xml file will fail when multiple virtual machines are being customized at the same time. (This file is used as part of virtual machine customization.)
  • Virtual Machine Manager host and cluster refresher checks the permissions (ACLs) of file shares that are registered to a host or cluster. When permissions aren’t found, the refresher reports an error, and a button is placed on the host’s or cluster’s properties page to “repair” the share to set the appropriate permissions. If certain permissions are added, the refresher will erroneously report an error even if the required permissions do exist. The “repair” operation, if invoked, will report that it failed to repair the share.
  • Host.GetInstance takes locks on HostVolumes, HostDisks, and HostHbas. However, it releases only the HostVolumes and HostDisks locks and continues to hold the HostHbas lock. Therefore, if a child task in ControlledChildScheduler takes a lock on the host, subsequent child subtasks cannot acquire a lock on the host.
  • Source logical unit is now being set as part of TFO snapshot creation so that DRA is able to look up the snapshot LUN that corresponds to a replica LUN.
  • Virtual Machine Manager Update Remediation shows only check boxes. Because the dialog box doesn’t force all of the update objects to be loaded from the server, data binding leaves the update names blank instead of causing a crash or showing error text.
  • Enabling protection for replication groups with a null target group entity in the parameter set of the WCF call causes a critical exception.
  • Sometimes, Replication Group protection at scale fails because of database issues.
  • In-place migration of a virtual machine to RG can lead to any of the following issues:
    • No destinations are visible for migration.
    • The migration wizard finishes, but no migration job is triggered.
    • Actual data transfer rather than just metadata transfer takes place.
  • The virtual machine vNic is renamed to “Not connected” if the vNic is not connected to a network. However, the name is not being reset to its original name when the connectivity changes. This can cause a lot of confusion in the UI because all vNics appear as “Not connected” even if there is real connectivity.
  • The UnregisterStorageLun task in a replication group fails intermittently because of SQL deadlock.
  • Service deployment fails if a file in a custom resource on the library server has the read-only attribute and the resource is copied directly instead of over the network. Deployment succeeds when network copy is used.
  • Very slow performance occurs when new-scvmconfig is called. new-scvmconfig is required for multiple new virtual machine scenarios. Each successive virtual machine creation takes longer to run through placement as the virtual machine name is generated.
  • A Virtual Machine refresher job hangs indefinitely after you enable maintenance mode on another cluster node. This will cause a deadlock condition in event-based virtual machine refresh jobs. This may occur when something happens during Subscribe or UnsubscribeForEvents. Now the deadlock condition is removed. If there is an error, it will fall back to the VMLightRefresher for that host.
  • A LUN that has no snapshots cannot be registered with Register-LUN.
  • When parallel Register-LUN operations are running for NetApp, multiple SPCs may be created for the same node but for different LUNs. This can happen for the following two reasons:
    • An OOB configuration was done, and an IG was created for each node of the cluster for different LUNs. Although this is possible in NetApp, it is a bad configuration, and Virtual Machine Manager will throw an error.
    • An issue in Virtual Machine Manager could cause parallel operations to create different IGs for the same node and different LUNs.
  • For a discovered ReplicationGroup, the relationship of the LUN to RG is not established.
  • After a customer initializes placement of any or all members of a service configuration that uses a load balancer, the customer can no longer retarget the individual virtual machine configurations of the service configuration. Instead, when the user tries to do this, the user receives a message that states that the service configuration actions are invalid. This blocks the customer from being able to spread virtual machines across host groups.
  • Intermediate-level refresher removes all group sync information.
  • Get-SCReplicationGroup does not return a replication group after a provider was removed and re-added or if the provider was never realized in the array.
  • Planned failover for replication group fails in “PreValidateFailover.”
  • User selection on the Virtual Machine Manager UI grid is lost because Virtual Machine Manager keeps refreshing the object. This severely affects the ability to do multi-select and do operations on scale environments. When the IsSynchronizedWithCurrentItem property is set to True on a data grid, a multi-selection resets to a single selection during a refresh.
  • Virtual Machine Manager UI start-up for a self-service user takes a long time, from 4 to 10 minutes.
  • Cloning a virtual machine with Checkpoint fails with “An item with the same key has already been added” when the cloned virtual machine is placed on a dynamic disk. This blocks the cloning of the impacted virtual machines.
  • Performance data (disk space) is not available in Operations Manager for VMware hosts. Performance data collection (except host disk space) for non-Hyper-V hosts is done through the Windows PowerShell cmdlet Get-SCPerfData. However, for host disk space, Virtual Machine Manager was still using the managed module. Now everything uses the Windows PowerShell cmdlet.
  • The Virtual Machine Manager Administrator Console crashes when the user tries to open the “Add Hyper-V Host And Clusters” wizard. The customer environment produces PRO objects that have guid == guid.empty. These objects are cached on the client side in ClientCache.
  • Library Refresher takes a very long time to run when the library shares contain a huge number of files, and it holds a read lock on the User Role for the duration. Library refresh runs every hour and can currently take up to 50 minutes per complete cycle in a customer environment. During this time, other operations on UserRole may fail.
  • When multiple deployments are performed concurrently, the progress (time remaining) display job of the file copy fails because of an Overflow exception and causes the service to crash.
  • If an administrator or a self-service user grants a new self-service user permission to a service template, the newly added self-service user does not see the service template, or the resources it references, in their session even after authorization is granted.
  • When a user is deleted from Active Directory, the user starts appearing as an invalid SID in Virtual Machine Manager. If an invalid SID is present in the ACLs of a virtual machine, all subsequent modifications (addition or removal of users) to the ACL fail silently.
  • Storage provider refresh causes an exception to occur.
  • Migration of a protected running virtual machine uses a network instead of LiveVSM if the virtual machine is migrated to unprotected storage.
  • The container ID for a tier configuration object is initialized even when the hosts that are appropriate for that tier configuration are not in the scope of the placement attempt.
  • Multiple Create VM jobs fail with a locking exception when you run batch virtual machine creations in parallel.
  • While you are running a batch of 100 or more virtual machine creation scenarios in parallel, each virtual machine creation task does not show any progress for 15 to 20 minutes after you submit the task.
  • If a physical computer profile is created by using a vNic (and a virtual machine network), and more than one host group exists for a logical network that also has that virtual machine network, the host profile is displayed for only one host group when you add a host resource on the “Provisioning Options” page. The profile isn’t displayed for the rest of the host groups.
  • A Disable Replication Group job fails because of a database deadlock condition.
  • HP returns the same SMLUNId for source and replica LUNs. Therefore, the hostdisk-to-LUN association is not established in hostrefresher.
  • Maintenance Packs are disabled because of Operations Manager alerts such as the following:
    Cloud maximum memory usage to fabric memory capacity ratio has reached or exceeded threshold.
  • Add-VMMStorageToComputeClusterOnRack fails, and you receive the following error message:
    Could not find tenant share registered to cluster 43J05R1CC.43J05.nttest.microsoft.com.
  • Virtual Machine Manager encounters critical exceptions during provider rescan.
  • Replication group does not show up in Cloud Properties if pools and LUNs are attached to a child hostgroup.
  • If replication is broken, a critical exception occurs if the replication group is used to perform a disaster recovery (DR) operation.
  • In live migration of a virtual machine from Windows Server 2012 to Windows Server 2012 R2, the operation fails with a critical exception. As a result, live migration from Windows Server 2012 to Windows Server 2012 R2 won’t have the virtual network adapters fixed. This could cause the Virtual Machine Manager database to be inconsistent with the Hyper-V host and also to fail during the migration.
  • Creation of LUN sometimes fails with invalid handle error.
  • After failover, the virtual machines are reported to have protection errors in the Virtual Machine Manager UI although there are actually no errors.
  • A potential race condition in the MOM Discovery Refresher causes intermittent failures in Virtual Machine Manager Operations Manager connections. This can cause Operations Manager connection failures.
  • Cluster Node goes into a pause state intermittently after you refresh the host cluster. As part of reliability improvement, the HA calculation logic was changed to support failed nodes to be ignored. The calculation logic was rewritten, and in the new logic, logical networks are enforced on the switch. If the switch does not have any logical networks marked, the switch is marked as “non-HA,” and Virtual Machine Manager pauses the cluster node.
  • Custom properties are returned as Null after Set-SCVMTemplate is called. When a Virtual Machine Manager object’s attribute (such as Description) is updated through Windows PowerShell ($t | set-scvmtemplate -args), a problem arises in retrieving the CustomProperty parameter data (for example, through $t.CustomProperty or $t.CustomProperties), and it is returned as Null. This occurs because the CustomPropertyIDs of the object are cleared on the engine side during updates.
  • Live migration fails with incompatible switch port settings. If the target Hyper-V virtual switch doesn’t have VLAN configured during the migration, and if the source virtual machine has a virtual network adapter, Virtual Machine Manager tries to create VLAN settings for it and to assign VLAN ID 0 (that is, “VLAN disabled”). But on a virtual switch where no VLAN is configured, the adding of the VLAN setting causes an incompatibility error from Hyper-V, and the migration fails.
  • A critical exception (ArgumentException in StrgArray.addPoolInternal) occurs in the Storage Refresher. Under certain erroneous conditions, the Windows storage service can report multiple storage pools that share the same ObjectId (this should never happen). The storage provider cannot be refreshed, and therefore cannot be managed. The provider cannot even be removed from Virtual Machine Manager to begin again.
  • Operations such as Migration, Store, or Delete on cloned virtual machines leave the virtual machine configuration file on the host. In a Virtual Machine Manager setup that uses cloning heavily, every cloned virtual machine will leave behind a set of virtual machine configurations and save state files after it is deleted or migrated. This consumes significant disk space. In addition, the deletion and migrations all succeed with a warning message that states that they couldn’t clean up the folder.
  • A file share loses its user-set classification when the share goes from managed to unmanaged. Because of a NetApp provider issue, if the provider loses network connectivity to the array, it may not report back any pools even though pools exist. If this happens during a refresh, Virtual Machine Manager assumes that the pools are no longer there and removes the pool records from the database.
  • GroupMasking in Virtual Machine Manager fails to get the MaskingSet from the job. For group masking, if createmaskingset is called with a job, Virtual Machine Manager doesn’t get the masking set on job completion but retries even on success. This is reported to occur only when unmasking to an iSCSI initiator. FC initiators work fine.
  • Template that is based on “SAN Copy capable” VHDX is marked as “Non-SAN Copy Capable” template. The user won’t be able to rapidly provision virtual machines by using Virtual Machine Manager on Nimble storage.
  • prc_WLC_GetUniqueComputerName doesn’t set FoundUniqueName to true even when a unique name is found.
  • After a virtual machine is stored to a library, Refresh Library Share encounters critical failures.
  • A LibraryShare resource does not update the namespace after the SSU data path is updated to be the library share. The namespace for library resources is not updated even after a refresh.
  • A virtual machine is shown as a replica virtual machine after failover operations. For ASR SAN replication scenarios, when the virtual machine is failed over, the virtual machine is shown as a replica virtual machine, and the user cannot make much use of the failed-over virtual machine. The user has to trigger reverse role to fix the replication mode.
  • Deploying a stored virtual machine fails in placement with a critical exception. A critical exception will block deploying a previously stored HA Hyper-V virtual machine from the library to an HA host.
  • Security update: A vulnerability exists in Virtual Machine Manager when it incorrectly validates user roles. The vulnerability could allow elevation of privilege if an attacker logs on to an affected system. An attacker must have valid Active Directory logon credentials and be able to log on with those credentials to exploit the vulnerability. For more information about this security update, see the corresponding article in the Microsoft Knowledge Base.
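Several of the fixes above harden the service against an unhandled exception taking the whole process down, such as the crash that occurs when a baseline update has been cleaned up from WSUS and the ObjectNotFound exception goes unhandled. As an illustrative sketch only (the names below are hypothetical, not VMM code), the corrected pattern catches the not-found case per item and records the discrepancy instead of letting the exception propagate:

```python
class ObjectNotFoundError(Exception):
    """Raised when an update referenced by the baseline no longer exists."""

def approve_updates(baseline_ids, wsus_store):
    """Approve each baseline update; record (don't crash on) deleted ones."""
    approved, missing = [], []
    for update_id in baseline_ids:
        try:
            if update_id not in wsus_store:
                raise ObjectNotFoundError(update_id)
            approved.append(update_id)
        except ObjectNotFoundError:
            # An unhandled exception here would take the whole service down;
            # log the discrepancy and continue with the remaining updates.
            missing.append(update_id)
    return approved, missing
```

The key point is that a per-item failure is contained to that item, so one stale baseline entry can no longer stop approval of every other update.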
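The thin-versus-fixed provisioning fix above comes down to honoring the capability the pool actually reports instead of defaulting to FIXED when only one type is supported. A minimal sketch of the corrected decision logic, using hypothetical names rather than the actual VMM implementation:

```python
def pick_provisioning_type(thin_supported, fixed_supported):
    """Choose a LUN provisioning type from the pool's reported capabilities.

    The pre-UR5 logic fell through to FIXED whenever only one type was
    supported, which made LUN creation fail on thin-only pools.
    """
    if thin_supported and not fixed_supported:
        return "THIN"            # thin-only pool: FIXED would fail
    if fixed_supported and not thin_supported:
        return "FIXED"
    if thin_supported and fixed_supported:
        return "FIXED"           # both supported: keep the existing default
    raise ValueError("pool reports no supported provisioning type")
```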
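The Host.GetInstance issue above is a classic acquire-many/release-some bug: the HostHbas lock was never released, so later child tasks could not lock the host. A generic sketch of the safe pattern (illustrative Python, not VMM code) tracks every acquired lock and releases all of them in a finally block:

```python
from threading import Lock

# Stand-ins for the three per-host resources mentioned in the fix.
host_volumes, host_disks, host_hbas = Lock(), Lock(), Lock()

def refresh_host(locks):
    """Acquire every lock up front and release *all* of them on exit."""
    acquired = []
    try:
        for lock in locks:
            lock.acquire()
            acquired.append(lock)
        # ... do the refresh work while every lock is held ...
    finally:
        # Releasing only a subset (as the old code did, keeping HostHbas)
        # would leave subsequent child tasks unable to lock the host.
        for lock in reversed(acquired):
            lock.release()
```

Because the releases happen in a finally block over the tracked list, no code path, including an exception mid-refresh, can leave a lock behind.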
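Several fixes above address jobs that failed outright on SQL deadlocks (for example, the UnregisterStorageLun task and the Disable Replication Group job). The usual remedy, sketched here generically with hypothetical names rather than VMM's actual retry code, is to re-run the transaction with a small randomized backoff when it is chosen as the deadlock victim:

```python
import random
import time

class DeadlockError(Exception):
    """Stand-in for a database deadlock-victim error."""

def run_with_retry(op, attempts=5, base_delay=0.01):
    """Run op(); retry it if the database picks it as a deadlock victim."""
    for attempt in range(attempts):
        try:
            return op()
        except DeadlockError:
            if attempt == attempts - 1:
                raise  # give up after the final attempt
            # Randomized exponential backoff so the competing
            # transactions don't immediately collide again.
            time.sleep(base_delay * (2 ** attempt) * random.random())
```

Only the transient deadlock error is retried; any other exception still surfaces immediately, so real failures are not masked.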
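The file-copy progress crash above shows why a display-only calculation should never be able to throw: an Overflow exception in the time-remaining job during concurrent deployments brought the service down. A defensive sketch (hypothetical names, not the VMM implementation) clamps its inputs and returns "no estimate" instead of raising:

```python
def time_remaining(bytes_copied, bytes_total, elapsed_seconds):
    """Estimate remaining copy time in seconds, or None if no estimate.

    Clamp the counters and guard every division so pathological values
    reported by concurrent copies can't raise out of a progress display.
    """
    if bytes_total <= 0 or elapsed_seconds <= 0:
        return None                      # nothing meaningful to report yet
    copied = min(max(bytes_copied, 0), bytes_total)
    if copied == 0:
        return None                      # no throughput sample yet
    rate = copied / elapsed_seconds      # bytes per second, known non-zero
    return (bytes_total - copied) / rate
```

For example, 50 of 100 bytes copied in 10 seconds yields an estimate of 10 more seconds, while a counter that over-reports past the total is clamped and simply yields zero remaining time.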

