Monday, February 28, 2022

Failed to mount NFS volume: Fault "PlatformConfigFaultFault", detail "NFS mount failed: Unable to connect to NFS server."

When I tried to restore a server from a Veeam backup to the vCloud environment, the restore failed with a PlatformConfigFaultFault error.

The detailed errors are shown below:

Unable to check if vPower NFS is mounted on the host "esxi-host.domain.local"(Veeam.Backup.Common.CRegeneratedTraceException)


Failed to connect backup datastore to the ESXi host "esxi-host.domain.local". (Veeam.Backup.Common.CRegeneratedTraceException)


Failed to add NFS datastore for NFS host 'Veeam_proxy4'. Failed to mount NFS volume (192.168.0.93:/VeeamBackup_Veeam_proxy4). 192.168.0.93: Fault "PlatformConfigFaultFault", detail "NFS mount 192.168.0.93:/VeeamBackup_Veeam_proxy4 failed: Unable to connect to NFS server." 

This is a connectivity issue on the proxy side. We need to open ports 902 and 443 between the Veeam proxy and the ESXi host/vSphere/vCloud end.
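To confirm the ports before retrying the restore, you can test them from the Veeam proxy. A minimal PowerShell sketch, assuming the ESXi host name from the errors above (Test-NetConnection is available on Server 2012 R2 and later):

# Run on the Veeam proxy: verify the ESXi host answers on the vSphere ports.
foreach ($port in 443, 902) {
    Test-NetConnection -ComputerName 'esxi-host.domain.local' -Port $port |
        Select-Object ComputerName, RemotePort, TcpTestSucceeded
}

If TcpTestSucceeded comes back False for either port, fix the firewall rule before retrying the job.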



Monday, February 14, 2022

Error 0xC1900101 – 0x20017, The installation failed in SAFE_OS phase with an error during BOOT operation

We had a request from a customer to upgrade their server from Windows Server 2012 R2 to 2019 (in-place upgrade), and we hit the error below while performing the OS upgrade.


The installation failed in the SAFE_OS phase with an error during BOOT operation. Error code: 0xC1900101 (this indicates a driver package issue).

We found that the issue was caused by Windows driver packages. Online tutorials did not help, and we finally fixed the issue with the steps below.

---

To check which driver caused the issue, we examined the CompatData.xml file under "C:\Windows\Panther".

In this file, the DriverPackages section lists the driver packages evaluated for the upgrade. For each entry, check whether the BlockMigration attribute is True or False.


If the value is:

True = the driver is problematic and blocks the upgrade

False = no issue with that driver

---

For example,

<DriverPackage Inf="oem0.inf" BlockMigration="True" HasSignedBinaries="False" />

<DriverPackage Inf="oem6.inf" BlockMigration="True" HasSignedBinaries="False" />
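Rather than scanning the XML by eye, the blocking entries can be listed with PowerShell. A minimal sketch, assuming the compat data file sits under C:\Windows\Panther as above (the wildcard allows for timestamped file names, and the local-name() XPath sidesteps the file's XML namespace):

# List driver packages flagged as blocking the in-place upgrade.
Get-ChildItem 'C:\Windows\Panther\CompatData*.xml' | ForEach-Object {
    [xml]$compat = Get-Content $_.FullName -Raw
    $compat.SelectNodes("//*[local-name()='DriverPackage']") |
        Where-Object { $_.BlockMigration -eq 'True' } |
        Select-Object Inf, BlockMigration, HasSignedBinaries
}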

 


To get the driver names, run the command below in an elevated Command Prompt:

 

dism /Online /get-drivers /format:table


 
Then go to "C:\Windows\INF" and open the respective INF files to identify the exact faulty drivers.

On our server, we found that the published names mapped to the following faulty drivers:



1) oem0.inf = prnms001.inf (The Microsoft XPS Document Writer)

2) oem6.inf = vm3d.inf (VMware SVGA 3D)
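The same mapping can be pulled without opening each INF by hand. A short sketch using Get-WindowsDriver, the PowerShell counterpart of the dism /Get-Drivers command above:

# Map published names (oemN.inf) back to the original INF and provider.
Get-WindowsDriver -Online |
    Where-Object { $_.Driver -in 'oem0.inf', 'oem6.inf' } |
    Select-Object Driver, OriginalFileName, ProviderName, ClassName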

 

To fix the issue, we can try the following.

 

1) Remove the Microsoft XPS Document Writer from the Printers section.

2) Disable the VMware SVGA 3D driver (do not uninstall it; since it is a display driver, uninstalling it may result in a black screen).

After applying the above fix (a scripted version is sketched below), restart the server and initiate the upgrade again.
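If several servers need the same preparation, both steps can be scripted. A minimal PowerShell sketch, run elevated, assuming the default device names found above (the display adapter's friendly name may differ between VMware Tools versions):

# 1) Remove the Microsoft XPS Document Writer printer.
Remove-Printer -Name 'Microsoft XPS Document Writer'

# 2) Disable (do not uninstall) the VMware SVGA 3D display adapter.
Get-PnpDevice -Class Display -FriendlyName 'VMware SVGA 3D' |
    Disable-PnpDevice -Confirm:$false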

Note:

After the upgrade, both the SVGA driver and the Microsoft XPS Document Writer are re-enabled automatically.


Gratitude section:

This issue was identified and fixed by my colleague Balumahendran Chinna Muniyandi. Thanks for his help in writing this article.

Tuesday, February 1, 2022

The Hyper-V Virtual Machine Management service encountered an unexpected error: Unspecified error (0x80004005). Error code: '32768'.

We had a request from a customer to extend disks configured in RAID 10; however, because RAID 10 mixes RAID 1 and RAID 0, there was no option to extend the array without deleting it.

So we decided to back up the data to the Veeam server, delete the existing RAID arrays (except the OS disk, which was not part of any RAID and sat on a dedicated disk), and re-configure the RAID.

After re-configuring the RAID, the OS was intact and the Hyper-V role remained in place with the same configuration (we expected this, and it saved a couple of hours of downtime); however, the VM files (.XML, .VMCX, .VMRS, .VMGS, .VHD, .VHDX) were gone.

The initial issue we found was that Veeam refused to communicate with this server, failing with an RPC communication error.

The cause was a missing route on the server; after adding it, Veeam and the server started talking to each other again.
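For reference, a persistent route can be added with New-NetRoute. The values below are placeholders only; substitute your own destination subnet, interface alias, and gateway:

# Add a persistent route to the Veeam server's subnet (placeholder values).
New-NetRoute -DestinationPrefix '192.168.0.0/24' -InterfaceAlias 'Ethernet0' -NextHop '10.0.0.1'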

Then we tried to restore the VMs but got the error below:

Error Restore job failed Error: Failed to create planned VM configuration (path: 'D:\VMs\<GUIDs>.vmcx'). Job failed ('Failed to import a virtual machine.
The Hyper-V Virtual Machine Management service encountered an unexpected error: Unspecified error (0x80004005).'). Error code: '32768'.


Regarding the error: Hyper-V could not import the VMs and threw this error. We believe it was due to some issue in the Microsoft file system. We contacted Veeam, and they suggested the same, or alternatively to restore the files individually and import the VMs manually.

We restored the configuration and disk files (.XML, .VMCX, .VMRS, .VMGS, .VHD, .VHDX) manually and imported the VMs manually; all the VMs then started working without any issues.
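The manual import can be done in Hyper-V Manager or with the Hyper-V PowerShell module. A minimal sketch, assuming the restored configuration sits under the D:\VMs path from the error above (the GUID file name is a placeholder):

# Register the restored VM in place from its .vmcx configuration file.
Import-VM -Path 'D:\VMs\<GUID>.vmcx'

Registering in place keeps the original VM ID; add -Copy and -GenerateNewId if a fresh copy is wanted instead.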
