Category - VMware vSphere
This one had me stumped for a few minutes. A while ago I was working with a customer who was trying to boot into the EFI shell of a VM. Restarting into the EFI boot menu, they found the EFI Shell option was missing. It turns out you need to power off the machine and disable Secure Boot in the VM’s Boot Options; boot the machine again and you’ll find the EFI Shell.
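If you’d rather flip that setting from PowerCLI than click through the UI, a minimal sketch (assuming an existing Connect-VIServer session; ‘MyVM’ is a placeholder name):

```powershell
# Sketch: disable EFI Secure Boot on a VM via PowerCLI.
# Assumes an existing Connect-VIServer session; 'MyVM' is a placeholder.
$vm = Get-VM -Name 'MyVM'
Stop-VM -VM $vm -Confirm:$false         # VM must be powered off to change boot options

$spec = New-Object VMware.Vim.VirtualMachineConfigSpec
$spec.BootOptions = New-Object VMware.Vim.VirtualMachineBootOptions
$spec.BootOptions.EfiSecureBootEnabled = $false
$vm.ExtensionData.ReconfigVM($spec)

Start-VM -VM $vm                        # boot back into the EFI menu
```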
Recently I had a customer testing their vCenter Server 7.0U3e file-based restore process using the VCSA Restore Wizard. During Stage 2 (data copy) the wizard hung at 80% and did not progress for hours, with no UI errors or hints as to what was happening. I had a look at the restore wizard logs and found the following: 2022-07-12T01:16:19.828Z - debug: pollRpmInstallProgress:getGuestFileErr:ServerFaultCode: The object 'vim.VirtualMachine:103' has already been deleted or has not been completely created 2022-07-12T01:16:30.
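If you’re hunting for the same symptom, a recursive Select-String over an exported log bundle will surface it quickly; a minimal PowerShell sketch, where C:\temp\vcsa-logs is a hypothetical extraction path:

```powershell
# Sketch: search an exported VCSA log bundle for the restore wizard fault.
# 'C:\temp\vcsa-logs' is a hypothetical extraction path.
Get-ChildItem -Path 'C:\temp\vcsa-logs' -Recurse -File |
    Select-String -Pattern 'getGuestFileErr|has already been deleted' |
    Select-Object Path, LineNumber, Line
```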
A few weeks ago, a customer of mine was attempting an embedded vCenter upgrade from 6.7U3 to 7.0. Stage 1’s deployment of a new vCenter appliance was successful; however, Stage 2 (on the new appliance) was failing while attempting to perform a pre-check. We checked the requirements-upgrade-runner.log file and found an error, but it’s quite vague: lookup.fault.EntryNotFoundFault. We worked together and checked the following: the SSO admin password contained only supported characters.
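Another cheap sanity check alongside the above is confirming the source vCenter’s Lookup Service endpoint answers at all. A hedged sketch, assuming PowerShell 7+ (for -SkipCertificateCheck) and a placeholder hostname:

```powershell
# Sketch: confirm the Lookup Service SDK endpoint answers before re-running Stage 2.
# 'old-vcsa.example.com' is a placeholder; -SkipCertificateCheck needs PowerShell 7+.
$uri = 'https://old-vcsa.example.com/lookupservice/sdk'
try {
    $resp = Invoke-WebRequest -Uri $uri -Method Get -SkipCertificateCheck
    "Lookup Service answered with HTTP $($resp.StatusCode)"
} catch {
    "Lookup Service request failed: $($_.Exception.Message)"
}
```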
I recently had the need to ‘prep’ a VM after converting it to vSphere. By ‘prep’ I mean (after you’ve installed VMware Tools) doing the usual grind of updating the virtual hardware to the latest supported by ESXi, updating the vNIC to VMXNET3, and changing the SCSI controllers to ParaVirtual. I thought about the times when I was in customer land and we would have to convert VMs from some other platform or, in some cases, correct a VM that had been built incorrectly.
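For reference, those three steps boil down to something like this PowerCLI sketch (not the full script; ‘MyVM’ and the vmx-19 target are placeholders, and -HardwareVersion needs a recent PowerCLI build — older ones use -Version):

```powershell
# Sketch: the usual post-conversion prep, assuming VMware Tools is already installed.
# 'MyVM' is a placeholder; run against a powered-off VM.
$vm = Get-VM -Name 'MyVM'

# Update virtual hardware to the latest the host supports (vmx-19 here as an example)
Set-VM -VM $vm -HardwareVersion vmx-19 -Confirm:$false

# Swap the vNIC type to VMXNET3
Get-NetworkAdapter -VM $vm | Set-NetworkAdapter -Type Vmxnet3 -Confirm:$false

# Change the SCSI controllers to ParaVirtual
Get-ScsiController -VM $vm | Set-ScsiController -Type ParaVirtual
```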
Note: A bit more testing on my end has found this script is only valuable if your VMDKs are on separate datastores. I am working to find a better metric to pull the data per VMDK. Background: Have you ever heard of “Uncommitted Space” in vSphere? It’s one of those things we all seem to ‘know’ without really knowing. It’s a pretty standard metric most commonly found against vSphere Datastores. It’s effectively calculated based on the provisioned and used storage of a datastore and its contents.
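With the caveat above in mind (the figure comes from the datastore summary, not individual VMDKs), pulling it per datastore is straightforward; a minimal PowerCLI sketch:

```powershell
# Sketch: report uncommitted space per datastore via PowerCLI.
# The API reports Uncommitted in bytes on the datastore summary object.
Get-Datastore | Select-Object Name,
    @{N = 'CapacityGB';    E = { [math]::Round($_.CapacityGB, 2) } },
    @{N = 'FreeGB';        E = { [math]::Round($_.FreeSpaceGB, 2) } },
    @{N = 'UncommittedGB'; E = { [math]::Round($_.ExtensionData.Summary.Uncommitted / 1GB, 2) } }
```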
Note: This post addresses and (hopefully) fixes the cause of the issue found here: vVols Endpoint - Failed to establish connection on ESXi host. Recently, one of my customers was trying to refresh the CA store on newly built ESXi 6.7 U3 hosts under a freshly upgraded vCenter Server 6.7 U3 instance. When the admin tried to refresh the CA store, they were getting an error message in the vSphere Client.
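For what it’s worth, the same “Refresh CA Certificates” action can be driven from the vSphere API’s CertificateManager, which makes it easy to re-test across several hosts once you think you’ve fixed the cause; a hedged PowerCLI sketch, with a placeholder host name pattern:

```powershell
# Sketch: trigger the "Refresh CA Certificates" host action via the vSphere API.
# 'esxi01*' is a placeholder; assumes an existing Connect-VIServer session.
$certMgr = Get-View (Get-View ServiceInstance).Content.CertificateManager
$hosts   = @(Get-VMHost -Name 'esxi01*' | ForEach-Object { $_.ExtensionData.MoRef })
$certMgr.CertMgrRefreshCACertificatesAndCRLs_Task($hosts)
```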
My customer has successfully rolled out VMware vSphere Virtual Volumes (or “vVols”) in their environment. They’re loving the simplicity of storage management in vSphere, but were a little stuck when they added a pair of newly installed ESXi hosts. The hosts were not mounting the vVols datastore as expected, meaning they could not run VMs backed by vVols. All existing hosts were OK. To start, they dug into the logs at /var/log/vvold.
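A quick way to compare a working host against a broken one is to list the vVol protocol endpoints each host can see through esxcli; a PowerCLI sketch with placeholder host names:

```powershell
# Sketch: compare vVol protocol endpoints between a known-good and a new host.
# Host names are placeholders.
foreach ($name in 'esxi-old-01.example.com', 'esxi-new-01.example.com') {
    $esxcli = Get-EsxCli -VMHost (Get-VMHost -Name $name) -V2
    "--- $name ---"
    $esxcli.storage.vvol.protocolendpoint.list.Invoke()
}
```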
Just recently a few colleagues of mine were attempting to generate new private keys with a 4096 bit size but they were seeing shocking performance from all of their Linux VMs. They were seeing key generation taking up to 15 minutes while smashing away at the keyboard to generate entropy. It wasn’t a resource issue, the VMs were sized appropriately and showed no signs of stress. They asked me if they could throw a “Chaos Key” USB device into each of the ESXi hosts to generate more entropy to reduce the time it takes, but I knew that wasn’t required (like I was going to let that happen).
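You can watch the guest’s entropy pool directly at /proc/sys/kernel/random/entropy_avail; a PowerCLI sketch that reads it through VMware Tools (VM name and credentials are placeholders):

```powershell
# Sketch: check available guest entropy via VMware Tools.
# 'linux-vm-01' and the credentials are placeholders.
Invoke-VMScript -VM (Get-VM -Name 'linux-vm-01') `
    -ScriptText 'cat /proc/sys/kernel/random/entropy_avail' `
    -GuestUser root -GuestPassword 'REPLACE_ME' -ScriptType Bash
```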
After a very successful and quick migration from a Windows SSO 5.5 U3e installation to a Platform Services Controller v6.0U3 appliance, I was ready to get my VMCA into action. We have a corporate internal Microsoft CA with the VMware certificate templates already created as per VMware KB 2112009. Everything was coming up Milhouse until CSR generation time using ‘certificate-manager’ on the PSCs. After stepping through the ‘certificate-manager’ wizard and having the CSR and private key files sent to a directory of my choosing, I quickly inspected the CSR using openssl to make sure I was on the right track.
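The inspection itself is a single openssl command, handy for eyeballing the Subject and SAN fields; the path below is a placeholder:

```powershell
# Sketch: dump the CSR contents for inspection.
# Path is a placeholder; openssl must be on the PATH (same command works on the PSC shell).
openssl req -in C:\certs\machine_ssl.csr -noout -text
```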
After performing the vSphere v5.5 to vSphere 6.0 migration in our testing environment with great success, I began work on our production environment. First things first: migrating Windows SSO to a PSC appliance. I had successfully converted the first machine and started doing some testing, like logging into the thick client and checking all vCenter servers and basic login services. Problem: out of 6 vCenter servers, only 1 was having issues.
- Upgrading CSE-deployed Kubernetes clusters from TKG 1.5.4 to TKG 1.6.1
- Upgrading VCD Container Service Extension from 4.0.1 to 4.0.3
- Understanding and Deploying TKGm 1.6 into vSphere
- Forwarding Cloud Director App LaunchPad logs to vRealize Log Insight using the vRLI Agent
- EFI shell missing from boot menu