Friday, November 29, 2019

HPE iLO 4 SD card issues

I've been a big fan of HPE server hardware for years; I've been using it for as long as I've been in IT, and we switched to blades very early in their development.  It hasn't always been a fun ride, but no server hardware is perfect over the years.  Most recently we ran a mix of HPE blade servers, Gen8 and Gen9, as our main VMware hosts in our data centers.  We then purchased some DL160 Gen9 servers as a very cheap two-node vSAN ROBO solution for some of our remote locations, and they worked great.  All of these servers were built without local disks for the OS; the blades had no disks in them at all and booted from SD cards.  We knew that with our redundancy we could withstand a failure and rebuild quickly if necessary.

Then we started to see ESXi errors about failed writes to the boot device.  When we rebooted the hosts, they would fail to find the boot disk or SD card, or the install would look like a corrupted ESXi instance.  After going back and forth with support for a while, we tried replacing the SD cards, but the hosts still wouldn't boot.  Replacing the motherboard did work.

It turns out there was an issue where the embedded NAND memory on the iLO (which is where the SD card controller is located) would become corrupted.

We had only a few issues for a while, but once we started a project to upgrade ESXi from 6.0 to 6.5 and upgrade firmware (a BIOS requirement for Spectre\Meltdown), we started to see these issues en masse.  Almost every host we tried to upgrade hit them.  It was a huge pain and slowed our migration down significantly.

HPE released many different advisories and iLO firmware versions in an attempt to fix the issue.

This is the latest version of that advisory.

The Simple Procedure for blades:


  1. Upgrade the iLO firmware to 2.61 or later. (When I started this it was 2.50.  There were many versions that changed the behavior throughout the year; some versions were better than others.)
  2. Run the iLO command to format the NAND memory.  You can get the Force_Format.xml details from the advisory. (A command sketch follows this list.)
    1. You can run it from a Windows host with the iLO configuration utility (HPONCFG).
    2. You can run it from an SSH session on the enclosure Onboard Administrator (OA).
    3. NOTE:  This "formats" the embedded NAND memory on the iLO.  It does not erase anything on the SD card; it just resets the flash that the SD card data runs from.  It can be run while ESXi is online or offline, but it is preferable to shut the server down first.  It will not format the SD card.
  3. Then reset the bay via the e-fuse command.
    1. From an SSH session on the OA, run "show server list" to view the blade statuses.  Confirm that the bay you want to reset is correct; the reset hits whatever bay you enter, so it is very easy to make a mistake.
    2. Run "reset bay XX", changing XX to the bay number, then type Yes to continue.  I can't stress enough that the blade will be reset immediately.
    3. Run "show server list" again to monitor the status of the reset.
  4. After the bay comes back up, the blade should boot automatically.  This usually fixes the issue; sometimes you need to run the reset again.
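As a rough sketch only (bay 5 is just an example, the Force_Format.xml contents come from the HPE advisory, and the exact command syntax can vary between OA firmware versions, so verify against the advisory and your OA CLI guide), steps 2 and 3 look something like this from the command line:

```
# Step 2, option 1 - from a Windows host with the iLO configuration utility (HPONCFG) installed:
hponcfg /f Force_Format.xml

# Step 2, option 2, and step 3 - from an SSH session to the enclosure OA:
show server list           # confirm the bay number and its current status
hponcfg 5 << end_marker    # push the Force_Format script to the iLO in bay 5
  (paste the Force_Format.xml contents from the advisory here)
end_marker
reset bay 5                # e-fuse reset - the blade in bay 5 power cycles immediately
show server list           # watch the bay come back up
```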
In firmware version 2.51 (I think), HPE added a GUI button for this procedure, but it only appears while the SD card controller is reporting the error.  Once the error clears, the GUI button disappears.  See the advisory for details.

For DL-class servers (not blades), you do not have the ability to reset the e-fuse in order to reset the iLO.  You have to use an AUX power cycle command instead, which is only available via the HPE RESTful Interface Tool.


For the most part this worked.  Sometimes a simple power off and power on fixes the boot issues on its own.

Recently, with iLO version 2.70, we've had a couple of failures where the SD card itself appears to fail.  I don't believe the card has actually failed; the NAND format fixes the iLO, but the iLO still fails to recognize the SD card.  Swapping the motherboard did not resolve it, but swapping the SD card does, although we then had to rebuild the host.  I do not have a solution for this yet.  Maybe we'll figure something out.

Suffice it to say, this was one of the main reasons we did not buy HPE hardware again.  There were other reasons too, but this one contributed to a couple of really stressful years of upgrades.

Navigating Spectre Meltdown

Today I'm catching up on long-overdue blog posts.  I've been meaning to post a couple this year, but I've found it very difficult to balance work with vCommunity blogging.  Let's hope this post helps break the ice.

First up, Spectre\Meltdown.  I did a presentation at the Pittsburgh VMUG earlier this year, in February.  I promised to upload the presentation, and here it is.  Ignore the fact that it's months late, and let's just celebrate that it made it onto my blog at all.

Download the PowerPoint presentation on GitHub.

The vBrownBag video of my presentation.

I just wanted to add some context to the presentation and explain why I wanted to present on this topic at all.

My company tends to live on the bleeding edge of technology.  We are not a large enterprise, but we need to be up to date and nimble.  Recently we've put a lot of effort into securing our infrastructure: patching, discovering vulnerabilities, and removing them.  Our security team was really pushing the patching around the same time that the speculative execution side-channel vulnerabilities in Intel CPUs were disclosed.

It got a lot of attention very quickly.  I mean, have you seen the cute and scary mascots?  I had to explain our patching plan to the CIO and the Director of IT Security, so I had to figure it out quickly.  It didn't take long to discover that this was not as simple as normal patching and that it was going to take some time to do properly.  I had to wade through all the scary discussions and work out the exact process to make it work.

I was told by IT comrades outside my company that very few VMware\Windows admins actually put this much effort into understanding and explaining the procedures, and that my knowledge would be helpful.  Often they would patch the Windows and\or ESXi hosts but not perform the VM hardware piece, which is essential to tie it all together.  Hence the presentation.

Between early 2018 and the time of this presentation in February 2019, we saw a regular stream of patches for CPU-related vulnerabilities.  They all have impressive names and various risk ratings, and each comes with a different patching procedure.  But with any CPU-related patch, there are always multiple levels:


  • OS - the Windows\Linux patch.  With Windows, Microsoft had just switched to an all-in-one cumulative update, and at the time they hadn't planned for a case where you might need to install a patch without activating it.  These CPU patches disable certain CPU capabilities in order to secure the system, which slows the system down.
  • Windows Registry - So Microsoft had to add a way to turn the mitigation on or off, and they used registry keys to control activation.  Desktop systems activate the mitigation automatically; server systems do not.  If you don't add the registry keys, your server is not mitigated.  (A registry sketch follows this list.)
  • vCenter - The ESXi patches involve changes to CPU microcode and passing the new microcode features to the VMs.  To pull this off, you need to patch vCenter first so it can control this function.
  • ESXi - Of course there is a patch for ESXi.  Sometimes it includes the necessary CPU microcode.
  • BIOS\CPU microcode - The CPU needs to be patched too; the microcode update changes the CPU instructions exposed to the OS and hypervisor.
  • VM hardware - Finally, the new CPU features need to be passed through to the VMs.  If you are running a cluster with EVC mode enabled (you should be), you will need to patch all of the hosts before completing this step.  Once they are all patched, you need to perform a cold power cycle of each VM (at VM hardware version 9 or later) to pass the new CPU features into the guest.  (A PowerCLI sketch follows below.)
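Back to the registry piece for a moment: for reference, this is roughly what the activation looked like on Windows Server for the original Spectre variant 2 and Meltdown mitigations.  These are the Microsoft-documented values at the time; later vulnerabilities added further settings, so check current Microsoft guidance, and a reboot is required afterwards.  A minimal PowerShell sketch:

```powershell
# Enable the original Spectre variant 2 / Meltdown mitigations on Windows Server.
# A reboot is required before the mitigations take effect.
$key = 'HKLM:\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management'
New-ItemProperty -Path $key -Name 'FeatureSettingsOverride'     -Value 0 -PropertyType DWord -Force
New-ItemProperty -Path $key -Name 'FeatureSettingsOverrideMask' -Value 3 -PropertyType DWord -Force
```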


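And for the VM hardware piece, here is a quick PowerCLI sketch of the kind of check that helps.  It assumes the VMware.PowerCLI module and an existing Connect-VIServer session, and it lists which powered-on VMs are actually presenting the speculative-execution CPUIDs (cpuid.IBRS / cpuid.IBPB / cpuid.STIBP).  A VM that shows none of them after the hosts are patched still needs that cold power cycle.

```powershell
# Sketch only: report each powered-on VM, its hardware version, and any
# Spectre-related CPU feature requirements it is currently running with.
Get-VM | Where-Object { $_.PowerState -eq 'PoweredOn' } | ForEach-Object {
    $vm = $_
    $spectreFeatures = $vm.ExtensionData.Runtime.FeatureRequirement |
        Where-Object { $_.Key -match 'cpuid\.(IBRS|IBPB|STIBP)' }

    [PSCustomObject]@{
        Name            = $vm.Name
        HardwareVersion = $vm.ExtensionData.Config.Version    # e.g. vmx-09, vmx-11
        SpectreCpuids   = ($spectreFeatures.Key -join ', ')   # empty = cold power cycle still needed
    }
} | Format-Table -AutoSize
```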
The reality...  This can all be done over time, but what I have found is that it is really difficult to pull off in a production data center with hundreds of hosts and thousands of VMs, all with different change windows and expectations.  By the time I develop a plan to patch for one vulnerability, the next one has come out.  The real trick is to keep the bad actors out of your environment.

My team is currently working through ways of automating some of these functions and the patching itself.  I will save that for another blog post.