VxRack: Top Services Topics

This document contains the list of Top Services Topics for VxRack, identified by EMC Support as the top trending topics for the month of September 2017.

It is recommended that you also check for important information such as advisories (ETAs, ESAs) and other key resources on the Support by Product pages on the EMC Online Support portal.



Product Impacts of Upcoming Leap Second UTC adjustment on December 31st 2016.

This article contains the Product Table with possible impact by the Leap Second adjustment on December 31st 2016.



Security Configuration Guides: How to deploy and use EMC products securely.


EMC publishes Security Configuration Guides for EMC products to enable customers to deploy and use them securely.




Neutrino: Node Transfer can create a cc_nova_compute container on the Platform node.

While testing Node Transfer in Neutrino 1.1, we discovered two major issues:

Issue #1: After removing the Cloud Compute service from a compute node, the unallocated node remains in the 'cc_compute' group.

Hence, when we transfer a Platform node to this unallocated node, the transfer playbook creates a 'cc_nova_compute' container on the new Platform node, which then becomes part of the hypervisors list.


Issue #2: As a consequence of the situation above, an instance can be hosted on the hypervisor running on the new Platform node.


However, when we transfer the current Platform node, the instances remain attached to the former Platform node, which becomes unallocated.


Also, this unallocated node (the former Platform node) still shows in the hypervisors list, but in a down/disabled state.


This puts the system in an incoherent state and the instances are not reachable anymore.

Affected version: 1.1

Fixed in:
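After a node transfer, it is worth confirming that the former Platform node no longer lingers as a compute host. The listing below is a simulated excerpt with example hostnames; on a live system the equivalent data comes from `nova hypervisor-list` and `nova service-list --binary nova-compute`:

```shell
# Simulated 'nova service-list --binary nova-compute' excerpt; hostnames
# are examples. A stale former Platform node shows up as disabled/down.
cat > /tmp/compute_services.txt <<'EOF'
nile-1960-9d9e22d-1053  nova-compute  enabled   up
nile-1960-9d9e22d-1054  nova-compute  disabled  down
EOF

# Flag any compute service that is down; on a healthy system after a
# transfer this should print "none".
stale=$(awk '$4 == "down" {print $1}' /tmp/compute_services.txt)
echo "stale compute hosts: ${stale:-none}"
```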




storcli add vd complains that "controller has data in cache for offline or missing virtual disks."

The storcli add vd command returns the error "controller has data in cache for offline or missing virtual disks."


# /opt/MegaRAID/storcli/storcli64 /c0 add vd type=raid0 drives=252:4 direct wb ra

Controller = 0

Status = Failure


Description = controller has data in cache for offline or missing virtual disks

Unable to add virtual disks.
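This error typically means the controller is holding preserved (pinned) cache for a virtual disk that has gone offline or is missing. The following is a hedged sketch of the usual inspection sequence, assuming the standard storcli64 install path; the delete step is commented out because discarding preserved cache loses any un-flushed writes for that VD, so confirm the VD is truly gone first:

```shell
STORCLI=/opt/MegaRAID/storcli/storcli64

# 'v1' below is an example VD id; take the real id from the output of
# 'show preservedcache'. Only run these on a host with a MegaRAID controller.
if [ -x "$STORCLI" ]; then
    "$STORCLI" /c0 show preservedcache
    # "$STORCLI" /c0/v1 delete preservedcache
    status="ran"
else
    status="storcli64 not found; commands shown for reference only"
fi
echo "$status"
```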


Neutrino: Cannot launch instances because VT is disabled in the BIOS.

When trying to launch a new VM, the following error occurs (it can also be seen in the nova-scheduler.log):


libvirtError: internal error: no supported architecture for os type 'hvm'


Further investigation on some cloud compute nodes shows the following:


columbus-green:/var/log # grep -ri kvm messages


2016-09-05T12:07:41.267008+00:00 nile-1960-9d9e22d-1053 kernel: [ 15.264562] kvm: disabled by bios

2016-09-05T12:07:41.267036+00:00 nile-1960-9d9e22d-1053 kernel: [ 15.397152] kvm: disabled by bios

2016-09-05T12:08:53.304106+00:00 nile-1960-9d9e22d-1053 kernel: [ 88.977171] kvm: disabled by bios

2016-09-05T12:08:53.356018+00:00 nile-1960-9d9e22d-1053 kernel: [ 89.030927] kvm: disabled by bios
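The log pattern above can be checked for directly. This sketch reproduces the grep against a sample log (the sample lines are abbreviated copies of the output above); on a real compute node you would grep /var/log/messages itself and also inspect the CPU flags:

```shell
# Sample kernel log (abbreviated from the output above).
cat > /tmp/messages.sample <<'EOF'
2016-09-05T12:07:41 kernel: [ 15.264562] kvm: disabled by bios
2016-09-05T12:08:53 kernel: [ 88.977171] kvm: disabled by bios
EOF

# Any hits mean virtualization is switched off in the BIOS.
hits=$(grep -c 'kvm: disabled by bios' /tmp/messages.sample)
echo "kvm-disabled hits: $hits"

# On the node itself, also verify the CPU virtualization flags
# (vmx = Intel VT-x, svm = AMD-V); a count of 0 means the BIOS hides them:
#   grep -Ec '(vmx|svm)' /proc/cpuinfo
```

After enabling VT in the BIOS and rebooting the node, the "kvm: disabled by bios" messages should no longer appear.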



Neutrino: After upgrade there are multiple fg ports per fip namespace.

A single 'fg' port is expected per 'fip' namespace (the 'fg' port is created only when the first instance is launched on the node). However, in some Neutrino systems that were upgraded to 1.1, we see multiple 'fg' ports per 'fip' namespace.
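A quick way to spot the condition is to count 'fg-' devices inside each 'fip' namespace. The listing below is a simulated `ip -o link` excerpt with example port names; on a live node you would run `ip netns exec fip-<network-id> ip -o link show`:

```shell
# Simulated 'ip -o link' output from inside a fip namespace; the fg- port
# names are examples. Two or more fg- ports reproduces the symptom above.
cat > /tmp/fip_links.txt <<'EOF'
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536
2: fg-1a2b3c4d-5e: <BROADCAST,MULTICAST,UP> mtu 1500
3: fg-9f8e7d6c-0a: <BROADCAST,MULTICAST,UP> mtu 1500
EOF

# A healthy namespace has exactly one fg- port.
fg_count=$(grep -c ': fg-' /tmp/fip_links.txt)
echo "fg ports in namespace: $fg_count"
```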



Neutrino: MySQL Galera does not come up properly after a graceful restart

After a successful shutdown and restart of the POD, users cannot log in to either the Neutrino UI or Horizon.
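When a Galera cluster is stopped everywhere, it has to be bootstrapped again from the most advanced node. A common way to pick that node (a hedged sketch; the file layout is assumed from stock Galera, not confirmed for Neutrino) is to compare the seqno in each node's grastate.dat:

```shell
# Sample grastate.dat (layout assumed from stock Galera); on a real node
# this file lives at /var/lib/mysql/grastate.dat.
cat > /tmp/grastate.dat <<'EOF'
# GALERA saved state
version: 2.1
uuid:    8bb8bfbf-0000-0000-0000-000000000000
seqno:   1234
EOF

# Bootstrap the cluster from the node with the highest seqno; a seqno of -1
# means the node stopped uncleanly and its position is unknown.
seqno=$(awk '$1 == "seqno:" {print $2}' /tmp/grastate.dat)
echo "seqno=$seqno"
```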




Neutrino: Metadata agent failure prevents instances from being configured with hostname and keypair


New instances are not getting configured with a hostname, and the keypair is not being set.

The 'cloud-init.log' file on the new instance shows metadata-related warnings.
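From inside an affected instance, the symptom can be confirmed by grepping the log. The excerpt below is simulated (the warning text is illustrative, not copied from a real system); on the instance the actual file is /var/log/cloud-init.log, and the metadata service can be probed directly with `curl http://169.254.169.254/latest/meta-data/hostname`:

```shell
# Simulated cloud-init.log excerpt; message wording is illustrative.
cat > /tmp/cloud-init.log <<'EOF'
2017-09-01 10:00:01 url_helper.py[WARNING]: metadata server unreachable
2017-09-01 10:00:20 DataSourceEc2.py[WARNING]: no hostname in metadata
EOF

# Any WARNING hits here point at the metadata path rather than the image.
warnings=$(grep -c 'WARNING' /tmp/cloud-init.log)
echo "metadata warnings: $warnings"
```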



Note: Please click “Follow” at the top right of this screen (when logged in) to receive update notifications.