This document contains the list of Top Services Topics for VxRack, identified by EMC Support as the most trending topics for the month of September 2017:
It is recommended that you also check for important information such as advisories (ETAs, ESAs), and other key resources on the “Support by Product” pages on the EMC Online Support portal.
This article contains the product table describing the possible impact of the leap second adjustment on December 31st, 2016.
EMC publishes Security Configuration Guides for EMC products to help customers deploy and configure them securely.
While testing Node Transfer in Neutrino 1.1, we discovered two major issues:
Issue #1 - After removing the Cloud Compute service from a compute node, the unallocated node remains in the 'cc_compute' group. Consequently, when a Platform node is transferred to this unallocated node, the transfer playbook creates a 'cc_nova_compute' container on the new Platform node, and the node becomes part of the hypervisors list.
Issue #2 - As a consequence of the situation above, an instance can be hosted on the hypervisor running on the new Platform node. However, when the current Platform node is transferred, the instances remain attached to the former Platform node, which becomes unallocated. This unallocated node (the former Platform node) still appears in the hypervisors list, but in a down/disabled state. This leaves the system in an inconsistent state, and the instances are no longer reachable.
Affected version: 1.1
Fixed in: 18.104.22.168
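On a system that hit this issue, the stale entry can usually be spotted from a controller node with the standard Nova client commands; this is a sketch, and the exact client syntax depends on the Neutrino release:

```shell
# List hypervisors; the former Platform node appears with State "down".
nova hypervisor-list

# Cross-check the nova-compute services; the stale entry shows as
# "down" and/or "disabled" on the former Platform node.
nova service-list --binary nova-compute
```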
The storcli 'add vd' command returns the error "controller has data in cache for offline or missing virtual disks":
# /opt/MegaRAID/storcli/storcli64 /c0 add vd type=raid0 drives=252:4 direct wb ra
Controller = 0
Status = Failure
Description = controller has data in cache for offline or missing virtual disks
Unable to add virtual disks.
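A common resolution, assuming the cached data for the missing virtual disk is no longer needed, is to discard the preserved cache the controller is still holding. The commands below are a sketch; confirm them against the StorCLI reference for your controller, and note that deleting preserved cache discards that data permanently:

```shell
# Show which virtual disks the controller holds preserved cache for.
/opt/MegaRAID/storcli/storcli64 /c0 show preservedcache

# Discard the preserved cache for the reported virtual disk.
# ('v2' is an example ID taken from the 'show preservedcache' output.)
/opt/MegaRAID/storcli/storcli64 /c0/v2 delete preservedcache force

# Retry the original command.
/opt/MegaRAID/storcli/storcli64 /c0 add vd type=raid0 drives=252:4 direct wb ra
```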
When trying to launch a new VM, the following error occurs; the same error can also be seen in nova-scheduler.log:
libvirtError: internal error: no supported architecture for os type 'hvm'
Further investigation on some cloud compute nodes shows the following:
columbus-green:/var/log # grep -ri kvm messages
2016-09-05T12:07:41.267008+00:00 nile-1960-9d9e22d-1053 kernel: [ 15.264562] kvm: disabled by bios
2016-09-05T12:07:41.267036+00:00 nile-1960-9d9e22d-1053 kernel: [ 15.397152] kvm: disabled by bios
2016-09-05T12:08:53.304106+00:00 nile-1960-9d9e22d-1053 kernel: [ 88.977171] kvm: disabled by bios
2016-09-05T12:08:53.356018+00:00 nile-1960-9d9e22d-1053 kernel: [ 89.030927] kvm: disabled by bios
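The "kvm: disabled by bios" message means the CPU's hardware virtualization extensions are switched off in BIOS/UEFI. A small sketch to confirm this from the OS side before scheduling a BIOS change (the check_vt helper is hypothetical, not part of the product):

```shell
# check_vt: report whether a cpuinfo file advertises Intel VT-x (vmx)
# or AMD-V (svm). On a live node, pass /proc/cpuinfo.
check_vt() {
    if grep -qE 'vmx|svm' "$1"; then
        echo "VT available"
    else
        echo "VT missing - enable virtualization in BIOS/UEFI"
    fi
}

# Example against a sample cpuinfo snippet:
printf 'flags\t\t: fpu vme vmx sse sse2\n' > /tmp/cpuinfo.sample
check_vt /tmp/cpuinfo.sample   # -> VT available
```

If the flags are missing, enable VT-x/AMD-V in the node's BIOS setup and reboot; the "disabled by bios" kernel message should then disappear.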
It is expected to have a single 'fg' port per 'fip' namespace (the 'fg' port is created only when the first instance is launched on the node). However, on some Neutrino systems that were upgraded to 1.1, we see multiple 'fg' ports per 'fip' namespace.
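One way to check for the duplicate-port symptom is to count the 'fg-' interfaces inside each 'fip' namespace on a compute node. This is a sketch that assumes the usual Neutron DVR naming convention (namespaces named 'fip-<net-id>', interfaces named 'fg-<port-id>'); verify it matches your deployment:

```shell
# For every fip namespace, count the fg- interfaces it contains.
# A healthy namespace should report exactly one.
for ns in $(ip netns list | awk '{print $1}' | grep '^fip-'); do
    count=$(ip netns exec "$ns" ip -o link | grep -c ': fg-')
    echo "$ns: $count fg port(s)"
done
```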
After a successful shutdown and restart of the POD, it is not possible to log in to either the Neutrino UI or Horizon.
New instances are not configured with a hostname, and the keypair is not set.
The 'cloud-init.log' file on the new instance shows metadata-related warnings.
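Such warnings typically indicate that the instance could not reach the Nova metadata service during boot. A quick probe from inside an affected instance (169.254.169.254 is the standard link-local metadata endpoint; paths follow the EC2-compatible layout):

```shell
# Query the metadata service directly from the instance. A timeout or
# error here points at the metadata path (fip/fg ports, metadata agent)
# rather than at cloud-init itself.
curl -sv http://169.254.169.254/latest/meta-data/hostname
curl -sv http://169.254.169.254/latest/meta-data/public-keys/
```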