Channel: VMware Communities: Message List

NSX 6.4.3 Deployed VMs from vRA - some get connectivity some don't


Hi,

 

We are deploying VMs through vRA 7.5 in an NSX 6.4.3 environment. Some VMs deploy without problems, while others deploy but never get working network connectivity; a "guest OS customization timeout" then occurs, so the deployment fails.

 

When I check in vCenter, all of the VMs are connected to the same logical switch and receive the proper IP settings in Windows, but some cannot ping the default gateway and cannot reach anything else outside their own network.

For example, three VMs were deployed into the same network, with each deployment starting within two minutes of the others. They run on the same ESXi host, are connected to the same logical switch, and can all ping each other and other VMs in the same network.

But some cannot ping their own gateway and cannot ping anything else outside their network. Any tips on where to start troubleshooting? The same VM template is used in every case, so there are no differences between the VMs.


Re: NSX 6.4.3 Deployed VMs from vRA - some get connectivity some don't


It was hard to tell from your description: are you saying that all of the VMs start on the same ESXi host and the same logical switch, yet some can ping their gateway and others cannot? Is that what is happening?

Re: NSX 6.4.3 Deployed VMs from vRA - some get connectivity some don't


Correct :) Sorry for the not-so-clear explanation.

 

They were deployed from the same VM template and ended up on the same ESXi host, yet some work fine and can ping the gateway while others cannot.

Re: UUID for network in VMware.


MoRef IDs do not change for these objects for the life of their existence within the same vCenter. HOWEVER, if you had a vSS (which is host-specific) and then removed and re-added that very same host to inventory, the ID would change. That is one example.
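If you want to look at the MoRef IDs yourself, one option is the govc CLI. This is just a sketch, assuming govc is installed and GOVC_URL plus credentials point at your vCenter; the datacenter name and the example IDs below are hypothetical:

# List network objects together with their managed object references
govc ls -i /MyDatacenter/network

# Example output (the IDs will differ in your inventory):
# Network:network-1015
# DistributedVirtualPortgroup:dvportgroup-2032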

Re: NSX 6.4.3 Deployed VMs from vRA - some get connectivity some don't


Yet all are connected to the same logical switch? Not different logical switches?

Re: NSX 6.4.3 Deployed VMs from vRA - some get connectivity some don't


Correct, the same logical switch.

Re: NSX 6.4.3 Deployed VMs from vRA - some get connectivity some don't


Is this phenomenon limited to this one ESXi host and one logical switch, or are there others (hosts and/or logical switches)?

Re: Yosemite virtual machine freezes with a black screen after changing Compatibility settings


Hi Wil,

 

>> Curious why turning back the virtual hardware did not work. Did it also update vmware tools perchance?

 

I did not see any change to VMware Tools.

 

>> Would you mind if I have a look at your vmware.log file to see if I can see something out of the ordinary in there?

 

I found 4 vmware.log files in the virtual machine package – 3 from yesterday + 1 from today.  I've attached them to this message.

 

While we're trying to figure out this 10.10.5 black screen problem, I decided to build a new 10.10.5 virtual machine (starting with a copy of a fresh, uninitialized 10.10.5 virtual machine from 2016).  That was going smoothly until the 2nd round of App Store software updates.  When I returned to my iMac after the updates should have finished, I found another frozen black screen.  I had not touched the Compatibility settings on this new virtual machine.

 

-- Ward


Re: NSX 6.4.3 Deployed VMs from vRA - some get connectivity some don't


We have noticed this issue on other hosts and logical switches as well.

Re: vsan stuck initializing ssd


Hello dominikwal,

 

 

Welcome to Communities.

 

If you are 100% positive you don't need the data on the drives configured/attached to this node, then you can start ESXi with the vSAN modules disabled and remove the partitions on the device. Press Shift+O at boot before the modules are loaded (the screen that shows the 'Shift+R' option), put a single space after the bootUUID, and then type 'jumpstart.disable=vsan,lsom,plog,virsto,cmmds' (no quotes; they are only here for clarity). Hit Enter and the host will boot without any awareness of vSAN; to load the vSAN modules again, just reboot it normally.
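To make that step concrete, the edited line at the Shift+O prompt would end up looking something like the sketch below; the existing boot options shown before the appended text vary per host, so treat the placeholder literally:

<existing boot options> jumpstart.disable=vsan,lsom,plog,virsto,cmmds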

 

To anyone else who reads this in future and is considering doing this: removing any partitions/data is obviously completely at your own risk, so do ensure you know what you are doing and/or do not need the data potentially being removed.

 

 

Bob

Re: Yosemite virtual machine freezes with a black screen after changing Compatibility settings


... A bug which probably hasn’t yet been fixed (I don’t have a Yosemite VM anymore, so I cannot easily verify this without creating a new one).


Re: Virtual SAN Disk Balance


Hello sandroalvesbrasil,

 

 

"I found the information that I can execute the RVC command (vsan.check_limits) and check if any of the percentages are next or above 80%."

vsan.disks_stats <pathToCluster> is the best summary view in my opinion; Cluster > Monitor > Disk Management in the GUI is a close second.
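For anyone new to RVC, invoking it against a cluster path looks roughly like this (a sketch; the datacenter and cluster names are placeholders for your inventory):

> vsan.disks_stats /localhost/MyDatacenter/computers/MyCluster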

 

"I want to see these percentages to understand why this occurred, given that our environment has not undergone any changes."

Sure there were changes - you stated just two lines above this that you put a node into Maintenance Mode. Which option you used, how long it was in MM, and what else you did can all affect how data is distributed. For example, in a 4-node cluster with the default RAID-1 FTM, if you place a node in MM with the 'Ensure Accessibility' option and the default clom repair delay timer, all the data will start rebuilding on the remaining 3 nodes after 60 minutes. If that node is not taken out of MM until hours later, all the data on it will be removed (as it is stale and has been rebuilt in its absence), so its disks will all be 0% used compared to the now higher-than-before % used on the other nodes. Additionally, data on a cluster is rarely static and can grow over time, causing other knock-on changes.

 

"1 - Is there any other possibility besides what I found as (HW Failures / Removals, Host being put into maintenance mode or Incorrect policy change)?"

Have a look at which disks are imbalanced and by how much via the Disk Balance Health check - the cause should be fairly obvious from what % used the disks are and on which hosts (e.g. everything got rebuilt while a host was in MM). But yes, other causes are possible: e.g. a disk being marked as failed or a controller freaking out, the data being rebuilt elsewhere, then the disk/controller starting to function normally again and data getting placed there (either from VM/vmdk creation, proactive rebalance, or reactive rebalance due to disks over 80% used).

 

"3 - If we have never done the rebalancing, can I lose data? Can my environment stop?"

No, that shouldn't cause issues: reactive rebalancing occurs without user intervention and starts moving data off disks that are 80% (the default value) or more used onto lower-%-used devices where possible. As with any system, though, space management is important and you should size your clusters adequately.

 

"4 - In the VMware documentation says that I can disable the alarm. I looked and the error we received was yellow, that is, it is not critical. Even without proactive rebalance enabled, rebalancing will occur when it reaches 80% (vsan.check_limits)?"

There is no red condition for Disk Balance AFAIK, as it is not going to negatively impact anything other than potentially missing out on the performance gains from more devices being actively and equally used in the cluster. If your disk usage becomes imbalanced due to changes, then simply rebalance it.

 

"5 - Does proactive rebalance say it will start whenever% (vsan.check_limits) reaches 30% of disks?"

Proactive rebalance does not start automatically ever (as the name implies!) - it has to be initiated via the Health check in the GUI or via RVC.
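If you do want to kick it off from RVC, a minimal sketch would be the following (the cluster path is a placeholder):

> vsan.proactive_rebalance --start /localhost/MyDatacenter/computers/MyCluster
> vsan.proactive_rebalance_info /localhost/MyDatacenter/computers/MyCluster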

 

 

Bob

Re: get-view slow


Thank you for the info, it was helpful.

Fusion Pro 11.0.2 has broken host/guest folder sharing


My host is 10.13 and my guest is 10.12. Many folder/file sharing operations seem to work just fine; it seems to be only the re-creation of folders on the host from the guest that has problems.

 

1. make a folder in the host and share it

2. in a command line C program in the guest, create a sub-folder in there, works

3. delete the sub-folder (either from guest or host it doesn't matter)

4. try to make the sub-folder again. The BSD mkdir() call returns an error, yet the sub-folder is created

 

So pick any command line program in the guest (here's /bin/bash) and you get this:

 

jrb-vm-sierra:~ jrb$ ls '/Volumes/VMware Shared Folders/fusion'

jrb-vm-sierra:~ jrb$

jrb-vm-sierra:~ jrb$ mkdir '/Volumes/VMware Shared Folders/fusion/guest'

mkdir: /Volumes/VMware Shared Folders/fusion/guest: Input/output error

jrb-vm-sierra:~ jrb$ ls -l '/Volumes/VMware Shared Folders/fusion'

total 2

drwxr-xr-x  1 1123  219  64 Mar 23 13:35 guest

 

I tried this in Python as well, with os.makedirs(); it behaves the same.

 


Re: VMware, reducing the HDD size


I read through this thread. Are you all joking here? The disk size can be reduced without any problems using the vmware-vdiskmanager.exe utility, which is included in the VMware Workstation package. Yet he writes that it cannot be done with standard methods. It certainly can; you need to read the docs, clowns.
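For reference, a shrink with that utility might look like the following (a sketch: the .vmdk path is a placeholder, and it is worth checking the Workstation documentation for the exact options supported by your version):

vmware-vdiskmanager.exe -k "C:\VMs\MyVM\MyVM.vmdk"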

Re: vSAN: Mismatched Versions


Hello DrSeussFreak

 

 

Welcome to Communities.

 

"I have removed the offending 6.7 host from the cluster, and removed the 6.5 host as well. I change the Virsto.DiskFormatVersion on the new hosts to 5, and re-added the 6.5 host, but it can still not see anything on the vsanDatastore."

How exactly was the 6.5 node removed and re-added to the cluster? E.g. does it still have its original Disk-Groups intact and healthy?

If you didn't recreate the Disk-Groups on re-joining the cluster, check using:

#vdq -Hi

#esxcli vsan storage list | grep CMMDS
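If the Disk-Groups are intact and participating, the second command should include a per-device line along these lines (a sketch of the expected output; the formatting can vary between builds):

   In CMMDS: true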

 

Are you positive you didn't perform an On-Disk format upgrade on the remaining 6.7 nodes in the absence of the 6.5 node?

You can check this via the GUI, but it is clearer from RVC:

> vsan.disks_stats <pathToCluster>

> vsan.obj_status_report <pathToCluster>

 

It sounds like the 6.5 node is potentially isolated from the cluster. If the 'Remediate cluster' button is clickable via the 'vCenter is authoritative' Health check under Cluster, then you can potentially remediate the unicastagent entries automatically. Otherwise, make the necessary checks: e.g. the node has vSAN-enabled vmk(s) that can communicate with the other nodes' vSAN-enabled vmk(s), all nodes in the cluster have an entry for the 6.5 node in their unicastagent lists, the 6.5 node has unicastagent list entries for all the other nodes, the node is out of Maintenance Mode, and the node is not having some other issue such as a controller physical/driver/firmware problem (#dmesg is your friend here).
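A couple of those checks can be run straight from the ESXi shell; a minimal sketch, where the vmk interface and target IP are placeholders for your vSAN network:

#esxcli vsan cluster unicastagent list
#vmkping -I vmk1 192.168.100.12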

 

 

Bob

Re: vSAN: Mismatched Versions


I am just rebuilding; I appreciate the response though. I have already begun.

 

I am well aware I am running in a very unsupported config, and all of my VMs became inaccessible.

Re: hosts are not responding state/frozen state after upgrade from 5.5U3 to 6.5U2


I'm not sure if it's the same issue on 6.5.

 

I am on 6.0 and had lots of issues with the lsi-mr3 driver: either sfcbd-watchdog got stuck and was not killable, or I got buffer errors once a bigger bunch of VMs was started after a host crash, or the system just got stuck after a while.

I also have an LSI controller, and for me the only stable driver was megaraid-sas; I had to disable lsi-mr3 and enable the legacy megaraid driver (see the sketch below).
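A minimal sketch of that driver swap from the ESXi shell, assuming the module names lsi_mr3 and megaraid_sas as on my box (verify against your controller and VMware's HCL first, and reboot the host afterwards):

#esxcli system module set --enabled=false --module=lsi_mr3
#esxcli system module set --enabled=true --module=megaraid_sas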

 

Since then everything has been running fine.


It may be worth checking.

 

0x2  = STATUS CODE:CHECK CONDITION

0x5 = SENSE CODE:ILLEGAL REQUEST

0x24 = ADDITIONAL SENSE CODE:INVALID FIELD IN CDB

0x1a = COMMAND CODE:MODE SENSE(6)

 

If I understand this correctly, the controller does not understand the commands that are being tried.

 

Best regards

 

Max

Re: Finding linked clones based on a parent

$
0
0

Exactly what I was looking for.

 

Apparently, if you use Manage --> Delete from Disk, it wipes the entry from the parent's VMSD file.

 

I looked at one of my parent VMs and saw some "ghost" entries, so I must have deleted some linked clones manually at one time.
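If you want to inspect those entries yourself, something like this from a shell should do it (a sketch; the path is a placeholder and the exact key names inside the .vmsd can vary by product version):

grep -i clone "/path/to/ParentVM/ParentVM.vmsd"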

 

I hope that at some point VMware adds a UI for things like this to make management a little easier.

 

Thanks!
