I have 2 settings I am testing now. After a reboot, the hostd config change alone was not enough. I will post again after another reboot and import test in about 30 minutes.
Added: Here is what I am testing:
[root@localhost:~] grep -wns '<soap>' /etc/vmware/hostd/config.xml -A 4
323: <soap>
324-...
I believe I have the fix for the 6.5/6.7 users. I am now able to successfully import on 6.7.
SSH into your ESXi server and run this:
sed -i 's,</soap>, <maxSessionCount>0</maxSessionCount>\n </soap>,' /etc/vmware/hostd/config.xml && /etc/init.d/hostd restart
You can check the hostd config...
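A quick way to confirm the value actually landed in the file after the restart:

# should print the <maxSessionCount> line the sed one-liner inserted
grep -n maxSessionCount /etc/vmware/hostd/config.xml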
Not seeing Config.HostAgent.vmacore.soap.maxSessionCount on 6.7 Update 2 either. But if that 500 is in seconds (8.3 minutes), then that is exactly how long mine runs until it times out. Here is my task log entry:
Start time:  Mar 28 15:55:22
End time:    Mar 28 16:03:14
Node:        node-a
User name:   root@pam
Description: VM 100 - Create
Status:      Error: unable to create VM 1...
Added: I...
I have seen the same issue as the OP, on Toshiba SAS drives only. My Hitachi SAS drives are correct. The WWN returned by /dev/disk/by-id/ and smartctl is 1 higher than what is printed on the drive label. Note that the serial number matches.
# smartctl -i /dev/disk/by-id/wwn-0x500003992892bdf1
smartctl...
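If you want to check your own drives, listing the udev links is the quickest way to see the WWN the kernel reports (compare the wwn-* names against the label):

# every wwn-0x... symlink here is the WWN the drive itself reports
ls -l /dev/disk/by-id/ | grep wwn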
I want to thank the Proxmox team for yet another amazing update with v5. This post is just informational for anyone out there running a ZFS cluster with no shared storage. It is possible to live migrate QEMU VMs, but you cannot use the GUI (yet). If you attempt to migrate a running VM using ZFS...
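For reference, the CLI form that works for me looks like this (VM ID and node name are just examples; check qm help migrate for the flags on your version):

# live-migrate VM 100 to node-b, sending the local ZFS disks along with it
qm migrate 100 node-b --online --with-local-disks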
If you have an ODD to 2.5" SATA HDD adapter, swap out your DVD drive and try booting from the onboard SATA port instead. I have an R710; I have used M1015 and H310 cards in IT mode with ZFS without problems, and the PERC 6 without ZFS.
https://www.newegg.com/Product/Product.aspx?Item=N82E16817997048
My 2 cents: stick with the S3700 or S3710 drives. I prefer 10GbE over InfiniBand as it seems more standard; then I can easily bring more 10GbE to the LAN (vs. storage) side of the network. (I know IB has lower latency.) If you don't mind used gear, you can save some money on the 10GbE switch. Check out the...
Three nodes are required because when 1 falls off, the other 2 still talk to each other and agree the cluster is up. When you only have 2 nodes and 1 falls off, neither knows which one should be master. The third node acts as a tie breaker.
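If you want to see what the cluster thinks, pvecm shows it (the expected-votes override is only a temporary workaround for a degraded 2-node setup, not a substitute for a third node):

# show membership, votes, and whether the cluster is quorate
pvecm status
# emergency override on a 2-node cluster that lost a member
pvecm expected 1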
The Proxmox backup hook script is awesome. Here is my script to copy backups to external USB drives, with or without PGP.
#!/usr/bin/perl -w
# Proxmox removable media backup script. Schedule using STOP mode. zfs-snapshots not supported.
# Place this script in /usr/local/bin/backup-hook.pl
# Install...
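For anyone who hasn't used hook scripts before: vzdump calls the script with the phase as the first argument and passes details via environment variables. Here is a bare-bones sketch of the same idea in shell (the USB mount point is made up, and the TARFILE variable name is taken from the stock vzdump example script, so check yours):

#!/bin/sh
# minimal vzdump hook sketch: copy each finished dump to removable media
phase="$1"
if [ "$phase" = "backup-end" ]; then
    # TARFILE is the dump vzdump just finished writing (newer versions
    # may call it TARGET -- check the example script shipped with pve-manager)
    cp "$TARFILE" /mnt/usb-backup/
fi

Point vzdump at it with the script: option in /etc/vzdump.conf, or with --script on the command line.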
See this: https://forum.proxmox.com/threads/grub-rescue.33696/
I had lots of trouble booting ZFS lately, and a rootdelay of 20 (even 5) was long enough to fix most of my troubles. Hope this helps.
For ZFS rpool problems, type these at the prompt:
/sbin/zpool import -N rpool
exit
I had to add rootdelay=20 to the GRUB_CMDLINE_LINUX_DEFAULT line in /etc/default/grub, then run update-grub, after a recent hardware change to a Dell T630 with a PERC H330.
See this thread...
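In case it saves someone a search, the persistent version of that change looks like this (keep whatever options you already have on the line):

# /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet rootdelay=20"
# regenerate the boot config afterwards
update-grub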
Thanks for the fixed ISO. I am using the new build to install. I checked the partitions, and it appears the installer now creates a pool with all 10 drives. At least that part is fixed.
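For anyone wanting to verify the same thing, zpool status lists every vdev, so all 10 drives should show up:

# the pool layout the installer created
zpool status rpool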
I am having boot problems. I have not been able to get the server to boot into Proxmox with 10 drives in the...