Hi.
In my production/test cluster running Proxmox and Ceph on the same nodes, I have a node that I use for testing updates before applying them to the other servers.
Repository pve-no-subscription.
After today's updates I get this in the log (just an extract):
May 02 17:56:47 pve-hs-3 pvedaemon[4929]: rados_connect failed...
You should already have a maintenance window during which you install kernel updates and reboot the server; apply this change in that timeframe.
Usually changing a ZFS parameter should not cause problems, but you can try it on a test machine first.
You should set zfs_arc_max (and optionally zfs_arc_min) according to how much RAM you want the ZFS ARC cache to use.
In /etc/modprobe.d/zfs.conf, put lines like these:
# EXAMPLE ZFS ARC MIN - 512MB
options zfs zfs_arc_min=536870912
# EXAMPLE ZFS ARC MAX - 2G
options zfs zfs_arc_max=2147483648
Every time you...
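As a hedged example of how such a change is usually applied (the update-initramfs step matters on root-on-ZFS installs, where zfs.conf is read from the initramfs; the echo line is an optional runtime change that needs no reboot):

# apply the new limit immediately at runtime (optional)
echo 2147483648 > /sys/module/zfs/parameters/zfs_arc_max
# rebuild the initramfs so the options in /etc/modprobe.d/zfs.conf
# are picked up at the next boot
update-initramfs -u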
Have you enabled the ballooning device in the VM config? If you have it disabled, you must shut down the VM (if it's Windows 8 or 10, use shutdown -s -t 0 so you get a full shutdown instead of the hybrid fast-startup one), enable the ballooning device in the config and then restart the VM; after that you should find the device in Device Manager.
The device is called VirtIO...
You must install the service too: in the VirtIO Balloon folder, run blnsvr -i. It's better to copy the Balloon folder to a folder on the guest's hard disk first (if you're running it from the virtual CD / ISO).
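A minimal sketch of that install, assuming the virtio-win ISO is mounted as D: and a 64-bit guest; the 2k16 subfolder is just an example, pick the one matching your Windows version:

:: run inside the guest from an elevated command prompt
:: copy the Balloon folder off the ISO so the service survives unmounting it
xcopy /E /I D:\Balloon "C:\Program Files\Balloon"
cd /d "C:\Program Files\Balloon\2k16\amd64"
blnsvr.exe -i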
Yes
I had a similar problem, and to solve it I just installed a small Debian 9 VM (256MB RAM, 2GB disk) with the chrony service installed, serving NTP on my network.
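For reference, a minimal chrony.conf sketch for that kind of VM; the upstream pool and the 192.168.1.0/24 subnet are example values, adjust them to your network:

# /etc/chrony/chrony.conf - serve NTP to the LAN (example values)
pool 2.debian.pool.ntp.org iburst
allow 192.168.1.0/24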
I think it's because the ZFS pool is not initialized yet at that point, so it can't find the pool and the /boot partition.
Try to start in Proxmox rescue mode, add rootdelay=10 to the GRUB_CMDLINE_LINUX_DEFAULT line in /etc/default/grub, run update-grub and then restart.
It's worth a try
In a similar server with a similar setup, I had to add
rootdelay=10
to the GRUB command line.
Read here: https://pve.proxmox.com/wiki/ZFS:_Tips_and_Tricks#Boot_fails_and_goes_into_busybox
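As a sketch, the edited line in /etc/default/grub would look like this (quiet is just the stock default option; keep whatever options you already have there):

# /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet rootdelay=10"
# then run update-grub and reboot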
I have a similar configuration, a pool made of HDDs and a pool made of SSDs.
If the SSDs are correctly recognized, you should see the right device class for the OSDs:
$ ceph osd tree
ID  CLASS  WEIGHT   TYPE NAME     STATUS  REWEIGHT  PRI-AFF
-1         8.93686  root default
-6         2.94696...
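If an SSD OSD was detected as hdd instead, you can fix the device class by hand; osd.3 below is just a placeholder ID:

# remove the wrongly detected class, then set the correct one
ceph osd crush rm-device-class osd.3
ceph osd crush set-device-class ssd osd.3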
Can you post the output of
qm config 101
What Windows version are you using?
I have a lot of VMs with various Windows versions, all with ballooning enabled, VirtIO drivers installed and the balloon service installed inside Windows, and I get no blue screens.
You should try to update from Proxmox 4.4 to...
Create a VPN between the Proxmox sites and the backup site, then use Veeam Endpoint Backup (or something similar; I just love Veeam because I use it in VMware environments too) in the Windows VMs, which can run incremental backups.
Do not open SMB to the internet; tunnel it through a VPN.