Hello, after the upgrade (PVE 7 to 8) my GRUB was broken, and I managed to recover the bootloader with the steps from here: https://pve.proxmox.com/wiki/ZFS:_Switch_Legacy-Boot_to_Proxmox_Boot_Tool#Switching_to_proxmox-boot-tool
But now, when I select which kernel option to boot from the bootloader...
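For reference, a minimal sketch of checking and refreshing the kernel entries managed by proxmox-boot-tool after following that wiki page (the pinned version below is only a placeholder; pick one shown by "kernel list"):

# show the configured ESPs and whether they are in sync
proxmox-boot-tool status

# list the kernels currently registered for booting
proxmox-boot-tool kernel list

# regenerate the boot entries on all configured ESPs
proxmox-boot-tool refresh

# optionally pin a known-good kernel as the default boot entry (placeholder version)
proxmox-boot-tool kernel pin 6.2.16-3-pve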
I tried the following script, but it's still the same: after the backup of the first VM is done the server shuts down and doesn't wait for all of them to finish. Can you check, please? Thank you so much!
#!/bin/bash
# vzdump calls this hook script with the phase name as $1
if [ "$1" == "backup-start" ]; then
    echo "Backup started"
fi
if [ "$1" ==...
This works great, but if I have more than one CT/VM in /etc/pve/jobs.conf for backup, the script waits until the first VM backup is done and then shuts down the server. How can I make this script wait until all VM backups are done? Thank you!
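A minimal sketch of a vzdump hook script that only powers the server off once the whole backup job has finished, keyed on the documented hook phases (the log messages and shutdown command are just an assumption of what the original script intends):

#!/bin/bash
# vzdump passes the phase as $1; per-guest phases also get the mode ($2) and VMID ($3)
case "$1" in
    job-start)
        echo "Backup job started"
        ;;
    backup-end)
        # runs once per VM/CT - do NOT shut down here
        echo "Backup of VMID $3 finished"
        ;;
    job-end)
        # runs exactly once, after ALL VMs/CTs in the job have been backed up
        echo "All backups finished, shutting down"
        shutdown -h now
        ;;
esac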
If ECC RAM is needed, maybe ZFS replication will not be the best choice for mini PCs; as I said, I have 3x HP ProDesk 600 G3. Which good alternative do I have for this setup? Thank you so much!
Hello guys,
So I just received 3x mini HP PCs (ProDesk 600 G3) with 16 GB RAM and a 480 GB SSD each.
I want to create a Proxmox cluster, and I want to ask which would be the best storage configuration with these 3 nodes?
I'm curious about Ceph too, but not sure yet if this configuration is...
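For scale, a rough sketch of what bootstrapping Ceph on the 3 nodes would involve (the network and device names are placeholders, and Ceph really wants a dedicated OSD disk per node rather than the single OS SSD):

pveceph install                                # on every node
pveceph init --network 10.0.0.0/24             # once, on the first node
pveceph mon create                             # on every node, for 3 monitors
pveceph mgr create                             # at least one manager
pveceph osd create /dev/sdb                    # per node, on a dedicated disk
pveceph pool create vm-pool --add_storages     # RBD pool usable for VM disks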
Hello, I have one main Dell server (8 TB, 2 CPUs, 64 GB RAM, etc.)
And a ThinkCentre NUC (512 GB, 16 GB RAM).
What is the best solution to make 2 VMs from the main server highly available on the NUC? Each VM uses 3 GB RAM and 30-40 GB of storage.
On the main server I have ZFS and RAID 1.
The NUC is not...
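A minimal sketch of what storage replication plus HA between the two nodes could look like (VM ID, target node name, and schedule are placeholders; it assumes a ZFS pool with the same name on both machines, and a 2-node cluster would also need a QDevice for quorum):

# replicate VM 100 to the NUC every 15 minutes
pvesr create-local-job 100-0 nuc --schedule "*/15"

# let the HA manager start the VM on the surviving node after a failure
ha-manager add vm:100 --state started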
I haven't had time until now to implement this scenario, but I'm not sure how to bind-mount the same /Plex directory from LXC1 into LXC2. The /Plex directory (256 GB in this case) was created as mp0 during container creation and stores its .raw image on a Directory storage on hdd_pool, but I don't understand exactly...
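In case it helps, a minimal sketch of sharing one host directory between two containers with bind mounts (the container IDs and path are placeholders; it assumes the Plex data is moved out of the existing .raw volume into a plain directory on the host, and unprivileged containers would additionally need matching UID/GID mappings):

# bind-mount the same host directory into both containers
pct set 101 -mp1 /mnt/hdd_pool/plex,mp=/Plex    # mp1 leaves the existing mp0 volume untouched
pct set 102 -mp0 /mnt/hdd_pool/plex,mp=/Plex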