Another error after a snapshot rollback, on the first cluster (production).
2020-07-10 12:22:00 103-0: start replication job
2020-07-10 12:22:00 103-0: guest => VM 103, running => 40137
2020-07-10 12:22:00 103-0: volumes => local-zfs:vm-103-disk-0,local-zfs:vm-103-state-voor_2_3_install...
My APC UPS is hooked up to my Proxmox node via USB as well; I use apcupsd for monitoring and logging.
Could you try installing the service and setting "EVENTSFILE" to your liking in /etc/apcupsd/apcupsd.conf?
Here's a little guide to set up the service for use with USB...
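For reference, the USB-relevant part of my apcupsd.conf looks roughly like this (values and paths are just examples, adjust to taste):

UPSCABLE usb
UPSTYPE usb
DEVICE
EVENTSFILE /var/log/apcupsd.events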
There's a thing called write amplification on ZFS; using TRIM (implemented in ZFS 0.8) will help significantly.
I've tried using Samsung SSDs for my home lab as well, but performance isn't great. Like LnxBil said, consumer SSDs aren't built for ZFS.
My Intel S3500 480GB SSDs dropped from 30%...
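If you want to give TRIM a go, it's roughly this on ZFS 0.8+ (pool name rpool assumed):

zpool set autotrim=on rpool
zpool trim rpool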
The replication was working rock solid until someone had to roll back a snapshot.
This is the content of datacenter.cfg:
keyboard: en-us
migration: insecure,network=10.1.25.0/24
This was related to the second issue:
volume 'rpool/data/vm-102-state-test2' already exists
The "no tunnel IP received" error...
I've had issues with the "INACCESSIBLE_BOOT_DEVICE" message when transferring a physical machine to a VM. The issue was resolved by uninstalling Intel's Rapid Storage.
I suspect something in the boot process is still looking for a specific Xen disk instead of a common (IDE) drive.
disk2vhd works fine for...
I have a few clusters that use ZFS replication and HA.
After a client rolled back a snapshot on one of their VMs, a weird error popped up on the next replication.
Removing the replication task and recreating it fixed the issue.
2020-06-25 11:36:01 102-0: start replication job
2020-06-25...
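For anyone who prefers the CLI over the GUI, removing and recreating the job is roughly this with pvesr (job ID, target node and schedule are just examples):

pvesr delete 102-0
pvesr create-local-job 102-0 pve2 --schedule '*/15'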
I've used the same script to migrate hundreds of VMs from Xen to Proxmox.
Even a Windows 2003 VM worked after changing some settings from our default VM template:
acpi: 1
agent: 0
bios: seabios
boot: cdn
bootdisk: ide0
cores: 4
ide0: local-lvm:vm-XXX-disk-0,size=100G
ide2: none,media=cdrom...
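Those settings can also be applied from the shell with qm set, roughly like this (VM ID is just an example):

qm set 103 --acpi 1 --agent 0 --bios seabios --boot cdn --bootdisk ide0 --cores 4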
You should be able to use the GPU in a container when it's not passed through to a VM.
https://www.passbe.com/2020/02/19/gpu-nvidia-passthrough-on-proxmox-lxc-container/
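I don't have the exact config at hand, but passing an NVIDIA GPU to a container usually boils down to a few lines like these in /etc/pve/lxc/<id>.conf (device numbers and paths are examples, check yours with ls -l /dev/nvidia*):

lxc.cgroup.devices.allow: c 195:* rwm
lxc.mount.entry: /dev/nvidia0 dev/nvidia0 none bind,optional,create=file
lxc.mount.entry: /dev/nvidiactl dev/nvidiactl none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-uvm dev/nvidia-uvm none bind,optional,create=file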
First of all I'd like to thank the devs for another great update. PVE 6.2 has brought some really nice features.
During my testing with 6.2 I was pleased to see that live migration with ZFS replication works now; this is a feature I've been waiting for.
I use HA with ZFS replication as...
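With the replication already in place, the live migration itself is just the usual command (VM ID and node name are examples):

qm migrate 103 pve2 --online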
My fix for NVIDIA Quadro cards is to export the vBIOS and pass the BIOS file through to the VM.
Simply running the VM with OVMF instead of SeaBIOS (requires a reinstall) might also work.
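For the vBIOS route, a rough sketch of what I mean (PCI address and filename are examples; the romfile has to end up in /usr/share/kvm/ and the card shouldn't be the active boot GPU while dumping):

cd /sys/bus/pci/devices/0000:01:00.0
echo 1 > rom
cat rom > /usr/share/kvm/quadro-vbios.bin
echo 0 > rom

Then reference it in the VM config:

hostpci0: 01:00,romfile=quadro-vbios.bin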
The ISO must fit in RAM uncompressed; I've had some mixed results simply loading the ISO.
I'm using the following script to convert the ISO, which has been a better solution in my experience.
https://github.com/morph027/pve-iso-2-pxe
Nice! Glad you got it to work.
AFAIK it doesn't matter if the EFI disk gets lost, since it just stores firmware variables like the boot order.
I did a quick test by copying an existing VM with UEFI and deleting the EFI disk. The VM booted after recreating the EFI disk and even without one...
Could you check out the disk using a live CD of some sort?
I have never imported a VHDX file directly; all my converted imports went well, though.
qemu-img convert -f vhdx image.vhdx -O raw image.raw
qm importdisk 102 image.raw local-zfs
These are the commands I use for importing VHD(X) files.
Did you set your VM to use UEFI?
On the Hardware tab, set BIOS to OVMF and add an EFI disk on the same page with the Add button.
On the Options tab, make sure the right disk is selected for the boot order.
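If you'd rather do it from the CLI, the equivalent is roughly this (VM ID and storage name are examples):

qm set 102 --bios ovmf --efidisk0 local-zfs:1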