Resurrecting this thread: I'm seeing the exact same problem for backups to a Hetzner storage box (BX50). Funnily enough, the problem only occurs from one server, while backups from 3 other servers in the cluster to the same storage box never have a problem. Also noteworthy: on average only 20%-30%...
Thanks for your answers.
My takeaway now was (and this is what I did in the cluster in question) to go for ZFS in a heterogeneous cluster where some nodes may have significantly slower storage and where I can live with losing a few minutes of data in case of a crash/failover. For future Ceph...
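(The "few minutes" part comes from the fact that ZFS in a PVE cluster means scheduled asynchronous storage replication rather than shared storage. Hypothetical example with VM 100 replicating to a node named pve2 every 5 minutes:

pvesr create-local-job 100-0 pve2 --schedule "*/5"

so on a failover you lose at most roughly that interval.)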
Unfortunately my Google skills were not good enough to get this seemingly basic question for my Ceph vs. ZFS decision answered:
When I add new nodes with better disk performance to the pool (which is my current situation, as I'm replacing a cluster server and the new one will be the first with NVMe drives), would...
I just saw the exact same error and successful finish. However, there were over 2 TB free on the target storage (a Synology CIFS share) and the backup was below 40 GB.
I could imagine that the success message was actually valid. When I repeated the backup directly afterwards it went through...
That is a nice find and analysis!! Exactly the same for me (also with the failure on the first VM boot, which I also ignored since simply booting a second time works).
And now, with blacklist i2c-nvidia-gpu added to /etc/modprobe.d/blacklist.conf, kernel 5.3.13-3-pve and all VMs, including the one...
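In case it helps someone else, the whole change boils down to something like this (module name as above; rebuild the initramfs and reboot afterwards):

echo "blacklist i2c-nvidia-gpu" >> /etc/modprobe.d/blacklist.conf
update-initramfs -u -k all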
The current kernel update (5.3.13-2-pve) had similar issues for the VM with the RTX 2080 SUPER passthrough.
Only now a USB controller is involved, but that is probably because I meanwhile added USB controller passthrough and stopped using the built-in USB feature with the virtualized...
@spicyisland nice to know that there are more people with a similar setup :)
@wolfgang thanks, that gave me the confidence to try the new 5.3.13-1-pve kernel. However, due to EFI boot, things seem a little bit different on my system.
But first of all, 5.3.13-1-pve still has a similar problem (log...
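In case someone else is on a UEFI install with ZFS root (systemd-boot): there the kernel parameters don't live in /etc/default/grub but on the single line in /etc/kernel/cmdline, roughly like this (the dataset name is just what the installer created on my box):

# /etc/kernel/cmdline (one line):
# root=ZFS=rpool/ROOT/pve-1 boot=zfs intel_iommu=on
pve-efiboot-tool refresh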
When I boot with my recent kernel (5.3.10) I cannot start the VM that gets an RTX 2080 SUPER passed through (I'll attach the full log with the error below). Another VM that gets a GT 1030 passed through still works normally.
However, when I select the previous kernel (5.0.x) from the boot menu...
Seems like it, but now everything works for me. And since I had to do some things a little differently from what I found in the Proxmox guides and forum, I thought I'd post it real quick:
1. Set up everything as described in the Proxmox guides posted above
2. In the VMs' .conf files I had to deviate a little bit (rough sketch below)...
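Purely as an illustration of what such a passthrough .conf can end up looking like (the PCI address 01:00 and the exact option mix are placeholders, not a copy of my real file):

bios: ovmf
machine: q35
hostpci0: 01:00,pcie=1,x-vga=1
vga: none
cpu: host,hidden=1

The hidden=1 part hides KVM from the guest, which the NVIDIA driver otherwise tends to punish with error 43.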
Now I feel silly :D Thanks so much for the info - that was effective!
However, GPU passthrough still doesn't work for the primary GPU. Now without any error (as far as I can see), but with a black screen after the sysboot console, which switches to no signal when the VM tries to claim it. However, now that...
I'm playing around with Proxmox 6 and PCI/GPU passthrough in a desktop PC with two graphics cards. I basically followed the Proxmox guides and some other tutorials out there. The setup already works perfectly for both GPUs with near-native performance in the VMs, but unfortunately only as...
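For context, the base setup was roughly the standard recipe from the guides (Intel board booted via GRUB here; an AMD board would use amd_iommu=on instead):

# /etc/default/grub: GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"
update-grub
# load the VFIO modules at boot
cat >> /etc/modules << 'EOF'
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd
EOF
update-initramfs -u -k all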
@tom I have really come to like Proxmox (including how you as a team handle most things, as well as your support and pricing strategy) over the past one or two years in which I started using it more often, but this link is just like throwing a giant ball of cluttered information overload at a poor...
Is that a recommendation you made up yourself? If yes, say so and don't say "in general"; otherwise state your source. Until then, one should note that pfSense treats VirtIO as a first-class citizen (and has for a long time): https://docs.netgate.com/pfsense/en/latest/virtualization/index.html
I fear I have to join the line of affected users... From reading through this thread I don't think I can add anything useful; my case looks very similar.
However, is there an easy way to get an email alert when corosync gets killed? One of my clusters was just in a degraded state for two...
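Until there is something built in, my untested idea for a workaround would be a systemd OnFailure= hook that mails root whenever the unit ends up in the failed state (this assumes the node's local postfix/sendmail works, and that a killed corosync actually lands in "failed"):

mkdir -p /etc/systemd/system/corosync.service.d
cat > /etc/systemd/system/corosync.service.d/notify.conf << 'EOF'
[Unit]
OnFailure=failure-mail@corosync.service
EOF
cat > /etc/systemd/system/failure-mail@.service << 'EOF'
[Unit]
Description=Send a failure mail for %i

[Service]
Type=oneshot
ExecStart=/bin/sh -c 'printf "Subject: %i failed on %H\n\nCheck journalctl -u %i\n" | /usr/sbin/sendmail root'
EOF
systemctl daemon-reload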
*thumbs up* exactly that's the case!
Maybe this information should be added to the "Limit ZFS Memory Usage" section on https://pve.proxmox.com/wiki/ZFS_on_Linux ;-)
When I rerun it, the output is
update-initramfs: Generating /boot/initrd.img-5.0.15-1-pve
and nothing else; I'm quite sure it was the same when I ran it the first time.
Meanwhile I tried to set the size on the fly with
> echo "8589934592" > /sys/module/zfs/parameters/zfs_arc_max
> echo...
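For completeness (and maybe for that wiki section): the echo commands above only change the running system. The permanent variant, as far as I understand it, is an option line in /etc/modprobe.d/zfs.conf, and with root on ZFS the initramfs has to be rebuilt so it also applies at boot:

echo "options zfs zfs_arc_max=8589934592" > /etc/modprobe.d/zfs.conf
update-initramfs -u -k all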
Thanks for the info, but that would somewhat defeat the purpose of an NVMe server (:
So I guess we have to wait. Or are there maybe other workarounds, like an easy way to install onto a non-RAID boot partition and still get to use most of the space for a RAID1 ZFS pool afterwards? That would at...
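Just to illustrate what I mean (device names made up): a small plain partition on each NVMe for the boot/root install, and then the big remaining partitions mirrored with something like

zpool create -o ashift=12 tank mirror /dev/nvme0n1p4 /dev/nvme1n1p4

for the bulk of the space.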
Any update on this or maybe a workaround?
Specifically, I'd like to install Proxmox on https://www.hetzner.de/dedicated-rootserver/px62-nvme, but I'd like to avoid ordering them (or one to start with) only to find out that Proxmox won't install :p
Thanks for the links. The Proxmox wiki beats me to the punch again ;-)
I did quite a lot of SPICE testing now and I'm surprised how bad it is... sure, everything I'm testing with right now is hard to compare to the setup I'm planning, but still: while MS RDP is already nearly lag-free at 1080p...