Search results

  1. Proxmox Backup VM on BOX

    Resurrecting this thread: I'm seeing the exact same problem for backups on a Hetzner storage box (BX50). Funnily enough, the problem only occurs from one server, while backups from 3 other servers in the cluster to the same storage box never ever have a problem. Also noteworthy: on average only 20%-30%...
  2. Basic ZFS vs Ceph Question

    Thanks for your answers. My takeaway now was (and this is what I did in the cluster in question) to go for ZFS in a heterogeneous cluster where some nodes may have significantly slower storage and where I can live with a data loss of a few minutes in case of a crash/failover. For future Ceph...
  3. Basic ZFS vs Ceph Question

    Unfortunately my Google skills were not good enough to get this seemingly basic question for my Ceph vs ZFS decision answered: when I add new nodes with better disk performance to the pool (which is my current case, as I replace a cluster server and the new one will be the first with NVMes), would...
  4. Backup issue - broken but finished successfully

    I just saw the exact same error and successful finish. However, there were over 2TB left free on the target storage (a Synology CIFS share) and the backup was below 40GB. I could imagine that the success message was actually valid. When I repeated the backup directly afterwards, it went through...
  5. [SOLVED] "unable to handle page fault" with 5.3.10 kernel but no problem with 5.0.x - how to preserve the old working kernel on updates?

    That is a nice find and analysis! Exactly the same for me (also with the failure on the first VM boot, which I also ignored since simply booting a second time works). And now, with blacklist i2c-nvidia-gpu added to /etc/modprobe.d/blacklist.conf, kernel 5.3.13-3-pve and all VMs including the one... (a blacklist sketch follows after these search results)
  6. [SOLVED] "unable to handle page fault" with 5.3.10 kernel but no problem with 5.0.x - how to preserve the old working kernel on updates?

    The current kernel update (5.3.13-2-pve) had similar issues for the VM with the RTX 2080 SUPER passthrough. Only now a USB controller is involved, but that is probably because I have meanwhile added USB controller passthrough and stopped using the built-in USB feature with the virtualized...
  7. [SOLVED] "unable to handle page fault" with 5.3.10 kernel but no problem with 5.0.x - how to preserve the old working kernel on updates?

    @spicyisland nice to know that there are more people with a similar setup :) @wolfgang thanks, that gave me the confidence to try the new 5.3.13-1-pve kernel. However, due to EFI boot, things seem a little bit different on my system. But first of all, 5.3.13-1-pve still has a similar problem (log...
  8. [SOLVED] "unable to handle page fault" with 5.3.10 kernel but no problem with 5.0.x - how to preserve the old working kernel on updates?

    When I boot with my recent kernel (5.3.10) I cannot start the VM that gets an RTX 2080 SUPER passed through (I'll attach the full log with the error below). Another VM, which gets a GT 1030 passed through, still works normally. However, when I select the previous kernel (5.0.x) from the boot menu...
  9. [SOLVED] Command line from /etc/default/grub not applied and PCI passthrough not working

    Seems like it, but now everything works for me. And since I had to do some things a little differently from what I found in the Proxmox guides and forum, I thought I'd post it real quick: 1. Set up as described in the Proxmox guides posted above 2. In the VMs' .conf files I had to differ a little bit...
  10. [SOLVED] Command line from /etc/default/grub not applied and PCI passthrough not working

    Now I feel silly :D Thanks so much for the info - that was effective! However, GPU passthrough still doesn't work for the primary GPU. Now without any error (as far as I can see), but with a black screen after the sysboot console, which switches to "no signal" when the VM tries to claim it. However, now that...
  11. [SOLVED] Command line from /etc/default/grub not applied and PCI passthrough not working

    I'm playing around with Proxmox 6 and PCI/GPU passthrough in a desktop PC with two graphics cards. I basically followed the Proxmox guides and some other tutorials out there. The setup also already works perfectly for both GPUs with near-native performance in the VMs, but unfortunately only as... (an IOMMU/command-line sketch follows after these search results)
  12. [SOLVED] PVE 5.4-11 + Corosync 3.x: major issues

    @tom I have really come to like Proxmox (incl. how you as a team handle most things, as well as your support and pricing strategy) over the past one or two years, during which I started using it more often, but this link is just like throwing a giant ball of cluttered information overload at a poor...
  13. PFsense 2.4 on Proxmox 5.2/5.3

    Is that a recommendation you made up yourself? If yes, say so and don't say "in general"; otherwise state your source. Until then, one should note that pfSense treats VirtIO as a first-class citizen (and has for a long time): https://docs.netgate.com/pfsense/en/latest/virtualization/index.html
  14. [SOLVED] PVE 5.4-11 + Corosync 3.x: major issues

    I fear I have to join the line of affected users... From reading through this thread I don't think I can add anything useful; my case looks very similar. However, is there an easy way to get an email alert when corosync gets killed? One of my clusters was just in a degraded state for two...
  15. [SOLVED] zfs_arc_max / /etc/modprobe.d/zfs.conf seems to be ignored in Proxmox 6.0

    *thumbs up* that's exactly the case! Maybe this information should be added to the "Limit ZFS Memory Usage" section on https://pve.proxmox.com/wiki/ZFS_on_Linux ;-) (a consolidated sketch follows after these search results)
  16. [SOLVED] zfs_arc_max / /etc/modprobe.d/zfs.conf seems to be ignored in Proxmox 6.0

    When I rerun it, the output is
        update-initramfs: Generating /boot/initrd.img-5.0.15-1-pve
    without anything else, and I'm quite sure it was the same when I ran it the first time. Meanwhile I tried to set the size on the fly with
        > echo "8589934592" > /sys/module/zfs/parameters/zfs_arc_max
        > echo...
  17. [SOLVED] zfs_arc_max / /etc/modprobe.d/zfs.conf seems to be ignored in Proxmox 6.0

    > cat /sys/module/zfs/version
    0.8.1-pve1
    > cat /etc/modprobe.d/zfs.conf
    options zfs zfs_arc_max=8589934592
    > update-initramfs -u
    > reboot
    > cat /sys/module/zfs/parameters/zfs_arc_max
    0
    > awk '/^c/ { print $1 " " $3 / 1048576 }' < /proc/spl/kstat/zfs/arcstats
    c 32138
    c_min 2008.62
    c_max...
  18. NVME ZFS problem

    Thanks for the info, but that would somehow kill the purpose of an NVMe server (: So I guess we have to wait. Or are there other workarounds, like maybe an easy way to install a non-RAID boot partition and still get to use most of the space for a RAID1 ZFS pool afterwards? That would at...
  19. NVME ZFS problem

    Any update on this, or maybe a workaround? Specifically, I'd like to install Proxmox on https://www.hetzner.de/dedicated-rootserver/px62-nvme but I'd like to avoid ordering them (or one to start with) and finding out Proxmox will not install :p
  20. Which hardware and setup is good for VDI / decent graphic performance?

    Thanks for the links. The Proxmox wiki beats me to the punch again ;-) I did quite a lot of SPICE testing now and I'm surprised how bad it is... sure, everything I'm testing with right now is hard to compare to the setup I'm planning, but still: while MS RDP is already nearly lag-free with 1080p...
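Result 5 above reports that adding blacklist i2c-nvidia-gpu to /etc/modprobe.d/blacklist.conf got the RTX 2080 SUPER passthrough VM booting again on the 5.3.13 kernels. A minimal sketch of that change, using the file and module name from the post; regenerating the initramfs afterwards is my own addition, since modprobe.d changes are usually baked into it:

    # stop the host kernel from binding the NVIDIA GPU's I2C controller
    echo "blacklist i2c-nvidia-gpu" >> /etc/modprobe.d/blacklist.conf
    # rebuild the initramfs for all installed kernels, then reboot
    update-initramfs -u -k all
    reboot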
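Results 9-11 (plus the EFI remark in result 7) deal with the kernel command line from /etc/default/grub not being applied, which breaks PCI/GPU passthrough. A minimal sketch for an Intel host; intel_iommu=on follows the standard Proxmox passthrough documentation, and the systemd-boot note at the end is an assumption based on the usual PVE 6 boot layout rather than on the snippets themselves:

    # /etc/default/grub - enable the IOMMU for PCI passthrough (Intel CPU)
    GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"

    # apply the change, reboot, and confirm the flag reached the running kernel
    update-grub
    reboot
    cat /proc/cmdline

    # hosts booting via systemd-boot (EFI with ZFS root) ignore /etc/default/grub:
    # put the flags in /etc/kernel/cmdline and run `pve-efiboot-tool refresh` instead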
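Results 15-17 describe limiting the ZFS ARC size via zfs_arc_max on Proxmox VE 6. A consolidated sketch of that recipe, assuming the 8 GiB limit (8589934592 bytes) used in the posts; the modprobe option only takes effect once the initramfs has been rebuilt and the host rebooted:

    # persistent ARC limit: 8 GiB = 8 * 1024^3 bytes
    echo "options zfs zfs_arc_max=8589934592" > /etc/modprobe.d/zfs.conf
    # bake the option into the initramfs, then reboot
    update-initramfs -u
    reboot

    # verify after the reboot (0 means the limit was not picked up)
    cat /sys/module/zfs/parameters/zfs_arc_max

    # change the limit at runtime without a reboot (not persistent)
    echo 8589934592 > /sys/module/zfs/parameters/zfs_arc_max

On a host that boots ZFS-on-root via systemd-boot, the refreshed initramfs may additionally need to be copied to the ESP (pve-efiboot-tool refresh on PVE 6) before the value survives a reboot; that extra step is an assumption about why result 17 still shows 0 and is not stated in the snippets.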
