Search results

  1. io-error on all VMs - Storage Full (ZFS Pool 0B free)

    A) The Pool would fill up anyway ... For 70 GB used inside the guest, it would accumulate 700 GB (!!!) of used space on the HOST. Now it's "only" using 181 GB ... A1) No extra bay capacity. B) I'm getting confused ... All the space / disk on the HOST is occupied by ZFS, there is no space for...
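
    To see where the host pool's space actually goes (live data vs. accumulated snapshots), something like the following helps; "rpool" is only a placeholder pool name:

      # Per-dataset breakdown: USED vs. USEDSNAP vs. USEDDS
      zfs list -o space -r rpool
      # The snapshots themselves, sorted so the biggest consumers come last
      zfs list -t snapshot -o name,used,refer -s used -r rpool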
  2. io-error on all VMs - Storage Full (ZFS Pool 0B free)

    What problem ... Right now ... none I guess ... except possibly user error deleting a whole dataset/Container inside the VM or something. Going forward probably backups (using sanoid + syncoid) but not there yet ... I plan to use Proxmox Backup Server (PBS) in the future but as I said ... not...
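
    For reference, a minimal sanoid policy sketch; the dataset name and retention numbers below are placeholders, not values from the thread:

      # /etc/sanoid/sanoid.conf
      [zdata/vm]
              use_template = production
              recursive = yes

      [template_production]
              hourly = 36
              daily = 30
              monthly = 3
              autosnap = yes
              autoprune = yes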
  3. Probable Data Loss - What is happening ?

    @fiona : the only possible reason, though I cannot be 100% sure about it since I don't remember the exact timing of events, is that I ran a Proxmox VE Update/Upgrade before/during/after (cannot remember exactly now :() the modifications I was doing inside that VM. Maybe that, at reboot of the VM...
  4. io-error on all VMs - Storage Full (ZFS Pool 0B free)

    As per the title, I have a host (running Podman/Docker Containers) that is periodically hitting 100% ZFS Pool Usage due to the Snapshots that keep getting created automatically on the Host. I enabled TRIM on the GUEST ZFS Pool (yes ... I'm running ZFS in the GUEST on top of a ZVOL that Proxmox...
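
    As a sketch of what enabling TRIM looks like on the guest side ("tank" is a placeholder pool name; the VM disk also needs discard=on on the Proxmox side so the trims reach the host ZVOL):

      zpool set autotrim=on tank   # continuous TRIM as blocks are freed
      zpool trim tank              # one-off manual TRIM pass
      zpool status -t tank         # shows per-vdev trim state/progress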
  5. Probable Data Loss - What is happening ?

    It's standalone ... But I believe Proxmox VE automatically runs a cluster anyway, even if there is a single node.
    root@pve16:~# mount -l | grep pve
    /dev/fuse on /etc/pve type fuse (rw,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other)
    root@pve16:~# zpool history |...
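
    A quick way to check the cluster/quorum state and recent pool operations on such a standalone node (generic commands, not quoted from the thread; "rpool" is a placeholder):

      pvecm status                      # reports whether the node is part of a cluster
      systemctl status pve-cluster      # pmxcfs, the service backing /etc/pve
      zpool history rpool | tail -n 20  # the most recent pool operations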
  6. Probable Data Loss - What is happening ?

    Here is the Config:
    root@pve16:~# qm config 105
    balloon: 0
    boot: order=scsi0;ide2;net0
    cores: 2
    cpu: host
    ide2: none,media=cdrom
    memory: 4096
    meta: creation-qemu=8.1.5,ctime=1714410786
    name: DC2
    net0: virtio=BC:24:11:A4:EA:7E,bridge=vmbr0,firewall=1
    numa: 1
    onboot: 1
    ostype: l26
    scsi0...
  7. Probable Data Loss - What is happening ?

    I am a bit dumbfounded with 2 Debian VMs on 2 different Proxmox VE Hosts. The boot process seems stuck as in the screenshot above. No matter how many times I refresh the page, the NoVNC Console always displays this useless page and I cannot do anything. However ... I could ssh and perform...
  8. Can't remove old vm-disk "data set is busy"

    Not sure how relevant it is, but this is my experience after struggling A LOT with these almost random issues. Note however that this is for a DATASET, NOT A ZVOL! lsof unfortunately never worked for me when troubleshooting ZFS busy issues ...
    # Dataset to be destroyed
    DATASET="zdata/PODMAN-TEST"
    #...
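
    The general shape of that kind of cleanup, as a hedged sketch rather than the exact script from the post (dataset name as in the excerpt):

      DATASET="zdata/PODMAN-TEST"
      # Find processes (often in another mount namespace) that still hold the mount
      grep "$DATASET" /proc/*/mounts 2>/dev/null
      # Force-unmount, then destroy the dataset and its children
      zfs unmount -f "$DATASET"
      zfs destroy -r "$DATASET"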
  9. ZFS root booting to busybox, but no displayed command, message or error?

    I was facing the same problem (usually solved through backing everything up via zfs send | zfs receive remotely, recreating the partition layout & zpool, and restoring through zfs send | zfs receive remotely in reverse). I discovered that the wrong line in /boot/grub/grub.cfg is actually added...
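
    The send/receive round-trip mentioned there looks roughly like this; host, pool and snapshot names are placeholders:

      # Back up the whole pool to a remote machine
      zfs snapshot -r rpool@migrate
      zfs send -R rpool@migrate | ssh backuphost zfs receive -F backup/rpool
      # ... recreate the partition layout and zpool, then restore in reverse
      ssh backuphost zfs send -R backup/rpool@migrate | zfs receive -F rpool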
  10. Proxmox VE 8 Booting Stuck on some systems

    I recently upgraded Proxmox VE from 7.x to 8.x alongside a Debian Bullseye -> Debian Bookworm upgrade. On some systems, however, it gets stuck at boot. The only thing I can see over Remote Management / IPMI, using Kernel 6.5.11-7-pve, is: Any ideas on how to debug further? Kernel...
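
    One way to confirm the kernel is the culprit is to pin a known-good version, assuming the system boots via proxmox-boot-tool (the version string below is only an example):

      proxmox-boot-tool kernel list               # show installed kernels
      proxmox-boot-tool kernel pin 6.2.16-20-pve  # boot this kernel until unpinned
      proxmox-boot-tool refresh
      proxmox-boot-tool kernel unpin              # revert to the default later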
  11. PCIe Bifurcation and PCIe address change

    That's unfortunate. I solved this the "easy" way by bifurcating the PCIe slot to x8x8 instead of x4x4x4x4. This way, since 2 devices are installed (in M.2 slot #1 and #3, so that each falls in a separate x8 slot), no "empty" places are left. The "trick" appears to work fine across several...
  12. PCIe Bifurcation and PCIe address change

    I successfully managed to set up PCIe Bifurcation on my Supermicro X9DRi-LN4F+, since I am using an NVMe Expander (ASUS Hyper M.2 X16 V2) for 2 x NVMe drives in striped configuration (planning to destroy & recreate as a mirror for safety reasons!). The NVMe Expander is *not* passed through any...
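
    Going from the striped pool to a mirror indeed means destroy & recreate; a minimal sketch with placeholder pool/device names (data must be backed up first):

      zpool destroy nvmepool
      zpool create -o ashift=12 nvmepool mirror \
          /dev/disk/by-id/nvme-DRIVE_A /dev/disk/by-id/nvme-DRIVE_B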
  13. Proxmox VE 6.2 Randomly Rebooting

    Thank you for your reply, Richard. The system runs fine with a kernel from the 5.3.x series. Now up for 5 days and still going strong. Something is definitely wrong with the kernel in the 5.4.x series. Why would the issue be caused by saltstack? You think the salt minion is causing a kernel...
  14. Proxmox VE 6.2 Randomly Rebooting

    For some weird reason my recently installed "new" NAS is randomly rebooting (approximately 1-2 times each hour). System:
    - Supermicro X10SLL-F
    - Intel Xeon E3-1270 V3
    - 4 x 8GB Unbuffered ECC DDR3
    - 2 x Crucial MX100 256GB
    - 1 x IBM Exp ServeRAID M1015 (LSI 9220-8i) -> PCI-e pass-through to...
  15. PCI Passthrough Selection with Identical Devices

    IMPORTANT: see EDIT #3 as to why this script doesn't work. I modified the script from @radensb a bit to make it more dynamic, support several devices, and separate the script from the configuration. My main motivation is to install the script using Saltstack on every PVE host. Then on each PVE...
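
    For context, the low-level mechanism such selection scripts usually build on is the sysfs driver_override; this is only a generic illustration with a placeholder PCI address, not the script from the post (which, per EDIT #3, did not work):

      modprobe vfio-pci
      ADDR="0000:41:00.0"
      echo vfio-pci > /sys/bus/pci/devices/$ADDR/driver_override
      echo "$ADDR"  > /sys/bus/pci/devices/$ADDR/driver/unbind 2>/dev/null
      echo "$ADDR"  > /sys/bus/pci/drivers_probe   # rebind, now to vfio-pci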
  16. [TUTORIAL] Cluster made of permanent + adhoc nodes

    Hi, I wanted to share my experience with you concerning my home lab cluster made of:
    - Permanently-on machines
    - Ad-hoc machines (in order to reduce power consumption)
    Of course, besides the "usual" problems of voting (solved by assigning a much higher number of votes to the permanently-on...
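
    The vote weighting lives in /etc/pve/corosync.conf; a trimmed sketch with placeholder node names and addresses:

      nodelist {
        node {
          name: pve-permanent
          nodeid: 1
          quorum_votes: 10   # the always-on node carries the quorum on its own
          ring0_addr: 192.168.1.10
        }
        node {
          name: pve-adhoc1
          nodeid: 2
          quorum_votes: 1
          ring0_addr: 192.168.1.11
        }
      }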
  17. Issue with chattr inside LXC container

    I will gladly provide any information that could help solve this issue, but I'm not sure what more information I can give ... I am just forcing a folder to be immutable with the "+i" flag. I don't want the system to be able to write to that folder without the ramdisk (tmpfs) being mounted...
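
    A sketch of the intended setup as plain commands (paths and sizes are placeholders; inside the container, the chattr step is what the thread is about):

      mkdir -p /srv/ramdisk
      chattr +i /srv/ramdisk                        # block writes while nothing is mounted
      mount -t tmpfs -o size=2G tmpfs /srv/ramdisk  # the mounted tmpfs hides the immutable dir
      lsattr -d /srv/ramdisk                        # verify the +i flag (check while unmounted)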
  18. Issue with chattr inside LXC container

    I recently started playing with Proxmox VE 5.1.3 and so far my impressions are good. Due to some performance overhead of KVM for a distcc compile server, I tried LXC and I get essentially bare-metal performance. The only problem is that, in order to avoid filling the local disk, I want to...
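
    One way to give an LXC container a RAM-backed scratch area is a raw lxc.mount.entry in its config; a sketch with a placeholder container ID, path and size:

      # appended to /etc/pve/lxc/200.conf
      lxc.mount.entry: tmpfs srv/ramdisk tmpfs defaults,size=2G,create=dir 0 0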