Recent content by onixid

  1. Proxmox 7 nodes cluster + dedicated Ceph storage node

    Hi, I have 3 PowerEdge R730 XL, 4 PowerEdge R740 XL and a PowerEdge R740XD (block storage) that I would like to use to set up a Proxmox cluster. I am investigating the possibility of setting up the 7 servers within the same cluster (even though the hardware specs differ between models) and using the...
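    For reference, a minimal sketch of how such a cluster is usually formed with pvecm; the cluster name and node IP below are placeholders, not values from this thread:
    # on the first node
    pvecm create mycluster
    # on each of the remaining six nodes, pointing at the first node's IP
    pvecm add 10.0.0.11
    # verify membership and quorum
    pvecm status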
  2. SMART errors on root filesystem

    Well, I ran a "surface" scan of the original disk and it reported some read errors, which caused some data loss that affected one of my containers, which would no longer start. Luckily I partially got it back, as I had an almost year-old backup of it; it's not the entire thing, but it is...
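    For context, a read-only "surface" scan like the one described can be run with badblocks or a SMART long self-test; a minimal sketch, with /dev/sde only as a placeholder for the affected disk:
    badblocks -sv /dev/sde      # read-only scan, reports unreadable blocks
    smartctl -t long /dev/sde   # alternatively, start a SMART extended self-test
    smartctl -a /dev/sde        # check the self-test log once it finishes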
  3. SMART errors on root filesystem

    Thank you, I ended up buying an identical SSD, making a binary copy of the PVE disk onto the new one with ddrescue, and then replacing the old disk.
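    A minimal sketch of such a clone with GNU ddrescue, assuming the failing PVE disk is /dev/sde and the new SSD is /dev/sdf (both placeholders); the map file lets the copy resume and retry bad areas:
    ddrescue -f -n /dev/sde /dev/sdf rescue.map    # first pass, skip scraping of bad areas
    ddrescue -f -r3 /dev/sde /dev/sdf rescue.map   # second pass, retry bad areas up to 3 times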
  4. SMART errors on root filesystem

    Hi, I noticed that one of my containers was down and, when I tried to start it, it returned some I/O errors. So I checked the disk with smartctl and this is what I got:
    root@pve:/var/lib/lxc# smartctl -a /dev/sde
    smartctl 7.2 2020-12-30 r5155 [x86_64-linux-5.15.74-1-pve] (local build)...
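    The attributes in that output most relevant to read errors are usually the reallocated and pending sector counts; a minimal sketch for pulling just those, reusing the same device name:
    smartctl -A /dev/sde | grep -Ei 'reallocated|pending|uncorrect'
    smartctl -H /dev/sde    # overall SMART health self-assessment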
  5. Passthrough issue with 10Gtek 82576-2T-X1

    This could be meaningful:
    [15265.270900] perf: interrupt took too long (2627 > 2500), lowering kernel.perf_event_max_sample_rate to 76000
    [22493.329383] perf: interrupt took too long (3298 > 3283), lowering kernel.perf_event_max_sample_rate to 60500
    [32939.353103] perf: interrupt took too long...
  6. Passthrough issue with 10Gtek 82576-2T-X1

    If I remove that, it won't start at all and it throws this error:
    kvm: -device vfio-pci,host=0000:07:00.0,id=hostpci0,bus=ich9-pcie-port-1,addr=0x0: vfio 0000:07:00.0: failed to setup container for group 11: Failed to set iommu for container: Operation not permitted
    TASK ERROR: start failed...
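    That "Failed to set iommu for container: Operation not permitted" error is often tied to missing interrupt remapping support; a commonly suggested workaround (an assumption here, not something confirmed in this thread) is to allow unsafe interrupts for vfio and rebuild the initramfs:
    echo "options vfio_iommu_type1 allow_unsafe_interrupts=1" > /etc/modprobe.d/iommu_unsafe_interrupts.conf
    update-initramfs -u -k all
    # after a reboot, check whether remapping is actually available
    dmesg | grep -i 'remapping'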
  7. Passthrough issue with 10Gtek 82576-2T-X1

    Hi, I just installed a 10Gtek 82576-2T-X1 in an HP Microserver Gen8, and Proxmox (version 7.2-3) is showing the card correctly. I followed the passthrough guide to configure the system and the current configuration is as follows:
    IOMMU Group 0 00:00.0 Host bridge [0600]: Intel Corporation Xeon...
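    For reference, the passthrough guide's prerequisites on an Intel host boil down to enabling the IOMMU on the kernel command line and loading the vfio modules; a minimal sketch of what that typically looks like on a default Proxmox 7 install:
    # /etc/default/grub
    GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"
    # /etc/modules
    vfio
    vfio_iommu_type1
    vfio_pci
    # apply and reboot
    update-grub
    update-initramfs -u -k all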
  8. VM I/O errors on all disks

    Update: I had to manually restart the md raid and it worked fine, so I switched off the VM again and then reapplied the aio flag to the virtio devices. Once rebooted, the array started up properly. I will keep monitoring the situation for about 24 hours and...
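    A minimal sketch of what manually restarting the md array can look like from inside the guest; the array name /dev/md0 and the mount point are assumptions:
    mdadm --assemble --scan              # re-assemble arrays from their superblocks
    mdadm --run /dev/md0                 # or force-start an assembled but inactive array
    cat /proc/mdstat                     # confirm the array is active
    mount /srv/dev-disk-by-label-Array   # re-mount it (relies on the existing fstab entry)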
  9. VM I/O errors on all disks

    I tried to revert the change, but the system is still not mounting the RAID; I had 6 TB of data in it...
  10. VM I/O errors on all disks

    After the change the RAID 10 is not getting mounted anymore:
    Aug 11 12:31:54 nas01 monit[904]: Lookup for '/srv/dev-disk-by-label-Array' filesystem failed -- not found in /proc/self/mounts
    Aug 11 12:31:54 nas01 monit[904]: Filesystem '/srv/dev-disk-by-label-Array' not mounted
    Aug 11 12:31:54...
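    A minimal sketch of checks that narrow down whether the array itself or only the mount is missing; the device and label names are assumptions based on the log above:
    cat /proc/mdstat                              # is the md array assembled at all?
    mdadm --detail /dev/md0                       # member state, inactive/degraded flags
    blkid | grep -i 'Array'                       # does the filesystem label still resolve?
    journalctl -b -k | grep -iE 'md|i/o error'    # kernel messages from this boot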
  11. VM I/O errors on all disks

    You mean adding it like this only to the following devices?
    virtio1: /dev/disk/by-id/ata-WDC_WD30EFRX-68EUZN0_WD-WCC4N1JP0EY0,size=2930266584K,aio=native
    virtio2: /dev/disk/by-id/ata-WDC_WD30EFRX-68EUZN0_WD-WCC4N4UEJS7U,size=2930266584K,aio=native
    virtio3...
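    For reference, the same flag can be applied from the host with qm set instead of editing the config file by hand; a minimal sketch, assuming VM ID 100 as a placeholder and the same by-id paths:
    qm set 100 --virtio1 /dev/disk/by-id/ata-WDC_WD30EFRX-68EUZN0_WD-WCC4N1JP0EY0,size=2930266584K,aio=native
    qm set 100 --virtio2 /dev/disk/by-id/ata-WDC_WD30EFRX-68EUZN0_WD-WCC4N4UEJS7U,size=2930266584K,aio=native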
  12. VM I/O errors on all disks

    pveversion -v:
    root@pve:~# pveversion -v
    proxmox-ve: 7.0-2 (running kernel: 5.11.22-3-pve)
    pve-manager: 7.0-10 (running version: 7.0-10/d2f465d3)
    pve-kernel-5.11: 7.0-6
    pve-kernel-helper: 7.0-6
    pve-kernel-5.4: 6.4-5
    pve-kernel-5.11.22-3-pve: 5.11.22-6
    pve-kernel-5.4.128-1-pve: 5.4.128-1...
  13. VM I/O errors on all disks

    Hi, I recently upgraded from Proxmox 6 to 7; I have 1 VM and about 7 LXC containers running on it. My VM runs OpenMediaVault with 4 passed-through 3 TB WD Red disks and a WD Black 12 TB connected through USB. Today I noticed that my network shares were really slow, so I checked the OMV VM and I...
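    A minimal sketch of confirming such I/O errors from inside the OMV guest (device names will differ):
    dmesg -T | grep -iE 'i/o error|blk_update_request'
    journalctl -k -b | grep -i error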