Hi, I have 3 PowerEdge R730 XL, 4 PowerEdge R740 XL and a PowerEdge R740XD (block storage) that I would like to use to set up a Proxmox cluster.
I am investigating the possibility of setting up all 7 servers within the same cluster (even though the hardware specs differ between models) and using the...
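Mixed hardware models are generally fine within one Proxmox cluster; clustering doesn't require identical nodes, though live migration between CPU generations usually needs a common virtual CPU type. A minimal sketch of forming the cluster, assuming a hypothetical cluster name and node IP:

```shell
# On the first node, create the cluster (name is a placeholder):
pvecm create mycluster

# On each of the other six nodes, join using the first node's IP:
pvecm add 192.168.1.10

# Verify quorum and membership from any node:
pvecm status
```

With mixed R730/R740 CPU generations, setting the VMs' CPU type to a common baseline (or the oldest model present) is the usual way to keep live migration working in both directions.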
Well, I ran a "surface" scan of the original disk and it reported some read errors. The resulting data loss affected one of my containers, which would no longer start.
Luckily I partially recovered it, as I had an almost year-old backup. It's not the entire thing, but it is...
Hi,
I noticed that one of my containers was down, and, when I tried to start it, it returned some I/O errors.
So I checked the disk with smartctl and this is what I got:
root@pve:/var/lib/lxc# smartctl -a /dev/sde
smartctl 7.2 2020-12-30 r5155 [x86_64-linux-5.15.74-1-pve] (local build)...
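When smartctl confirms I/O errors, a few SMART attributes are the usual indicators of physical failure, and a self-test gives a definitive answer. A sketch using the device name from the post:

```shell
# Attributes that typically signal a dying disk (non-zero raw values are bad news):
smartctl -A /dev/sde | grep -E 'Reallocated_Sector_Ct|Current_Pending_Sector|Offline_Uncorrectable'

# Run an extended self-test, then check its result once it finishes:
smartctl -t long /dev/sde
smartctl -l selftest /dev/sde
```

If `Current_Pending_Sector` or `Offline_Uncorrectable` is climbing, the disk should be replaced rather than repaired.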
This could be meaningful:
[15265.270900] perf: interrupt took too long (2627 > 2500), lowering kernel.perf_event_max_sample_rate to 76000
[22493.329383] perf: interrupt took too long (3298 > 3283), lowering kernel.perf_event_max_sample_rate to 60500
[32939.353103] perf: interrupt took too long...
If I remove that, the VM won't start at all and it throws this error:
kvm: -device vfio-pci,host=0000:07:00.0,id=hostpci0,bus=ich9-pcie-port-1,addr=0x0: vfio 0000:07:00.0: failed to setup container for group 11: Failed to set iommu for container: Operation not permitted
TASK ERROR: start failed...
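The "Failed to set iommu for container: Operation not permitted" error usually means the kernel did not actually enable the IOMMU. A hedged way to check, assuming an Intel CPU (the GRUB line shown is an example, not the poster's actual config):

```shell
# Check whether the kernel enabled the IOMMU at boot:
dmesg | grep -e DMAR -e IOMMU

# If nothing relevant shows up, add the flags to the kernel command line in
# /etc/default/grub, e.g.:
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"
# then regenerate the bootloader config and reboot:
update-grub
```

On AMD hosts the flag is `amd_iommu=on` instead; systems booting via systemd-boot edit `/etc/kernel/cmdline` and run `proxmox-boot-tool refresh` rather than `update-grub`.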
Hi,
I just installed a 10Gtek 82576-2T-X1 in an HP MicroServer Gen8, and Proxmox (version 7.2-3) shows the card correctly.
I followed the passthrough guide to configure the system and the current configuration is as follows:
IOMMU Group 0 00:00.0 Host bridge [0600]: Intel Corporation Xeon...
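Output like the group listing above is typically produced by walking `/sys/kernel/iommu_groups`; a minimal sketch of such a script:

```shell
#!/bin/bash
# Print every IOMMU group and the PCI devices it contains.
shopt -s nullglob
for group in /sys/kernel/iommu_groups/*; do
    echo "IOMMU Group ${group##*/}"
    for dev in "$group"/devices/*; do
        # lspci -nns prints the device description plus vendor:device IDs
        echo -e "\t$(lspci -nns "${dev##*/}")"
    done
done
```

For passthrough, what matters is that the NIC sits in its own group (or shares one only with devices you can also hand to the VM).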
I had to manually restart the array
Update: I had to manually restart the md RAID. That worked fine, so I switched the VM off again and reapplied the aio flag to the virtio devices.
Once rebooted, the array started up properly.
I will keep monitoring the situation for about 24 hours and...
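For reference, restarting a stalled md array usually comes down to a couple of mdadm commands; a sketch, assuming the array is `/dev/md0` (the post doesn't name it):

```shell
# Check whether the array assembled and whether it is active or inactive:
cat /proc/mdstat

# If it is listed but inactive, try starting it in place:
mdadm --run /dev/md0

# Otherwise, stop it and reassemble from the member disks' metadata:
mdadm --stop /dev/md0
mdadm --assemble --scan
```

`mdadm --detail /dev/md0` afterwards shows whether all members rejoined or whether the array came up degraded.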
After the change the RAID 10 is not getting mounted anymore:
Aug 11 12:31:54 nas01 monit[904]: Lookup for '/srv/dev-disk-by-label-Array' filesystem failed -- not found in /proc/self/mounts
Aug 11 12:31:54 nas01 monit[904]: Filesystem '/srv/dev-disk-by-label-Array' not mounted
Aug 11 12:31:54...
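Given the monit messages, the first thing to establish is whether the md device exists and still carries the label the mount point expects; a hedged sketch using the label from the log:

```shell
# Is the array assembled at all?
cat /proc/mdstat

# Does any block device carry the expected filesystem label?
blkid | grep -i array

# If the labelled device exists, try mounting it where monit expects it:
mount /dev/disk/by-label/Array /srv/dev-disk-by-label-Array
```

If `blkid` shows the label but the mount fails, the filesystem itself may need a check before remounting.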
You mean adding it like this only to the following devices?
virtio1: /dev/disk/by-id/ata-WDC_WD30EFRX-68EUZN0_WD-WCC4N1JP0EY0,size=2930266584K,aio=native
virtio2: /dev/disk/by-id/ata-WDC_WD30EFRX-68EUZN0_WD-WCC4N4UEJS7U,size=2930266584K,aio=native
virtio3...
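The same per-disk option can also be applied from the CLI instead of editing the VM config file directly; a sketch, assuming VM ID 100 (the actual ID isn't in the post):

```shell
# Re-attach virtio1 with aio=native set on the existing disk:
qm set 100 --virtio1 /dev/disk/by-id/ata-WDC_WD30EFRX-68EUZN0_WD-WCC4N1JP0EY0,aio=native
```

Either way, the VM needs a full stop/start (not just a reboot from inside the guest) for the new aio setting to take effect.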
Hi, I recently upgraded from Proxmox 6 to 7. I have 1 VM and about 7 LXC containers running on it.
My VM runs OpenMediaVault with 4 passed-through 3TB WD Red disks and another 12TB WD Black connected through USB.
Today I noticed that my network shares were really slow, so I checked the OMV VM and I...