You should read this thread:
I know that the Proxmox Team hates RAID-0...
You have a pool with size=2 and min_size=2. If an OSD is down, there are some placement groups with only one copy available, which is less than...
You are probably going to use a Ceph pool with failure domain = host and min_size = 2, so you need at least 2 hosts.
Now you have a choice (from...
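For reference, you can check a pool's current replication settings like this (the pool name 'rbd' is just an example):
ceph osd pool get rbd size        # how many replicas the pool keeps
ceph osd pool get rbd min_size    # how many replicas must be up before I/O is allowed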
root@fujitsu1:~# ceph <TAB><TAB>
auth df heap mon quorum...
Are there any slow requests in the ceph status?
If not, and the OSD is still up, there wasn't any I/O operation on that drive, so Ceph doesn't know if...
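For example, to look for slow requests (the OSD id is only an example):
ceph -s                                 # the health section reports slow requests, if any
ceph health detail                      # shows which OSDs the requests are stuck on
ceph daemon osd.3 dump_ops_in_flight    # run on the host that carries osd.3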
Upgrading Proxmox fixes the problem.
proxmox-ve: 5.2-2 (running kernel: 4.15.18-4-pve)
pve-manager: 5.2-8 (running version: 5.2-8/fdf39912)
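Assuming the repositories are set up correctly, the usual upgrade path is:
apt update
apt dist-upgrade    # Proxmox needs dist-upgrade, not plain upgrade
reboot              # boot into the new pve kernel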
Restoring to local-lvm doesn't work either.
restore vma archive: zcat /mnt/pve/qnap/dump/vzdump-qemu-1191-2019_03_08-10_18_13.vma.gz | vma extract -v -r /var/tmp/vzdumptmp974517.fifo -...
The problem is that you have size=min_size, so any down OSD will freeze I/O on the pool.
Change size to 3 (be advised that this will cause mass data movement).
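For example (the pool name 'rbd' is just an example):
ceph osd pool set rbd size 3       # triggers backfill of the third copy
ceph osd pool set rbd min_size 2   # I/O keeps flowing with one replica down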
And the pool's size/min_size?
I/O is blocked because of:
2019-02-06 11:10:56.387126 mon.bluehub-prox02 mon.0 10.9.9.2:6789/0 33971 : cluster [WRN] Health check failed: Reduced...
So maybe you should move the VLANs inside the VM instead of setting up one interface for each.
Use VLANs inside the VM?
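A minimal sketch for a Debian guest with the vlan package installed (the interface name ens18, VLAN id 100, and the address are placeholders; the VM's NIC on the host bridge must be left untagged so the tagged frames reach the guest):
# /etc/network/interfaces inside the guest
auto ens18.100
iface ens18.100 inet static
    address 192.0.2.10/24
    vlan-raw-device ens18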
So you have a hardware problem. Remove all cards and try again.
With a kernel panic? Please show the kernel messages.
That's the reason.
Remove 'acpi=off' from the kernel command line and you will see all the cores.
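On a standard Proxmox/Debian install that means editing /etc/default/grub and regenerating the config:
# in /etc/default/grub, delete acpi=off from this line:
GRUB_CMDLINE_LINUX_DEFAULT="quiet"
update-grub    # rebuild /boot/grub/grub.cfg
reboot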
It's not true.
losetup -o 1048576 /dev/loop22 disk-drive-ide0.raw    # map the partition (1 MiB into the image) to a loop device
mount /dev/loop22 /mnt/123
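If you don't know the offset, read it from the partition table first, and detach everything when you're done:
fdisk -l disk-drive-ide0.raw    # note the partition's start sector, e.g. 2048
# 2048 sectors * 512 bytes = 1048576, the offset used above
umount /mnt/123
losetup -d /dev/loop22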
This is outdated.
Use LVM and ceph-volume:
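For example (/dev/sdX is a placeholder for the target disk):
ceph-volume lvm create --data /dev/sdX          # creates the LVs and the OSD in one step
# or reuse an existing logical volume:
ceph-volume lvm create --data vg_name/lv_name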
Are you sure you're using the right /dev/sdX device, and that it isn't mounted?