Replace what?
Post output of
zpool status
If it's a single-drive pool, you can convert it into a RAID-1 mirror with zpool attach <pool> <current drive> <new drive>
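A minimal sketch, assuming the pool is called rpool and the current and new disks are /dev/sdb and /dev/sdc (all placeholders, adjust to your setup):
zpool attach rpool /dev/sdb /dev/sdc
zpool status    # the pool should now show a mirror vdev and start resilvering
Once the resilver finishes you have a two-way mirror.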
Upgraded 3 hosts in a cluster to the new kernel; no problems with Windows VMs so far. I'll try to re-enable vzdump backups of the Windows VMs (I noticed I had more BSODs when vzdump backups were enabled) and keep watching the situation.
Next step, I hope, will be the upgrade to Ceph Luminous 12.2.2, which...
First of all, you should upgrade to the latest version; there is a guide for upgrading from 4.4 to 5.1:
https://pve.proxmox.com/wiki/Upgrade_from_4.x_to_5.0
If you need an ISO (CD-ROM image) to install a VM, you can upload your ISO from the Proxmox GUI: in the left pane, Server View -...
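As an alternative to the GUI upload, you can also copy the ISO straight to the node; a sketch assuming the default "local" storage and a hypothetical ISO filename:
scp debian-9.3-amd64-netinst.iso root@<pve-host>:/var/lib/vz/template/iso/
After that the ISO shows up in the CD/DVD drive selection when you create the VM.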
I had a similar problem on a server; for me the solution was to reduce the RAM used by the ZFS ARC cache and to raise vm.min_free_kbytes.
For ARC size, in /etc/modprobe.d/zfs.conf put
options zfs zfs_arc_max=X
where X is the size in bytes of how much RAM you want the ARC cache to use; the value to use is...
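As an example (the numbers are just placeholders, tune them to your RAM), a 4 GiB ARC limit and 256 MiB of vm.min_free_kbytes would look like this:
# /etc/modprobe.d/zfs.conf
options zfs zfs_arc_max=4294967296
# /etc/sysctl.conf (or a file under /etc/sysctl.d/)
vm.min_free_kbytes = 262144
Then run sysctl -p to apply the sysctl; if your root filesystem is on ZFS you may also need update-initramfs -u and a reboot so the ARC limit is applied at boot.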
Using fdisk (or cfdisk, or gdisk if it's GPT) on /dev/sdX from inside the VM, can you see the free space at the end of the disk?
In that case you can (rough sketch after this list):
- partition that space
- pvcreate /dev/sdXn on the new partition
- then vgextend vg /dev/sdXn (vg is the name of the volume group you want to...
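A rough sketch of the whole sequence, assuming the disk is /dev/sdb, the new partition becomes /dev/sdb2, the volume group is called vg and the logical volume to grow is vg/root with ext4 (all of these names are examples, adjust to your layout):
fdisk /dev/sdb                       # create a new partition in the free space
pvcreate /dev/sdb2
vgextend vg /dev/sdb2
lvextend -l +100%FREE /dev/vg/root
resize2fs /dev/vg/root               # xfs_growfs <mountpoint> instead if it's XFS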
- In Windows 10, do a full shutdown (not the fast shutdown/reboot that's the default in Windows 8/10); from cmd / PowerShell:
shutdown -s -t 0
- After that, resize the virtual disk (skip this step if you have already resized it); see the example after this list
- Restart VM
- check from diskmgmt.msc if you have free space to...
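For the resize step, a sketch using the Proxmox CLI, assuming VM ID 101, a disk called scsi0 and a 20 GB increase (all placeholders; the same can be done from the GUI with Hardware > Resize disk):
qm resize 101 scsi0 +20G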
What does the OSD log say?
In the end, I think you could mark that OSD out and let Ceph rebalance the PGs away from that OSD; at least try a ceph pg repair 15.1f26
I think it's better if you wait for a reply from someone with more Ceph experience than me; I tried to help with what I know, but I don't know...
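If you do take it out, a minimal sketch assuming the problematic OSD is OSD.134 as in the rest of the thread:
ceph osd out 134
ceph -s          # watch the recovery/rebalance progress
ceph pg repair 15.1f26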
OSD 134 is primary for that PG, but that OSD seems to have problems communicating with the other OSDs.
From the host where OSD 134 is (host19 according to the crush map), you should check
systemctl status ceph-osd@134
and last lines from /var/log/ceph/ceph-osd.134.log
What do you get from ceph pg dump | egrep "^(PG_STAT|15.1f26)"?
You could even check that OSD's log in /var/log/ceph/ceph-osd.134.log from the host where that OSD is.
PG 15.1f26 is on OSDs 134 and 159 (do you have size=2 on this pool?) and OSD.134 has blocked requests. Is OSD.134 working?
You have problems with OSD.70 too. Is it working?
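A quick way to check is to look at the up/down column of the OSD tree (standard ceph command, OSD IDs taken from this thread):
ceph osd tree    # check that osd.70, osd.134 and osd.159 show as up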
Have you tried
ceph health detail
to see which PG is problematic and which OSDs it is mapped to?
You should see a line similar to:
pg xx.yy is <status>, acting [ osdX, osdY, osdZ, ... ]
If one or more of the OSDs to which the PG is mapped are working, you could even try a repair of the PG...
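A sketch of that, where xx.yy is the PG ID reported by ceph health detail (placeholder):
ceph pg xx.yy query    # more detail on why the PG is stuck
ceph pg repair xx.yy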
According to your zpool output, pool-2tb is created on pve1, so there is no pool-2tb on pve2.
In the PVE interface, under Datacenter - Storage, edit pool-2tb and in the Nodes dropdown choose pve1.
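The equivalent setting on the CLI side lives in /etc/pve/storage.cfg as the nodes option; a sketch assuming the storage is a zfspool entry named pool-2tb (the other options are just examples of what such an entry may contain):
zfspool: pool-2tb
        pool pool-2tb
        content images,rootdir
        nodes pve1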
Another BSOD on Windows 2012R2 during the night; I can't understand why it always happens during the night.
KSM sharing is 0, swap is used, see attached screenshot
When you add a mount point to a container, there is a checkbox in the GUI labelled Backup. If you want that mount point included in the backup, you have to tick that checkbox.
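From the CLI the same flag is the backup option of the mount point; a sketch assuming container ID 101, mount point mp0 and a hypothetical storage/volume name:
pct set 101 -mp0 local-zfs:subvol-101-disk-1,mp=/mnt/data,backup=1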