If migration performance sucks, then you will have to take a look at the network as well. On the source host, reading from the storage shouldn't cause high I/O latency.
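A couple of quick checks I'd run (tool choice is just a suggestion, host name is a placeholder):
iperf3 -s                  # on the target host
iperf3 -c <target-host>    # on the source host, raw network throughput between the nodes
iostat -x 2                # on the source host while migrating, watch await/%util on the source disks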
Pardon? What other scenario do you need? I just had a sudden PVE host reboot on Thursday… these things can, and will, happen. Better to...
As I said, you connect the new drive and watch syslog, where you can see the drive being incorporated into the system. Then you simply perform an ls -l on the /dev/disk/by-id folder and get your new drive's mapping. Since your faulted drive is still mentioned in the zpool, I'd first go with the...
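For reference, a minimal version of those two steps (your device names will obviously differ):
journalctl -f                # or: tail -f /var/log/syslog, while plugging in the new drive
ls -l /dev/disk/by-id/       # note the ata-/wwn- symlink that points to the new drive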
You can do it in two ways:
zpool replace pool1 1200958216758885009 <new disk>
Or, you would just use the new disk and attach it to the mirror-0 vdev like this:
zpool attach pool1 ata-WDC_WD80EMAZ-00M9AA0_VAGDUWBL <new disk>
zpool detach pool1 1200958216758885009
You will need to query the...
I think the issue with containers is that these are sub-volumes under the ZFS root mount point, and since that one got messed up, these sub-vols couldn't be mounted. ZVOLs on the other hand are not mounted, but are treated as raw devices, hence none of my guests had any issue starting up and...
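A quick way to see that difference (pool name is just an example):
zfs list -o name,type,mountpoint -r pool1    # "filesystem" entries need their mountpoint, "volume" entries are zvols
ls -l /dev/zvol/pool1                        # zvols show up as block devices here, no mounting involved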
No, I actually don't know what caused this, but I have seen a couple of threads on this forum dealing with stuff that seemed to stem from this issue, which I think has been introduced in 6.1.8, but I can't tell for sure, since I only rarely restart my guests/containers. I had a weird PVE host...
Recalling from my session yesterday, this is what I did to be able to export my zpool pool properly:
systemctl stop pve-manager
systemctl stop zed
systemctl stop pve-storage
systemctl stop pvedaemon
systemctl stop pve-cluster
systemctl stop pveproxy
systemctl stop pvestatd
systemctl stop...
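With the services down, the export itself should then go through, something like (pool name is an example):
zpool export pool1        # and later, to bring it back: zpool import pool1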
I wouldn't tie this to the size of the zpool.cache file… and you wouldn't know what's going on with the mount folder unless you exported the zpool. The log clearly showed an issue with the mount point, so why not give it a try? All I am saying is that I experienced the same behaviour, until...
Soo… after getting my issue with my zpool fixed, I was able to look into this, and for me at least it's the way I remember it… while performing a backup in snapshot mode, my guests (all of them) stay connected and operational throughout the backup process. Not a single ping got lost when I ran...
That's right… the installer will wipe the existing zpool out of existence. That's the reason I always separate rpool/boot and VM storage. If you want to save your data, you'd have to zfs send your rpool data to a file or another ZFS host.
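A rough sketch of that (snapshot, dataset and target names are made up):
zfs snapshot -r rpool/data@pre-reinstall
zfs send -R rpool/data@pre-reinstall > /mnt/backup/rpool-data.zfs                       # to a file
zfs send -R rpool/data@pre-reinstall | ssh otherhost zfs receive -F tank/rpool-data     # to another ZFS host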
I had a similar issue and it turned out that, although I thought the zpool was correctly mounted, it wasn't. Also, the zpool.cache file had a size of 0, which wasn't right either. So I manually stopped all PVE services, and when trying to export the zpool, something immediately imported it back...
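On a default install the cache file sits here; a zero-byte file is a red flag:
ls -l /etc/zfs/zpool.cache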
Backup/restore/migration will not trigger sync writes, only async ones! If you experience throughput issues with any of them, it's not due to sync writes, but because something else is straining your resources.
I'd be more wary of sudden host crashes, which could then totally...
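If you want to verify the async claim: sync writes go through the ZIL, so watching the log vdev (if your pool has one) while a backup runs tells the story (pool name is an example):
zpool iostat -v pool1 2    # watch the "log" section; async-only traffic leaves it (nearly) idle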
And you did run grub-install /dev/sdp? I am actually not 100% sure whether the disks being of different sizes matters, since I have only used that on disks of equal size.
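For a legacy-boot mirror, something like this should keep both members bootable (device names are examples):
grub-install /dev/sda
grub-install /dev/sdb
update-grub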
So… it turned out that my PVE host crashed due to the mail gateway LXC repeatedly triggering OOM-killer invocations. After that reboot the zpool didn't get mounted in time, then the mount point got "occupied", and thus the ZFS dataset containing the folder didn't get mounted.
So… I ended up shutting...
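The symptom is easy to spot (pool name is an example): the dataset reports as unmounted while its mountpoint directory already contains files:
zfs get mounted,mountpoint pool1    # "mounted  no" despite the pool being imported
ls -la /pool1                       # anything already sitting in here blocks the mount
zfs mount -a                        # works again once the stray files are moved away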
Can it be that there has been an error introduced in the latest version of PVE? I have configured a zpool, and on that zpool is a folder which I have set to be used for backups. However, every time I try to run a backup and select that folder, I am seeing this:
As you can see, the amount of...
I see. I just wanted to try out whether the snapshot-based backup of a guest actually holds it for the duration of the backup process, but I encountered an issue that renders this impossible: when trying to perform a guest backup, all available locations end up pointing to the local...
Are you mixing up backups with replication? ZFS snaps are only sent to another ZFS storage, not via CIFS. The snapshot in regard to backups refers to the QEMU volume, which gets snapshotted internally; that snapshot is then saved to disk.
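In other words, roughly (VM ID, node and storage names are just examples):
zfs send pool1/vm-100-disk-0@rep1 | ssh other-node zfs receive pool1/vm-100-disk-0    # replication: ZFS to ZFS only
vzdump 100 --mode snapshot --storage cifs-backup                                      # backup: the QEMU volume gets snapshotted, target can be CIFS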
You should be able to query these values like this (note that you'd need to query each OSD daemon for its individual settings, hence the slightly different call; see the loop sketch below):
ceph daemon osd.1 config get osd_max_backfills
ceph daemon osd.1 config get osd_recovery_max_active
I haven't been using...
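As mentioned above, a small loop over the local OSD IDs saves some typing (IDs are examples; ceph daemon only reaches OSDs running on that host):
for id in 1 2 3; do
  ceph daemon osd.$id config get osd_max_backfills
  ceph daemon osd.$id config get osd_recovery_max_active
done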
Wow… sync=disabled is a real threat on a VM server. If an application issues a sync write, it is usually doing so on purpose. Sync writes ensure that the data has landed safely on the storage before signalling success back to the requester. The first examples that come to mind are database applications...
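If in doubt, check and restore it (dataset name is an example):
zfs get sync rpool/data             # "disabled" means sync writes are acknowledged before they are on stable storage
zfs set sync=standard rpool/data    # back to the default, honour sync requests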