Most probably it will appear as an 'unused disk' and can be erased (only if you are 100% sure it is not needed anymore).
It is also possible that the volume still exists even if it doesn't appear in the config; in that case, very carefully, you could delete that volume using zfs commands in the console.
As deleting is a...
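If you do go the console route, a minimal sketch (the pool/dataset names here are examples, adjust them to your setup):

```
# list all ZFS volumes to find the orphaned one
zfs list -t volume
# double-check nothing references it, then destroy it (irreversible!)
zfs destroy rpool/data/vm-100-disk-1
```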
Actually I saw some error messages in HP servers' RAID status that 'told' me there is some CRC detection/correction on those cards too (even old models); unfortunately I could not find any documentation on the subject, so I cannot tell if detection and auto-correction are 'real-time' like in ZFS...
Maybe a stupid question, but I cannot find any hint in the documentation.
Does PBS 2.x benefit in any way from a ZFS pool vs. a hardware RAID array? I'm not talking about the general ZFS vs. HW RAID debate (both strategies have pros and cons, it's a long discussion), I'm interested...
You can use this Nagios plugin (maybe with some small modifications):
https://exchange.nagios.org/directory/Plugins/Hardware/Storage-Systems/RAID-Controllers/check_cciss--2D-HP-and-Compaq-Smart-Array-Hardware-status/details
It is based on hpacucli/ssacli output. It will not appear "nice" in...
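For reference, you can also query the underlying tool by hand; a hedged example (the controller slot number is an assumption, check yours with `ssacli ctrl all show`):

```
# overall status of the controller in slot 0
ssacli ctrl slot=0 show status
# status of all logical and physical drives
ssacli ctrl slot=0 ld all show status
ssacli ctrl slot=0 pd all show status
```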
Me too, yesterday, but without issuing fstrim -av before moving that VM (though, as I've said above, I'm not sure it does the trick); I still can't reproduce the failure.
It's your server, but personally, for disks larger than 3 TB (maybe even lower than that limit) I would use nothing less than RAID 6 (raidz2): at large disk sizes there is a significant risk that a second drive will fail during the rebuild/resilver, and in that case you will lose data.
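For example, creating a raidz2 pool looks like this (pool name and device paths are placeholders):

```
# raidz2 survives two simultaneous drive failures
zpool create tank raidz2 \
  /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2 \
  /dev/disk/by-id/ata-DISK3 /dev/disk/by-id/ata-DISK4 \
  /dev/disk/by-id/ata-DISK5 /dev/disk/by-id/ata-DISK6
zpool status tank
```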
ZFS likes raw "dumb" disks, because any RAID layer or similar may hide, or lie about, information that ZFS needs.
Some modern HW RAID implementations have a self-healing mechanism (like the checksums in ZFS), so you can recover from a situation where, e.g. in RAID 1, you get two different data blocks from the...
If you created the virtual disk on a storage with thin provisioning, then only the actually occupied data is allocated to the disk (plus "garbage").
To "garbage collect", run fstrim -av in the VM (via cron, or from time to time by hand), and make sure the discard option is checked on the virtual disk.
Same problem here, but with Debian 10. Unable to reproduce the bug; it happens now and then, on different VMs, different hosts. Not sure if "fstrim -av" before migration may help or it's just a "homeopathic" solution.
On source:
Mar 27 16:09:28 QEMU[19778]: kvm: ../block/io.c:1810...
First, you should define what "old" ProLiant means (model/generation, memory, HDDs/RAID) and what your purpose is (fun/test lab vs. some production).
Only PVE 6 is supported; PVE 5 is old (maybe for fun & test lab) and unmaintained; PVE 4 is too old to even talk about.
You should create an LVM thinpool with the same name on each server (Datacenter / server / Disks / LVM-Thin -> Create: Thinpool) and then create an LVM-Thin storage in Datacenter / Storage. I hope I was explicit, because I'm very tired right now, so be careful what you do in the interface.
You cannot have two storages with the same name in the cluster.
But you can have a local storage (e.g. named 'vmdata') on all or some servers in the cluster (in fact there will be x local storages, one on each server, but all with the same storage name).
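The same thing from the shell, if you prefer (the VG name 'pve', pool name 'vmdata' and size are examples):

```
# on EACH server: create a thin pool named 'vmdata' in volume group 'pve'
lvcreate -L 500G -T pve/vmdata

# then, once, cluster-wide: register it as an LVM-Thin storage
pvesm add lvmthin vmdata --vgname pve --thinpool vmdata --content images,rootdir
```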
A lot of (old) related posts in forum, let's see if something changed in 2021 :p
Given an HP MSA SAN connected (Fibre Channel) to some HP servers, what would be the best solution to use SOME (not all) of the SAN disk space with Proxmox?
AFAIK, the recommended method is creating and...
LXC vs. KVM is a long discussion; there is no perfect answer, you must think about your needs and decide.
- A VM is a little "safer" (i.e. better isolation, no shared kernel), though with the never-ending list of bugs from Intel & others that's quite arguable
- LXC comes with a little overhead (1-3%)...
Frank, just to be sure there is no confusion:
With Proxmox you do not replicate a whole storage; replication is done per VM. So there is no need for a "backup"/"replica"/"whatever special" storage on the "destination", just (at least) a storage with ZFS (afaik at this moment...
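A hedged example of setting up such per-VM replication from the CLI (the VM ID, job ID, target node and schedule are placeholders):

```
# replicate VM 100 to node 'pve2' every 15 minutes
pvesr create-local-job 100-0 pve2 --schedule '*/15'

# list configured jobs and check their state
pvesr list
pvesr status
```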