Search results

  1. [SOLVED] ZFS replication for VMs with multiple disks

    Most probably it will appear as an 'unused disk' and can be erased (only if you are 100% sure it is not needed anymore). It is also possible that the volume exists even if it doesn't appear in the config; in that case you could delete that volume using zfs commands in the console, but be very careful, as deleting is a...
  2. PBS: ZFS vs. hardware raid

    Actually, I saw some error messages in HP servers' RAID status that 'told' me there is some CRC detection/correction on those cards too (even on old models); unfortunately I could not find any information on that subject, so I cannot tell whether detection and auto-correction are 'real-time' as ZFS...
  3. PBS: ZFS vs. hardware raid

    Maybe a stupid question, but I cannot find any hint in the documentation. Does PBS 2.x benefit in any way from a ZFS pool vs. a hardware RAID array? I'm not talking about the general discussion of ZFS vs. HW RAID (both strategies have pros and cons, it's a long discussion), I'm interested...
  4. HP Proliant DL380e Gen8 - S.M.A.R.T. and other diagnostics in iLO

    You can use this Nagios plugin (maybe with some small modifications): it is based on hpacucli/ssacli output. It will not appear "nice" in...
  5. Migration fails (PVE 6.2-12): Query migrate failed

    Me too, yesterday, but without issuing an fstrim -av before moving that VM (though, as I said above, I'm not sure it does the trick); I still can't reproduce the failure.
  6. Mounting Large ZFS disk but cant use it 100%

    It's your server, but personally, for more than 3 TB per disk (maybe even a lower limit), I would use no less than RAID6 (raidz2), because at large disk sizes there is a great risk that a second drive will fail during rebuilding/resilvering, and in that case you will lose data.
  7. Mounting Large ZFS disk but cant use it 100%

    ZFS likes raw "dumb" disks, because any layer of RAID or similar may hide, or lie about, information that ZFS needs. Some modern HW RAID implementations have a self-healing mechanism (like the checksums in ZFS), so you can recover from a situation where, e.g. in RAID1, you get 2 different data blocks from the...
  8. Shrinking a VM within Zpool

    If you created the virtual disk on a storage with thin provisioning, then only the actually occupied data is allocated to the disk (plus "garbage"). To "garbage collect", run fstrim -av in the VM (cron-based or from time to time), and make sure the discard option is checked on the virtual disk.
  9. Migration fails (PVE 6.2-12): Query migrate failed

    Same problem here, but with Debian 10. Unable to replicate the bug; it happens now and then, on different VMs and different hosts. Not sure if "fstrim -av" before migration may help or it's just a "homeopathic" solution. On source: Mar 27 16:09:28 QEMU[19778]: kvm: ../block/io.c:1810...
  10. [SOLVED] Proxmox and Old ProLiant Server

    First, you should define what "old" ProLiant means (model/generation, memory, HDDs/RAID) and what your purpose is (fun/test lab vs. some production). Only PVE 6 is supported; PVE 5 is old (maybe for fun & test lab) and unmaintained; PVE 4 is too old to talk about.
  11. create lvm-thin local storage same name

    You should create an LVM thin pool on each server (Datacenter/server/Disks/LVM-Thin -> Create Thinpool) with the same name, and then create an LVM-Thin storage in Datacenter/Storage. I hope I was clear, because I'm very tired now, so be careful what you do in the interface.
  12. create lvm-thin local storage same name

    You cannot have two storages with the same name in the cluster, but you can have a local storage (e.g. named 'vmdata') on all or some servers in the cluster (in fact there will be x local storages, one on each server, but with the same storage name).
  13. HP SAN + Proxmox = love ?

    There are a lot of (old) related posts in the forum; let's see if something has changed in 2021 :p Given an HP MSA SAN, connected (Fibre Channel) to some HP servers, what would be the best solution to use SOME (not all) of the SAN disk space with Proxmox? AFAIK, the recommended method is creating and...
  14. KVM vs LXC for web server

    LXC vs. KVM is a long discussion; there is no perfect answer, you must think about your needs and decide. - A VM is a little "safer" (i.e. better isolation, no shared kernel), but with the never-ending list of bugs from Intel & others that's very arguable. - LXC comes with a little overhead (1-3%)...
  15. Replicate to new zfs disks

    Frank, just to make sure there's no confusion: with Proxmox you do not replicate a full storage; replication is done per VM. So there is no need for a "backup"/"replica"/"whatever special" storage on the destination, just (at least) a storage with ZFS (AFAIK, at this moment...
  16. PM 6.2 KVM Live migration failed (bug or ?)

    A couple of weeks/months ago I had a similar issue: errors on live migration. I didn't get any clue, but some VMs were more susceptible to live migration failure even though their usage (even I/O) was very light. I tried to "fstrim -av" the guests just before the live migration, and no errors since then...
  17. [SOLVED] Orphaned VM in list after replacing disks

    Remove (or better, just move) the unneeded confs from /etc/pve/local/qemu-server (also /etc/pve/qemu-server), and the VMs will be removed from the web GUI. Make sure those VMs are not running, in order to be 110% on the safe side :p
  18. Summary of allocated RAM?

    If you don't mind getting dirty with shell scripts, take this simple example. RAM provisioned to running VMs (in GB): # qm list | fgrep running | awk '{s+=$4}END{print s/1024}' RAM provisioned to all VMs (in GB): # qm list | awk '{s+=$4}END{print s/1024}' Don't confuse this with the RAM actually...
  19. Question about corosync / totem ip address

    Hello again! IIRC, the address of the first installed server in the cluster is listed in corosync.conf, in the totem section (more precisely: the server where the Proxmox cluster was created). When is that address used, and how can it be changed? Because I will need to reinstall that server...
  20. Question about VM migration when using local zfs

    Hello! Two migration scenarios; source and destination servers are using local ZFS storage, let's call it local-zfs. 1st scenario (with preparation): 1. Set up replication to the destination, schedule a manual replication, and wait until it completes (only used space is copied). 2. Migrate, which will copy the disk delta...
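The "fstrim -av" advice from results 8 and 16 can be made cron-based inside the guest, as one poster suggests. A minimal config sketch (the schedule, log path, and fstrim location are assumptions, and the virtual disk must have the discard option enabled):

```shell
# /etc/cron.d/fstrim (inside the guest) -- hypothetical schedule.
# -a trims all mounted filesystems that support it, -v prints what was trimmed.
# Runs every Sunday at 03:00.
0 3 * * 0  root  /sbin/fstrim -av >> /var/log/fstrim.log 2>&1
```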
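The cleanup advice in result 1 (deleting a leftover volume with zfs commands in the console) can be sketched as follows. This is a hedged dry-run sketch, not the thread's exact procedure: the pool and volume names are hypothetical, and `DRYRUN=echo` makes the script print the commands instead of executing them.

```shell
# Hypothetical names -- replace with your own pool/volume.
# DRYRUN=echo prints each command instead of running it; remove it only
# when you are 100% sure the volume is no longer needed (destroy is irreversible).
DRYRUN=echo
POOL=rpool/data
VOL=vm-100-disk-1

# 1. List all zvols and check that the volume is not referenced by any VM config:
$DRYRUN zfs list -t volume -o name,volsize,used
$DRYRUN grep -r "$VOL" /etc/pve/qemu-server/

# 2. Only then destroy it:
$DRYRUN zfs destroy "$POOL/$VOL"
```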
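The orphaned-VM cleanup from result 17 (move, don't delete, the stale conf) can be sketched like this. The VMID is hypothetical and `DRYRUN=echo` keeps it a dry run; drop it only after confirming the VM is not running.

```shell
# Hypothetical VMID; DRYRUN=echo prints the commands instead of executing them.
DRYRUN=echo
VMID=9001

# Make sure the VM is not running before touching its config:
$DRYRUN qm status "$VMID"

# Move (rather than delete) the stale config; the VM disappears from the
# web GUI once the file is out of /etc/pve/qemu-server.
$DRYRUN mkdir -p /root/orphaned-confs
$DRYRUN mv "/etc/pve/qemu-server/$VMID.conf" /root/orphaned-confs/
```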
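The RAM-summary one-liners from result 18 can be checked offline by stubbing `qm list` with hypothetical sample data (the `qm_list` function and its rows are made up for illustration; memory is column 4, in MB):

```shell
# Stub standing in for `qm list`. Columns: VMID NAME STATUS MEM(MB) BOOTDISK(GB) PID
qm_list() {
  printf '%s\n' \
    '100 web1   running 4096 32.00 1234' \
    '101 db1    stopped 8192 64.00 0' \
    '102 cache1 running 2048 16.00 5678'
}

# RAM provisioned to running VMs, in GB:
qm_list | grep running | awk '{s+=$4} END {print s/1024}'   # 6

# RAM provisioned to all VMs, in GB:
qm_list | awk '{s+=$4} END {print s/1024}'                  # 14
```

On a real node, replace `qm_list` with `qm list`; as the post notes, this is provisioned RAM, not what the guests actually use.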

