Search results

  1. Which VM CPU model to choose to have same CPU id string even after PVE upgrade?

    Nice move, never tried it, but I'll consider it. Really good approach when nothing else works! What's your experience: is there a big speed overhead because of that?
  2. Which VM CPU model to choose to have same CPU id string even after PVE upgrade?

    I have to politely admit this is quite old software, but we have to use it nevertheless. And it doesn't tell the difference between a VM and a physical host. These guys bind the license to both hardware and software, so the OS version and all packages have to stay intact, too. You can imagine how happy we are to...
  3. Which VM CPU model to choose to have same CPU id string even after PVE upgrade?

    Hello, I have been running a KVM VM with some licensed software in it (Debian, with the software running inside). When the VM was set up, the CPU was set to kvm64 and never changed. The software binds its license to the hardware list of the PC (that is, the VM), and the problem is, if I somehow... (Pinning the CPU model this way is sketched after this list.)
  4. PVE 6.1 (ZFS setup) keep killing disks

    Yes, I can see the symlinks: But actually I worked around this (to be on the safe side) by adding another similarly sized disk to the VM and restoring the files that belong on the zvol from backup, so this list includes that zvol too. What's strange is that I have 2 HDDs in a mirror, but I can boot from the 2nd disk...
  5. PVE 6.1 (ZFS setup) keep killing disks

    No and yes. Since we replaced the board and one of the HDDs, the server seems to run well, and the mirror rebuild finished well too (that was last night, actually, so not long ago), so I can say "no problems so far". But last night's backup job gave me strange output, and I can reproduce it manually by...
  6. PVE 6.1 (ZFS setup) keep killing disks

    No, not at all. Only the SATA3 ports the Intel chipset provides. This is a blade server; it has no space for any HW cards or buses.
  7. PVE 6.1 (ZFS setup) keep killing disks

    Yes, here it is: blade server SYS-5039MC-H12TRF, 2 Exos 7E8 4TB 512e SATA disks, 1 Samsung M.2 970 Pro, Xeon E-2288G, 64 GB of RAM (onboard IPMI). I can check the motherboard devices, but there is nothing custom. The M.2 disk is used as a separate disk (its own ZFS pool), not as a cache for the mirror made of the HDDs.
  8. PVE 6.1 (ZFS setup) keep killing disks

    I'd like to believe it's just me having no luck, but I know other clients are happy with the same disks. I doubt the hoster collects bad-looking disks just to supply them to my server :) ZFS did its best to predict problems, and it warns me each time. First that there are some checksum errors on one...
  9. PVE 6.1 (ZFS setup) keep killing disks

    And yes, the server has no HW RAID, so all I can do is set it up again as an MD mirror, but I'd like to try the ZFS replication feature to send the VM state to another server (sketched after this list). We planned to rent several such blades (the same config), so ZFS replication could be good to try and use, but as long as we are stuck with this issue...
  10. PVE 6.1 (ZFS setup) keep killing disks

    A bad batch (maybe) was the reason they replaced the disks so many times so readily (they are nice, really), but they told me many other clients use the same disks from the same batch without any problems. They also tried to install disks of the same series but from a different batch to overcome that...
  11. PVE 6.1 (ZFS setup) keep killing disks

    Hello, I rented a blade server with two 3.5" 4 TB SATA disks and added both as a ZFS mirror. No problem was seen initially, but soon one of the drives failed (physically), so the hosting company replaced it promptly and I had the pool resilvered. After 3 days the second disk died physically, and again I have...
  12. 1 SSD and 2 HDD - best storage setup?

    It is a bit unusual to configure such a host, but I did the server setup (ZFS tends to be self-minded, somewhat like "OK, I know how to do that, don't mind me", and it just works, not to mention that it should be tuned for the load). If you permit, I'd ask a question about an idea I heard one day: if I...
  13. 1 SSD and 2 HDD - best storage setup?

    :) I suspect ZFS on top of the SSD (or the SSD under ZFS) will be much slower than the SSD itself, even though the SSD is an NVMe one. You see, the ordinary way for me to use an SSD would be to create LVM, create a partition on it, format it as XFS (OK, let's give it a try instead of ext4), then mount it in PVE and add this... (That layout is sketched after this list.)
  14. 1 SSD and 2 HDD - best storage setup?

    Thank you very much. This was a point I missed (and it was quite a problem for me when considering ZFS). I've heard many times that while ZFS stores disks as raw volumes, it is more efficient because it has native snapshot/compression support. One day I'd like to check whether deduplication is such a great thing when...
  15. 1 SSD and 2 HDD - best storage setup?

    The funny thing is that I prefer to keep VM data in .qcow2 files, not in LVM-thin or (not tried yet) ZFS. The reason is simple, and to me it matters more than extra layers for the data: I can easily copy these files around, even to a different host server, even to an external HDD, and each will still be a "file", not...
  16. 1 SSD and 2 HDD - best storage setup?

    Thank you for your recommendations! You see, I'd prefer MD, as it is possible to recover it easily (while ZFS recovery is hard to understand if something goes wrong). Sadly, the PVE setup won't handle an MD mirror out of the box, so it needs to be set up manually (or by setting up Debian and installing PVE as a package)...
  17. 1 SSD and 2 HDD - best storage setup?

    I'm only saying that PVE hasn't migrated to XFS so far (and doesn't appear about to), and I suspect they are well aware of any FS "goodness". OK, so anyway, you vote for MD, not ZFS?
  18. 1 SSD and 2 HDD - best storage setup?

    XFS, even though Debian tends to use ext3/4?
  19. 1 SSD and 2 HDD - best storage setup?

    The NVMe is large enough to hold all the VM data; the problem is that it is a single drive. What would you recommend as a robust mirror technology: ZFS, MD, or maybe something else?
  20. 1 SSD and 2 HDD - best storage setup?

    Yes, I think this is the best approach. The problem is how the mirror should be created out of the HDDs. I can: 1. Use MD. Time-consuming, but a proven working solution. Not recommended by PVE itself, nor supported by the ISO-based installer. 2. Use ZFS. Rumors claim I should put boot on a non-ZFS... (Both options are sketched just below.)
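
For the CPU model question in results 1-3, the usual answer is to pin an explicit CPU model in the VM configuration so the guest-visible CPU ID string stays the same across PVE host upgrades. A minimal sketch, assuming VM ID 100 (only an example) and the kvm64 model mentioned in the thread:

    # Pin the virtual CPU model for VM 100 so the guest keeps seeing kvm64.
    qm set 100 --cpu kvm64

    # The same setting as it appears in /etc/pve/qemu-server/100.conf:
    #   cpu: kvm64

    # A changed CPU model only takes effect after a full stop/start of the VM.
    qm stop 100 && qm start 100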
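
The "ZFS replicate" idea in result 9 can be done with Proxmox's built-in storage replication between cluster nodes, or with plain zfs send/receive between standalone hosts. A rough sketch of the plain ZFS variant, assuming a dataset rpool/data/vm-100-disk-0 and a reachable host named backup-node (both names are made up for illustration):

    # Snapshot the VM disk dataset and ship it to the other host.
    zfs snapshot rpool/data/vm-100-disk-0@repl1
    zfs send rpool/data/vm-100-disk-0@repl1 | ssh backup-node zfs receive -F rpool/data/vm-100-disk-0

    # Later runs only need to send the delta between snapshots.
    zfs snapshot rpool/data/vm-100-disk-0@repl2
    zfs send -i @repl1 rpool/data/vm-100-disk-0@repl2 | ssh backup-node zfs receive rpool/data/vm-100-disk-0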
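
Result 13 describes the "plain" alternative to putting ZFS on the SSD: LVM with an XFS filesystem, mounted and then added to PVE as a directory storage for .qcow2 images. A sketch of that layout, assuming the SSD is /dev/nvme0n1; the names vg_ssd, lv_vmdata, /mnt/ssd and ssd-dir are placeholders:

    # LVM on top of the NVMe SSD (device and names are illustrative only).
    pvcreate /dev/nvme0n1
    vgcreate vg_ssd /dev/nvme0n1
    lvcreate -n lv_vmdata -l 100%FREE vg_ssd

    # XFS instead of ext4, as discussed in the thread, then mount it.
    mkfs.xfs /dev/vg_ssd/lv_vmdata
    mkdir -p /mnt/ssd
    mount /dev/vg_ssd/lv_vmdata /mnt/ssd   # plus an /etc/fstab entry for reboots

    # Register the mountpoint as a directory storage in PVE for disk images.
    pvesm add dir ssd-dir --path /mnt/ssd --content images,rootdir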
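
For the MD-versus-ZFS mirror choice in results 16, 19 and 20, both options come down to a single command once the disks are identified. A sketch, assuming the two HDDs appear as /dev/sdb and /dev/sdc (placeholders; stable /dev/disk/by-id paths are generally preferable on real hardware):

    # Option 1: classic mdadm RAID1 mirror; not offered by the PVE ISO
    # installer, so it is usually built on top of a plain Debian install.
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc

    # Option 2: ZFS mirror pool, supported natively by the PVE installer and GUI.
    zpool create -o ashift=12 hddpool mirror /dev/sdb /dev/sdc

    # Check the mirror state afterwards.
    cat /proc/mdstat       # MD
    zpool status hddpool   # ZFS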
