Search results

  1. ZFS: raid disks

It's normal, because part1-2 are used for the EFI system partition and boot, respectively (if I remember correctly). And it's better to have a partition table anyway to avoid "accidents" with bare disks.
  2. pveproxy not listening on IPv6, only IPv4

    SLAAC provides a dynamic address. At least it used to, in that environment. Now they seem to provide a dynamically assigned, but static EUI-64 address for the servers (makes a lot more sense, honestly). I can add that to the hosts file, but see my other concerns. Plus I see the whole fiddling...
  3. pveproxy not listening on IPv6, only IPv4

    The systems are dual stack, not IPv6-only. If I add the v6 address as well, won't it create confusion in the cluster communication and other things? On the other hand, the HNs use dynamic v6 addresses via SLAAC. This is a technical limitation in that system. So I can't just add an address...
  4. pveproxy not listening on IPv6, only IPv4

    I'd like to access my PVE servers on their IPv6 addresses in a DC with native IPv6 available. Currently it's impossible as the pveproxy service is not listening on IPv6. The systems have proper dual stack setup, default (high) IPv6 preferences. Could it be fixed so the pveproxy service listens...
  5. Separate loadavg for individual containers

Yeah, my bad, it's a separate package. And yes, my concern is also that people seem to be infatuated with load values and make (mostly false) assumptions based on them.
  6. Separate loadavg for individual containers

I can't believe my eyes. This "feature" made its way into upstream. Yes, loadavg is not a very useful metric, but it still gives a vague idea of the real load, and practically each and every monitoring framework probes these values. I think it's going to get in fairly quickly as the Proxmox team...
  7. Installation on DL580 G7 with HBA H220

    That's a long thread, what was it specifically that helped your case?
  8. Move a VM without vzdump

    Manual move: just create a vm on the target node/storage with identical specs. Then you can stop the source vm and dd over the block device contents through ssh.
  9. Move a VM without vzdump

    It looks like the restore process creates a target block device with smaller size than required. What is your target storage? If the source archive is not corrupted, this might be a qmrestore bug. It's also possible to copy VM disks without using vzdump, I've done it many times. But then you...
  10. [SOLVED] ZFS Raid

    Hm, you're right. IDK why I thought of mirroring the stripe set. A similar procedure is followed when replacing a disk in a stripe of mirrors. I guess the example you posted was OP's request anyway...
  11. [SOLVED] ZFS Raid

    What do you mean exactly and how is it related to a stripe set created by mistake instead of a mirror?
  12. [SOLVED] ZFS Raid

    Mirrored stripes (~RAID01) are not supported in ZFS. Your only option is re-creating the pool.
  13. proxmox ve best network configuration

    I thought OP's idea was not to use separate bridges per VLAN if possible, but I might have misunderstood him. Certainly that's the other common solution besides OVS.
  14. HW raid-6 to ZFS Raid1

    They differ in speed, depending on the workload (yeah, again)... More independent spindles = more iops; more stripe sets = more sequential speed. And their various mixes, of course. You can add more than 2 disks to a mirror, if that's your thing, it can increase your random reads too, as reads...
  15. HW raid-6 to ZFS Raid1

    Your alternative would be raidz2 which is similar to raid6, with good sequential read/write, but your read iops might suffer (until you hit the arc). You could be good to go, though, with that kind of workload.
  16. HW raid-6 to ZFS Raid1

Average web sites are mostly random read (but of course it depends), so you would benefit greatly from a large ARC. L2ARC might or might not be worth it; I tend to see low utilisation with them. If you can afford it, put more RAM in the server for the ARC instead. I'd suggest going with striped mirror...
  17. HW raid-6 to ZFS Raid1

That's not really accurate. The general iowait the kernel shows you is not necessarily tied to your actual workload; most often it has nothing to do with it (I mean, they are connected, but not directly, because of coalescing, the queuing discipline and depth, etc.). It's a general measure...
  18. proxmox ve best network configuration

    Yes, of course. But you need to take care of setting up the proper VLAN on your interfaces in each VM/CT (you can set it on the web GUI, too, IIRC).
  19. HW raid-6 to ZFS Raid1

    Striped RAID1 vdevs x3 (~RAID10) and SSD slog + L2ARC on SSD vs. HWRAID RAID6 with a relatively small 512 MB cache. Why would the former be objectively slower? For what type of workload? What is the target/acceptable IOPS? The only thing to be wary of usually is the higher IOWAIT with ZFS, when...
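Items 13 and 18 discuss avoiding one bridge per VLAN by tagging VLANs per VM instead. A minimal sketch of a single VLAN-aware bridge in /etc/network/interfaces, assuming ifupdown2-style syntax; the address, gateway, and physical interface name are placeholders:

```
auto vmbr0
iface vmbr0 inet static
        address 192.0.2.10/24
        gateway 192.0.2.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094
```

With this setup, each VM/CT NIC carries its own VLAN tag (settable per vNIC, including via the web GUI), rather than attaching to a dedicated per-VLAN bridge.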
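The manual move described in item 8 (create an identically sized target disk, stop the source VM, then dd the block device contents over ssh) can be sketched as below. This is a local demonstration only, with sparse files standing in for the zvols/LVs; the paths, sizes, and the ssh form shown in the comment are hypothetical placeholders, not a tested recipe.

```shell
# Over the network the idea is (hostnames and zvol paths are placeholders):
#   dd if=/dev/zvol/rpool/data/vm-100-disk-0 bs=1M status=progress \
#     | ssh target-node 'dd of=/dev/zvol/rpool/data/vm-100-disk-0 bs=1M'
# The target device must be at least as large as the source, and the
# source VM must be stopped so the image is consistent.

# Local stand-ins for the source and target block devices:
truncate -s 8M /tmp/source-disk.raw
truncate -s 8M /tmp/target-disk.raw

# Fill the "source disk" with data, then copy it block-for-block:
dd if=/dev/urandom of=/tmp/source-disk.raw bs=1M count=8 conv=notrunc status=none
dd if=/tmp/source-disk.raw of=/tmp/target-disk.raw bs=1M status=none

# Verify the copy is bit-identical:
cmp /tmp/source-disk.raw /tmp/target-disk.raw && echo "disks identical"
```

After the copy, the new VM boots from the target disk exactly as the old one did, since the contents are byte-for-byte identical.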

