Search results

  1. OVH how can I add IPv6 through vRACK vmbr1

    Still trying to get this to work.
    Node1 IPv4: 51.195.234.xxx/32, IPv6 Block: 2001:41d0:802:4e00::/56, IPv6 Gateway: fe80::1
    Node2 IPv4: 51.195.235.xxx/32, IPv6 Block: 2001:41d0:802:4h00::/56, IPv6 Gateway: fe80::1
    Inside Node1 the network configuration looks like this: auto lo iface lo inet...
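    A routed setup along these lines is one way to attach such a block: a minimal sketch, assuming a Debian-style /etc/network/interfaces, that vmbr1 is already defined as the vRACK bridge, and that ::1 from the (illustrative) block is free to use.

        # appended to the existing vmbr1 definition; addresses illustrative
        iface vmbr1 inet6 static
                address 2001:41d0:802:4e00::1/56
                # fe80::1 is link-local, so the default route must name the device
                post-up ip -6 route add default via fe80::1 dev vmbr1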
  2. CPU limit is being ignored?

    As you can see, the CPU should be capped at 75% (3 CPU limit), but it constantly goes above this. If I manually update this setting again (changing 3 to 2 and then back to 3 again) then it starts taking effect again. I have seen this happen a few times now. Is there a bug that causes this? I...
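    For reference, the limit in question is the per-VM cpulimit; a hedged example, assuming a hypothetical VM ID of 100:

        # cap the VM at 3 cores' worth of CPU time (the setting discussed above)
        qm set 100 --cpulimit 3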
  3. swappiness value is being ignored (100% RAM being used)

    Yes, it's in the first post:

        # Set to use 10GB Min
        options zfs zfs_arc_min=10737418240
        # Set to use 20GB Max
        options zfs zfs_arc_max=21474836480
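    One caveat worth noting: on a stock Proxmox install these options usually live in /etc/modprobe.d/zfs.conf, and with ZFS on root the initramfs has to be rebuilt before they take effect. Assuming that layout:

        update-initramfs -u -k all   # rebuild so the new ARC limits apply at boot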
  4. swappiness value is being ignored (100% RAM being used)

    It's been at 100 for a while now and it doesn't look like it's doing anything. What is wrong with swapping nowadays with NVMe drives? It swaps memory in/out very fast from what I can see.
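    For completeness, the usual way to set and persist the value under discussion; a sketch, assuming sysctl defaults are read from /etc/sysctl.conf on this node:

        sysctl vm.swappiness=100                        # apply immediately
        echo 'vm.swappiness = 100' >> /etc/sysctl.conf  # persist across reboots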
  5. swappiness value is being ignored (100% RAM being used)

    I forgot to add my ZFS & KSM settings in the first post; I've updated it now. I don't know if what you laid out is the case, but it sounds about right to me. This is the first time I have had this issue and it's with my ZFS node. Are there any workarounds if this is the correct design? I need free...
  6. swappiness value is being ignored (100% RAM being used)

    Swap is not kicking in until the server reaches 100% RAM. I am not sure why, but my Proxmox node is acting as if the swappiness value is set to 0.

        cat /proc/sys/vm/swappiness
        10
        free -g
                      total   used   free   shared   buff/cache   available
        Mem:            503    498...
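    A detail that can matter on a ZFS node: free counts the ARC as "used" rather than as cache, so a quick check of how much of that is really ARC may help; assuming ZFS is loaded:

        grep -w size /proc/spl/kstat/zfs/arcstats   # current ARC size in bytes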
  7. OVH how can I add IPv6 through vRACK vmbr1

    I use the following configuration connected to vmbr1 to get IPv4 working through the OVH vRACK network:

        version: 2
        ethernets:
          eth0:
            addresses:
              - 51.195.1xx.89/28
              - 2001:41d0:802:8xxx::3/128
            gateway4: 51.195.1xx.94
            gateway6...
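    On newer netplan the gateway6 key is deprecated in favour of a routes entry, and a link-local gateway needs on-link set. A sketch of the fragment that would sit under the interface, assuming fe80::1 as given earlier in these results:

        routes:
          - to: ::/0
            via: fe80::1
            on-link: true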
  8. Two ZFS pools storage same name different block size?

    Proxmox lets each ZFS pool specify the ashift, so why is it designed not to let you set the block size per pool too?
  9. ZFS optimal block size RAID 0 3 disks?

    The drives are SAMSUNG MZQLB1T9HAJR-00007 and I do not use LXC. My current setup (RAIDZ1) is using way too much CPU (z_wr_iss), so I want to try RAID-0 just for performance, with PBS backups every few hours. Downtime is not too big of a deal for me.
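    Creating the striped pool itself is straightforward; a sketch, assuming a hypothetical pool name 'tank' and placeholder serials in the /dev/disk/by-id paths:

        # three top-level vdevs with no redundancy = RAID-0 striping
        zpool create -o ashift=12 tank \
            /dev/disk/by-id/nvme-SAMSUNG_MZQLB1T9HAJR-00007_SERIAL1 \
            /dev/disk/by-id/nvme-SAMSUNG_MZQLB1T9HAJR-00007_SERIAL2 \
            /dev/disk/by-id/nvme-SAMSUNG_MZQLB1T9HAJR-00007_SERIAL3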
  10. ZFS optimal block size RAID 0 3 disks?

    I have been informed that with 3 NVMe drives in RAID0 using ashift=12, a 12k block size is optimal; however, as you can only use either 8k or 16k, which is the better option? Also, what would the difference be between them?
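    Whichever value is chosen, it only applies to newly created disks. A hedged example of changing it, assuming the storage is the 'zfs' entry named elsewhere in these results:

        pvesm set zfs --blocksize 16k   # affects zvols created from now on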
  11. Two ZFS pools storage same name different block size?

    Both of my ZFS storages need to have the same name; however, they also need different block size settings. How am I able to modify it so each node has its own block size setting but with the same name "zfs"? /etc/pve/storage.cfg:

        zfspool: zfs
                pool zfs
                blocksize 8k
                content...
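    Since /etc/pve/storage.cfg is cluster-wide and entry names must be unique, one commonly suggested workaround is a per-node entry restricted with the nodes option; a sketch, assuming the node names nsnode1 and nsnode2 seen later in these results, and accepting that the storage names then differ:

        zfspool: zfs-node1
                pool zfs
                blocksize 8k
                content images,rootdir
                nodes nsnode1

        zfspool: zfs-node2
                pool zfs
                blocksize 16k
                content images,rootdir
                nodes nsnode2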
  12. z_wr_iss high CPU usage and high CPU load

    Ok, thanks for your help. Will this also adjust if I just migrate the machine, rather than backup/restore?
  13. z_wr_iss high CPU usage and high CPU load

    Ok, thanks a lot for the info. Can I ask why you suggest 8k for RAID-10 and 16k for RAIDZ1? How are you calculating it? If I change this now, new virtual machines will use the new values, correct? I can backup/restore the existing ones at a later date. Also, in my last message there are...
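    One common way this calculation is reasoned about, hedged since the original advice is not quoted here: RAIDZ1 allocations are padded to a multiple of parity+1 = 2 sectors, so with ashift=12 (4k sectors) on 3 disks the per-block cost works out as:

        4k  block = 1 data + 1 parity          = 2 sectors (50% overhead)
        8k  block = 2 data + 1 parity + 1 pad  = 4 sectors (50% overhead)
        16k block = 4 data + 2 parity          = 6 sectors (33% overhead)

    Under that arithmetic 16k wastes less on a 3-disk RAIDZ1, while mirrors have no parity or padding, so 8k there simply tracks typical guest I/O sizes.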
  14. z_wr_iss high CPU usage and high CPU load

    Ok, I've hit a problem: in my cluster both ZFS storages have the same name. If I change them manually, will the cluster storage setting change it back?
  15. z_wr_iss high CPU usage and high CPU load

    No, I have not. What do you suggest for a combination of Windows/Linux VMs on SAMSUNG MZQLB1T9HAJR-00007 drives in RAIDZ (3 drives)? Also the same, but for RAID10 (4 drives).
  16. z_wr_iss high CPU usage and high CPU load

    I have created an NVMe RAIDZ ZFS pool and have noticed it uses a lot more CPU power than a similar setup using RAID-10. Cloning a 40GB template causes the server's load to skyrocket, and you can really feel the lag trying to run anything else during this process. At peaks it's using almost half...
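    Two quick ways to watch what the pool is doing during a clone, assuming standard ZFS tooling on the node:

        top -b -n 1 | grep z_wr_iss   # CPU used by the ZFS write-issue kernel threads
        zpool iostat -v 1             # per-vdev throughput, refreshed every second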
  17. arp changes causing network interruptions (point-to-point gateway)

    I have installed Proxmox on a virtual machine and am attempting nesting (a VM within the VM), but am having some issues getting the network to work correctly. Due to the host having MAC filtering, I am trying to use the main Proxmox IP 107.189.30.xxx as the gateway IP for virtual machines. .1 is the...
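    The usual guest-side shape for such a point-to-point gateway; a sketch, assuming a Debian guest, keeping the masked host address from the post, and using a TEST-NET placeholder for the guest:

        auto eth0
        iface eth0 inet static
                address 203.0.113.10/32   # hypothetical guest address
                # route to the (masked) host first, then default via it
                post-up ip route add 107.189.30.xxx dev eth0
                post-up ip route add default via 107.189.30.xxx dev eth0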
  18. MAC filtering causing issues inside virtual machines (nesting)

    How would I do that? I have read that vmbr0 has the same MAC as the main interface, but I also tried adding "hwaddress ether xx:xx:xx:xx:xx:xx" inside /etc/network/interfaces for vmbr0 with no luck.
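    Placement matters here: the directive has to sit inside the bridge's own stanza. A sketch, assuming an otherwise standard vmbr0 and a placeholder MAC:

        iface vmbr0 inet static
                # existing address/gateway lines unchanged
                hwaddress ether xx:xx:xx:xx:xx:xx   # placeholder; must be the MAC the host permits
                bridge-ports eth0
                bridge-stp off
                bridge-fd 0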
  19. Cluster not using vRACK connection to migrate data

    I can see /etc/pve/.members shows the public IPs; how do I fix this?
  20. Cluster not using vRACK connection to migrate data

    While migrating data between the two nodes, it appears they are using bond0 rather than bond1 (vRACK) to transfer the data. The following is from the main cluster node; the second node has the same changes. Both ping correctly (nsnode2 pings to 192.168.0.121 from node1 and nsnode1 pings to...
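    The knob that usually governs this is the migration network in /etc/pve/datacenter.cfg; a hedged sketch, assuming the vRACK subnet is 192.168.0.0/24 as the pings above suggest:

        migration: secure,network=192.168.0.0/24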