Search results

  1.

    rpool data size

    It's odd. The usage of local-zfs keeps changing depending on my actions. Yesterday I purged a 5 TB VM and it showed only 13 TiB as the size of the storage space. Now it shows a size of 20.4 TiB. zfs list -o space,refquota,quota,volsize NAME AVAIL USED USEDSNAP USEDDS...
  2.

    rpool data size

    When I created a ZFS RAID, Proxmox allocated 13T out of 28T to data (i.e. local-zfs). How do I increase the size allocated to data? zfs list NAME USED AVAIL REFER MOUNTPOINT rpool 19.7T 805G 151K /rpool rpool/ROOT...
  3.

    Routing with IPv6

    Yep. Looks like I need to learn how to build an IPv4 proxy server.
  4.

    Routing with IPv6

    Difficult to test routing in a live environment. The ISP routes to a pre-defined address and that's it. I finally got it to work by splitting the /48 on the FW/Router in two using /56 networks, one for WAN and one for LAN, and then splitting the LAN into /64s for RADVD/DHCPv6 with Managed RA flags. I used to...
  5.

    Routing with IPv6

    After a long struggle with RADVD I finally got Proxmox LXC containers to receive IPs (and even DNS) from the pfSense firewall on the WAN side. Woohoo! Problem is that RADVD is REALLY picky: it refuses to work unless you offer it a /64 network (and it really does not like my /48 network.) I still can't...
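For reference, a minimal radvd.conf illustrating the /64 requirement described above; the interface name vmbr0 and the 2001:db8 prefix are placeholders, not values from the post:

```
# /etc/radvd.conf - minimal sketch; vmbr0 and 2001:db8:0:1::/64 are placeholders
interface vmbr0 {
    AdvSendAdvert on;
    AdvManagedFlag on;        # Managed RA flag: clients fetch addresses via DHCPv6
    AdvOtherConfigFlag on;    # Other flag: clients fetch DNS etc. via DHCPv6
    prefix 2001:db8:0:1::/64 {
        AdvOnLink on;
        AdvAutonomous off;    # no SLAAC; DHCPv6 hands out the addresses
    };
};
```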
  6.

    Template request

    The trick is to get IPv6 forwarding to work on the LXC container. https://techoverflow.net/2018/06/06/routing-public-ipv6-addresses-to-your-lxc-lxd-containers/
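The linked technique relies on the host forwarding IPv6 and proxying NDP for the container addresses; a sketch of the sysctl side (eth0 is an assumed upstream interface name, not taken from the post):

```
# /etc/sysctl.d/60-lxc-ipv6.conf - sketch; eth0 is a placeholder
net.ipv6.conf.all.forwarding = 1     # route IPv6 between host and containers
net.ipv6.conf.eth0.proxy_ndp = 1     # answer neighbor solicitations for container IPs
```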
  7.

    Template request

    Found an interesting tutorial on RADVD. Looks really easy. https://necromuralist.github.io/posts/the-linux-ipv6-router-advertisement-daemon-radvd/ Who wants to build an LXC template? :) Sam
  8.

    Template request

    Template request: a DHCP server LXC template for IPv6. It would be nice to have one of these to automatically provide addresses to new containers.
  9.

    [SOLVED] Console Timeout

    It's always IPv6 when something times out.
  10.

    Heavy writes in VM crash host - ZFS - out of memory (plenty of memory)

    I think using ZFS with Proxmox is just begging for trouble. How will you manage all the different systems that want to control your RAM? Will "VM dynamic ballooning RAM" overrule "ZFS ARC dynamic memory" handling? What if you are using nested virtualization? What node controls memory usage at...
  11.

    Debian 10 template

    As a side note, the memory issues were caused by a combination of ZFS and ballooning. Both of those techs compete with each other for available memory, and even when everything seems to work out you end up having this: kernel: Memory cgroup out of memory: Kill process 13694 (systemd-journal) score...
  12.

    Proxmox 6 on home NAS - SLOOOWWWW

    Turns out this had nothing to do with ZFS. It's an IRQ/ACPI problem in the old motherboard BIOS that causes the "kernel: irq 18: nobody cared (try booting with the "irqpoll" option)" error message. After that the HDDs just slow to a crawl. Since there are no updates to the BIOS my attempt to turn this old...
  13.

    Proxmox 6.0 - Memory Leak?

    Probably ZFS. Try arcstat.
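arcstat ships with the ZFS userland tools, so a quick check of whether the ARC explains the growing memory usage might look like:

```
# Sample the ARC every 5 seconds; "arcsz" is the current ARC size and
# "c" its target size - a large arcsz accounts for "missing" host RAM.
arcstat 5
```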
  14.

    Memory issues

    I have been trying to figure this one out by myself but I think I need some help with translation. One of my nested Proxmox servers that has 4 LXC servers running (each using ~1 GB RAM) claims that it is using 14 GB out of 16 GB. What is the rest of the memory being used for? root@vh0:~# cat /proc/meminfo...
  15.

    Proxmox 6 on home NAS - SLOOOWWWW

    Hmm. Speed is acceptable now, but I think I somehow managed to ruin systemd-timesyncd because I can't log in anymore. How do I "reset" the Proxmox 6 date without installing NTP?
  16.

    Proxmox 6 on home NAS - SLOOOWWWW

    The problem appears to be zfs_arc_max. Every boot seems to reset the amount of memory I have reserved for ZFS to 0. This command sets the memory available to ZFS back to 8 GB and speeds up the storage considerably. echo 8000000000 > /sys/module/zfs/parameters/zfs_arc_max
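Writing to /sys/module/zfs/parameters/zfs_arc_max only lasts until the next reboot, which matches the symptom above. One way to make the limit persistent (a sketch, assuming the usual Debian/Proxmox module-option mechanism) is:

```
# /etc/modprobe.d/zfs.conf - cap the ARC at ~8 GB across reboots
options zfs zfs_arc_max=8000000000
```

followed by update-initramfs -u so the option is included in the boot image.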
  17.

    Proxmox 6 on home NAS - SLOOOWWWW

    Stopping the ksmtuned service, running the echo command, and "zfs set sync=disabled rpool" don't seem to help any after the system has been up a couple of hours. ==> Me scratching my head!
  18.

    Proxmox 6 on home NAS - SLOOOWWWW

    It's strange. After a reboot the data transfer speed to the ZFS storage is an acceptable 80 MB/s, but after a couple of hours it drops to 2 MB/s. I'm not quite sure if it's a storage or a network issue.
  19.

    Proxmox 6 on home NAS - SLOOOWWWW

    I gave 8 GB of the 16 to ZFS.