Search results

  1. Datastore garbage collector error

    Unfortunately, we also need the datastore to take backups from other sources. How can we do this? Do we need to format the datastore, start again from scratch, and split it? Ideas? Thoughts?
  2. Datastore garbage collector error

    I have a datastore that I recently started using for restic backups as well. Proxmox doesn't mind restic and works fine, but the daily garbage collection fails because the restic directory is password protected. Is there a way to avoid the problem or exclude the directory? Thanks.
  3. Cluster nodes use too much RAM

    Thank you: do you have an idea of how much ARC it needs? What is a good configuration in my scenario? (Every host has 128GB of RAM and 6 x 1TB SSDs in ZFS for data.)
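On the sizing question above: on OpenZFS the ARC may by default grow to roughly half of physical RAM (64GB on a 128GB host), which is often the "missing" memory in threads like this. A minimal sketch of capping it persistently follows; the 16 GiB value is an illustrative assumption, not a recommendation from the thread.

```shell
# Sketch, assuming a Proxmox/Debian host with root on ZFS tooling installed.
# Cap the ZFS ARC at 16 GiB (16 * 1024^3 = 17179869184 bytes); the exact
# value is a hypothetical example, tune it to your workload.
cat > /etc/modprobe.d/zfs.conf <<'EOF'
options zfs zfs_arc_max=17179869184
EOF
update-initramfs -u   # rebuild the initramfs so the limit applies at boot
```

A common rule of thumb is a few GiB of base ARC plus headroom proportional to hot data; leaving the default 50% ceiling on a 128GB VM host is usually what makes memory look scarce.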
  4. Cluster nodes use too much RAM

    OK, thanks, I know cache is good, but how can I add more VMs if the system eats all available RAM? I have 4 VMs using a total of 16GB on a host with 128GB of RAM, and I'm not entirely comfortable running at 90% RAM usage...
  5. Cluster nodes use too much RAM

    I tried this command and RAM usage dropped from 65% (82GB) to 18% (23GB)... I don't understand why the cache is so high after a while... (it's increasing again). After the command:
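The exact command is truncated out of this snippet; a plausible way to get that effect on an OpenZFS host (an assumption, not necessarily what the poster ran) is to lower the ARC ceiling at runtime:

```shell
# Hypothetical runtime example: shrink the ARC ceiling to 8 GiB.
# OpenZFS evicts cached data until the ARC fits under the new cap,
# which shows up as an immediate drop in "used" RAM.
echo $((8 * 1024 * 1024 * 1024)) > /sys/module/zfs/parameters/zfs_arc_max
```

Without a persistent limit in `/etc/modprobe.d/`, the ARC grows back toward its default ceiling, which would match the "it's increasing again" observation.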
  6. Cluster nodes use too much RAM

    [UPDATE] It's all very strange and odd: I upgraded all 3 nodes from 64GB ECC to 128GB ECC RAM. Well: with the SAME VMs on top, with the same memory usage for KVM, all 3 Proxmox hosts use another 20GB each... For example: on this node there are 4 KVM guests using a total of 16GB (2+4+4+6); before, RAM...
  7. [SOLVED] Hetzner network configuration problem...

    You can follow the guide, it's very straightforward: https://www.razva.ro/how-to-setup-proxmox-with-hetzner-vswitch-and-public-ip/ My mistake was tagging the interfaces in the VMs with the VLAN number. That's not necessary because traffic is already tagged on the host interface... The VLAN Tag field needs to...
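The fix described (tag on the host, not in the VM) typically looks like the following /etc/network/interfaces fragment. This is a sketch: interface names and VLAN bridge naming are invented examples; note that Hetzner vSwitches require MTU 1400.

```
# Hypothetical /etc/network/interfaces fragment (interface names are examples):
auto enp7s0.4000
iface enp7s0.4000 inet manual        # VLAN 4000 tagged on the host NIC

auto vmbr4000
iface vmbr4000 inet manual
    bridge-ports enp7s0.4000
    bridge-stp off
    bridge-fd 0
    mtu 1400                         # Hetzner vSwitch requires MTU 1400
# VMs attach to vmbr4000 with NO VLAN tag set on their virtual NIC.
```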
  8. Cluster nodes use too much RAM

    Tuning ZFS manually could be risky and tricky. Is there another approach that works with a network filesystem? I originally thought of Ceph, which doesn't need ZFS but handles each disk as a single OSD. But I have only 10Gb dedicated to each node for network storage, and that seems not enough. Thanks anyway.
  9. Cluster nodes use too much RAM

    It's a very simple conf:
    boot: order=scsi0;ide2;net0
    cores: 1
    ide2: none,media=cdrom
    memory: 8192
    name: web.domain.name
    net0: virtio=7E:B2:83:AE:88:69,bridge=vmbr4001,firewall=1
    numa: 0
    ostype: l26
    scsi0: DATASTORE:114/vm-114-disk-0.qcow2,size=160G
    scsihw: virtio-scsi-pci
    smbios1...
  10. Cluster nodes use too much RAM

    QEMU seems to be the culprit; these are the first lines:
    USER PID   %CPU %MEM VSZ      RSS     TTY STAT START TIME   COMMAND
    root 23934 17.3 13.0 10117580 8576248 ?   Sl   Jan03 203:37 /usr/bin/kvm -id 114
    root 27209 4.6  9.4  8048300  6194424 ?   Sl   Jan02 135:02 /usr/bin/kvm...
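A quick way to confirm how much of the host's resident memory the guests account for is to sum the RSS of all kvm processes. This one-liner is a generic sketch, not taken from the thread:

```shell
# Sum the resident set size (RSS, in KiB) of every /usr/bin/kvm process
# and print the total in GiB (1048576 KiB per GiB):
ps -C kvm -o rss= | awk '{s += $1} END {printf "%.1f GiB\n", s / 1048576}'
```

Note that a QEMU process's RSS is normally somewhat larger than the guest's configured memory, since it also covers the emulator itself, caches, and video memory.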
  11. Cluster nodes use too much RAM

          total  used  free   shared  buff/cache  available
    Mem:  62Gi   55Gi  6.3Gi  63Mi    1.3Gi       6.9Gi
    Swap: 31Gi   60Mi  31Gi
    I guess the problem could be the ballooning? Maybe I can allocate some swap disk from the OS disk? (is a...
  12. Cluster nodes use too much RAM

    Hi, I have a little project with 3 nodes, 64GB RAM each. I use Gluster on top of ZFS as the network filesystem, on dedicated NICs at 10Gb, plus another 10Gb for Corosync (and a third 10Gb for internet access). I know that ZFS uses from 4 to 8GB of RAM, but the problem is that much more RAM is missing and I...
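On a ZFS node, the usual first suspect for RAM that `free` does not report as cache is the ARC. A hedged sketch for checking it (the kstat path is standard OpenZFS; the values in the test are illustrative):

```shell
# Print the current ARC size and its ceiling, in GiB, from the kernel stats.
# Each arcstats line is "name type value"; pick the "size" and "c_max" rows.
awk '$1 == "size" || $1 == "c_max" {printf "%s: %.1f GiB\n", $1, $3 / 1073741824}' \
    /proc/spl/kstat/zfs/arcstats
```

If `size` is tens of GiB and `c_max` is about half of RAM, the "missing" memory is simply the ARC at its default ceiling.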
  13. [SOLVED] Hetzner network configuration problem...

    The guide is fine, I made a mistake; it's working perfectly now...
  14. [SOLVED] Hetzner network configuration problem...

    I also tried this guide without much luck. https://www.razva.ro/how-to-setup-proxmox-with-hetzner-vswitch-and-public-ip/ My configuration seems good, but still: OPNsense is not able to ping the public gateway of the range assigned to the vSwitch on tag 4000...
  15. [SOLVED] Hetzner network configuration problem...

    I have 3 PX62 servers configured with:
    - 2 LAN NICs (used only for local traffic, no internet: the storage network and the Corosync/Proxmox cluster)
    - 1 WAN NIC for internet access with a static IP (on each server)
    I created a new vSwitch, bound it to the 3 servers, and purchased a /27 of IPs...
  16. GlusterFS 3 node, multiple disks per node...

    Hi, I have a 3-node environment with 6 x 1TB SSDs on each node. I see that many users recommend running GlusterFS on top of ZFS to build a single brick on each server, while the official guide seems to allow creating the volume from multiple bricks per node... What do you think is the most correct...
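For comparison, the single-brick-per-node layout that users recommend would look roughly like this; hostnames, the pool path, and the volume name are made-up examples, and this is a sketch rather than the official procedure:

```shell
# One ZFS-backed brick per node, replicated 3 ways across the cluster.
# With 6 SSDs per node, the ZFS pool provides redundancy inside a node,
# while Gluster replicates between nodes. Run from node1:
gluster peer probe node2
gluster peer probe node3
gluster volume create gv0 replica 3 \
    node1:/tank/brick node2:/tank/brick node3:/tank/brick
gluster volume start gv0
```

The multiple-bricks-per-node variant skips ZFS and lists each disk's mount point as its own brick, at the cost of managing per-disk failures in Gluster instead of in the pool.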
  17. Ceph on single 10GB

    OK, now I have the project all set: 3 nodes with 2 x 6-core Xeon E-2276G, 64GB RAM each, 2 NVMe drives for the Proxmox OS and 6 x 1TB datacenter SSDs as storage (each). How can I test the workload on this? Are there some tools I should use? Currently I have the Proxmox cluster on a dedicated 10Gb link and a free 10Gb...
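On the question of testing tools: `fio` is a common choice for raw storage benchmarks, and Ceph ships `rados bench` for pool-level tests. A hedged example follows; the file path, pool name, and sizes are placeholders, not values from the thread:

```shell
# 4k random writes against a datastore path, 60 seconds, direct I/O:
fio --name=randwrite --filename=/mnt/pve/store/fio.test --size=4G \
    --rw=randwrite --bs=4k --iodepth=32 --ioengine=libaio --direct=1 \
    --runtime=60 --time_based --group_reporting

# Or, for a Ceph pool (requires an existing pool, here called "bench"):
rados bench -p bench 60 write --no-cleanup
```

Running these from several nodes at once gives a better picture of what the single 10Gb link sustains under cluster-wide load.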
  18. Ceph on single 10GB

    Well, I managed to upgrade to 2 x 10Gb LAN... In your opinion, does Ceph run well on its own dedicated 10Gb NIC? We have 3 Proxmox nodes, each with a Xeon, 64GB RAM, and 6 x 1TB datacenter-grade SSDs...