Search results

  1. Two separated full meshes for cluster / corosync in a 3-node-cluster

    @admartinator Did you read my question above?
  2. Two separated full meshes for cluster / corosync in a 3-node-cluster

    @alexskysilk I have 8 interfaces per node (2x 25G / 2x 10G / 2x 1G / 2x 1G) and I want to avoid using a switch for ceph and cluster/corosync, as that reduces the points of failure (and there is no need for an external connection). So I want 2 separate frr routers for ceph (25G) and...
  3. Two separated full meshes for cluster / corosync in a 3-node-cluster

    I've tested every possible variation, but I can't get it to work...
  4. Two separated full meshes for cluster / corosync in a 3-node-cluster

    Maybe we can find a solution together :) I've added a second configuration (openfabric) to the nodes. Now it looks like this (node1): root@prox01:~# cat /etc/frr/frr.conf # default to using syslog. /etc/rsyslog.d/45-frr.conf places the log in # /var/log/frr/frr.log # # Note: # FRR's... (a hedged config sketch along these lines follows after the results)
  5. Two separated full meshes for cluster / corosync in a 3-node-cluster

    I have no clue how to modify the config file I posted above to create a second (separate) fabric, e.g. for IP 10.10.12.101/32...
  6. Two separated full meshes for cluster / corosync in a 3-node-cluster

    Hello guys! I'm setting up our new cluster at the moment. The cluster network is a 25 GBit full-mesh configuration between the nodes (up and running! ;-) ). To follow the KISS principle and reduce the point(s) of failure, I thought about a second mesh for corosync (with fallback over public...
  7. BackupExec (Windows VM) - Best practice Backup2Disk Storage

    Hello guys. I plan to change the hard disks of the B2D storage in our BackupExec VM. Currently this is a ZFS mirror configured on the PVE host, which is connected to the VM via a virtio block device because of problems with the virtio SCSI driver at installation time. (see...
  8. VM: Same name of disks on different storages

    Hello. I have a running VM on Proxmox VE 8 with 3 disks on 3 different storages. They all have the same (file) name, which makes it a bit confusing when you check the content. Second problem: there is no "notes" field or similar that shows the name of the corresponding VM. This could be a...
  9. KVM killed by OOM killer - Out of memory (ZFS problem?) / Proxmox 8.1.4

    It seems to work... so far... ;-) Thank you for your help!
  10. KVM killed by OOM killer - Out of memory (ZFS problem?) / Proxmox 8.1.4

    I followed the guide and added the following line (24 GB) in /etc/modprobe.d/zfs.conf: options zfs zfs_arc_max=25769803776 After a reboot and "update-initramfs -u -k all", the result is this entry in /sys/module/zfs/parameters/zfs_arc_max: 25769803776 The UI shows: Actually the RAM usage... (a condensed sketch of this procedure follows after the results)
  11. KVM killed by OOM killer - Out of memory (ZFS problem?) / Proxmox 8.1.4

    How do I limit the size of memory for ZFS, e.g. to 24 GB?
  12. KVM killed by OOM killer - Out of memory (ZFS problem?) / Proxmox 8.1.4

    Is there no control function to lower the memory usage of ZFS if there are other processes that need memory?
  13. KVM killed by OOM killer - Out of memory (ZFS problem?) / Proxmox 8.1.4

    Hello! We bought a new backup server last year. The file system is ZFS. The memory usage is always high, which seems to be normal with ZFS. The server has 64 GB RAM and there is one virtual machine with 16 GB RAM (Windows Server 2019). So after one day (since a reboot) the memory usage of...
  14. Consolidate snapshot?

    Shutting down the VM does the trick! Thank you... But I saw that krbd isn't active on our ceph storage. Can it be activated without risk?
  15. Consolidate snapshot?

    Just to be sure (and to have someone else to blame ;-) ): this is one of the VMs with snapshots I tried to remove via the GUI, which ends in an error (and a locked snapshot). I don't remember (maybe it wasn't me?), but as you can see there are changes made to the hard disk (size). Is this the cause for...
  16. Consolidate snapshot?

    The output of "rbd -p <pool> ls -l" shows all disks of the cluster, right? That means the output is (or should be) the same on all cluster nodes?!
  17. Consolidate snapshot?

    I have the same problem as the thread starter: I changed the disk from IDE to virtio (with snapshots taken before), and after the conversion I'm unable to delete the snapshots in the GUI ("disk ide0 not found" or something like that). How can I check if there are any snapshots on disk (ceph... (an rbd sketch follows after the results)
  18. Consolidate snapshot?

    Hello guys. Time to dig out an old thread... :cool: What I don't understand is how snapshots are consolidated if I manually delete them from the config file... I'm coming from Hyper-V, where the oldest disk is the "mother" of all. So if I delete any differencing disk between the first...
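
For the full-mesh/openfabric thread (results 1-6), below is a minimal sketch of what two named openfabric instances in /etc/frr/frr.conf could look like. The interface names, the NETs, and the dummy0 device for the second /32 loopback are assumptions rather than details from the thread, and whether two concurrent fabrics behave as intended is exactly what that thread is trying to establish, so treat it as a starting point, not a verified configuration.

    # /etc/frr/frr.conf on node1 - sketch only; interface names, NETs and the
    # dummy0 device are assumptions. The /32 loopback addresses (e.g.
    # 10.10.11.101 for ceph, 10.10.12.101 for corosync) are expected on
    # lo/dummy0 in /etc/network/interfaces; each node needs its own system ID
    # in the NET.
    interface ens19
     ip router openfabric ceph
    !
    interface ens20
     ip router openfabric ceph
    !
    interface ens21
     ip router openfabric corosync
    !
    interface ens22
     ip router openfabric corosync
    !
    interface lo
     ip router openfabric ceph
     openfabric passive
    !
    interface dummy0
     ip router openfabric corosync
     openfabric passive
    !
    router openfabric ceph
     net 49.0001.1111.1111.1111.00
    !
    router openfabric corosync
     net 49.0002.1111.1111.1111.00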

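Results 10-13 walk through capping the ZFS ARC so the host keeps enough free memory for its VMs. A condensed sketch of that procedure, using the 24 GiB value quoted in result 10 (24 * 1024^3 = 25769803776 bytes):

    # /etc/modprobe.d/zfs.conf - cap the ARC at 24 GiB, as in result 10
    echo "options zfs zfs_arc_max=25769803776" > /etc/modprobe.d/zfs.conf

    # rebuild the initramfs so the option is picked up at boot, then reboot
    update-initramfs -u -k all

    # verify after the reboot (should print 25769803776)
    cat /sys/module/zfs/parameters/zfs_arc_max

    # the parameter can also be written at runtime without a reboot
    echo 25769803776 > /sys/module/zfs/parameters/zfs_arc_max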
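
For the snapshot questions in results 16-17, the rbd commands below illustrate the kind of check being discussed; <pool> stays a placeholder as in result 16, and the vm-101-disk-0 image name is only an example, not taken from the thread.

    # list all RBD images in the pool with sizes, locks and snapshots
    # (the pool is cluster-wide, so the output should match on every node)
    rbd -p <pool> ls -l

    # list the snapshots of one specific VM disk (image name is an example)
    rbd snap ls <pool>/vm-101-disk-0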