Search results

  1. proxmox on arm64

    There are a couple of servers available at Hetzner (Finland region). You can also order additional hard drives and switches if needed, so you can build a cluster.
  2. Proxmox VE 8.0 released!

    Any support for ARM64 CPUs?
  3. Proxmox on aarch64 (arm64)

    We have that smaller server from Hetzner as well. Right now it runs Ubuntu, but there is a request from our dev team for a dozen VMs, so I would like to build a cluster and attach our CEPH storage.
  4. Proxmox on aarch64 (arm64)

    Not yet, but we are looking for a virtualization solution.
  5. Proxmox ulimit hell: how to really increase open files?

    You can change limits on an already running process as well: #!/usr/bin/env bash for PID in $(ps aux | grep /usr/bin/kvm | grep -v grep | awk '{ print $2 }'); do SOFT_LIMIT="1048576" HARD_LIMIT="2097152" echo "Changing the limits for PID ${PID}" prlimit...
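    The truncated script above can be sketched in full. This is a hedged reconstruction, assuming the goal is to raise the open-files limit of every running /usr/bin/kvm process; the limit values are the ones visible in the snippet, and pgrep stands in for the ps/grep pipeline:

    ```shell
    #!/usr/bin/env bash
    # Hedged reconstruction of the truncated snippet: raise the soft/hard
    # open-files limits of every running /usr/bin/kvm process in place.
    SOFT_LIMIT="1048576"
    HARD_LIMIT="2097152"
    for PID in $(pgrep -f /usr/bin/kvm); do
        echo "Changing the limits for PID ${PID}"
        # prlimit (util-linux) adjusts resource limits of a running process
        prlimit --nofile="${SOFT_LIMIT}:${HARD_LIMIT}" --pid "${PID}"
    done
    ```

    Note that limits changed this way apply only to the running processes; they are typically lost on VM restart unless the persistent configuration discussed in the other results is also in place.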
  6. Proxmox ulimit hell: how to really increase open files?

    Please try my solution (steps 3...6) from this reply. You can check the limits with this script: #!/usr/bin/env bash for PID in $(ps aux | grep /usr/bin/kvm | grep -v grep | awk '{ print $2 }'); do SOFT_LIMIT=$(cat /proc/${PID}/limits 2>/dev/null | grep "Max open files" | awk '{ print $4 }')...
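    A hedged completion of that truncated check script, printing both the soft and hard "Max open files" limits read from /proc/<PID>/limits:

    ```shell
    #!/usr/bin/env bash
    # Hedged completion of the truncated check script: report the current
    # "Max open files" soft and hard limits of every running KVM process.
    for PID in $(pgrep -f /usr/bin/kvm); do
        # In /proc/<PID>/limits the soft limit is column 4, the hard limit column 5
        SOFT_LIMIT=$(awk '/Max open files/ { print $4 }' "/proc/${PID}/limits" 2>/dev/null)
        HARD_LIMIT=$(awk '/Max open files/ { print $5 }' "/proc/${PID}/limits" 2>/dev/null)
        echo "PID ${PID}: soft=${SOFT_LIMIT} hard=${HARD_LIMIT}"
    done
    ```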
  7. Open files issue on PVE node

    The workaround is the following: 1. **/etc/sysctl.d/90-rs-proxmox.conf**: ``` # Default: 1048576 fs.nr_open = 2097152 # Default: 8388608 fs.inotify.max_queued_events = 8388608 # Default: 65536 fs.inotify.max_user_instances = 1048576 # Default: 4194304...
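    Restated as a readable sketch, using only the keys visible in the snippet (the fourth, truncated key is deliberately left out); after writing the file, load it with `sysctl --load=/etc/sysctl.d/90-rs-proxmox.conf`:

    ```
    # /etc/sysctl.d/90-rs-proxmox.conf
    # Default: 1048576
    fs.nr_open = 2097152
    # Default: 8388608
    fs.inotify.max_queued_events = 8388608
    # Default: 65536
    fs.inotify.max_user_instances = 1048576
    ```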
  8. Open files issue on PVE node

    I have the same issue when I have a DB server with 10 virtio drives per VM.
  9. changing ceph public network

    BTW, there is an official manual about this topic
  10. changing ceph public network

    @kifeo just found a draft for myself which I had created some time ago: # Migration running cluster to the new IPs ## Ceph Network overview Ceph Network overview was done [in this article][Ceph Network Configuration Reference]. Please read it before you continue with the current page...
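    For context, the setting at the heart of such a migration is `public_network` in the Ceph configuration. A minimal hedged sketch; the subnets below are placeholders, not values from the thread:

    ```
    [global]
        # hypothetical new public subnet; replace with your own
        public_network = 192.0.2.0/24
        # optional dedicated replication network
        cluster_network = 198.51.100.0/24
    ```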
  11. changing ceph public network

    It was something from that big list.
  12. changing ceph public network

    Reply to my own question: yes.
  13. changing ceph public network

    I wonder if you did any setup for OSDs as well?
  14. Cluster configuration question

    I have a similar setup: an IBM storage device presents the disks over 2 HBA adapters, and I'm using multipath to access them. With Proxmox 3.x we had clustered LVM as storage. It seems that with Proxmox 4.x clustered LVM is not available, because clvmd cannot be started (cannot work with...
