Search results

  1. Proxmox Cluster, ceph and VM restart takes a long time

    Hi guys. For test purposes I've set up a small cluster with two nodes and one quorum device. Everything is working pretty well, including live migration in case of a fault. Min/Max Replicas: 2 Max restart: 1 Max relocate: 10 KVM HW virtualization: enabled What I think is bad is the time that...
  2. Poor disk performances on LXC containers

    https://www.samsung.com/semiconductor/ssd/enterprise-ssd/MZQLW1T9HMJP/
  3. Poor disk performances on LXC containers

    I activated a swap partition on zfs using: zfs create -V 10G rpool/swap1 mkswap /dev/zvol/rpool/swap1 swapon /dev/zvol/rpool/swap1 Then I set swap=0 in the Proxmox container's configuration. The host is now swapping 12GB but... Performance in the containers is now acceptable... At the moment...
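    The three commands quoted in this snippet can be expanded into a fuller sketch. The pool and zvol names (rpool/swap1) and the 10G size are the poster's; the extra zvol properties are my addition, following the OpenZFS FAQ's recommendations for swap-backed zvols, and are not part of the original post:

    ```shell
    # Create a 10G zvol tuned for swap use (block size = page size,
    # cheap compression, synchronous writes, metadata-only caching).
    # Property choices follow the OpenZFS FAQ; adjust pool/size to taste.
    zfs create -V 10G -b "$(getconf PAGESIZE)" \
        -o compression=zle \
        -o sync=always \
        -o primarycache=metadata \
        -o secondarycache=none \
        rpool/swap1

    # Format the zvol as swap and enable it.
    mkswap -f /dev/zvol/rpool/swap1
    swapon /dev/zvol/rpool/swap1

    # Verify the new swap device is active.
    swapon --show
    ```

    Note that these commands need root and a running ZFS pool; to make the swap device persistent across reboots it would also have to be added to /etc/fstab.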
  4. Poor disk performances on LXC containers

    Nevermind, I found the way and activated the swap partition. Now let's see... I'm assuming this was the bottleneck... What a "wannabe" stupid issue... I will keep you informed
  5. Poor disk performances on LXC containers

    free -m shows me that there's no swap space even after "swapon -av", so I assume I don't have any swap partition... In another post on this forum I read an admin explaining that Proxmox uses an 8 GB swap by default. Am I wrong? How do I create a swap partition on zfs?
  6. Poor disk performances on LXC containers

    Ok, but free -m is not showing me any mounted swap partition...
  7. Poor disk performances on LXC containers

    Well, thanks for your support. I already tried disabling compression but I didn't see any significant improvement (the write speed decreases without zfs compression). As for the swap: I was thinking of disabling swap in the containers' configuration files, and you confirmed...
  8. Poor disk performances on LXC containers

    The test: wget -qO- wget.racing/nench.sh | bash The configuration: arch: amd64 cores: 1 cpulimit: 1 hostname: HOST_NAME memory: 1024 net0: name=eth0,bridge=vmbr1,firewall=1,gw=XXXXXXX,hwaddr=AXXXXXX,ip=XXXXX,type=veth net1...
  9. Poor disk performances on LXC containers

    So, why is container performance poor?
  10. Poor disk performances on LXC containers

    In my situation, would KVM be better than LXC containers?
  11. Poor disk performances on LXC containers

    A raid0 made with zfs. And I'm using enterprise NVME disks. Performance on the host is quite acceptable; inside the lxc container it is very bad...
  12. Poor disk performances on LXC containers

    Premise: I do not have good hardware. My configuration is: 2x NVME disks configured in software RAID0 with ZFS and compression=lz4. 125 containers are running, all with the same characteristics, and they are all doing the same job. Each container seems to be getting around 10MB/s in...
  13. Proxmox Cluster with 2 nodes on OVH

    Thanks, but right now I only have two nodes. Are there any issues with creating the cluster without ceph and installing it later when I have the third node? Maybe I will not use migration, so this should not be an issue
  14. Proxmox Cluster with 2 nodes on OVH

    All, I'm stuck on the installation of two nodes (one in Canada and one in France) in a cluster. The setup went fine, and so did the upgrade, on both nodes. Now the steps I'm following are: 1- Properly configure the eth interfaces attached to the vRack 2- Create the cluster 3- Join the second...