Recent content by javii

  1. HELP NEEDED: Booting from Hard Disk...

    Hi, I am experiencing the same issue, perhaps because the server has been upgraded to Proxmox 7 but hasn't been rebooted yet. I notice you are running an old kernel: proxmox-ve: 7.0-2 (running kernel: 5.4.128-1-pve). Did you solve the issue, maybe by rebooting the server? Regards,
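
    For anyone checking the same thing, a quick way to compare the running kernel with the newest installed one (the grep pattern is just illustrative):

        # kernel the host is running right now
        uname -r
        # Proxmox package versions, including the installed pve-kernel packages
        pveversion -v | grep -E 'proxmox-ve|pve-kernel'
        # if the newest installed pve-kernel is newer than `uname -r`, a reboot is still pending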

  2. [SOLVED] Error: HTTP Error 401 Unauthorized: permission check failed

    Hi, I have managed to add PBS to Proxmox VE:

        # cat /etc/pve/storage.cfg
        dir: local
                path /var/lib/vz
                content iso,vztmpl,backup
        zfspool: local-zfs
                pool rpool/data
                content images,rootdir
                sparse 1
        pbs: datastore1
                datastore datastore1...
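
    For reference, a complete pbs entry in storage.cfg usually carries a few more lines than the truncated preview shows; a minimal sketch with placeholder server address, user and fingerprint (the password itself is stored separately under /etc/pve/priv/):

        pbs: datastore1
                datastore datastore1
                server 192.0.2.10
                content backup
                username backup@pbs
                fingerprint <server certificate fingerprint>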

  3. "ceph config dump" command empty

    Hi, I noticed the following command shows nothing:

        root@xxxxxxxx:~# ceph config dump
        WHO  MASK  LEVEL  OPTION  VALUE  RO

    Any idea?

        root@xxxxxxx:~# cat /etc/ceph/ceph.conf
        [global]
        auth_client_required = cephx
        auth_cluster_required = cephx
        auth_service_required = cephx...
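
    Worth noting: "ceph config dump" only lists options stored in the monitors' central configuration database, so settings that live purely in /etc/ceph/ceph.conf will not appear there. On a Mimic/Nautilus-or-later cluster they can be imported, roughly like this:

        # import plain ceph.conf options into the mon config database
        ceph config assimilate-conf -i /etc/ceph/ceph.conf
        # they should now show up
        ceph config dump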

  4. Poor performance with Ceph

    Hehe, it would be really nice, but 5x slower seems like a lot... Is this what I should expect? Isn't there a bottleneck here? 1200 IOPS (Ceph, 15 OSDs) vs. 6000 IOPS (the same disk used as a single local disk). Thank you
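
    For context, single-disk-versus-Ceph IOPS figures like these are usually produced with a small random-write fio run such as the sketch below (device path and sizes are placeholders, and writing to a raw device is destructive); some gap is expected, since every replicated Ceph write waits on network round trips to the other OSD nodes:

        fio --name=randwrite --ioengine=libaio --direct=1 --rw=randwrite \
            --bs=4k --iodepth=32 --numjobs=1 --runtime=60 --time_based \
            --filename=/dev/sdX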

  5. Poor performance with Ceph

    After getting good numbers with rados bench, I am now testing inside a VM and getting poor performance compared to a single disk; I am not sure whether this is expected with Ceph or whether I have a problem here.

        # cat /etc/pve/qemu-server/100.conf
        bootdisk: scsi0
        cores: 8
        ide2...
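
    The usual suspects for slow RBD-backed VM disks are the SCSI controller type, iothread and the cache mode; a hypothetical fragment of a VM config for comparison (storage and volume names are made up, not taken from the post):

        scsihw: virtio-scsi-single
        scsi0: ceph-pool:vm-100-disk-0,cache=writeback,discard=on,iothread=1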

  6. Poor performance with Ceph

    Thank you! You saved my life! I had misunderstood the public and cluster networks in Ceph. Now everything goes through the InfiniBand network and the performance is good:

        # rados bench -p scbench 10 write --no-cleanup
        hints = 1
        Maintaining 16 concurrent writes of 4194304 bytes to objects of size...
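
    For reference, that split is configured in the [global] section of the Ceph config; a sketch with illustrative subnets (the public network carries client and monitor traffic, the cluster network carries OSD replication and recovery):

        [global]
        public_network  = 10.11.11.0/24
        cluster_network = 10.12.12.0/24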

  7. Poor performance with Ceph

    Some pings:

        # ping -M do -s 8700 10.12.12.15
        PING 10.12.12.15 (10.12.12.15) 8700(8728) bytes of data.
        8708 bytes from 10.12.12.15: icmp_seq=1 ttl=64 time=0.111 ms
        8708 bytes from 10.12.12.15: icmp_seq=2 ttl=64 time=0.088 ms
        8708 bytes from 10.12.12.15: icmp_seq=3 ttl=64 time=0.162 ms
        8708 bytes...
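
    Those pings confirm that jumbo frames survive the path end to end (-M do forbids fragmentation, so an oversized payload would simply fail); the interfaces on that network would carry a matching MTU, e.g. a hypothetical /etc/network/interfaces stanza (interface name and MTU value are assumptions):

        auto bond1
        iface bond1 inet static
                address 10.12.12.15/24
                mtu 9000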

  8. Poor performance with Ceph

    Hi, I am building a Ceph cluster with Proxmox 6.1 and I am experiencing low performance. I hope you can help me identify where my bottleneck is. At the moment I am using 3 nodes, with 5 OSDs on each node (all SSD). Specs per node: Supermicro FatTwin SYS-F618R2-RT+, 128 GB DDR4, 1x E5-1630v4...
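
    A common way to get a baseline at the RADOS level, before involving any VM, is a short rados bench run against a throwaway pool (pool name and PG count below are arbitrary):

        ceph osd pool create scbench 64 64
        rados bench -p scbench 10 write --no-cleanup
        rados bench -p scbench 10 seq
        rados -p scbench cleanup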

  9. [Tip] fast reboots with kexec

    Hi, it works when I start the service, but I have to start it manually after every reboot, even though the service is enabled:

        root@xxxxxxxx:~# uptime
        13:44:03 up 1 min, 2 users, load average: 0.20, 0.12, 0.04
        root@xxxxxxxx:~# systemctl kexec
        Cannot find the ESP partition mount point...
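
    For background, "systemctl kexec" on its own tries to locate a kernel via the EFI system partition, which appears to be why it fails here; the manual route with kexec-tools stages the kernel explicitly first (a sketch, assuming kexec-tools is installed, not the exact service from the tip):

        # stage the currently running kernel and initrd, reusing the current cmdline
        kexec -l /boot/vmlinuz-$(uname -r) --initrd=/boot/initrd.img-$(uname -r) --reuse-cmdline
        # then reboot straight into it, skipping firmware and bootloader
        systemctl kexec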

  10. [Tip] fast reboots with kexec

    Hi, have you tried this with Proxmox 6? I have tried, but it says:

        # systemctl kexec
        Cannot find the ESP partition mount point.

  11. ZombieLand / RIDL / Fallout (CVE-2018-12126, CVE-2018-12130, CVE-2018-12127, CVE-2019-11091)

    After applying the new kernel in Proxmox and installing the intel-microcode package from the Debian non-free repo, I get this on the host:

        # cat /sys/devices/system/cpu/vulnerabilities/mds
        Mitigation: Clear CPU buffers; SMT vulnerable

    However, in a CentOS 7 VM on this host, with an updated kernel, I get...
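
    For the guest to report the same mitigation, the virtual CPU has to expose the md-clear feature flag; with a generic model such as kvm64 the guest keeps reporting itself as vulnerable. A hedged example of how this is typically set in a Proxmox VM config (VMID, CPU model and flag list are placeholders; the two cpu lines are alternatives, not meant to be combined):

        # /etc/pve/qemu-server/<vmid>.conf
        # either pass the host CPU (and all its flags) straight through...
        cpu: host
        # ...or add the flag explicitly to a named model
        cpu: kvm64,flags=+md-clear;+spec-ctrl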