Recent content by aluisell

  1. Using the API to install HTTPS certificates on PBS

    Just upgraded OPNsense to 26.1.3 and it works.
  2. Using the API to install HTTPS certificates on PBS

    Hi Chris and needanewername, I think there's an issue with the certificate path in PBS. I'm using OPNsense with ACME client automation; the script refers to the same API path, but on my PBS that path does not exist. I have also posted about this on the forum...
  3. acme client automation from opnsense upload certificate

    Hi, I'm trying to get OPNsense, through ACME client automation, to upload a Let's Encrypt certificate to PBS (I got it working properly for PVE). I found the script used in the acme.sh git repo: https://github.com/acmesh-official/acme.sh/blob/master/deploy/proxmoxbs.sh. Looking at the script, it...
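For the deploy step in item 3, a minimal sketch of invoking the proxmoxbs hook from acme.sh. The environment variable names below are assumptions based on how acme.sh deploy hooks are usually configured — verify them against the current proxmoxbs.sh in the linked repo before relying on them:

```shell
# Assumed variable names -- check deploy/proxmoxbs.sh for the real ones.
export DEPLOY_PROXMOXBS_SERVER="pbs.example.com"      # placeholder: your PBS host
export DEPLOY_PROXMOXBS_SERVER_PORT="8007"            # default PBS API port
export DEPLOY_PROXMOXBS_USER="root"
export DEPLOY_PROXMOXBS_USER_REALM="pam"
export DEPLOY_PROXMOXBS_API_TOKEN_NAME="acme"         # placeholder: a token allowed to update the cert
export DEPLOY_PROXMOXBS_API_TOKEN_KEY="xxxxxxxx-xxxx" # placeholder secret

# Deploy an already-issued certificate to PBS via the hook
acme.sh --deploy -d pbs.example.com --deploy-hook proxmoxbs
```

The hook pushes the certificate over the PBS API rather than writing files on disk, so if the script fails on a missing path, comparing the API endpoint it calls against your PBS version's API is the first thing to check.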
  4. vm migration error on node reboot

    It seems to be due to an already-reported bug: https://bugzilla.proxmox.com/show_bug.cgi?id=7092
  5. if reboot is triggered pve node goes away too fast before ha migration is finished

    Exact same problem here with the latest release, 9.1.1. By the way, I got the exact same error after launching qm migrate from an old 8.1 node to migrate a VM to a new cluster (this one running 9.1.1)... I initially thought it was due to the EFI disk and TPM disk, but I was wrong; I also got it on a VM with...
  6. vm migration error on node reboot

    I'm getting a VM migration error during a node reboot (for a kernel update) on a 3-node hyperconverged cluster with Ceph installed. The error is: Cleanup after stopping VM failed - org.freedesktop.DBus.Error.NoReply: Did not receive a reply. Possible causes include: the remote application did not...
  7. Proxmox 9 in a 3-node cluster with Ceph: does VM migration use the vmbr0 network? How to get faster VM migration speed?

    Hi all, is it correct that by default Proxmox 9, in a 3-node cluster configuration with Ceph, uses the vmbr0 network for VM migration? The current node network config uses separate networks for the Ceph cluster (10.10.10.x/24), Ceph public (10.10.20.x/24), Corosync (172.16.1.x/24) and vmbr0...
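On the question in item 7: Proxmox VE does default to the cluster network (typically behind vmbr0) for migration, but it can be pinned to a dedicated subnet cluster-wide in /etc/pve/datacenter.cfg. A minimal sketch — the subnet here is just the 10.10.10.x example from the post, and whether to share it with Ceph cluster traffic is a design choice, not a recommendation:

```
# /etc/pve/datacenter.cfg (cluster-wide settings)
# Route migration traffic over a dedicated subnet instead of vmbr0.
migration: secure,network=10.10.10.0/24
```

With `secure`, traffic goes over SSH; `insecure` is faster on a trusted, isolated migration network.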
  8. Installation Proxmox ZFS raid controller HP MR408i-o

    Hi, I'm in a very similar situation. I have an HPE DL380 Gen11 server with an MR408i controller and 8 disks (2x 480GB SSD and 6x 1.9TB SSD). I'm trying to check whether there's any issue configuring PVE to use ZFS RAID 1 on the two 480GB drives and then Ceph on the other 6 devices. In order to get drive...
  9. network performance issue with intel 710QA2 40GB 2 port adapter driver i40e proxmox ve 8.4 latest

    Hi folks, I have a network performance issue with a 3-node PVE 8.4 cluster (all updates installed). The hosts use Intel 710QA2 adapters, which are 40Gb QSFP+ 2-port NICs, connected through an HPE 5700 switch that supports 40Gb connections. The firmware of the card has been updated to...
  10. [SOLVED] qm remote migrate from single node to cluster from zfs storage to ceph storage getting error "...invalid bootorder: device 'sata0' does not exist'"

    Hi Fabian, thanks so much. I went into the VM options, removed sata0 from the boot disks, and that worked. Sorry, I hadn't noticed that it was not removed. Regards, Andrea
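The fix from item 10 can also be done from the CLI instead of the VM options dialog. A sketch, assuming the VMID is 100 and the real boot disk is scsi0 (both are placeholders — adjust to your VM):

```shell
# Show the VM's current boot configuration (look for the stale sata0 entry)
qm config 100 | grep '^boot'

# Re-set the boot order without sata0; scsi0 is an assumption here --
# use whichever disk actually holds the bootloader
qm set 100 --boot order=scsi0
```

After this, `qm remote migrate` no longer trips over the nonexistent sata0 device in the boot order.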
  11. [SOLVED] qm remote migrate from single node to cluster from zfs storage to ceph storage getting error "...invalid bootorder: device 'sata0' does not exist'"

    Hi folks, I'm trying to manually live-migrate a VM from a single-node Proxmox to a Proxmox cluster. I have tried many different things, like checking the fingerprint and the permissions on the token, and trying with the VM online and offline, but I can't get rid of the error device...
  12. Proxmox 8.3 CEPH upgrade reef to squid error with monitor on 1 node

    I think I have identified the root cause: low space on / (below 5%) was triggering the monitor alarm. I freed up space, destroyed and re-created the monitor, and after a few manual steps was able to make it work again. Anyway, I noticed that once you remove a monitor, the...
  13. Proxmox 8.3 CEPH upgrade reef to squid error with monitor on 1 node

    Hi, I have a cluster with 3 nodes (pve1, pve2, pv3) with Ceph 18.2.4 (Reef) installed. Today I decided to upgrade to the Squid version, following the instructions here: https://pve.proxmox.com/wiki/Ceph_Reef_to_Squid. No problem up to the upgrade, but once I performed the command...
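The destroy-and-recreate step described in item 12 can be sketched with the standard pveceph commands. The node name pve1 is a placeholder; make sure the cluster keeps monitor quorum (at least 2 of 3 mons up) before destroying one:

```shell
# Check that / actually has free space again before recreating the mon
df -h /

# Remove the broken monitor from the cluster map (placeholder node name)
pveceph mon destroy pve1

# Re-create the monitor on the local node
pveceph mon create
```

Keeping a few percent of / free matters here because the Ceph monitor refuses to run (and raises the alarm seen in item 12) when its store's filesystem is nearly full.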