Search results

  1.

    [SOLVED] Major issues after upgrading to 7.2

    root@pve:~# systemctl status pve*
    ● pve-ha-crm.service - PVE Cluster HA Resource Manager Daemon
         Loaded: loaded (/lib/systemd/system/pve-ha-crm.service; enabled; vendor preset: enabled)
         Active: failed (Result: exit-code) since Thu 2022-05-05 17:09:06 EDT; 5min ago
        Process: 4948...
  2.

    [SOLVED] Major issues after upgrading to 7.2

    I was on PVE 7.1 prior to the update via the web GUI. The standard update and dist-upgrade commands ran and I rebooted the server. I noticed upon boot-up that I can SSH to the machine, but the web interface does not work. It seems a lot of processes will not start.
    root@pve:~# pveversion -v
    proxmox-ve: not...
  3.

    Snapshot hangs if qemu-guest-agent is running / Cloudlinux

    Just ran into this today too. When using CloudLinux 8 with the QEMU guest agent enabled, it will lock up the VM on the freeze operation. Turning off the guest agent in Proxmox works with no issues.
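    The workaround described above can be sketched from the PVE host with `qm`; the VMID 101 below is a placeholder:

    ```shell
    # Disable the QEMU guest agent option for the affected VM so that
    # snapshots no longer attempt the guest fs-freeze/fs-thaw calls.
    # 101 is a placeholder VMID.
    qm set 101 --agent enabled=0
    ```

    Note that the agent device is part of the VM's hardware definition, so the change only fully applies after the VM's next full stop/start.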
  4.

    5 Node cluster with ceph and HA. VM/CT not failing over

    Absolutely, what log file would be helpful? Here's resources.cfg:
    :~# cat /etc/pve/ha/resources.cfg
    ct: 100
        group HA
        state started
    ct: 102
        group HA
        state started
    ct: 104
        group HA
        state started
    vm: 103
        group HA
        state started
    vm: 106
        group HA
        state...
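    For reference, entries like the ones in resources.cfg are normally created via `ha-manager` rather than edited by hand; a sketch using IDs from the excerpt above:

    ```shell
    # Register a container and a VM as HA resources in group "HA"
    # (IDs taken from the resources.cfg excerpt above).
    ha-manager add ct:100 --group HA --state started
    ha-manager add vm:103 --group HA --state started
    ha-manager status   # confirm the CRM is tracking the resources
    ```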
  5.

    5 Node cluster with ceph and HA. VM/CT not failing over

    Yeah, that's the thing: they were already configured for HA, but when that node went offline they disappeared from the ha-manager status list. I got the hardware fixed yesterday on the crashed node, booted it up, and the VMs came back online. They then appeared back in ha-manager status without...
  6.

    5 Node cluster with ceph and HA. VM/CT not failing over

    Guess I'm restoring from backup! I'll play around with the ha simulator to figure out what's going on.
  7.

    5 Node cluster with ceph and HA. VM/CT not failing over

    Interestingly, from another node I can see the conf files for the machines that did not move in the directory of the offline node. These machines did not have any local resources and are using shared Ceph storage for their drives. In the ha-manager status above, they don't appear to be listed...
  8.

    5 Node cluster with ceph and HA. VM/CT not failing over

    I've got a 5-node setup with Ceph running on SSDs. VMs and containers have been set up for HA. We just had a node crash due to hardware, but the machines that were on that node are not failing over. In the GUI they have a grey question mark next to them, and clicking on them shows "no route to host...
  9.

    Diagnosing slow ceph performance

    Yup, they are enabled. iperf even shows the full 40 Gbps between all nodes.
  10.

    Diagnosing slow ceph performance

    No SMART errors; these seem to be all over the place. 1-2% wearout on each drive, nearly brand new.
    osd.0: {
        "bytes_written": 1073741824,
        "blocksize": 4194304,
        "elapsed_sec": 2.2047925269999999,
        "bytes_per_sec": 487003566.48115581,
        "iops": 116.1106983378305
    }
    osd.1: {...
  11.

    Diagnosing slow ceph performance

    Fixed up the cluster and public network, separating them. Both are on 40GbE Mellanox. Reran the tests with fio - these numbers are even worse.
    fio: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
    fio-3.16
    Starting 1 process...
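    The fio header above corresponds to a 4k, queue-depth-1 write run; a comparable invocation (a sketch, with /mnt/cephtest as a placeholder path on the Ceph-backed storage) would be:

    ```shell
    # 4k sequential writes, libaio, iodepth 1 -- a worst case for
    # network-attached storage, since every I/O pays a full round trip.
    fio --name=write --ioengine=libaio --rw=write --bs=4k \
        --iodepth=1 --direct=1 --size=1G \
        --filename=/mnt/cephtest/fio.dat
    ```

    At iodepth=1 the result is dominated by per-operation latency rather than bandwidth, which is worth keeping in mind when comparing against the 40GbE line rate.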
  12.

    Proxmox 7 replace ZFS disk in boot device

    I've recently set up a cluster using the latest Proxmox 7 ISO. One of the OS disks using ZFS went bad.
    :~# proxmox-boot-tool status
    Re-executing '/usr/sbin/proxmox-boot-tool' in new private mount namespace..
    System currently booted with legacy bios
    WARN: /dev/disk/by-uuid/0E18-2679 does not exist -...
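    For context, a rough sketch of the usual bootable-ZFS-disk replacement procedure from the PVE admin guide; /dev/sdX (healthy disk), /dev/sdY (replacement), and the partition numbers below are placeholders that assume the standard PVE ZFS layout (partition 2 = ESP/boot, partition 3 = ZFS):

    ```shell
    # Copy the partition table from the healthy disk, then give the
    # new disk its own GUIDs (device names are placeholders).
    sgdisk /dev/sdX -R /dev/sdY
    sgdisk -G /dev/sdY

    # Resilver the ZFS partition into the pool.
    zpool replace -f rpool <old-zfs-partition> /dev/sdY3

    # Re-create and initialize the boot partition on the new disk.
    proxmox-boot-tool format /dev/sdY2
    proxmox-boot-tool init /dev/sdY2
    proxmox-boot-tool status   # confirm both disks are bootable again
    ```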
  13.

    Proxmox Virtual Environment for hosting providers?

    The best route is using the API. If you are using something like WHMCS, there are modules to do all of this automatically. For example, ModulesGarden has a few that do exactly what you describe.
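    As a sketch of what API-driven provisioning looks like, `pvesh` exposes the same REST API on the CLI; the node name, VMID, and VM parameters below are placeholders:

    ```shell
    # Create a VM through the PVE REST API via pvesh
    # (node "pve1", VMID 9001, and settings are placeholder values).
    pvesh create /nodes/pve1/qemu --vmid 9001 --name customer-vm \
        --memory 2048 --cores 2
    ```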
  14.

    Diagnosing slow ceph performance

    Here's the current network interface on one of the nodes. I'd actually like the heavy-lifting parts of Ceph to go over the 10.3.32.0/24 network and utilize 10.3.34.0/24 for our front-end mgmt/client network and heartbeat. That would mean keeping the cluster network as 10.3.32. and moving public to...
  15.

    Diagnosing slow ceph performance

    Having issues trying to figure out where the performance problem lies. I currently have 5 nodes, each containing 5 Samsung EVO SSDs (I know consumer drives are not the best, but I still wouldn't expect performance to be this low). The Ceph public and cluster network are using...
  16.

    [SOLVED] API Authentication - No Ticket

    The cookie domain should have just been the root domain/IP instead of the full path. Resolved.
  17.

    [SOLVED] API Authentication - No Ticket

    Having an issue figuring out how to authenticate to the PMG API via PHP. I was able to generate a new ticket just fine, but passing in the cookie and header seems to be a problem for follow-up requests. The API tells me: HTTP/1.1 401 No ticket
    $pmgToken = Http::withOptions([ 'verify' => false...
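    The flow the PHP snippet is attempting can be sketched with curl; the host and credentials below are placeholders, and PMG's API listens on port 8006:

    ```shell
    # 1) Request a ticket; the JSON response contains "ticket" and
    #    "CSRFPreventionToken".
    curl -k -d 'username=root@pam' -d 'password=secret' \
        https://pmg.example.com:8006/api2/json/access/ticket

    # 2) Follow-up requests send the ticket as the PMGAuthCookie cookie;
    #    write operations additionally need the CSRF token header.
    curl -k -b 'PMGAuthCookie=<ticket>' \
        -H 'CSRFPreventionToken: <token>' \
        https://pmg.example.com:8006/api2/json/statistics/mail
    ```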
  18.

    Proxmox VE 6.4 available

    I noticed after upgrading that I cannot get SMART values to load on a few servers I have, including my home server. smartctl is working and has data, but the web GUI will never load the values. It's just stuck on "Loading..."
  19.

    file-level restore

    Yeah, I run into this a lot as well. Backups with Proxmox's built-in utilities have been for disaster recovery only, due to the potentially large sizes in the environment. We utilize different software for backups within each VM in order to grab individual files when needed. Not a problem with containers...
  20.

    AsRock (C246) IOMMU / GVT-g

    Finally got it..
    root@pve:~# ls /sys/bus/pci/devices/0000\:00\:02.0/mdev_supported_types/
    i915-GVTg_V5_4  i915-GVTg_V5_8
    I also installed i965-va-driver via apt, but not sure if that did anything.
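    With the mdev types visible, a virtual GPU instance is created by writing a UUID into the chosen type's `create` node; a sketch (the UUID is arbitrary):

    ```shell
    # Instantiate an i915-GVTg_V5_8 vGPU on the iGPU at 0000:00:02.0
    # by writing a fresh UUID to the type's create node.
    UUID=$(uuidgen)
    echo "$UUID" | sudo tee /sys/bus/pci/devices/0000:00:02.0/mdev_supported_types/i915-GVTg_V5_8/create
    ls /sys/bus/mdev/devices/   # the new mdev device appears here
    ```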