Search results

  1. lrm node1 (old timestamp - dead?)

    We had an issue where this node was forcibly powered down. Since then, HA doesn't like it and shows that error. I wanted to reboot the node, but it has a VM on it right now that is stuck in Migrate. I can power down the VM and reboot the node, but I've been trying to find a way to get this...
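
    A minimal sketch of one way out of this state, assuming the stuck VM is VMID 101 (the VMID and the exact symptoms are assumptions, not confirmed by the thread):

    ```
    # Check what the HA stack currently thinks of each node and service
    ha-manager status

    # Clear the stale "migrate" lock on the stuck VM (VMID assumed)
    qm unlock 101

    # Restart the local resource manager so its lrm timestamp refreshes
    systemctl restart pve-ha-lrm
    ```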
  2. Yet another CEPH tuning question (comparing to dell san)

    Is there any way to decrease that latency and increase the IO with smaller packets to get better results?
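
    For quantifying the small-IO question, a hedged sketch using rados bench with a small block size; the pool name and the 4 KiB size are assumptions:

    ```
    # Write benchmark with 4 KiB objects instead of the default 4 MiB,
    # 16 concurrent ops, to expose per-op latency rather than throughput
    rados bench -p testpool 60 write -b 4096 -t 16
    ```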
  3. Yet another CEPH tuning question (comparing to dell san)

    Heyo. Let me know if I need to add more information/stats to this. Here is my cluster, Proxmox fully updated: 4-node C6220; each node: 128 GB @ 1333 MHz, E5-2650 v0; dual GigE, bonded (network access), connected to two different 10G switches; dual 10G, failover (HA and migration), connected to...
  4. Snapshot failed to rollback

    I recreated the TPM State disk as per your suggestion and it booted no problem. Any ideas on why the snapshot rollback broke the TPM disk?
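
    For reference, a minimal sketch of recreating the TPM state disk from the CLI, assuming VMID 101 and the Ceph-RBDStor storage seen elsewhere in the thread (the VM must be powered off):

    ```
    # Drop the broken TPM state volume from the VM config
    qm set 101 --delete tpmstate0

    # Recreate a fresh TPM 2.0 state volume on the same storage
    qm set 101 --tpmstate0 Ceph-RBDStor:1,version=v2.0
    ```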
  5. Slow Snapshots?

    @fiona - The VM's HDD is 250 GB, probably about 15% utilized. As for the state storage, not sure. It's currently broken and I'm not sure how to fix it. Have another post open for that one.
  6. Snapshot failed to rollback

    PS. When this snapshot rollback failed, the VM became non-responsive. There was the option to try to roll back again, but same results as above.
  7. Snapshot failed to rollback

    Hey, Yes the VM was running during that time. root@pve1-cpu1:~# cat /etc/pve/qemu-server/101.conf agent: 0 bios: ovmf boot: order=scsi0;ide2;net0;ide0 cores: 8 cpu: host efidisk0: Ceph-RBDStor:vm-101-disk-0,efitype=4m,pre-enrolled-keys=1,size=528K ide0...
  8. Snapshot failed to rollback

    Windows VM. Wanted to test snapshot rollbacks; this is what happened: Task viewer: VM 101 - Rollback: Rolling back to snapshot: 1% complete... Rolling back to snapshot: 2% complete... Rolling back to snapshot: 3% complete... Rolling back to snapshot: 4% complete... Rolling...
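
    The CLI equivalents, in case the GUI task keeps failing the same way (the snapshot name here is hypothetical; the VMID is from the task log):

    ```
    # List the VM's snapshots and their parent chain
    qm listsnapshot 101

    # Retry the rollback from the shell to capture the full error output
    qm rollback 101 pre-test
    ```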
  9. Slow Snapshots?

    Hey. We have a 4-node Proxmox/Ceph cluster (Ceph on 40G NICs, Proxmox interconnects on 10G NICs, internet is dual 1G NICs). Ceph is 8x 2TB SM863a drives. Problem is the snapshots; I don't use them much but wanted to test before we allow clients on here. This part is quick: /dev/rbd4 saving VM state...
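
    One knob worth checking if it's the RAM-state write that is slow: the per-VM vmstatestorage option. A sketch, with the VMID and target storage name as assumptions:

    ```
    # See where the VM currently saves its snapshot RAM state
    qm config 101 | grep vmstatestorage

    # Point the saved VM state at a faster storage (name assumed)
    qm set 101 --vmstatestorage fast-nvme
    ```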
  10. Backup of Windows 11 VM keeps erroring.

    Bit slow on my response. Busy busy. I checked syslog and messages but couldn't find anything at that timestamp. I checked the proxmox-backup folder for logs but there are none; is there another place for backup logs (other than the backup node)? The backup node looked like a regular set of logs...
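
    On the Proxmox Backup Server side, a sketch of the usual places to look; these are the stock unit name and log path, and the date filter reuses the timestamp from the logs quoted in this thread:

    ```
    # Service log of the backup proxy, which handles incoming backups
    journalctl -u proxmox-backup-proxy --since "2022-07-20"

    # Per-task logs are stored as flat files under this tree
    ls /var/log/proxmox-backup/tasks/
    ```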
  11. Backup of Windows 11 VM keeps erroring.

    Been trying to figure out what's going on, but I'm not really sure where to start when it comes to backups. Using Backup Server 2.2-3, Proxmox 7.2-7, CEPH 16.2.9. Here are the logs from the backup server; VM 111 is the Win11 VM. 2022-07-20T00:00:38-06:00: starting new backup on datastore...
  12. Mellanox OFED (MLNX_OFED) Software with pve 7.0-2 and/or 6.4-4

    For me, I just used the drivers built into the OS. However, I had to change the card to Eth mode instead, plus tweaked the MTU to 65520 for all the NICs. Got only half the speed of our 40Gb NIC thanks to the overhead of Ethernet, but it's better than the 7Gb/s in IB.
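
    A hedged sketch of the Eth-mode switch using mlxconfig from the Mellanox Firmware Tools; the /dev/mst device path is an assumption for a ConnectX-3, and a reboot is needed afterwards:

    ```
    # Start the MST service and find the device path for the card
    mst start
    mst status

    # Set both ports to Ethernet (2 = ETH, 1 = IB), then reboot
    mlxconfig -d /dev/mst/mt4099_pciconf0 set LINK_TYPE_P1=2 LINK_TYPE_P2=2
    ```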
  13. Proxmox CEPH performance

    root@pve1-cpu4:~# rados bench -p cephfs_data 60 write --no-cleanup hints = 1 Maintaining 16 concurrent writes of 4194304 bytes to objects of size 4194304 for up to 60 seconds or 0 objects Object prefix: benchmark_data_pve1-cpu4_20932 sec Cur ops started finished avg MB/s cur MB/s last...
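
    The matching read passes and cleanup, per standard rados bench usage (the --no-cleanup above leaves the benchmark objects in place for these):

    ```
    # Sequential and random read passes against the objects left behind
    rados bench -p cephfs_data 60 seq
    rados bench -p cephfs_data 60 rand

    # Remove the benchmark objects when done
    rados -p cephfs_data cleanup
    ```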
  14. Proxmox CEPH performance

    Budget. Ha. I have to rub two pennies together to get a dime around here. I was lucky to get the QVOs. But looking at it now I should have gotten really any TLC drive to work with the writes we are going to need in the future... The plan was, since we have 4 nodes get 4 drives at a time (up...
  15. Proxmox CEPH performance

    Felix, thanks for the fast response. I'm surprised that it's that much of a difference between the two with the same number of drives. About 1/6th of the performance: is that right, or is there something I have configured wrong? If I increase the number of OSDs, even if they are QVOs, the speed...
  16. Proxmox CEPH performance

    As if this subject hasn't been brought around enough, I thought I would open a new one because I'm a bit confused. We have two clusters. ==| Cluster 1, dev |== Proxmox Virtual Environment 5.4-3; 4x Dell R710, between 72-128 GB RAM each; H700 - sda - 2x 500GB spinning, 7200rpm, RAID1; H700 - sdb -...
  17. [SOLVED] PVE7 unable to create OSD

    Well, after looking through logs and digging through posts I ran into this one: /usr/bin/ceph auth get client.bootstrap-osd > /var/lib/ceph/bootstrap-osd/ceph.keyring So I compared the key in the /etc/pve/priv/ceph.client.bootstrap-osd.keyring and /var/lib/ceph/bootstrap-osd/ceph.keyring...
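
    A sketch of that comparison and fix, using the same paths quoted in the post (process substitution assumes a bash shell):

    ```
    # Compare the cluster's bootstrap-osd key with the local copy
    diff <(ceph auth get client.bootstrap-osd) /var/lib/ceph/bootstrap-osd/ceph.keyring

    # If they differ, re-export the authoritative key over the stale one
    ceph auth get client.bootstrap-osd > /var/lib/ceph/bootstrap-osd/ceph.keyring
    ```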
  18. [SOLVED] PVE7 unable to create OSD

    Wasn't in the office for a bunch of days but got in to check the journal: Nov 10 14:41:30 pve1-cpu2 systemd[1]: Starting The Proxmox VE cluster filesystem... Nov 10 14:41:30 pve1-cpu2 pmxcfs[1264]: [quorum] crit: quorum_initialize failed: 2 Nov 10 14:41:30 pve1-cpu2 pmxcfs[1264]: [quorum]...
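
    Those quorum_initialize failures usually point at corosync rather than Ceph; a minimal sketch of what to check with standard PVE commands:

    ```
    # Cluster membership and quorum state as pmxcfs sees it
    pvecm status

    # The services behind /etc/pve; pve-cluster runs pmxcfs
    systemctl status corosync pve-cluster
    ```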
  19. [SOLVED] PVE7 unable to create OSD

    Getting this error: create OSD on /dev/sdb (bluestore) wiping block device /dev/sdb 200+0 records in 200+0 records out 209715200 bytes (210 MB, 200 MiB) copied, 0.543614 s, 386 MB/s Running command: /bin/ceph-authtool --gen-print-key Running command: /bin/ceph --cluster ceph --name...
  20. Mellanox OFED (MLNX_OFED) Software with pve 7.0-2 and/or 6.4-4

    Any ideas if installing Proxmox on 10.8 would be the best course of action for getting CEPH working at 40G instead of 7Gb/s? 7 has a bunch of nice features that I would like to keep using; will it work with 10.8?
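
    Either way, a quick sketch for checking which driver and firmware the card is actually running (the interface name is an assumption):

    ```
    # Inbox driver (mlx4_en/mlx5_core) vs OFED shows up in the driver field
    ethtool -i ens1

    # Negotiated link speed for the 40G port
    ethtool ens1 | grep -i speed
    ```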