Search results

  1. vm missing after failed migrate

    Thank you for pointing that out... I was dealing with this at 2 AM and got it wrong ;-) qm unlock 125 worked (see the unlock sketch after these results).
  2. container backup fail

    Ok, so these started again. I thought updates and reboots after I last posted had solved the issue. This time backup to PBS worked, local vzdump failed. # stat /dev/rbd-pve File: /dev/rbd-pve Size: 60 Blocks: 0 IO Block: 4096 directory Device: 0,5 Inode: 2238...
  3. Proxmox 8.1 - kernel 6.5.11-4 - rcu_sched stall CPU

    Here is the qm config for a VM that had the issue: # qm config 902 bootdisk: scsi0 cores: 1 ide2: none,media=cdrom memory: 1024 name: ldap-master2 net0: virtio=92:EC:4F:23:5C:37,bridge=vmbr3,tag=3 numa: 0 onboot: 1 ostype: l26 protection: 1 scsi0...
  4. Proxmox 8.1 - kernel 6.5.11-4 - rcu_sched stall CPU

    Hello, we have the rcu issue. 5 nodes, with all systems using CPU type "80 x Intel(R) Xeon(R) Gold 5218R CPU @ 2.10GHz (2 Sockets)". How do I set -pcid?
  5. Proxmox Failed to Login.

    I had the same issue. Thank you gabriel for the suggestion to turn off LastPass... that fixed my issue.
  6. container backup fail

    After rebooting the node I could restore the PCT: recovering backed-up configuration from 'pbs-daily:backup/ct/604/2023-08-31T20:37:10Z' /dev/rbd0 Creating filesystem with 3670016 4k blocks and 917504 inodes Filesystem UUID: 4ff60784-0624-452e-abdf-b21ba0f165a5 Superblock backups stored on...
  7. migrations on a node restart

    Hello, I think a nice change would be for migrations to occur before the non-high-availability VMs are shut down.
  8. container backup fail

    I see the PCT restore that worked above was from a different backup, so I tested restoring the same backup which failed on another node: recovering backed-up configuration from 'pbs-daily:backup/ct/604/2023-08-31T20:37:10Z' /dev/rbd7 Creating filesystem with 3670016 4k blocks and 917504 inodes...
  9. container backup fail

    The issue seems to be with the PVE host. I cannot restore a PCT backup; KVM restore works okay. Here is part of the output: recovering backed-up configuration from 'pbs-daily:backup/ct/604/2023-08-31T20:37:10Z' /dev/rbd0 The file...
  10. container backup fail

    Also, the error in the first post was with NFS storage; the same result occurs when using local storage. 604: 2023-10-15 02:10:49 INFO: Starting Backup of VM 604 (lxc) 604: 2023-10-15 02:10:49 INFO: status = running 604: 2023-10-15 02:10:49 INFO: CT Name: bc-sys4 604: 2023-10-15 02:10:49 INFO...
  11. container backup fail

    Note: the container is also backed up to a PBS server, and that works okay.
  12. container backup fail

    Hello, we have around 15 containers. Just one has backup failures - 2 times in the past 4 days. Here is more info: dmesg [Sat Oct 14 08:44:46 2023] rbd: rbd1: capacity 15032385536 features 0x1d [Sat Oct 14 08:44:46 2023]...
  13. Monitoring ceph with Zabbix 6.4

    Thanks. What should be used for $PVE.URL.PORT? 443, or 8006, or ...?
  14. Monitoring ceph with Zabbix 6.4

    Hello, I have followed https://geekistheway.com/2022/12/31/monitoring-proxmox-ve-using-zabbix-agent/ and have that working. I am confused about how to get Ceph data into Zabbix. You seem to mention that the following needs to be set up: * Integrated Zabbix Proxmox Template Key...
  15. [SOLVED] 'ceph pg 55.0 query' not working

    Arron - Thank you, I was able to edit the pool in the UI. Proxmox has made it so much easier to edit a pool than I remember from the old days!
  16. [SOLVED] 'ceph pg 55.0 query' not working

    Yes, years ago I had set up a Ceph rule like that, and we have since replaced the drives. Could you point me to documentation on changing the CRUSH map rule?
  17. [SOLVED] 'ceph pg 55.0 query' not working

    # pveceph pool ls --noborder Name Size Min Size PG Num min. PG Num Optimal PG Num PG Autoscale Mode PG Autoscale Target Size PG Autoscale Target Ratio Crush Rule Name %-Used Used .mgr 3 2 1 1 on...
  18. [SOLVED] replacing an rpool disk, question on grub-install

    I think there was probably a glitch due to not following the documentation. I ended up using /usr/sbin/grub-install.real /dev/nvme0n1 (a sketch of the documented replacement sequence follows these results). I'll mark this closed.
  19. [SOLVED] 'ceph pg 55.0 query' not working

    So ceph health still has the original warning: # ceph -s cluster: id: 220b9a53-4556-48e3-a73c-28deff665e45 health: HEALTH_WARN Reduced data availability: 1 pg inactive services: mon: 3 daemons, quorum pve15,pve11,pve4 (age 6h) mgr: pve11(active, since 6h)...
  20. re installing pve on a system with osd's

    We have 5 PVE hosts with 7 OSDs each. If for some reason I had to reinstall PVE on one of the nodes, is there a way to preserve the OSDs? The reinstall would be fast, and noout would be set beforehand. PS: I assume this: these days, with very reliable SSD or NVMe [having good DWPD] available, I do...
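
For reference on result 1: a minimal sketch of clearing a stale lock after a failed migration, assuming the VMID 125 mentioned in that post and that the commands are run as root on the node that currently holds the VM's config; the exact lock value and any follow-up cleanup may differ on a real cluster.

    # qm config 125 | grep lock    # check whether a leftover 'lock: migrate' entry is still set
    # qm unlock 125                # clear the stale lock left behind by the failed migration
    # qm status 125                # confirm the VM can be managed (and started) again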
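For reference on result 18: a rough sketch of the bootable rpool disk replacement sequence described in the PVE admin guide, assuming a legacy BIOS/GRUB boot. The device names /dev/nvme1n1 (remaining healthy mirror member) and /dev/nvme0n1 (new disk) are placeholders, <old-zfs-partition> stands for the partition being replaced, and partition 3 is assumed to be the ZFS partition as on a default install; systems booting via proxmox-boot-tool would run proxmox-boot-tool format/init on the new ESP instead of grub-install.

    # sgdisk /dev/nvme1n1 -R /dev/nvme0n1                          # copy the partition layout from the healthy disk to the new one
    # sgdisk -G /dev/nvme0n1                                       # give the copied partitions fresh random GUIDs
    # zpool replace -f rpool <old-zfs-partition> /dev/nvme0n1p3    # resilver rpool onto the new disk's ZFS partition
    # grub-install /dev/nvme0n1                                    # put the boot loader on the new disk (legacy boot only)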
