Search results

  1. Unexpected Transfer Speeds

    We have an issue with one of our VMs and slow backup. We see this each time, just on that VM, during backup: INFO: scsi0: dirty-bitmap status: existing bitmap was invalid and has been cleared. I am working on why that is; I searched the forum and see you have the same issue. I assume that bad...
  2. [SOLVED] tags - what are they and where to get more info.

    Hello. Where can I get information on what tags are and how to use them? I tried to search for tags on the forum and did not see an answer. Also, is there a way to search the documentation for something like this? Best regards, Rob Fantini
  3. vm missing after failed migrate

    Thank you for pointing that out... I was dealing with this at 2 AM and got it wrong ;-) qm unlock 125 worked.
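The fix quoted in this result can be sketched as follows, assuming the failed migration left a stale lock on the VM (VMID 125 comes from the post):

```shell
# A failed migration can leave a stale lock on the VM; it shows up as a
# 'lock:' line in the VM's config (VMID 125 is the one from the post).
qm config 125 | grep '^lock:'

# Clear the stale lock so the VM can be managed again. Only do this after
# confirming no migration task for this VM is still running.
qm unlock 125
```

These commands must be run on a PVE node that hosts the VM's config.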
  4. container backup fail

    OK, so these started again. I thought the updates and reboots after I last posted had solved the issue. This time the backup to PBS worked; local vzdump failed. # stat /dev/rbd-pve File: /dev/rbd-pve Size: 60 Blocks: 0 IO Block: 4096 directory Device: 0,5 Inode: 2238...
  5. Proxmox 8.1 - kernel 6.5.11-4 - rcu_sched stall CPU

    Here is the qm config for a VM that had the issue: # qm config 902 bootdisk: scsi0 cores: 1 ide2: none,media=cdrom memory: 1024 name: ldap-master2 net0: virtio=92:EC:4F:23:5C:37,bridge=vmbr3,tag=3 numa: 0 onboot: 1 ostype: l26 protection: 1 scsi0...
  6. Proxmox 8.1 - kernel 6.5.11-4 - rcu_sched stall CPU

    Hello, we have the rcu issue. 5 nodes, with all systems using CPU "type 80 x Intel(R) Xeon(R) Gold 5218R CPU @ 2.10GHz (2 Sockets)". How do I set -pcid?
  7. Proxmox Failed to Login.

    I had the same issue. Thank you, gabriel, for the suggestion to turn off LastPass... that fixed my issue.
  8. container backup fail

    After rebooting the node I could restore the PCT: recovering backed-up configuration from 'pbs-daily:backup/ct/604/2023-08-31T20:37:10Z' /dev/rbd0 Creating filesystem with 3670016 4k blocks and 917504 inodes Filesystem UUID: 4ff60784-0624-452e-abdf-b21ba0f165a5 Superblock backups stored on...
  9. migrations on a node restart

    Hello. I think a nice change would be for migrations to occur before the non-high-availability VMs are shut down.
  10. container backup fail

    I see the PCT restore that worked above was from a different backup, so I tested restoring the same backup that had failed, on another node: recovering backed-up configuration from 'pbs-daily:backup/ct/604/2023-08-31T20:37:10Z' /dev/rbd7 Creating filesystem with 3670016 4k blocks and 917504 inodes...
  11. container backup fail

    The issue seems to be with the PVE host. I cannot restore a PCT backup; KVM restore works okay. Here is part of the output: recovering backed-up configuration from 'pbs-daily:backup/ct/604/2023-08-31T20:37:10Z' /dev/rbd0 The file...
  12. container backup fail

    Also, the error in the first post was with NFS storage; the same result occurs when using local storage. 604: 2023-10-15 02:10:49 INFO: Starting Backup of VM 604 (lxc) 604: 2023-10-15 02:10:49 INFO: status = running 604: 2023-10-15 02:10:49 INFO: CT Name: bc-sys4 604: 2023-10-15 02:10:49 INFO...
  13. container backup fail

    Note: the container is also backed up to a PBS server, and that works okay.
  14. container backup fail

    Hello. We have around 15 containers. Just one has backup failures, twice in the past 4 days. Here is more info: dmesg [Sat Oct 14 08:44:46 2023] rbd: rbd1: capacity 15032385536 features 0x1d [Sat Oct 14 08:44:46 2023]...
  15. Monitoring ceph with Zabbix 6.4

    Thanks. What should be used for $PVE.URL.PORT? 443, or 8006, or something else?
  16. Monitoring ceph with Zabbix 6.4

    Hello. I have followed https://geekistheway.com/2022/12/31/monitoring-proxmox-ve-using-zabbix-agent/ and have that working. I am confused about how to get Ceph data into Zabbix. You seem to mention that the following needs to be set up: * Integrated Zabbix Proxmox Template Key...
  17. [SOLVED] 'ceph pg 55.0 query' not working

    Arron - thank you, I was able to edit the pool in the UI. Proxmox has made it so much easier to edit a pool than I remember from the old days!!
  18. [SOLVED] 'ceph pg 55.0 query' not working

    Yes, years ago I had set up a Ceph rule like that, and we have since replaced the drives. Could you point me to documentation on changing the CRUSH map rule?
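For the question in this result, a minimal sketch of a CRUSH rule change, assuming the goal is to steer a pool onto the replaced drives via a device class; the rule name, pool name, and class below are placeholders, not from the post:

```shell
# Create a replicated CRUSH rule that targets a device class
# (rule name 'replicated-nvme' and class 'nvme' are placeholders):
ceph osd crush rule create-replicated replicated-nvme default host nvme

# Point the pool at the new rule ('mypool' is a placeholder);
# Ceph then rebalances the pool's PGs onto the matching OSDs.
ceph osd pool set mypool crush_rule replicated-nvme
```

Changing a pool's rule triggers data movement, so it is best done during a low-traffic window.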
  19. [SOLVED] 'ceph pg 55.0 query' not working

    # pveceph pool ls --noborder Name Size Min Size PG Num min. PG Num Optimal PG Num PG Autoscale Mode PG Autoscale Target Size PG Autoscale Target Ratio Crush Rule Name %-Used Used .mgr 3 2 1 1 on...
  20. [SOLVED] replacing an rpool disk, question on grub-install

    I think there was probably a glitch due to not following the documentation. I ended up using /usr/sbin/grub-install.real /dev/nvme0n1. I'll mark this closed.
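The workaround mentioned in this result, alongside the currently documented route, sketched under the assumption of a legacy-BIOS ZFS rpool; the device name is the one from the post, and the partition number is an assumption to verify against your own disk layout:

```shell
# Workaround from the post: invoke the real grub-install binary directly
# against the replacement disk.
/usr/sbin/grub-install.real /dev/nvme0n1

# Documented route on recent Proxmox VE: use proxmox-boot-tool on the
# boot/ESP partition of the new disk (partition number is an assumption):
# proxmox-boot-tool format /dev/nvme0n1p2
# proxmox-boot-tool init /dev/nvme0n1p2
```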