Recent content by RobFantini

  1. Backup: ERROR: unable to find configuration file for VM

    Twice in 3 weeks at least one node had failed backups. This thread showed a similar situation: https://forum.proxmox.com/threads/proxmox-ve-vzdump-backup-of-vms.54291/post-250155 There are about 10 VMs on the node. The first KVM backed up OK: INFO: status: 98% (21063794688/21474836480)...
  2. [SOLVED] ceph upgrades and zabbix

    Yes, that was a reminder. I think Zabbix is very good for monitoring Ceph latency and sending email when Ceph health is not OK. Is there a place I could put a how-to on setting that up, or should a wiki page be used?
  3. osd move issue

    Hello, please note the 'now active' in the output:
    vgchange -a n /dev/ceph-eab7fc8d-051a-4756-a8e5-1a3acb3e92c0
    0 logical volume(s) in volume group "ceph-eab7fc8d-051a-4756-a8e5-1a3acb3e92c0" now active
    As of now the only way I can move an OSD from one system to another is to 1- stop/out/destroy...
  4. osd move issue

    I do not know what you mean by 'no patch needed'. Question: should this have worked?
    vgchange -a n /dev/ceph-eab7fc8d-051a-4756-a8e5-1a3acb3e92c0
    0 logical volume(s) in volume group "ceph-eab7fc8d-051a-4756-a8e5-1a3acb3e92c0" now active
  5. osd move issue

    Hello, I tried moving an OSD without the LVM deactivate; that did not work. So I moved the OSD back and rebooted to activate it [could not get it up otherwise]. 1- At PVE, set the OSD out. 2- At PVE, stop the OSD. 3- Worked on deactivating the LVM; this seemed to work, as no output resulted: lvchange -an... (a sketch of the full sequence follows after this list)
  6. [SOLVED] ceph class issue with nautilus nvme p3700

    Solved with info from https://access.redhat.com/solutions/3341491 (see the device-class sketch after this list).
    ## Initial members in nvme class
    # ceph osd tree | grep nvme
    1   nvme   1.81929   osd.1   up   1.00000   1.00000
    4   nvme   1.81929   osd.4   up   1.00000   1.00000
    # ceph osd crush set-device-class nvme osd.0
    Error...
  7. [SOLVED] ceph class issue with nautilus nvme p3700

    Here is part of the tree, plus more info:
    # ceph device ls | grep nvme
    INTEL_SSDPEDMD020T4D_HHHL_NVMe_2000GB_CVFT5190000Q2P0EGN   pve14:nvme0n1   osd.4
    INTEL_SSDPEDMD020T4D_HHHL_NVMe_2000GB_CVFT735300072P0OGN   pve10:nvme0n1   osd.1...
  8. [SOLVED] ceph class issue with nautilus nvme p3700

    I have a class set up for nvme. That worked fine with PVE 5 / Luminous for our Intel NVMe P3700s. Now when I add an NVMe: 1- The PVE screen gives a warning about a RAID controller: Note: Ceph is not compatible with disks backed by a hardware RAID controller. For details see the reference...
  9. osd move issue

    Hello Alwin - How do I deactivate an OSD's LVM? I read the man page for ceph-volume, and I did not see an option to deactivate LVM.
  10. Unable to boot Containers after update

    Try setting LXC > Options > Features: nesting (see the sketch after this list).
  11. [SOLVED] apache2.service: Failed to set up mount namespacing: Permission denied

    It may have been due to an update to apache2 or something? Or to LXC on the host.
  12. osd move issue

    Hello, I am traveling until the weekend and will test then.
  13. ceph latency spikes 2-3 times per day

    To apply those settings: ceph tell osd.* injectargs '--osd_enable_op_tracker=true' (a note on making the change persistent follows after this list).
  14. osd move issue

    I understand. And there should be a defined way for a Nautilus-created OSD to be moved from one node to another. If you or someone else has a suggested plan, I can test it in our lab.
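
Sketch for the OSD move in items 3-5 and 14: this is only an assumed sequence to test, not a procedure confirmed in the threads. osd.4 and the VG name are placeholders taken from the posts, and the systemd unit number must match the OSD ID.

    # on the source node: mark out, stop the daemon, deactivate the LVM volume group
    ceph osd out osd.4
    systemctl stop ceph-osd@4
    vgchange -an ceph-eab7fc8d-051a-4756-a8e5-1a3acb3e92c0
    # ... move the disk to the target node ...
    # on the target node: reactivate the VG and let ceph-volume bring the OSD up
    vgchange -ay ceph-eab7fc8d-051a-4756-a8e5-1a3acb3e92c0
    ceph-volume lvm activate --all
    ceph osd in osd.4

Whether the activation step brings the OSD up cleanly on the new host is the open point in these threads, so treat the target-node steps as the part to verify in the lab.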
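
Sketch for the device-class fix in items 6-8, assuming the pattern from the linked Red Hat article: the newly added OSD was auto-bound to another class, so that class has to be removed before nvme can be set. osd.0 is used here only as an example ID.

    # clear whatever class the new OSD was auto-assigned
    ceph osd crush rm-device-class osd.0
    # bind it to the nvme class
    ceph osd crush set-device-class nvme osd.0
    # confirm it now appears with the other nvme members
    ceph osd tree | grep nvme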
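
Sketch for the container nesting setting in item 10: the same feature can be enabled from the command line with pct. 101 is a hypothetical container ID, and the container needs a restart for the feature to take effect.

    # enable nesting for the container, then restart it
    pct set 101 --features nesting=1
    pct stop 101
    pct start 101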
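
Note for item 13: injectargs only changes the value in the running OSD daemons. A sketch of keeping osd_enable_op_tracker set across restarts on Nautilus, assuming the cluster uses the centralized config database rather than per-node ceph.conf entries:

    # runtime only, applied immediately to all running OSDs
    ceph tell osd.* injectargs '--osd_enable_op_tracker=true'
    # persist the setting in the monitors' config database (Nautilus and later)
    ceph config set osd osd_enable_op_tracker true
    # check what an individual OSD will read, e.g. osd.1
    ceph config get osd.1 osd_enable_op_tracker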
