Search results

  1. Ceph: how to add a namespace to a pool

    I agree, Proxmox disks are visible in "rbd -p poolname ls" and "rbd -p poolname info ID". There's no namespace support there yet.
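
    For instance, a minimal sketch of those two commands (the pool name and image name here are placeholders):

    # list the RBD images in a pool, then inspect one of them
    rbd -p poolname ls
    rbd -p poolname info vm-100-disk-1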
  2. Node shows no VMs, but VMs are running.

    I've seen something like this in the Firefox browser - refreshing the browser shows the VMs. Do the VMs show if you go to the node and run "qm list"?
  3. Ceph: how to add a namespace to a pool

    I think you mean: how to add a permission role name. Click on Datacenter->Permissions tab. Add your users and groups in the obvious way; then, to add a role to a user, click the Permissions tab and then the Add button. Or maybe you mean Ceph pools?
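
    If you prefer the command line, something like this should be the equivalent on PVE 5.x (the user and role names are placeholders; check "man pveum" for the exact syntax on your version):

    # grant the built-in PVEAdmin role to a user on the root path
    pveum aclmod / -user alice@pve -role PVEAdmin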
  4. Ceph OSD Performance Issue

    Just taking this bit first - the VMs would be accessing Ceph, so their data wouldn't need to be on any particular node or OSD. This sounds like a separate problem. So, back to the main issue: I take it that your 3-node "good" cluster is live with a lot of data on it, but could you delete all the OSDs on...
  5. Ceph OSD Performance Issue

    ceph tell osd.0 bench -f plain
    bench: wrote 1024 MB in blocks of 4096 kB in 8.649105 sec at 118 MB/sec
    This is with no SSD and no 10G network, so I would guess either your SSD is not there or your 10G network is really 1G. Just a guess, mind.
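
    If you want to rule out the network, a quick sketch (the interface name is a placeholder for your ceph network interface):

    # confirm the negotiated link speed of the ceph network interface
    ethtool eth0 | grep -i speed
    # and run the same bench against another OSD for comparison
    ceph tell osd.1 bench -f plain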
  6. A stop job is running...

    I wouldn't want to speculate any further, sorry.
  7. A stop job is running...

    systemctl disable lvm2-monitor?
  8. Unused disk problem with backup/restore

    proxmox-ve: 5.1-32 (running kernel: 4.13.13-2-pve)
    pve-manager: 5.1-41 (running version: 5.1-41/0b958203)
    pve-kernel-4.13.13-2-pve: 4.13.13-32
    libpve-http-server-perl: 2.0-8
    lvm2: 2.02.168-pve6
    corosync: 2.4.2-pve3
    libqb0: 1.0.1-1
    pve-cluster: 5.0-19
    qemu-server: 5.0-18
    pve-firmware: 2.0-3...
  9. Unused disk problem with backup/restore

    No, disk_ct is set to "Container" and disk_vm is set to "Disk image". Also, it has only happened twice, not every time (out of 4 restores so far).
  10. Unused disk problem with backup/restore

    I have backed up and restored a few VMs from another Proxmox. After the restore has completed, the VM's hardware sometimes has two disks of the same image:
    Hard Disk (virtio0) disk_vm:vm-108-disk,size=50G
    Unused Disk 0 disk_ct:vm-108-disk-1
    The ceph pool is called "disk"...
  11. Migration without cluster

    You can dd the disk into a network pipe and copy it out at the other end. The pipe could be "netcat" or "udp send" or ssh. The receiving end could be a "system rescue cd". This works best if the disk is quiet at the time, and better still if it is off (use a system rescue cd to send as well).
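
    A minimal sketch of the netcat variant (the hostname, port, and device paths are placeholders, and the listen flags vary between netcat flavours; start the receiver first):

    # receiving end, e.g. booted from a system rescue cd:
    nc -l -p 2222 > /dev/sdX
    # sending end, stream the whole disk through the pipe:
    dd if=/dev/sda bs=1M | nc receiver.example.com 2222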
  12. Tiny bug: Ceph status from pool view not clickable

    From the pool view, Datacenter->Summary, the Ceph status changes the cursor to a pointer as if it were clickable, but it isn't.
  13. Check VM is running with ping or better ideas?

    You should monitor the services that the VMs provide; you could use httping to check a webserver, for instance. You could also use a more fully featured program like Nagios or Zabbix to get an overview of your system.
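
    For example, a quick httping one-liner (the URL and count are placeholders):

    # send 5 HTTP "pings" to the webserver and report response times
    httping -c 5 -g http://www.example.com/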
  14. Proxmox CEPH Cluster's Performance

    From: http://docs.ceph.com/docs/master/rados/operations/placement-groups/#set-the-number-of-placement-groups
    ceph osd pool set {pool-name} pg_num {pg_num}
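
    For example, assuming a pool named "rbd" being grown to 128 placement groups (both values are placeholders; pg_num can only be increased, and pgp_num usually needs the same bump):

    ceph osd pool set rbd pg_num 128
    ceph osd pool set rbd pgp_num 128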
  15. pveceph init --network x.x.x.x/24 needed on all nodes?

    OK, so you think that the reboot was the important part in fixing the "problem"... I note that if you do not run init on all nodes, the useful links to the Ceph config are not created on all the nodes, and "ceph -s" will not work on all of them.
  16. pveceph init --network x.x.x.x/24 needed on all nodes?

    Hi, I recently re-installed Proxmox 5.1 and Ceph and followed the instructions: "After installation of packages, you need to create an initial Ceph configuration on just one node, based on your network". I found that ceph-osd.?.log had references to the non-ceph network in it, so I ran "pveceph init...
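
    For reference, a sketch of the per-node variant discussed above (the subnet is a placeholder for your ceph network):

    # on each node: point the config at the ceph network, then verify
    pveceph init --network 10.10.10.0/24
    ceph -s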
  17. Templates in High Availability

    But you can only click on the template to clone it if the node it is on is up, so I've added my templates to HA too.
  18. Templates in High Availability

    I am wondering whether thin-cloned storage is still there after migration?
  19. Ubuntu 16.04 Crash in Proxmox

    Could it be backup? Backup jobs happen weekly...
  20. Templates in High Availability

    If I have VMs in High Availability, should I have their templates in HA too, or is it handled automatically by HA?
