Search results

  1. Possible bug in HA

    task started by HA resource agent Use of uninitialized value $type in string eq at /usr/share/perl5/PVE/QemuServer.pm line 2026. TASK OK It seemed to work properly - after I did "shutdown -h now" in the VM, the VM was started up again by HA, and then this message was seen in Task Viewer...
  2. Suggestion: Slave Proxmox Node

    Is there any possibility for a lightweight proxmox node that joins a cluster just to provide extra Ceph disk space? I have a proxmox cluster that I am very pleased with, but maybe I could expand its ceph storage with some storage nodes that are managed by the cluster. (I can see that others might...
  3. kb: How to see disk space used by LVM thin template VM

    Since it took me an hour to work out how to do this... lvs does not show disk space used in Data%: root@hack1:~# lvs LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert base-100-disk-2 data Vri---tz-k 20.00g thindata...
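    The snippet is cut off, but the blank Data% column is expected here: the template's disks are inactive, skip-activation thin snapshots (the trailing "k" in the attribute string), and lvs only reports usage for active thin volumes. A minimal sketch of one way to read the usage, assuming the VG and LV names shown in the output above:

      # temporarily activate the snapshot; -K overrides the "skip activation" flag
      lvchange -ay -K data/base-100-disk-2
      # Data% is now populated for that LV
      lvs data
      # deactivate again so the template disk is not left active
      lvchange -an data/base-100-disk-2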
  4. Migrate VM with local storage, progress of

    Hi, When proxmox 5.1 migrates a VM that has local storage there is no progress reported, just started then finished. I noticed that the migrating process uses "dd". You can get progress from "dd" by sending it a USR1 signal. i.e. "pkill -USR1 dd" Andrew
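    A rough sketch of the workaround described above; the -x flag and the watch loop are additions, not the poster's exact commands. dd prints its byte count and throughput to its own stderr when it receives USR1.

      # signal every running dd on the node; -x matches the process name exactly
      pkill -USR1 -x dd
      # repeat every 5 seconds for a crude progress display
      watch -n 5 'pkill -USR1 -x dd'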
  5. Unused disk problem with backup/restore

    I have backed up and restored a few VMs from another proxmox. After the restore has completed, the hardware of the VM sometimes has two disks of the same image: Hard Disk (virtio0) disk_vm:vm-108-disk,size=50G Unused Disk 0 disk_ct:vm-108-disk-1 The ceph pool is called "disk"...
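    To see which of the two entries points at a live volume, the standard inspection commands can help (storage names taken from the snippet, VMID 108 assumed):

      # show the VM configuration, including any unusedN lines
      qm config 108
      # list the volumes each storage actually holds
      pvesm list disk_vm
      pvesm list disk_ct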
  6. Tiny bug: Ceph status from pool view not clickable

    From pool view -> Data Center -> Summary, the cursor changes to a pointer over the Ceph status as if it were clickable, but it isn't.
  7. pveceph init --network x.x.x.x/24 needed on all nodes?

    Hi, I recently re-installed proxmox 5.1 and ceph and followed instructions: "After installation of packages, you need to create an initial Ceph configuration on just one node, based on your network" I found that ceph-osd.?.log had references to the non-ceph network in it, so I ran "pveceph init...
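    For reference, the quoted instructions mean the init is run once on a single node; /etc/pve is the cluster-wide pmxcfs filesystem, so the generated ceph.conf is shared automatically. A short sketch with a placeholder subnet:

      # run on one node only; substitute the real ceph network
      pveceph init --network 192.0.2.0/24
      # verify from any other node that the shared config is there
      cat /etc/pve/ceph.conf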
  8. Templates in High Availability

    If I have VMs in High Availability, should I have their templates in H.A. too or is it handled automatically by H.A.?
  9. Auto Migration

    Hi, I've been thinking about auto-migration based on cpu usage: #!/bin/bash # script to auto migrate VMs to lower loaded node debug=1 baseload=4000 #get list of nodes, there is some other text to strip out nodes=$(pvecm nodes| grep -Ev "^$|Member|-------|Nodeid|(local)"|awk '{print $3}') if...
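    The script in the snippet is truncated; below is a separate, heavily simplified sketch of the same idea. The threshold, target node and VM selection are all assumptions, not the poster's code.

      #!/bin/bash
      # migrate one running VM away from this node when the 1-minute load is high
      threshold=8
      target=pve2                                  # assumed target node name
      load=$(cut -d' ' -f1 /proc/loadavg)
      if awk -v l="$load" -v t="$threshold" 'BEGIN{exit !(l > t)}'; then
          # qm list columns: VMID NAME STATUS ... -> take the first running VMID
          vmid=$(qm list | awk 'NR>1 && $3=="running" {print $1; exit}')
          [ -n "$vmid" ] && qm migrate "$vmid" "$target" --online
      fi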
  10. Multicast usage in proxmox

    Hi, I have installed a 16 node proxmox 5.1 cluster with a ceph hdd osd on each node, 5 monitor/mgrs, 2Gbit network. During installation I wasn't able to get multicast networking on the network switch. Everything seems to work so I am wondering what problems I should look out for. Is multicast...
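    Proxmox VE 5.x clusters use corosync, which by default relies on multicast for cluster communication, so it is worth verifying that the switch actually passes it. The usual check is omping, run on all nodes at the same time (hostnames below are placeholders):

      # near-100% loss on the multicast lines means the switch is blocking it
      omping -c 600 -i 1 -q node1 node2 node3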