Search results

  1. vm cannot migrate because it shows a local cd/dvd

    I found the issue: I had a snapshot of this VM, and it had a local DVD attached at that time. I deleted the snapshot, and then migration worked.
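    The fix above (a snapshot still referencing a local ISO) can be confirmed before deleting anything: snapshot sections of a guest config keep their own CD/DVD line. A minimal sketch, assuming a made-up config standing in for `/etc/pve/qemu-server/<vmid>.conf` (the storage and ISO names are invented):

    ```shell
    # Made-up guest config; the [before-upgrade] snapshot section
    # keeps its own ide2 line referencing local storage.
    conf="ide2: none,media=cdrom
    scsi0: ssd-vm:vm-100-disk-0,size=32G
    [before-upgrade]
    ide2: local:iso/debian-11.iso,media=cdrom"

    # CD/DVD lines backed by local storage block live migration,
    # even when they only exist inside a snapshot section:
    printf '%s\n' "$conf" | grep 'media=cdrom' | grep -v '^ *ide2: none'
    ```

    The current hardware list can look clean while a snapshot line like the one above still pins the VM to local storage.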
  2. vm cannot migrate because it shows a local cd/dvd

    Hi, I have a VM, and I am trying to migrate it to another node, but it says it cannot migrate because of a local CD/DVD. However, I have no CD/DVD listed in the VM's hardware list, and I am wondering how to resolve this. Here is what happened before this: 1. this VM is on HA, and I had the nodes' cluster network down...
  3. upgrade ceph public network from a single nic to a bond of two nics

    I have a Ceph cluster (5 nodes with 2 OSDs on each node) running with a 10G (single NIC) public network and a 40G (single NIC) Ceph private network; both networks are on different VLANs. Now I want to upgrade my 10G NIC to a bond of two NICs (both separate 10G adapters). Given that my Ceph is running...
  4. setup a vm backup plan to repeat every 7 days

    Hi, all; I just had PBS installed, and it is working fine, but I am kind of new to setting up a custom backup plan. Here is what I want to do to back up one VM: 1. a full backup on Sunday, and then Monday to Saturday do an incremental backup. 2. then every Sunday rotate to a new full backup, then repeat an...
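    One note on the plan above: with PBS the full/incremental distinction largely disappears. Only the first backup of a guest transfers everything; every later run only uploads changed chunks, so a single repeating schedule already behaves like "full once, incremental afterwards". A sketch of a weekly backup job in `/etc/pve/jobs.cfg` (the job id, storage name, and VMID are made up):

    ```
    vzdump: backup-weekly-sun
            schedule sun 02:00
            storage pbs-store
            vmid 100
            mode snapshot
            enabled 1
    ```

    Rotation is then handled by prune/retention settings on the datastore rather than by alternating full and incremental jobs.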
  5. ceph report low on monitor space

    My Ceph turned back to normal after a while, but the following is my email notification at that time. Can I display some old warnings in detail? I do not know how to do that, if there is any way. HEALTH_WARN --- New --- [WARN] MON_DISK_LOW: mon proxmox6 is low on available space mon.proxmox6...
  6. ceph report low on monitor space

    Hi, thanks for your help; here is the output: root@proxmox6:~# df -h /var/lib/ceph/mon/* Filesystem Size Used Avail Use% Mounted on rpool/ROOT/pve-1 216G 76G 140G 35% /
  7. ceph report low on monitor space

    Hi, I have a Ceph cluster that has been running for a while now. Today Ceph reported low available space (19% avail) on one of my monitor nodes, "proxmox6". As shown below, I have a lot of available space, and I am not sure where and how I can assign more space to the Ceph mon. root@proxmox6:~# df -h Filesystem...
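    For context on the warning in the thread above: Ceph raises MON_DISK_LOW when the filesystem holding the monitor's data store drops below `mon_data_avail_warn` percent available (30% by default), regardless of how many gigabytes remain free. A sketch deriving the available percentage from a `df`-style line (the sample line is made up to match the thread's 19%):

    ```shell
    # df reports Use%; the check effectively looks at 100 - Use%
    # for the filesystem under /var/lib/ceph/mon/<mon-id>.
    line="rpool/ROOT/pve-1 216G 175G 41G 81% /"
    avail=$(printf '%s\n' "$line" | awk '{ sub(/%/, "", $5); print 100 - $5 }')
    echo "${avail}% available"   # below the default 30% threshold, so MON_DISK_LOW fires
    ```

    So 41G free can still trigger the warning on a large filesystem; raising `mon_data_avail_warn` or freeing space on the root filesystem are the usual ways out.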
  8. HA issue with a passthrough pci network card

    That is what I thought too. Thanks
  9. HA issue with a passthrough pci network card

    Hi, I have a 3-node HA cluster, and I have a VM with a passthrough PCI network card on node1. Yesterday, when I shut down node1, this VM auto-migrated to node2 as designed, but it failed to start on node2 because of the passthrough PCI network card. Is this normal? I am running 7.2-7. Thanks
  10. LAN # shown in a VM

    Got it, thank you very much.
  11. LAN # shown in a VM

    Hi; I have a VM with one network device (net0) and a PCI passthrough 10G card. In this VM, it shows lan1 = the net0 network device and lan2 = the PCI passthrough 10G card. My issue is: I want to change the VM's lan1 to my PCI passthrough 10G card, and lan2 to my net0 network device. How can I do that...
  12. how to access a ceph pool from a non-ceph node

    On nodes #1 and #2, this "ssd-vm" drive shows in gray, and when I click on it, it shows as not available. Here is a little more detail about my networks: 1. all 7 nodes are on a 1G network for Proxmox admin, and all are on a separate 1G network for the PVE cluster. 2. nodes #3-#7 are on a 10G network for...
  13. how to access a ceph pool from a non-ceph node

    Hi, I have a 7-node PVE cluster on a 1G network with the following setup: 1. the 7-node PVE cluster is on a 1G cluster network. 2. nodes #1 and #2 are older machines and do not have Ceph installed. 3. nodes #3-#7 are newer and have Ceph installed; Ceph is on a 10G network. 4. I created a pool (I called it...
  14. ceph RBD PG # setting

    Hi, all; I have been running a 5-node Ceph cluster for a while now, and I have 10 OSDs. I had one RBD pool set up with 128 PGs and autoscale on. My question is: since I have autoscale on, do I need to increase the PG number on this RBD pool, or will Proxmox Ceph automatically increase the PG number for me?
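    On the autoscaler question above: with `pg_autoscale_mode` set to on, Ceph itself (not Proxmox) adjusts `pg_num` as the pool grows, and `ceph osd pool autoscale-status` shows the current targets, so manual increases are normally unnecessary. The classic manual rule of thumb it replaces is roughly (OSDs x 100) / replica size, rounded to a power of two. A sketch for the 10-OSD setup from the thread (the replica size of 3 is an assumption; it is the PVE default):

    ```shell
    osds=10
    size=3                              # assumed replica count (Proxmox default)
    target=$(( osds * 100 / size ))     # rule-of-thumb total PGs across the pool
    pg=1
    while [ $(( pg * 2 )) -le "$target" ]; do pg=$(( pg * 2 )); done
    echo "$pg"                          # largest power of two not above the target
    ```

    For 10 OSDs and size 3 this lands at 256, which is why 128 PGs on such a cluster is close enough that the autoscaler may simply leave it alone.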
  15. vm migration error on a cluster

    Hi, I believe I resolved it myself, but I do not remember what I did. You can try the following: 1. find out which network link you are using for migration; it could be your IP for the Proxmox admin port or the cluster heartbeat link. Try to reset that link; it may resolve the issue. Or...
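    On the "find out which network link you are using for migration" step: in PVE the migration network can be pinned explicitly in `/etc/pve/datacenter.cfg`, which removes the guesswork about which link to check or reset. A sketch (the CIDR is a made-up example for a dedicated migration network):

    ```
    migration: secure,network=10.10.10.0/24
    ```

    Without this setting, migration traffic defaults to the network of the cluster address, which is why resetting that link can clear a stuck migration.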
  16. All VMs locking up after latest PVE update

    I have no NFS disk, only a Ceph drive, so I cannot see why only this VM has the issue. The rest of the VM backups take 20% longer than before, but I am not sure whether that is worth raising alarms about. Here is the output of pvesm status: VM_Backup cifs active 28107205512 21264030312...
  17. All VMs locking up after latest PVE update

    Hi, I just started having the same problem, only on one VM (310), at backup. I got the following error msg in syslog: Apr 23 09:39:20 proxmox5 pvedaemon[1493255]: VM 310 qmp command failed - VM 310 qmp command 'query-proxmox-support' failed - unable to connect to VM 310 qmp socket - timeout after 31...
  18. how to adjust a node's location in the cluster view tree

    Thank you, the link for editing corosync.conf helped me a lot, but I did not rename the node; I am afraid something strange might happen.