Search results

  1. Boot failed Not a bootable disk

    Hi all: One of my VMs in my Proxmox 5.4 cluster stopped working after a stop/start. Yesterday the VM was working just fine. I have tried to restore the VM to another storage (from a weekly backup). Backups were done on 27.07 and 3.08, and the machine was working on 4.08.2019. After the restore...
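
    A restore to an alternate storage like this is normally done with qmrestore; a minimal sketch, with the archive path, VMID, and storage name as placeholders rather than values from this thread:

      # restore a vzdump archive as the given VMID onto a specific target storage
      qmrestore /mnt/pve/<backup-storage>/dump/vzdump-qemu-<vmid>-<date>.vma.lzo <vmid> --storage <target-storage>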
  2. [SOLVED] Different version of pve in the same cluster, unable to clone VM

    Linux prox1 4.13.13-2-pve #1 SMP PVE 4.13.13-32 (Thu, 21 Dec 2017 09:02:14 +0100) x86_64 root@prox1:~# more /etc/apt/sources.list deb http://ftp.ro.debian.org/debian stretch main contrib # security updates deb http://security.debian.org stretch/updates main contrib...
  3. Long startup time for Linux VM after kernel update

    Hi all, I have a cluster, version: Virtual Environment 5.1-41. After updating the kernel on some Linux VMs in my cluster, I get a really long boot time. The processor stays at 100% for about 20 minutes while I get a prompt blinking on the black screen. Adding more cores to the VM does not help as...
  4. [SOLVED] Different version of pve in the same cluster, unable to clone VM

    Hello, I have a cluster with six nodes. VMs have storage on Synology NFS shares. The problem: when trying to clone one of the VMs on node5, I discovered that I'm not able to select the target storage (the combo box is empty). Interestingly enough, I discovered that this is true for 4...
  5. Migration of VM between nodes failed - could not activate storage 'local-zfs', zfs error: cannot imp

    Got it. I've restricted the storage to the node that needs it and now the migration is working. I just can't figure out how this happened, because I did not enable it on purpose for both nodes.
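
    For reference, restricting a storage definition to particular nodes can also be done from the CLI; a minimal sketch, assuming the storage ID is local-zfs and with the node name as a placeholder:

      # limit the 'local-zfs' entry to the node(s) that actually have the pool
      pvesm set local-zfs --nodes <nodename>
      # this corresponds to a "nodes <nodename>" line under that storage in /etc/pve/storage.cfg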
  6. Migration of VM between nodes failed - could not activate storage 'local-zfs', zfs error: cannot imp

    Hi, thanks for your reply. How do I do that? Here is the output on the first node: root@prox3:~# zpool list NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT rpool 928G 2.21G 926G - 8% 0% 1.00x ONLINE - root@prox3:~# more /etc/pve/storage.cfg...
  7. Migration of VM between nodes failed - could not activate storage 'local-zfs', zfs error: cannot imp

    Also the cluster status: root@prox5:~# pvecm status Quorum information ------------------ Date: Mon Jan 22 10:45:14 2018 Quorum provider: corosync_votequorum Nodes: 2 Node ID: 0x00000002 Ring ID: 1/64 Quorate: Yes Votequorum information...
  8. Migration of VM between nodes failed - could not activate storage 'local-zfs', zfs error: cannot imp

    root@prox5:~# more /etc/pve/storage.cfg dir: local path /var/lib/vz content iso,backup,vztmpl zfspool: local-zfs pool rpool/data content rootdir,images sparse 1 nfs: ISO export /volume1/ISOuri path /mnt/pve/ISO server 192.168.10.29...
  9. Migration of VM between nodes failed - could not activate storage 'local-zfs', zfs error: cannot imp

    Hi, I have a cluster with 2 nodes, version 5.1-35. I wanted to reboot the 2 nodes, so first I migrated all VMs from the first node to the second one and everything went fine. Then I wanted to move all VMs back to the first node, and here the problems started: 2018-01-19 11:00:13 starting...
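
    For context, a single-VM migration between nodes is typically started like this (VMID and target node are placeholders); the failures in this thread occur during this kind of migration:

      # migrate a running VM to another cluster node
      qm migrate <vmid> <target-node> --online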
  10. Isolated nodes

    OK, I'll do that. One more thing, not sure if it is related. I perform a backup of all VMs on an NFS storage (FreeNAS). Some VMs seem to fail, as I show in this thread: https://forum.proxmox.com/threads/backup-of-vm-102-failed-vma_queue_write-write-error-broken-pipe.28480/#post-145182 Every...
  11. Isolated nodes

    Hi, I'm running a v4 cluster with 7 nodes. Once every 3-4 days, each node seems isolated from the rest. However, communication seems fine (I can log in from one node to another; ping, ssh, and everything else seems OK). However, I cannot manage one node from the interface of another. To solve this, I shut...
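
    When nodes stop being manageable from each other's web interface even though the network looks fine, the usual first checks are cluster membership and the cluster services; a sketch of those checks (not taken from this thread):

      # check quorum/membership and the services behind the GUI and cluster filesystem
      pvecm status
      systemctl status pve-cluster corosync pveproxy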
  12. Backup of VM 102 failed - vma_queue_write: write error - Broken pipe

    Hi everybody, having the exact same problem. As you can see, only VM 112 fails. The next day another machine fails, or maybe none will fail. 110 cassiopea OK 00:06:15 4.22GB /mnt/pve/prox4bkp_vault/dump/vzdump-qemu-110-2016_08_23-23_00_01.vma.lzo 112 volans FAILED 00:29:26 vma_queue_write: write error -...
  13. VM migration between 2 NFS harddisks

    My purpose is to migrate the yggdrasil VM from one cluster to another. I'm doing this by taking a backup of yggdrasil from cluster 1 and then restoring it in cluster 2. On the new cluster I build an empty machine (skadi) and then restore the original one (yggdrasil) over it. However skadi was...
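
    The backup half of that workflow is typically a vzdump run on the source cluster, paired with a qmrestore on the destination (as in the sketch under result 1); a minimal sketch with placeholder values:

      # on cluster 1: back up the VM to a storage the destination cluster can also reach (e.g. a shared NFS export)
      vzdump <vmid> --storage <backup-storage> --mode snapshot --compress lzo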
  14. VM migration between 2 NFS harddisks

    # qm config 101 balloon: 1024 bootdisk: ide0 cores: 1 ide0: yggdrasil_nfs:101/vm-101-disk-3.raw,size=80G memory: 2048 name: yggdrasil net0: e1000=D6:F7:52:DA:4C:D3,bridge=vmbr0 onboot: 1 ostype: l26 sockets: 1 ------------------ # qm showcmd 101 /usr/bin/systemd-run --scope --slice qemu --unit...
  15. VM migration between 2 NFS harddisks

    nfs: ISOuri path /mnt/pve/ISOuri export /volume1/ISOuri server 192.168.10.28 options vers=3 maxfiles 1 content iso dir: local path /var/lib/vz content iso,rootdir,images,vztmpl maxfiles 0 nfs: proxback path...
  16. VM migration between 2 NFS harddisks

    Just to make sure that I make myself clear: as soon as I change the name of a share (even if I modify the hard disk of the corresponding VM accordingly), that VM stops working.
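
    That behaviour is consistent with VM configs referencing disks by storage ID; after renaming a storage, the disk line has to be re-pointed as well. A hedged sketch, modelled on the qm config shown under result 14 (names remain placeholders):

      # the ide0 line embeds the storage ID, so it must reference the new name after a rename
      qm set <vmid> --ide0 <new-storage-id>:<vmid>/vm-<vmid>-disk-3.raw,size=80G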
  17. VM migration between 2 NFS harddisks

    Actually this is a second test where I've done the same thing for a VM called yggdrasil (the previous one was with atlas). # df -h Filesystem Size Used Avail Use% Mounted on udev 10M 0 10M 0% /dev tmpfs 394M 41M...