Hi all:
One of my VMs in my Proxmox 5.4 cluster stopped working after a stop / start.
Yesterday the VM was working just fine.
I have tried to restore the VM to another storage (from a weekly backup).
Backups were taken on 27.07 and 3.08, and the machine was still working on 4.08.2019.
After the restore...
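For reference, a restore to another storage can be done from the CLI with qmrestore; a minimal sketch, where the archive name, VM ID and target storage are placeholders:
# archive path, VM ID and storage name are hypothetical
qmrestore /mnt/pve/backup/dump/vzdump-qemu-100-2019_08_03-01_00_00.vma.lzo 100 --storage local-zfs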
Linux prox1 4.13.13-2-pve #1 SMP PVE 4.13.13-32 (Thu, 21 Dec 2017 09:02:14 +0100) x86_64
root@prox1:~# more /etc/apt/sources.list
deb http://ftp.ro.debian.org/debian stretch main contrib
# security updates
deb http://security.debian.org stretch/updates main contrib...
Hi all
I have a cluster running Virtual Environment 5.1-41.
After updating the Kernel on some Linux VMs in my cluster, I get a really long boot time.
The processor stays at 100% for about 20 minutes while a prompt blinks on the black screen. Adding more cores to the VM does not help, as...
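To see where the guest spends those 20 minutes, one option is to inspect the boot from inside the VM once it finally comes up; a sketch using standard systemd tools (not Proxmox-specific):
# run inside the guest after boot completes
systemd-analyze blame
journalctl -b -p err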
Hello
I have a cluster with six nodes.
VMs have storage on Synology NFS shares.
The problem:
When trying to clone one of the VMs on node5, I have discovered that I'm not able to select the target storage (the combo box is empty).
Interestingly enough, I have discovered that this is true for 4...
Got it. I've restricted the storage to the node that needs it, and now the migration is working. I just can't figure out how this happened, because I did not enable it on purpose for both nodes.
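For anyone finding this later: the node restriction can also be set from the CLI; a sketch, with the storage name as a placeholder:
# limit storage "nfs-vms" (hypothetical name) to node5 only
pvesm set nfs-vms --nodes node5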
Hi
Thanks for your reply. How do I do that?
Here is the output on the first node:
root@prox3:~# zpool list
NAME    SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
rpool   928G  2.21G   926G         -     8%     0%  1.00x  ONLINE  -
root@prox3:~# more /etc/pve/storage.cfg...
Also the cluster status:
root@prox5:~# pvecm status
Quorum information
------------------
Date:             Mon Jan 22 10:45:14 2018
Quorum provider:  corosync_votequorum
Nodes:            2
Node ID:          0x00000002
Ring ID:          1/64
Quorate:          Yes
Votequorum information...
root@prox5:~# more /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content iso,backup,vztmpl

zfspool: local-zfs
        pool rpool/data
        content rootdir,images
        sparse 1

nfs: ISO
        export /volume1/ISOuri
        path /mnt/pve/ISO
        server 192.168.10.29...
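For completeness, an NFS storage like the ISO entry above can also be defined from the CLI; a sketch reproducing that entry (the content type is an assumption, since the output is cut off):
pvesm add nfs ISO --server 192.168.10.29 --export /volume1/ISOuri --path /mnt/pve/ISO --content iso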
Hi
I have a cluster with 2 nodes, version 5.1-35.
I wanted to reboot the 2 nodes, so first I migrated all VMs from the first node to the second one, and everything went fine. Then I wanted to move all the VMs back to the first node, and here the problems started:
2018-01-19 11:00:13 starting...
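The migrations themselves were started the usual way; a sketch of the equivalent CLI call, with the VM ID and target node name as placeholders:
# online-migrate VM 101 (hypothetical ID) back to the first node (hypothetical name)
qm migrate 101 node1 --online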
OK, I'll do that. One more thing, not sure if it is related.
I perform a backup of all VMs to an NFS storage (FreeNAS). Some VMs seem to fail, as I show in this thread:
https://forum.proxmox.com/threads/backup-of-vm-102-failed-vma_queue_write-write-error-broken-pipe.28480/#post-145182
Every...
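These backups are standard vzdump runs; a minimal sketch of the equivalent CLI call, with the storage name as a placeholder:
# snapshot-mode backup of VM 102 to the FreeNAS NFS storage (name hypothetical)
vzdump 102 --storage freenas-bkp --mode snapshot --compress lzo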
Hi
I'm running a v4 cluster with 7 nodes.
Once every 3-4 days, each node seems isolated from the rest. However, communication seems fine (I can log in from one node to another, ping, SSH; everything else seems OK). Still, I cannot manage one node from the interface of another.
To solve this, I shut...
Hi everybody
Having the exact same problem. As you can see, only VM 112 fails. The next day another machine fails, or maybe none will.
110  cassiopea  OK      00:06:15  4.22GB  /mnt/pve/prox4bkp_vault/dump/vzdump-qemu-110-2016_08_23-23_00_01.vma.lzo
112  volans     FAILED  00:29:26  vma_queue_write: write error -...
My purpose is to migrate the yggdrasil VM from one cluster to another. I'm doing this by taking a backup of yggdrasil on cluster 1 and then restoring it on cluster 2.
On the new cluster I built an empty machine (skadi) and then restored the original one (yggdrasil) over it.
However skadi was...
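The backup/restore sequence looks roughly like this (VM IDs, storage names and the archive name are placeholders; --force lets qmrestore overwrite the empty skadi VM):
# on cluster 1: back up yggdrasil
vzdump 105 --storage bkp-nfs --compress lzo
# copy the archive to cluster 2, then restore it over the empty VM's ID
qmrestore vzdump-qemu-105-2016_08_23-23_00_01.vma.lzo 200 --storage local --force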
Just to make sure that I make myself clear: as soon as I change the name of a share (even if I modify the HDD of the corresponding VM accordingly), that VM stops working.
Actually, this is a second test where I've done the same thing for a VM called yggdrasil (the previous test was with atlas).
# df -h
Filesystem  Size  Used  Avail  Use%  Mounted on
udev         10M     0    10M    0%  /dev
tmpfs       394M   41M...
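For context, "modifying the HDD accordingly" means editing the disk line in /etc/pve/qemu-server/<vmid>.conf so it references the new share name; a sketch of such a line (volume name hypothetical):
# disk line after renaming the share to "newshare" (hypothetical)
virtio0: newshare:112/vm-112-disk-1.qcow2,size=32G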