After this night, all VMs on one of our Proxmox servers are down, and it seems that the lvm-thin pool has problems.
lvs prints a lot of warnings:
In the syslog, one hard disk out of these mirror RAIDs seems to be defective:
How can I get a backup out of these machines?
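A first-aid sketch for pulling raw disk images off a degraded lvm-thin pool, assuming the pool can still be activated; the VG name `pve`, pool name `data`, VM ID `100`, disk device and rescue mount point are all placeholders:

```shell
smartctl -H /dev/sda        # check the suspect disk's SMART health (placeholder device)
lvchange -ay pve/data       # try to activate the thin pool
lvs -a pve                  # inspect the thin volumes and their data% usage
# Copy a VM disk out to safe storage before attempting any repair:
dd if=/dev/pve/vm-100-disk-1 of=/mnt/rescue/vm-100-disk-1.raw \
   bs=1M conv=noerror,sync status=progress
```

`conv=noerror,sync` keeps `dd` going past read errors, padding bad blocks with zeros, which is usually preferable to aborting mid-rescue.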
I want to migrate a machine from one node to another, but it fails with:
Host key verification failed.
ERROR: migration aborted (duration 00:00:00): Can't connect to destination address using public key
TASK ERROR: migration aborted
What is going wrong here?
Both nodes can connect to each other...
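A possible first check: "Host key verification failed" usually means root's known_hosts entry for the destination is stale, and Proxmox ships a helper to regenerate and redistribute the cluster's SSH material. The node name `node2` is a placeholder:

```shell
ssh-keygen -R node2        # drop the stale host key for the target node
pvecm updatecerts          # re-sync cluster SSH keys and certificates
ssh root@node2 /bin/true   # verify passwordless root SSH works again
```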
I wanted to ask if there is any work going on for copy/paste key support in the VNC viewer? At least software buttons in the top header would be very helpful for long passwords on machines where networking doesn't work, or for BIOS stuff!
Found this old thread about the same question...
Hey,
I am searching for an old backup of a VM, but in the backup overview only the VM IDs are listed.
I was hoping that I could find the right one by extracting the configs from the backups - but when I select a backup and click on "Show Configuration" I get an
Method 'GET...
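As a workaround, the configs can often be read straight from the dump files on the command line. The file names below are invented placeholders, and this assumes an uncompressed `.vma` (qemu) or `.tar` (container) archive:

```shell
# Print the embedded VM config of a qemu backup (uncompressed .vma):
vma config /mnt/backup/dump/vzdump-qemu-100-2016_05_01-00_00_01.vma
# For a container backup, the config is a plain file inside the tar:
tar -xOf /mnt/backup/dump/vzdump-lxc-101-2016_05_01-00_00_01.tar ./etc/vzdump/pct.conf
```

Grepping the output for the `name:` line of each archive in turn should identify the right VM.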
We have three nodes in a cluster.
All are using NFS shared storage, but for I/O-intensive VMs we use local SSD mirror RAIDs as faster storage.
But what can I do if I want to migrate a VM from one node's local SSD lvm-thin storage to another node without having to move the storage to the NFS first?
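One manual route, as a sketch only (not a supported one-click feature in this PVE version): with the VM shut down, pre-create a matching thin volume on the target node and stream the disk over SSH. VM ID `100`, the 32G size, and the node names are placeholders:

```shell
# On the source node, with VM 100 stopped:
ssh node2 "lvcreate -V 32G -T pve/data -n vm-100-disk-1"   # matching thin LV on the target
dd if=/dev/pve/vm-100-disk-1 bs=1M status=progress | ssh node2 "dd of=/dev/pve/vm-100-disk-1 bs=1M"
# /etc/pve is cluster-wide, so moving the config reassigns the VM to node2:
mv /etc/pve/nodes/node1/qemu-server/100.conf /etc/pve/nodes/node2/qemu-server/100.conf
```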
is...
Is it a problem if I want to create a cluster on a Proxmox 4.2-5/7cf09667 node and add a fully updated Proxmox 4.2-11/2c626aa1 as the second node? Or do they need to be on the same patch level?
To ask it differently:
How do I patch a cluster properly? Just one node after the other, and they re-add themselves...
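The usual rolling pattern, sketched with placeholder VM ID and node names; each step assumes the previous one finished cleanly:

```shell
# On each node in turn:
qm migrate 100 node2 --online         # 1. move guests away (or shut them down)
apt-get update && apt-get dist-upgrade   # 2. update the now-empty node
# 3. reboot if a new kernel arrived, then repeat on the next node
```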
If I create a new thin pool named "1tb_thin", I can't create a new data pool within it through the web interface, because it claims "illegal characters".
I think this is a bug on your side - but I'm not sure... - for now I'll recreate this LV with another name without the underscore... ;-)
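If recreating the pool is undesirable, renaming the existing LV might be enough; the VG name `pve` and the new name are assumptions:

```shell
lvrename pve 1tb_thin tb1thin   # rename the thin pool to a name without the underscore
lvs pve                         # confirm the new name
```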
We want the ability to move running VMs between hosts online - but we don't really need HA - so is there something to consider regarding the setup explained in https://pve.proxmox.com/wiki/High_Availability_Cluster_4.x to achieve that, or am I going the wrong way?
If this setup...
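For what it's worth, live migration only needs a working cluster plus storage both nodes can reach - HA resources on top are optional. A minimal check, with VM ID and node name as placeholders:

```shell
pvecm status                    # cluster must be quorate
qm migrate 100 node2 --online   # live-migrate VM 100 to node2
```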
I installed a second server next to our old PVE 4.1 machine for extended testing of cluster setups.
To achieve that, I installed PVE on an SSD, and after installation and setup I created a RAID6 via mdadm out of five 3 TB HDDs.
I mounted it into /var/lib/vz - to be able to move the .qcow2 storage...
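The described setup, as a sketch; the device names are placeholders, and note that mdadm software RAID is not an officially supported Proxmox storage layout:

```shell
mdadm --create /dev/md0 --level=6 --raid-devices=5 /dev/sd[b-f]
mkfs.ext4 /dev/md0
mount /dev/md0 /var/lib/vz   # back the default "local" storage with the array
cat /proc/mdstat             # watch the initial resync
```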
My Proxmox VE 4.1 server cannot start newly created LXC containers.
What is going wrong here?
lxc-start: lxc_start.c: main: 344 The container failed to start.
lxc-start: lxc_start.c: main: 346 To get more details, run the container in foreground mode.
lxc-start: lxc_start.c: main: 348...
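As the message suggests, running the container in the foreground with debug logging usually reveals the actual error; container ID `101` and the log path are placeholders:

```shell
lxc-start -n 101 -F -l DEBUG -o /tmp/lxc-101.log   # foreground start with debug log
less /tmp/lxc-101.log                              # look for the first ERROR line
```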