Hey,
/proc/mdstat doesn't work here because it's a hardware RAID.
But there are two hardware RAIDs, and after poking around a bit I realized that the "not so important" 600 GB mirror RAID was the one with the problems. So I removed that one and *tataaaa*, the system is up and running again. Memo to myself: "get rid...
After last night, all VMs on one of our Proxmox servers are down, and it seems that the LVM-thin storage has problems.
lvs prints a lot of warnings:
In the syslog, one hard disk out of these mirror RAIDs seems to be defective:
How can I get a backup out of these machines?
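In case the disks hold up long enough, a minimal sketch of what I would try: pull one-off dumps with vzdump onto storage that is not on the failing array. The VM IDs and the target path here are placeholders, not my real setup:

```shell
# Sketch: dump each VM to an NFS-backed directory before touching the broken array.
# 100/101 and the dumpdir path are placeholders - adjust to your environment.
vzdump 100 --mode stop --compress lzo --dumpdir /mnt/pve/nfs-backup/dump
vzdump 101 --mode stop --compress lzo --dumpdir /mnt/pve/nfs-backup/dump
```

`--mode stop` avoids relying on snapshots of an already-unhealthy thin pool.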
I want to migrate a machine from one node to another, but it fails with:
Host key verification failed.
ERROR: migration aborted (duration 00:00:00): Can't connect to destination address using public key
TASK ERROR: migration aborted
What is going wrong here?
Both nodes can connect to each other...
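For the record, a hedged sketch of what usually clears up stale SSH host keys between cluster nodes (the node name pve01 is a placeholder):

```shell
# Regenerate and redistribute the cluster's SSH keys/certificates
pvecm updatecerts

# If a stale host key remains, remove it and reconnect once to re-accept it
ssh-keygen -R pve01
ssh root@pve01 true
```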
This is funny: the button wasn't there anymore when you asked, but today it is back again...?!?
So I clicked on the machine I was searching for the right backup for (by the way, it would be nice to have a column in the backup overview with the "name" of the machine the backup was created from).
Then I...
I wanted to ask if there is any work going on to support copy and paste of keys in the VNC viewer? At least software buttons in the top header would be very helpful for long passwords on machines where networking doesn't work, or for BIOS stuff!
Found this old question regarding the same topic...
Hey,
I am searching for an old backup of a VM, but in the backup overview only the VM IDs are listed.
I was hoping that I could find the right one by extracting the configs from the backups, but when I select a backup and click on "Show Configuration" I get a
Method 'GET...
What I meant was something like this:
At the moment it looks like this:
pve00 has a local 1 TB SSD RAID1 and some not-so-IO-intense stuff via NFS (small VMs, backups, etc.).
pve01 has a RAID6 made of 6x 5 TB HDDs and the same NFS access as above.
If I want to migrate a VM from the RAID6 to the RAID1...
Thanks, that makes things clearer. And what about the "local iSCSI" stuff? Is this something that could work to move things directly between local storages?
Thanks for clarifying this.
But why does the creation of a local LVM-thin on one node lead to the creation of these local LVM-thin storage entries on the other nodes? This is confusing...
I can only identify on which node it is really available through the summary page:
Here the local LVM-thin...
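From what I understand, storage definitions in /etc/pve/storage.cfg are cluster-wide, which is why the entry shows up on every node; it can be restricted to the node that actually has the device via the nodes option. A sketch (the storage and VG names are placeholders):

```
lvmthin: local-lvm-ssd
        thinpool data
        vgname ssd
        content rootdir,images
        nodes pve00
```

With nodes set, the other nodes should no longer offer this storage in their dropdowns.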
We have three nodes in a cluster.
All are using NFS shared storage, but for IO-intense VMs we use local SSD mirror RAIDs as faster storage.
But what do I do if I want to migrate a VM from one node's local SSD LVM-thin storage to another node without having to move the disks to NFS first?
is...
Is it a problem if I create a cluster on a Proxmox 4.2-5/7cf09667 node and add a fully updated Proxmox 4.2-11/2c626aa1 node as the second node? Or do they need to have the same patch level?
To ask it differently:
How do I patch a cluster properly? Just one node after the other, and they re-add themselves...
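For reference, a rolling-update sketch, one node at a time (the VM ID and target node are placeholders, and this assumes the VMs can be migrated away first, e.g. because they live on shared storage):

```shell
# 1) Move running VMs off the node that will be updated (repeat per VM)
qm migrate 100 pve01 --online   # 100 and pve01 are placeholders

# 2) Update and reboot the now-empty node
apt-get update && apt-get dist-upgrade -y
reboot

# 3) After the node is back, migrate the VMs back and continue with the next node
```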