No, I wanted to migrate from one node to another (pve4 to pve6). I first had to move the VM's disk from SSD_old to local on pve4, and only then was I able to migrate it to pve6.
Proxmox only lets me migrate a VM between nodes from the local disk, not from the other storage.
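For reference, the two-step flow looks roughly like this (a minimal sketch; the VMID 100 and the disk name virtio0 are placeholders, the storage names and nodes are the ones from my setup):

```python
import subprocess

VMID = "100"        # placeholder VMID, use your own
DISK = "virtio0"    # placeholder name of the disk sitting on SSD_old

def run(cmd):
    # Print and run a PVE CLI command, stop on the first error
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Step 1: move the VM disk from SSD_old to the 'local' storage on pve4
run(["qm", "move_disk", VMID, DISK, "local"])

# Step 2: online-migrate the VM from pve4 to pve6 (run this on pve4)
run(["qm", "migrate", VMID, "pve6", "--online"])
```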
I have a situation: one node with two disk arrays, one is classic LVM and the second is mounted as a directory.
How can I migrate a VM from the second array smartly? Right now I have to move the VM's disk to LVM first, and only then am I able to do an online or...
No, because I don't have enough resources to risk destabilizing another node. This is 24/7 traffic and I need to be 100% sure I'll have a stable node after the upgrade.
Pve4 has been extremely unstable since the fresh install and I'm not able to fix it.
This is part of the log from yesterday's restart.
I have a 5-node cluster.
Pve1 was upgraded from 6.0.1 to 6.4-15, pve4 is a fresh install of 6.4-4, and the remaining nodes are still on 6.0.1.
Pve1 is working properly.
But pve4 has a strange problem... within 7 days it goes into "grey mode". I'm able to connect to pve4 via the web shell, and I'm able to connect via SSH. I can see...
I'll have to do it (upgrade all nodes) because the node went grey again after 4 days.
Restarting the services (pve-cluster, pvedaemon, pveproxy...; see the sketch below) helped and the node is online/green again, but again it can't see the NFS storage.
And I have a strange indication in the logs.
Maybe there's some relationship with the unreachable NFS storage...
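What I do when the node goes grey is roughly this (a minimal sketch; I'm assuming the relevant services are pve-cluster, pvedaemon, pveproxy and pvestatd, and that `pvesm status` is enough to check whether the NFS storage comes back):

```python
import subprocess

# Services restarted when the node goes grey; pvestatd is my assumption here,
# since it is the daemon that feeds the status shown in the GUI
SERVICES = ["pve-cluster", "pvedaemon", "pveproxy", "pvestatd"]

for svc in SERVICES:
    subprocess.run(["systemctl", "restart", svc], check=True)

# Then check whether the NFS storage is reachable again
subprocess.run(["pvesm", "status"], check=True)
```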
I have a cluster of 5 nodes with pve-manager 6.0.9. Pve4 was freshly installed with version 6.4.4 and joined correctly.
Now, after 11 days of uptime, pve4 can't see the NFS backup storage and I can't migrate any machine to another node, because for every machine I get a "value does not match the regex pattern" error.
I know there...
I have one VM... it's a Windows DC server and the server is working correctly.
I'd like to swap the array in this node (SSD in place of SAS), and the old array will be destroyed. One disk has already failed in the current SAS array (RAID 6 of 8 disks).
But... I can do nothing with this machine. I can't back it up, I can't migrate it, I...
Of course, booting from live media (some Linux), using dd with tar, and restoring from another disk/array is an option too.
But it's 2022... is there really no better way these days?
I was using dd for backups 20 years ago, and I hoped we'd have some newer, smarter solution by now.
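To be concrete, the old-school approach I mean is something like this, done from the live medium (a minimal sketch of one dd-based variant, raw image plus compression; the disk device and the target path are placeholders, not my real layout):

```python
import subprocess

SOURCE_DISK = "/dev/sda"                      # placeholder: the node's OS disk as seen from the live system
TARGET_IMAGE = "/mnt/backup/pve4-os.img.gz"   # placeholder: image file on another disk/array

# Raw image of the whole OS disk, compressed on the fly
dd = subprocess.Popen(
    ["dd", f"if={SOURCE_DISK}", "bs=4M", "status=progress"],
    stdout=subprocess.PIPE,
)
with open(TARGET_IMAGE, "wb") as out:
    subprocess.run(["gzip", "-c"], stdin=dd.stdout, stdout=out, check=True)
dd.stdout.close()
dd.wait()

# Restore is the same pipe in reverse, written onto the new array's disk
```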
I'll have to upgrade the server's array from SAS to SSD, so I'd like to back up and restore the node OS smartly.
I have to create a new array in the RAID controller because there is no other way.
How can I back up and restore the node OS smartly?
I know I can do a fresh install and then replace...
That could be a way, but it doesn't answer what I should do with the SSD in pve4.
Use LVM(-thin) for the datastore, or make a standalone directory in pve4, or something else?
And another question: is it a good idea to run PBS as a VM on one of the PVE nodes (I have no other free physical machine for it)?
I have a cluster of 5 nodes and an unused SSD array in pve4.
I'd like to make two LVM volumes, one for VMs and one for backups.
My idea is to have the backup storage as a directory and the VM storage as... an LVM disk.
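What that plan would look like on the CLI is roughly this (a minimal sketch; the storage IDs, the volume group ssd_vg, the thin pool data and the mount point /mnt/ssd-backup are all placeholder names I made up):

```python
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# LVM-thin storage for VM disks (VG 'ssd_vg' and pool 'data' are placeholders)
run(["pvesm", "add", "lvmthin", "ssd-vm",
     "--vgname", "ssd_vg", "--thinpool", "data",
     "--content", "images,rootdir", "--nodes", "pve4"])

# Directory storage for backups (the mount point is a placeholder too)
run(["pvesm", "add", "dir", "ssd-backup",
     "--path", "/mnt/ssd-backup",
     "--content", "backup", "--nodes", "pve4"])
```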
I'm a little confused because so far I've only created the backup storage, and these are my questions.
Storage is...
I don't understand why this case is treated differently from a power-off, freeze, crash, etc., because I lose my 99.9999% HA in this case.
My logic is simple... the quorum decides, and if a node (rebooting, powered off, lost network connection... etc.) loses its connection to the cluster/quorum, it must stop the VM and the rest...
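Put as pseudologic, the behaviour I expect is roughly this (purely illustrative toy code expressing my expectation, not how pve-ha-manager is actually implemented):

```python
from dataclasses import dataclass, field

@dataclass
class VM:
    name: str
    running: bool = True

@dataclass
class Node:
    name: str
    has_quorum: bool
    ha_vms: list = field(default_factory=list)

def on_connection_lost(node: Node):
    # The isolated node cannot know what the majority decides, so the only
    # safe move is to stop (fence) its HA-managed guests...
    if not node.has_quorum:
        for vm in node.ha_vms:
            vm.running = False
    # ...and let the quorate side of the cluster restart them elsewhere,
    # without ever risking two running copies of the same VM.

# Example: pve4 reboots / loses network and drops out of the quorum
pve4 = Node("pve4", has_quorum=False, ha_vms=[VM("win-dc")])
on_connection_lost(pve4)
print([(vm.name, vm.running) for vm in pve4.ha_vms])
```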
I'm a little confused/disappointed with the HA manager in PVE. I tried DRBD (there was a problem with the IP via NFS if the primary node failed), then iSCSI (LVM) on a dual (active-active) NAS (this scenario is OK), but what is the role of the HA manager in PVE? OK, it is able to migrate a machine from one...
First note... I tried to resolve this issue with Proxmox support, but they closed my ticket because I don't have subscriptions for all the nodes in the cluster... shame on you guys, because I only want to run Ceph on one node ;oP.
OK, I have to ask the community.
I have a test Ceph setup on 3 of my 5 nodes, on version 5, and now I'm...