This is a trivial task, but I'm not 100% sure, so I'd like to ask you.
If I have classic servers with several NICs, I always try to make an ideal setup: a separate network for nodes, corosync, Ceph... etc., and every network has its own physical interface.
But this is a 2-node cluster based on 2...
There are several threads about the awkward process of shrinking a VM (especially a Windows VM), often with unclear results.
This is my successful way as of today.
I have a 1TB Win2019 Server VM, but with only 90GB of data in it. I wanted to resize the VM disk from 1TB to 500GB because it's a waste of time to back up...
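The actual steps are cut off above, so here is a hedged sketch of one common approach to shrinking a Windows VM disk on PVE. It assumes the disk is a qcow2 file on a directory storage and the VM ID is 100 (both are examples, not from the post); on LVM/ZFS backends the shrink step differs.

```shell
# ASSUMPTIONS: VM ID 100, qcow2 disk on a directory storage under /var/lib/vz.
# Step 0 (inside Windows): shrink the NTFS partition in Disk Management so all
# partitions end well below the new 500G size, THEN:

qm shutdown 100                            # the disk must not be in use

# Keep a copy first -- shrinking truncates data past the new size.
cp /var/lib/vz/images/100/vm-100-disk-0.qcow2 /backup/vm-100-disk-0.qcow2

# Shrink the image itself (qemu-img refuses to go smaller without --shrink).
qemu-img resize --shrink /var/lib/vz/images/100/vm-100-disk-0.qcow2 500G

qm rescan --vmid 100                       # refresh the size shown in the VM config
qm start 100
```

The in-guest partition shrink has to come first; truncating the image while Windows still has a partition extending past 500G destroys data.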
I have a Windows VM that was virtualized from a physical machine via virtio.
I had to reduce the LVM size to 1TB because I used a 2TB disk for the migration (the physical server has only 1TB).
There is one problem... I had about 600GB of camera recordings on the server, so I deleted them, and the server now uses only 200GB of 1000GB...
There are several topics in this forum about this issue, but I'm not able to migrate a VM from a ZFS node to an LVM node.
I've tried offline, online, and moving the VM's disk from ZFS to local (dir), all with no success.
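For reference, a hedged sketch of the move-then-migrate route usually suggested for mixed ZFS/LVM clusters. The VM ID (101), disk slot (scsi0), and storage names are assumptions for illustration, not details from the post.

```shell
# ASSUMPTIONS: VM 101 with its disk on local-zfs, a "local" directory storage
# with the "Disk image" content type enabled, target node pve2 with local-lvm.
qm shutdown 101

# Move the disk off ZFS onto the dir storage; --format only applies to
# file-based targets, and --delete removes the old zvol on success.
qm move-disk 101 scsi0 local --format qcow2 --delete 1

# With the disk on a storage type the target can receive into, an offline
# migration with a target storage override can follow:
qm migrate 101 pve2 --targetstorage local-lvm
```

If this path also fails, the migration task log (Datacenter → Tasks) usually names the exact storage that rejected the transfer.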
This is my storage.
Local-lvm is on pve2; pve1 has a ZFS filesystem.
I've read in this forum there can be...
I have a pve1 node with a 4TB disk and a fresh installation with local + local-zfs. I've made another node with a 1TB disk, also with local and local-zfs storage (the classic 100GB for local, the rest for local-zfs).
When I made a cluster and joined node2 to node1, node2 had a problem with its own local-zfs... I can see...
The web GUI of one node stopped working in the tasks area (bottom edge). I can see the last task from 15.1.23. The cluster log next to it shows current info. The other nodes show tasks correctly.
I suppose I have to restart some process on this node... but which one?
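A hedged suggestion for the "which process" question: the GUI and its task/status data come from the pveproxy, pvedaemon, and pvestatd services, and restarting them is generally safe on a live node (it does not touch running VMs).

```shell
# Restart the services that feed the web GUI's task list and node status.
systemctl restart pvestatd pvedaemon pveproxy

# Verify they came back cleanly and look for the cause in the logs:
systemctl status pvestatd pvedaemon pveproxy --no-pager
journalctl -u pveproxy --since "-10 min"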
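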
Proxmox allows migrating a VM between nodes only from local disk, not from another storage.
I have a situation... one node with 2 disk arrays: one is classic LVM and the 2nd is mounted to a directory.
How can I migrate a VM from the 2nd array smartly? Because I have to move the VM's disk to LVM first, and only then am I able to do an online or...
I have a 5-node cluster.
Pve1 was upgraded from 6.0.1 to 6.4-15, pve4 has a fresh install of 6.4-4, and the remaining nodes run 6.0.1.
Pve1 is working properly.
But pve4 has a strange problem... within 7 days it goes into "grey mode". I'm able to connect to pve4 via the web shell, and I'm able to connect via SSH. I can see...
I have a cluster of 5 nodes with pve-manager 6.0.9. Pve4 was freshly installed with version 6.4.4 and joined correctly.
Now, after 11 days of uptime, pve4 can't see the NFS backup storage, and I can't migrate any machine to another node because all the machines' values don't match the regex pattern.
I know there...
I have one VM... it's a Windows DC server, and the server is working correctly.
I'd like to change the array in this node (SSD instead of SAS), and the array will be destroyed. One disk has failed in the current SAS array (RAID6 of 8 disks).
But... I can do nothing with this machine. I can't back up, I can't migrate, I...
I'll have to upgrade the server's array from SAS to SSD, so I'd like to back up and restore the node OS smartly.
I have to create a new array in the RAID controller because there is no other way.
How can I back up and restore the node OS smartly?
I know I can do a fresh install and then replace...
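One hedged sketch of the fresh-install-and-replace route: on a standalone node, almost all of the node's identity lives in a handful of config files plus the pmxcfs database that backs /etc/pve. The backup destination path below is an example, not from the post, and this assumes the node is not part of a cluster.

```shell
# ASSUMPTION: standalone node, default PVE paths, /mnt/usb is some external media.
# /var/lib/pve-cluster/config.db is the pmxcfs database behind /etc/pve.
tar czf /mnt/usb/pve-node-backup.tar.gz \
    /etc/network/interfaces \
    /etc/hosts /etc/hostname /etc/resolv.conf \
    /var/lib/pve-cluster/config.db

# After the fresh install on the new SSD array (same hostname/IP):
systemctl stop pve-cluster                 # release config.db
tar xzf /mnt/usb/pve-node-backup.tar.gz -C /
systemctl start pve-cluster                # /etc/pve is repopulated from the DB
```

The VMs themselves still need regular vzdump backups restored separately; this only carries over the node configuration.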
I have a cluster of 5 nodes, and I have an unused SSD array in pve4.
I'd like to make 2 LVMs: one for VMs, one for backups.
My idea is to have the backup storage as a directory and the LVM as... an LVM disk.
I'm a little confused because I've made only the backup storage so far, and these are my questions.
Storage is...
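A hedged sketch of the described split (one LVM storage for VM disks, one LV formatted and mounted as a directory storage for backups). The device name /dev/sdb, the VG name "ssd", and the storage IDs are examples, not details from the post.

```shell
# ASSUMPTIONS: the unused SSD array appears as /dev/sdb; names are examples.
pvcreate /dev/sdb
vgcreate ssd /dev/sdb

# 1) VM disks: hand the VG to PVE directly; it creates one LV per virtual disk.
pvesm add lvm ssd-vm --vgname ssd --content images

# 2) Backups: carve out an LV, format it, mount it, and register it as a
#    directory storage with the "backup" content type.
lvcreate -L 500G -n backup ssd
mkfs.ext4 /dev/ssd/backup
mkdir -p /mnt/ssd-backup
echo '/dev/ssd/backup /mnt/ssd-backup ext4 defaults 0 2' >> /etc/fstab
mount /mnt/ssd-backup
pvesm add dir ssd-backup --path /mnt/ssd-backup --content backup
```

The directory storage needs a filesystem because vzdump writes plain files; the LVM storage does not, since each VM disk becomes its own logical volume.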
I'm a little confused/disappointed by the HA manager in PVE. I tried to use DRBD (there was a problem with the IP via NFS if the primary node failed), then iSCSI (LVM) on a dual (active-active) NAS (this scenario is OK), but what's the role of the HA manager in PVE? OK, it is able to migrate a machine from one...
1st note... I tried to resolve this issue with Proxmox support, but they closed my ticket because I don't have subscriptions for all the nodes in the cluster... shame on you guys, because I want to run Ceph on only one node ;oP.
OK, I have to ask the community.
I have a test Ceph setup on 3 of the 5 nodes on version 5, and now I'm...
Guys, am I an idiot? I have 7 years of experience with Proxmox (since v2), but I'm still failing at a simple case.
I've upgraded a 2-node cluster (v4).
I made a 3rd node with a fresh v6 install. Next, I upgraded the 2 v4 nodes to v5.
Node 2 was upgraded to v6 and joined to node 3.
Node 1 was upgraded to v6, but I...
I have a 2-node cluster.
You can see 2 LVMs on pve3; pve2 has only local (LVM-thick). Can I migrate a machine from pve3 to pve2?
I tried it, but:
OK, I tried renaming the LVM on pve2 to vmdata, and then...
Is there any chance to migrate a VM between nodes with LVM-thin vs. LVM?
and
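For the thin-vs-thick question, a hedged sketch: migration between different local storage types generally works when the target storage is named explicitly, since both LVM variants hold raw images. VM ID 102 and the storage names are assumptions for illustration.

```shell
# ASSUMPTIONS: VM 102 on pve3 (lvm-thin), target node pve2 with a thick-LVM
# storage called "local-lvm". The storage types may differ between nodes.
qm migrate 102 pve2 --targetstorage local-lvm       # offline migration

# or, if the VM must stay running:
qm migrate 102 pve2 --online --targetstorage local-lvm --with-local-disks
```

Without --targetstorage, PVE looks for a storage with the same ID on the target node, which is usually why such migrations fail in mismatched setups.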
I have one cluster with 2 nodes (PVE5). Then I added a 3rd node on PVE6 and made a new cluster. I upgraded node2 to PVE6, and the 1st cluster went down (because of corosync3). I tried to follow https://pve.proxmox.com/wiki/Cluster_Manager#pvecm_separate_node_without_reinstall on node 2 and add it to the new...
I have a 2-node cluster and I want to upgrade it from 4 to 6 (remotely).
I've migrated all VMs (via backup & restore and a 3rd NFS storage) and upgraded node1 from 4 to 5 and later to 6.
Now I'd like to move all VMs from node2 to node1, but several machines are in "testing" and I have to move them offline...
I have DRBD on 2 nodes in a 5-node cluster, and I'd like to share it via NFS.
I set it up a few months ago, and it was working correctly.
It was only a test array, so I forgot about it; today I wanted to run some tests, and I can see DRBD has only 90GB, not 1.7TB.
OK, I tried to dig back into my config, and my cluster...
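A hedged checklist for the size mismatch: if the backing volume really is 1.7TB, DRBD may simply never have been told to grow into it after the backing device was enlarged. The resource name "r0" and the device /dev/drbd0 are assumptions, not from the post.

```shell
# ASSUMPTIONS: DRBD resource "r0" on /dev/drbd0; run on the current primary.
lvs                               # confirm the backing LV really is 1.7TB
drbdadm status r0                 # both peers should be Connected/UpToDate

drbdadm resize r0                 # grow the DRBD device to the backing size

# then grow the filesystem that is exported over NFS (ext4 example):
resize2fs /dev/drbd0
```

If the backing LV itself still shows 90GB, the problem is a step earlier (the LV was never extended), and `lvextend` on both nodes has to come before the DRBD resize.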