This is a trivial task, but I'm not 100% sure, so I'd like to ask you.
If I have classic servers with several NICs, I always try to make ideal settings: a separate network for the nodes, Corosync, Ceph, etc., and every network gets its own physical interface.
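For illustration, a minimal sketch of that kind of separation in /etc/network/interfaces, assuming hypothetical NIC names and subnets (eno1/eno2/eno3 and the 10.0.x.0/24 ranges are examples, not my real config):

  # /etc/network/interfaces (sketch; NIC names and subnets are examples)
  auto eno1
  iface eno1 inet static
      address 10.0.10.11/24   # node/management network

  auto eno2
  iface eno2 inet static
      address 10.0.20.11/24   # dedicated Corosync ring

  auto eno3
  iface eno3 inet static
      address 10.0.30.11/24   # Ceph traffic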
But this is a 2-node cluster based on 2...
There are several threads about the awkward ways to shrink a VM (especially a Windows VM), and the results are not always clear.
This is the way that worked for me today.
I have a 1TB Win2019 server VM, but with only 90GB of data in it. I wanted to resize the VM disk from 1TB to 500GB because it is a waste of time to back up...
I had already changed the SCSI controller to the default (instead of VirtIO), so now I checked discard=on, but I thought the optimize step had to be done by the node, not the VM. Now it seems good.
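For reference, a minimal sketch of the relevant knobs, assuming VMID 100 and a disk on scsi0 (both are example values): discard=on lets the guest pass TRIM down to the storage, and ssd=1 makes Windows treat the disk as an SSD so its "Optimize Drives" actually retrims.

  # sketch: 100, scsi0 and the volume name are examples; adjust to your VM
  qm set 100 --scsi0 local-lvm:vm-100-disk-0,discard=on,ssd=1
  # with the QEMU guest agent installed in the guest, a trim can also be triggered from the node:
  qm agent 100 fstrim

Inside the Windows guest, the equivalent is the Optimize Drives retrim (defrag C: /L).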
Thanks for the explanation.
Solved
I have a Windows VM virtualized from a physical machine via VirtIO.
I had to reduce the LVM size to 1TB because I use a 2TB disk for the migration (the physical server has only 1TB).
There is one problem... I had about 600GB of camera recordings on the server, so I deleted them, and the server now holds only 200GB/1000GB...
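For the shrink itself, a rough sketch of the order of operations, assuming the VM disk lives on an LV with a default-style name (vm-100-disk-0 in VG pve; both names are examples), the partitions inside the guest were already shrunk well below 1TB, and a backup was taken first:

  # sketch: shrink the guest's partitions first, back up, then reduce the LV
  lvreduce -L 1T /dev/pve/vm-100-disk-0   # destructive if guest data extends past 1T
  qm rescan --vmid 100                    # let Proxmox pick up the new disk size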
The problem is that the storage needs the same label and the same type: I tried to use 'local' on both nodes, but one node has the ZFS type and the 2nd LVM, and the migration didn't work.
I'm only curious now, because backup/restore is more effective for me at this point (backup: 21GB zstd in about 5 minutes, restore the same) vs. migration...
There are several topics in this forum about this issue, but I haven't been able to migrate a VM from the ZFS node to the LVM node.
I've tried offline, online, and moving the VM's disk from ZFS to local (dir), with no success.
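For completeness, the combination usually suggested for cross-storage migration, as a sketch with example names (VMID 100, target node pve2, target storage local-lvm):

  # sketch: remap the disk onto the target node's storage during migration
  qm migrate 100 pve2 --online --with-local-disks --targetstorage local-lvm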
This is my storage.
local-lvm is on pve2; pve1 has a ZFS filesystem.
I've read in this forum there can be...
AarIon, I think you're probably right, but this is my theory... I made node 1 a few weeks ago, so I can't remember the configuration exactly... I thought it was the same, but node1 has a 4TB local-zfs and node2 only 1TB. The cluster wanted only one local-zfs, but node2 doesn't have enough capacity, and this was...
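If that's really the cause, a sketch of the usual fix is to pin each storage definition in /etc/pve/storage.cfg to the nodes that actually have it (storage IDs and pool/VG names below are the installer defaults; adjust to the real setup):

  # /etc/pve/storage.cfg (sketch)
  zfspool: local-zfs
          pool rpool/data
          content images,rootdir
          nodes pve1

  lvmthin: local-lvm
          thinpool data
          vgname pve
          content images,rootdir
          nodes pve2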
There is a strange issue. If I try to mount data via fstab, Proxmox won't boot up.
I've tried to set up in fstab: /dev/pve/data /var/lib/vz ext4 defaults 0 2, then without ext4, and finally only mount /dev/pve/data /var/lib/vz.
Proxmox crashes during boot because it isn't able to mount this data volume...
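Worth noting, as an assumption about a default install: /dev/pve/data is normally an LVM thin pool, not a plain filesystem, so mounting it directly fails. And any fstab entry that might fail should carry nofail so the boot doesn't hang on it. A sketch of the safer entry, assuming the volume really does hold an ext4 filesystem:

  # /etc/fstab (sketch): nofail lets the boot continue even if the mount fails
  /dev/pve/data  /var/lib/vz  ext4  defaults,nofail  0  2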
I have a pve1 node with a 4TB disk and a fresh installation: local + local-zfs. I've made another node with a 1TB disk with local and local-zfs storage (the classic 100GB for local, the rest for local-zfs).
When I made a cluster and joined node2 to node1, node2 has a problem with its own local-zfs... I can see...
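A quick sketch of how to check what node2 itself sees (rpool is the default pool name; adjust if different):

  # sketch: run on node2
  pvesm status          # how the node sees its configured storages
  zpool status rpool    # health of the local ZFS pool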
The problem is not the browser alone but this browser + this node, because the other nodes work fine in this browser. And the problem is not the node itself, because in another browser (Opera) I can see it correctly.
It must be some cookie issue, but I don't want to reset the whole browser/profile because there...
Only Ceph, but I'm not using Ceph in this cluster. Which service sends/shows the tasks to the web GUI?
Edit: This is a web browser problem; I tried another browser and all the data shows correctly there. I can't fix it; I tried to delete the cookies for the node's IP, but no change. FF 111.0.1.
The web GUI on one node stopped working in the tasks area (bottom edge). I can see the last task from 15.1.23. The cluster log next to it shows current info. The other nodes show tasks correctly.
I suppose I have to restart some process on this node... but which one?
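For reference, a sketch of the standard PVE daemons usually restarted for web GUI glitches; my assumption is that restarting them is safe on a running node, but treat it as a suggestion, not a guarantee:

  # sketch: the usual suspects behind the web GUI and its status/task data
  systemctl restart pveproxy pvedaemon pvestatd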
No, I wanted to migrate from one node to another (pve4 to pve6). I had to move the VM from SSD_old to local on pve4 first, and then I was able to migrate it to pve6.
Proxmox only lets me migrate a VM between nodes from the local disk, not from the other storage.
I have a situation... one node with 2 disk arrays: one is classic LVM and the 2nd is mounted to a directory.
How can I migrate a VM from the 2nd array smartly? Because right now I have to move the VM's disk to LVM first, and only then am I able to do an online or...
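The two-step dance I mean, as a sketch with example names (VMID 100, disk scsi0, target node pve6):

  # sketch: move the disk off the dir-backed storage, then migrate as usual
  qm move-disk 100 scsi0 local-lvm    # "qm move_disk" on older releases; add --delete to drop the source copy
  qm migrate 100 pve6 --online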
No, because I don't have enough spare resources to risk destabilizing another node. This is 24/7 traffic and I have to be 100% sure I'll have a stable node after the upgrade.
pve4 is extremely unstable from a fresh install and I'm not able to fix it.
This is part of the log from yesterday's restart.