Hello all, and sorry for the title of the post, but I couldn't find a better one... so I simply used the error message itself.
For some days now (let's say 2 weeks), one of my PVE nodes has been logging the following messages over and over in kernel.log, messages and syslog...
Hello,
I hope I can continue to post on this thread for questions regarding DRBD9 ...
In the Datacenter/Storage view for my (2) DRBD nodes, the "Disk Usage %" for drbd1 is 70%. What is it based on?
root@pve2:~# vgs
  VG       #PV #LV #SN Attr   VSize   VFree
  drbdpool   1   9   0 wz--n- 320.00g...
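In case it helps, here is what I would check to see where that percentage comes from (a sketch only, assuming drbdpool sits on an LVM thin pool as in the default drbdmanage setup; names are taken from my output above):
root@pve2:~# lvs -o lv_name,lv_size,data_percent drbdpool   # Data% of the thin pool is usually what the GUI tracks
root@pve2:~# pvesm status                                   # what PVE itself reports per storage, including usage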
Funny thing: in fact, "Take snapshot" is not grayed out as stated by dietmar, it just takes a few seconds to become available. But it opens a box saying the feature is not available.
I have two solutions in case I need to secure an update:
1.- Back up the VM and restore it in case of a problem (roughly sketched below)
2.-...
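For option 1, this is roughly what I have in mind (sketch only; VM ID 105, the backup storage name and the dump path are placeholders):
pve1# vzdump 105 --storage backup-nfs --mode stop                          # full backup; stop mode since raw/DRBD has no snapshot support
pve1# qmrestore /mnt/pve/backup-nfs/dump/vzdump-qemu-105-... 105 --force   # restore over the existing VM if the update goes wrong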
Ah OK, I didn't catch the 'new resource only' part when reading the doc...
I need basic snapshots in order to 'secure' an update or a software modification that could 'break something'.
Most of the time I create and then delete snapshots; on rare occasions I need to restore.
What about this...
@dendi: Thanks for the link! I'm not used to drbdmanage/DRBD9, I've only played a bit with 8...
@fabian: What's your feeling about the drbdmanage snapshot how-to linked by dendi?
Thanks Fabian.
Does that mean it will be implemented, or is there no plan to do it yet?
BTW, can I safely use the manual procedure I've described in my first post?
Cheers.
Hello,
I've migrated my VMs to DRBD for storage redundancy purposes. So now my disks are raw instead of qcow2, and the "Take snapshot" button is grayed out because of that.
How can I take snapshots now? Maybe directly at the LVM2 level?
Create snapshot:
pve1# lvcreate -L1G -s -n vm105snap...
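And to complete that manual workflow, a hedged sketch of the two possible endings (assuming the snapshot lives in the drbdpool VG; names follow the example above):
pve1# lvremove /dev/drbdpool/vm105snap            # usual case: drop the snapshot once the update is validated
pve1# lvconvert --merge /dev/drbdpool/vm105snap   # rare case: roll the disk back into the origin LV (VM stopped)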
As your second picture shows, you have your ~5TB storage available for images/ISOs/etc...
The 100GB filesystem reported by the GUI in your first picture is the root (/) filesystem of the host (used for/by the OS, not for PM storage), and it is full for a reason only you can know...
This is not...
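If it helps, a quick way to see what is actually filling that root filesystem (plain commands; the hostname is a placeholder):
root@pve1:~# df -h /                              # confirm the usage of the root filesystem
root@pve1:~# du -xh --max-depth=1 / | sort -h     # list the biggest top-level directories on that same filesystem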
Continuing my monologue :)
I've finally taken the time to stop/start the remaining VMs in order to change cache=writethrough and move the disks to DRBD/raw.
I now have 2 questions:
1.- How do you explain the fact that having storage on DRBD makes live migration faster?
Before DRBD/raw (so...
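For reference, this is roughly how I checked/applied the cache change from the CLI (sketch only; VM ID 105 and the drbd1:vm-105-disk-1 volume are just examples from my setup):
pve1# qm config 105 | grep virtio0                                    # check the current cache mode on the disk line
pve1# qm set 105 -virtio0 drbd1:vm-105-disk-1,cache=writethrough      # re-declare the disk with the new cache mode (VM stopped)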
Hello,
I was up to point #5 (choosing the FS I'll use over DRBD) when I found this doc: https://pve.proxmox.com/wiki/DRBD9
As I had figured out most of the required steps by myself (adding a dedicated NIC on my servers and NAS), I then rolled back to point #1 and followed the doc to implement...
Hello all,
I've been using Proxmox at home for months now and I really like the product. Well designed, stable, etc...
My environment is based on 2 NUCs (pvetest repo) and 2 Synos.
Right now, each Syno provides NFS storage to the cluster and one of them is also providing a VM (phpvirtualbox) used as...
Hi all,
I have a 3-node HA cluster running the latest pvetest. I've planned to upgrade (replace) one or more servers.
Hardware will be quite close: faster CPUs (still i5), the same SATA controller and a different NIC, but still handled by e1000e.
My "migration plan" is (step 1 sketched below):
1.- Migrate VMs to other nodes...
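For step 1, something along these lines (sketch; the VM ID and node names are placeholders):
pve1# qm migrate 105 pve2 --online        # live-migrate a running VM to another node
pve1# ha-manager migrate vm:105 pve2      # or let the HA stack move an HA-managed VM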
Nice how-to !
I've had it working as well for a few days (RPi 1B), but I finally chose the "real PVE 3rd node" based on a VM (using phpvirtualbox hosted on a NAS), which gives me more options to validate PM updates and test packages, secured by snapshots.
Forget about it; for some reason it's the nut/upsmon client that went crazy... So the nodes were shutting down at boot because upsmon failed to connect to the server on both NAS.
I'll have to investigate this further... Should you mark the thread as solved, or just delete it since it does not involve PM...
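For the record, the quick checks I used to confirm the NUT side (the UPS name and NAS address are placeholders):
pve1# upsc ups@192.168.1.10                     # can the node still query the UPS daemon on the NAS?
pve1# grep -i upsmon /var/log/syslog | tail     # see why upsmon decided to shut the node down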
Hello,
I have a 4.2.5 PVE cluster that was running fine for a few weeks, but now I have an issue I can't figure out.
This is a basic cluster based on 2 physical nodes and one VM running on a NAS. I have a single VLAN and storage is based on 2 NAS (NFS/iSCSI). I've created an HA group with the 2...
I think you have to play with the nofailover option (disable it) in the HA group configuration to do that, and configure node preferences. If node1 is preferred over node2 and node3 in an HA group and it fails, VMs will be migrated to node2 and node3 and then automatically failed back to node1 once...
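To illustrate, an HA group along those lines would look roughly like this in /etc/pve/ha/groups.cfg (node names and priorities are only an example; a higher number means the node is preferred):
group: prefer_node1
        nodes node1:2,node2:1,node3:1
        nofailover 0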
Nope, unfortunately PM does not migrate resources for a planned/clean reboot the way most software clusters do. It will force-stop running VMs/CTs instead and then reboot.
I'd like to see an option (at the HA group level) to automatically migrate HA resources in case of a reboot, in order to avoid having to manually...
Hello,
Some feedback and a solution, finally!
Since one of my PVE nodes' hardware crashed, I've replaced it with the same model as the one with the faulty pve-ha-lrm.
So both physical PVE nodes were unable to correctly host HA VMs at that time...
Then I've discovered one line in the following page that I did...
Hello,
So it was too good to be true. AMT was indeed enabled in the node's BIOS, but even after disabling it (and removing the softdog blacklisting in /lib/modprobe.d/blacklist_pve-kernel-4.4.6-1-pve.conf), the softdog module is still not loading at boot, and the watchdog-mux + pve-ha-lrm services are failing...
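The checks I'm running to chase this (standard module/service inspection, nothing PVE-specific beyond the service names):
pve1# grep -r softdog /etc/modprobe.d/ /lib/modprobe.d/   # make sure nothing still blacklists the module
pve1# modprobe softdog && lsmod | grep softdog            # try loading it by hand
pve1# systemctl status watchdog-mux pve-ha-lrm            # see why the services are failing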