Search results

  1. vmbr0: received packet on eth0 with own address as source address

    Hello all, and sorry for the title of the post, but I failed to find a better one ... so I simply used the error message itself. For some days now (let's say 2 weeks), one of my PVE nodes has been logging the following messages over and over in kernel.log, messages and syslog...
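
    A quick way to investigate this kind of warning is to compare the MAC addresses of the bridge and its ports. The interface names below match the post; the rest is an illustrative sketch, not a confirmed fix:

      # Show the MACs of the physical NIC and the bridge; with PVE's
      # default setup vmbr0 normally inherits eth0's address
      ip -br link show eth0 vmbr0
      # List the MACs the bridge has learned per port; seeing the host's
      # own MAC come back in on eth0 usually points at a loop or a
      # duplicated address somewhere on the segment
      bridge fdb show br vmbr0
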
  2. DRBD based storage and snapshot?

    Hello, I hope I can continue to post on this thread for a question regarding DRBD9 ... In the Datacenter/Storage view for my (2) DRBD nodes, the "Disk Usage %" for drbd1 is 70%. What is it based on?
    root@pve2:~# vgs
      VG       #PV #LV #SN Attr   VSize   VFree
      drbdpool   1   9   0 wz--n- 320.00g...
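
    To see what that percentage could be derived from, one can compare the pool's total size with the space actually allocated to logical volumes. A minimal sketch, reusing the VG name from the post:

      # Volume group totals (size vs. free space)
      vgs drbdpool
      # Per-LV allocation inside the pool
      lvs drbdpool
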
  3. DRBD based storage and snapshot?

    Funny thing: in fact the "Take snapshot" button is not grayed out as stated by dietmar, it just takes a few seconds to become available. But it opens a box saying the feature is not available. I have 2 solutions in case I need to secure an update: 1.- Backup the VM and restore it in case of a problem 2.-...
  4. DRBD based storage and snapshot?

    Ah OK, I did not catch the 'new resource only' thing when reading the doc... I need basic snapshots in order to 'secure' an update or a software modification that could 'break something'. Most of the time I create then delete snapshots; on rare occasions I need to restore. What about this...
  5. DRBD based storage and snapshot?

    @dendi: Thanks for the link! I'm not used to drbdmanage/drbd9, I've only played a bit with 8 ... @fabian: What's your feeling about the drbdmanage snapshot how-to linked by dendi?
  6. DRBD based storage and snapshot?

    Thanks Fabian. Does that mean it will be implemented, or is there no plan to do it yet? BTW, can I safely use the manual procedure I've described in my first post? Cheers.
  7. DRBD based storage and snapshot?

    Hello dietmar, My version is not "that old":
      proxmox-ve: 4.2-64 (running kernel: 4.4.16-1-pve)
      pve-manager: 4.2-18 (running version: 4.2-18/158720b9)
      pve-kernel-4.4.6-1-pve: 4.4.6-48
      pve-kernel-4.2.6-1-pve: 4.2.6-26
      pve-kernel-4.4.8-1-pve: 4.4.8-52
      pve-kernel-4.4.15-1-pve: 4.4.15-60...
  8. DRBD based storage and snapshot?

    Hello, I've migrated my VMs to DRBD for storage redundancy purposes. So now my disks are raw instead of qcow2, and the "Take snapshot" button is grayed out because of that. How can I take snapshots now? Maybe directly at the LVM2 level? Create snapshot: pve1# lvcreate -L1G -s -n vm105snap...
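
    For reference, a full manual cycle at the LVM level might look like the sketch below. The volume names follow PVE's usual vm-<id>-disk-<n> convention and are assumptions; whether this is safe on top of DRBD is exactly the open question of the thread:

      # Create a 1 GB copy-on-write snapshot before the risky change
      lvcreate -L 1G -s -n vm105snap /dev/drbdpool/vm-105-disk-1
      # If the update went wrong: stop the VM, then merge the snapshot
      # back into its origin volume
      lvconvert --merge /dev/drbdpool/vm105snap
      # If all went well: simply drop the snapshot
      lvremove /dev/drbdpool/vm105snap
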
  9. How do I increase the size of my proxmox disk space? 2016

    As your second picture shows, you have your ~5TB storage available for images/ISOs/etc ... The 100GB filesystem reported by the GUI in your first picture is the root (/) filesystem of the host (used by the OS, not for PM storage), which is full for a reason only you can know ... This is not...
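
    From a shell on the host the distinction is easy to see. A sketch, assuming the default 'local' directory storage path:

      # Root (/) filesystem of the host -- the ~100GB one the GUI reports
      df -h /
      # Default directory storage holding images/ISOs/etc.
      df -h /var/lib/vz
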
  10. DRBD on Proxmox cluster nodes?

    Continuing my monologue :) I finally took the time to stop/start the remaining VMs in order to change cache=writethrough and move the disks to DRBD/raw. I now have 2 questions: 1.- How do you explain the fact that having storage on DRBD makes live migration faster? Before DRBD/raw (so...
  11. DRBD on Proxmox cluster nodes?

    Hello, I was up to point #5 (choosing the FS I'll use over DRBD) when I found this doc: https://pve.proxmox.com/wiki/DRBD9 As I had figured out most of the required steps by myself (adding a dedicated NIC on my servers and NAS), I then rolled back to point #1 and followed the doc to implement...
  12. DRBD on Proxmox cluster nodes?

    Hello all, I've been using Proxmox at home for months now and I really like the product. Well designed, stable, etc. ... My environment is based on 2 NUCs (pvetest repo) and 2 Synos. Right now, each Syno provides NFS storage to the cluster and one of them also hosts a VM (phpvirtualbox) used as...
  13. PM Server upgrade. Considerations?

    Hi all, I have a 3-node HA cluster running the latest pvetest. I plan to upgrade (replace) one or more servers. The hardware will be quite close: faster CPUs (still i5), the same SATA controller and a different NIC, but still handled by e1000e. My "migration plan" is: 1.- Migrate VMs to other nodes...
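
    Step 1 of such a plan can be done per VM from the CLI. A sketch, with a placeholder VMID and target node name:

      # Live-migrate VM 105 to node pve2 before taking the old server down
      qm migrate 105 pve2 --online
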
  14. RB PI for third node

    Nice how-to! I've had it working as well for a few days (RPi 1B), but I finally chose the VM-based "real PVE 3rd node" solution (phpvirtualbox hosted on a NAS), which gives me more options to validate PM updates and test packages, secured by snapshots.
  15. Nodes keep shutting down at boot time.

    Forget about it; for some reason it was the nut/upsmon client that went crazy... So the nodes were shutting down at boot because upsmon failed to connect to the server on both NAS. I'll have to investigate this deeper... Should you mark the thread as solved, or just delete it as it does not involve PM...
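
    One way to verify that the NUT client can actually reach the UPS daemons is upsc. A sketch, with placeholder UPS and host names:

      # Query each NAS-hosted NUT server; a failure here would explain
      # upsmon triggering shutdowns at boot
      upsc ups@nas1.local
      upsc ups@nas2.local
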
  16. Nodes keep shutting down at boot time.

    Hello, I have a 4.2.5 PVE cluster that was running fine for a few weeks, but now I have an issue I can't figure out. This is a basic cluster based on 2 physical nodes and one VM running on a NAS. I have a single VLAN and storage is based on 2 NAS (nfs/iscsi). I've created an HA group with the 2...
  17. HA Cluster

    I think you have to play with the nofailover option (disabled) in the HA group configuration to do that, and configure node preferences. If node1 is preferred over node2 and node3 in an HA group and it fails, VMs will be migrated to node2 and node3 and then automatically failed back to node1 once...
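
    This kind of preference can be expressed through node priorities on the HA group. A sketch, with the group name, node names and VMID as placeholders:

      # node1 gets the highest priority; with nofailover at its default (0),
      # resources fail back to node1 once it comes up again
      ha-manager groupadd prefer-node1 --nodes "node1:2,node2:1,node3:1" --nofailover 0
      # Attach a VM to the group
      ha-manager add vm:105 --group prefer-node1
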
  18. HA Cluster

    Nope, unfortunately PM does not migrate resources for a planned/clean reboot, unlike most software clusters. It will force-stop running VMs/CTs instead and then reboot. I'd like to see an option (at HA group level) to automatically migrate HA resources on reboot, in order to avoid having to manually...
  19. pve-ha-lrm keeps failing

    Hello, Some feedback and a solution, finally! Since one of my PVE nodes' hardware crashed, I replaced it with the same model as the one with the failing pve-ha-lrm. So neither physical PVE was able to correctly host HA VMs at that time ... Then I discovered one line in the following page that I did...
  20. pve-ha-lrm keeps failing

    Hello, So it was too good to be true. AMT was indeed enabled in the node's BIOS, but even after disabling it (and removing the softdog blacklisting in /lib/modprobe.d/blacklist_pve-kernel-4.4.6-1-pve.conf), the softdog module is still not loading at boot and the watchdog-mux + pve-ha-lrm services are failing...
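
    To narrow down why softdog still won't load, it can help to check for leftover blacklist entries and to try a manual load. A sketch using the paths from the post:

      # Any remaining blacklist lines for softdog?
      grep -rn softdog /etc/modprobe.d/ /lib/modprobe.d/
      # Try loading the module by hand and confirm it sticks
      modprobe softdog && lsmod | grep softdog
      # Then check the dependent services
      systemctl status watchdog-mux pve-ha-lrm
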
