Search results

  1. [SOLVED] Cluster Nodes Grey(?) after Enabling Datacenter Firewall

    Finally managed to solve the issue...solution below. Hope this helps someone out there in future. :)
    SOLUTION:
    1. Stop both corosync and pve-cluster on all nodes except one.
    2. Run pvecm expected 1 and revert cluster firewall settings to 'No' (enable: 0) on the remaining node.
    3. Start corosync...
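
    A rough shell version of those steps (a sketch only; the datacenter firewall options live in /etc/pve/firewall/cluster.fw):

    # 1. on every node except one: stop the cluster stack
    systemctl stop pve-cluster corosync
    # 2. on the remaining node: let it operate without quorum, then switch the
    #    datacenter firewall back off
    pvecm expected 1
    sed -i 's/^enable: 1/enable: 0/' /etc/pve/firewall/cluster.fw
    # 3. on the other nodes: bring the cluster stack back up
    systemctl start corosync pve-cluster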
  2. [SOLVED] Cluster Nodes Grey(?) after Enabling Datacenter Firewall

    Hi all, it appears that my nodes are showing a grey question mark after enabling the Datacenter Firewall with the default Input Policy set to DROP. The other nodes are now inaccessible, with this error in the node's Summary: 'hostname lookup 'pve123' failed - failed to get address info for: pve123: No address...
  3. pve-zsync Interval + Snapshots

    Hi guys, by default pve-zsync runs on a 15-minute interval. Currently, I have pve-zsync configured with a 15-minute interval and 2 snapshots kept. If I would like to keep a snapshot on a weekly or monthly basis, do I need another sync job and dataset? Is it possible to use the same dataset and...
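
    One way to do this (a sketch only; the host address and VM ID are made up) is to keep the frequent job and add a second job with a different --name, so its snapshots are pruned separately even though they land on the same destination dataset:

    # frequent job: runs every 15 minutes by default, keeps 2 snapshots
    pve-zsync create --source 100 --dest 192.168.1.50:tank/backup --name default --maxsnap 2
    # long-term job: same dataset, different snapshot prefix, keeps 4 copies
    pve-zsync create --source 100 --dest 192.168.1.50:tank/backup --name weekly --maxsnap 4
    # the run interval of each job can then be adjusted in /etc/cron.d/pve-zsync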
  4. Connection Error 595

    Try running this on the affected node:
    # pveproxy status
    # pveproxy start
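
    On a systemd-based install the same service can also be checked and restarted through systemctl (an equivalent, not a different fix):

    systemctl status pveproxy
    systemctl restart pveproxy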
  5. local-lvm Disk Usage does not match actual usage

    Hi Udo, spot on. This was a temporary Proxmox node for disaster recovery, and I am trying to move this to local ZFS-based storage, followed by using pve-zsync to complete the "move out" to the production node with minimal downtime. However, I am now experiencing an issue using the 'Move Disk' function...
  6. local-lvm Disk Usage does not match actual usage

    Hi, I am facing some issues regarding the disk usage of local-lvm (thin LVM). The current disk usage (800 GB+) shown for the LVM in Proxmox is roughly double the actual disk usage of the VM (400+ GB). There is only 1 VM on the Proxmox node. I have tried using fstrim within the VM but the disk usage...
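
    For reference, fstrim only helps if discards actually reach the thin pool; a minimal sketch, assuming a hypothetical VM 100 with a SCSI disk on local-lvm and a controller that passes discards through (e.g. VirtIO SCSI):

    # on the host: enable discard on the VM disk (the volume name is an assumption)
    qm set 100 --scsi0 local-lvm:vm-100-disk-1,discard=on
    # inside the guest, after the disk has been re-attached: release unused blocks
    fstrim -av
    # back on the host: check how much data each thin volume still holds
    lvs -o lv_name,lv_size,data_percent pve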
  7. Multiple Replications from Multiple Nodes to Single Storage Node

    Hi @guletz, thanks again for your advice! I now use pve-zsync for disaster recovery, and separate backup software for block-level backups. I would also like to share that pve-zsync has been successfully implemented on 4 nodes as of now. Previously, I had no luck with the built-in...
  8. Multiple Replications from Multiple Nodes to Single Storage Node

    Thank you @guletz! I have implemented and tested it fully using your method and it works perfectly well! :) I noticed you had configured maxsnap as '18' in your example; could you advise how you would select the snapshot that you wish to boot from?
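
    One common way (a sketch with made-up dataset and snapshot names; pve-zsync snapshots follow a rep_<jobname>_<timestamp> pattern) is to list the replicated snapshots on the target and roll the disk back to the chosen one before starting the recovered VM:

    # see which snapshots pve-zsync has kept for the disk
    zfs list -t snapshot -o name,creation rpool/data/vm-101-disk-1
    # roll back to the chosen snapshot (-r discards any newer ones), then boot
    zfs rollback -r rpool/data/vm-101-disk-1@rep_default_2018-01-01_00:15:00
    qm start 101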
  9. Multiple Replications from Multiple Nodes to Single Storage Node

    Hi @guletz, thank you for sharing. Could you advise how we can bring up the VM on another node (Node Z) using the latest snapshot after moving the conf file? Each VM has a standard conf like the following:
    bootdisk: ide0
    cores: 1
    ide0: local-zfs:vm-101-disk-1,size=32G
    ide2: none,media=cdrom
    memory...
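
    The usual manual fail-over looks roughly like this (a sketch; node names are hypothetical, and it assumes the disk has already been replicated to storage that Node Z can see and that the cluster still has quorum):

    # move the guest's config from the old node's directory to Node Z inside /etc/pve
    mv /etc/pve/nodes/nodeA/qemu-server/101.conf /etc/pve/nodes/nodeZ/qemu-server/101.conf
    # start the VM on Node Z, which now owns the config
    qm start 101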
  10. Multiple Replications from Multiple Nodes to Single Storage Node

    root@pveXXX:~# cat /etc/pve/replication.cfg
    local: 100-0
            target pve-repl-1
    root@pve110:~# pveversion -v
    proxmox-ve: 5.1-25 (running kernel: 4.13.4-1-pve)
    pve-manager: 5.1-36 (running version: 5.1-36/131401db)
    pve-kernel-4.13.4-1-pve: 4.13.4-25
    pve-kernel-4.4.19-1-pve: 4.4.19-66...
  11. pve-zsync Disaster Recovery Methods

    UPDATE: The issue was resolved once pve-zsync and its snapshots were sent to the destination backup server successfully. Mods, please close/delete this thread; the question was asked in error.
  12. Multiple Replications from Multiple Nodes to Single Storage Node

    Hi, I am trying to configure replication from multiple Proxmox nodes (v5.1) to a single storage node (v5.1) in a cluster, as per below:
    Node A <> Replicate <> Node Z
    Node B <> Replicate <> Node Z
    Node C <> Replicate <> Node Z
    However, if there are multiple Replications from multiple nodes...
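
    For context, the built-in storage replication (which needs local ZFS storage on both ends) is configured per guest with pvesr, so fanning in to one node simply means one job per VM pointing at the same target; a sketch with made-up VM IDs and a target node called pveZ:

    # one replication job per guest, all aimed at the storage node
    pvesr create-local-job 100-0 pveZ --schedule "*/15"
    pvesr create-local-job 101-0 pveZ --schedule "*/15"
    pvesr create-local-job 102-0 pveZ --schedule "*/15"
    # list the configured jobs and their last run status
    pvesr status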
  13. ProxMox 4.x is killing my SSDs

    root@X5:~# w
    -bash: /usr/bin/w: Input/output error
    root@X5:~# w
    -bash: /usr/bin/w: Input/output error
    root@X5:~# uptime
    -bash: /usr/bin/uptime: Input/output error
    So another slave just died and became non-bootable.
  14. ProxMox 4.x is killing my SSDs

    Hi hybrid512, I found your thread while googling for answers to similar issues (I searched for "Promox killing my hard disks", BTW). My current Proxmox 4.3 cluster setup is for testing purposes and I am facing similar issues:
    1 x Master (1 x SSD)
    4 x Slaves (1 x SSD each)
    I am...
  15. Proxmox VE 4.2 Default (Thin LVM/Raw)

    Hello, thin LVM and raw are the defaults in Proxmox VE 4.2. With thin LVM, from my understanding, it is possible to over-provision disk space. However, since we cannot monitor the exact disk utilization of each VM, how do we avoid running into disk issues (i.e. hitting 100% disk utilization) ...
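
    For what it's worth, the real allocation of a thin pool can be watched on the host with plain LVM tools; a minimal sketch, assuming the default pve volume group and its data thin pool:

    # how full the thin pool and its metadata actually are
    lvs -o lv_name,lv_size,data_percent,metadata_percent pve/data
    # per-VM thin volumes and how much each has really written
    lvs -o lv_name,lv_size,data_percent pve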
