Finally managed to solve the issue...solution below. Hope this helps someone out there in future. :)
SOLUTION:
1. Stop both corosync and pve-cluster on all nodes except one.
2. Run pvecm expected 1 and revert cluster firewall settings to 'No' (enable: 0) on the remaining node.
3. Start corosync...
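For anyone who prefers the command-line form, here is a rough sketch of the steps above, assuming the standard Proxmox service names; run the stop commands on every node except the one you keep up, and treat the pvesh call as an assumption about the firewall options API rather than a tested recipe:

# on all nodes except the one you keep running:
systemctl stop pve-cluster corosync

# on the remaining node, let it reach quorum on its own:
pvecm expected 1

# revert the datacenter firewall (enable: 0), either by editing
# /etc/pve/firewall/cluster.fw under [OPTIONS], or via the API:
pvesh set /cluster/firewall/options --enable 0

# then bring corosync and pve-cluster back up on the other nodes:
systemctl start corosync pve-cluster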
Hi all,
It appears that my nodes are showing a grey question mark after enabling the Datacenter Firewall with the default Input Policy set to DROP. The other nodes are now inaccessible, with this error in the node's Summary: 'hostname lookup 'pve123' failed - failed to get address info for: pve123: No address...
Hi guys,
By default, pve-zsync runs on a 15-minute interval.
Currently, I have pve-zsync configured with a 15-minute interval and 2 snapshots kept. If I would like to keep a snapshot on a weekly or monthly basis, do I require another sync job and dataset?
Is it possible to use the same dataset and...
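(In case it is useful to others asking the same thing: one approach is a second pve-zsync job with its own --name, so its snapshots are counted and pruned separately. The VM ID, target host and dataset below are placeholders, not a tested setup.)

# frequent job: every 15 minutes, keep 2 snapshots
pve-zsync create --source 101 --dest backup01:tank/zsync --name every15 --maxsnap 2

# separate weekly job with its own name and retention
pve-zsync create --source 101 --dest backup01:tank/zsync --name weekly --maxsnap 4

# pve-zsync writes its schedule to /etc/cron.d/pve-zsync; edit the 'weekly'
# job's line there so it only runs once a week.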
Hi Udo, spot on. This was a temporary Proxmox node for disaster recovery, and I am trying to move this to local ZFS-based storage, followed by using pve-zsync to complete the "move out" to the production node with minimal downtime. However, I am now experiencing an issue using the 'Move Disk' function...
Hi,
I am facing some issues regarding the disk usage of local-lvm (thin LVM). The current disk usage (800 GB+) shown for the LVM in Proxmox is about double the actual disk usage of the VM (400+ GB).
There is only 1 VM on the Proxmox node. I have tried using fstrim within the VM but the disk usage...
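(For anyone landing here with the same symptom, the usual host-side checklist, with VM ID 100 and the volume name as placeholders: fstrim only returns space to the thin pool if the virtual disk has discard enabled and sits on a bus that passes it through, e.g. virtio-scsi.)

# check how full the thin pool really is on the host
lvs -o lv_name,lv_size,data_percent,metadata_percent pve/data

# enable discard on the VM disk (placeholder VM ID and volume name)
qm set 100 --scsi0 local-lvm:vm-100-disk-1,discard=on

# inside the guest, after the disk change has taken effect, release unused blocks
fstrim -av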
Hi @guletz , thanks again for your advice! I now use pve-zsync for disaster recovery, and another backup tool for block-level backups. I would also like to share that pve-zsync has now been successfully implemented on 4 nodes.
Previously, I had no luck with the built-in...
Thank you @guletz ! I have implemented and tested it fully using your method and it works perfectly well! :):):):D
I noticed you had configured maxsnap as '18' in your example. Could you advise how you would select the snapshot that you wish to boot from?
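(For later readers, a rough sketch of how an older snapshot could be selected on the backup side; dataset and snapshot names are placeholders, and pve-zsync's snapshot naming on your system may differ.)

# list the snapshots kept for the replicated disk
zfs list -t snapshot -r tank/zsync/vm-101-disk-1

# option 1: roll the dataset back to the chosen snapshot (discards newer snapshots)
zfs rollback -r tank/zsync/vm-101-disk-1@rep_default_2018-01-07_00:15:00

# option 2: clone the snapshot to a new dataset and point a copy of the VM conf at it
zfs clone tank/zsync/vm-101-disk-1@rep_default_2018-01-07_00:15:00 tank/restore/vm-101-disk-1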
Hi @guletz
Thank you for sharing.
Could you advise how we can bring up the VM on another node (Node Z) using the latest snapshot after moving the conf file?
Each VM has a standard conf like the following
bootdisk: ide0
cores: 1
ide0: local-zfs:vm-101-disk-1,size=32G
ide2: none,media=cdrom
memory...
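(A rough sketch of the manual failover being asked about, assuming the replicated dataset on Node Z is reachable under the same storage ID used in the conf; node names, VM ID and the snapshot name are placeholders.)

# move the conf inside the clustered /etc/pve so the VM now belongs to Node Z
mv /etc/pve/nodes/nodeA/qemu-server/101.conf /etc/pve/nodes/nodeZ/qemu-server/101.conf

# the replicated dataset already reflects its latest received snapshot; to boot an
# older state instead, roll back first:
# zfs rollback tank/zsync/vm-101-disk-1@rep_default_2018-01-07_00:15:00

# then start the VM on Node Z
qm start 101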
UPDATE: The issue was resolved once pve-zsync and its snapshots were sent to the destination backup server successfully.
Mods, please close/delete this thread, as the question turned out to be erroneous.
Hi,
I am trying to configure Replication of multiple Proxmox nodes (v5.1) to a single storage node (v5.1) in a cluster as per below.
Node A <> Replicate <> Node Z
Node B <> Replicate <> Node Z
Node C <> Replicate <> Node Z
However, if there are multiple Replications from multiple nodes...
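(For context, the built-in storage replication creates one job per guest towards a target node, so the layout above would mean one or more pvesr jobs on each of A, B and C pointing at Z. A minimal sketch, with placeholder VM IDs and a 15-minute schedule:)

# on Node A (VM 101):
pvesr create-local-job 101-0 Z --schedule '*/15'

# on Node B (VM 201):
pvesr create-local-job 201-0 Z --schedule '*/15'

# on Node C (VM 301):
pvesr create-local-job 301-0 Z --schedule '*/15'

# check replication state from any node
pvesr status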
root@X5:~# w
-bash: /usr/bin/w: Input/output error
root@X5:~# w
-bash: /usr/bin/w: Input/output error
root@X5:~# uptime
-bash: /usr/bin/uptime: Input/output error
So another Slave just died and became non-bootable.
Hi hybrid512,
I found your thread while googling for answers for similar issues (I searched for "Promox killing my hard disks" BTW).
My current Proxmox 4.3 Cluster setup is for testing purposes and I am facing similar issues.
1 x Master (1 x SSD each)
4 x Slaves (1 x SSD each)
I am...
Hello,
Thin LVM and raw are the defaults in Proxmox VE 4.2.
With Thin LVM, from my understanding, it is possible to over-provision disk space.
However, since we cannot monitor the exact disk utilization of each VM, how do we avoid running into disk issues (i.e. the underlying disk hitting 100% utilization)...
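(One way to keep an eye on this from the host is to watch the thin pool itself rather than the guests; a small sketch, with pve/data as the default pool name and an arbitrary 80% threshold:)

# fill level of the thin pool, data and metadata
lvs -o lv_name,lv_size,data_percent,metadata_percent pve/data

# per thin volume: data_percent shows how much of each virtual disk is really written
lvs -o lv_name,lv_size,data_percent pve

# crude cron-able check that warns when the pool passes 80%
[ "$(lvs --noheadings -o data_percent pve/data | cut -d. -f1 | tr -d ' ')" -gt 80 ] && echo "thin pool above 80%"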