Further improvement of the program logic…
Since this program is supposed to run from cron every minute, I want to make sure that I loop through all VMs before starting another cycle, so there's a need for a lock.
This works...
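For reference, a minimal sketch of that locking idea, assuming a bash wrapper around the per-VM loop (the script name, lock path and the qm-based VM listing are placeholders, not the actual program):

#!/bin/bash
# backup-loop.sh - hypothetical wrapper, started from cron every minute
LOCK=/var/lock/backup-loop.lock

# Open the lock file on fd 9 and try to take an exclusive lock.
# If the previous cycle is still looping through the VMs, exit quietly.
exec 9>"$LOCK"
flock -n 9 || exit 0

# Only loop through the VMs while the lock is held;
# it is released automatically when the script exits.
for vmid in $(qm list | awk 'NR>1 {print $1}'); do
    echo "processing VM $vmid"
    # ... per-VM work goes here ...
done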
The use case is a production Proxmox VE server sending incremental backups via PBS to a remote datastore on another Proxmox Backup Server.
The other Proxmox Backup Server would act as a hot standby (it couldn't be as synchronized as https://pve.proxmox.com/wiki/Storage_Replication, due to longer...
For virtual machines, incremental backup is very fast (based on QEMU dirty bitmaps, a matter of seconds).
For LXC containers, it seems that there isn't any incremental implementation...
LVM-Thin storage, backup mode snapshot
1) initial backup
INFO: starting new backup job: vzdump 104 --node...
My bad... Even though the VM was running, I was trying to do the backup after a restart, so the dirty bitmaps kept in RAM were lost. Incremental backup works on LVM-Thin storage as well, as long as the VM stays running.
Brilliant news!
I'm comparing it with https://pve.proxmox.com/wiki/Storage_Replication, which is allowed in combination with High-Availability. But Proxmox Backup Server cannot act as a hot standby; in case of disaster you need to restore every container/VM from backup. I guess the Proxmox...
The issue is present with OVH Cloud https://www.ovhcloud.com/en-ie/public-cloud/sandbox/ s1-2. Today, I've tested with OVH VPS Starter https://www.ovhcloud.com/en-ie/vps/ and it works flawlessly. Funny thing: two years ago, the very same setup worked with b2-7, from the OVH Cloud range.
Some debugging
root@lon:~# perl -T -d /usr/bin/pvecm status
Loading DB routines from perl5db.pl version 1.53
Editor support available.
Enter h or 'h h' for help, or 'man perldebug' for more help.
IO::Socket::SSL::CODE(0x556ade4dc8a0)(/usr/share/perl5/IO/Socket/SSL.pm:260):
260...
You were right: the "real error" is "Unable to get local IP address", and "pve-cluster.service: Start request repeated too quickly" is just the consequence, with the restart counter going from 1 up to 5.
See the relevant part of the logs:
Jun 11 23:11:20 lon systemd[1]: Starting The...
root@lon:~# systemctl start pve-cluster
Job for pve-cluster.service failed because the control process exited with error code.
See "systemctl status pve-cluster.service" and "journalctl -xe" for details.
root@lon:~# systemctl status pve-cluster.service
● pve-cluster.service - The Proxmox VE...
root@lon:~# journalctl -b -u pve-cluster
-- Logs begin at Fri 2020-06-12 05:04:58 UTC, end at Fri 2020-06-12 05:56:30 UTC. --
-- No entries --
I've also removed the IP and port.
There are plenty of forum posts with this kind of error, usually triggered by a wrong configuration of /etc/hosts https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_Buster#Add_an_.2Fetc.2Fhosts_entry_for_your_IP_address or of the network.
Mine seems to be correct:
root@lon:~# cat /etc/hosts...
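For comparison, the layout that wiki page expects is roughly the following (the domain and node IP below are placeholders, not my real values); with a correct entry, hostname --ip-address prints the node's IP instead of a loopback address:

127.0.0.1       localhost.localdomain localhost
<node-ip>       lon.example.com lon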
The feature "and also enables the use of vGPUs (aka mediated devices) like Intel KVMGT (aka GVT-g) or Nvidias vGPUS." has landed since https://pve.proxmox.com/wiki/Roadmap#Proxmox_VE_5.3. Shouldn't opengl_dmabuf be enabled by default?
What is the alternative management tool for NAT in Proxmox, besides post-up/post-down hooks in /etc/network/interfaces? UFW or other tools have a high probability of interfering with the built-in Proxmox firewall.
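For context, the post-up/post-down approach I mean is the masquerading setup described in the Proxmox docs, roughly like this (the bridge names and the subnet are only examples):

auto vmbr1
iface vmbr1 inet static
        address 10.10.10.1/24
        bridge-ports none
        bridge-stp off
        bridge-fd 0
        post-up   echo 1 > /proc/sys/net/ipv4/ip_forward
        post-up   iptables -t nat -A POSTROUTING -s '10.10.10.0/24' -o vmbr0 -j MASQUERADE
        post-down iptables -t nat -D POSTROUTING -s '10.10.10.0/24' -o vmbr0 -j MASQUERADE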
In the end, here is the DEBUG information for a KVM VM, as requested by https://btrfs.wiki.kernel.org/index.php/Btrfs_mailing_list#What_information_to_provide_when_asking_a_support_question
There are 2 btrfs mounts (only /dev/sda9 - /dev/sdb9 holds the VM causing errors).
root@izabela:~# btrfs fi...
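For completeness, the commands that wiki page asks for are roughly these (the mount point is just the one holding the affected VM):

uname -a
btrfs --version
btrfs fi show
btrfs fi df /path/to/mountpoint
dmesg > dmesg.log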
Due to my silly mistake (not sending the email in plain text format to the btrfs mailing list linux-btrfs@vger.kernel.org), it was rejected several times. Over the next days, even when sending the very same content as plain text from a different email address, it wasn't published to https://www.spinics.net/lists/linux-btrfs/...