Some more digging revealed that the proxmox-backup-daily-update.service creates the /var/log/proxmox-backup/tasks directory with root:root ownership if the directory doesn't exist.
Maybe this has already been fixed in newer PBS versions, but here's how to reproduce it in PBS 1.1.13-2:
1. Stop all...
After some digging I found that the culprit was that the /var/log/proxmox-backup/tasks directory was owned by root:root instead of backup:backup.
Whenever you delete the /var/log/proxmox-backup directory, it is automatically recreated upon service restart. The api directory is correctly set to...
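A minimal fix sketch, assuming the standard Debian `backup` system user that the PBS services run as (run as root on the PBS host):

```shell
# Recreate the tasks directory with the ownership PBS expects.
# Guarded so it is a no-op when not run as root or when the
# 'backup' user doesn't exist.
dir=/var/log/proxmox-backup/tasks
if [ "$(id -u)" -eq 0 ] && id backup >/dev/null 2>&1; then
    install -d -o backup -g backup "$dir"   # mkdir -p + chown in one step
    stat -c '%U:%G %n' "$dir"               # should print: backup:backup /var/log/proxmox-backup/tasks
fi
```

Restarting the proxmox-backup services afterwards should make task logging work again.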
I seem to have botched my PBS installation and cannot find the reason why.
Whenever I back up a VM, it results in:
INFO: starting new backup job: vzdump 100 --node stellar01 --remove 0 --mode snapshot --storage pbs
INFO: Starting Backup of VM 100 (qemu)
INFO: Backup started...
I've set up many ZFS two-node clusters (technically three-node, using a small HA node) and Ceph clusters in the past, but I'm thinking about the following idea:
4 main nodes and a small HA node
1. Create a cluster using all five nodes.
2. Setup ZFS storage replication between...
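For reference, a replication job like the one in step 2 ends up as an entry in /etc/pve/replication.cfg; the VM ID, node name, and schedule below are made-up examples (jobs are normally created with `pvesr create-local-job 100-0 nodeB --schedule '*/15'` or via the GUI):

```
local: 100-0
        target nodeB
        schedule */15
```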
Would it be possible to combine both ZFS Storage Replication and pve-zsync for a single VM?
For instance, we have a 'main' site with a 3-node ZFS cluster (2 full nodes and a small node for HA) using Storage Replication, so we have High Availability for the VM.
Next I would like to have some...
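What I have in mind would look roughly like this: keep Storage Replication inside the main cluster for HA, and add an off-site copy of the same VM with pve-zsync. The VM ID, address, and dataset below are made-up; the snippet is guarded so it only runs where pve-zsync is installed.

```shell
SRC_VMID=100
DEST="192.0.2.10:tank/offsite"   # remote ZFS host and target dataset (hypothetical)
if command -v pve-zsync >/dev/null 2>&1; then
    # Sync the VM's disks off-site, keeping the last 7 snapshots:
    pve-zsync create --source "$SRC_VMID" --dest "$DEST" \
        --name offsite --maxsnap 7 --verbose
fi
```

Whether the two snapshot mechanisms coexist cleanly on the same disks is exactly the open question here.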
Thanks for the hints! It turned out I had /etc/systemd/system/systemd-udevd.service.d/override.conf configured, containing only:
Removing this file before the upgrade fixed the problem. I didn't expect such a small change could potentially lead to a broken...
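For anyone hitting the same thing, a sketch of the cleanup (path taken from above; run as root before the upgrade, and guarded so it is a no-op when the override file doesn't exist):

```shell
OVERRIDE=/etc/systemd/system/systemd-udevd.service.d/override.conf
if [ -e "$OVERRIDE" ]; then
    systemctl cat systemd-udevd   # review what the drop-in actually changes first
    rm "$OVERRIDE"
    systemctl daemon-reload
fi
```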
I seem to be running into some major problems during the upgrade. udev and/or systemd crash when udev is restarted after the upgrade:
When the upgrade process comes to udev:
Setting up udev (241-5) ...
Configuration file '/etc/init.d/udev'
==> Modified (by you or by a script) since...
Well it's actually the other way around:
To use erasure-coded pools for RBD, you need to create the RBD image on a replicated pool and specify a separate data pool (which is erasure-coded). This ensures all metadata is written to the replicated pool while the actual data is written to the...
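A sketch of that layout with the plain Ceph CLI (pool names and PG counts are made up, and `allow_ec_overwrites` requires a BlueStore-era Ceph, Luminous or newer; guarded so it is a no-op without a Ceph cluster):

```shell
META_POOL=rbd_meta       # replicated pool: holds RBD metadata / object maps
DATA_POOL=rbd_ec_data    # erasure-coded pool: holds the actual data objects
if command -v ceph >/dev/null 2>&1 && command -v rbd >/dev/null 2>&1; then
    ceph osd pool create "$META_POOL" 64 64 replicated
    ceph osd pool create "$DATA_POOL" 64 64 erasure
    ceph osd pool set "$DATA_POOL" allow_ec_overwrites true   # needed for RBD on EC pools
    # Image lives in the replicated pool, data objects go to the EC pool:
    rbd create --size 10G --data-pool "$DATA_POOL" "$META_POOL/vm-disk-0"
fi
```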
In a 3-node Ceph/Proxmox 4 HA cluster, I recently had a Windows 7 guest VM hang (BSOD).
As expected, HA never kicked in, because from Proxmox's point of view the VM is up and running.
I thought maybe the QEMU Guest Agent would help with checking for hung VMs, but when I checked the wiki page it only...
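A sketch of what such a check could look like, assuming the guest has the agent installed; `qm agent <vmid> ping` round-trips through the agent, so a timeout on a "running" VM is a strong hint the guest is frozen. The VM ID is hypothetical and the probe is guarded so it only runs on an actual PVE node.

```shell
VMID=100
STATE=unknown
if command -v qm >/dev/null 2>&1; then
    # The agent answering means the guest kernel is alive; no answer from a
    # "running" VM suggests the guest itself is hung (e.g. BSOD).
    if qm agent "$VMID" ping >/dev/null 2>&1; then
        STATE=responsive
    else
        STATE=unresponsive
    fi
fi
echo "VM $VMID guest agent: $STATE"
```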
We have a 4-node Proxmox 4.2 cluster.
We cannot seem to get the HA manager into a healthy state.
The HA config looks empty (resources.cfg is empty and ha-manager config shows nothing),
BUT the status shows otherwise (same output on all 4 nodes):
# ha-manager status -verbose
Thank you both for the replies.
I managed to pinpoint the problem to the USB controller and/or firmware.
I installed Proxmox onto a 256 GB USB hard disk; I guess the HPs have some kind of limit on the size of USB devices they can boot from (I've read 4GB will boot, but 8GB and 16GB...
I'm trying to put some old hardware to use. I was hoping to create a small test Proxmox 4.2 Ceph cluster with 4 HP ProLiant DL380 G5 servers.
Installing Proxmox 4.2 finishes without errors; however, booting fails (in fact, loading GRUB fails).
When I shut down a PVE node (using the GUI: select the node, then click the shutdown button in the upper right), I noticed that one of the first services to actually stop is Ceph. Ceph is stopped before pve-manager, for instance. When pve-manager is stopped, it tries to cleanly...
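One way to check what systemd thinks: shutdown order is the reverse of start order, so if the Ceph units appear in neither of the lists below, there is no ordering dependency between Ceph and pve-manager, and systemd is free to stop Ceph first. Guarded so it only runs where systemctl exists.

```shell
UNIT=pve-manager.service
if command -v systemctl >/dev/null 2>&1; then
    # Units ordered before/after $UNIT; grep for ceph in both directions.
    systemctl list-dependencies --after  "$UNIT" | grep -i ceph || true
    systemctl list-dependencies --before "$UNIT" | grep -i ceph || true
fi
```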