By default, mdraid scans all disks and partitions for MD superblocks (metadata) on boot. It then checks for a match on the array name or UUID of the RAID, depending on your config in /etc/mdadm/mdadm.conf (note: the RAID's UUID, not the disk UUID!). Therefore, it does not matter which device name the disk receives...
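For reference, a minimal sketch of what such an entry in /etc/mdadm/mdadm.conf could look like; the device name, array name, and UUID below are placeholders, not values from this setup:

```
# Assemble the array by its RAID UUID, regardless of which
# device names the member disks receive at boot.
ARRAY /dev/md0 metadata=1.2 UUID=f1e2d3c4:b5a69788:79685a4b:3c2d1e0f name=myhost:0
```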
The operating system on our servers always runs on a RAID-1 (either hardware or software RAID) for redundancy. Since Proxmox VE 7 does not offer out-of-the-box support for mdraid (it does support ZFS RAID-1, though), I had to come up with a solution to migrate...
Consider the following example config in /etc/network/interfaces:
iface lo inet loopback
bond-slaves north0 north1
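For context, here is a hedged sketch of what a complete bond stanza around those lines could look like; only the slave names north0/north1 come from the config above, while the bond name, mode, and miimon value are assumptions:

```
auto lo
iface lo inet loopback

auto bond0
iface bond0 inet manual
    bond-slaves north0 north1
    bond-mode 802.3ad
    bond-miimon 100
```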
Using ifupdown, the following config used to be valid and was correctly displayed in the Proxmox VE 7 web interface:
iface vlan1120 inet static
Using ifupdown2, there is no longer any need to add `inet static`. See...
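As an illustration only (the raw device and addresses are placeholders, assuming the VLAN sits on top of a bond named bond0), the two styles might compare like this:

```
# ifupdown: the method had to be spelled out explicitly
iface vlan1120 inet static
    address 192.0.2.10
    netmask 255.255.255.0
    vlan-raw-device bond0

# ifupdown2: the method can simply be omitted
iface vlan1120
    address 192.0.2.10/24
    vlan-raw-device bond0
```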
Thanks for your input. `sync` seems to fix the problem for a testfile:
root@proxmox-b:/srv/proxmox/backup# du -sh /srv/proxmox/backup
root@proxmox-b:/srv/proxmox/backup# ceph df detail
SIZE AVAIL RAW USED %RAW USED OBJECTS
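For anyone wanting to reproduce this, a rough sketch of the test I ran; the file name and size are just examples:

```
# create a large test file on the CephFS mount, then remove it
dd if=/dev/zero of=/srv/proxmox/backup/testfile bs=1M count=10240
rm /srv/proxmox/backup/testfile

# without sync, du / ceph df still report the space as used;
# after flushing, the space is released again
sync
du -sh /srv/proxmox/backup
ceph df detail
```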
Thanks for your reply.
Sorry for not being more specific in my initial post. We only keep x days of backups, and the total capacity needed for those backups is somewhere around 1 TB, which remains more or less constant.
When I add a 100 GB file and remove it again, `du` shows...
We are using ceph-fuse to mount a CephFS volume for Proxmox backups at `/srv/proxmox/backup/`.
Recently, I noticed that the backup volume kept running out of free space, causing the backup jobs to fail (we had a Ceph quota of 2 TB in place on the pool for safety reasons)...
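For completeness, the mount and the quota were set up roughly like this; the monitor address and pool name below are placeholders, not our actual values:

```
# mount the CephFS volume via FUSE at the backup path
ceph-fuse -m 192.0.2.1:6789 /srv/proxmox/backup

# cap the underlying data pool at 2 TiB as a safety net
ceph osd pool set-quota cephfs_data max_bytes 2199023255552
```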
We have created a Proxmox VE 4.2 cluster and discovered that support for DRBD9 is built-in. Nice work!
When we tried resizing the volume of a VM we got the following error message:
We were able to resize the volume using basic commands as described in...
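To give an idea of the kind of manual steps involved, here is a rough sketch; the resource/volume names and sizes are placeholders and assume an LVM-backed DRBD volume for VM 100:

```
# grow the backing logical volume on every node
lvextend -L +10G /dev/drbdpool/vm-100-disk-1_00

# let DRBD pick up the new size of the backing device
drbdadm resize vm-100-disk-1

# finally, tell Proxmox (and the guest) about the new disk size
qm resize 100 virtio0 +10G
```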