If backups are what you value most, install PBS on bare metal following best practices (special vdev, care with RAIDz depending on the performance needed, etc.). Leave some room for a TrueNAS VM (or OMV or any other appliance) if you really need...
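As a rough sketch of what those best practices can look like in command form (device names are purely hypothetical, adjust the vdev layout to your redundancy and performance needs):

# RAIDz2 data vdev for the datastore plus a mirrored special vdev on small SSDs,
# so metadata-heavy operations (e.g. garbage collection) don't crawl on spinning disks
zpool create -o ashift=12 backup \
    raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf \
    special mirror /dev/nvme0n1 /dev/nvme1n1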
No, you can't wipe the disks if you want to use the data on them. I don't remember the exact steps ATM, can't check them right now, and they aren't trivial to carry out. You are essentially in a disaster recovery scenario. Off the top of my head, you need...
Thanks a bunch!
It seems that you have something in common with what was reported over at #6635—you're using Kubernetes, too, and there's also something stuck in your RBD trash. In particular, it seems that tasks for removing...
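If you want to cross-check on your side, the trash can be inspected from any node with the Ceph CLI (pool name and image id below are placeholders):

# List what is currently sitting in the RBD trash of the affected pool
rbd trash ls --long --pool <your-pool>
# A stuck entry can be removed by its image id once nothing references it anymore
rbd trash rm --pool <your-pool> <image-id>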
Hi!
You need to connect the cables of the "HPE DL38X Gen10 4-port 8 NVMe Primary slim SAS FIO riser" to a U.3-type backplane (server front side) to use the SAS disks. You have a U.2-type backplane in the front of the server, which is why it isn't working.
You cannot...
We are pleased to announce the first stable release of Proxmox Backup Server 4.0 - immediately available for download!
The new version is based on the great Debian 13 "Trixie" but we're using a newer Linux kernel 6.14.8-2 as stable default in...
We are pleased to announce the first stable release of Proxmox Virtual Environment v9.0 - immediately available for download!
The new version is based on the great Debian 13 "Trixie" but we're using a newer Linux kernel 6.14.8-2 as stable...
Verify that /etc/corosync/corosync.conf and /etc/pve/corosync.conf have the very same content on all nodes, and check "config_version: 123".
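A quick way to check both points on every node (nothing fancy, just standard tools):

# Must produce no output if both copies are identical
diff /etc/pve/corosync.conf /etc/corosync/corosync.conf
# The config_version must be the same number on all nodes
grep config_version /etc/corosync/corosync.conf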
((Anecdotal: once I had a node turned off (for power saving in my homelab) and manipulated the...
Create a mirror vdev and add it to the current RAID10 zpool, which will then have 3 mirror vdevs instead of the current 2. Capacity will increase by ~8 TB.
No data will be moved to the new disks, so most of your I/O will still hit your...
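For reference, the operation itself is a one-liner; something along these lines, with placeholder pool/device names and a dry run (-n) first to confirm the resulting layout:

# Preview: should show a third mirror vdev being appended to the pool
zpool add -n <pool> mirror /dev/disk/by-id/<new-disk-1> /dev/disk/by-id/<new-disk-2>
# Do it for real, then verify
zpool add <pool> mirror /dev/disk/by-id/<new-disk-1> /dev/disk/by-id/<new-disk-2>
zpool status <pool>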
Hello,
<TLDR>
It seems that PVE, LXC, or even Ceph changes ext4's mmp_update_interval dynamically.
Why, when, and how does it do that?
</TLDR>
Full details below:
In a PVE 8.1 cluster with Ceph 18.2.1 storage, we had a situation yesterday where a privileged...
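For what it's worth, the current MMP settings of the filesystem can be inspected (and, if needed, set explicitly) with tune2fs; the device path below is just an example for a kernel-mapped RBD image:

# Show MMP block and update interval as currently stored in the superblock
tune2fs -l /dev/rbd/<pool>/<image> | grep -i mmp
# Set the update interval explicitly, e.g. back to the usual 5 second default
tune2fs -E mmp_update_interval=5 /dev/rbd/<pool>/<image>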
Use RAID10 (striped mirrors). The capacity of the storage will be 50% of the total of all drives. You can select ZFS during installation, and the RAID type too, or even install on a mirror of two drives and use the rest later as a different...
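To put numbers on the 50%: with, say, six 8 TB drives laid out as three mirror pairs, you get 3 × 8 TB = 24 TB usable out of 48 TB raw (figures purely illustrative).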
I know, I was involved in that conversation. I didn't, for two reasons:
- Had no time to implement a proper test methodology.
- Modifying each host's systemd files is a no-go, as that becomes unmanageable and hard to trace over time, so I'll just...
Does the bill compare in the same range too? Because few people need a Lambo, and of those, even fewer can afford one. Feels like Ceph and that Hammerspace thing target completely different use cases/budgets.
Further proof of @LnxBil's argument is that you cannot put an LXC container's disk on ZFS-over-iSCSI storage, because there is no QEMU involved with LXC.
Maybe we are mixing terms and referring to different kinds of storage from PVE's perspective, even if they...
The reference documentation mentions neither multipath support nor the lack of it for ZFS-over-iSCSI, so you have to inspect the QEMU command line of the running VM. The QEMU process does indeed use a direct iSCSI connection, as...
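For example, one way to check it (the VMID is a placeholder; qm showcmd just prints the command line PVE would use to start the VM):

# Look for a drive with an iscsi:// URL instead of a local block device path
qm showcmd <vmid> --pretty | grep -i iscsi
# Or inspect the already running QEMU process
ps -ef | grep -o 'iscsi://[^ ]*'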
Yes, full default settings. Install the package, do the restore from the web UI. This is the full log, which shows it used 4 restore threads and 16 parallel chunks:
new volume ID is 'Ceph_VMs:vm-5002-disk-0'
restore proxmox backup image: [REDACTED]...
Literally the first search result on Google for "pve zfs replication cannot create snapshot out of space":
https://forum.proxmox.com/threads/replication-error-out-of-space.103117/post-444342
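In general the diagnosis boils down to checking where the space on the target dataset went, e.g. (dataset name is a placeholder):

# AVAIL vs. what is used by data, snapshots and refreservation
zfs list -o space <pool>/<dataset>
# Reservations on zvols are a common culprit
zfs get reservation,refreservation,usedbyrefreservation <pool>/<dataset>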
Please, make the effort to use CODE tags and format your...
Did a test just now with production-level hardware (EPYC Gen4, many cores, lots of RAM, 5-node cluster + Ceph 3/2 pool on NVMe drives, 25G networks, and PBS with an 8-HDD RAID10 + special device 74% full, nearly 15000 snapshots)...