Yes, that would solve most of our problems :)
I did re-create our datastores, this time adding the special vdevs when creating the datastores, and first impressions are good. Listing backups is faster than before, and no random sync errors like before. The first verify & garbage collect jobs...
Yes, that would be very interesting. We could do something today with two datastores, one on SSDs and one on HDDs, and a sync job. One downside is that to restore from the tier-2 datastore, you would have to add both datastores to PVE. Not a big problem, we already do something similar where we...
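For anyone wanting to try the two-tier idea, the sync job can be set up from the CLI. This is only a sketch; the datastore names (`tier1-ssd`, `tier2-hdd`), the remote name, host and credentials are all placeholders:

```
# Register the primary PBS server as a remote (host/credentials are examples).
proxmox-backup-manager remote create pbs-primary \
    --host 192.0.2.10 --auth-id 'sync@pbs' --password 'secret'

# Pull backups from the SSD datastore into the HDD datastore on a schedule.
proxmox-backup-manager sync-job create tier2-pull \
    --store tier2-hdd \
    --remote pbs-primary \
    --remote-store tier1-ssd \
    --schedule daily
```

With pruning configured more aggressively on the SSD tier than on the HDD tier, the SSD datastore stays small while the HDD tier keeps the longer history.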
I recently added a special vdev to an existing pool, and after monitoring it for a couple of days and seeing ~10 GB of data on the special vdev for a 20 TB datastore, I thought "that's fine, all the data in the datastore will get re-written eventually and all the metadata will be on the special...
I was thinking I could get away with smaller SSD storage for the latest backups, and tape for longer-term storage. With de-duplication and compression, the storage needed on SSD might not be that much less than having incremental snapshots all on SSD.
You're right, I added the special dev to an already...
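For reference, this is roughly what adding a mirrored special vdev to an existing pool looks like. Pool and device names below are placeholders, and note the caveat from the posts above: only metadata for *newly written* blocks lands on the special vdev, existing data is not migrated:

```
# Add a mirrored special vdev to an existing pool (device names are examples).
# Always mirror the special vdev -- losing it loses the pool.
zpool add tank special mirror /dev/nvme0n1 /dev/nvme1n1

# Per-vdev usage; only metadata written after the add shows up on the special vdev.
zpool list -v tank
```

Creating the pool with the special vdev from the start (as described earlier in the thread) avoids the slow-migration problem entirely.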
I have two servers running PBS. Three clusters back up to the first PBS server, and the other PBS server syncs backups from the first one.
PVE does hourly backups of 10-20 VMs, staggered between clusters so they're not backing up at the same time.
Both servers have 6x10 TB HDD in ZFS "RAID10"...
I have two servers, each with 6x10 TB HDDs in RAID10 ZFS (three mirrors striped). 30 TB total usable, 15 TB currently used.
It works, but as you can imagine, verification and garbage collection take a long time.
I was considering buying PCIe M.2 cards and adding a mirror of NVMe drives to use...
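The "RAID10" layout described above (three two-way mirrors striped together) would be created like this; the pool and device names are only examples:

```
# Three striped two-way mirrors ("RAID10"); device names are placeholders.
zpool create tank \
    mirror /dev/sda /dev/sdb \
    mirror /dev/sdc /dev/sdd \
    mirror /dev/sde /dev/sdf
# 6 x 10 TB with mirroring -> 30 TB usable, matching the numbers above.
```

This layout halves raw capacity but gives much better random-read IOPS than RAIDZ, which matters for PBS verify and garbage-collect workloads.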
Hi,
I've posted before about this (https://forum.proxmox.com/threads/ceph-mirroring-between-datacenters.66890/#post-300712).
We're still working out our DR plans. I'm a little spooked by reports of greatly reduced IO on ceph when using journal-based mirroring.
We have two identical 3-node...
We're running a three-node Proxmox cluster on 6.2. Over the last few months we've been replacing a lot of our networking gear, swapping older Cisco switches and routers for Juniper EX3400, EX4600 and MX204.
Shortly after moving our Proxmox cluster from a Cisco Nexus 5010 to a virtual chassis with...
Interesting. What is the plan if the primary site goes down (fire, flood, nuclear meltdown etc.)? Could you start a fourth monitor on the secondary site and get it up and running?
Regarding write overhead with journaling, all the drives in the ceph cluster(s) are NVMe SSDs.
I'm setting up a new Proxmox environment, in two datacenters, with three nodes in each datacenter. The new setup
will use ceph as shared storage. Our old cluster is a mixed bag of servers spread across both datacenters with no
shared storage.
We have redundant 10G connections between the...
I have a 3-node cluster setup in production. Recently we discovered a problem where fragmented UDP packets
were being dropped somewhere along the way from our VMs. We finally tracked the culprit down, and it
was the fact that Proxmox had set
net.bridge.bridge-nf-call-ip6tables = 1...
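If anyone hits the same symptom, the usual fix is to stop bridge-netfilter from passing bridged frames through iptables/ip6tables, so fragmented packets aren't mangled by conntrack. This is a sketch and assumes you do not rely on the host firewall to filter bridged VM traffic:

```
# /etc/sysctl.conf (or a file in /etc/sysctl.d/)
# Keep iptables/ip6tables/arptables from seeing bridged frames.
# Only safe if you don't need the PVE firewall on bridged traffic.
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-arptables = 0
```

Apply with `sysctl -p` (the `br_netfilter` module must be loaded for these keys to exist).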