Now it seems better with GFS2; I took this hint:
One thing I've noticed: I didn't mount the device on boot, I always mounted it manually after rebooting a node. When mounting it on boot I got some problems with the machine. Maybe I have to...
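For mounting a GFS2 volume at boot, one common approach is to mark the fstab entry so the mount waits for the cluster stack. This is a hypothetical sketch; the device path and mount point are examples, not from my setup:

```
# /etc/fstab - example entry; device path and mount point are placeholders.
# _netdev delays the mount until the network (and thus the cluster stack)
# is up. Alternatively, use noauto and mount from a unit ordered after
# the cluster services.
/dev/clustervg/gfs2lv  /mnt/gfs2  gfs2  defaults,_netdev  0 0
```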
Thank you for the information.
So I was able to make a setup with OCFS2 and Proxmox.
Unfortunately I got some kernel stalls and read here in the forum that OCFS2 isn't really stable.
GFS2 could be another candidate; I remember I tried it on a test setup years ago with DRBD. Maybe I'll test it...
Sure, you are right, ZFS should talk to the disks directly.
But if Proxmox doesn't support it in this configuration, I don't know how...
VMware has VMFS for this case and Microsoft has CSVFS (Cluster Shared Volumes). With a Hyper-V server and CSVFS we have a system running at a subsidiary. My hope is to replace it with...
What you write is right, but it doesn't make me really happy ;-)
So, I could play around and will test setting up additional volumes on the MSA. Then I'll provide them to the nodes and create ZFS on them. Then I'll try ZFS replication for the VMs. It is not really shared storage, but something close to it :)...
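The per-VM replication described above can be set up from the CLI with `pvesr`. A minimal sketch; the VM ID, job ID, and target node name are examples, not from my setup:

```shell
# Hypothetical example: replicate VM 100 to node "pve2" every 15 minutes.
# Job IDs have the form <vmid>-<number>.
pvesr create-local-job 100-0 pve2 --schedule '*/15'

# Show configured replication jobs and their current state
pvesr list
pvesr status
```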
I have some older hardware here:
3x HP ProLiant DL360p Gen8
1x HP MSA P2000
I connected the MSA via SAS and set it up with multipath. After some initial problems it seems to work as expected. I created a shared LVM and now I can install VMs and migrate them from one node to another...
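The multipath-plus-shared-LVM setup above can be sketched roughly like this. The multipath device name and the VG/storage names are assumptions, not taken from the post:

```shell
# Verify that both SAS paths to the MSA are active
multipath -ll

# Create a physical volume and a volume group on the multipath device
# (device and VG names are examples)
pvcreate /dev/mapper/mpatha
vgcreate msa_vg /dev/mapper/mpatha

# Register the VG as shared LVM storage in Proxmox VE, so all nodes
# can use it and VMs can migrate between them
pvesm add lvm msa-lvm --vgname msa_vg --shared 1 --content images
</imports>
```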
Proxmox VE 7.1
VM store with ZFS and 6 SSDs
I set up a VM for a secondary domain controller.
When I try to promote the server to the domain it gets an error, and I see in the dcpromo.log that it recognised an active cache.
I tried to switch the settings...
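One setting worth trying for a domain controller VM is the disk cache mode, since Active Directory refuses caching that can lose acknowledged writes. A hypothetical sketch; the VM ID, storage, and disk names are examples:

```shell
# Example: set cache=none on the VM's disk so writes are not acknowledged
# from a host-side cache (VM ID 100 and the volume name are placeholders)
qm set 100 --scsi0 tank:vm-100-disk-0,cache=none

# Verify the new cache mode in the VM config
qm config 100 | grep scsi0
```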
I've defined the following prune settings:
Now I wonder why nothing is pruned; for example, one VM keeps the last month back:
Therefore I did a dry run on this VM and see that keep-hourly 24 keeps the backups of the last days:
Ah, that's the clue ;-)
I switched it from 3 to 2 weeks and now the tape is shown as expired.
But when I read about the retention in the current documentation, I can't comprehend the algorithm from it. Maybe you can add an example there,...
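A dry run like the one mentioned above can be done with `proxmox-backup-client prune`; it prints which snapshots each keep-* option would retain without deleting anything. The group name, keep values, and repository string are examples, not my actual settings:

```shell
# Hypothetical dry run: show what would be kept/removed for group vm/121.
# Repository format is user@realm@host:datastore (values are examples).
proxmox-backup-client prune vm/121 --dry-run \
    --keep-hourly 24 --keep-daily 7 --keep-weekly 2 \
    --repository root@pam@192.168.1.10:store1
```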
I made a new test with the HP DL380 and another CPU:
Intel(R) Xeon(R) CPU E5-2643 v3 @ 3.40GHz
Formatting '/ZFSPOOL/VMSTORE001/images/121/vm-121-disk-0.raw', fmt=raw size=536870912000
new volume ID is 'VMSTORE001:121/vm-121-disk-0.raw'
restore proxmox backup image: /usr/bin/pbs-restore...
I still have a question about the retention.
After I switched the settings of the media pool:
│ Name │ Value │
│ name │ WEEK │
│ allocation │ sun 00:01 │
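For reference, settings like the ones shown above can also be changed from the CLI with `proxmox-tape`. The retention value here is an example, not my actual configuration:

```shell
# Hypothetical sketch: update the WEEK pool's allocation (calendar event)
# and retention (timespan); values shown are examples
proxmox-tape pool update WEEK --allocation 'sun 00:01' --retention '2 weeks'

# Show the pool configuration to verify the change
proxmox-tape pool list
```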
I'm still searching.
On an AMD Ryzen PC the rate is even better; sorry, no results yet...
But what I additionally see on the original server:
When I start multiple restore jobs in parallel, the I/O rate rises to 150-200 MB/s, but the rate of a single job is still at 40-50 MB/s:
On this machine I...
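To reproduce the parallel-restore observation, several restores can simply be started in the background from a shell. The VM IDs, archive volumes, and storage name below are hypothetical examples:

```shell
# Sketch: launch three restores in parallel and wait for all of them,
# to compare aggregate vs. single-job throughput (IDs/names are examples)
for id in 201 202 203; do
    qmrestore "pbs-storage:backup/vm/${id}/2021-08-30T00:00:00Z" "$id" \
        --storage VMSTORE001 &
done
wait    # block until all background restore jobs have finished
```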
I made a new test with another machine:
Xeon(R) CPU E3-1246 v3 @ 3.50GHz
16 GB RAM
1x Seagate Ironwolf PRO ST4000 4TB SATA
2x 1TB SAMSUNG NVME PM983
1x Seagate Ironwolf PRO ST6000 6TB SATA
The 4TB ST4000 is for the OS
The 6TB ST6000 is for the PBS backup store
One NVMe is for the PVE VM store
I want to set up a combined PVE/PBS server on the same machine.
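Running PBS next to PVE on one host comes down to creating the datastore on the backup disk and then attaching the local PBS as storage on the PVE side. A sketch under assumptions; mount path, datastore, and storage names are examples:

```shell
# Create the PBS datastore on the 6TB disk (path/name are examples)
proxmox-backup-manager datastore create backupstore /mnt/st6000

# On the PVE side, add the local PBS instance as backup storage.
# In practice --fingerprint and --password are also required; omitted here.
pvesm add pbs pbs-local --server localhost --datastore backupstore \
    --username root@pam
```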
We have an HP DL380 Gen9 with
E5-2640 v3 @ 2.60GHz
256 GB RAM
2x 1TB SAMSUNG NVME PM983
12x 8 TB HP SAS HDDs
The goal is a backup server which could even act as a disaster-recovery virtualisation server.
Unfortunately I never got good restore rates...
Hmm, then it is my fault; I suspected it.
At the moment I have two datastores on my PBS.
I can't create a job which includes both on one tape.
Is this possible in any setup? Or do I have to use one tape for each store?
There is no real error message:
From the tape job:
2021-08-30T00:00:08+02:00: TASK ERROR: alloc writable media in pool 'WEEK' failed: no usable media found
I suppose with my config (4 tapes, alternated weekly, and a retention of 3) that the current tape should be overwritten.
Even when I look at the...
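To see why the pool reports "no usable media", the media list with status and expiry can be checked from the CLI (pool name taken from the error message above):

```shell
# List all media assigned to the WEEK pool, including their status
# (writable/full) and expiry according to the retention policy
proxmox-tape media list --pool WEEK
```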