I should have prefaced the post with the following disclaimer.
Let me make some clarifications:
1) We will NOT be syncing from Proxmox to the NAS using ZFS-based sync. I would use PVE-Sync for host-to-host replication. We will, however, back up the user data (VMs) (and now also the host...
Any chance you are referring to the following script:
Seems to be based on the following forum thread:
Disaster Recovery: How do you handle that at your org?
Planning on replacing an SBS/Terminal Server solution (Windows-based) at an SMB as a favor.
They currently use software called ShadowProtect SPX on their Windows servers, backing up to a NAS (incremental and full) and to USB HDDs...
Kingston SSDNow V300 120GB, SATA (SV300S37A/120G)
It is rated for 64 TBW.
IMHO these types of SSDs are not suitable as caching devices.
Edit: You use twice as many OSDs in the rapid-wearout server. Basically you have created four times as many writes to the rapid-wearout server's OSDs compared to the...
It's been less than 20 months; it's fine :D
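To put the 64 TBW rating in perspective, here is a rough back-of-envelope sketch. The daily write volume and the write-amplification factor are illustrative assumptions, not figures from the thread:

```python
# Rough endurance estimate for a small consumer SSD used as a cache/journal.
# The 64 TBW rating comes from the drive's spec sheet; the daily write
# volume below is a hypothetical assumption for illustration.

def days_until_worn_out(tbw_rating_tb, writes_per_day_tb, write_amplification=2.0):
    """Days until the rated TBW is exhausted.

    write_amplification=2.0 models a colocated journal, where every
    client write hits the device twice (journal + data).
    """
    effective_writes_per_day = writes_per_day_tb * write_amplification
    return tbw_rating_tb / effective_writes_per_day

# Example: 0.1 TB (100 GB) of client writes per day, journal on the same SSD.
print(days_until_worn_out(64, 0.1))  # -> 320.0 days, i.e. well under 20 months
```

Under those assumed numbers the drive exhausts its rating in under a year, which is why such SSDs make poor caching/journal devices.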
Q1: Replicated pool?
Q2: 8 OSDs per node?
Q3: Same failure domain? (as in host/node, as opposed to OSD)
Q4 (if Q1, Q2 and Q3 = yes):
Did you set size == 3 and min_size == 1 for said replicated pool?
Is that your only pool? What settings do those...
Well, I was trying to minimize the RAM footprint of the ZFS pool.
So the rule of thumb is:
[2 GB base] + [1 GB per TB of storage] + [5 GB per TB of deduplicated data]
In my case that would be 5 GB of ARC.
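The rule of thumb above is easy to turn into a quick calculator. A minimal sketch, where the pool sizes passed in are illustrative assumptions:

```python
# ARC sizing per the rule of thumb:
# [2 GB base] + [1 GB per TB of storage] + [5 GB per TB of deduplicated data]

def arc_estimate_gb(pool_tb, dedup_tb=0.0):
    """Estimated ARC size in GB for a pool of pool_tb terabytes,
    of which dedup_tb terabytes are deduplicated."""
    return 2.0 + 1.0 * pool_tb + 5.0 * dedup_tb

print(arc_estimate_gb(0.5))  # 512 GB SSD pool, no dedupe -> 2.5 GB
print(arc_estimate_gb(3.0))  # 3 TB pool, no dedupe -> 5.0 GB
```

Note this is only a sizing heuristic; actual ARC usage depends on workload and can be capped via `zfs_arc_max`.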
What is your planned maximum size?
That is actually what I meant. Should have asked "What is the...
I have a ZFS pool called SSD running on SSDs that is used for VM OS data. <-- I need advice on ARC allocation.
VM data resides on a RAID-6 HDD pool provided by a hardware RAID controller with cache + BBU.
Proxmox sits on an SSD LVM.
Edit: I just want to know how much RAM I'll need to...
I have a 512 GB pool based on SSDs.
It is used only for VM OS data.
How much ARC do I realistically need with Proxmox's implementation of ZFS?
Background info (not relevant to the problem): the system sits on an LVM-thin SSD pool. VM data sits on a RAID-6...
When I hit http://proxmox.com/, I get redirected straight to https://www.proxmox.com
Maybe they fixed it in the last 40 minutes? :p
Edit: but the wiki is throwing some "cannot access database" errors.
For the 10G network you are looking at this config from the Proxmox wiki:
For the 1G network gear, you take this Open vSwitch bond approach, assuming your network gear is not LACP-capable (and if it is not jumbo-frame capable, you go down this route).
If you DO NOT want to use Open vSwitch for...
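For reference, the Open vSwitch bond setup on the Proxmox wiki looks roughly along these lines in `/etc/network/interfaces`. This is a hedged sketch, not a copy of the wiki config: the interface names (`eth0`, `eth1`, `vmbr0`) and the `balance-slb` bond mode are placeholder assumptions you would adapt to your hardware:

```
# Bond two NICs without LACP using an OVS bond (balance-slb needs no
# switch-side configuration, unlike LACP).
allow-vmbr0 bond0
iface bond0 inet manual
    ovs_type OVSBond
    ovs_bridge vmbr0
    ovs_bonds eth0 eth1
    ovs_options bond_mode=balance-slb

# OVS bridge carrying the bond; VMs attach here.
auto vmbr0
iface vmbr0 inet manual
    ovs_type OVSBridge
    ovs_ports bond0
```

If your switches do support LACP, `bond_mode=balance-tcp` with `lacp=active` is the usual alternative.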
Sorry for the late reply, I was on vacation.
Disclaimer: I am not a network guy either (I just have to re-engineer our 200+ Ceph servers on a weekly basis to get the most out of our 3k+ spinners and 2k+ flash drives, and have them talk to our georedundant (among others) Proxmox clusters). But I think you...
Using an SSD for both OS and OSD is not stupid in and of itself.
It just makes things more complex, to the point where it becomes nonsensical.
Best practice for OSDs is to use storage of the same size and performance characteristics.
If you were to mix sizes and write speeds, you'd...
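One reason mixed OSD sizes bite you: CRUSH weights OSDs roughly in proportion to capacity, so a larger OSD receives a proportionally larger share of the cluster's writes and wears faster. A minimal sketch with illustrative sizes:

```python
# Sketch: share of cluster writes each OSD receives when CRUSH weight
# is proportional to capacity. Sizes below are illustrative assumptions.

def write_share(osd_sizes_tb):
    """Fraction of total writes landing on each OSD, assuming
    capacity-proportional CRUSH weights."""
    total = sum(osd_sizes_tb)
    return [size / total for size in osd_sizes_tb]

# Three 1 TB OSDs plus one 2 TB OSD: the big one absorbs 40% of writes.
print(write_share([1, 1, 1, 2]))  # -> [0.2, 0.2, 0.2, 0.4]
```

So the 2 TB device sees twice the write volume of its 1 TB peers, which is exactly the uneven-wearout pattern discussed above.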
Just so I understand this correctly.
You performed the test as follows:
Test 1: 3 nodes with 3 SSD OSDs acting as their own journal.
Test 2: 2 nodes with 3 SSD OSDs acting as their own journal, and 1 node with 3 SSD OSDs with the journal on a P3600?
If so, then you are right, there will only be a...
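The difference between the two setups comes down to write amplification on the SSDs: with a colocated filestore journal, each client write lands on the SSD twice (journal, then data), while moving the journal to a separate NVMe (the P3600 node) means the SSD sees each write once. A back-of-envelope sketch, with an illustrative write volume rather than measured numbers:

```python
# Sketch: write volume hitting an SSD OSD with and without a colocated
# filestore journal. The 100 GB client write volume is an assumption.

def ssd_write_volume_gb(client_writes_gb, journal_colocated):
    """GB actually written to the SSD for a given client write volume."""
    amplification = 2 if journal_colocated else 1
    return client_writes_gb * amplification

print(ssd_write_volume_gb(100, journal_colocated=True))   # -> 200 (Test 1 nodes)
print(ssd_write_volume_gb(100, journal_colocated=False))  # -> 100 (P3600 node)
```

That halving of SSD writes is the main effect you would expect to see between the two test configurations.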