That's fine. The setup is an experiment. Once I'm happy with it, I plan on swapping them out for Microns, which do have PLP and handle the load better.
But thanks for the advice :)
I am playing with it now, thank you.
Side question though: does this not happen with Proxmox Backup Server too?
At that scale, with 3 RAIDZ vdevs of 12 drives each, it would be hard for me to pick up on by myself. How can one check whether padding is an issue on that array too?
It is defaulting to a volblocksize of 8K and a recordsize of 64K:
root@pm2:~# zfs get volblocksize,recordsize zStorage
NAME PROPERTY VALUE SOURCE
zStorage volblocksize - -
zStorage recordsize 64K local
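For reference, in case someone else lands here: recordsize only applies to datasets, while volblocksize only exists on the zvols themselves, which is why it shows as "-" when queried against the pool. A rough way to check the zvols directly (assuming the pool is zStorage, as above; the column list is just what I found useful):
zfs list -r -t volume -o name,volblocksize,used,refreservation zStorage
zfs get -r -t volume volblocksize zStorage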
Ok, that's interesting.
What would you recommend I do to resolve the issue?
I don't want a massive performance hit, but I assume I need to tweak it to reduce the space loss?
I have added the output below. Unsure what to look for in this regard, @Dunadan?
root@pm2:~# zpool list -v
NAME SIZE ALLOC FREE CKPOINT EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
rpool 135G 3.08G 132G -...
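In the meantime, my working assumption (happy to be corrected) is that the default volblocksize for new disks can be raised on the Proxmox storage itself, but existing zvols keep the value they were created with, so the disks would have to be moved or recreated to pick it up. Something along these lines, assuming the storage entry is called zStorage and I have the option name right:
# raise the default volblocksize for newly created zvols on this storage
pvesm set zStorage --blocksize 16k
# existing disks keep their old volblocksize; moving a disk to another storage
# and back (or backup/restore) recreates the zvol with the new value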
Hey Guys,
I have scoured through the forum to try and answer my question with no success...
I have 3 Proxmox nodes running Windows VMs with ZFS-backed storage and thin provisioning enabled.
I noticed one ZFS array reporting 17TB used, but in the VM there's 20TB of storage assigned, but...
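For reference, this is roughly how I've been looking at where the space goes, split between data, refreservation and snapshots (assuming the pool in question is zStorage; the property list is just an example):
zfs list -o space -r zStorage
zfs get -r used,referenced,refreservation,volsize zStorage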
but the 10G is the main network. I would then rather move the 10G network to an internal range and the 1G network to the live IP-facing ranges :).
But one question still remains: what's causing the reboot?
Was it because the traffic was on the live IP range? Or was something else causing it...
I did not even know about that option, thank you!
So now the question becomes: which IP range do I need to avoid sending the traffic over?
Meaning, do I need to avoid using the corosync range, or the live range, or what would you recommend?
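For reference, this is what I'm using to see which range corosync is actually bound to, and my understanding is that migration traffic can be pinned to a specific network in /etc/pve/datacenter.cfg (the 10.0.0.0/24 below is just my 10G range as an example):
cat /etc/pve/corosync.conf
corosync-cfgtool -s
# then, in /etc/pve/datacenter.cfg:
migration: secure,network=10.0.0.0/24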
Hey Guys,
I'm hoping for some advice.
I've got a cluster of 5 servers. Each has one 10Gb SFP+ port and two 1Gb network ports connected.
The IP address ranges for each differ, i.e.
10G = 10.0.0.2-7
1G = 10.0.1.2-7
1G = 10.0.2.2-7
And I've set up corosync to use the last 1G port range while I use...
Hey Guys,
I am having an issue. I have a stale Ceph pool that I can't remove from the server.
I've removed all the nodes except the last one, but when I run `pveceph purge` on that server I am met with `Unable to purge Ceph! - remove pools, this will !!DESTROY DATA!!`, but when I try to remove...
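If I understand the error correctly, the remaining pools have to be destroyed before the purge will run; something like this is what I'm considering (pool name is a placeholder, corrections welcome):
ceph osd pool ls detail
ceph config set mon mon_allow_pool_delete true
pveceph pool destroy <poolname>
pveceph purge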
Hey Guys,
We made a buggerup and removed a hard drive from a server, then afterwards deleted the OSD using the UI.
Only after doing so did we realize the OSD's data had not been copied over completely yet, so we need to remount it to recover the missing PG.
How can we re-add the OSD back into Ceph? Keep...
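A possible route, assuming the data on the drive is still intact (please shout if this is a bad idea), would be to let ceph-volume rediscover and reactivate it; the id/fsid below are placeholders taken from its output:
ceph-volume lvm list
ceph-volume lvm activate <osd-id> <osd-fsid>
Since the OSD was already removed from the cluster map, I assume the auth key and CRUSH entry would also need to be recreated, but I'd appreciate confirmation before we touch anything.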
Hey Guys,
Ok, so I found the problem. The server had previously been compromised. I was able to resolve the hack but missed one cronjob they made:
* * * * * root bash -c 'for i in $(find /var/log -type f); do cat /dev/null > $i; done' >/dev/null 2>&1
Which ended up nulling out the logs every...
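For anyone else cleaning up after something similar, it's probably worth sweeping the other cron locations and systemd timers too in case anything else was left behind; a rough check:
grep -r . /etc/crontab /etc/cron.d/ /var/spool/cron/ 2>/dev/null
systemctl list-timers --all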