Yes, it was quite clear in the script that I should apply the -e argument, but it did not help much. Only around 500 MB or so was cleaned.
And the digging process also gives me no leads ;(
du -h --max-depth=1 /
108G /LTdata
1.2T /LTData2
512 /media
144M /var
46M /dev
0 /sys
2.0K /mnt...
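For what it is worth, du only sees what is reachable through the mounted filesystems, so on ZFS any space held by snapshots, reservations, or unmounted datasets will not show up there. A rough sketch of what to check instead (assuming the pools are named after their mountpoints, LTdata and LTData2):
# per-dataset breakdown including snapshot and reservation usage
zfs list -o space -r LTdata LTData2
# largest snapshots first
zfs list -t snapshot -o name,used -S used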
Did you ever find the answer?
I am running into the same issue. I have a 128 GB SSD ZFS two-way mirror that is almost totally full.
Running the cleanup script did not help ;(
Filesystem Size Used Avail Use% Mounted on
rpool/ROOT/pve-1 113G 109G 3.7G 97% /
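Same idea here: before hunting around with du, it may be worth seeing whether the space is really in the root dataset or hiding in snapshots (a sketch against the rpool shown above):
zfs list -o space -r rpool
zfs list -t snapshot -r rpool -o name,used -S used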
OK Leesteken, I was under the impression that both Martin and Tom were Proxmox team members, from the same team that never addressed these specific questions of mine.
For the rest of us, yes, all volunteers and such.
And before we start asking 'what is your use case', please first address my earlier questions related to the level of support regarding certain issues.
So my raising a concern about ZFS is NOT amongst those.
I really do not mean any disrespect, @martin.
I respect and love Proxmox and your team.
But I have asked on this forum a few times whether something would be covered by getting a subscription.
The answer was always
radio silence.
So I am not sure what...
I am ringing the alarm bells!!!
ZFS should not be idle. I mean, it is a small price to pay if the tiniest of IO operations is done every 30 seconds, or maybe every hour or whatever seems useful.
Doing no health checks is beyond my understanding at the moment. Can someone please shed some...
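To illustrate the kind of tiny periodic IO I mean, something as crude as this would already do it (just a sketch; /tank is a placeholder for the pool's mountpoint):
# hypothetical /etc/cron.d entry: write and remove a small file every minute
# so the pool is never completely idle and a dead disk surfaces quickly
* * * * * root dd if=/dev/zero of=/tank/.heartbeat bs=4k count=1 conv=fsync status=none && rm -f /tank/.heartbeat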
Yes Fiona, a file create operation did trigger the pool to notice that one of its drives was missing.
Is this intended behaviour in the source code? If so, then I would like to follow up with the OpenZFS people who build the code to learn about the reasoning behind this decision. It might be...
Also, if I were to run a FreeBSD system and do the same, would it behave similarly by being silent some of the time?
The point I am trying to make is that I no longer believe it is the ZFS code. It might be that some slight adaptations have been made by the Proxmox team that allow for this weird...
Anyone? How do I create an intensive IO operation so I can pull a disk from the pool while it is involved in said operation?
I do stress, though, that this should not be needed in the first place. It would help me a lot to learn why the current mechanism is the way it is.
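In case it helps anyone following along, my current plan is nothing fancier than a sustained write with dd (a sketch; /notimportant is the mountpoint of the pool I am testing on):
# keep the pool busy with a large sequential write while the disk is pulled
dd if=/dev/urandom of=/notimportant/pulltest.bin bs=1M count=50000 status=progress
# clean up afterwards
rm /notimportant/pulltest.bin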
In an effort to shed more light on the matter, I am trying to create a disk pull-out scenario while the pool is in operation.
..
zpool status
pool: notimportant
state: ONLINE
scan: resilvered 1.18T in 02:42:31 with 0 errors on Thu Nov 24 15:48:51 2022
config:
NAME...
While I get my thoughts together again, can one of you please explain why it would be a good thing for ZFS (so in no way am I involving the Proxmox team here) to only do health checks when there are IO operations?
The more I think of it, the more it does not make sense to me, as one just a...
I had to break off my test because of... reasons ;(
OK, please let me be your means, then, to settle this debate once and for all.
What test shall I run to make certain whether there is an issue worth exploring or whether it is just a misunderstanding on the user's side?
I have a SuperMicro and a SilverStone case, both with HDD backplanes configured to let the motherboard do all the "thinking".
Hence the removal of a drive does get noticed by the OS in both cases, whichever of system A or B I tried this in.
More to follow
I have not even started the test yet (and I will soon), but having a pool in active data flux should not be the only way to have ZFS tell you when things might go bad.
I will have more to say once I have done my test.
Interesting point.
No, I never did this while the disks were in obvious operation. The pools on which I ran the tests were not in use, so there were no changes.
Hold on, and I will do the same test but this time force a change to the pool.
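Roughly what I have in mind for that (a sketch; pool name taken from the earlier zpool status): force a write to the pool, pull the disk, and follow the ZFS event stream at the same time:
# terminal 1: follow ZFS events while the disk is pulled
zpool events -f
# terminal 2: force a change on the pool, then check what ZFS thinks
touch /notimportant/change-trigger
zpool status notimportant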
I am getting really worried.
Just tried the drive removal from a 3 way mirrored pool on a different system.
I pulled out one of the disks.
syslog:
Nov 24 08:37:24 pver1 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Nov 24 08:37:29 pver1 kernel: ata6: SATA link down (SStatus 0 SControl...
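The kernel clearly sees the SATA link drop; what remains to be checked is whether ZFS itself registered anything at that moment (a rough sketch of how to verify that):
zpool status -x        # prints "all pools are healthy" if ZFS has not noticed yet
zpool events | tail -n 20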