No, in fact it is not working with NTFS deduplicated volumes.
Some PVE versions (depending on the in-kernel NTFS driver) give an error when trying to restore, and the latest versions give no error, but the restored file is empty!
And don't forget: dedup tasks on Windows volumes start a few days after...
Yes, we worked on that, but at the PVE host level, and concluded that everything works better with ntfs3 (contributed by Paragon Software to Linux 5.15?).
Everything works well EXCEPT for deduplicated files, despite some notes about it (https://www.paragon-software.com/ntfs-deduplication-support/#features)...
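For reference, a minimal sketch of how such a volume can be mounted on the PVE host with the in-kernel ntfs3 driver (device and mount point are examples, not our real ones):

# mount the Windows partition with the in-kernel ntfs3 driver
# instead of the old FUSE-based ntfs-3g
mount -t ntfs3 /dev/sdb1 /mnt/winvol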
Hi,
Thank you very much for clarification.
I need some more: suppose a chunk belongs to 50 VMs, and a verification task begins (at the datastore or namespace level).
How many times will this chunk be verified?
Christophe.
Well, the problem seems solved by adding an IP alias in the same subnet as the main one (/21); the bench via this alias was then OK!!
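For the record, roughly what we did (interface name and address are examples, not our real ones):

# add a second IP in the same /21 as the main address
ip addr add 192.168.10.50/21 dev eno1 label eno1:0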
Tested again on the main IP: OK!
No more bad tests, even after a reboot.
I have NO idea where the problem came from; it was still there after 2 reboots...
Christophe.
No, nothing.
2 NICs:
- One quad-port I350 Gigabit Network Connection (1 Gbps)
- One dual-port FastLinQ QL41000 Series 10/25/40/50GbE Controller.
No, tests are now done locally from PBS to PBS, via localhost or IP.
Via localhost: 390 MB/s
Via the "real" IP: 1.20 MB/s. Nothing special about this...
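The bench in question is roughly this (datastore name and IP are examples):

# TLS speed test against the local PBS, via localhost then via the host IP
proxmox-backup-client benchmark --repository localhost:store1
proxmox-backup-client benchmark --repository 192.168.10.2:store1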
Hi,
I don't know if this is still true, but we have 2 x 3-node PVE 8 clusters + PBS with lots of timeouts, both built on the same hardware and by the same guy. Both clusters suffer abysmal TLS performance as soon as traffic goes through the network stack.
Something seems wrong with TLS:
[root@pbs1:~]#...
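To separate raw network throughput from TLS, a plain TCP test between the nodes may help (addresses are examples):

iperf3 -s                # on the PBS node
iperf3 -c 192.168.10.2   # on a PVE node, towards the PBS IP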
Hi all,
It seems PBS 3.0 is able to file-restore files from a deduplicated NTFS Windows 2022 VM. This is great! PBS 2.x was NOT able to do so.
I haven't seen this feature in any doc yet, but more and more of our users are happy.
Is someone from the Proxmox team aware of that? ;)
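For anyone wanting to try it from the CLI, something along these lines should exercise it (repository, snapshot and paths are made-up examples):

# browse and extract a file from inside the VM disk image
proxmox-file-restore list "vm/100/2023-07-01T02:00:00Z" / --repository root@pam@pbs.example:store1
proxmox-file-restore extract "vm/100/2023-07-01T02:00:00Z" /drive-scsi0.img.fidx/part/2/test.txt /tmp/test.txt --repository root@pam@pbs.example:store1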
Yes.
It is enough to see the VG.
As soon as multipathing is working, you can hide the iSCSI part of the storage.
But in the end, this is NOT what we did: keeping them in the Proxmox GUI gives an easy way to know when a path is down.
To hide them, IIRC, editing /etc/pve/storage.cfg is the way.
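For example, an entry along these lines in /etc/pve/storage.cfg (names and portal are made up) keeps the iSCSI LUN usable as an LVM base while not offering it directly for guest disks:

iscsi: san1
        portal 192.168.1.100
        target iqn.2001-05.com.example:san1
        content none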
Regards...
Yes, that is my understanding.
In my case it's a disk move from a Ceph storage to an iSCSI one.
Best solution for me: choosing async_io: threads, which seems to coexist with any cache policy and any storage type.
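For example (VM ID, storage and disk names are placeholders):

# switch the disk to aio=threads, here combined with writeback cache
qm set 100 --scsi0 iscsi-lvm:vm-100-disk-0,aio=threads,cache=writeback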
Regards,
Christophe.
Hi all,
This kind of error is annoying...
A default option that generates errors is not acceptable, at least for us.
Can we hope someone will solve the problem?
If not, I will switch our clusters from io_uring to native.
I don't know the performance gain/loss, but with our workloads...
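For reference, the change should look like this in /etc/pve/qemu-server/<vmid>.conf (disk line is an example):

scsi0: local-lvm:vm-100-disk-0,aio=native,size=32G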