I am not convinced!
4 binary TB (correctly: 4 TiB) is 4,398,046,511,104 bytes, which is 4,294,967,296 KILObytes. That last figure is possibly what gets misread and displayed as 4.2 TB...
There are multiple VMDK disks larger than 4 TB on this NAS, moved there with VMware. That makes me confident the NAS has no issue with files over 4 TB. And the PURE-Fibre thick-LVM volumes also hold multiple disks over 4 TB.
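The arithmetic is easy to check in a shell (bare numbers, nothing setup-specific):

```shell
# 4 TiB expressed in bytes, and the same amount expressed in kilobytes (KiB)
echo $((4 * 1024 * 1024 * 1024 * 1024))          # 4398046511104 bytes
echo $((4 * 1024 * 1024 * 1024 * 1024 / 1024))   # 4294967296 KiB
```

4,294,967,296 KiB read naively as decimal bytes-per-kilobyte gives roughly 4.29 * 10^12 bytes, i.e. the "4.2 TB" figure.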
It is a bit curious...
Hi!
I have the same problem and it looks like I found a solution.
NFS shares from the TrueNAS SCALE virtual machine are mounted into the unprivileged LXC via the host.
In /etc/fstab, I added vers=3 and lookupcache=none to the NFS mount options.
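For reference, a minimal sketch of what such an /etc/fstab line could look like — the server address and both paths are placeholders, not taken from the original post:

```
# NFSv3 with lookup caching disabled (hypothetical server and paths)
192.168.1.10:/mnt/tank/share  /mnt/nfs-share  nfs  vers=3,lookupcache=none  0  0
```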
One day...
ask your NAS again and test a big file transfer from your desktop PC to the NAS.
Maybe you have a big video file?
I use e.g. rsync to my ZFS NAS based on Proxmox VE and store all my raw video files there.
Hey, did you ever find a solution for that? Because I'm stuck with a pretty similar issue. After exactly 4.2 TB the transfer seemingly stops, dropping from 400 MB/s from the NAS NFS share to 0.
root, data, and swap are LVM volumes that have a fixed amount of space assigned in the volume group.
This simply means that 88% of the volume group's (vgs) storage has been allocated. It does not have to be actually in use.
You can think of it a bit like...
I encountered a similar issue. I desired to execute a hook script that would reinitialize the AMD GPU after a virtual machine (VM) shutdown. The script functioned correctly when the VM was shut down via the web interface. However, if I initiated...
Thank you for your reply. I still can't quite wrap my head around the whole thing.
So it is not like Windows, where I assign, say, a fixed 20 GB to a partition; instead, in this case I have, for example, given the container vm-106-disk-0...
So I am thinking of creating a Proxmox cluster with three (3) nodes and using direct network links between the nodes (no switch). That requires two network ports dedicated just to this, plus the normal ports. I don't think I will be able to get small...
That's surprising and not expected, of course.
How did you create that pool? What gives "zpool status" before reboot and "zpool import" after it?
We are talking "write cache", right? It tells ZFS that data has been written to disk while it...
I managed to fix it using these steps:
https://medium.com/opsops/recovering-ceph-from-reduced-data-availability-3-pgs-inactive-3-pgs-incomplete-b97cbcb4b5a1
DO THAT ONLY IF INACTIVE PGs ARE EMPTY (MEANING OBJECTS = 0, seen using command ceph pg...
this is exactly my hardware, but I did not disable the cache. The zpool completely disappears after each reboot. Is this because of the cache? Do I need to disable it?
You can try using the NFS share as removable storage in PBS; if the PBS host dies, you can create a new one and add the storage again as removable existing media.
With this trick you can access your backups again.
Signed up because of you and your HowTo
I am super thankful for this, since I could not figure it out in the previous posts.
Followed your how-to and have an old laptop up and running with wireless.
This is only for a POC, and once I have figured...
You can try booting with a live Debian image and copying the disks in the LVM volume to an external USB drive, or via rsync to a new machine. To find the disks you can use:
pvesm list local-lvm
Then for example:
qemu-img convert -O qcow2...