Nemesiz - this made all the difference in the world. One last question - I have some latency on a 15TB Ceph volume. It's set to Writethrough cache as well. I use this particular Ceph mount for backup storage, which is fairly static. Other than the 5 second lag for caching, are...
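For reference, a minimal sketch of checking and switching the cache mode on the VM disk, in case tuning it turns out to be worthwhile. VM ID 100, the storage name 'ceph-vm' and the disk name below are placeholders for this setup, and the VM needs a stop/start before a new mode takes effect:
root@mox2:~# qm config 100
root@mox2:~# qm set 100 --scsi0 ceph-vm:vm-100-disk-1,cache=writeback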
Thanks again for your help. I'm going to let things run like this for a day or two and do some testing tonight to see how things behave. Fingers crossed.
Thanks Nemesiz... They are Samsung 850 Pros - but there are better/stronger/faster drives out there.
I read these threads yesterday:
https://forum.proxmox.com/threads/zfs-sync-disabled.37900/
https://forum.proxmox.com/threads/proxmox-zfs-raidz1-pool-on-ssd-drives-sync-parameter.31130/#post-155543
So...
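For anyone landing here later, both threads come down to the dataset's sync property. A quick sketch against my Local-ZFS dataset (sync=disabled skips synchronous writes, so it can lose the last few seconds of writes on power loss - test setting only):
root@mox2:~# zfs get sync Local-ZFS
root@mox2:~# zfs set sync=disabled Local-ZFS
root@mox2:~# zfs set sync=standard Local-ZFS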
And then: Local-ZFS
root@mox2:~# zfs get all Local-ZFS
NAME       PROPERTY  VALUE                  SOURCE
Local-ZFS  type      filesystem             -
Local-ZFS  creation  Tue Jun 19 23:55 2018  -
Local-ZFS  used      573G                   -...
Hi Nemesiz - not sure if this is helpful or what you are looking for - and I appreciate you reaching out and trying to help!
Just a single disk zpool RAID-0.
dir: local
        path /var/lib/vz
        content rootdir,images,vztmpl,iso
        maxfiles 0

dir: Local-Backups
        path /home/backups/
        content...
Here's the output - am I doing something wrong? :( 36 minutes for ~100 GB... Worse, it took the entire server down. This is restoring from a local spinner to a local SSD.
Virtual Environment 5.2-5
Virtual Machine 100 (host6.x.com) on node 'mox2'
Logs
restore vma archive: lzop -d -c...
I've never seen this issue before, but I'm doing a restore from a local spinner to a local ZFS volume.
It's taken the entire host node down to a crawl, and while I can ping the node, I can't SSH into it.
I've never had this happen before. I've seen some really slow behavior with ZFS reading and...
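A side note for anyone who hits the same behaviour: one common mitigation is capping the ZFS ARC so it doesn't fight the restore for RAM. A minimal sketch, assuming an 8 GiB cap (placeholder value - size it to the host):
root@mox2:~# echo "options zfs zfs_arc_max=8589934592" > /etc/modprobe.d/zfs.conf
root@mox2:~# update-initramfs -u
(reboot for the limit to stick, or echo the same value into /sys/module/zfs/parameters/zfs_arc_max for a live change)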
Hello all, I have a VM whose Summary tab in the Proxmox UI regularly shows CPU usage stuck/pegged at 95-99%.
However, when I log into the machine and run top/htop, it shows usage around 15%.
The only way I can get Proxmox to reset the reading for this VM is to restart it.
Any advice or thoughts?
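For what it's worth, a quick way to sanity-check the graph is to look at the KVM process for the guest on the host itself (100 below stands in for the VM ID):
root@mox2:~# qm status 100
root@mox2:~# top -b -n 1 -p $(cat /var/run/qemu-server/100.pid)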
Need some advice regarding a simple and probably silly question. All of my systems have a small SSD as the system disk and a secondary, larger SSD for VMs. If I destroy the rpool that is created by default and recreate it on /dev/sdb, will I screw anything up?
The 2nd SSD is ZFS. I basically...
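For reference, a minimal sketch of the route that leaves rpool untouched: build a separate pool on the second SSD and register it with Proxmox. The pool name 'vmpool' and /dev/sdb are placeholders, and a /dev/disk/by-id path would be safer in practice:
root@mox2:~# zpool create -o ashift=12 vmpool /dev/sdb
root@mox2:~# pvesm add zfspool vmpool --pool vmpool --content images,rootdir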
Can anyone tell me if the above is normal? It appears that if there are journals, they are assigned to the OSD and not the SSD. My concern is that the last two upgrades required significant rebuilding/re-balancing even though I did everything per the docs, which expressly say re-balancing shouldn't be...
Good morning all,
On each of my Ceph nodes I have 2 SSDs for journals: /dev/sdj & /dev/sdk.
While upgrading from Hammer -> Jewel I noticed something that I think is odd, but I'm not sure. It appears that some of my OSDs either may not have journals, or the journal is not set to one of the...
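For reference, this is how I've been checking where each journal actually lives - a colocated journal shows up as a regular file under the OSD's data dir, while an SSD journal is a symlink to a partition (ceph-disk is the Jewel-era tool):
root@mox2:~# ls -l /var/lib/ceph/osd/ceph-*/journal
root@mox2:~# ceph-disk list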
Hi all,
I have some Ceph journal SSDs that will likely hit their wear limits sometime during the next quarter.
Is there a graceful step-by-step method to swap them out and replace them with new drives before they finally expire?
I'd like to have a plan in place and do this with as little...
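Roughly what I have in mind so far, per OSD whose journal sits on the worn SSD (FileStore-era commands; OSD id 12 is a placeholder, and the exact steps should be checked against the docs for the running release):
root@mox2:~# ceph osd set noout
root@mox2:~# systemctl stop ceph-osd@12
root@mox2:~# ceph-osd -i 12 --flush-journal
(swap in the new SSD, recreate the journal partition, and point the OSD's journal symlink / journal_uuid at it)
root@mox2:~# ceph-osd -i 12 --mkjournal
root@mox2:~# systemctl start ceph-osd@12
root@mox2:~# ceph osd unset noout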