Using the root pool is a bad practice and you can set it as not mounted (-O canmount=off)
Having it mounted at /rpool could lead to some bad practices
For example, if you want to send rpool somewhere, you can't do it safely, because receiving a full filesystem stream destroys the one already in...
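Just a sketch of what I mean (dataset and host names are placeholders, not from this thread):
# zfs set canmount=off rpool
# zfs create -o mountpoint=/data rpool/data
# zfs snapshot -r rpool@backup
# zfs send -R rpool@backup | ssh backuphost zfs receive -F backup/rpool
This way the data lives in child datasets, and a full replication stream can be received into a sub-dataset on the backup pool instead of overwriting an existing filesystem.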
Thank you.
As far as I know, using the "root pool" is a very bad practice, and I see that Proxmox is mounting it:
# zfs list
NAME    USED  AVAIL  REFER  MOUNTPOINT
rpool   480G   353G   140K  /rpool
Is it safe to unmount? Why are you mounting it?
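I guess I could check it first with something like this (just a sketch, based on the output above), before unmounting or setting canmount=off:
# zfs list -r -o name,mountpoint,canmount rpool
# zfs get mounted rpool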
It's a shame that PVE still requires a dedicated USB drive and can't be placed on the same USB drive with tons of other ISO images.
It's a known bug, about 18 months old. I've also seen a bugzilla entry about this.
If your VMs are PV, yes, you have to install GRUB (if not already present) because PV VMs don't have a bootloader (the boot process is done directly with PyGrub from the Xen host)
Usually, all Debian VMs still have a proper bootloader installed and I don't have to install anything (except virtio...
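Roughly what I do on a Debian PV guest before moving it, just as a sketch (the device name is an example; the root disk may show up under a different name):
# apt-get install grub-pc
# grub-install /dev/xvda
# update-grub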
AFAIK, any mirror (RAID1, RAID10, ...) is much faster than any parity RAID. So a 3-way mirror should be better than a RAID-Z2 (and also cheaper, as I need one disk less)
Using 4x2TB disks in a RAID10 is not an issue, but I really hate any 2-way mirror (and a RAID10 has two 2-way mirrors inside).
I had multiple full data losses when using mirrors; now all my servers are on RAID-6 or at least 3-way mirrors. That's why I've talked about 3-way mirrors.
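Something like this is what I have in mind, a striped set of 3-way mirrors (disk names are placeholders):
# zpool create tank mirror sda sdb sdc mirror sdd sde sdf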
My use case is web hosting VMs. There is almost zero sequential write, as in any web hosting VM.
More or less, I have to migrate 4-5 VMs, with about 400-500 sites each; that's why an L2ARC could be useful, where ZFS would store the most frequently read files from the VMs (if ZFS is able to cache VM blocks when...
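If I go this way, adding a cache device later should be as simple as this (device name is a placeholder):
# zpool add rpool cache /dev/nvme0n1
# zpool iostat -v rpool 5
The second command shows per-vdev activity, so I could see whether the cache device actually gets traffic.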
I don't have fio installed on XenServer and I prefer not to install additional software on this junk environment.
I have "sysstat", thus I can provide you with "iostat" output.
So, you are putting OSDs and MONs on the same server. Interesting.
If I understood properly: 3 Ceph servers for both OSDs and MONs, 2x 10Gb for redundancy with public and private networks on the same link, then Proxmox is connected to these 3 servers via a 10Gb link (also used for LAN)
How much RAM on ceph...
Could you describe your Ceph environment? How many servers, how many switches and so on. 10GbE?
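Output of the usual status commands would already give a good picture of the MON/OSD layout:
# ceph -s
# ceph osd tree
# ceph df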
This is not exactly right. Gluster works, but from what I can see in the dev mailing list, there isn't a real roadmap to follow; every release adds tons of features, most of the time with tons of bugs...