Are you still on for mid/late October? I have a small lab project that I'd love to do with a "final" 5.1, but if the release is sliding into November I won't be able to wait. Quick status update appreciated.
Separate OSDs. Don't bother with the RAID1. Ceph will be doing your replication, etc., and the RAID layer will just reduce your overall capacity (RAID1 local replication cuts capacity in half, but Ceph will still replicate across the hosts) with limited performance gains. Ceph works more...
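To put rough numbers on it (a made-up example: three hosts, each with two 1 TB drives, and a replicated pool at Ceph's default size=3):

```
# 3 hosts x 2 x 1 TB = 6 TB raw
# Ceph replication only (size=3):        6 TB / 3       = ~2 TB usable
# Local RAID1 first, then Ceph size=3:  (6 TB / 2) / 3  = ~1 TB usable
```

The RAID layer buys you nothing Ceph isn't already doing, and it halves what you end up with.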
Yes. Correct. Awkward, but correct, which probably explains why people complain but don't actually scream about it. If/when the "script" for getting through the installer pages changes, you'll have to work out the sequence of keyboard shortcuts all over again.
What if you MUST boot in UEFI mode to make it work? For example, when installing onto NVMe and using it as boot media on most Supermicro motherboards (where NVMe boot is only supported in UEFI mode, not legacy mode)?
Note that the installer actually works - you just have to memorize/script the keyboard...
@Pablo Alcaraz - could you explain what advantage you get with BTRFS vs ZFS? Seems like ZFS meets all of your requirements, is stable, and is portable across a wide variety of operating environments. On the other hand, BTRFS is nascent, its stability is subject to question, and it doesn't...
The NIC naming is also more of a Debian issue than a Proxmox issue (actually, more of a "mainline Linux" issue, since it is currently being adopted by almost all major distributions as they get onto the 4.x kernel train).
I actually got through the "new" naming convention for network interfaces...
This isn't really a Proxmox issue - it's Debian. ifconfig (and the rest of net-tools) has been deprecated in Stretch. The Debian community is trying to force the transition to the new tools (ip, iw, etc.). They are still in the repos and can be installed with apt (as you noted) but are not...
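For anyone landing here, the rough equivalents (and the fallback) look like this on a Stretch-based host; the interface name eno1 is just an example:

```
# iproute2 equivalents of the old net-tools commands
ip addr show          # roughly: ifconfig -a
ip link set eno1 up   # roughly: ifconfig eno1 up
ip route show         # roughly: route -n
ss -tlnp              # roughly: netstat -tlnp

# or pull the old tools back in if you really want them
apt install net-tools
```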
If you can wait I would. The transition from Jewel to Luminous is likely to be troublesome given that the Ceph team is doing some pretty major surgery on the OSD on-disk format, etc. From what I have seen/tested of the pre-release I believe it will be worth waiting for it.
As for your...
The host OS (Proxmox) requires some CPU to run its various upkeep jobs (the host's share of I/O activity, background daemons, etc.). If you assign 24 cores to a VM and you have 24 cores, the only way to service the host is by "stealing" some cycles (I prefer the word "scheduling" or...
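In practice it's usually enough to leave a core or two unallocated. A sketch, assuming a 24-core host and a made-up VM ID of 100:

```
# give the VM 23 cores instead of all 24, leaving headroom for the host
qm set 100 --cores 23
```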
That should work well. The older Samsung drives would probably not be the optimal choice (they are known to have some write latency issues under heavy load) but they will probably be OK unless you have a particularly write intensive workload. The 400GB S3710s will be outstanding for OS+Mon...
I'm assuming you installed Proxmox onto the DOM so that you can allocate all six SSDs to OSD? This will work, but...
- The MON does have some disk access requirements (limited)
- Ceph logs everything, with log entries every couple of seconds.
These two things together could cause trouble...
Because you have to be able to describe the fault domains in Ceph so that failures play out exactly that way. And your choices for defining the fault domain are the OSD, the host (a collection of OSDs), or a rack (a collection of hosts), etc. Your example works using the OSD as...
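For reference, the failure domain is what the CRUSH rule selects on. A sketch using the Luminous-era CLI (rule and pool names are made up):

```
# replicate across hosts (the usual choice once you have several boxes)
ceph osd crush rule create-replicated rep-by-host default host

# or replicate across individual OSDs if everything lives in one box
ceph osd crush rule create-replicated rep-by-osd default osd

# point an existing pool at the rule you want
ceph osd pool set mypool crush_rule rep-by-osd
```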
There's not really anything to "catch up" with.
If you look deeper into the capabilities of Ceph's EC pools, you find that they are not currently suitable for use with RBD (virtual block devices). In order to use EC pools with RBD you need to put a "cache tier" of regular replicated pools in front of them...
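If you did want to go down that road anyway, the cache-tier wiring looks roughly like this; the pool names here are placeholders:

```
# 'ec-data' is the EC pool, 'hot-cache' is a regular replicated pool
ceph osd tier add ec-data hot-cache
ceph osd tier cache-mode hot-cache writeback
ceph osd tier set-overlay ec-data hot-cache

# RBD images are then created against the base pool and IO flows through the cache
rbd create --size 10240 ec-data/test-image
```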
- Shut down the VM
- Move "/PoolRZ2/PROXMOX/images/100/vm-100-disk-1.qcow2" to "/PoolRZ2/PROXMOX/images/100/vm-100-disk-1.qcow2.save" (being cautious).
- Copy your .qcow2 image to "/PoolRZ2/PROXMOX/images/100/vm-100-disk-1.qcow2"
- Restart the VM.
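Roughly, from the shell on the host (VM ID 100 taken from your path; the source image path is a placeholder):

```
qm shutdown 100
mv /PoolRZ2/PROXMOX/images/100/vm-100-disk-1.qcow2 \
   /PoolRZ2/PROXMOX/images/100/vm-100-disk-1.qcow2.save
cp /path/to/your-image.qcow2 \
   /PoolRZ2/PROXMOX/images/100/vm-100-disk-1.qcow2
qm start 100
```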
Even easier - since he's using pfSense - there is an OpenVPN package for pfSense. Use that one. It's fully integrated with the pfSense distro and you don't need to port-forward anything inside your firewall; the VPN terminates at the firewall edge, so you don't need to forward "dirty" traffic to the...
(A) is the "normal" approach for Ceph. You didn't describe your disk configuration, but unless you have multiple SSDs per host on the OSD hosts you are not likely to saturate the 10GbE links (or if you do, it will only be for short bursts).
You could do the bond/LAG approach, but in practice you...
Glad I could help.
BTW, you didn't actually need to build a new pool to increase the number of Placement Groups after adding OSDs. You can always increase the number of placement groups in a pool - you just can't decrease them. You also can't do it inside the Proxmox GUI, at least AFAIK...
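From the CLI it's just the following (pool name and PG count are placeholders; pgp_num has to follow pg_num):

```
ceph osd pool set mypool pg_num 256
ceph osd pool set mypool pgp_num 256
```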
This may seem like a stupid and obvious question - but did you set up the keyring after you created the new pool? Read back through the thread and can't see any mention of it...
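In case that's it, roughly what I mean, for an external cluster or when adding the storage by hand (client name, pool, and storage ID are placeholders):

```
# on the Ceph side: create/fetch a key with access to the new pool
ceph auth get-or-create client.proxmox \
    mon 'allow r' \
    osd 'allow rwx pool=newpool' \
    -o /etc/ceph/ceph.client.proxmox.keyring

# on the Proxmox side: the keyring lives under /etc/pve/priv/ceph/,
# named after the storage ID you defined in storage.cfg
cp /etc/ceph/ceph.client.proxmox.keyring /etc/pve/priv/ceph/<storage-id>.keyring
```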