The limit isn't Proxmox, per se. The limit is with KVM and the way it presents virtual "hardware" to the VM. The guest sees an emulated PCI bus, and a single PCI bus is limited to 32 devices, so that caps what you can attach.
You could present a single interface with all of the VLANs trunked on it and let the VM's...
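As a rough sketch of that approach (interface and bridge names here are just examples, adjust for your host):

    # /etc/network/interfaces on the Proxmox host
    auto vmbr1
    iface vmbr1 inet manual
        bridge_ports eth1    # trunk port carrying all the tagged VLANs from the switch
        bridge_stp off
        bridge_fd 0

Attach a single virtio NIC of the VM to vmbr1; a plain Linux bridge passes tagged frames through untouched, so the guest can create its own VLAN sub-interfaces instead of needing dozens of virtual NICs.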
It's a bit dated, but there is a blog explaining how to run Mikrotik RouterOS as a VM (KVM) under Proxmox here: http://www.linux-howto.info/configure-mikrotik-routeros-in-proxmox-kvm-virtual-machine-on-ovh-dedicated-server/
I don't know if it will help you, but I used this in the past to launch...
It really doesn't matter much. Upgrades are cumulative and "dist-upgrade" is a superset of "upgrade". So it is an unneeded extra step to get to the same place. No harm, no foul (or, if you prefer real sports, advantage - play on).
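To illustrate the difference (nothing Proxmox-specific here, this is just apt semantics):

    apt-get update
    apt-get upgrade         # only upgrades packages already installed; never adds or removes packages
    apt-get dist-upgrade    # does the above plus handles changed dependencies (may add/remove packages)

So running "upgrade" first and "dist-upgrade" after just does part of the work twice.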
To expand a bit on @wolfgang's response:
- Install from ISO
- fix up the repo info if necessary (i.e., if you don't have a subscription)
- apt-get update && apt-get dist-upgrade -y <---- this step to get the current point release which includes Jewel
- then "pveceph..."
The baseline 4.4 did...
I believe that is a leftover in the user documentation. The Ceph team has been very clear that Jewel makes CephFS a "production ready" part of the release. Specifically, the fsck and recovery tools that are referenced in the item you quoted above most certainly are part of Jewel.
From the...
I'd suggest you buy a switch.
I know that sounds like an arrogant/smart-aleck "suggestion", but running a full mesh of 5 nodes is complex and asking for trouble. You sound as though you are very concerned about downtime (e.g., "it's not possible to loose connectivities...") but your proposed...
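For scale: a full mesh of n nodes needs n(n-1)/2 point-to-point links, so 5 nodes means 5*4/2 = 10 links - that's 4 dedicated NICs in every node just for the mesh, before you've added an uplink or any of the routing/failover glue needed to make it usable.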
Regardless of the combination of SSD/HDD, etc., I don't believe you'll find a satisfying solution with a single C6100 with 12 3.5" drives (3 per node) and 1GbE networking. I know the C6100 well and I just don't think you'll get there.
The only reasonable way to get decent performance from Ceph...
I don't think you really want to use a separate journal for an all-SSD cluster. You won't gain any speed (the journal write and the final commit have to be serialized, so there is no threading gain, and both journal and data disks are the same speed). Worse - you actually increase your risk...
You don't...
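For example (device names made up, and check the exact flag spelling for your pveceph version):

    # all-SSD: just let the filestore journal live on the same SSD as the data (the default)
    pveceph createosd /dev/sdb
    # a separate journal only pays off when the journal media is much faster than the data media,
    # e.g. HDD data with an SSD journal:
    # pveceph createosd /dev/sdc -journal_dev /dev/sdg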
Ceph does not provide a filesystem the same way NFS does - at least not directly. Ceph provides a storage model based on "objects". An "object" is basically a blob of bits that you can access using a unique id or handle.
Ceph provides a mechanism called RBD to simulate a block...
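A quick way to see the object model in action (pool and object names below are just examples):

    # store and fetch a raw object in a pool with the rados CLI
    rados -p rbd put my-object /tmp/somefile   # "my-object" is an arbitrary object id
    rados -p rbd get my-object /tmp/copy
    rados -p rbd ls                            # list the object ids in the pool
    # RBD then assembles a virtual block device out of many such objects
    rbd create rbd/my-disk --size 1024         # 1024 MB image, striped across 4 MB objects
    rbd info rbd/my-disk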
@gkovacs - the feature you request is reasonable, and why you need it has been well explained. As you point out, the implementation should not be terribly difficult since all of the required parts already exist - it's mainly a matter of pulling them together and testing (which, to be fair, may not...
Fair play to that, but at least when Jewel is around you get to make those kinds of engineering trade-offs in how you deploy. You and I might not agree with people's choices, but depending on their situation it might make sense.
For now, prior to Jewel being available under Proxmox, CephFS is...
I don't think I'd trust CephFS for much until you are running Jewel. Before that it was quite unstable and didn't have complete recovery tools (file system check/recovery). Since Proxmox 4.2 is still at Hammer - unless you've done something to upgrade it - you are wise not to do this yet.
As...
One small quibble with the above. The default behavior is to acknowledge the write when it has been registered in (n/2) journals (not all of them). Assuming you keep an odd number of replicas in the pool, this guarantees a "quorum" of replicas in case things need to be recovered. This...
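If you want to look at the knobs involved (the pool name "rbd" is just an example):

    ceph osd pool get rbd size        # number of replicas kept for each object
    ceph osd pool get rbd min_size    # how many replicas must be available before I/O is allowed
    ceph osd pool set rbd size 3      # e.g. keep an odd number of replicas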
I wouldn't really recommend what @syadnom is doing either. A two-node Ceph cluster (even if it has a third MON to manage Quorum) won't be a very satisfying experience. But he didn't really ask if he should do it - he asked if he could do it. And if he has his OSDs spread over two nodes there...