Search results

  1. VM + 110 VLANs

    The limit isn't Proxmox, per se. The limit is with KVM and the way it presents virtual "hardware" to the VM. The interface simulates the PCI bus and there is a limit of 32 devices that you can attach. You could present a single interface with all of the VLANs trunked on it and let the VM's...
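
    As a rough illustration of the trunked approach, the guest can split the VLANs out itself with 802.1Q subinterfaces; a minimal sketch for a Linux guest, where the interface name eth0 and the VLAN IDs/addresses are placeholders rather than anything from the thread:

      # Inside the guest: one virtio NIC carries the trunk; the guest creates a
      # tagged subinterface per VLAN it needs (8021q module must be loaded).
      modprobe 8021q
      ip link add link eth0 name eth0.110 type vlan id 110
      ip addr add 10.0.110.1/24 dev eth0.110
      ip link set eth0.110 up
      # ...repeat for each additional VLAN ID.
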
  2. VM + 110 VLANs

    It's a bit dated, but there is a blog explaining how to run Mikrotik RouterOS as a VM (KVM) under Proxmox here: http://www.linux-howto.info/configure-mikrotik-routeros-in-proxmox-kvm-virtual-machine-on-ovh-dedicated-server/ I don't know if it will help you, but I used this in the past to launch...
  3. [SOLVED] Issues getting Proxmox 4.4 + Ceph Jewel running

    It really doesn't matter much. Upgrades are cumulative and "dist-upgrade" is a superset of "upgrade", so it is an unneeded extra step to get to the same place. No harm, no foul (or, if you prefer real sports, advantage - play on).
  4. [SOLVED] Issues getting Proxmox 4.4 + Ceph Jewel running

    To expand a bit on @wolfgang's response:
      - Install from ISO
      - Fix up the repo info if necessary (i.e., if you don't have a subscription)
      - apt-get update && apt-get dist-upgrade -y <---- this step gets you the current point release, which includes Jewel
      - Then "pveceph..."
    The baseline 4.4 did...
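
    A hedged sketch of what the "fix up the repo info" step might look like on a no-subscription 4.x install (the Jessie-based repo line and file paths are assumptions; adjust to your release):

      # Disable the enterprise repository (needs a subscription) and enable the
      # no-subscription repository instead, then pull the current point release.
      sed -i 's/^deb/#deb/' /etc/apt/sources.list.d/pve-enterprise.list
      echo "deb http://download.proxmox.com/debian jessie pve-no-subscription" \
        > /etc/apt/sources.list.d/pve-no-subscription.list
      apt-get update && apt-get dist-upgrade -y
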
  5. HowTo: Upgrade Ceph Hammer to Jewel

    I believe that is a leftover in the user documentation. The Ceph team has been very clear that Jewel makes CephFS a "production ready" part of the release. Specifically, the fsck and recovery tools that are referenced in the item you quoted above most certainly are part of Jewel. From the...
  6. Proxmox Cluster - Openvswitch - Mesh network

    I'd suggest you buy a switch. I know that sounds like an arrogant/smart-aleck "suggestion", but running a full mesh of 5 nodes is complex and asking for trouble. You sound as though you are very concerned about downtime (e.g., "it's not possible to loose connectivities..."), but your proposed...
  7. Proxmox VE 4.4 released!

    The release notes do not mention Ceph being updated to Jewel. Prior objectives for 4.4 made this a primary goal. Could you comment on the status of Ceph?
  8. Cluster with 30+ nodes?

    So what did you do to get the increase in 4k random write IOPS?
  9. New Setup 4 nodes

    Regardless of the combination of SSD/HDD, etc., I don't believe you'll find a satisfying solution with a single C6100 with 12 3.5" drives (3 per node) and 1GbE networking. I know the C6100 well and I just don't think you'll get there. The only reasonable way to get decent performance from Ceph...
  10. Cluster with 30+ nodes?

    I don't think you really want to use a separate journal for an all-SSD cluster. You won't gain any speed (the journal write and the final commit have to be serialized, so there is no gain from parallelism, and the journal and data disks are the same speed anyway). Worse - you actually increase your risk...
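
    For context, the "separate journal" choice is simply a matter of where the journal partition gets created; a sketch using the ceph-disk tooling of that (FileStore) era, with device names as placeholders:

      # Journal co-located on the same SSD as the data partition:
      ceph-disk prepare /dev/sdb
      # Journal on a separate, same-speed SSD - the layout questioned above:
      ceph-disk prepare /dev/sdb /dev/sdc
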
  11. Insane load on PVE host

    Don't know much about why the Windows VM went wonky on you - but the 1.5+ years uptime on the PVE host is impressive...
  12. Ceph version in 4.3?

    Nothing in the release notes about an upgrade to Ceph. Did the Jewel upgrade make it in?
  13. rookie NFS share ceph?

    Thanks for pointing out my misunderstanding. I've edited/corrected my post so that it doesn't mislead readers who come later.
  14. rookie NFS share ceph?

    You don't... Ceph does not provide a filesystem the same way NFS does - at least not directly. Ceph provides a storage model based on "objects". An "object" is basically a blob of bits that you can access using a unique id or handle. Ceph provides a mechanism called RBD to simulate a block...
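
    To make the RBD point concrete, a common pattern is to carve out an RBD image, map it on a gateway host, put an ordinary filesystem on it, and export that over NFS; a minimal sketch, with the pool, image name, mount point, and export subnet all placeholders:

      # Create a 100 GB image in the default "rbd" pool and map it as a block device.
      rbd create rbd/nfsdisk --size 102400
      rbd map rbd/nfsdisk                 # typically shows up as /dev/rbd0
      # Format, mount, and export it like any other block device.
      mkfs.ext4 /dev/rbd0
      mkdir -p /srv/nfsshare
      mount /dev/rbd0 /srv/nfsshare
      echo "/srv/nfsshare 192.168.1.0/24(rw,sync,no_subtree_check)" >> /etc/exports
      exportfs -ra
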
  15. When will KVM live / suspend migration on ZFS work?

    @gkovacs - the feature you request is reasonable, and why you need it has been well explained. As you point out, the implementation should not be terribly difficult since all of the required parts already exist - it's mainly a matter of pulling them together and testing (which, to be fair, may not...
  16. Creating and Mounting CephFS on 4.2 - HowTo

    Fair play to that, but at least when Jewel is around you get to make those kinds of engineering trade-offs in how you deploy. You and I might not agree with people's choices, but depending on their situation it might make sense. For now, prior to Jewel being available under Proxmox, CephFS is...
  17. Creating and Mounting CephFS on 4.2 - HowTo

    I don't think I'd trust CephFS for much until you are running Jewel. Before that it was quite unstable and didn't have complete recovery tools (file system check/recovery). Since Proxmox 4.2 is still at Hammer - unless you've done something to upgrade it - you are wise not to do this yet. As...
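
    If in doubt, it is easy to check which release a node is actually running (Hammer reports as 0.94.x, Jewel as 10.2.x):

      ceph --version      # e.g. "ceph version 0.94.x ..." = Hammer, "10.2.x ..." = Jewel
      ceph status         # sanity-check cluster health while you are at it
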
  18. SSD CEPH and network planning

    One small quibble with the above: the default behavior is to acknowledge the write when it has been registered in (n/2) journals (not all of them). Assuming you keep an odd number of replicas in the pool, this guarantees a "quorum" of replicas in case things need to be recovered. This...
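
    For reference, the replica-count side of that trade-off lives in the pool settings; a sketch assuming a replicated pool named "rbd" (the name is a placeholder) - size is the total number of copies, min_size the minimum that must be available for the pool to keep serving I/O:

      ceph osd pool get rbd size          # total number of replicas
      ceph osd pool get rbd min_size      # minimum copies needed to serve I/O
      ceph osd pool set rbd size 3        # keep an odd replica count, per the advice above
      ceph osd pool set rbd min_size 2
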
  19. proxmox ceph minimum reasonable OSD?

    I wouldn't really recommend what @syadnom is doing either. A two-node Ceph cluster (even if it has a third MON to manage Quorum) won't be a very satisfying experience. But he didn't really ask if he should do it - he asked if he could do it. And if he has his OSDs spread over two nodes there...
