Search results

  1. Add existing qcow2 image to a VM without overwriting it

    - Shut down the VM.
    - Move "/PoolRZ2/PROXMOX/images/100/vm-100-disk-1.qcow2" to "/PoolRZ2/PROXMOX/images/100/vm-100-disk-1.qcow2.save" (being cautious).
    - Copy your .qcow2 image to "/PoolRZ2/PROXMOX/images/100/vm-100-disk-1.qcow2".
    - Restart the VM.
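The steps above can be sketched as a short command sequence. The VM id (100) and the PoolRZ2 path come from the post; the source path of the replacement image is a hypothetical placeholder.

```shell
# Shut down the VM first (qm is the Proxmox VE VM management CLI)
qm shutdown 100

# Set the existing disk image aside rather than deleting it (being cautious)
mv /PoolRZ2/PROXMOX/images/100/vm-100-disk-1.qcow2 \
   /PoolRZ2/PROXMOX/images/100/vm-100-disk-1.qcow2.save

# Drop the replacement image in under the name Proxmox already knows
# (/path/to/your-image.qcow2 is a placeholder for your actual image)
cp /path/to/your-image.qcow2 /PoolRZ2/PROXMOX/images/100/vm-100-disk-1.qcow2

# Restart the VM
qm start 100
```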
  2. pfSense & ProxMox Remote Access

    Even easier - since he's using pfSense - there is an OpenVPN package for pfSense. Use that one. It's fully integrated with the pfSense distro and you don't need to port forward anything inside your firewall; the VPN exists at the firewall edge, so you don't need to forward "dirty" traffic to the...
  3. Ceph networking question

    (A) is the "normal" approach for Ceph. You didn't describe your disk configuration, but unless you have multiple SSDs per host on the OSD hosts you are not likely to saturate the 10GbE links (or if you do it will only be for short bursts). You could do the bond/LAG approach, but in practice you...
  4. ceph - added pool, can not move kvm disk to it

    Glad I could help. BTW, you didn't actually need to build a new pool to increase the number of Placement Groups after adding OSDs. You can always increase the number of placement groups in a pool - you just can't decrease them. You also can't do it inside the Proxmox GUI, at least AFAIK...
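A minimal sketch of raising a pool's placement-group count from the CLI, since (per the post) it can't be done in the Proxmox GUI. The pool name "rbd" and the target count of 256 are assumptions; pg_num can only be increased, never decreased.

```shell
# Raise the placement group count on an existing pool (assumed name: rbd)
ceph osd pool set rbd pg_num 256

# pgp_num must be raised to match before data actually rebalances
ceph osd pool set rbd pgp_num 256
```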
  5. ceph - added pool, can not move kvm disk to it

    This may seem like a stupid and obvious question - but did you set up the keyring after you created the new pool? Read back through the thread and can't see any mention of it...
  6. VM + 110 VLANs

    The limit isn't Proxmox, per se. The limit is with KVM and the way it presents virtual "hardware" to the VM. The interface simulates the PCI bus and there is a limit of 32 devices that you can attach. You could present a single interface with all of the VLANs trunked on it and let the VM's...
  7. VM + 110 VLANs

    It's a bit dated, but there is a blog explaining how to run Mikrotik RouterOS as a VM (KVM) under Proxmox here: http://www.linux-howto.info/configure-mikrotik-routeros-in-proxmox-kvm-virtual-machine-on-ovh-dedicated-server/ I don't know if it will help you, but I used this in the past to launch...
  8. [SOLVED] Issues getting Proxmox 4.4 + Ceph Jewel running

    It really doesn't matter much. Upgrades are cumulative and "dist-upgrade" is a superset of "upgrade", so it is an unneeded extra step to get to the same place. No harm, no foul (or, if you prefer real sports, advantage - Play On).
  9. [SOLVED] Issues getting Proxmox 4.4 + Ceph Jewel running

    To expand a bit on @wolfgang's response:
    - Install from ISO
    - Fix up the repo info if necessary (i.e., if you don't have a subscription)
    - apt-get update && apt-get dist-upgrade -y <---- this step to get the current point release which includes Jewel
    - Then "pveceph..."
    The baseline 4.4 did...
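The steps above can be sketched as follows, assuming a fresh 4.4 ISO install without a subscription (Proxmox VE 4.x is based on Debian Jessie; the pve-no-subscription repo line is the usual fix-up for non-subscribers).

```shell
# Fix up the repo info for a no-subscription install
echo "deb http://download.proxmox.com/debian jessie pve-no-subscription" \
  > /etc/apt/sources.list.d/pve-no-subscription.list

# Pull the current point release, which includes the Jewel packages
apt-get update && apt-get dist-upgrade -y

# Then the pveceph setup can proceed
pveceph install
```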
  10. HowTo: Upgrade Ceph Hammer to Jewel

    I believe that is a leftover in the user documentation. The Ceph team has been very clear that Jewel makes CephFS a "production ready" part of the release. Specifically, the fsck and recovery tools that are referenced in the item you quoted above most certainly are part of Jewel. From the...
  11. Proxmox Cluster - Openvswitch - Mesh network

    I'd suggest you buy a switch. I know that sounds like an arrogant/smart-aleck "suggestion", but running a full mesh of 5 nodes is complex and asking for trouble. You sound as though you are very concerned about downtime (e.g., 'it's not possible to loose connectivities...") but your proposed...
  12. Proxmox VE 4.4 released!

    Release notes do not mention Ceph updated to Jewel. Prior objectives for 4.4 made this a primary goal. Could you comment on the status of Ceph?
  13. Cluster with 30+ nodes?

    So what did you do to get the increase in 4k random write IOPS?
  14. New Setup 4 nodes

    Regardless of the combination of ssd/hdd, etc, I don't believe you'll find a satisfying solution with a single C6100 with 12 3.5" drives (3 per node) and 1gbe networking. I know the C6100 well and I just don't think you'll get there. The only reasonable way to get decent performance from Ceph...
  15. Cluster with 30+ nodes?

    I don't think you really want to use separate journal for an all SSD cluster. You won't gain any speed (the journal write and the final commit have to be serialized, so there is no threading gain and both journal and data disks are the same speed). Worse - you actually increase your risk...
  16. Insane load on PVE host

    Don't know much about why the Windows VM went wonky on you - but the 1.5+ years uptime on the PVE host is impressive...
  17. Ceph version in 4.3?

    Nothing in the release notes about an upgrade to Ceph. Did Jewel upgrade make it in?
  18. rookie NFS share ceph?

    Thanks for pointing out my misunderstanding. Edited/corrected my post so that it doesn't mislead readers who come later.
  19. rookie NFS share ceph?

    You don't... Ceph does not provide a filesystem the same way NFS does - at least not directly. Ceph provides a storage model based on "objects". An "object" is basically a blob of bits that you can access using a unique id or handle. Ceph provides a mechanism called RBD to simulate a block...
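A hedged sketch of the RBD block-device path the snippet describes: RBD turns Ceph objects into a block device, on which a regular filesystem can be created and then exported over NFS. The pool name "rbd", image name "share-disk", device node /dev/rbd0, and mount point are all illustrative assumptions.

```shell
# Create a 10 GiB RBD image in the (assumed) default "rbd" pool
rbd create rbd/share-disk --size 10240

# Map it as a kernel block device (the device number may vary)
rbd map rbd/share-disk

# Put a regular filesystem on the block device...
mkfs.ext4 /dev/rbd0

# ...and mount it, at which point the mount can be exported over NFS
mount /dev/rbd0 /srv/nfs/share
```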