Search results

  1. Planning Proxmox VE 5.1: Ceph Luminous, Kernel 4.13, latest ZFS, LXC 2.1

    Are you still on for mid/late October? I have a small lab project that I'd love to do with a "final" 5.1, but if the release is sliding into November I won't be able to wait. Quick status update appreciated.
  2. Planning Ceph 3 Nodes - 6 OSD vs 3 Hardware Raid

    Separate OSDs. Don't bother with the RAID1. Ceph will be doing your replication, etc., and the RAID layer will just reduce your overall capacity (RAID1 local replication cuts capacity in half, while Ceph still replicates across the hosts) with limited performance gains. Ceph works more...
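    To put rough numbers on the capacity point (purely illustrative figures): with 3 nodes of 2 x 1 TB disks each and Ceph's default 3x replication, separate OSDs give you 6 TB raw / 3 = ~2 TB usable, while pairing the disks into local RAID1 first halves the raw space, leaving 3 TB / 3 = ~1 TB usable on the same hardware.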
  3. New ISO image for Proxmox VE 5.0 (installer bug fixes)

    Yes. Correct. Awkward, but correct, which probably explains why people complain but don't actually scream about it. If/when the "script" for getting through the installer pages changes, you'll have to work out the sequence of keyboard shortcuts all over again.
  4. New ISO image for Proxmox VE 5.0 (installer bug fixes)

    What if you MUST boot in UEFI mode to make it work? For example, when installing onto NVMe boot media on most Supermicro motherboards (which support booting from NVMe only in UEFI mode, not legacy)? Note that the installer actually works - you just have to memorize/script the keyboard...
  5. What are you using to keep for 3rd node ?

    But it won't support a 4.x kernel (yet - very close). So a full Proxmox 5.x node is probably a no-go for now.
  6. proxmox 5.0 works great with btrfs! :D

    @Pablo Alcaraz - could you explain what advantage you get with BTRFS vs ZFS? Seems like ZFS meets all of your requirements, is stable, and is portable across a wide variety of operating environments. On the other hand, BTRFS is nascent, its stability is subject to question, and it doesn't...
  7. Proxmox VE 5.0 released!

    The NIC naming is also more of a Debian issue than a Proxmox issue (actually, more of a "mainline Linux" issue, since it is currently being adopted by nearly all major distributions as they get onto the 4.x kernel train). I actually got through the "new" naming convention for network interfaces...
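    For anyone who would rather keep the old ethX names, a common Debian-level opt-out (nothing Proxmox-specific; the file paths below are the usual defaults, not something from this thread) is to disable predictable naming on the kernel command line:

        # /etc/default/grub - append both parameters to the kernel command line:
        #   GRUB_CMDLINE_LINUX="net.ifnames=0 biosdevname=0"
        update-grub   # regenerate the GRUB configuration
        reboot        # interfaces come back as eth0, eth1, ... on the next boot

    Just remember to update /etc/network/interfaces to match whichever naming scheme you end up with, or the node will come up without networking.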
  8. Proxmox VE 5.0 released!

    This isn't really a Proxmox issue - it's Debian. ifconfig (and the rest of net-tools) has been deprecated in Stretch. The Debian community is trying to force the transition to the new tools (ip, iw, etc.). They are still in the repos and can be installed with apt (as you noted) but are not...
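    For quick reference, the usual iproute2 equivalents look like this (standard commands, nothing specific to this thread):

        ip addr                 # replaces: ifconfig
        ip -s link              # replaces: ifconfig (per-interface statistics)
        ip route                # replaces: route -n
        ip neigh                # replaces: arp -n
        ss -tulpn               # replaces: netstat -tulpn
        apt install net-tools   # and the old tools are still a package away if you want them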
  9. New Proxmox VE 5.0 with Ceph production cluster and upgrade from 3 to 5 nodes

    If you can wait, I would. The transition from Jewel to Luminous is likely to be troublesome given that the Ceph team is doing some pretty major surgery on the OSD on-disk format, etc. From what I have seen/tested of the pre-release, I believe it will be worth the wait. As for your...
  10. cpu steal

    The host OS (Proxmox) requires some CPU to run its various upkeep jobs (doing the host's share of IO activity, background daemons, etc.). If you assign 24 cores to a VM and you only have 24 cores, the only way to service the host is by "stealing" some cycles (I prefer the word "scheduling" or...
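    If you want to watch it happen, steal time is visible from inside the guest as well (standard Linux accounting, not a Proxmox feature): it is the 'st' column in top and the 8th numeric field on the cpu line of /proc/stat.

        grep '^cpu ' /proc/stat    # 8th field = cumulative steal time, in USER_HZ ticks
        top -bn1 | grep '%Cpu'     # 'st' = percentage of time stolen by the hypervisor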
  11. Proxmox VE + Ceph = how many nodes?

    That should work well. The older Samsung drives would probably not be the optimal choice (they are known to have some write latency issues under heavy load) but they will probably be OK unless you have a particularly write intensive workload. The 400GB S3710s will be outstanding for OS+Mon...
  12. Proxmox VE + Ceph = how many nodes?

    I'm assuming you installed Proxmox onto the DOM so that you can allocate all six SSDs as OSDs? This will work, but... the MON does have some (limited) disk access requirements, and Ceph logs everything, with log entries every couple of seconds. These two things together could cause trouble...
  13. Strange SSH/SFTP behaviour

    IP address collision on the original VLAN? Some other device(s) using the same address(es) assigned by your DHCP?
  14. Ceph: Erasure coded pools planned?

    Because you have to be able to describe the fault domains to Ceph to ensure that a failure actually plays out that way. And your choices for defining the fault domain are the OSD, the host (a collection of OSDs), or a rack (a collection of hosts), etc. Your example works using the OSD as...
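    As a concrete sketch of where that choice gets made (profile name and k/m values below are invented for illustration): an erasure-code profile carries its failure domain, and a replicated pool gets its failure domain from the CRUSH rule it uses.

        # EC profile that spreads chunks across hosts rather than individual OSDs
        ceph osd erasure-code-profile set myprofile k=4 m=2 crush-failure-domain=host
        # equivalent idea for a replicated pool: a rule that picks leaves of type "host"
        ceph osd crush rule create-simple by-host default host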
  15. Ceph: Erasure coded pools planned?

    There's not really anything to "catch up" with. If you look deeper into the capabilities of Ceph's EC pools, you find that they are not currently suitable for use with RBD (virtual block device). In order to use EC pools with RBD you need to front them with a "cache tier" of regular replicated pools...
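    For completeness, the cache-tier arrangement looks roughly like this (pool names and PG counts are placeholders, and a production cache tier needs proper hit_set/target sizing beyond what is shown here):

        ceph osd pool create ecpool 128 128 erasure        # the erasure coded data pool
        ceph osd pool create cachepool 128 128             # small replicated pool to front it
        ceph osd pool set cachepool hit_set_type bloom     # hit-set tracking for the cache
        ceph osd tier add ecpool cachepool                 # attach the cache tier
        ceph osd tier cache-mode cachepool writeback
        ceph osd tier set-overlay ecpool cachepool         # clients (RBD) now go through cachepool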
  16. Add existing qcow2 image to a VM without ovewrite it

    - Shut down the VM.
    - Move "/PoolRZ2/PROXMOX/images/100/vm-100-disk-1.qcow2" to "/PoolRZ2/PROXMOX/images/100/vm-100-disk-1.qcow2.save" (being cautious).
    - Copy your .qcow2 image to "/PoolRZ2/PROXMOX/images/100/vm-100-disk-1.qcow2".
    - Restart the VM.
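    In shell terms that's roughly the following (VM ID 100 and the pool path come from your post; the source image path is obviously a placeholder):

        qm shutdown 100                                              # stop the VM cleanly
        cd /PoolRZ2/PROXMOX/images/100
        mv vm-100-disk-1.qcow2 vm-100-disk-1.qcow2.save              # keep the original, just in case
        cp /path/to/your-existing-image.qcow2 vm-100-disk-1.qcow2    # drop your image in under the expected name
        qm start 100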
  17. pfSense & ProxMox Remote Access

    Even easier - since he's using pfSense - there is an OpenVPN package for pfSense. Use that one. It's fully integrated with the pfSense distro and you don't need to port forward anything inside your firewall; the VPN exists at the firewall edge, so you don't need to forward "dirty" traffic to the...
  18. Ceph networking question

    (A) is the "normal" approach for Ceph. You didn't describe your disk configuration, but unless you have multiple SSDs per OSD host you are not likely to saturate the 10GbE links (or if you do, it will only be for short bursts). You could do the bond/LAG approach, but in practice you...
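    Rough back-of-the-envelope numbers (illustrative only): a 10GbE link tops out around 1.2 GB/s of payload, while a single SATA SSD streams on the order of 500 MB/s, so it takes two or three SSDs per host writing flat out at the same time before the network becomes the bottleneck.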
  19. ceph - added pool, can not move kvm disk to it

    Glad I could help. BTW, you didn't actually need to build a new pool to increase the number of Placement Groups after adding OSDs. You can always increase the number of placement groups in a pool - you just can't decrease them. You also can't do it inside the Proxmox GUI, at least AFAIK...
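    From the CLI the whole thing is two settings (pool name and target count below are just examples):

        ceph osd pool get rbd-pool pg_num        # check where you are now
        ceph osd pool set rbd-pool pg_num 256    # raise the PG count - up only, never down
        ceph osd pool set rbd-pool pgp_num 256   # then raise pgp_num so data actually rebalances onto them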
  20. ceph - added pool, can not move kvm disk to it

    This may seem like a stupid and obvious question - but did you set up the keyring after you created the new pool? I read back through the thread and can't see any mention of it...
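    For anyone landing here later, a rough sketch of what "set up the keyring" usually means on the Proxmox side when the pool is consumed as RBD storage (the storage ID "mypool" is a placeholder; a hyper-converged pveceph install normally has this in place already):

        mkdir -p /etc/pve/priv/ceph
        cp /etc/ceph/ceph.client.admin.keyring /etc/pve/priv/ceph/mypool.keyring   # filename must match the storage ID in storage.cfg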
