Search results

  1. Missing command line tools

    Then, as dietmar said above: you're running Proxmox 6.x, which does not support OpenVZ, so most of those commands are no longer valid. I imagine the guide you're using is probably aimed at Proxmox 4.x or 5.x.
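
    For anyone following an old OpenVZ guide, the vzctl commands translate roughly one-for-one to LXC's pct tool on current Proxmox (container ID 101 below is just a placeholder):

      vzctl start 101   ->  pct start 101
      vzctl stop 101    ->  pct stop 101
      vzctl enter 101   ->  pct enter 101
      vzlist            ->  pct list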
  2. How to Enable turbo boost ?

    Where were you looking to see the CPU frequency change? On the host node, if you run "cat /proc/cpuinfo | grep "MHz"" you should see the CPU frequency change; if you run something demanding you should see the frequency ramp up where possible, depending on the CPU and the number of active cores.
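
    A quick way to watch this live (a minimal sketch, assuming a standard Linux host where watch and yes are available):

      # refresh the per-core clocks every second
      watch -n 1 'grep "MHz" /proc/cpuinfo'

      # in a second shell, a trivial single-core load to trigger boosting
      yes > /dev/null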
  3. Installation with m.2 - OK ?

    If the M.2 is being used just for the boot disk then I see no issues at all. You'll have a single point of failure, so just make sure you keep a backup of your VM config files in case the M.2 ever dies. Obviously if this is a mission-critical server then you would be better off placing the boot drive...
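
    As a rough sketch of that backup (assuming the stock layout where guest configs live under /etc/pve; the destination path is just an example):

      # /etc/pve exposes the cluster config database, including VM/CT definitions
      tar czf /root/pve-config-$(date +%F).tar.gz /etc/pve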
  4. [SOLVED] Debian 10 LXC Container won't start after 'apt upgrade'

    From another thread with the same issue it was noted there have been some fixes for this in recent Proxmox updates. You should update to the current 6.0-7 and try again.
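
    Something along these lines should get you there (assuming the standard Proxmox package repositories are configured):

      apt update && apt dist-upgrade
      pveversion   # should now report pve-manager/6.0-7 or later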
  5. How to Enable turbo boost ?

    What makes you think it isn't enabled? Out of the box I have no issues with CPUs with Intel Turbo Boost boosting as and when they can / are required.
  6. Write speeds fall off/VM lag during file copies ZFS

    Have you monitored the free -m output on the host node whilst running one of the transfers that causes the issue?
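
    For example (the arcstats path assumes ZFS on Linux):

      # refresh memory usage every second during the transfer
      watch -n 1 free -m
      # current ARC size vs. its configured cap
      grep -E "^(size|c_max)" /proc/spl/kstat/zfs/arcstats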
  7. Write speeds fall off/VM lag during file copies ZFS

    You say ARC is set to use 16GB, so that leaves you with only 16GB. What do you have assigned to the VMs? It may be worth pasting the full .conf output for the Windows VM.
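
    To grab that config (VMID 100 is a placeholder, substitute your Windows VM's ID):

      qm config 100
      # or read it directly from the cluster filesystem
      cat /etc/pve/qemu-server/100.conf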
  8. How many OSDs?

    You can create one OSD per disk; the number of OSDs you require really depends on your use case. How big are the disks? How much usable storage do you require on CEPH? 6 OSDs will work, 4 nodes would be better, but it really comes down to what you require from the cluster in performance and...
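
    As a rough worked example (the disk size here is an assumption, not from the thread): 6 OSDs on 4TB disks give 24TB raw, and with the default replication of 3 that is 24 / 3 = 8TB usable; in practice you'd want to stay below Ceph's ~85% near-full ratio to leave headroom for recovery.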
  9. How many OSDs?

    Sorry, where did I say that? All I stated was that you said you have 3 servers, each with 2 disks; therefore you have the capacity for 6 OSDs. That will work fine, the only issue being that with a replication of 2, if you lose a whole host CEPH won't be able to automatically repair and will run on a...
  10. How many OSDs?

    12 is just a baseline recommendation. From the looks of it you will have 6 OSDs? That would be a small cluster, but it would work and function with no issues. It just means that if one node goes down you'll only have 2 replicas while you repair it or bring a new node online.
  11. How to configure bridged networking with systemd network?

    On Hetzner, their installimage tool offers a custom image that is Debian 10 + Proxmox. Once installed you just need to enable the bridge by adding the following to /etc/network/interfaces (laid out fully in the sketch below): auto eno1 iface eno1 inet manual auto vmbr0 iface vmbr0 inet static address "Server IP" netmask...
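
    Reconstructed as a full file, that bridged setup would look roughly like this (the address, netmask and gateway are placeholders for the values Hetzner assigns you; note Hetzner may require a routed setup or registered MACs for additional IPs):

      auto eno1
      iface eno1 inet manual

      auto vmbr0
      iface vmbr0 inet static
              address 203.0.113.10
              netmask 255.255.255.192
              gateway 203.0.113.1
              bridge-ports eno1
              bridge-stp off
              bridge-fd 0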
  12. resizing proxmox disk to make it smaller.

    What format is the current disk in? LVM / QCOW file, etc.?
  13. Ceph SSD for wal

    From the CEPH mailing list it's pretty much accepted that any disk that was good for a FileStore journal (https://www.sebastien-han.fr/blog/2014/10/10/ceph-how-to-test-if-your-ssd-is-suitable-as-a-journal-device/) will be good for the BlueStore WAL + DB, as they both have similar I/O demands.
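
    The test from that link boils down to a sustained sync 4K write run with fio, roughly like the following (destructive, so point it at an unused device; /dev/sdX is a placeholder):

      fio --filename=/dev/sdX --direct=1 --sync=1 --rw=write --bs=4k \
          --numjobs=1 --iodepth=1 --runtime=60 --time_based \
          --group_reporting --name=journal-test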
  14. Thin provision ceph

    What you're seeing in your screenshot is the size of the disk (its max size) that you specified when creating the disk. CEPH stores RBD data in 4MB objects, so it will only use the amount of disk space that is actually used on the filesystem within the VM. If you have saved lots of files and then...
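
    You can compare provisioned vs. actually allocated space per image with rbd (pool and image names below are placeholders):

      rbd du rbd/vm-100-disk-0   # prints PROVISIONED vs. USED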
  15. Limitations of free PVE, How many hardware sockets i can use?

    The main difference is that with enterprise you get support via tickets and the repo runs slightly older but better-tested packages. The open-source repo is more bleeding edge (still tested) and shows the message when you log in; apart from that there are no differences in what it can do.
  16. Limitations of free PVE, How many hardware sockets i can use?

    There are no physical hardware limits or differences between the enterprise and open-source versions.
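
    The split is purely in which APT repository you point at, e.g. for PVE 6 on Debian Buster:

      # enterprise (requires a subscription key):
      deb https://enterprise.proxmox.com/debian/pve buster pve-enterprise
      # no-subscription:
      deb http://download.proxmox.com/debian/pve buster pve-no-subscription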
  17. Adding extra used node to a cluster

    I currently have a cluster of 6 nodes. I plan to bring up some extra nodes; however, due to a network limitation I won't be able to connect them to the cluster network straight away, but I still need to get some VMs running on them. If I make sure I manually change the new VM IDs so they...
  18. Ceph: Recommended size for caching SSD in a hybrid cluster

    Only you can answer this, based on the performance / amount of storage you require. If you need storage then SAS disks with the DB + WAL on a good DC-rated SSD, or even better NVMe, is your best option.
  19. Ceph: Recommended size for caching SSD in a hybrid cluster

    Caching is not recommended in CEPH and is slowly being phased out of the software over time. What you can do with BlueStore is place the WAL & DB of the OSD onto an SSD, so the metadata is retrieved quickly via the SSD and the SAS disk is left with just the raw data I/O. This can be read about...
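
    On Proxmox that placement is done when the OSD is created, something like the following (device names are placeholders; with only --db_dev given, the WAL lands on the DB device automatically):

      pveceph osd create /dev/sdX --db_dev /dev/nvme0n1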
  20. [SOLVED] How to replace 500GB HDD (CEph, OSD RBD) with 1TB one ? (safely!)

    The config file just sets defaults; when you make a pool you can override these, as you must have done to create a 2/2 pool. What you did was correct, and once the sync is finished you'll get a health-good message. On the upgrade of the disks from 500GB -> 1TB, how much data does each disk...
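
    For reference, a swap like that usually runs one OSD at a time, roughly as follows (OSD ID 3 and /dev/sdX are placeholders; pveceph syntax as in PVE 6):

      ceph osd out 3                  # start draining data off the old OSD
      # wait for the rebalance to finish (watch ceph -s), then:
      systemctl stop ceph-osd@3
      pveceph osd destroy 3 --cleanup
      # swap the 500GB disk for the 1TB one, then create the new OSD:
      pveceph osd create /dev/sdX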
