Strange that this benchmark result is so low given the specs. Perhaps it is because of the drives.
CPU: 2x Xeon E5520 (dual socket, 8 cores / 16 threads)
RAM: 48GB
HDD: 10x 2TB 3.5" Seagate Barracuda in ZFS RAID-Z3
NIC: 2x 10Gbps SFP+
Uploaded 94 chunks in 5 seconds.
Time per request: 54014 microseconds.
TLS...
The first thing to do is to run the benchmark on the PBS server and see how fast your server actually is. The output should give you some indication.
# proxmox-backup-client benchmark
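If the client can reach a datastore, passing a repository should also measure the TLS upload speed to that server; the repository name below is just a placeholder:
# proxmox-backup-client benchmark --repository backup@pbs@pbs.example.com:datastore1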
Your title says VDI, but you are intending automated access into VMs. Without knowing the details, I am assuming you are trying to give users console access to VMs using VNC/RDP/SSH via Guacamole. I would strongly recommend against it, as will many others. Regardless of the need, you want...
I actually abandoned backy2 many months ago. It is just not a good fit for a large environment, in my opinion. It is powerful in its own right, no doubt, but a few of backy2's mechanics are hard to deal with. After finding out how simple and effective eve4pe-barc was to backup/restore Ceph disk images I...
Is there a way to prevent Proxmox from rewriting ceph.conf? I have added some radosgw configs. If I add/remove a MON/MDS, or in some cases restart any Ceph service, Proxmox seems to remove all manual radosgw config. It only keeps what is managed through the GUI, such as MON and MDS.
It is probably too late for a reply, but in case someone else is looking for info:
As far as I know, this is the safest way to upgrade without impacting cluster performance to the point that it becomes unusable during rebalancing. You can speed up recovery greatly with tweaks such as increasing Backfill...
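For reference, the kind of knobs meant here can be bumped at runtime roughly like this; the values are only examples and should be tuned to your cluster:
# ceph tell 'osd.*' injectargs '--osd-max-backfills 4 --osd-recovery-max-active 4'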
To replace older drives with newer ones in a Ceph cluster: add the new one and wait for the rebalancing to finish. Then stop the old drive's OSD and mark it out. Wait for the rebalancing to finish and remove the drive. Repeat until all the drives have been replaced. This is a fairly common scenario.
As for...
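As a rough sketch of the replacement sequence above using the Proxmox/Ceph tooling, where /dev/sdY and the OSD id N are placeholders:
# pveceph osd create /dev/sdY
(wait until rebalancing finishes and ceph -s shows HEALTH_OK)
# systemctl stop ceph-osd@N
# ceph osd out N
(wait for rebalancing to finish again)
# pveceph osd destroy N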
This is what I use when reinstalling the OS drive on Ceph nodes and the OSDs are not automatically recognized. This is an OSD-by-OSD process, so take your time to avoid mistakes and losing any OSDs:
1. Reinstall Proxmox as usual.
2. Find out which OSD the drive /dev/sdX belongs to. For example...
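As a minimal sketch of how that mapping and re-activation usually looks with ceph-volume, assuming LVM-based OSDs:
# ceph-volume lvm list /dev/sdX
(note the osd id and osd fsid reported for that drive, then)
# ceph-volume lvm activate <osd-id> <osd-fsid>
or simply re-activate everything at once:
# ceph-volume lvm activate --all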
I went ahead with zram as well. The host itself has plenty of RAM, but some of the LXCs running on it are limited to their allocated memory and may need to swap from time to time. I used the following procedure:
1. Load zram module
$ modprobe zram
2. Make the module load automatically at boot by adding zram to...
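In case the rest of the steps are cut off above, a minimal sketch of a typical zram swap setup follows; the 4G size and the priority are only examples:
$ echo 4G > /sys/block/zram0/disksize
$ mkswap /dev/zram0
$ swapon -p 100 /dev/zram0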
If you are putting those Ceph nodes on 40G SFP+ for the cluster sync network, then you will be fine. Otherwise you are going to saturate your 10G network with 24 SSDs per node during rebalancing. This chassis comes with dual 10G, so I am assuming you are going to have two separate networks for Ceph Public and...
That is it. Rechecking the nodes: all the Ceph PMX nodes' OSs are on a single enterprise HDD, while the PMX nodes in question have their OS on a ZFS SSD mirror. All these nodes were cleanly installed with Proxmox 5.4 and have now been upgraded to 6. On any node with a ZFS SSD mirror the swap is missing. It looks like I never...
I am aware of the swappiness parameter. I have been using vm.swappiness=0 for all nodes. What I am asking is: is there a way to have LXCs use shared storage as a swap device and not the host's swap space?
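For completeness, a minimal sketch of how vm.swappiness=0 is typically applied and persisted; the sysctl.d file name is arbitrary:
# sysctl -w vm.swappiness=0
# echo 'vm.swappiness = 0' > /etc/sysctl.d/99-swappiness.conf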