I installed a new setup on my server (unfortunately, the old one crashed completely), so I had to rebuild it from scratch.
For capacity, it has a 2 TB SATA HDD plus a 500 GB SSD, with 64 GB of RAM.
I followed the steps in the documentation and selected ext4 for my installation.
Surprisingly, once I loaded the VMs, it...
Hello there lovely people.
So, as the title says, memory performance is really bad. I have been trying to debug this for 3 or 4 weeks now and I'm all out of ideas. In a Linux VM I get around 24 GB/s with a 1M block size, which is around the maximum my board/system can handle. I used the Phoronix Test Suite as a...
I have an old, small HP server with a CPU E3-1265L V2 @ 2.50GHz (8 cores) and 4 GB RAM. I use Proxmox 6.1 with Win10 + Ubuntu Server installed, and everything works just fine. Sometimes I run some Ubuntu Server distro, and for years there has been no problem.
Recently I bought a full server, an HP ProLiant ML350p Gen8, with...
My 64 GB RAM machine with ZFS seems to stabilise at 80% memory usage all the time on the Proxmox root host (Proxmox 6.2.4).
I have several VMs running on top of it, all of them together using no more than 20 GB of RAM, so I suspect the rest of the RAM usage comes from ZFS on the root host.
I've read several posts...
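For what it's worth, this is expected ZFS behaviour: the ARC cache grows to roughly 50% of host RAM by default and only shrinks under memory pressure. If you want a hard cap, you can set `zfs_arc_max` (in bytes) as a module option; the 8 GiB below is just an example value, not a recommendation for every workload:

```shell
# Compute an 8 GiB ARC cap in bytes and print the modprobe option line.
# Put this line into /etc/modprobe.d/zfs.conf, then run
# `update-initramfs -u` and reboot so it applies at boot.
ARC_MAX_BYTES=$((8 * 1024 * 1024 * 1024))
echo "options zfs zfs_arc_max=${ARC_MAX_BYTES}"
```

On a running system the same cap can usually be applied immediately by writing the byte value to `/sys/module/zfs/parameters/zfs_arc_max`.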
I ordered a new dedicated server with Proxmox and one VM.
6 Core @ 3.50GHz (1 Socket)
Connecting over RDP works fine, but when I try to copy & paste from my computer to the dedicated server, the copy starts for a few seconds and then stops. RDP goes to a black window, and I need to close RDP and...
On my large container, PBS now takes 25 hours. The same container with BackupPC (incremental, rsync) takes about 90 minutes.
Backing up a VM image is fast.
(A non-incremental BackupPC run takes 12 to 40 hours.)
It seems PBS could use a feature where files with an old atime/mtime are skipped.
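BackupPC's incremental speed comes largely from selecting files by timestamp, which is the kind of skip being suggested here. For illustration only (this is not a PBS feature), the selection logic is roughly a `find` by mtime:

```shell
# Illustrative only: list files modified within the last day - the
# candidate set a timestamp-based incremental backup would actually read.
# /tmp/mtime-demo is a throwaway directory created just for this sketch.
mkdir -p /tmp/mtime-demo
touch /tmp/mtime-demo/changed-today.txt
find /tmp/mtime-demo -type f -newermt "1 day ago"
```

Everything older than the cutoff would be skipped entirely, which is why rsync-style incrementals finish in minutes on mostly-static trees.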
I set up a new raidz1 with three USB 3.0 hard drives (PMR) and am getting only extremely meagre read/write rates. I would have expected at least ten times the throughput, since the drives deliver the expected throughput when used individually.
My system is...
Update: I believe this has to do with the WireGuard VPN, possibly an MTU issue, but the high speed dropping to a trickle is very odd. If anyone has any insight, I would appreciate it.
I'm kinda new to Proxmox's built-in replication, but I am very familiar with zfs send/recv.
Testing out simple storage...
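On the MTU hunch: WireGuard adds up to 80 bytes of encapsulation overhead (40-byte IPv6 header + 8-byte UDP + 32 bytes of WireGuard framing; 60 bytes on an IPv4 underlay), so with a 1500-byte underlay the tunnel interface should use an MTU of at most 1420. The arithmetic:

```shell
# Worst-case WireGuard overhead on an IPv6 underlay:
# 40 (IPv6 header) + 8 (UDP) + 32 (WireGuard) = 80 bytes.
UNDERLAY_MTU=1500
WG_OVERHEAD=80
echo "max tunnel MTU: $((UNDERLAY_MTU - WG_OVERHEAD))"
```

That matches wg-quick's default of 1420; you can set `MTU = 1420` under `[Interface]` explicitly, and probe the path through the tunnel with `ping -M do -s 1392` (1420 minus 28 bytes of IP/ICMP headers) to see where fragmentation starts failing.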
I am trying to build multi-OS remote desktops.
32 x Xeon 2665 v2
96 GB RAM
4 x 1 TB HDD (write speed 180 MB/s) in ZFS RAID10 (write speed ~400 MB/s)
20 VMs with SCSI disks (discard, IO thread, SSD emulation, and speed limits set for every VM)
This configuration runs very slowly, with very high IO delay.
I installed Proxmox on some hardware and it worked well, but after a few weeks it became very slow.
I tried installing both old and new versions, and the slowness persists.
I booted without starting any VMs and the load stayed at 0.5.
CPU BOGOMIPS: 19199.92
HD SIZE: 56.84 GB...
I have a pretty strange problem...
When I connect to the Proxmox web interface after a day, it hangs completely and doesn't work.
My running VMs only show grey question marks, and nothing else can be done either.
Even when I...
We have a few HP DL360s running VE 6.1-3.
When I run speed tests on the hosts or VMs via "speedtest-cli", I consistently get around 100 Mbps down (which matches our pipe) but only 4 Mbps up.
I cannot for the life of me narrow down the issue.
Other machines I plug into the same...
We have a cluster of 4 servers (all of them up to date on version 6.0.7).
We have configured Ceph on all of them, with 3 monitors.
All servers work well when they are all online, but when one host is down, the others become very, very slow.
Have you ever seen this?
I have 3 Proxmox hosts: one with version 4.4, another with version 5.2, and the latest with Proxmox 5.3.
Of these 3 hosts, I would expect the 5.3 host to perform best, given its better hardware. All of the hosts have hardware RAID 10, with the latest (5.3...
Is there any way to change the read-ahead of CephFS?
(Could not place a hyperlink; new user.)
This should improve reads of single large files.
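For the kernel CephFS client, read-ahead can be tuned per mount with the `rasize` option, which takes a window size in bytes; a larger window generally helps sequential reads of single large files. A sketch, where the monitor address and the `admin` user/keyfile are placeholder values:

```shell
# Mount CephFS with a 64 MiB read-ahead window (rasize is in bytes).
# 192.0.2.10 and the admin credentials are placeholders for this sketch.
mount -t ceph 192.0.2.10:6789:/ /mnt/cephfs \
    -o name=admin,secretfile=/etc/ceph/admin.secret,rasize=67108864
```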
Pardon me, as I am sure this is common knowledge for many PVE users. For me, it took many frustrating hours to find the fix, so I will document it here as I have not seen this particular problem/solution laid out explicitly elsewhere. This video from the PVE team documents the fix but...
After fighting with ZFS's memory hunger, poor performance, and random reboots, I have just replaced it with mdraid (RAID1), ext4, and simple qcow2 images for the VMs, stored on the ext4 file system. This setup should be the least efficient because of the multiple layers of abstraction (md and...
We have some servers that show really long boot times. We are talking about 30 to 45 minutes each.
At first I thought it was related to the server POST, but the image here shows the server finished its POST correctly and handed control to the OS. That is where they hang for 30+ minutes.
I have what appears to be an intermittent problem with container shutdowns taking a LONG time. For example:
As you can see, there is a NEARLY 7-minute delay from the end of the stop request to the shutdown command. What is the cause of this delay, and how can it be mitigated?