Hi Folke,
thanks for answering my question.
root@pve-2024:~# apt info proxmox-backup-client
Package: proxmox-backup-client
Version: 3.2.3-1
Priority: optional
Section: admin
Source: rust-proxmox-backup
Maintainer: Proxmox Support Team <support@proxmox.com>
Installed-Size: 13,9 MB
Depends...
Hi,
I'm following the guide in the wiki, but I'm struggling with installing the proxmox-ve package:
Here is the result:
root@pve-2024:~# apt install proxmox-ve postfix open-iscsi chrony
Reading package lists… Done
Building dependency tree… Done
Reading state information...
Hi,
I had a running Proxmox installation (Debian 11.9 - PVE 7.4.x).
The system was struggling with some network issues, so I decided to reboot it.
After the reboot, the host does not come back online in the Proxmox web interface.
Looking into it via SSH gives me the error that pve-daemon is not...
I'm still on it. I updated the drivers inside the VMs and rechecked.
VirtIO 0.1.165 drivers under Windows Server 2012 R2
winsat disk -ran -write -drive c
Windows System Assessment Tool
> Running: Feature Enumeration ''
> Run Time 00:00:00.00
> Running: Storage Assessment '-ran -write...
Funny: I removed some of the SSDs and added them right back into the Ceph cluster, and suddenly we got performance inside our VMs.
Read IOPS before: 8-10, now: 400-500.... and backfilling is on the way with 36 PGs.
What is that???
It feels like the system is putting the OSDs into hibernation and to...
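While that backfill runs, its progress can be followed with the standard Ceph status commands (a sketch; run these on any monitor or admin node):

```shell
# Overall cluster health, including counts of backfilling/recovering PGs
ceph -s

# Per-pool recovery and client I/O rates
ceph osd pool stats

# Refresh the status every 2 seconds while backfill is in flight
watch -n 2 ceph -s
```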
Did you replace the SSDs to get the performance up again?
I'm still wondering why performance in the VMs is so poor. I tried to update the VirtIO drivers in the kernel, but didn't find any newer version.
30 IOPS is what I got with a Ceph bench....
30 IOPS times 7 (OSDs in each server) would be amazing....
I removed one OSD, wiped it and re-added it. It turned out that another OSD had a latency of about 2000 ms. I pulled that one out of the cluster; now I'm waiting for recovery, then I'll retest.
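A quick way to spot an OSD with pathological latency like that is Ceph's built-in per-OSD performance dump (a sketch; the OSD id below is just an example):

```shell
# Commit/apply latency per OSD in ms - a 2000 ms outlier here is the culprit
ceph osd perf

# Take the suspect OSD (id 12 is a placeholder) out of data placement
# so recovery can move its PGs to healthy OSDs
ceph osd out 12
```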
Hi,
@spirit
Yes, to test the real performance of the underlying storage you have to set sync=1; otherwise it will use memory as a cache, which does not represent the real performance of the storage. I'm not expecting 4K IOPS, but all of our VMs are slow.
A Windows update took 3 hours to...
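A minimal fio invocation for the sync=1 point above might look like this (file name, size and runtime are arbitrary examples):

```shell
# 4k synchronous random writes: direct=1 bypasses the page cache and
# sync=1 forces an fsync per write, so the result reflects the storage
# backend rather than RAM
fio --name=synctest --filename=/root/synctest --size=1G \
    --ioengine=psync --rw=randwrite --bs=4k \
    --direct=1 --sync=1 --time_based --runtime=30
```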
Do I have to change anything in the config of the VM?
Result after enabling krbd and a stop/start of the VM:
fio --ioengine=psync --filename=/root/test --size=1G --time_based --name=fio --group_reporting --runtime=60 --direct=1 --sync=1 --rw=write --bs=4M --numjobs=4 --iodepth=32
fio: (g=0)...
I read somewhere that it may have something to do with the VirtIO driver I'm using.
I added a scsi0 device and retested:
fio --rw=write --name=test --size=20M --direct=1
test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1
fio-3.12
Starting 1...
I see 64 as the maximum.
I migrated most of the VMs, but this particular one needed some extra memory, so I shut it down, moved it to another already-upgraded server and started it there.
It is RBD.
bootdisk: virtio0
cores: 8
description: BLABLABLABLA
ide2: none,media=cdrom
memory: 16384
name...
rados bench 600 write -b 4M -t 16 -p test
hints = 1
Maintaining 16 concurrent writes of 4194304 bytes to objects of size 4194304 for up to 600 seconds or 0 objects
Object prefix: benchmark_data_pve5-1_589524
sec Cur ops started finished avg MB/s cur MB/s last lat(s) avg lat(s)
0...
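Note that if a `rados bench` write run is started with `--no-cleanup` (needed when a read bench is to follow), the benchmark objects stay in the pool and can be removed afterwards (assuming the pool is really named `test`):

```shell
# Remove the benchmark_data_* objects left behind by rados bench
rados -p test cleanup
```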