I noticed that LVM-Thin performs much worse than Directory storage + qcow2 files for VM disks. Which filesystem should I use for the Directory in such an approach: XFS or ext4? It will only contain the qcow2 files...
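For reference, a minimal sketch of what setting up such an ext4-backed Directory storage could look like from the shell; the device /dev/sdb1, the mount point and the storage name "ssd-dir" are placeholders, not values from this setup:

# create the filesystem and mount it (assumes the partition already exists)
mkfs.ext4 /dev/sdb1
mkdir -p /mnt/ssd-dir
mount /dev/sdb1 /mnt/ssd-dir
# register the mount point in Proxmox as a Directory storage for VM disk images
pvesm add dir ssd-dir --path /mnt/ssd-dir --content images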
Thanks, that helped a bit:
With IO Thread disabled:
Throughput:
read, MiB/s: 26.73
written, MiB/s: 17.82
With IO Thread enabled:
Throughput:
read, MiB/s: 34.65
written, MiB/s: 23.10
Is this the best I can aim...
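The output format above looks like sysbench's fileio test; assuming that is what was used, a run along these lines produces those Throughput figures (the file size, block size and duration here are only illustrative, not the exact options used):

# lay out the test files, run a random read/write workload, then clean up
sysbench fileio --file-total-size=4G prepare
sysbench fileio --file-total-size=4G --file-test-mode=rndrw --file-block-size=16K --time=60 run
sysbench fileio --file-total-size=4G cleanup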
I cannot seem to change the SCSI Controller value from the GUI, and for Bus/Device I can only see:
You're right. I wiped the disk, set up a Directory storage using ext4 and repeated the test:
Throughput:
read, MiB/s: 111.64
written, MiB/s: 74.43
Thus...
I should add that those drives are connected to the server via an HPE Smart Array P420i controller, which has a small backup battery. I wonder how safe it would be to use the "writeback" cache mode.
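If the battery-backed cache makes "writeback" acceptable, the cache mode can be set per disk; a hedged sketch, where VMID 100 and the "fast:vm-100-disk-0" volume name are placeholders for the real disk entry:

# re-specify the existing disk with cache=writeback (any other disk options need to be repeated here as well)
qm set 100 --scsi0 fast:vm-100-disk-0,cache=writeback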
I'm not sure; I use the GUI with the following settings:
("fast" is the lvm-thin storage)
The first one is an ADATA SU630, the second one (the one that is bottlenecking) is a Seagate IronWolf 125. I would expect it to have better performance than the ADATA.
On my Proxmox machine I have two SSD drives natively connected to it. The first disk is for the Proxmox OS; the other one I want to use in VMs.
Basically, I want to use half of this second SSD drive in one VM and the other half in another, preferably with auto-adjusting sizes (in case one VM needs more than a...
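A minimal sketch of what that could look like with an LVM-thin pool on the second SSD, so both VM disks are thin-provisioned and only consume what they actually write; /dev/sdb, the names "ssd2"/"data" and the pool size are placeholders:

# turn the second SSD into a volume group with one thin pool
pvcreate /dev/sdb
vgcreate ssd2 /dev/sdb
lvcreate -L 200G --thinpool data ssd2
# register the pool as Proxmox storage, then give each VM a disk from it
pvesm add lvmthin ssd2-thin --vgname ssd2 --thinpool data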
I have one VM that has a RAID volume passed through to it, and that VM creates CIFS shares that are used by another VM.
Both VMs have similar CPU issues.
My PVE is up to date.
Hello!
Every couple of weeks various VMs tend to become problematic and their consoles show the following errors:
Rebooting the VM fixes the issue.
How do I diagnose what's going wrong? When the VM is in that state, I'm unable to SSH into it.
Proxmox is installed on an HP ProLiant DL380e...
Of course :)
Wait - what? So you're saying that this VM was never able to be snapshotted and I only noticed it now? I can't be certain, but I think I would've noticed that before. So the VM backup snapshot does not utilize the copy-on-write feature of LVM?
So this basically boils down to the...
More info from PVE:
[00:06:57][root@zoltan]:~# qm guest cmd 100 fsfreeze-status
thawed
On the VM itself:
[00:10:21][root@yarpen]:~# uname -a
Linux yarpen 5.10.0-0.bpo.9-amd64 #1 SMP Debian 5.10.70-1~bpo10+1 (2021-10-10) x86_64 GNU/Linux
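One check that can be useful in this situation (an assumption on my part, not something shown above) is to exercise the guest agent's freeze/thaw cycle by hand and see whether it completes:

# manually freeze, inspect and thaw the guest filesystems through the QEMU guest agent; thaw promptly, shares hang while frozen
qm guest cmd 100 fsfreeze-freeze
qm guest cmd 100 fsfreeze-status
qm guest cmd 100 fsfreeze-thaw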
I have a VM (id=100) that runs OpenMediaVault, a NAS server with CIFS shares. For the past 117 days everything was working properly; the backups were done every night with no errors. I initiated a manual backup of this VM, and it failed. Here's the event log:
INFO: starting new backup job...
The one that shows wrong values is the Debian-based one, OpenMediaVault. The one that shows proper values is Ubuntu Server.
What I wanted most is to be able to access the files on this volume directly on Proxmox in case the OpenMediaVault VM crashes, or even from a live distro in case the whole Proxmox...
The other VM has a similar config, with the minimum memory equal to the Memory value:
But it still shows proper memory usage :(
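For reference, in a VM's config file (/etc/pve/qemu-server/<vmid>.conf) that kind of ballooning setup would roughly look like the lines below; the 8192 MiB value is only illustrative, not taken from either VM:

# ballooning device enabled, with the minimum ("balloon") equal to the full Memory value
memory: 8192
balloon: 8192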
Huh, this opens a new can of worms. I did use the "qm set" approach. So it would be better to use PCI passthrough for this config? I want the VM to handle all the storage...
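For context, the "qm set" approach mentioned here usually means attaching the whole block device to the VM along these lines; the VMID and the by-id path are placeholders, not the actual command used:

# pass the raw RAID volume to the VM as a SCSI disk (no PCI passthrough involved)
qm set 100 --scsi1 /dev/disk/by-id/scsi-EXAMPLE_SERIAL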
The memory is already set as Ballooning:
I don't use PCI passthrough, although I do have a VirtIO hard disk there, which is a volume from my RAID controller that I have passed directly to the VM. Is that the reason? The other VM does not have such a hard disk...
Hi!
I filled in the "First Name" and "Last Name" under Datacenter > Permissions > Users for the user "root". I also set the Datacenter > Options > Email from address, but the notification in my inbox still only shows "root" in the "From" field. It takes 2 more clicks in most mail clients to show...
Hi!
I'm using Proxmox 7.0. It reports that my OpenMediaVault VM uses 95.32% (7.63 GiB of 8.00 GiB) of RAM, whereas OMV claims to be using 229M/7.77G.
OMV reports:
This is confirmed by:
# free -h
total used free shared buff/cache available
Mem...
That's a bummer. I was hoping that installing hp-ams would make this kind of diagnostics available somewhere - either in the host OS or in the iLO interface.
But still, there must be a way to monitor this kind of stuff.
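Assuming "this kind of stuff" means drive and controller health behind the P420i, two CLI checks that are often available on these boxes (this is an assumption about the tooling, not something confirmed here):

# HPE Smart Storage Administrator CLI, if installed
ssacli ctrl all show config detail
# SMART data for a drive behind the controller; the device path and index are placeholders
smartctl -a -d cciss,0 /dev/sda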