A note: the byte offset in the error message (2627730944) is not divisible by the physical sector size of the drive (4096).
I think this issue is caused by qemu-img failing to handle the 4K native sector size.
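For reference, a quick shell check confirms the alignment observation: the offset is 512-byte aligned but not 4K aligned, which would fit a 512-byte-sized write hitting a 4K-native device:
# echo $(( 2627730944 % 512 )) $(( 2627730944 % 4096 ))
0 3584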
Hello,
I do not remember the exact byte offset, but the error message was: qemu-img: error while writing at byte xxxxxx: Invalid argument.
I deleted the LVM thin pool altogether.
In the meantime I found out the same error is caused on disks on...
We took the risk and purchased the server with this hardware.
Long story short: it works with Proxmox VE 9.1.1 | Marking as [SOLVED]
Thank you all for your help and useful information!
Best Regards
Not sure which logs to look at (I'm relatively new to Proxmox). I know it's getting full because I am monitoring my Pulse LXC, and the Ubuntu VM is showing 0% free disk space
I'm running an Ubuntu VM with only Docker installed, and something is causing the VM to fill up its disk space (even though there is plenty of free space) and crash. I need to restart it to get it back up and running. How can I find out what is...
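Not a definitive answer, but as a sketch of where I'd start looking inside the VM (generic commands, adjust paths as needed):
# df -h                                                      # confirm which filesystem is actually full
# du -xh --max-depth=2 / 2>/dev/null | sort -h | tail -n 20  # largest directories on that filesystem
# docker system df                                           # space held by Docker images/containers/volumes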
For reference, tooling to directly import such exported OCI images as LXCs is being worked on: https://lore.proxmox.com/pve-devel/20250709123435.64796-1-f.schauer@proxmox.com/
I think we have a misunderstanding here: I basically agree with your take that one shouldn't worry about wearout too much. But in case somebody does (which often seems to be the case in Reddit's homelabbing/selfhosting subs), I want to state...
I don't know what you mean. ZFS runs fine in any setup. Why change something if it works for you? Don't believe stuff written online. Most people have no clue what they're talking about.
There is practically no difference in installing PVE on...
I'm pleased to report that lxc-pve 6.0.0-2 appeared in the PVE 8 pve-no-subscription repository today and that after upgrading lxc-pve from 6.0.0-1 to 6.0.0-2 docker 29 runs without issue in my LXC containers.
Thanks for porting this patch to...
Thanks, yes, I'm running this:
root@Proxmox:~# smartd -V
smartd 7.4 2024-10-15 r5620 [x86_64-linux-6.17.2-1-pve] (local build)
Copyright (C) 2002-23, Bruce Allen, Christian Franke, www.smartmontools.org
smartd comes with ABSOLUTELY NO WARRANTY...
Hey @DennyX, I'm not sure about that specific adapter, but I've seen reports of people experiencing CPU utilisation bottlenecks under high network loads. I'm not an expert, but I think this may be due to the extra processing needed to manage...
Hi, @dastrix80 .
There was a similar bug in smartmontools, but according to the bug report it was fixed in version 6.6-1 of smartmontools six years ago. So unless you have such an old version in your PVE (very unlikely), that shouldn't be that...
Not sure where the connection between PLP and user scripts lies. Wearout on any SSD made in the last 10 years is such a remote concern as to not warrant any thought at all. I bet you don't have a single SSD that is ANYWHERE CLOSE to wearout - I...
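If you want to verify that on your own hardware, smartmontools can show the wear counters; attribute names vary by vendor, so treat these as examples:
# smartctl -A /dev/nvme0 | grep -i 'percentage used'   # NVMe wear indicator
# smartctl -A /dev/sda | grep -i wear                  # SATA SSDs, attribute name is vendor-specific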
Your missing GPU display output is completely unrelated to the system's ability to perform a full UEFI boot.
And if you set your system to boot Legacy-first, then why should it boot Proxmox in UEFI mode?
The Proxmox installer can perform a...
Cheers,
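Side note, in case it helps: you can check from the running system which mode it actually booted in; this is a generic Linux check, not Proxmox-specific:
# [ -d /sys/firmware/efi ] && echo "booted via UEFI" || echo "booted via legacy BIOS"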
I am evaluating PMG to deploy to our customers. Target systems are Exchange SE.
I set up a test environment where I evaluate several functions. One I absolutely need is recipient verification. Exchange rejects mail for non-existing...
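If it helps: if I remember the option name correctly, PMG exposes this as the "Verify Receivers" setting under Mail Proxy. Something along these lines should enable it from the CLI, but please verify against the PMG docs, as I'm quoting the option name from memory:
# pmgsh set /config/mail -verifyreceivers 550   # reject mail for unknown recipients with a permanent error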
Yes, definitely. I more or less drove the machine to its death myself (RAM overprovisioning). It's just very strange that this makes it simply forget the configs, or, I rather suspect, that it resets the node (I also noticed that...
64K volblocksize is the sweet spot for Windows VMs, except for database or file servers. Pure AD controllers should also use zvols with 16K blocksize. MS SQL Server for example: 1x vSSD with 64K for the OS installation only, a 2nd vSSD for SQL data...
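For illustration, with a hypothetical pool/zvol name (on PVE you would normally set the block size on the ZFS storage or per disk rather than creating zvols by hand):
# zfs create -s -V 64G -o volblocksize=64k rpool/data/vm-100-disk-1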
Yes, but it probably mainly depends on the controller. The controller can inform you about the optimal format; yours is OK with both (512e = 4Kn = best).
An example of the difference from my system:
# nvme id-ns /dev/nvme3n1 -H | grep '^LBA Format'
LBA Format 0 ...
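If a namespace does support a native 4K format, it can be switched with nvme-cli. Warning: this wipes the namespace, and the --lbaf index below is only an example; pick the index your device reports with a 4096-byte data size:
# nvme format /dev/nvme3n1 --lbaf=1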