I meant that I tried to upgrade using apt dist-upgrade, but it reports there is nothing to do; everything is up to date.
BTW, the other day I was discussing this on the Proxmox Discord server, and someone shared this link with me...
Hi,
Since your hard disk (ide1) is orange, it's not attached; you need to stop/start the VM for the change to take effect. It's most likely because you are using the "LSI" controller with an IDE disk (if I remember correctly, hot plug was never...
Hi everyone,
I recently noticed that my PC running Proxmox has been making a loud fan noise that was not there before. I cannot open the case right now to physically inspect it, so I am hoping to troubleshoot this through the command line.
My...
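For anyone in a similar spot, one first step that needs no screwdriver is to read the fan tachometers the kernel already exposes through hwmon. This is only a sketch: which fans (if any) show up depends entirely on your motherboard and the loaded hwmon driver, and the paths below are the generic sysfs layout, not anything specific to this machine.

```shell
# Sketch: list fan RPM readings from the kernel's hwmon sysfs interface.
# If nothing is exposed, `apt install lm-sensors` and running `sensors-detect`
# may load the right driver for your board.
list_fans() {
    found=0
    for f in /sys/class/hwmon/hwmon*/fan*_input; do
        [ -e "$f" ] || continue
        found=1
        chip=$(cat "$(dirname "$f")/name" 2>/dev/null || echo unknown)
        printf '%s %s: %s RPM\n' "$chip" "$(basename "$f" _input)" "$(cat "$f")"
    done
    [ "$found" -eq 1 ] || echo "no fan sensors exposed via hwmon"
}

list_fans
```

If a fan is pinned at maximum RPM, comparing its reading against the temperature inputs in the same hwmon directory (temp*_input) can tell you whether it is reacting to real heat or a failed sensor.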
Hello,
thanks for your post. The machine on which I am testing the switch to Proxmox runs Win10 Pro. A repair via the VirtIO installer followed by a reboot unfortunately did not help. Is there another VirtIO version?
Thanks.
I have to admit that so far I have engaged far too little with AI and its possibilities (Claude in particular is said to be very good).
A year ago, the first representatives of their kind hardly seemed genuinely usable; I...
Hi everyone!
I have:
Proxmox 9.1,
sfp ens13f0np0 Speed: 10000Mb/s, MTU 9000, physical NIC,
Bridge vmbr0 Speed: 10000Mb/s, MTU 9000
Single VM with Windows 2025 and Red Hat VirtIO Ethernet Adapter 10 Gbps. (VirtIO (paravirtualized)). All last...
Just wanted to share my experience with adding 10Gb network cards to systems that already had PVE installed and working.
On both systems where I tried this (with different cards), it caused the PVE GUI to no longer be reachable.
I had to connect...
I can confirm: turning TSO off on the physical NIC (via a post-up script) fixed this issue for me as well (I219-V). No hangs so far (2 weeks), only slightly higher CPU utilization (1-2%) during NIC traffic.
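For reference, the usual way to make this persistent on PVE is a post-up line in /etc/network/interfaces. This is a hedged sketch: "eno1" is a placeholder for your actual physical NIC name, and the bridge stanza is whatever you already have.

```
# /etc/network/interfaces (fragment; "eno1" is an example NIC name)
auto eno1
iface eno1 inet manual
    # disable TCP segmentation offload each time the interface comes up
    post-up /sbin/ethtool -K eno1 tso off
```

The same can be tested at runtime without a reboot with `ethtool -K eno1 tso off`, and the current state checked with `ethtool -k eno1 | grep tcp-segmentation-offload`.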
I know that ext4 had problems with discard in the past (not fragmentation, but discard not always working).
Personally, I'm using XFS in production, and I have never had this problem (across 4,000 VMs).
Hi,
we have a guide for replacing a failed ZFS device, including the case of a bootable device: https://pve.proxmox.com/pve-docs/pve-admin-guide.html#sysadmin_zfs_change_failed_dev
And just to note: these MX500s are really not meant for...
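For a bootable ZFS mirror member, the linked guide's procedure boils down to a few commands. Sketch only: pool name, partition numbers, and device paths below are placeholders you must adapt to your layout before running anything.

```
# 1. Copy the partition table from a healthy bootable disk to the new one,
#    then randomize the new disk's GUIDs:
sgdisk /dev/healthy-disk -R /dev/new-disk
sgdisk -G /dev/new-disk

# 2. Replace the failed vdev with the new disk's ZFS partition:
zpool replace -f rpool /dev/old-zfs-partition /dev/new-zfs-partition

# 3. Make the new disk bootable (ESP partition of the new disk):
proxmox-boot-tool format /dev/new-disk-esp
proxmox-boot-tool init /dev/new-disk-esp
```

`zpool status` shows resilver progress; wait for it to finish before trusting the mirror again.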
I have a machine with a 2.5GbE Intel Ethernet port. If I connect that machine to an unmanaged switch that is only 1Gb, it works with no issues. If I then connect it to an unmanaged 2.5Gb switch, the shell is laggy both in SSH...
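A few diagnostics that could narrow this down (sketch; replace "enp1s0" with your actual interface name). The idea is to confirm what link speed was actually negotiated and whether error counters climb while the lag happens:

```
# What did the NIC and switch actually negotiate?
ethtool enp1s0 | grep -E 'Speed|Duplex|Auto-negotiation'

# Do error/drop counters rise during the laggy periods?
ethtool -S enp1s0 | grep -iE 'err|drop|crc'

# As an experiment: force 1G on the 2.5G switch to see if the lag disappears
# (revert with `autoneg on` afterwards):
ethtool -s enp1s0 speed 1000 duplex full autoneg off
```

If forcing 1G fixes it, the problem is likely in the 2.5G negotiation (cabling, switch, or driver) rather than in Proxmox itself.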
Understood.
I'll keep in mind the 72h recommendation.
I'll talk to the other admins about the recommended procedures regarding this situation.
Thank you very much for this discussion and recommendations.
As others explained the problem isn't with PVE but is a generic problem with storage level snapshots. They always come with a storage and (depending on the used storage type) performance penalty. Copy-on-write based storages (like ZFS or btrfs)...
Moving data (block or file) that has snapshots associated with it is always challenging, regardless of the OS, hypervisor, or application. Moving such data between different storage types is exponentially more complex, as snapshot formats are...
The official VMware best practices are even more strict:
"Do not use a single snapshot for more than 72 hours.
The snapshot file continues to grow in size when it is retained for a longer period. This can cause the snapshot storage location to...
Alright, I have some progress. On one of the failing nodes.
BOSS-S1 adapter card:
Drive  Model           Firmware
0      SSDSCKJB240G7R  DL43
1      SSDSCKKB240G8R  DL6P
I upgraded the firmware on drive 1 from DL6P to DL6R and the errors are gone :D
Firmware...