@warlocksyno @curruscanis Have either of you noticed any irregularity when viewing the disks in the UI (pve > Disks in the menu)? I get communication failures, but not all hosts are affected, e.g. Connection refused (595), Connection timed out (596)
Hi,
Could you please check the `/etc/hosts` file? I would also check our wiki guide on renaming a PVE node [0].
[0] https://pve.proxmox.com/wiki/Renaming_a_PVE_node
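For reference, a minimal, well-formed `/etc/hosts` on a PVE node usually looks like the sketch below; the hostname `pve1.example.com` and the address `192.0.2.10` are placeholders, not values from the post:

```
127.0.0.1 localhost.localdomain localhost
192.0.2.10 pve1.example.com pve1
```

The node's own hostname should resolve to its cluster-facing IP, not to 127.0.0.1.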
You're absolutely right. I had already looked at that particular snippet, but I must have been suffering from severe troubleshooting/afternoon fatigue. I have now symlinked /etc/pve/priv/ceph to one of the paths KVM is looking for, and the VM came...
My question is: what is the best use for my new UGreen NAS, which is meant to replace an older Synology? It should serve two purposes in a non-production home-lab environment: hold about 6 TB of backup files and act as a datastore for...
Hello,
I used to be able to upload ISOs from the web UI. This is a basic feature; of course it worked, and I never had a problem with PVE 8. Somewhere since PVE 9, maybe 9.1, I can no longer upload ISOs. The system I/O stall goes up to 90 and the...
Hello,
Facing the same problem since I upgraded all my PBS instances to PBS 4. Running 5 PBS servers / 8 PVE clusters, we had freezes on different VMs on different PVE versions.
I've rolled back all my PBS hosts to kernel 6.14.11-4-pve. I'll test the latest "test-pve"...
Maybe try updating your nodes; the release notes of today's qemu-server 9.1.2 update include:
EDIT: This is available on the no-subscription repository: https://pve.proxmox.com/pve-docs/pve-admin-guide.html#sysadmin_package_repositories
EDIT2...
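To pull in the update and confirm the package version afterwards, something along these lines should work (assuming the no-subscription repository linked above is configured):

```shell
apt update
apt full-upgrade
pveversion -v | grep qemu-server
```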
I have the same problem since I updated today; you need to set manual maintenance mode before rebooting. For some reason, migrating with this works fine. When it's stuck in the faulty state, it helps to shut down the VM (not in the Proxmox...
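Assuming the HA stack is in use, manual maintenance mode can be toggled from the CLI roughly like this (the node name `pve1` is a placeholder):

```shell
# put the node into maintenance mode before rebooting
ha-manager crm-command node-maintenance enable pve1
# ... reboot ...
ha-manager crm-command node-maintenance disable pve1
```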
Thanks Chris for the info,
However, we're in the "feature freeze" phase leading up to the Christmas holidays, and I don't feel like testing a kernel that might work well for a few days and then, on December 25th, crash my system just as I'm...
It looks like I managed to implement this successfully with the following hookscript:
#!/bin/bash
if [ "$2" = "pre-start" ]; then
    echo "Move the disk from NAS to SSD"
    nohup /usr/sbin/qm disk move "$1" sata0 local-zfs --delete true...
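A fuller, self-contained sketch of such a hookscript might look like the following. The VMID and phase arrive as `$1` and `$2`; the `sata0`/`local-zfs` names come from the snippet above, and the `qm disk move` call is left commented out so the sketch itself has no side effects:

```shell
#!/bin/bash
# Hypothetical hookscript sketch: act only on the pre-start phase.
hook() {
    local vmid="$1" phase="$2"
    if [ "$phase" = "pre-start" ]; then
        echo "pre-start: moving disk of VM $vmid from NAS to SSD"
        # The actual move, disabled in this sketch:
        # /usr/sbin/qm disk move "$vmid" sata0 local-zfs --delete 1
    fi
}
hook "$@"
```

All other phases (post-start, pre-stop, post-stop) fall through silently, which is the usual pattern for hookscripts that only care about one phase.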
Ok, I have some news:
My pfSense/OpenBSD VM and a Windows XP VM seem to have triggered ballooning - their memory usage went down significantly - despite showing
"actual=1024 max_mem=1024" with "info balloon" in the monitor.
The mentioned Windows...
OMG!! 40 LXCs, each with 4 vCPUs and 4 GB RAM, running on a 16/32-core CPU combined with consumer NAS drives? Massive overcommit; no wonder you're running into I/O stalls. I don't believe the system ever ran normally. Otherwise I'd throw my...
I do have 2 sockets in the VM configuration:
For some reason the VM thinks it has 1 socket with 360 CPUs, but the socket and core count in the VM config are correct.
With no ballooning and no KSM it did also crash...
I will try (exceptionally) disabling swap on the host and see how it performs.
But as was stated in this thread and elsewhere before, that should not be a desirable running config.
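For the record, disabling swap temporarily (reverting on the next reboot) is a matter of:

```shell
# disable all active swap devices until reboot
swapoff -a
# re-enable them later
swapon -a
```

Making it permanent would additionally require commenting out the swap entry in /etc/fstab.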
Since the original need was for another server, specifically a PBS, we decided to go with one 8-bay unit from 45homelab.com. It's in production now and should be 'here' and in place next week. Then the next steps will be to sync the current...
Hello everyone,
I have a Proxmox 9.1.2 installation on an OVH dedicated server.
When I try to launch the shell on the node, I regularly get the following message in the logs:
failed waiting for client: timed out
TASK ERROR: command...
Hello everyone,
after updating from PVE 8 to 9, I have the following problem with apt update.
At the end I get this message:
Removing subscription message from MobilrUI...
sed: can't read...
You need to change vSockets from 1 to 2 in the VM configuration.
You currently have 1 vSocket configured for the VM, so it can only see the vCores of 1 vSocket.
CPU(s): 360
On-line CPU(s) list: 0-179
Off-line...
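Assuming a VM ID of 100 (a placeholder), the socket count can also be changed from the CLI and then verified:

```shell
qm set 100 --sockets 2
qm config 100 | grep -E 'sockets|cores'
```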
I think this may be a bug, or an analysis of an older state (missing blk queues?).
At the beginning of the chapter you have:
From my study and usage:
scsi translates SCSI commands to the virtio (virtqueue) layer (~10-15% overhead), and NOW has only...
Please post a backup task log.
Client-side deduplication can only happen if there is a previous snapshot on the backup target (datastore + namespace + group!) that is not in a verification-failed state. Based on your description, I suspect you have...