Hi!
The patch described is now scheduled upstream for kernel version 6.15. As soon as it is in the mainline kernel, I will cherry-pick the patch into the Proxmox VE kernel so that the device can be used.
Additionally, there is currently a patch...
I'm sorry to hear that, but the error does not necessarily have to be fatal.
If SMART reports "PASSED", that means the NVMe is in a good state overall. It can still mean, however, that there is a bad block [0] on the...
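For example, the detailed NVMe health attributes and error log can be checked with smartctl (just a sketch, assuming the drive shows up as /dev/nvme0; adjust the device name):

    # Overall health plus the detailed NVMe SMART attributes
    smartctl -a /dev/nvme0
    # The NVMe error information log with the individual error entries
    smartctl -l error /dev/nvme0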
Hi!
This is not directly related to Proxmox VE, but from a quick glance it seems that nixmoxer has its own option for import-from at [0]. Apart from that, the task log you added shows that there is no space left on the device:
qemu-img: error while...
Hi and welcome to the Proxmox forum, philtao!
These ACPI errors could be anything from purely informational to indicating a hardware failure. Do you have the latest BIOS firmware for your mainboard installed? Does (temporarily) booting the opt-in...
Hi!
Could you post an excerpt of the syslog (dmesg/journalctl --system) from the time when the backup of this VM fails? One guess would be that read errors occurred on the source, but the syslog would give more information.
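For example (the time window below is only a placeholder, adjust it to when the backup job actually ran):

    # Kernel messages with readable timestamps
    dmesg --ctime
    # System journal limited to the time of the failed backup (placeholder window)
    journalctl --system --since "2024-05-01 01:00" --until "2024-05-01 02:00"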
Welcome to the Proxmox forum, ushiromawashi!
Which problems did you encounter with Ceph Reef exactly? Do you have the Ceph log of the previous installation available? If so, please post it.
If I understood it correctly, downgrading Ceph and...
Hi,
this is a known issue: https://lore.proxmox.com/pve-devel/59c810a7-6e46-45de-aaf3-718b8c7c38b4@proxmox.com/T/#mecf577eb8f86bf13e48868c54a2aa074fd7d8750
Welcome to the Proxmox forum, maximcpp!
It depends on what you mean by restricting access to one or two nodes. Currently there is no general solution for this, since VMs are viewed as independent objects (except if they're limited...
Hi!
Is this a migration within a cluster or a remote migration between two PVE nodes on mixed versions? Could you post the full command and/or output of the migration? Either way, we discourage mixing different major versions of Proxmox...
Hi @trigg3r!
Oh, yes, but it is only vital that the Qemu Guest Agent itself is running as a service. The Qemu Guest Agent VSS Provider is only the implementation for the two guest-fsfreeze-freeze and guest-fsfreeze-thaw QMP commands that I've...
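As a quick sketch, these two commands can also be issued manually from the host through the guest agent (assuming VM ID 100; normally vzdump does this for you during a backup):

    # Check whether the guest filesystems are currently frozen
    qm agent 100 fsfreeze-status
    # Freeze and thaw manually
    qm agent 100 fsfreeze-freeze
    qm agent 100 fsfreeze-thaw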
Hi and welcome to the Proxmox forum, lkprime!
So, the port is reported as open, but the WebGUI is not reachable? Have you tried reaching the WebGUI with https://10.0.0.186:8006? It could be that you're not automatically redirected from the HTTP...
Hi!
This might depend on the disk setup that was selected during installation (ext4, zfs, btrfs, ...); which one did you choose? Do you currently have unpartitioned space on the boot disk? Depending on the initial Proxmox VE version, there might be...
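To check for unpartitioned space, something like the following should work (assuming the boot disk is /dev/sda; adjust the device name):

    # Partition layout and sizes of the boot disk
    lsblk /dev/sda
    # Total disk size and partition table for comparison
    fdisk -l /dev/sda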
Hi!
You can query the logs of the HA CRM with journalctl -u pve-ha-crm, which needs to be run on the manager / master node. The same applies for any HA LRM, which can be queried on any node with journalctl -u pve-ha-lrm. Please post the output...
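For example (the -b flag limits the output to the current boot and is only a suggestion):

    # On the current HA manager/master node
    journalctl -u pve-ha-crm -b
    # On any node, for its local resource manager
    journalctl -u pve-ha-lrm -b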
Hi!
I'm not familiar with how exactly VMware exposes these settings, but the concept of "thick provisioning" should correspond to a volume on any Proxmox VE storage type that does not support thin provisioning and is not in the qcow2 file format. But...
Hi @trigg3r!
Btw, do you experience any problems because the service is stopped? Because AFAIK this service is only used during backups, so that the guest filesystem is synchronized and frozen (guest-fsfreeze-freeze) and then unfrozen again...
Yes, this should be correct AFAICS.
No, adding/removing a storage in "Datacenter > Storage" only adds/removes the entry in the /etc/pve/storage.cfg file, i.e. it only controls whether Proxmox VE is aware that the storage exists...
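For example, the configured storages can be inspected without touching any of the underlying data:

    # Storages Proxmox VE currently knows about, with their status and usage
    pvesm status
    # The raw configuration entries behind it
    cat /etc/pve/storage.cfg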
Thanks for getting back with those logs!
I'll have to take a closer look at this next week, but I noticed that the backup is already finishing some minutes before that (unless one of the clocks is out of sync):
but the service is only stopped...
Since snapshots are not available for the LVM storage plugin... Could it be that the deleted storage was an LVM thin pool before?
Just for clarification, the storages added in "Datacenter > Storage" are just to make Proxmox VE aware of these storages...
And what is the output of lvs?
At least the output of both (1) pvs and (2) vgs shows that (1) the physical volume /dev/sdb4 is completely assigned to a volume group and (2) the volume group vg_SSD has all of its space allocated.
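For completeness, a quick sketch of the three commands in question (all read-only and safe to run):

    # Physical volumes and how much of each is allocated
    pvs
    # Volume groups with their total and free space
    vgs
    # Logical volumes, including hidden ones such as thin pool metadata
    lvs -a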