Same issue with the ASRock Rack B650D4U motherboard and the Intel i210 NIC on Proxmox 8. Has anyone found a solution? My clients are randomly losing access to their machines.
I get this error:
qemu-img convert /dev/rpool/vm-101-disk-0 vm-101-disk-0.qcow2
qemu-img: Could not open '/dev/rpool/vm-101-disk-0': Could not open '/dev/rpool/vm-101-disk-0': No such file or directory
How can I find the path to a ZFS volume?
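For reference: ZFS on Linux exposes zvols under /dev/zvol/<pool>/<name>, not /dev/<pool>/<name>, and qemu-img writes raw output unless -O is given. A sketch using the pool and volume names from the error above:

zfs list -t volume                                         # list the pool's zvols
ls -l /dev/zvol/rpool/                                     # their device nodes
qemu-img convert -O qcow2 /dev/zvol/rpool/vm-101-disk-0 vm-101-disk-0.qcow2

On a Proxmox host, pvesm path <storage>:vm-101-disk-0 also prints the path, where <storage> is your storage ID.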
Hello,
I have 2 disks that I saved months ago from a Proxmox 7.2 install with VMs on ZFS. I have connected these two disks to the only server running in the office, which has Debian 12, and I have imported the ZFS pool called rpool. Is there any way to recover a qcow2 from a...
Hello,
I have a problem with one of my Proxmox servers. I recently lost one of the two disks in my ZFS RAID 1; it was replaced and the ZFS pool repaired. Two months later, I rebooted my server and it won't boot; it seems it can't find the boot disk. Reading through the internet, it...
Configure:
token: 15000
token_retransmits_before_loss_const: 10
in the totem {} section of /etc/pve/corosync.conf, then restart the corosync service on all nodes and check.
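A sketch of where those values land; cluster_name and config_version values are illustrative, and on Proxmox config_version must be incremented on every edit of /etc/pve/corosync.conf:

totem {
  cluster_name: mycluster                   # illustrative; keep your existing value
  config_version: 4                         # increment on every edit
  token: 15000
  token_retransmits_before_loss_const: 10
  # leave the existing version, ip_version and interface entries unchanged
}

systemctl restart corosync   # on every node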
Hello,
I have a problem that I have been dragging along for a long time. I have Dell R630 servers with 6 Samsung EVO 870 1TB SSDs in ZFS RAID 10, and when I migrate a VM or generate heavy disk I/O, the I/O wait goes up a lot and so does the load. Can anyone think of an idea how to solve it...
Hello,
I just bought a dedicated machine from Hetzner, installed Proxmox, ordered a KVM console, and am doing the configuration.
I have a virtual machine with MikroTik with two interfaces: one interface on the bridge that has the dedicated server's physical interface assigned, and...
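For context, a minimal sketch of the host side of such a setup in /etc/network/interfaces; the NIC name and addresses are hypothetical:

auto vmbr0
iface vmbr0 inet static
    address 203.0.113.10/26       # hypothetical public IP
    gateway 203.0.113.1           # hypothetical gateway
    bridge-ports enp0s31f6        # hypothetical physical NIC
    bridge-stp off
    bridge-fd 0

The MikroTik VM's WAN interface would then attach with something like qm set <vmid> --net0 virtio,bridge=vmbr0.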
I have the same problem: https://forum.proxmox.com/threads/raidz1-pool-zfs-use-a-lot-of-cpu-on-live-migration.114745/
I have tried to migrate a machine with discard disabled and it still happens. In order to be able to migrate machines, does discard have to be disabled on all the machines of the...
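A quick way to see which VMs still have discard enabled, using the standard Proxmox config location:

grep -H 'discard=on' /etc/pve/qemu-server/*.conf

Disabling it means removing ,discard=on from the disk line (via the GUI or by re-setting the disk with qm set).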
I have upgraded to the latest version of Proxmox and the problem still occurs; any ideas? I have these same disks in another cluster with hardware RAID and I don't have these problems there.
Hello,
I have a cluster of 5 machines with many cores, each with 6x1TB SSDs in ZFS RAIDZ1, and when I live-migrate machines from one node to another the load goes up a lot; it does not let me migrate machines smoothly. Can you think of what it could be?
ZFS version:
zfs-0.8.5-pve1...
Hello,
I have a Proxmox Backup Server with 150TB backing up more than 350 machines every night in a cluster with 25 Proxmox nodes. The backups run in parallel, and sometimes at night random machines freeze: you cannot get in by SSH, the services inside are blocked, and you have to...
The first thing I did was activate hugepages and change the CPU model to host, but the problem persisted. Once I deactivated the tablet mode, the machine started to use less CPU and the CPU response seems faster, which surprised me.
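For anyone following along, the tablet device can be toggled per VM; 100 is a placeholder VMID:

qm set 100 --tablet 0    # disable the USB tablet device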
Hello, I have Proxmox 6.3 with ZFS RAIDZ1 with 4 disks, 384GB of RAM and dual Xeon CPUs (56 cores), and 3 identical nodes, all connected with 2x10Gbps MLAG bonds. When I try to live-migrate a VM between cluster nodes, the server load increases to 50-60. I used live migration long ago with LVM and...
Hello, I tried to disable the hugepages option with the qm command but it did not allow it, so I deleted the line from the 100.conf configuration file and rebooted the VM. I did this on all the VMs with the option activated, and now it works.
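A sketch of what that amounts to, using VM 100 from the post; the VM needs a full stop/start so QEMU is relaunched:

sed -i '/^hugepages/d' /etc/pve/qemu-server/100.conf   # drop the hugepages line
qm stop 100 && qm start 100                            # cold restart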
Hello,
I enabled hugepages on one of my VPSes with this:
qm set 118 -hugepages 2
And now when I clone this template I cannot start any VPS; this error appears:
TASK ERROR: start failed: hugepage allocation failed at /usr/share/perl5/PVE/QemuServer/Memory.pm line 544.
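For context, that error typically appears when the host cannot allocate enough free hugepages for the VM's memory. A hedged sketch of checking and raising the reservation on the host (the count is illustrative):

grep Huge /proc/meminfo          # HugePages_Free vs. HugePages_Total
sysctl vm.nr_hugepages           # current 2 MiB hugepage reservation
sysctl -w vm.nr_hugepages=4096   # illustrative: 4096 x 2 MiB = 8 GiB reserved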