When using LVM as virtual disk storage there is no way to pass additional parameters to LVM.
Example of use in the console:
lvcreate -n vmid-disk-0 --type striped -L 32G -i 4 vg
A VG can contain e.g. 4 disks (or any other number), and the final LV can be created with different parameters, for different...
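As a sketch (assuming the example VG is simply named `vg`, as in the command above, and an existing LVM setup with root access), the stripe layout of a manually created LV can be verified afterwards with `lvs`:

```shell
# Requires an existing VG named "vg" (example name from the post) and root.
# Show the stripe count and stripe size for each LV in the VG:
lvs -o lv_name,lv_size,stripes,stripe_size vg
```

This confirms that the LV created outside the GUI really spans the intended number of PVs.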
current version: pve-manager/5.1-36/131401db (running kernel: 4.10.8-1-pve)
The problem lies in kernel version 4.13.8-2-pve. Boot stops at the initramfs with no block devices found. After downgrading to 4.10 the system boots normally.
This solution does not help for long.
Today the pveproxy process broke again on 3 nodes.
And restarting pveproxy without restarting pve-cluster is impossible.
What could be the reason for this behavior?
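One hedged explanation for pveproxy only coming back after a pve-cluster restart: pveproxy reads its configuration and certificates from /etc/pve, which is a FUSE mount provided by pmxcfs (the pve-cluster service). If that mount hangs, anything touching it blocks. A quick sketch of a check to run before restarting services:

```shell
# /etc/pve is a FUSE mount provided by pmxcfs (pve-cluster); if it hangs,
# processes touching it (including pveproxy) get stuck in D state.
if timeout 5 ls /etc/pve >/dev/null 2>&1; then
    echo "pmxcfs responsive"
else
    echo "pmxcfs hung or not mounted"
fi
```

If this reports a hang, restarting pveproxy alone cannot help; pve-cluster has to be restarted first.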
Good day. I managed to solve the problem using the following commands:
# systemctl restart pve-cluster.service
# pvecm updatecerts -f
# systemctl restart pveproxy.service
Strange situation: the old certificates were valid until 2025.
After pvecm updatecerts -f, pveproxy started and...
The system is installed on mechanical hard drives.
Below I tried to start pveproxy and followed the recommendations.
# free -h
              total        used        free      shared     buffers      cached
Mem:           314G        196G        118G        4.2G        390M        5.6G
-/+...
At the moment there is swap-out activity.
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
r b swpd free buff cache si so bi bo in cs us sy id wa st
14 0 15709016 36706880 198028 1721140 0 0 473 434 0 0 7 2 91 0 0
10 1...
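Given the ~15 GB of swap in use above, it may help to see which processes own it. A minimal sketch that reads VmSwap from /proc (the field is only reported on kernels with swap accounting enabled):

```shell
# List the top swap consumers by reading VmSwap from /proc/<pid>/status.
for d in /proc/[0-9]*; do
    awk -v pid="${d##*/}" '
        /^Name:/   { name = $2 }
        /^VmSwap:/ { printf "%10d kB  %s (pid %s)\n", $2, name, pid }
    ' "$d/status" 2>/dev/null
done | sort -rn | head -10
```

If the heavy swap users turn out to be KVM guests, raising vm.min_free_kbytes or lowering swappiness are the usual knobs to look at next.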
The NFS share is there and it is available.
pvesm status
WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!
hetzner_backup_0 nfs 1 10735915008 86672384 10649226240 1.31%
local dir 1 163036644 21889292 141130968 13.93%...
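As for the lvmetad warning in the pvesm output above: it means the lvmetad daemon is running while `use_lvmetad` is set to 0 in /etc/lvm/lvm.conf. A hedged sketch of making the two agree (needs root; service names as packaged on Debian-based PVE):

```shell
# Option 1: stop the daemon so it matches use_lvmetad = 0 in lvm.conf:
systemctl stop lvm2-lvmetad.service lvm2-lvmetad.socket

# Option 2: set use_lvmetad = 1 in /etc/lvm/lvm.conf instead,
# then restart the daemon:
#   systemctl restart lvm2-lvmetad.service
```

The warning is cosmetic either way; it does not explain the hung pveproxy by itself.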
Good afternoon. I ran into the same problem. I have a cluster of 4 servers. On 3 of the 4, pveproxy does not work; it hangs in the "Ds" state.
root 34194 0.0 0.0 239612 66012 ? Ds 11:44 0:00 /usr/bin/perl -T /usr/bin/pveproxy start
# pveversion -v
proxmox-ve: 4.4-84 (running kernel...
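For processes stuck in D (uninterruptible sleep) like the pveproxy worker above, the kernel function they are blocked in often points at the culprit, frequently an NFS or block-device wait. A quick sketch using ps:

```shell
# Show every process in uninterruptible sleep (state D) together with
# the kernel function it is blocked in (the wchan column):
ps -eo pid,stat,wchan:32,cmd | awk 'NR == 1 || $2 ~ /^D/'
```

A wchan full of nfs_* or rpc_* entries would fit the hung-NFS theory; pure disk waits point elsewhere.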
Hello. I have the following situation:
There are 2 Proxmox nodes in different countries, connected by an IPIP tunnel.
Node A - Role: Active - NFS client - pve-manager/4.4-13/7ea56165 (running kernel: 4.4.49-1-pve)
Node B - Role: Backup\Standby - NFS server - pve-manager/4.4-13/7ea56165...