The cause of the problem was MTU 9000. After returning the MTU on the bond interfaces to 1500, the trouble was solved.
I use bonding and VLANs over the bond for all networks: LAN, node interconnect, external. MTU 9000 was set on the whole bond, but MTU 9000 is needed only on the VLAN for the node interconnect.
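For illustration, a minimal /etc/network/interfaces sketch of limiting jumbo frames to just the interconnect VLAN; the interface names, bond mode, VLAN tags and addresses are placeholders, not my real config. Note that a VLAN cannot have a larger MTU than its parent, so the bond itself stays at 9000 and the non-jumbo VLANs get an explicit mtu 1500:

auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2
    bond-mode active-backup
# the bond MTU must cover the largest child VLAN MTU
    mtu 9000

# LAN/external VLAN: default MTU
auto bond0.10
iface bond0.10 inet static
    address 192.168.10.11
    netmask 255.255.255.0
    mtu 1500

# node-interconnect VLAN: jumbo frames only here
auto bond0.20
iface bond0.20 inet static
    address 10.10.20.11
    netmask 255.255.255.0
    mtu 9000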
Yes, my mistake.
With the cookie I get a timeout:
...
> GET /api2/json/nodes/node-b/storage/local/status HTTP/1.1
> Host: node-a:8006
> User-Agent: curl/7.58.0
> Accept: */*
> Cookie: PVEAuthCookie=PVE:root@pam:5B632A...
>
< HTTP/1.1 596 Connection timed out
...
Then if I check the status of node-a itself, I get...
Summary:
Created the cluster and added the nodes (adding works only with --use_ssh).
When I try to look at another node's status, the request https://node-a:8006/api2/json/nodes/node-b/storage/local/status fails with the error: 596 Connection timed out
But if I log in on each node directly, the status is returned.
The connection is direct, without any proxy...
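For completeness, how I reproduce it from the shell (a sketch; the node names and password are placeholders, and the ticket string is shortened):

# get an auth ticket from node-a
curl -sk -d 'username=root@pam&password=SECRET' https://node-a:8006/api2/json/access/ticket
# query node-b's storage status through node-a with that ticket
curl -sk -b 'PVEAuthCookie=PVE:root@pam:5B632A...' https://node-a:8006/api2/json/nodes/node-b/storage/local/status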
I want to update all servers. When I try to update, I see the following list of packages:
The following NEW packages will be installed:
pve-kernel-4.4.95-1-pve
The following packages will be upgraded:
base-files bind9-host debconf debconf-i18n dns-root-data dnsmasq dnsmasq-base dnsutils git git-man gnupg...
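For reference, the usual upgrade sequence on a PVE node (dist-upgrade rather than plain upgrade, so that new packages such as the pve-kernel above can be pulled in):

apt-get update
apt-get dist-upgrade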
I have the following problem: https://forum.proxmox.com/threads/proxmox-cluster-hangs-partially.36209/#post-191417
I killed corosync, stopped pve-ha-*, and restarted pvedaemon.
When I restart pve-manager, the containers are restarted too.
Why? How can I prevent the containers from restarting when pve-manager is restarted?
# pveversion...
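If I understand the service layout correctly, the pve-manager unit is the one that starts and stops all guests, so restarting it restarts the containers by design; restarting only the API daemons should leave running guests alone (a sketch, not verified on every release):

# restart the PVE daemons without touching running guests
systemctl restart pvedaemon pveproxy pvestatd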
The following commands fix the cluster state for a while:
killall corosync -9
systemctl stop pve-ha-*
systemctl restart pvedaemon
systemctl start pve-ha-*
but the problem is still present.
The same problem occurs on only one host.
I found the following regularity: after the message:
Dec 15 06:30:47 lpr9 systemd-timesyncd[7596]: interval/delta/delay/jitter/drift 2048s/-0.000s/0.058s/0.001s/-8ppm
follow messages:
Dec 15 06:44:01 lpr9 corosync[19248]: [TOTEM ] A processor failed, forming new configuration...
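To confirm the pattern I look at the messages of both services interleaved (a sketch; the --since timestamp is a placeholder for the window of interest):

journalctl -u systemd-timesyncd -u corosync --since "2017-12-15 06:00"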
In Linux, all /dev/disk/by-* entries are symlinks to block devices in /dev/sd?, for example:
# ls -la /dev/disk/by-path/
total 0
drwxr-xr-x 2 root root 200 Nov 2 12:58 .
drwxr-xr-x 8 root root 160 Nov 1 21:21 ..
lrwxrwxrwx 1 root root 9 Nov 2 12:59 pci-0000:05:00.0-fc-0x2100000e1ecd8890-lun-0 -> ../../sde...
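To resolve such a link to its backing device, readlink can be used (using the by-path name from the listing above):

# readlink -f /dev/disk/by-path/pci-0000:05:00.0-fc-0x2100000e1ecd8890-lun-0
/dev/sde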
Summary:
I have a container whose root disk is stored on storage 'local':
I want to move the root disk to another storage: local -> my-other-storage
How do I do this?
Why can't I perform this action in the WebUI?
When will this action become possible in the WebUI?
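One workaround I have seen suggested on releases without a move command is backup-and-restore onto the target storage (a sketch; the container ID 101 and the storage name my-other-storage are placeholders, and --force overwrites the existing container, so keep the backup until the result is verified):

# back up the container, then restore it onto the other storage
vzdump 101 --compress lzo --dumpdir /var/tmp
pct restore 101 /var/tmp/vzdump-lxc-101-*.tar.lzo --storage my-other-storage --force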
Summary:
pve-manager/4.4-5/c43015a5 (running kernel: 4.4.35-2-pve)
The node has storage attached via NFS (it will be used for vzdump backups) with mount point /mnt/pve/backup-1.
When PVE tries to create a backup, it fails with the error:
INFO: starting new backup job: vzdump 111 --compress lzo --node lpr8...
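A few generic checks that the NFS storage is actually mounted and writable (a sketch, assuming the mount point above):

# is the storage active and mounted?
pvesm status
findmnt /mnt/pve/backup-1
# can root write to it?
touch /mnt/pve/backup-1/write-test && rm /mnt/pve/backup-1/write-test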
How do I change the open-files limit (persistently) for unprivileged containers?
root@container1:~# ulimit -n 65536
-bash: ulimit: open files: cannot modify limit: Operation not permitted
prlimit - has no effect
changes in /etc/security/limits.conf on the PVE node - have no effect
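One approach commonly suggested is setting a raw LXC prlimit key in the container's config (a sketch; the container ID 101 is a placeholder, and on LXC 2.0-based releases the key is lxc.limit.nofile instead of lxc.prlimit.nofile):

# /etc/pve/lxc/101.conf
lxc.prlimit.nofile: 65536

Then stop/start the container (pct stop 101 && pct start 101) so the limit is applied.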
Thanks! I know this method, but it is the wrong approach when creating containers from the WebUI many times.
I think a "Block size" field is needed in the "Create container" dialog (feature request).
Thanks! But I already know how to find a DHCP-assigned IP - in the logs, in the CLI, etc...
LXC has the command lxc-ls -f, which shows all IPs on the container's interfaces (whether static, manual, or assigned by DHCP).
But the WebUI currently shows only the static addresses configured through it.
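For example (illustrative output; the name and addresses are placeholders):

# lxc-ls -f
NAME STATE   AUTOSTART GROUPS IPV4      IPV6
101  RUNNING 0         -      10.0.0.15 -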