This time I got the stack trace.
For vzdump, /usr/bin/perl -T /usr/bin/pvesr run --mail 1, /usr/bin/perl -T /usr/bin/pveproxy restart, and /usr/bin/perl -T /usr/sbin/pct list
it looks like this:
[<0>] call_rwsem_down_write_failed+0x17/0x30
[<0>] filename_create+0x7e/0x160
[<0>] SyS_mkdir+0x51/0x100
[<0>]...
I think it's not exactly the same issue: I don't have Ceph, only ZFS and ext4. All VMs and LXC containers but one are stopped. Everything works except the Proxmox utilities, which hang in the 'D' state when run.
Thanks, I will try strace the next time it hangs; dmesg didn't say anything about it.
I hope strace will catch the point where it hangs, thanks for pointing me to it.
All backups run at night, almost one by one, not at the same time. The problem continues (or appears) even when no backup is running...
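For what it's worth, a quick way to spot the processes stuck in uninterruptible sleep before attaching strace (a sketch only; the PID in the comment is made up):

```shell
# List processes currently in 'D' (uninterruptible sleep) state
ps axo pid,stat,cmd | awk '$2 ~ /^D/ {print}'

# Then attach to the suspect process (substitute the real PID), e.g.:
#   strace -f -tt -p 12345
# Note: a task blocked inside the kernel emits no new syscalls,
# so /proc/<pid>/stack often tells more than strace in that case.
```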
Hi All!
I have a Proxmox cluster with six nodes.
pve-manager/5.3-5/97ae681d (running kernel: 4.15.18-9-pve)
on all of the nodes.
On one of them (the one with the NFS server; it's mostly used for backups) I sometimes see a very high load average: 13, 14, 20... and it keeps growing.
There is very little CPU usage (top says...
error log from journalctl -xe:
aug 23 22:23:00 p12 systemd-udevd[23409]: Could not generate persistent MAC address for vethI6H4WH: No such file or directory
aug 23 22:23:00 p12 kernel: IPv6: ADDRCONF(NETDEV_UP): veth131i0: link is not ready
aug 23 22:23:01 p12 CRON[21473]...
Hello,
I have issues with this upgrade. After a do-release-upgrade from Ubuntu 14.04 to 16.04, the containers refuse to start.
root@p12:~# pveversion -v
proxmox-ve: 5.2-2 (running kernel: 4.15.18-2-pve)
pve-manager: 5.2-7 (running version: 5.2-7/8d88e66a)
pve-kernel-4.15: 5.2-5...
I removed the bad container and created a new one from the Ubuntu 16.04 template.
When I start it, pct list refuses to work:
root@p4:/zfs2/subvol-450-disk-1/root# pct list
can't open '/sys/fs/cgroup/memory/lxc/450/ns/memory.stat' - No such file or directory
And there are no containers in the...
I have lxc-containers with Ubuntu 14.04.
After do-release-upgrade to Ubuntu 16.04 they refused to start.
I tried it on
pve-manager/5.2-6/bcd5f008 (running kernel: 4.15.18-1-pve
and on
pve-manager/5.2-6/bcd5f008 (running kernel: 4.15.18-1-pve)
I tried rebooting Proxmox, but it didn't help...
I can give him rights for /vms/101; then he can create container 101.
But I can't give him the right to create /vms/101 on node p1 only: he can do it on the other nodes too.
And I can't give him the right to create /vms/anything.
With the storage it's the same: the storage name is the same on all nodes. He can choose...
Hi
I have a cluster of several Proxmox nodes and I'm trying to give a user full access rights to only one node.
I gave him the Administrator role for /nodes/p1 and access to the local storage. It doesn't help: the user can't create any containers there.
Is there a way to give full administrative access to only one...
I've set up Proxmox in Google Cloud.
For each external IP I have to set up a separate network and policy routing.
It works fine, but I wasn't able to make the rules apply during boot.
If I type in the terminal
ip route ad...
ip rule add ...
it works fine.
But when I put it to...
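One common way to make such rules survive a reboot (a sketch only; the interface name, routing table number, and addresses here are invented) is to attach them as post-up hooks in /etc/network/interfaces:

```
auto vmbr1
iface vmbr1 inet static
    address 10.10.0.2
    netmask 255.255.255.0
    # recreate the policy routing every time the interface comes up
    post-up ip route add default via 10.10.0.1 dev vmbr1 table 101
    post-up ip rule add from 10.10.0.2/32 table 101
    pre-down ip rule del from 10.10.0.2/32 table 101 || true
```

The hooks then run on every ifup, including at boot; an if-up.d script would achieve the same.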
It's just a business requirement. I can't make our partner change its network.
Of course I use IPsec and OpenVPN, but this time I need exactly PPTP. I know it's bad (and it caused a lot of trouble when an LXC container with PPTP hung the whole node), but I need a PPTP client. Without a real IP, behind NAT.
I'm trying to make PPTP work in Proxmox and I have run into a lot of problems.
The only way I managed to get it working is a VM with a real IP.
PPTP doesn't work in a VM behind NAT.
Is it possible to make PPTP work in a VM behind NAT?
When I create an LXC container, it doesn't work even with a real IP and no NAT...
I have a proxmox cluster.
I migrated one of the containers from one host to another.
The file system is ZFS; pveversion:
pve-manager/4.4-12/e71b7a74 (running kernel: 4.4.40-1-pve)
After that, vzdump started to hang at
INFO: create storage snapshot 'vzdump'
If I try
zfs snapshot...
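If vzdump hangs at snapshot creation, one thing worth checking (a hedged guess; the dataset name below is hypothetical) is whether a leftover 'vzdump' snapshot from an earlier aborted backup is still present:

```
# list snapshots and look for stale vzdump ones
zfs list -t snapshot -o name,creation | grep vzdump

# if an old one is left over from an aborted run, remove it:
zfs destroy rpool/data/subvol-101-disk-1@vzdump
```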
I see a very strange Summary for my containers.
The Summary says the containers use very little memory, but in fact they use almost all of it.
For example, right now the Proxmox Summary shows:
Memory usage 0.06% (20.54 MiB of 32.00 GiB)
Total for all containers on the node:
RAM usage 3.75% (1.32 GiB of 35.30 GiB)...
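As far as I know, the Summary figure is read from the container's memory cgroup, so comparing the host-side cgroup value with what the container itself reports can localize the discrepancy (container ID 101 is just an example; paths assume cgroup v1 as on PVE 5):

```
# host side: the value Proxmox reads for container 101
cat /sys/fs/cgroup/memory/lxc/101/memory.usage_in_bytes

# inside the container: what the guest sees
pct exec 101 -- free -m
```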