Not exactly. My CT is on host A; srv B is an external NFS server mounted on host A via /etc/fstab at the /ftp path.
In the CT, the mount point is enabled through the config file /etc/pve/lxc/123.conf like this:
mp0: /ftp,mp=/ftp
So my new server C will be updated in /etc/fstab on...
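A minimal sketch of the host-side switch, assuming the new server is reachable under a placeholder name (the hostname and export path below are hypothetical, not from the thread):

```shell
# /etc/fstab on the host: replace the old server's line with the new one, e.g.
#   srv-c.example.com:/export/ftp  /ftp  nfs  defaults  0  0
umount -l /ftp    # lazily detach the old share in the host namespace
mount /ftp        # mount the new server from the updated fstab entry
```

One caveat: the CT's mp0 bind mount was created when the CT started and still references the old NFS superblock, so processes inside the CT keep seeing the old server until that bind mount is re-created; in practice a CT restart is often still needed.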
My CT "test" uses a /ftp mount point backed by SRV1.
SRV1 is being replaced by a new SRV2 (new IP, etc.) with the same content inside /ftp. I would like to mount /ftp from SRV2 on my CT "test" without rebooting or downtime.
Hi Floh8,
I'm switching to a new NFS server and would like to mount the new server's partition in the same directory without stopping my CT. Is this possible?
Hi Community !
I have a volume that I want to switch (remount) in the same location without downtime on the host or the CT.
Is it possible to unmount/mount an NFS volume in a container without stopping/starting the CT?
The unmount is done on the host, and the mount point is configured in the CT...
Hi @oguz,
Thanks for your feedback, this is the content of /etc/systemd/coredump.conf :
[Coredump]
#Storage=external
#Compress=yes
#ProcessSizeMax=2G
#ExternalSizeMax=2G
#JournalSizeMax=767M
#MaxUse=
#KeepFree=
I've applied the update but haven't rebooted yet, as this is a production environment...
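Since every line in that coredump.conf is commented out, systemd's compiled-in defaults apply (Storage=external, Compress=yes, and so on). A quick, self-contained way to check for any active settings (sketch using the pasted content):

```shell
# The pasted /etc/systemd/coredump.conf, every setting commented out:
conf='[Coredump]
#Storage=external
#Compress=yes
#ProcessSizeMax=2G'
# Print active (uncommented) Key=Value lines; no matches means the
# compiled-in defaults are in effect.
printf '%s\n' "$conf" | grep -E '^[A-Za-z]+=' || echo 'defaults in effect'
```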
Hi @oguz,
I hope you are doing well.
I got the same message again; I see the same logs in journalctl, but in coredumpctl I do not see anything related to lxcfs.
Jun 13 15:35:45 hvr2 kernel: lxcfs[12664]: segfault at 8018 ip 00007f389acfe00e sp 00007f38617f9aa0 error 4 in...
* can you post the journal entry with the crash like last time?
Journalctl's persistent log retention was not active; I have enabled it now so the logs will be kept next time.
* what do you get if you run coredumpctl?
No coredumps found.
* can you check also the other servers (maybe it crashed on a different node)...
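For reference, one common way to enable persistent journal storage so the next crash survives a reboot (assumes the default Storage=auto in /etc/systemd/journald.conf):

```shell
# Create the persistent journal directory and let journald pick it up.
mkdir -p /var/log/journal
systemd-tmpfiles --create --prefix /var/log/journal
systemctl restart systemd-journald
```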
thanks for answering :)
could you provide the coredump from the new crash?
Unfortunately the /var/lib/systemd/coredump folder is empty, and a search for a filename starting with 'core.lxcfs' returns no results, even though the packages are installed:
ii lxcfs-dbgsym 4.0.6-pve1 amd64 debug...
Hi @oguz ,
I restarted the server on a recent kernel, proxmox-ve: 6.4-1 (running kernel: 5.4.106-1-pve), but the problem occurred again.
Do you have any other leads?
Hi @Fabian_E,
This happened to me again several times recently. I understand your last answer, but how can we try to find the origin of the problem?
This is problematic in a production environment; we are obliged to stop/start the containers.
proxmox-ve: 6.4-1 (running kernel: 5.4.78-2-pve)...
Hi @Fabian_E,
I had the problem again (twice) on Proxmox v5; could you have another look?
ii lxcfs-dbgsym 3.0.3-pve1 amd64 Debug symbols for lxcfs
ii systemd-coredump 232-25+deb9u12 amd64 tools...
Hi @oguz ,
I'm commenting on this case again because I encountered this error on an updated version.
logs on the host :
`Mar 16 15:55:00 hvr2 kernel: lxcfs[20228]: segfault at 8018 ip 00007fcecfa6e00e sp 00007fceaf7fdaa0 error 4 in liblxcfs.so[7fcecfa5e000+14000]
Mar 16 15:55:00 hvr2 kernel: Code...
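Those two numbers are enough to locate the crash site: subtracting the library's load base from the instruction pointer gives the offset inside liblxcfs.so, which the installed lxcfs-dbgsym symbols can resolve to a source line (the .so path below is an assumption and may differ per install):

```shell
# From the log: ip 00007fcecfa6e00e in liblxcfs.so[7fcecfa5e000+14000]
printf '0x%x\n' $(( 0x00007fcecfa6e00e - 0x7fcecfa5e000 ))   # offset inside the .so
# then, with lxcfs-dbgsym installed (library path may vary):
#   addr2line -e /usr/lib/x86_64-linux-gnu/lxcfs/liblxcfs.so 0x1000e
```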
I found my problem; it was related to my provider.
I can confirm that the configuration below works on Proxmox 6 (vmbr2, not vmbr0):
iface vmbr2 inet static
address 5.135.xxx.xxx/27
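For context, a fuller bridge stanza in /etc/network/interfaces typically looks like this (all values below are placeholders; the bridge-ports and gateway depend on the provider):

```
auto vmbr2
iface vmbr2 inet static
        address 203.0.113.10/27
        gateway 203.0.113.1
        bridge-ports eth1
        bridge-stp off
        bridge-fd 0
```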