Search results

  1.

    Switch NFS MP on the same path without rebooting

    In my case, the answer is no. I have to stop and start all the CTs in order to point them at the new NFS server.
  2.

    Switch NFS MP on the same path without rebooting

    Not exactly. My CT is on host A; srv B is an external NFS server mounted on host A via /etc/fstab at the /ftp path. In the CT, the mount point is enabled through the config file /etc/pve/lxc/123.cfg like this: mp0: /ftp,mp=/ftp. So my new server C will be updated in /etc/fstab on...
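    The setup described in this thread can be sketched as two config fragments. The host path /ftp, the server names, and the mp0 line come from the posts; the NFS export path /export/ftp is an assumption for illustration:

    ```text
    # /etc/fstab on host A -- NFS share from srv B mounted at /ftp
    # (the export path /export/ftp is an assumption)
    srvB:/export/ftp  /ftp  nfs  defaults  0  0

    # /etc/pve/lxc/123.cfg -- bind-mount the host's /ftp into the CT at /ftp
    mp0: /ftp,mp=/ftp
    ```

    Switching to the new server C then means editing the /etc/fstab line on the host; the open question in the thread is whether a running CT picks that change up without a restart.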
  3.

    Proxmox VE 7.4 released!

    Hi, what is the EOL (end of security support) for the 7.x versions, please?
  4.

    Switch NFS MP on the same path without rebooting

    My CT 'test' uses a /ftp mount point on SRV1. SRV1 is being replaced by a new SRV2 (new IP, etc.) with the same content inside /ftp. I would like to mount /ftp from SRV2 in my CT 'test' without rebooting or downtime.
  5.

    Switch NFS MP on the same path without rebooting

    Hi Floh8, I'm switching to a new NFS server. I would like to mount the new server's partition in the same directory without stopping my CT. Is this possible?
  6.

    Switch NFS MP on the same path without rebooting

    Hi community! I have a volume that I want to switch (remount) at the same location without downtime on the host or CT. Is it possible to unmount and remount an NFS volume in a container without stopping and starting the CT? Unmounting is done on the host, and the mount point is configured in the CT...
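    The operation being asked about can be sketched as a host-side remount. The /ftp path follows the thread; the server name and export path are assumptions, and (as the replies in this thread note) the change may not propagate into a running CT:

    ```text
    # On the host, outside the CT (sketch, requires root):
    umount /ftp                        # may need 'umount -l' if the path is busy
    mount -t nfs NEWSRV:/ftp /ftp      # server name and export path are assumptions
    ```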
  7.

    lxcfs.service: Main process exited code=killed, status=11/SEGV

    Hi @oguz, thanks for your feedback. This is the content of /etc/systemd/coredump.conf: [Coredump] #Storage=external #Compress=yes #ProcessSizeMax=2G #ExternalSizeMax=2G #JournalSizeMax=767M #MaxUse= #KeepFree= I've applied the update but haven't rebooted yet, as this is a production environment...
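    For context: every line in that file is commented out, so systemd-coredump runs with its built-in defaults (Storage=external). A minimal sketch with explicit values, which are illustrative rather than taken from the thread:

    ```text
    # /etc/systemd/coredump.conf -- explicit values instead of defaults
    [Coredump]
    Storage=external       # write dumps to /var/lib/systemd/coredump
    Compress=yes
    ProcessSizeMax=2G
    ExternalSizeMax=2G
    ```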
  8.

    lxcfs.service: Main process exited code=killed, status=11/SEGV

    Hi @oguz, I hope you are doing well. I got the same message again; I have the same logs in journalctl, but in coredumpctl I do not see anything related to lxcfs. Jun 13 15:35:45 hvr2 kernel: lxcfs[12664]: segfault at 8018 ip 00007f389acfe00e sp 00007f38617f9aa0 error 4 in...
  9.

    lxcfs.service: Main process exited code=killed, status=11/SEGV

    * Can you post the journal entry with the crash, like last time? Journalctl's system log retention was not active; I have just enabled it for the next occurrences. * What do you get if you run coredumpctl? No coredumps found. * Can you also check the other servers (maybe it crashed on a different node)...
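    Persistent journald retention, mentioned above, is normally enabled with a standard systemd setting (generic procedure, not specific to these posts):

    ```text
    # /etc/systemd/journald.conf -- keep logs across reboots
    [Journal]
    Storage=persistent

    # Then create the on-disk journal directory and restart journald:
    #   mkdir -p /var/log/journal
    #   systemctl restart systemd-journald
    ```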
  10.

    lxcfs.service: Main process exited code=killed, status=11/SEGV

    Thanks for answering :) Could you provide the coredump from the new crash? Unfortunately the /var/lib/systemd/coredump folder is empty, and a search for a filename starting with 'core.lxcfs' returns no results, even though the packages are installed: ii lxcfs-dbgsym 4.0.6-pve1 amd64 debug...
  11.

    lxcfs.service: Main process exited code=killed, status=11/SEGV

    Hi @oguz, I restarted the server on a recent kernel, proxmox-ve: 6.4-1 (running kernel: 5.4.106-1-pve), but the problem occurred again. Do you have any other leads?
  12.

    lxcfs.service: Main process exited code=killed, status=11/SEGV

    Hi @Fabian_E, this has happened to me again several times recently. I understand your last answer, but how can we try to find the origin of the problem? This is problematic in a production environment; we are obliged to stop and start the containers. proxmox-ve: 6.4-1 (running kernel: 5.4.78-2-pve)...
  13.

    lxcfs.service: Main process exited code=killed, status=11/SEGV

    Hi @Fabian_E, I had the problem again (twice) on Proxmox v5, can you take another look? ii lxcfs-dbgsym 3.0.3-pve1 amd64 Debug symbols for lxcfs ii systemd-coredump 232-25+deb9u12 amd64 tools...
  14.

    lxcfs.service: Main process exited code=killed, status=11/SEGV

    Hi @oguz, I am commenting on this case again because I encountered this error on an updated version. Logs on the host: `Mar 16 15:55:00 hvr2 kernel: lxcfs[20228]: segfault at 8018 ip 00007fcecfa6e00e sp 00007fceaf7fdaa0 error 4 in liblxcfs.so[7fcecfa5e000+14000] Mar 16 15:55:00 hvr2 kernel: Code...
  15.

    [SOLVED] Add a secondary network public IP

    I found my problem; it was related to my provider. I confirm that the configuration below works on Proxmox 6 (on vmbr2, not vmbr0): iface vmbr2 inet static address 5.135.xxx.xxx/27
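    The confirmed working configuration can be sketched as a complete /etc/network/interfaces stanza. Only the iface and address lines come from the post; the bridge options below are illustrative assumptions added for completeness:

    ```text
    # /etc/network/interfaces -- second public IP on its own bridge
    auto vmbr2
    iface vmbr2 inet static
        address 5.135.xxx.xxx/27
        # bridge settings below are assumptions
        bridge-ports none
        bridge-stp off
        bridge-fd 0
    ```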
  16.

    [SOLVED] Add a secondary network public IP

    Hi Dylan, thank you for your answer, but I can't do what I want. I would like to add a secondary IP to my vmbr0 interface. I tried using the command line: ip addr add 5.135.xxx.xxx/27 dev vmbr0 route Kernel IP routing table Destination Gateway Genmask Flags Metric Ref...
  17.

    [SOLVED] Add a secondary network public IP

    Hi community, I apologize if the subject has already been discussed, but I have not found an answer to my case. I would like to add a second public IP on my server. Here is my current configuration: I would like to add this configuration, how can I do it? address 5.135.x.x netmask...
  18.

    ZFS: Loaded module v0.8.5-pve1 stuck after migration pv5to6

    Hi community, I need your help. I followed the pve5to6 guide to migrate my Proxmox v5 server to v6. When the migration finished, I tried to reboot, but I'm stuck on this message (cf. attached file); I tried booting with the old kernel version, same result. FYI, I've changed the value...
  19.

    lxcfs.service: Main process exited code=killed, status=11/SEGV

    Thanks Oguz, we will plan the switch to v6. Regards.
  20.

    lxcfs.service: Main process exited code=killed, status=11/SEGV

    Hi Community, I wanted to know if anyone has ever encountered the following problem: logs: 2020-11-29T04:37:30.054129+01:00 hv18 systemd[1]: lxcfs.service: Main process exited, code=killed, status=11/SEGV 2020-11-29T04:37:30.094752+01:00 hv18 systemd[1]: lxcfs.service...
