Search results

  1. [SOLVED] High Load because of PVE Storage checks

    Where can I find the access logs of the PBS?
  2. EMLINK: Too many links

    I hope it helps ... It seems that "dir_index" is missing from the file system features, from what I read on the web
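
    On an ext4 volume this can be checked and fixed with tune2fs (a hedged sketch; /dev/sdX1 is a placeholder for the actual block device, and the filesystem must be unmounted for the repair step):

        # Check whether dir_index is among the enabled features
        tune2fs -l /dev/sdX1 | grep -i 'features'

        # Enable dir_index, then rebuild the directory hash indexes
        tune2fs -O dir_index /dev/sdX1
        e2fsck -fD /dev/sdX1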
  3. [SOLVED] High Load because of PVE Storage checks

    As a test I added a PBS storage to my 7-host PVE cluster ... the result was that on the PBS I had 2 processes at 130% CPU and a flooded syslog with authentication requests ... roughly 1 request per host every second ... It also feels like the hosts (especially the one with the worst hardware) were...
  4. EMLINK: Too many links

    Hi, this also seems to be an issue on an NFS mount ... any idea how I can increase the number there? The NFS is mounted from a Synology ext4 volume
  5. NFS Datastore: EINVAL: Invalid argument

    I have basically the same problem, I think ... but with a Synology. It creates the directory, but then it seems it wants to execute the "chown" to set the owner to backup.backup (which is unneeded because the rights are drwxrwxrwx), and I think this fails ... I did not find a proper way to set that up in...
  6. Container on gluster volume not possible?

    Is anyone who is doing this also running HA guests? Do you also experience issues like https://forum.proxmox.com/threads/problems-on-shutdown-boots-with-nfs-glusterfs-filesystems-and-ha-containers-systemd-order-of-services.71962/#post-324432 ?
  7. [SOLVED] Problems on shutdown/boots with nfs/glusterfs Filesystems and HA containers / systemd order of services!

    @Dominic The deeper I think/look into it, the more sure I am that the topic in fact has nothing to do with glusterfs, but with when the storage mounts are executed. When you check my chart in...
  8. [SOLVED] Problems on shutdown/boots with nfs/glusterfs Filesystems and HA containers / systemd order of services!

    Thank you. I see your points ... never hit them ;-) Yes, having the option to configure more than 2 Gluster servers in the config would be cool (I have a 3-replica system over 7 NUCs and a second one, 3-way replicated, over 3 NUCs; in fact the scenario like in the ticket is dangerous because it does not provide...
  9. [SOLVED] Problems on shutdown/boots with nfs/glusterfs Filesystems and HA containers / systemd order of services!

    @Dominic this thread is kind of the follow-up to https://forum.proxmox.com/threads/pve-with-glusterfs-as-shored-storage-and-filesystem-gets-read-only-on-boot.58243/#post-277424 Do you have any idea? If the "mounts" are done by the pve-guests service, then having pve-ha-lrm starting (and ending)...
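
    The declared ordering between the two units can be checked directly (a hedged sketch; both unit names exist on a stock PVE install, but the output is worth comparing against your own node):

        # Show which units each service is ordered after/before
        systemctl show -p After -p Before pve-guests.service
        systemctl show -p After -p Before pve-ha-lrm.service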
  10. [SOLVED] Any more NUC users with unusable watchdog here?

    BTW: This issue is fixed since the latest kernels in PVE ... so it seems it was a Linux issue :-)
  11. [SOLVED] Problems on shutdown/boots with nfs/glusterfs Filesystems and HA containers / systemd order of services!

    See also here. This is a systemd-analyze ... why are those two mounts done that late in the startup process?
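
    For a specific mount, the delaying chain can be narrowed down like this (a hedged sketch; the unit name mnt-pve-glusterfs.mount is inferred from the /mnt/pve/glusterfs path quoted later in this thread):

        # Show which units the mount had to wait for at boot
        systemd-analyze critical-chain mnt-pve-glusterfs.mount

        # Render the full boot sequence for comparison
        systemd-analyze plot > boot.svg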
  12. [SOLVED] Problems on shutdown/boots with nfs/glusterfs Filesystems and HA containers / systemd order of services!

    I dug deeper and I have a new idea ... The problem is that all HA guests on the system are not stopped by the "stop all guests" call ... so they continue to run, and then the filesystem gets killed (or glusterfs gets stopped) ... When you look at the log above you see Jun 25 14:28:11 pm7...
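
    One way to attack this kind of ordering problem is a systemd drop-in that keeps the mount alive until the HA stack has stopped (a minimal sketch, assuming the PVE-managed mount surfaces as mnt-pve-glusterfs.mount and that systemd applies drop-ins to it; Before= at start time implies reverse order at stop time):

        # /etc/systemd/system/mnt-pve-glusterfs.mount.d/order.conf
        [Unit]
        # Start before pve-ha-lrm, hence unmount only after it has stopped
        Before=pve-ha-lrm.service

    After creating the file, run systemctl daemon-reload for it to take effect.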
  13. [SOLVED] Problems on shutdown/boots with nfs/glusterfs Filesystems and HA containers / systemd order of services!

    Maybe it is also really like https://bugzilla.redhat.com/show_bug.cgi?id=1701234 where blk-availability.service just runs "too early", or https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=946882 ... but no real solutions there either, if I see it correctly :-(
  14. [SOLVED] Problems on shutdown/boots with nfs/glusterfs Filesystems and HA containers / systemd order of services!

    Because of performance issues I read about, I did not want to use NFS, and going back to 5.x is also no option because of EOL. In general, when looking in my syslog I find:
    > Jun 25 14:28:15 pm7 systemd[1]: mnt-pve-glusterfs.mount: Succeeded.
    > Jun 25 14:28:15 pm7 systemd[1]: Unmounted /mnt/pve/glusterfs...
  15. [SOLVED] Problems on shutdown/boots with nfs/glusterfs Filesystems and HA containers / systemd order of services!

    So you mean I stop the container and then start it manually that way? But /tmp is a bad place because it will be cleaned on startup, or?!
  16. [SOLVED] Problems on shutdown/boots with nfs/glusterfs Filesystems and HA containers / systemd order of services!

    Hi, some information on my setup: I have a cluster of Intel NUCs with SSDs, and most of the SSD space is a cluster-wide "glusterfs" filesystem where all VM and also LXC container images are stored. Because containers cannot be placed on GlusterFS by default, I found ideas here...
  17. e1000 driver hang

    I also had the feeling that it did not work in the first place ... but wasn't sure how to verify correctly. But with this info I will also place it in both :-)
  18. e1000 driver hang

    I have a one-liner with eno1 in my file and added it to that ... no idea if it's correct :-( Maybe someone can advise: iface eno1 inet manual post-up /sbin/ethtool -K eno1 tso off gso off
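
    Laid out as a proper stanza in /etc/network/interfaces, the quoted one-liner would look like this (a hedged sketch; eno1 and the offload flags are taken from the post above, so adapt them to the NIC in question):

        iface eno1 inet manual
            # Disable TCP and generic segmentation offload, a common
            # workaround for e1000 "Detected Hardware Unit Hang" errors
            post-up /sbin/ethtool -K eno1 tso off gso off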
  19. [SOLVED] Any more NUC users with unusable watchdog here?

    In fact I use a Meross 4-socket plug thingy (https://www.meross.com/product/19/article/) so I can control that via API ... I "just" need to get a trigger and info if a node got fenced :-) Yes, external monitoring or such ... Does the Proxmox API return a "fenced" status on a node?
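
    The HA manager does expose per-node state via the API (a hedged sketch; the endpoint exists on stock PVE, but whether it reports the exact string "fence" for this scenario is an assumption to verify):

        # Current HA manager view of nodes and resources, via the API
        pvesh get /cluster/ha/status/current

        # Roughly equivalent CLI view
        ha-manager status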
  20. [SOLVED] Any more NUC users with unusable watchdog here?

    So the "only" risk is that if the kernel hangs, the soft watchdog does not trigger a reboot and the machine hangs and needs a manual power-off ... but fencing is done by the other hosts ... so OK ... One aside question that you might know: when a node gets fenced, there is an email sent out ... is there...