Search results

  1. robhost

    5.3 and unprivileged containers: docker works, mount nfs does not

    That's a bug, for sure. It can be reproduced here. Did you file a bug report already? We also need the "nfs4" type, as "nfs" is NFS v3 only! So at the moment it is impossible to mount an NFS v4 share :( IMHO the checkbox in the panel should add "nfs;nfs4" here.
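A hedged sketch of the manual workaround, using an explicit nfs4 mount type in /etc/fstab (server name and paths are placeholders, not from the post):

```
# /etc/fstab sketch (hypothetical server/paths): mount the export
# explicitly as NFS v4 instead of relying on the "nfs" type (v3)
nfs-server.example.com:/export/backup  /mnt/backup  nfs4  defaults  0  0
```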
  2. robhost

    VMs going down during backup randomly

    No, it's all ceph rbd raw.
  3. robhost

    VMs going down during backup randomly

    Nope, nothing. That's very curious. HA VMs restart themselves automatically when this problem appears, but "normal" VMs need to be started manually :/
  4. robhost

    VMs going down during backup randomly

    Hi, there are no network problems and no NFS hangs, because other backup jobs (from other nodes) run fine. The VMs are Linux (CentOS 7). But the VMs are stopped and no KVM process exists anymore, so it does not seem like an OS problem. The QEMU Guest Agent is installed in all VMs.
  5. robhost

    VMs going down during backup randomly

    Hi, with the latest PVE 5.1 we sometimes have VMs (KVM) going down during the backup process that need to be started manually. It appears randomly across our VMs and hosts. We use NFS storage and "snapshot" mode with LZO compression. Example: Any idea what's wrong or how to fix this?
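For context, a hedged sketch of the kind of backup job described (VMID and storage name are placeholders, not from the post):

```
# Hypothetical vzdump invocation matching the setup described:
# NFS target storage, snapshot mode, LZO compression
vzdump 101 --mode snapshot --compress lzo --storage nfs-backup
```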
  6. robhost

    Live migration with local storage gives an error

    With 5.1 it is still not possible to do a live migration from the UI. Why is that? It also seems buggy: if you cancel a migration task (started from the CLI), it does not clean up the LV (vm-XXX-disk-1) on the target node!
  7. robhost

    Proxmox Cluster 4.4 to 5.1 upgrade - VM migration problem

    Any news on that? Is live migration from 4.4 to 5.1 possible? We would like to do rolling upgrades on some of our clusters, and it would be nice to do so with zero VM downtime.
  8. robhost

    KSM and overprovisioning

    You can change the KSM starting threshold to 50% or lower for testing.
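A sketch of one way to do this, assuming the stock ksmtuned setup is in use: lower KSM_THRES_COEF in /etc/ksmtuned.conf so ksmd starts merging earlier.

```
# /etc/ksmtuned.conf sketch (assumption: ksmtuned manages KSM here):
# start ksmd once free memory drops below 50% of total,
# instead of the usual default of 20%
KSM_THRES_COEF=50
# then restart the service: systemctl restart ksmtuned
```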
  9. robhost

    [SOLVED] dump status in vzdump hook script.

    The phase ((job|backup)-(start|end|abort), log-end, pre-(stop|restart), post-restart) will be passed to the called script as an argument. See our IO throttling hook script https://github.com/robhost/proxmox-scripts/blob/master/vzdump_hook.sh for an example.
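The dispatch on the phase argument can be sketched as a minimal bash hook (the phase names come from the post; the echo actions are placeholders for real logic such as IO throttling):

```shell
#!/bin/bash
# Minimal vzdump hook sketch: the phase arrives as the first
# argument (per the post); the actions below are placeholders.
vzdump_hook() {
  local phase="$1"
  case "$phase" in
    job-start)                          echo "job starting" ;;
    job-end|job-abort)                  echo "job finished: $phase" ;;
    backup-start)                       echo "backup starting" ;;
    backup-end|backup-abort)            echo "backup finished: $phase" ;;
    log-end)                            echo "log written" ;;
    pre-stop|pre-restart|post-restart)  echo "guest phase: $phase" ;;
    *)                                  echo "unhandled phase: $phase" ;;
  esac
}

# invoked by vzdump as: script.sh <phase> [...]
if [ $# -gt 0 ]; then vzdump_hook "$@"; fi
```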
  10. robhost

    Info zu VirtIO SCSI

    You can simply replace virtio0 with scsi0 in the VM .conf after switching to SCSI. Inside the guest, vdX then becomes sdX, correct. So check fstab and the GRUB config beforehand ;)
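As an illustration of that rename in /etc/pve/qemu-server/&lt;VMID&gt;.conf (the VMID and volume name below are placeholders, not from the post):

```
# before (hypothetical volume name):
virtio0: local-lvm:vm-100-disk-1,size=32G
# after switching the controller to SCSI:
scsi0: local-lvm:vm-100-disk-1,size=32G
# inside the guest, /dev/vda becomes /dev/sda,
# so check /etc/fstab and the GRUB config first
```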
  11. robhost

    [SOLVED] Undestrand discard option

    AFAIK you have to change this directly in the <VMID>.conf under /etc/pve: From to
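For illustration, enabling discard on a disk line in &lt;VMID&gt;.conf might look like this (the volume name is a placeholder, not from the post):

```
# hypothetical disk line without discard:
scsi0: local-lvm:vm-100-disk-1,size=32G
# the same line with discard enabled:
scsi0: local-lvm:vm-100-disk-1,size=32G,discard=on
```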
  12. robhost

    Slow VM's when backups are running

    Hard to say why, but you could try limiting the bandwidth using "bwlimit" in /etc/vzdump.conf (see https://pve.proxmox.com/pve-docs/vzdump.1.html; it is in KBytes per second), e.g. for testing.
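A sketch of the setting in /etc/vzdump.conf (the value is an example figure, not a recommendation from the post):

```
# /etc/vzdump.conf sketch: cap backup bandwidth at ~50 MB/s
# (bwlimit is given in KB/s; 51200 KB/s = 50 MB/s)
bwlimit: 51200
```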
  13. robhost

    Slow VM's when backups are running

    If it's not set, then you're fine.
  14. robhost

    Slow VM's when backups are running

    Do you have IOPS limits in place via Proxmox? If so, these also slow down your backups!
  15. robhost

    Service pve-cluster stops regularly

    We captured a core dump and can even reproduce this bug now; please see https://bugzilla.proxmox.com/show_bug.cgi?id=1504
  16. robhost

    Service pve-cluster stops regularly

    Yes, we'll try and report back :-)
  17. robhost

    Service pve-cluster stops regularly

    The issue triggered again here :( It seems to happen when reading files from /etc/pve (we use collectd to read stats from /etc/pve/.rrd every minute). Is it possible that there is some kind of read lock or read race condition on /etc/pve, or is it generally not a good idea to read...
  18. robhost

    Corosync memory leak

    Try this: journalctl -u corosync
  19. robhost

    Service pve-cluster stops regularly

    Hi, we just faced the same issue 2 days ago without any apparent reason on PVE 4.4-15/7599e35a: We have no idea what the problem was. Could this be a bug in pve-cluster/pmxcfs? How did you get your trace? Our node was fenced after this, which is expected when pve-cluster is gone. Is there any...