Search results

  1.

    [SOLVED] Sync /etc/pve/priv/known_hosts ?

    No one knows this? Shouldn't /etc/pve be identical on all nodes?
  2.

    Migration of VM with replication job not possible, why?

    If it is so trivial, please do contribute and submit the code yourself.
  3.

    Migration of VM with replication job not possible, why?

    If you set up a one-minute replication job, the maximum time to sync would be the data written in the last 60 seconds. This is pretty much the same as a sync before send.
  4.

    Migration of VM with replication job not possible, why?

    1) QEMU contains no function for a dirty bitmap for delta sync? 2) If you set up replication beforehand, this is exactly what happens on offline migration. 3) See: https://bugzilla.proxmox.com/show_bug.cgi?id=2252 .
  5.

    How to resize root partition Proxmox

    Because there are many different possible setups, and people here do not know how you installed, no one can even try to help you. I suggest you provide info about your install, specifically about storage.
  6.

    [SOLVED] Sync /etc/pve/priv/known_hosts ?

    Hi, I see that I have different content in /etc/pve/priv/known_hosts (/etc/ssh/ssh_known_hosts) on my PM 5 cluster. Shouldn't this file be the same (in sync) on all PM nodes, because it resides inside /etc/pve? Here is an example conf. from two nodes in the same cluster: root@p31:~#...
  7.

    [SOLVED] ID: nagios -> ping: socket: Operation not permitted

    Hi, I have a PM 5. I have the Nagios NRPE agent installed. I monitor lots of things and it works. The only thing I cannot monitor is reachability using ICMP. It looks like the nagios user has no such permissions. How can this be? I don't want to sudo just to do a simple ping. Please advise...
  8.

    pvecm nodes does not match (IP or FQDN) for last added node

    I think this is still strange. Today I added another node, and pvecm nodes showed the node by its FQDN on some nodes and by IP on others. Restarting corosync set it to IP on all nodes. Example: root@p29:~# pvecm nodes Membership information ---------------------- Nodeid Votes Name...
  9.

    Migrate a suspended VM?

    I just want to take this opportunity to bring attention to this feature yet again. It would be awesome, so I would not lose ZFS snapshots when migrating a VM without rebooting it. :-)
  10.

    Live migration of existing machines

    It seems that you are wrong and there is a vm-126-disk-0.qcow2: 2019-10-09 13:49:51 ERROR: found stale volume copy 'localdir2:126/vm-126-disk-0.qcow2' on node 'cloudhost3'
  11.

    Live migration of existing machines

    Remove "stale volume copy 'localdir2:126/vm-126-disk-0.qcow2' on node 'cloudhost3'" and migration should work.
  12.

    Incremental back-ups using dirty bitmap

    WOW, that would be awesome, so we could stop using our own scripts for backups and start using an official solution. :-)
  13.

    Migrate a suspended VM?

    To all: what do you guys think? Would you like to see this feature? Please comment here and on the feature request, so the devs know this would be an awesome feature.
  14.

    LVM thin disaster today: ran out of metadata space

    I suggest adding monitoring of available metadata space on your LVM thin installs.
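The monitoring suggested above could be sketched roughly like this; the `lvs` column names are real LVM options, but the 80% threshold and the script structure are hypothetical choices, not an official check:

```python
# Sketch of an LVM thin-pool metadata monitor, as suggested above.
# Assumes `lvs --noheadings -o lv_name,metadata_percent` output;
# the 80% threshold is an arbitrary example value, tune it for your setup.
import subprocess

THRESHOLD = 80.0  # warn when thin-pool metadata usage exceeds this percent

def parse_lvs(output: str) -> dict:
    """Parse lv_name / metadata_percent pairs from `lvs` output."""
    usage = {}
    for line in output.splitlines():
        fields = line.split()
        if len(fields) == 2:
            name, pct = fields
            try:
                usage[name] = float(pct.replace(",", "."))  # locale decimal comma
            except ValueError:
                pass  # skip lines whose second field is not a number
    return usage
    # Non-thin LVs print no metadata_percent and are skipped above.

def over_threshold(usage: dict, threshold: float = THRESHOLD) -> list:
    """Return the thin pools whose metadata usage is at or above the threshold."""
    return [name for name, pct in usage.items() if pct >= threshold]

if __name__ == "__main__":
    out = subprocess.run(
        ["lvs", "--noheadings", "-o", "lv_name,metadata_percent"],
        capture_output=True, text=True, check=True,
    ).stdout
    for pool in over_threshold(parse_lvs(out)):
        print(f"WARNING: thin pool {pool} metadata over {THRESHOLD}%")
```

A check like this could run from cron or be wrapped as a Nagios/NRPE plugin, alerting well before the pool actually runs out of metadata space.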
  15.

    LVM thin disaster today: ran out of metadata space

    What more info do you want? https://forum.proxmox.com/threads/lvm-ran-out-of-metadata-space-need-help.41325/post-199225
  16.

    Live migration of existing machines

    I would suggest you do some testing with test VMs and you can see how your cluster setup behaves.
  17.

    ZFS bad Performance!

    I am not familiar with the Kingston A400, but Samsung EVO / PRO drives will be faster than HDDs yet still painfully slow, and they also pose a significant threat of data loss on power failure. Take a look at the Intel D3 SSD line of disks and similar. Just look at the enterprise SSDs from any provider, really. They...
  18.

    ZFS bad Performance!

    Well, sorry to disappoint, but 2 x HDDs in a mirror on ZFS will always be painfully slow in my experience. The first usable config with HDDs in my use cases is a minimum of 10 HDDs. However, what you could try is: 1. Enable a SLOG for the pool (create partitions on SSDs and add them as a log device for...
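The SLOG step mentioned in point 1 would look roughly like the sketch below. The pool name `tank` and the device paths are placeholders, not taken from the thread; `zpool add ... log ...` is the standard ZFS command for attaching a separate intent log:

```shell
# Sketch of adding an SLOG (separate intent log) to a ZFS pool, as
# suggested above. "tank" and the partition paths are placeholders;
# prefer stable /dev/disk/by-id/ paths on a real system.

# Mirror the log device across partitions on two SSDs
# (safer than a single log device if one SSD fails):
zpool add tank log mirror /dev/disk/by-id/ssd1-part1 /dev/disk/by-id/ssd2-part1

# Verify that the log vdev shows up in the pool layout:
zpool status tank
```

Note that a SLOG only helps synchronous writes; async workloads will not get faster from it.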