Search results

  1. PVE Storage Replication or PVE-zsync ?

    Ok, what is the easiest way of doing it with two separate nodes w/o clustering? NFS?
  2. PVE Storage Replication or PVE-zsync ?

    I need to sync two "running" KVMs across two nodes in different datacenters for high availability, not for load balancing. So if vm1 is not reachable, the second failover IP will reach vm2. So which way should I go? Currently I am using PVE-zsync for snapshots. But AFAIK, it cannot be used for running...
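For reference, a minimal PVE-zsync setup between two non-clustered nodes might look like the sketch below. The IP address, VM ID, dataset, and job name are illustrative, not taken from the thread; the options themselves match the ones shown in the pve-zsync posts further down.

```shell
# One-shot sync of VM 100 from a remote node to a local ZFS dataset,
# keeping up to 7 snapshots (all values are examples only):
pve-zsync sync --source 192.0.2.10:100 --dest tank/backup --name vm100 --maxsnap 7 --method ssh

# For a recurring job, pve-zsync can create a cron entry instead:
pve-zsync create --source 192.0.2.10:100 --dest tank/backup --name vm100 --maxsnap 7
```

Note that pve-zsync replicates snapshots asynchronously, so it is not a live HA mechanism: failing over to the replica loses any writes made since the last sync, which is the limitation the post above alludes to.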
  3. oom-kill issue.

    Ok thank you. I will investigate deeper.
  4. oom-kill issue.

    But more heavily loaded nodes are not having this issue. Nearly the same node settings / hardware.
  5. oom-kill issue.

    Two of my PVE servers have an oom-kill issue that stops KVM guests. But there are other nodes, even under heavier load, that do not have the oom-kill issue. Is there a way to completely disable it? I can handle slow VMs, but a stopped KVM is a big problem. Thanks
  6. Kvm stops with this?

    This happened again. But there is nearly 30-40% free memory on the node. I need to disable the OOM killer. Is that somehow possible? Also, this is not happening on other nodes with the same hardware, even under heavier load.
  7. Kvm stops with this?

    Feb 09 00:24:42 s5 kernel: oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),cpuset=/,mems_allowed=0,global_oom,task_memcg=/qemu.slice/124.scope,task=kvm,pid=15247,uid=0
    Feb 09 00:24:42 s5 kernel: Out of memory: Killed process 15247 (kvm) total-vm:9604984kB, anon-rss:8819368kB...
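The kernel's OOM killer cannot be globally disabled in a safe way, but individual processes can be made less likely to be chosen (or exempted entirely) via `oom_score_adj` in /proc. A minimal sketch, using the shell's own PID as a stand-in for the kvm PID from the log above; exempting a process with -1000 requires root:

```shell
pid=$$                        # stand-in PID; on a PVE node, use the kvm PID from the log
cat /proc/$pid/oom_score_adj  # default is 0

# Raising the score (making the process MORE likely to be killed) is
# allowed unprivileged; lowering it, e.g. to -1000 to exempt the
# process entirely, requires root.
echo 500 > /proc/$pid/oom_score_adj
cat /proc/$pid/oom_score_adj  # prints: 500
```

Exempting the kvm process only redirects the OOM killer to other processes, so it is a stopgap; the underlying memory overcommit on the node (guest RAM plus things like the ZFS ARC) still has to be addressed.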
  8. [SOLVED] issue with PVEZSYNC?

    The syntax is definitely correct. But somehow, something is not liked by PVE. I compared before and afterwards.
  9. [SOLVED] issue with PVEZSYNC?

    Ok, I think I found the issue. I had manually edited the storage.cfg file, which did not cause any issue. But then I tried adding a temp storage and deleting it from the GUI, to update the storage.cfg file the way PVE likes it :) Then I tried the command and voilà, it is running fine. I think, somehow, storage.cfg...
  10. [SOLVED] issue with PVEZSYNC?

    I even tried a manual run:
    root@backup:~# pve-zsync sync --source 1.2.3.4:103 --dest D2 --name 103 --maxsnap 10 --method ssh
    ERROR: in path
  11. [SOLVED] issue with PVEZSYNC?

    That is not the main problem. The main issue is ERROR: in path. I even tried creating the bash script from the terminal. ERROR: in path still exists.
  12. [SOLVED] issue with PVEZSYNC?

    root@backup:/D4BACKUP# ./pve-zsync.sh
    ERROR: in path
    ERROR: in path
    ERROR: in path
    ...
    ./pve-zsync.sh: line 1: $'\r': command not found
    ./pve-zsync.sh: line 2: $'\r': command not found
    But some syncs are done. I just upgraded to v6. With v5 it was running fine. I am syncing to different...
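The `$'\r': command not found` errors in the excerpt above are a classic sign of Windows (CRLF) line endings in the script, e.g. after editing or pasting it through a Windows editor. Stripping the carriage returns fixes it; the demonstration below uses a throwaway script rather than the poster's actual file:

```shell
printf '\r\necho ok\r\n' > demo.sh  # simulate a script saved with CRLF line endings
bash demo.sh                        # fails on line 1 with: $'\r': command not found
sed -i 's/\r$//' demo.sh            # strip the trailing carriage return from every line
bash demo.sh                        # prints: ok
```

`dos2unix demo.sh` does the same thing where that tool is installed.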
  13. [SOLVED] issue with PVEZSYNC?

    But happens again next time I run.
  14. [SOLVED] issue with PVEZSYNC?

    Thank you I fixed it with your answer.
  15. Can someone please explain this about zfs ?

    For those having the same issue, which I am sure many do: I shrank the disk in the KVM, then set the dataset size, and extended it again. And the dataset size dropped back down to the in-use size.
  16. Can someone please explain this about zfs ?

    I tried trim; it only reduced a few GBs. I am going to try my method on a Windows machine.
  17. Can someone please explain this about zfs ?

    It did not make any difference. But you already gave me an idea. I will shrink the disk to the used size and extend it again. I feel like it will reduce the dataset to the used amount. What do you think?
  18. Can someone please explain this about zfs ?

    I want to reduce the size of backups taken from Proxmox, of snapshots made by ZFS, and of the dataset size of KVMs. Because as time passes, with disk reads/writes/deletes, disk usage increases unnecessarily. So I need to release the free space. Was I able to explain?
  19. Can someone please explain this about zfs ?

    This is what I want to learn. I tried zero-filling and defragmenting the disk, but they do not seem to help much. Only a couple of GBs were reduced. So you suggest shrinking the partition to the used size and extending it again? Do you think cloning the KVM from Proxmox might help?
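The shrink-and-regrow workaround in this thread works because re-creating the volume drops blocks the guest no longer references. A less disruptive path, sketched below on the assumption that the VM disk is a ZFS zvol and the guest uses a virtio-scsi disk, is to let the guest discard unused blocks; the VM ID, disk name, and dataset path are illustrative:

```shell
# Enable discard on the VM's disk so guest TRIM/discard requests
# reach the backing zvol (VM 100 and scsi0 are example values):
qm set 100 --scsi0 local-zfs:vm-100-disk-0,discard=on

# Inside a Linux guest, release the free space:
fstrim -av

# Then compare logical versus actually referenced space on the host
# (dataset path is an example):
zfs list -o name,used,refer,logicalused rpool/data/vm-100-disk-0
```

With discard enabled, the zero-filling step becomes unnecessary: space freed inside the guest is returned to the pool directly.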
