Recent content by tonci

  1. Abysmally slow restore from backup

    Hello, this is my post regarding this subject: https://forum.proxmox.com/threads/backup-speed-vs-restore-speed.106873/#post-466581 My general conclusion was: backup speed is very satisfactory (90% saturation of a 1G network, which is considered expected) ... no matter whether backing up 1 VM or...
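
    Not from the original thread, but a quick way to separate raw network throughput from storage throughput is an iperf3 run between the two hosts (the hostname "pbs01" is a placeholder):

        # on the PBS host (placeholder name pbs01): start an iperf3 server
        iperf3 -s

        # on the PVE host: measure raw TCP throughput to the PBS host
        iperf3 -c pbs01 -t 30

    If iperf3 shows ~940 Mbps in both directions, a slow restore points at the storage or the restore path rather than the wire.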
  2. Redundant Servers and Load Balancing using MX Records

    Thanks, sounds promising ... would a point-to-point port-forward be enough? ... if yes, which ports should be opened between those two PMGs ... or is a site-to-site VPN necessary? Thanks BR T
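
    As a sketch only (the authoritative port list is in the PMG documentation, and "pmg-gw2" is a placeholder name): assuming cluster synchronization runs over SSH and mail relay over SMTP, a minimal reachability check between the gateways would be:

        # from pmg-gw1: check the ports assumed above on the peer
        nc -zv pmg-gw2 22   # SSH, assumed to carry cluster sync traffic
        nc -zv pmg-gw2 25   # SMTP relay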
  3. PBS compression backup job

    Hello, I did a little test today, made one VM, and this is its "df -h":

    [root@s01 ~]# df -h
    Filesystem      Size  Used Avail Use% Mounted on
    devtmpfs        3.9G     0  3.9G   0% /dev
    tmpfs           3.9G     0  3.9G   0% /dev/shm
    tmpfs...
  4. Redundant Servers and Load Balancing using MX Records

    Hi, I'm not 100% sure that my post belongs to this subject, but this would be my scenario: I'm planning to use PMG as the incoming point for 10 clients of mine, and I'm planning to host it at one ISP (as a VM). So incoming mails for the 10 domains will come to this pmg-gw1 and it will forward them to...
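
    To illustrate the MX-based redundancy the thread is about (the domain and the second gateway are hypothetical; only pmg-gw1 appears in the post), each hosted domain would publish both gateways with different priorities, which can be verified with dig:

        # query the published MX records for one of the hosted domains
        dig +short MX example.com
        # expected output with two gateways (hypothetical):
        #   10 pmg-gw1.example.com.
        #   20 pmg-gw2.example.com.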
  5. [SOLVED] Failed to run lxc.hook.pre-start

    ?! ... Ouu yesss :) ...Thank you very much !!!
  6. [SOLVED] Failed to run lxc.hook.pre-start

    Hello to all, after today's upgrade from 7.2-11 to 7.2.14, LXCs won't boot:

    run_buffer: 321 Script exited with status 2
    lxc_init: 847 Failed to run lxc.hook.pre-start for container "109"
    __lxc_start: 2008 Failed to initialize container "109"
    TASK ERROR: startup for container '109' failed...
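
    Not part of the original post, but the usual first step for this class of error is to start the container in the foreground with debug logging and inspect the output of the failing pre-start hook:

        # run container 109 in the foreground with debug logging
        lxc-start -n 109 -F -l DEBUG -o /tmp/lxc-109.log
        less /tmp/lxc-109.log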
  7. Permission denied (os error 13) on NFS share

    That solution solved my issue too, so thank you! I'd like to add the following: my home-lab setup consists of PVE & PBS on the same Proxmox host. PBS has 2 datastores: "pbs-local" as the local one (a local ZFS dataset) and "pbs-nfs" as an NFS share on a Synology NAS. So VMs from the local zfs-vm dataset are...
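
    For context, a generic sketch (the mount path is a placeholder, not from the thread): PBS services run as the backup user, so "permission denied" on an NFS datastore usually comes down to ownership or ID mapping on the export:

        # check who owns the mounted datastore
        ls -ld /mnt/pbs-nfs

        # if the export's ID mapping allows it, hand the tree to the backup user
        chown -R backup:backup /mnt/pbs-nfs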
  8. Issue after upgrade to 7.2.3

    Hi to all, today I did a 3-node cluster upgrade from 7.1.x to 7.2.3 (with community subscriptions) and the process broke on every node. After apt update / dist-upgrade (which obviously finished with errors), I rebooted the host, but then the Open vSwitch network went down and I had to switch back to a Linux bridge...
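
    A generic recovery sketch for an OVS bridge that fails to come up after an upgrade (run from the IPMI/local console; these are not the exact steps from the post):

        # restart the OVS daemon, then reapply the interface configuration
        systemctl restart openvswitch-switch
        ifreload -a        # ifupdown2; or 'ifup vmbr0' for a single bridge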
  9. backup speed vs restore speed

    One more thing ... backup speed was not an issue at all ... PBS was receiving data at 950 Mbps ... backing up 1 VM ... but restoring the same one ran at 350 Mbps :( Regarding all of the above, we can say that PBS is "faster at writing than at reading" ... which is not that common ...
  10. backup speed vs restore speed

    There is one more thing to point out (after further "combination" testing): my cluster consists of two powerful hosts (pve1 & pve2), and the "little" 3rd quorum node (pve3) is a Supermicro A2 C3558 Atom (4 x 4 TB SATA 7200 rpm WD Red Pro drives, zraid10). This quorum hardware concurrently runs...
  11. backup speed vs restore speed

    vzdump restores at wire speed even when restoring just one VM ...
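
    For reference, a plain vzdump restore on PVE looks like this (archive path and VMID are hypothetical; "data2" is the zfspool from the storage.cfg below):

        # restore a vzdump archive into VM 100 on the data2 storage
        qmrestore /mnt/backup/vzdump-qemu-100.vma.zst 100 --storage data2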
  12. backup speed vs restore speed

    this is storage.cfg:

    dir: local
            path /var/lib/vz
            content images,rootdir
            shared 0

    zfspool: data2
            pool data2
            content rootdir,images
            mountpoint /data2
            nodes pve02-company,pve01-company
            sparse 1

    zfspool: data3
            pool data3...
  13. backup speed vs restore speed

    Now I tested with a hw-raid volume (LSI 3108) (1 volume -> ZFS raid0) and the result is the same as with the 4 x 1T server SSD zraid10. But we are always talking about one target; all these 4 VMs are being concurrently restored from this PBS to the same target ... So the target is capable of...
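
    To confirm what the target pool can sustain on its own, a generic sketch (test name and size are placeholders; /data2 is the mountpoint from the storage.cfg above):

        # sequential write test against the restore target
        fio --name=restore-target --directory=/data2 --rw=write \
            --bs=4M --size=10G --end_fsync=1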
  14. backup speed vs restore speed

    Hello, this is what the benchmark says: so, I'm restoring 4 different VMs to the same target storage, zraid10 4 x 1T SSD (local on the PVE host)
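
    The benchmark referenced above is presumably the built-in PBS one; it can be reproduced with (the repository string is a placeholder, reusing the "pbs-local" datastore name from the NFS thread above):

        # measure TLS, compression and hashing speed against a PBS datastore
        proxmox-backup-client benchmark --repository root@pam@pbs01:pbs-local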
  15. ovswitch problem after upgrade

    After/during the latest upgrade (with and without subscription) I had a problem with the Open vSwitch interfaces: the node was cut off from the cluster and I had to use IPMI to recover the network interfaces ... ifup vmbr0 did the job ... So be careful ...
