Search results

  1. Backup not working for lxc containers.

    Here's an idea. Would it be possible to add an option to skip the lxc freeze at snapshot time? With a snapshot mechanism that guarantees consistency, like zfs or lvm-thin, it's enough that a consistent snapshot is made and that gets backed up. Manually making the snapshot and backing it up works without problems...
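
    A minimal sketch of the manual approach described above, assuming a zfs-backed container whose rootfs lives on a hypothetical dataset rpool/data/subvol-100-disk-1 (dataset, archive path and snapshot name are illustrative):

        # take a consistent snapshot without freezing the container
        zfs snapshot rpool/data/subvol-100-disk-1@vzdump-manual
        # archive the snapshot contents via the hidden .zfs directory
        tar -czf /backup/vzdump-lxc-100.tar.gz \
            -C /rpool/data/subvol-100-disk-1/.zfs/snapshot/vzdump-manual .
        # remove the snapshot once the archive is written
        zfs destroy rpool/data/subvol-100-disk-1@vzdump-manual
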
  2. Backup not working for lxc containers.

    OK, I understand, but is there any info on what blocks the process, and why, when a fuse mount is present? Considering the confidence in your statement, you must have investigated it more thoroughly.
  3. Backup not working for lxc containers.

    Sorry for resurrecting an old thread, but I've just run into the same problem as the OP on a fresh 5.1 upgrade (from 3.4). All containers are running fine, but the snapshot backup just freezes at the line "INFO: create storage snapshot 'vzdump'". So you're saying it's not possible to back up containers with...
  4. Linux Container wrong hostname in /etc/hosts

    For now I think we can live with it (I also mentioned that this solution exists). However, could you please explain the purpose of adding an entry for 127.0.1.1 when other entries for the same host name are already in the file? The flag file approach is somewhat faulty because it's...
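
    If the flag file in question is PVE's per-file ignore marker (my assumption; that is the usual mechanism for opting out of PVE's file management), a minimal sketch, run inside the container:

        # tell PVE not to manage /etc/hosts for this container
        touch /etc/.pve-ignore.hosts
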
  5. Linux Container wrong hostname in /etc/hosts

    In my case I have a bunch of lxc containers converted from openvz where we set the hosts entries manually. They all contain the proper host name, yet proxmox puts the following lines in the /etc/hosts file:

        # --- BEGIN PVE ---
        127.0.1.1 xxxhost
        # --- END PVE ---

    But all hosts files contain...
  6. High idle CPU load with kvm on latest PVE 5.1

    I've just installed a new system with latest PVE 5.1 with all updates on a Supermicro board with the Intel vulnerability patches in BIOS and a Xeon E5-2620v4. I experience high load on idle kvm VMs, about 6-7%. On the guest, CPU utilisation is 0-0.1%, all fine. I've tried multiple CPU types...
  7. Proxmox VE on Debian Jessie with zfs - Hetzner

    If you manage to run a Debian live CD image using any available or working method, you can just use the usual partitioning tools and debootstrap to install a base system, reboot into that and install PVE on top of it... I've done that several times with success. The qemu way looks excessively...
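
    A rough sketch of that route, assuming the target disk is already partitioned and its root filesystem mounted at /mnt (suite and mirror are illustrative; this thread is about Jessie):

        # install a minimal Debian base system into the mounted target
        debootstrap jessie /mnt http://deb.debian.org/debian
        # chroot in to configure networking, fstab and the bootloader
        chroot /mnt /bin/bash
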
  8. Poor ZFS performance On Supermicro vs random ASUS board

    Disks of all types often lie about their physical layout and sector size. I'd suggest keeping your record size the same as your DB page size in any case. atime=off is a good suggestion, but I don't expect it to change the performance of this test or the amount written, since for each sync write...
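
    A sketch of both suggestions, assuming a hypothetical dataset tank/db holding the database files:

        # match the record size to the DB page size (16k for InnoDB data)
        zfs set recordsize=16k tank/db
        # skip access-time updates on every read
        zfs set atime=off tank/db
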
  9. Poor ZFS performance On Supermicro vs random ASUS board

    No synthetic test can perfectly emulate a real-life system; that should be obvious. And no, it's not based on luck but on theory and backing experience. Don't forget we're using VMs on separate datasets. Naturally we can't detach those from the rest of the load, but one should be prudent and run...
  10. Poor ZFS performance On Supermicro vs random ASUS board

    If the real load uses the same block/record/page size (eg. 16k test and InnoDB workload as discussed before) this should be an adequate indication of expected performance.
  11. Poor ZFS performance On Supermicro vs random ASUS board

    @guletz: Yes, you're correct, I forgot for a moment that this is the enforced maximum record size. However, I'd like to remind you of 2 things: first, changing the record size normalized the write amplification for docent; second, all tuning guides, backed by real-life experience, recommend...
  12. Poor ZFS performance On Supermicro vs random ASUS board

    That looks better. Also see this: https://github.com/zfsonlinux/zfs/issues/6555 The rule of thumb for DBs is to match the record size to the database engine's page size. For example, InnoDB uses 16k pages for data and 128k for logs, so it's generally recommended to use those as record sizes. But...
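
    A sketch of that recommendation, with hypothetical pool and dataset names:

        # InnoDB data files: 16k pages
        zfs create -o recordsize=16k tank/mysql-data
        # InnoDB logs: larger, mostly sequential writes
        zfs create -o recordsize=128k tank/mysql-logs
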
  13. Poor ZFS performance On Supermicro vs random ASUS board

    You ran the fio test with 4k writes. Try the test against datasets using 4k or 8k record sizes. The default is 128k, meaning any single write of up to 128k still writes out a full 128k record, hence the "write amplification".
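
    A sketch of such a run, with hypothetical test datasets; the fio parameters mirror the 4k sync-write case discussed here:

        # small-recordsize datasets for the comparison
        zfs create -o recordsize=4k tank/test4k
        zfs create -o recordsize=8k tank/test8k
        # 4k sync writes against one of them
        fio --name=synctest --filename=/tank/test4k/fio.dat \
            --rw=randwrite --bs=4k --size=1G --fsync=1 --ioengine=psync
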
  14. Poor ZFS performance On Supermicro vs random ASUS board

    Could you run new tests with saner record sizes like 4k or 8k (using new datasets for the test)?
  15. Poor ZFS performance On Supermicro vs random ASUS board

    What model is this mobo? I'll build a small system soon using an X10SRL-F and WD RE/Gold 1T disks and a pair of the older Intel DC3500s as SLOG. I'll report some performance data here if I don't forget...
  16. Poor ZFS performance On Supermicro vs random ASUS board

    The ZIL is not an external log device, but you can put it on a separate disk, hence its usual name (Separate intent LOG, i.e. SLOG). You're right on the other point, my mistake (ARC is for reads only). However, what you're testing is still mostly RAM write speed, since zfs will not block sync write...
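
    For reference, attaching a mirrored SLOG to an existing pool looks like this (pool and device names are placeholders):

        # add a mirrored separate intent log to the pool
        zpool add tank log mirror /dev/disk/by-id/ssdA-part1 /dev/disk/by-id/ssdB-part1
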
  17. Poor ZFS performance On Supermicro vs random ASUS board

    That is an effective way to test the ARC in memory. What exactly is the point? Anyway, just for comparison, on a Dell 720xd, 3 mirrors striped (~raid10 with 6 disks), using Samsung 850 Pro partitions in a mirror as SLOG, PVE 4.4, zfs 0.6.5.9:

        # zfs create rpool/temp1
        # pveperf /rpool/temp1/...
  18. Remote Unlocking LUKS Drive at Boot

    Did you install the dropbear package in D9.3?
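
    On Debian 9 the early-boot dropbear bits live in a separate package, so presumably the check is:

        # dropbear-initramfs provides the unlock shell inside the initramfs
        apt install dropbear-initramfs
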
  19. Remote Unlocking LUKS Drive at Boot

    I had no such issues; I was even surprised by how easy it was. What if you use eth0 in place of eno1 in static_ip?
  20. Remote Unlocking LUKS Drive at Boot

    Scratch your current config. Then put your IP config in /etc/initramfs-tools/conf.d/static_ip:

        IP=10.255.1.250::10.255.1.254:255.255.255.0::eno1:off

    Put your dropbear port config in /etc/dropbear-initramfs/config:

        DROPBEAR_OPTIONS="-p xxxx"

    Your authorized_keys for early dropbear go to...
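
    Whatever the remaining steps, these settings only take effect after the initramfs is rebuilt; a one-liner, assuming standard Debian tooling:

        # regenerate the initramfs so dropbear picks up the new config
        update-initramfs -u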
