Here's an idea: would it be possible to add an option to skip the lxc freeze when taking the snapshot? With a snapshot mechanism that is capable enough, like ZFS or LVM-thin, it's enough that a consistent snapshot is taken and then backed up. Taking the snapshot manually and backing it up works without problems...
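Just to illustrate, the manual workflow I have in mind is roughly this (the dataset and file names are only examples for a ZFS-backed container volume):
# zfs snapshot rpool/data/subvol-101-disk-1@manual-vzdump
# zfs send rpool/data/subvol-101-disk-1@manual-vzdump | gzip > /backup/ct101.zfs.gz
# zfs destroy rpool/data/subvol-101-disk-1@manual-vzdump
The container keeps running the whole time and the snapshot itself is atomic at the storage level.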
OK, I understand, but is there any info on what blocks the process, and why, when a FUSE mount is present? Given how confident your statement is, you must have investigated it more thoroughly.
Sorry for resurrecting an old thread, but I've just run into the same problem as OP on a new 5.1 upgrade (from 3.4). All containers are running fine, but the snapshot backup just freezes at the line "INFO: create storage snapshot 'vzdump'".
So you're saying it's not possible to back up containers with...
For now I think we can live with it (I also mentioned that this solution exists). However, could you please explain the purpose of adding an entry for 127.0.1.1 when other entries for the same host name are already present in the file? The flag file approach is somewhat faulty because it's...
In my case I have a bunch of LXC containers converted from OpenVZ where we set the hosts entries manually. They all contain the proper host name, yet Proxmox puts the following lines into /etc/hosts:
# --- BEGIN PVE ---
127.0.1.1 xxxhost
# --- END PVE ---
But all hosts files contain...
I've just installed a new system with the latest PVE 5.1 and all updates, on a Supermicro board with the Intel vulnerability patches in the BIOS and a Xeon E5-2620 v4. I'm seeing high load from idle KVM VMs, about 6-7%, while inside the guests CPU utilisation is 0-0.1%, all fine. I've tried multiple CPU types...
If you manage to boot a Debian live image by whatever method works for you, you can just use the usual partitioning tools and debootstrap to install a base system, reboot into it and install PVE on top... I've done that several times with success. The qemu way looks excessively...
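Roughly, the sequence looks like this (device name and release are just placeholders for a stretch-based PVE 5.x target):
# mount /dev/sda2 /mnt
# debootstrap stretch /mnt http://deb.debian.org/debian
# for fs in dev proc sys; do mount --rbind /$fs /mnt/$fs; done
# chroot /mnt
Inside the chroot set up /etc/fstab, hostname and networking, install a kernel and grub, then reboot into the new Debian, add the PVE repository as described in the wiki and apt install proxmox-ve.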
Disks of all types often lie about their physical layout and sector size. I'd suggest keeping your recordsize the same as your DB page size in any case. atime=off is a good suggestion, but I don't expect it to change the performance of this test or the amount written, since for each sync write...
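For reference, on an existing dataset that's just (the dataset name is made up, and keep in mind recordsize only applies to files written after the change):
# zfs set recordsize=16k rpool/data/mysql
# zfs set atime=off rpool/data/mysql
# zfs get recordsize,atime rpool/data/mysql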
No synthetic test can perfectly emulate a real-life system; that should be obvious. And no, it's not based on luck but on theory and experience. Don't forget we're running VMs on separate datasets. Naturally we can't isolate those from the rest of the load, but one should be prudent and run...
If the real load uses the same block/record/page size (e.g. a 16k test and an InnoDB workload, as discussed before), this should be an adequate indication of the expected performance.
@guletz: Yes, you're correct, I forgot for a moment that this is the enforced maximum record size. However, I'd like to remind you of two things: first, changing the record size normalized the write amplification for docent; second, all tuning guides, backed by real-life experience, recommend...
That looks better. Also see this: https://github.com/zfsonlinux/zfs/issues/6555
The rule of thumb for DBs is to match the recordsize to the database engine's page size. For example, InnoDB uses 16k pages for data and 128k for logs, so it's generally recommended to use those as the recordsize. But...
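As a sketch, with made-up names, that means separate datasets for the data files and the logs:
# zfs create -o recordsize=16k -o atime=off rpool/mysql-data
# zfs create -o recordsize=128k -o atime=off rpool/mysql-logs
Then point the InnoDB data and log directories at the respective mountpoints.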
You ran the test with a 4k block size in fio. Try the test on datasets using 4k or 8k recordsizes. The default is 128k, meaning that any single write of that size or smaller will still write out at least a full 128k record, hence the "write amplification".
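Something along these lines should make it visible (path and size are arbitrary, the point is that bs matches one dataset's recordsize and not the other's):
# fio --name=synctest --filename=/tank/test4k/fio.dat --rw=randwrite --bs=4k --size=1G --fsync=1 --ioengine=psync
Watching zpool iostat 1 during the run on the 4k vs. the 128k recordsize dataset should make the amplification obvious.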
What model is this mobo? I'll build a small system soon using an X10SRL-F and WD RE/Gold 1T disks and a pair of the older Intel DC3500s as SLOG. I'll report some performance data here if I don't forget...
The ZIL is not an external log device, but you can put it on a separate disk, hence its usual name SLOG (Separate intent LOG). You're right on the other count, my mistake (ARC is for reads only). However, what you're testing is still mostly RAM write speed, since ZFS will not block sync write...
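If you want to see what an actual SLOG does for sync writes, attaching one is a one-liner (device paths are placeholders; use a mirror for anything beyond testing):
# zpool add rpool log mirror /dev/disk/by-id/ata-ssd1-part4 /dev/disk/by-id/ata-ssd2-part4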
That is an effective way to test the ARC in memory. What exactly is the point?
Anyway, just for comparison, on a Dell 720xd, 3 mirrors striped (~raid10 with 6 disks), using Samsung 850 Pro partitions in a mirror as SLOG, PVE 4.4, zfs 0.6.5.9:
# zfs create rpool/temp1
# pveperf /rpool/temp1/...
Scratch your current config. Then put your IP config in /etc/initramfs-tools/conf.d/static_ip:
IP=10.255.1.250::10.255.1.254:255.255.255.0::eno1:off
Put your dropbear port config in /etc/dropbear-initramfs/config:
DROPBEAR_OPTIONS="-p xxxx"
Your authorized_keys for early dropbear go to...
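After changing any of these, regenerate the initramfs so the files actually end up in it, then reboot to test:
# update-initramfs -u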