CAUTION!
There is a typo in the message above.
The disk order should be:
# sgdisk <healthy bootable device> -R <new device>
# sgdisk -G <new device>
https://pve.proxmox.com/pve-docs/pve-admin-guide.html
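For example, with hypothetical device names, assuming /dev/sda is the healthy bootable disk and /dev/sdb is the replacement (adjust to your system):
# sgdisk /dev/sda -R /dev/sdb    (replicate the partition table onto the new disk)
# sgdisk -G /dev/sdb    (randomize all GUIDs on the new disk so they do not collide with the original)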
Oh, this is new to me, but the 6.2 release notes do say live migration is possible with ZFS replication:
...What a time to be operating a zfs-qemu cluster :)
I think there is nothing preventing you from doing that.
But to somewhat hijack the thread: does anyone know whether the replication logic uses a lock to prevent a second run from starting while the same task is still executing?
Oh, I have just noticed my rpool-hdd was created too small:
# zpool list
NAME        SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
rpool-hdd   496M   273M   223M         -    47%    55%  1.00x  ONLINE  -
Hello!
The limits I am setting in the GUI are not reflected inside the LXC container.
For example, if I create a mountpoint with a maximum size of 8GB, inside the container it shows up with only around 100MB of space.
# pveversion
pve-manager/5.3-11/d4907f84 (running kernel: 4.15.18-11-pve)
cat...
Hello!
Thank you for your answer.
Yes, I was talking about user quotas.
My findings about this so far: ZFS itself does support user quotas, though they are not compatible with the existing "quota-utils":
For example, zfs set userquota@myuser=1M rpool/ROOT/pve-1 is respected properly on the PVE host itself.
If I...
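For reference, the quota and per-user usage can be checked like this (using the same example user and dataset as above):
# zfs get userquota@myuser rpool/ROOT/pve-1
# zfs userspace rpool/ROOT/pve-1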
Hello!
I have recently moved a physical machine into a Proxmox LXC container. Apparently I hadn't done my research properly beforehand, because I'm stuck without quota support on the ZFS subvol-based LXC filesystem.
My questions and findings about this:
- On the GUI, the CT creation wizard, or...
I had great success with:
https://github.com/oetiker/znapzend (has local and remote destinations, configurable retention per target, but it has no error reporting features)
The new kid on the block is
https://zrepl.github.io/
It looks really promising because of the remote pull feature, but I...
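For anyone curious, a minimal znapzend plan might look roughly like this (a sketch based on the znapzend README; the dataset, retention schedules, and backuphost are placeholders):
# znapzendzetup create --recursive SRC '7d=>1h,30d=>1d' rpool/data DST:offsite '30d=>1d,1y=>1w' backuphost:backup/data
This would keep hourly snapshots for 7 days and daily ones for 30 days locally, plus daily and weekly ones on the remote.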
I might not fully understand your problem, but have you tried adding these lines to your /etc/pve/lxc/CONTAINERID.conf?
lxc.mount.entry: /dev/net/tun dev/net/tun none bind,create=file
lxc.cgroup.devices.allow: c 10:200 rwm
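(The first line bind-mounts the host's /dev/net/tun into the container; the second allows the container to access the character device with major/minor number 10:200, which is the TUN device.)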
I have used bonnie++ before with success.
If possible, I would establish a baseline in the current setup (benchmark of choice, or better yet, performance statistics from the database application itself), then move to ZFS; if it is too slow, you could switch to HW RAID 10 with LVM. Of course, for...
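As an illustration, a basic bonnie++ run could look like this (a sketch; /mnt/test is a placeholder for a directory on the storage under test, and the size should be roughly twice your RAM):
# bonnie++ -d /mnt/test -s 16g -u root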
Hello!
Backups can only be done to configured storage.
Starting with Proxmox 5.2, you can add a samba/CIFS storage.
If you have a Windows computer, share a folder on it, then add that share as a storage that can be used as a backup target:
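For example, from the CLI it could look something like this (a sketch; the storage name, server address, share name, user, and password are placeholders):
# pvesm add cifs backup-cifs --server 192.168.1.10 --share backups --username backupuser --password 'secret' --content backup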
Hello!
Choose a volume manager/filesystem based on your needs (features, performance).
Feature-wise: I think nothing on Proxmox compares to ZFS.
Performance-wise: do your own benchmarks, but HW RAID 10 with LVM-thin will probably be faster than anything else.
I do not mean to hijack this thread, but no solution has been posted yet, and I seem to be suffering from the same problem.
The system I am using is a Dell PowerEdge R530 server.
After rebooting for a kernel upgrade it took around 1-2 minutes to reach the grub> prompt.
There was no error...
Hello,
This problem reappears for me with an up-to-date PVE and debian 9.2.
Is there a way to fix this besides editing /usr/share/perl5/PVE/LXC/Setup/Debian.pm?
Thank you