OK, thanks.
And actually I didn't paste my latest config, which was, per df:
/dev/zd16 76G 5.1G 67G 8% /rpool/data/subvol-202-disk-1
so it's just named that, but it's actually on a /dev/zd* device.
As for writing entries by hand: the GUI has quota=1 greyed out, and I...
None of the instructions for this work:
https://pve.proxmox.com/wiki/Linux_Container#_using_quotas_inside_containers
I cannot turn on quotas in the GUI even with the container stopped; it is greyed out. (I am using an ext4 fs on a zvol.)
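For anyone following along: writing the entry by hand would presumably mean appending quota=1 to the rootfs line in the container config, something like this (a sketch; the storage name and size here are taken from my df output above, yours will differ). Note the pct docs say quota isn't supported on ZFS subvolumes, which may be why the GUI greys it out even when the data is really on a zvol:

    # /etc/pve/lxc/202.conf (sketch)
    rootfs: local-zfs:subvol-202-disk-1,size=76G,quota=1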
Near the bottom of this...
This does not work.
"vm 202 - lxc.aa_profile is deprecated and was renamed to lxc.apparmor.profile"
Secondly, a naked remount command in an lxc.conf file?
"vm 202 - unable to parse config: mount -o remount,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0 /"
Doesn't seem correct.
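My best guess at what those two lines were supposed to be (unverified): the deprecated key just gets renamed, and the remount is a command meant to be run inside the container after boot, not pasted into the config file:

    # /etc/pve/lxc/202.conf - renamed key
    lxc.apparmor.profile: unconfined

    # run this inside the container after it boots, not in the config:
    mount -o remount,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0 /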
I...
Following hints from
https://unix.stackexchange.com/questions/450308/how-to-allow-specific-proxmox-lxc-containers-to-mount-nfs-shares-on-the-network
and elsewhere, I've updated
/etc/apparmor.d/lxc/lxc-default-with-mounting
to include simfs in the list, and then in the lxc container's...
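Concretely, the profile edit was along these lines (a sketch; the stock rules in the file may differ by version), followed by a profile reload and then pointing the container at the profile, which is the step I was getting to:

    # /etc/apparmor.d/lxc/lxc-default-with-mounting (sketch)
    profile lxc-container-default-with-mounting flags=(attach_disconnected,mediate_deleted) {
      #include <abstractions/lxc/container-base>
      mount fstype=ext*,
      mount fstype=xfs,
      mount fstype=btrfs,
      mount fstype=simfs,   # <- added
    }

    # reload the lxc profiles (or restart apparmor), then in the CT's config:
    apparmor_parser -r /etc/apparmor.d/lxc-containers
    # /etc/pve/lxc/<ctid>.conf:
    lxc.apparmor.profile: lxc-container-default-with-mounting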
I'm using an ASCII template I created (by installing a VM off the ISO, then tarring up what was in the filesystem...). I've had success using it on older PVE (5.0?), so either I missed this warning or it's new in 5.2; maybe it's ignorable.
extracting archive...
Same problem: PVE 5.1 won't accept a password for external VNC despite one being set. Is it the wrong username, then? (Is there a username?)
It works without a password, but that's risky to leave open.
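For reference, the external-VNC recipe as I understand it (the display number 77 and the password are placeholders; as far as I know plain VNC auth has no username, just a password). One gotcha: the QEMU VNC password doesn't persist across restarts, so it has to be re-set via the monitor after each VM start, which could be why a previously set password appears to be ignored:

    # /etc/pve/qemu-server/<vmid>.conf
    args: -vnc 0.0.0.0:77,password

    # after the VM starts, set the password through the monitor:
    qm monitor <vmid>
    qm> set_password vnc mysecret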
The 3.4 ISO I had on the remote KVM didn't seem to have a rescue option, but 5.1 does. For some reason, on this particular server it just hangs on a black screen and never boots (Supermicro X7DB8), so I had to resort to an Ubuntu live image, which did work.
Yeah, boot/grub is just not there for whatever reason. Bizarre.
This was a PVE 3.4 system with ZFS RAID 10 (with 12 drives now, I think; I've expanded it, and it was originally 4 at install time).
I booted an Ubuntu 16.04 desktop USB into the live session and installed the ZFS packages (apt-get install...
To save people time in case they hit this thread while searching for grub rescue> + ZFS problems like me: Ubuntu 16.04 desktop has a live boot ('Try Ubuntu') that will do ZFS, provided you can get the machine online to apt-get update && apt-get install zfsutils-linux.
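From that live session, the generic ZFS-root grub repair goes roughly like this (a sketch, not an exact transcript of what I did; rpool is the PVE default pool name and /dev/sda is a placeholder for your actual boot disk):

    sudo apt-get update && sudo apt-get install zfsutils-linux
    sudo zpool import -f -R /mnt rpool    # import with an altroot so / lands on /mnt
    sudo mount --rbind /dev  /mnt/dev
    sudo mount --rbind /proc /mnt/proc
    sudo mount --rbind /sys  /mnt/sys
    sudo chroot /mnt grub-install /dev/sda
    sudo chroot /mnt update-grub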
I read lots of threads about grub rescue, but short of mounting a live ISO to boot from (can't right now, the remote KVM is half busted), we're stuck at grub rescue>.
However, looking at (hd0) through (hd7) (the max number of drives presented to the BIOS by the JBOD controller), I can see...
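For anyone else stuck at that prompt, the probing commands are just these (a sketch; partition numbers are examples, and ls on a partition will report 'unknown filesystem' if grub can't read it):

    grub rescue> ls
    (hd0) (hd0,gpt1) (hd0,gpt2) ... (hd7)
    grub rescue> ls (hd0,gpt2)/   # see whether grub can read that partition
    grub rescue> set              # show the current prefix/root variables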
To be clear, it's the upgraded kernel from that sources list that works; it's not that the sources list has an old copy of apparmor + libraries to downgrade to.
You must add that line, then apt-get update && apt-get dist-upgrade && shutdown -r now to reboot into the new kernel.
Thanks.
Lol, and I just forgot about this and did it again to myself. Has anyone figured out a way to back out and restart?
Though the multicast querier is ON on the master/first node, so I'm confused about why it times out waiting for quorum.
Would having the querier enabled on both nodes cause the issue? It seemed to be on -...
That's OK, systemd is great:
systemd defaults to 8.8.8.8 for fallback DNS. Ridiculous. And that's among dozens of other massive issues with it.
https://twitter.com/jpmens/status/873878528844017664
And of course
https://ewontfix.com/14/
http://without-systemd.org/wiki/index.php/Arguments_against_systemd
Yes, you can power down the machine and export and reimport them too. (My method breaks the mirror briefly and re-attaches instead.) Neither method should interfere with the ability to (re)boot the system of course.
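The export/reimport route, done from a live environment since you can't export the root pool you're running from, is just this (a sketch):

    zpool export rpool
    zpool import -d /dev/disk/by-id rpool   # re-import, scanning by-id names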
BTW, Here's a linux live CD with ZFS support...
If you've created it by names instead of by IDs, can you not then break your mirror (detach sdb3, e.g.), then zpool attach rpool sda3 /dev/disk/by-id/ata-(ID for your disk)-part3, and then once resilvered do that for the first disk as well?
Does that not cause an issue with rebooting? (i.e., does grub...
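Spelled out, that detach/re-attach sequence would be (a sketch; sda3/sdb3 and the ata-DISKA/DISKB IDs are placeholders for your actual devices):

    zpool detach rpool sdb3
    zpool attach rpool sda3 /dev/disk/by-id/ata-DISKB_SERIAL-part3
    zpool status rpool   # wait for the resilver to finish
    zpool detach rpool sda3
    zpool attach rpool /dev/disk/by-id/ata-DISKB_SERIAL-part3 /dev/disk/by-id/ata-DISKA_SERIAL-part3

Since nothing actually moves on disk (only the device names ZFS records change), I wouldn't expect it to affect grub, though I haven't verified that.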
I figured I was going to reinstall both nodes anyway, so I started messing around with the /etc/pve/priv auth keys and known_hosts files, as well as removing corosync.conf on the node and re-creating it with pvecm. After a few reboots, manually pruning out authkeys, and using pvecm add -f to force the add, I got...
It might be related to no multicast traffic, which is required by corosync; check your switch. I have the same issue still, though I can see multicast UDP (IP addrs in 224.0.0.0/4).
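A quick way to verify multicast between nodes is omping, per the PVE docs; run it on all nodes at the same time (node names are placeholders):

    omping -c 10000 -i 0.001 -F -q node1 node2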