You can still enable STP in /etc/network/interfaces if you want. All the SDN stuff, including L3 routing, is for situations where simple bridges are not an option (eg, when you do not control the L2 between your nodes, so no VLAN, or when your hoster doesn't provide QinQ etc.)
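For example, a bridge stanza with STP enabled would look something like this (interface names and addresses are placeholders, adapt them to your setup):

auto vmbr0
iface vmbr0 inet static
    address 192.168.1.10/24
    gateway 192.168.1.1
    bridge-ports eno1
    bridge-stp on
    bridge-fd 15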
I do not understand. You can't do it, but you've done it? That doesn't make a lot of sense. You can't do it through the GUI? Well, it's an advanced option most users will never need to understand or set. You can always set it in /etc/network/interfaces manually.
Well, you have saturated your LVM thin storage. You have to free some space (either extend the thin pool, or remove some of the volumes) before trying to recover data inside the VM.
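Something like this (pve/data is the thin pool name on a default PVE install, and the VM ID is a placeholder; adapt to your setup):

# check Data% and Meta% usage of the pool
lvs
# extend the thin pool by 10G (needs free extents in the VG)
lvextend -L +10G pve/data
# or free space by removing an unused volume
lvremove pve/vm-100-disk-0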
There's no one true and best way to do this. Depends on a lot of factors, including whether you have another temp storage available, how much downtime you can afford, the size of the disks, etc. You can for example do offline migrations with qm importdisk. It'll convert from vmdk to raw while copying...
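Something like this (the VM ID, source path and target storage are just examples):

# import the vmdk into VM 100's storage, converting to the storage's native format
qm importdisk 100 /mnt/temp/old-vm.vmdk local-lvm

The imported disk then shows up as an unused disk on the VM, and you attach it from the Hardware tab.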
One way to have a managed storage is to stack LVM on top of your iSCSI device. Note that such a setup would provide a block based storage, so not usable for files (no template/ISO/snippets, nor backup dumps). Also, while the storage can be shared across all your nodes, it doesn't support thin...
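Roughly like this (the device name and storage IDs are assumptions, check with lsblk which device your LUN appears as):

# create a PV and a VG on the iSCSI LUN
pvcreate /dev/sdb
vgcreate vg_iscsi /dev/sdb
# register it as a shared LVM storage on the cluster
pvesm add lvm my-iscsi-lvm --vgname vg_iscsi --shared 1 --content images,rootdir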
Well, depends on your needs. I'll personally always favor iSCSI+LVM over NFS (for performance and reliability reasons). Block based storages are usually preferred. But, with NFS, you can use qcow2, so thin provisioning and snapshots.
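If you do go the NFS route, adding it is a one liner (server address and export path are examples):

pvesm add nfs my-nfs --server 192.168.1.20 --export /tank/pve --content images,iso,backup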
You need to configure the multipath device in /etc/multipath.conf, and exclude the underlying devices from LVM so that LVM only uses the multipath "view". See https://pve.proxmox.com/wiki/ISCSI_Multipath
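A minimal sketch (the wwid below is a placeholder, get the real one with /lib/udev/scsi_id -g -u -d /dev/sdX):

# /etc/multipath.conf
defaults {
    user_friendly_names yes
}
blacklist {
    wwid .*
}
blacklist_exceptions {
    wwid "3600a0b80001234560000123456789abc"
}
multipaths {
    multipath {
        wwid "3600a0b80001234560000123456789abc"
        alias iscsi-lun0
    }
}

And in /etc/lvm/lvm.conf, a filter along these lines (be careful to keep your local boot disk accepted, sda here is an assumption):

devices {
    filter = [ "a|/dev/mapper/|", "a|/dev/sda.*|", "r|.*|" ]
}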
If it's an NFS export from a ZFS dataset, it's probably because, since ZFS on Linux 0.8.3, the no_root_squash option isn't set by default anymore and must be set explicitly, like
zfs set sharenfs='rw,no_root_squash' zpool/foo/bar
The issue is most likely the change in ZFS 0.8.3: default NFS exports do not set no_root_squash anymore. You must set it explicitly now, eg
zfs set sharenfs='rw,no_root_squash' zpool/dataset
This is due to padding when using a small volblocksize with raidz. See https://www.reddit.com/r/zfs/comments/b6dm4y/raidz2_used_size_double_logical_size_in_proxmox_53/?utm_source=amp&utm_medium=&utm_content=post_body for example. Try using a 16k volblocksize (or whatever the best value is for your raidz...
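On PVE you can set the default for newly created disks on the ZFS storage (storage name is an example; note that existing zvols keep their volblocksize, you'd have to move/recreate the disk to apply it):

pvesm set local-zfs --blocksize 16k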
You should indeed reboot to run the latest kernel anyway. But I recommend to do so after freeing some space and reinstalling the latest kernel, to be sure.
You can remove old kernels manually. List them with
dpkg -l | grep pve-kernel
Then
apt remove --purge pve-kernel-xxx
Keep only the last 3 versions. If the disk was full during the last update, reinstall the last one with
apt reinstall pve-kernel-xxx
Mmm, nothing obvious which would prevent ifupdown2 from handling it. If you still have them around, you should check the logs from the time the network wouldn't come up, to see what went wrong.
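For example (assuming the node uses systemd's networking unit, as on a standard PVE install):

# logs from the networking service for the current boot
journalctl -u networking -b
# ifupdown2 also keeps its own per-run logs under /var/log/ifupdown2/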