Use the PVE Firewall and separate VLANs/networks for physical hosts and VMs (either helps on its own, but preferably use both).
https://pve.proxmox.com/pve-docs/chapter-pve-firewall.html
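For the network separation, a VLAN-aware bridge is the usual approach: the host's management IP lives on one tag and guests get others. A minimal sketch of /etc/network/interfaces (interface name, addresses, and VLAN IDs are just examples, adjust to your network):

auto vmbr0
iface vmbr0 inet manual
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094

# management interface for the host itself, on its own VLAN
auto vmbr0.10
iface vmbr0.10 inet static
        address 192.168.10.2/24
        gateway 192.168.10.1

VMs then get their own tag via the VLAN Tag field on their virtual NIC, so guest traffic never shares a segment with host management.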
Question: how are you enabling the samba-alpine feature in an unprivileged container? I'm unfamiliar with that feature module, but from the Proxmox docs it looks like a privileged container is required.
I am running my Samba server without this feature enabled, without any issues in an unprivileged container...
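For reference, the config is nothing special; something along these lines, where the container ID, dataset, and paths are placeholders rather than my actual setup:

# /etc/pve/lxc/101.conf (illustrative only)
arch: amd64
ostype: alpine
unprivileged: 1
# note: no 'features:' line at all -- Samba itself doesn't need one
mp0: /tank/share,mp=/srv/share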
I just ran a test on my dev cluster (running the latest non-commercial build of Proxmox) and my LXC recursive bind works just fine.
I created a new set of recursive zfs datasets like so:
root@S1-NAS01:~# zfs list
NAME USED AVAIL REFER...
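The bind itself is a raw lxc entry rather than a normal PVE mount point; roughly like this, with placeholder pool and container paths (the rbind option is what pulls the nested dataset mounts into the container):

# added to /etc/pve/lxc/<ctid>.conf
lxc.mount.entry: /tank/shares mnt/shares none rbind,create=dir 0 0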
Hi @EpicBeagle, I'm actually still running a 6.x version on the host which is supporting my recursive bind, so I can't speak to the regression... in fact I have to plan my upgrade process even more carefully now, thanks for the heads-up! Your lxc config line is almost the same as my working...
I have deployed a VXLAN setup on my homelab cluster, and I get connectivity between containers on different hosts as long as the MTU on the VXLAN zone is greater than or equal to 1280 (the minimum MTU for IPv6). My intended final state is one where the VXLAN networking is encapsulated...
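For anyone sizing this: VXLAN adds roughly 50 bytes of encapsulation overhead per packet, so the zone MTU has to fit inside the underlay MTU (1450 on a standard 1500-byte network). The SDN zone generates something equivalent to this plain iproute2 setup, where the VNI, addresses, and interface name are made up:

ip link add vxlan100 type vxlan id 100 dstport 4789 local 10.0.0.1 remote 10.0.0.2 dev eno1
ip link set vxlan100 mtu 1450
ip link set vxlan100 up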
For any future users: I did in fact SSH from my off-site server to my primary backup server, copied the /vm and /ct folders in the datastore to my new server, set up a more aggressive prune rule, and then started replication.
It starts transferring only the newest images, and once you have a few...
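The copy itself was nothing fancy; something along these lines, where the hostnames and datastore path are placeholders (PBS keeps the vm/ and ct/ index directories alongside .chunks in the datastore root):

rsync -a root@backup-primary:/mnt/datastore/main/vm/ /mnt/datastore/main/vm/
rsync -a root@backup-primary:/mnt/datastore/main/ct/ /mnt/datastore/main/ct/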
Perhaps I could fool it by copying some of these directories? Even if the verifications fail and I delete them afterwards, would that serve as a starting point for a given VM?
Oh! That's excellent news - but how would I get the initial sync (i.e. copy the latest set of backups) to the datastore, if a full sync will fill the destination drives? @dietmar
One method I was considering (though it means I lose some de-duplication) is to have a 'Target' datastore and a 'Deep Archive' datastore at the primary site.
PVEs back up to Target, which keeps the last 2-5 backups and then runs a local sync to Deep Archive. The offsite then runs a daily sync job to pull from...
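In CLI terms the moving parts would be roughly the following; the job IDs, remote names, and schedules are all made up for illustration, and the local Target-to-Deep-Archive sync assumes a remote entry pointing back at the same host:

# on the primary: pull Target into Deep Archive daily
proxmox-backup-manager sync-job create target-to-archive --store deep-archive --remote local --remote-store target --schedule daily

# on the offsite box: pull the small rolling window from the colo
proxmox-backup-manager sync-job create pull-from-colo --store offsite --remote colo --remote-store target --schedule daily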
Our main backup server (the colo server) has 20+ TB of available storage, while our offsite (a public bare-metal cloud) only has 12 TB.
Since I can't easily get additional storage at the offsite location, and I only need to keep a smaller set of last-resort backups offsite (say, the 5 most recent...
a) You should be able to use the 'Migrate' button. I believe there is also a Bulk Migrate option available. Generally you will want to set up a replication schedule for each VM if you are using ZFS local storage, or use shared storage like NFS or Ceph to allow faster live migrations. We do it all...
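For the ZFS replication part, the CLI equivalent is something like this (VM ID, target node, and schedule are examples):

# replicate VM 100 to node pve2 every 15 minutes, so a later migration
# only has to send the most recent delta
pvesr create-local-job 100-0 pve2 --schedule '*/15'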
Update: I've since replaced my Fileserver VMs with containers, which generally have a much happier time with mounting nested datasets.
The VM<->9P mount performance I was getting just didn't meet my needs, unfortunately.
The guide referenced by Ricardo worked really well; here is the LXC...
Thanks to kind assistance over on the STH forum, I was able to get these drives out of Diagnostic mode.
Once you have dm-cli, you can use these commands to identify the device and bring it out of diagnostic mode. Note that the status change will not take effect until you shut down the system.
dm-cli...
@wolfgang I had the same issue happen on two other SSDs in the same host, and it appears that the devices are not being shut down correctly. I captured the following photograph as the server was shutting down:
The devices also appear to be in diagnostic mode, as I connected them to a Windows...