You probably don't need to enable VLAN membership on the OPNsense. VLAN tagging is already done on the Linux Bridge in PVE, so the packets that leave the node via enp34s0 already carry the VLAN tag. This should already give you isolated communication between the VMs.
The VLAN...
To debug issues with your services, you can typically look at the system journal with journalctl -b 0 and check the status of your systemd services with systemctl status <service>, assuming the services you are trying to run are started via systemd.
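For example, to look at the journal entries of a single unit since the last boot (myservice is just a placeholder for your unit name):

journalctl -b 0 -u myservice
systemctl status myservice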
If you only create files from inside the container, this is indeed a feasible solution!
The problem with chown is that the issue will come back for any new file that is created outside the container.
At the time of the backup, do you see any indication in the journal as to why the backup was canceled?
Please upload the journal.log to find out more.
journalctl --since "2023-11-10" --until "2023-11-11" > journal.log
If you have an unprivileged container, the UIDs and GIDs inside the container are isolated from the outside IDs by user namespacing.
You can set up a user namespace mapping as described here:
https://pve.proxmox.com/wiki/Unprivileged_LXC_containers
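As a rough sketch, mapping user/group ID 1005 inside the container to 1005 on the host would look roughly like this (1005 is only an example ID, <CTID> is your container ID):

In /etc/pve/lxc/<CTID>.conf:

lxc.idmap: u 0 100000 1005
lxc.idmap: g 0 100000 1005
lxc.idmap: u 1005 1005 1
lxc.idmap: g 1005 1005 1
lxc.idmap: u 1006 101006 64530
lxc.idmap: g 1006 101006 64530

In /etc/subuid and /etc/subgid:

root:1005:1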
You can assign the default gateway to the NIC connected to the internet. If the other NIC has an IP in the local network range, your system will automatically select the local NIC when accessing other systems within that network.
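A minimal sketch in /etc/network/interfaces (interface names and addresses are placeholders for your actual setup):

auto eno1
iface eno1 inet static
        address 203.0.113.10/24
        gateway 203.0.113.1

auto eno2
iface eno2 inet static
        address 192.168.1.10/24

Only the internet-facing NIC gets the gateway entry; the directly connected route on eno2 takes care of the local network.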
You understood correctly.
For security reasons, the User IDs inside the container are completely separated from User IDs outside the container.
The same problem exists with Docker, but many Docker setups don't use the user namespace feature.
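For example, with the default mapping, a file owned by UID 1000 inside an unprivileged container shows up as UID 101000 on the host, because the container's IDs are shifted by an offset of 100000.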
You mentioned that you're mounting Google Drive from within the container, but I don't see it in your mount points here.
Using a FUSE mount inside a container is not recommended [1].
Disable the onboot option on the CT.
Try to revert / remove the CT after a reboot so that the mountpoints are not...
Can you rollback the container when the container is stopped?
Please post the output of the following from within the container to see what mounts you have enabled
mount
Are you running Docker inside the unprivileged container, and are there any Docker containers running?
Please also post the output of the failing task log. You can click on the task log in the GUI and download the log.
Also, your container is in the locked state. The error would be different...
It looks like you only have 16 GB of free disk space in the LVM volume group.
You are using lvmthin, which allows you to over-provision the VG.
How large is the disk of VM 100, and how full is it?
If you don't have discard enabled on the VM disk, the underlying disk will fill up, even though you...
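To check how full the thin pool and the VG actually are, you can run the following on the host:

lvs
vgs

If discard is enabled on the VM disk, running fstrim -av inside the guest should return the freed blocks to the thin pool.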
Does your hosting provider support using a bridge? From the provider's perspective, you are sending packets from a single interface with multiple MAC addresses, which is often flagged as suspicious and may be filtered or blocked.
Please confirm with your provider that it supports this...
You are configuring a bridge behind a bridge, which will not work.
Please refer to the PVE Documentation
https://pve.proxmox.com/pve-docs/chapter-sysadmin.html#_choosing_a_network_configuration
https://pve.proxmox.com/wiki/Network_Configuration
You are using an unprivileged container which utilizes user namespacing.
You can map outside UIDs to inside UIDs; how to set this up in PVE is described in the PVE wiki: https://pve.proxmox.com/wiki/Unprivileged_LXC_containers#Using_local_directory_bind_mount_points
Sorry, my question was not entirely clear. Which storage technology is used for the virtual disk that you enlarged?
How was the VM disk enlarged? Was the filesystem enlarged afterwards as well?
Please post the output of:
On the host:
tail -n +1 /etc/pve/storage.cfg...
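Inside the VM, comparing the block device size with the filesystem size usually shows whether the filesystem was grown as well:

lsblk
df -h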
In your setup, vmbr1 is configured as a NAT behind vmbr0. With this setup, you will not be able to reach the VMs via the public IP.
If you have a separate physical NIC on your host (something like enp6s0), you can assign vmbr1 to that second NIC:
Set bridge-ports to the physical interface and remove the iptables rules.
If you...
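As a rough sketch, a bridged vmbr1 on the second NIC could look like this in /etc/network/interfaces (enp6s0 is taken from above; whether the host itself needs an address on this bridge depends on your setup):

auto vmbr1
iface vmbr1 inet manual
        bridge-ports enp6s0
        bridge-stp off
        bridge-fd 0

The VMs attached to vmbr1 then configure their public IPs directly.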
Please post your VM configuration: tail -n +1 /etc/pve/qemu-server/<VMID>.conf
Regardless of what Windows is showing, do you experience any significant performance degradation inside the VM?
It looks like your zpool (front) is full.
To get a better overview of your configuration, please post your VM configuration and the pool/filesystem status:
zpool status
zfs list
tail -n +1 /etc/pve/storage.cfg /etc/pve/qemu-server/105.conf
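If the pool really is full, removing old snapshots or unused volumes is usually the quickest way to free up space; zfs list -t snapshot shows which snapshots exist.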