Well... Proxmox doesn't check if the NAS is alive - it checks if mounted volumes are available for reading and writing.
Meaning it does this for all volumes, regardless of where they come from.
You need a script for checking if the NAS is alive.
However keep in mind that technically if the...
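A minimal sketch of such a check, assuming the NAS answers ping and exports a share that is mounted locally (the address and mount point below are placeholders):

#!/bin/bash
# Sketch: check that the NAS responds and that the mount is usable.
# 192.168.1.10 and /mnt/pve/nas are placeholders - adjust to your setup.
NAS_IP="192.168.1.10"
MOUNTPOINT="/mnt/pve/nas"

if ! ping -c 1 -W 2 "$NAS_IP" >/dev/null 2>&1; then
    echo "NAS $NAS_IP is not reachable" >&2
    exit 1
fi

# A reachable NAS does not guarantee a working mount, so test a write as well.
if ! touch "$MOUNTPOINT/.alive_check" 2>/dev/null; then
    echo "Mount $MOUNTPOINT is not writable" >&2
    exit 1
fi
rm -f "$MOUNTPOINT/.alive_check"
echo "NAS and mount are available"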
Yes - I did - there was a mismatch between the string reported by the server and the string processed by NUT.
Something with a leading 0 that was not accepted by NUT when starting.
When I removed that line from the config NUT started as expected.
At this time I'm not in a position to give you more details -...
I have 2 Proxmox servers - one for testing and one for production.
Both servers have a consumer-grade motherboard: an Asus TUF B550-Pro for the test server and the M edition for production.
I tried installing NUT on the test server and this works as expected.
I then tried the same on the...
Team,
I'm struggling with 2 (out of 4) Ubuntu VMs.
The VLANs in these (i.e. eth0.122 and eth0.123) are not working as expected.
The output of "ip link" looks fine to me - the same applies for "ip a".
However, pinging the default gateway on both VLANs fails.
Everything works fine via...
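For completeness, these are the kind of checks this involves (the gateway addresses below are placeholders):

# Confirm the VLAN sub-interfaces exist and carry the expected VLAN IDs
ip -d link show eth0.122
ip -d link show eth0.123

# Test the gateways and watch whether tagged traffic actually arrives
ping -c 3 -I eth0.122 192.168.122.1
ping -c 3 -I eth0.123 192.168.123.1
tcpdump -ni eth0 vlan 122 and icmp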
Not sure if we are on the same page here.
But pvestatd is about collecting stats and statuses, and it generates a lot of errors in syslog when the CIFS volume is not available. In my case this was caused by the daily shutdown/reboot cycle of the Synology NAS.
The pvesm commands are about...
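For example, a CIFS storage can be switched off and on again with pvesm like this (a sketch - "nas-cifs" is a placeholder for the storage name):

# Mark the CIFS storage as disabled so pvestatd stops probing it
pvesm set nas-cifs --disable 1

# Re-enable it once the NAS is back up
pvesm set nas-cifs --disable 0

# Check the result
pvesm status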
No problem.
I'm bringing the storage up via pvesm "shortly" after the CIFS server starts, and taking it down "shortly" before the server stops.
Where "shortly" means 30 minutes after starting and before stopping.
So that all tasks related to the CIFS server have enough time to finish.
Below the code for...
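In outline it is just two timed jobs that flip the storage availability with the pvesm commands shown earlier (a sketch, not the original listing - the storage name and times are placeholders):

# /etc/cron.d/cifs-storage (sketch)
# Enable the CIFS storage 30 minutes after the NAS has booted (e.g. 07:30)
30 7 * * * root /usr/sbin/pvesm set nas-cifs --disable 0
# Disable it 30 minutes before the NAS shuts down (e.g. 22:30)
30 22 * * * root /usr/sbin/pvesm set nas-cifs --disable 1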
I recently upgraded from Proxmox 7 to 8.
During that upgrade, systemd-resolved was removed.
I reinstalled it with the stub resolver enabled - the same way it was configured previously.
In fact - the original systemd-resolved config-file was not removed.
However the caching (i.e. stub-resolving)...
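For reference, the reinstall and the stub-resolver check amount to something like this (a sketch, not my exact commands):

apt install systemd-resolved
systemctl enable --now systemd-resolved

# The stub resolver should listen on 127.0.0.53 and /etc/resolv.conf
# should point at the stub file
resolvectl status
ls -l /etc/resolv.conf    # -> /run/systemd/resolve/stub-resolv.conf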
I managed to solve (work-around?) this with:
(1) - disabling the ifupdown-wait-online.service
(2) - installing ifupdown2 (which, as it turned out, removed the already installed ifupdown?)
Both were done within the LXC container.
It now works as expected.
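In command form, roughly (run inside the container):

# (1) stop and disable the service that was blocking the boot
systemctl disable --now ifupdown-wait-online.service

# (2) install ifupdown2 - this pulled out the existing ifupdown package
apt install ifupdown2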
Yes - the IPv6 field is set to static and all fields are left empty.
In addition - nothing on IPv6 in /etc/network/interfaces.
This applies to the host as well as the container.
Thank you.
When I run this on 2 different LXC containers (one with Debian and one with Ubuntu), the Ubuntu container needs 13 seconds for systemd-networkd-wait-online.service, while the Debian container needs 5 minutes for ifupdown-wait-online.service.
Any idea where these differences are coming from...
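The timings can be read out per container with something like:

# Overall boot time and the slowest units inside the container
systemd-analyze
systemd-analyze blame | head -n 10

# Show what the respective wait-online service is waiting for
systemctl status systemd-networkd-wait-online.service
systemctl status ifupdown-wait-online.service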
Team,
I'm trying to build an LXC container with a few network services like docker, pihole and chrony.
The build went as expected. However, after a reboot, there is a delay of a little over 5 minutes when starting these services.
Using the debug mode of pihole revealed the following message:
Jun 26...
Team,
I just finished an upgrade - replacing an Asus B450 with an Asus B550.
This included a CPU replacement: the Ryzen 1600 was replaced with a Ryzen 5700X.
After some config changes in the BIOS, the system boots as expected.
And after changing the network config (i.e. /etc/network/interfaces)...
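That change typically boils down to updating the NIC name in the bridge definition - a sketch with placeholder interface names and addresses:

# /etc/network/interfaces (sketch - enp4s0/enp5s0 are placeholder names)
auto lo
iface lo inet loopback

auto vmbr0
iface vmbr0 inet static
    address 192.168.1.2/24
    gateway 192.168.1.1
    # the old board exposed the NIC as enp4s0, the new one as enp5s0
    bridge-ports enp5s0
    bridge-stp off
    bridge-fd 0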
Team,
The file /var/log/syslog and the console are flooded with these messages:
vmbr0: received packet on eno1 with own address as source address (addr:08:60:6e:7d:18:fe, vlan:1010)
The current settings of vmbr0:
auto vmbr0
iface vmbr0 inet dhcp
bridge-ports eno1
bridge-stp...
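To trace where the offending frames enter, something like this can be used (sketch):

# Capture frames that claim to come from the host's own MAC
tcpdump -e -ni eno1 ether src 08:60:6e:7d:18:fe

# Compare against the bridge and NIC addresses and the forwarding table
ip link show eno1
ip link show vmbr0
bridge fdb show br vmbr0 | grep 08:60:6e:7d:18:fe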
It turned out to be a Samba issue.
Meaning that the config file /etc/samba/smb.conf was still based on the interface vmbr1 (i.e. vmbr1 and vmbr1.*).
After changing this to vmbr0 and vmbr0.* everything was working as expected - no more strange vlans and/or DNS requests.
:)
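In smb.conf terms the change amounts to something like this (a sketch - the exact interface list will differ):

# /etc/samba/smb.conf - excerpt (previously this listed vmbr1 and vmbr1.*)
[global]
    interfaces = lo vmbr0 vmbr0.*
    bind interfaces only = yes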