Change the line address 192.168.1.254/24 to address 192.168.1.254 in /etc/network/interfaces for vmbr1
You can apply the new network setup without a reboot with systemctl restart networking.service, or just reboot
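For reference, a minimal sketch of how the vmbr1 stanza could then look; the netmask line and bridge-ports none are assumptions, since the rest of your interfaces file isn't shown:

auto vmbr1
iface vmbr1 inet static
        address 192.168.1.254
        netmask 255.255.255.0
        bridge-ports none
        bridge-stp off
        bridge-fd 0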
The logs are from the ntpd service, which is not relevant here
You need vmbr0 in your interfaces file so that you can connect to the internet
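Something along these lines, as a sketch only (the addresses, the gateway, and the physical NIC name eno1 are assumptions; substitute the values from your own network):

auto vmbr0
iface vmbr0 inet static
        address 192.168.0.10
        netmask 255.255.255.0
        gateway 192.168.0.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0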
Your vmbr1 interface should be up and running without the need for any manual operations. How did you check that the bridge wasn't up?
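For example, you can check the bridge state with the standard iproute2 tools:

ip -br link show vmbr1
ip -br addr show vmbr1

A state of UP (or UNKNOWN for a bridge without attached ports) together with the expected address means the bridge itself came up.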
Your VM is not...
Your bridge is up and running now. The UNKNOWN state is because you probably did not attach an actual network interface to the bridge. You should also be able to ping 192.168.1.1
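To confirm, bridge link show lists which ports are attached to each bridge, and a ping verifies connectivity on that segment:

bridge link show
ping -c 3 192.168.1.1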
I have ZFS installations on HDDs (WD Red) and SSDs (Samsung Pro), as mirrors and raidz2, all of them consumer grade. In my opinion, if you don't have real production needs, those SATA drives will be enough
We have an increased Raw_Read_Error_Rate value between two readings:

Raw_Read_Error_Rate 0x000f 076 064 006 Pre-fail Always - 35880624
Raw_Read_Error_Rate 0x000f 076 064 006 Pre-fail Always - 35887408

for the following drive:

root@De-Vre-Prox13:~# smartctl -a /dev/sdc
Model Family: Seagate...
From the first link (the cpufreq utility output), the governor is set to ondemand, which means the kernel decides which power state to put the CPU in. We can leave it on ondemand, since most of the time the CPU is working at full speed.
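If you want to verify the governor without any extra tools, the kernel exposes it through sysfs (one line per core):

cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor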
I noticed that you have run fio in /root/ and /tmp and I...
Could you please try to run those commands concurrently (you can open more terminals with SSH to the host, or you can use a terminal multiplexer), as we want to check those metrics while the fio tool is running. I'm sure that you haven't made any misconfiguration. I think that maybe the hardware...
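As a sketch of what I mean; the fio job line is only an example, use whichever invocation you ran before:

# terminal 1: CPU/memory statistics, one sample per second
vmstat 1
# terminal 2: per-device pool I/O, one sample per second
zpool iostat -v 1
# terminal 3: the benchmark itself (example job, adjust to your earlier run)
fio --name=seqwrite --rw=write --bs=1M --size=1G --direct=1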
The fio metrics are pretty poor. Also, something does not seem right with the vmstat and zpool iostat output. Just for clarification: were those two commands also running while you were running the fio tool?
Have you checked all of the output of the backup job? At some point it gives you the following warning:
WARNING: Sum of all thin volume sizes (176.00 GiB) exceeds the size of thin pool pve/data and the amount of free space in volume group (76.25 GiB).

It seems that you have overprovisioned the thin...
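To see how full the thin pool and the volume group currently are, the standard LVM reporting commands should work (pve is the default Proxmox volume group; adjust if yours differs):

# the Data% column shows how full the thin pool is
lvs pve
# VFree shows the remaining free space in the volume group
vgs pve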
It seems to me that the last line of the conf file is wrong. Stop the container, update the conf to
mp0: /mnt/pve/winshare,mp=/mnt/freenas/
and start the container again
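For completeness, the sequence with the Proxmox CLI would be something like the following; 100 is a placeholder container ID, use your container's actual ID:

pct stop 100
# the container config lives under /etc/pve/lxc/<ID>.conf
nano /etc/pve/lxc/100.conf
pct start 100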