Hello everyone.
I didn't find the root cause, but I did find a workaround: I disabled IPv6 and the server fell back to IPv4.
I expected that re-enabling IPv6 would break things again, but it keeps working.
I can't explain the effect, but oh well. The important thing is that it works.
Thanks to those who...
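For anyone who wants to try the same workaround, here is a minimal sketch of how IPv6 can be disabled on a Debian-based Proxmox host via sysctl. The setting names are standard Linux kernel parameters; treat this as an illustration, not the poster's exact steps:

```shell
# Disable IPv6 at runtime (reverts on reboot):
sysctl -w net.ipv6.conf.all.disable_ipv6=1
sysctl -w net.ipv6.conf.default.disable_ipv6=1

# To persist across reboots, add the same keys to /etc/sysctl.conf:
#   net.ipv6.conf.all.disable_ipv6 = 1
#   net.ipv6.conf.default.disable_ipv6 = 1
# then reload with:
sysctl -p

# Re-enable IPv6 later by setting the same values back to 0.
```

This only disables the protocol; it does not touch the interface configuration in /etc/network/interfaces.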
Hello everyone, after hours of fruitless research I'm turning to the community :).
I have three Hetzner servers running Proxmox 8 (latest kernel), on which I have an IPv4 configuration problem.
When I set up the servers, I had only provisioned IPv6. No problem there, and I set up the three servers...
Hi, same thing for me: latest version of PBS (3.2.2) and the same wrong datastore size reported by PBS.
With df -h:
Size  Used  Avail  Use%
5.0T  953G  4.1T   19%
Under PBS :
With stat -f:
stat -f .
File: "."
ID: 0 Namelen: 255 Type: fuseblk
Block size: 131072 Fundamental block size: 512...
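Note the output above: the filesystem is Type: fuseblk with a 131072-byte block size but a 512-byte fundamental block size. A tool that multiplies the block counts by the former instead of the latter inflates every figure by a factor of 256. As a sketch, the real sizes can be recomputed with GNU stat's raw numbers (using "." as a placeholder; point it at the datastore mount instead):

```shell
# Recompute filesystem size from stat -f's raw numbers.
# %b = total blocks, %a = blocks available, %S = fundamental block size.
read -r blocks avail frsize <<<"$(stat -f --format='%b %a %S' .)"
echo "total: $(( blocks * frsize )) bytes"
echo "avail: $(( avail * frsize )) bytes"
```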
Hi @ggoller, and thanks for the feedback.
Do you think there is something to do after the update to apply the effects of the patch?
I ask this because I have just updated the pbs and unfortunately, after restarting the server, I still see the same information concerning the storage (see my...
Hello, and thanks for the follow-up! Excellent news :D I can't wait to apply it, and I'll be sure to come back with feedback on how well it works ;).
Thank you for taking the time to add all these details.
Here is the result of df -ah:
Filesystem  Size  Used  Avail  Use%  Mounted on
datastore:  990G  633G  358G   64%   /mnt/datastore
Here we can see the values I expect.
Thank you for the feedback.
Of course, here is the result of the command:
backup@pbs:/mnt/datastore# stat -f .
File: "."
ID: 0 Namelen: 255 Type: fuseblk
Block size: 131072 Fundamental block size: 512
Blocks: Total: 2074776029 Free: 748854998 Available: 748854998
Inodes: Total...
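Assuming this is the same datastore as the df -ah output quoted elsewhere in the thread (990G), the stat numbers line up once multiplied by the 512-byte fundamental block size rather than the 131072-byte block size:

```shell
# 2074776029 total blocks x 512 B fundamental block size, converted to GiB:
echo $(( 2074776029 * 512 / 1024 / 1024 / 1024 ))
```

This prints 989, i.e. the ~990G that df reports; multiplying by 131072 instead would overstate the size by a factor of 256.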
Hello everyone!
To give some context: I use a Hetzner storage box to store the backups of several PVE nodes, and it works very well.
Previously I mounted the storage box over CIFS, but occasional disconnections generated numerous entries in kern.log and I found myself...
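For context, a Hetzner storage box is typically mounted over CIFS with an fstab entry along these lines. The account name and paths below are placeholders, not taken from this thread:

```shell
# /etc/fstab sketch for a Hetzner storage box over CIFS (illustrative).
# uXXXXX is a placeholder account; the credentials file holds username=/password= lines.
//uXXXXX.your-storagebox.de/backup  /mnt/storagebox  cifs  credentials=/etc/storagebox.cred,_netdev,iocharset=utf8  0  0
```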
Hi fiona, and... yes, you are absolutely right: the main directory belonged to root! After changing its ownership to the "backup" user, it works :)
Thank you for your invaluable help and I hope this topic will be useful to others.
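For anyone hitting the same symptom: PBS runs as the "backup" user, so the datastore root must be owned by it. A sketch, using the path that appears elsewhere in this thread; the scratch-directory part below only demonstrates the ownership check with the current user:

```shell
# On the real PBS host, hand the datastore over to the "backup" user:
#     chown -R backup:backup /mnt/datastore
#     stat -c '%U:%G' /mnt/datastore   # should then print backup:backup
# Demonstrated here on a scratch directory with the current user:
dir=$(mktemp -d)
chown -R "$(id -un):$(id -gn)" "$dir"
stat -c '%U' "$dir"
rm -rf "$dir"
```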
Hi Fiona and thank you for your reply!
So the backup server runs in an LXC container, and the "datastore" is a NAS share mounted over NFS.
Here is the content of the /etc/fstab file
nas:Backups /mnt/nas nfs defaults 0 0
I made the deliberate choice to go through LXC to limit the consumption...
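One common way to get an NFS share into a PBS container, since an unprivileged LXC container cannot mount NFS by itself, is to mount it on the PVE host (via an fstab entry like the one quoted) and bind-mount it into the container. This is an assumption about the setup, not something confirmed in the thread, and the container ID is a placeholder:

```shell
# On the PVE host: bind-mount the host's NFS mountpoint into the container.
# "101" is a placeholder container ID.
pct set 101 -mp0 /mnt/nas,mp=/mnt/datastore
```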
Hi everyone, I'm asking for your help today because I can't find any solution.
I have a Proxmox Backup Server (version 3) that works correctly, installed in an LXC container on a Proxmox 7.4 host.
Everything works perfectly for the backup of data from the LXC containers of the host...
On my side, the only thing I noticed is that after switching back to a previous kernel (5.15.64-1-pve), the server seems stable again.
I also noticed that with the 5.15.83-1-pve kernel I had an error in the logs that does not appear with the 5.15.64-1-pve kernel.
The error was the following:
got...
Hi @topstarnec, and thanks for this suggestion.
I have indeed thought about it, but before doing so I wanted to get back to the stable situation I had before. It's a production server; I can't afford to experiment on it too much.
Hi all, I went back to the 5.15.64-1-pve kernel without changing any other settings, and for the moment the server seems stable again.
I'm keeping my fingers crossed for a few more days, but it looks like the 5.15.83-1-pve kernel is causing problems in my case.
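For reference, selecting the known-good kernel at every boot can be done with proxmox-boot-tool, assuming a version recent enough to support "kernel pin"; otherwise the kernel has to be picked from the boot menu by hand:

```shell
# Show installed kernels, then make 5.15.64-1-pve the default at boot:
proxmox-boot-tool kernel list
proxmox-boot-tool kernel pin 5.15.64-1-pve
```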
Hi all, same problem on one of my servers (Hetzner AX41-NVMe).
Hardware:
AMD Ryzen™ 5 3600
3 x 512 GB NVMe SSD (software RAID 5)
64 GB DDR4
Motherboard B450D4U-V1L
Software:
proxmox-ve: 7.3-1 (running kernel: 5.15.83-1-pve)
pve-manager: 7.3-4 (running version: 7.3-4/d69b70d4)
pve-kernel-5.15: 7.3-1...