So the most likely thing is that the SAS card is not compatible with the driver (in this particular case, with the ULT3580-HH8 drive).
root@pbsPROVA:~# lspci | grep SAS
07:00.0 Serial Attached SCSI controller: PMC-Sierra Inc. PM8018 Adaptec SAS Adaptor ASA-70165H PCIe Gen3 x8 6 Gbps 16-lane 4x SFF-8644 (rev 06)...
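To check whether a kernel driver actually claimed the controller, lspci's `-k` option shows the binding (using the 07:00.0 address from the output above):

```
lspci -k -s 07:00.0
# Look for a "Kernel driver in use:" line; for PMC-Sierra PM80xx chips
# this is typically the pm80xx module. If the line is missing entirely,
# no driver claimed the device.
```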
I have a PBS server 2.4-1 that I'm trying to connect to a tape library.
This is why I purchased a SAS card, specifically the SAS Adaptor ASA-70165H.
The tape library is a TS4300.
It is already connected to another server running Veeam.
The tape library works normally with the Veeam system, the cables work, the...
Totally missed that page.
I used the "Routed Setup (Simple)" approach.
Note that multicast is not possible with this method
Can you explain to me what the downsides of this are?
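For reference, the routed setup boils down to giving each node an address and a static per-peer route out of the correct port, with no switch in between. A minimal sketch of an /etc/network/interfaces fragment for one node, assuming hypothetical interface names (ens19/ens20) and addresses (10.15.15.50-52); see the wiki page for the exact variant:

```
auto ens19
iface ens19 inet static
    address 10.15.15.50/32
    up ip route add 10.15.15.51/32 dev ens19   # direct link to node 2

auto ens20
iface ens20 inet static
    address 10.15.15.50/32
    up ip route add 10.15.15.52/32 dev ens20   # direct link to node 3
```

With four nodes a full mesh needs three such point-to-point links per node.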
We are thinking of buying a high-density server with four nodes.
I would like to create a Ceph cluster without having a switch.
The idea is to connect all the nodes directly to each other,
in this specific case with 25Gbit connections.
However, on a test system I can't get it to work.
The...
The server runs without problems.
All activities cause no problems.
I may have solved it: the problem seems to be that the InfluxDB server the metrics were sent to had crashed; once it was brought back up, everything seems to have returned to normal.
In my PBS installation (2.4-1) I have this situation:
I tried restarting but it didn't work.
How can I solve it?
The solution in this case:
https://forum.proxmox.com/threads/gap-in-the-graphs.135880/post-601516
Usually the solution is to set the CPU type to "host", or to create a custom virtual CPU with the avx and avx2 flags enabled (although this second approach has never worked for me). However, the processor you have doesn't support AVX instructions, so I don't think you can virtualize them.
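For completeness, setting the CPU type to "host" for a VM is a one-liner from the node's shell (the VMID 100 is just an example):

```
# pass the physical CPU's feature set through to the guest
qm set 100 --cpu host
```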
In the same spirit as this post, where can I check the files (in this case the disk images) present in the BTRFS filesystem?
In this case the total space reported by Proxmox is 6.02TB.
If I run the command btrfs filesystem usage /BTRFS I get 5.65TiB = 6.21TB.
Data,RAID1: Size:5.65TiB...
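Part of the discrepancy is just units: btrfs reports binary TiB (2^40 bytes) while figures in TB use decimal 10^12 bytes. A quick sanity check:

```python
# btrfs uses binary units (TiB = 2**40 bytes); "TB" is decimal (10**12 bytes)
TIB = 1024 ** 4   # bytes in one tebibyte
TB = 10 ** 12     # bytes in one terabyte

size_tib = 5.65
size_tb = size_tib * TIB / TB
print(f"{size_tib} TiB = {size_tb:.2f} TB")  # → 5.65 TiB = 6.21 TB
```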
I think I've found a better solution.
At the initramfs prompt, enter:
mount -o degraded /dev/sda3 /root -t btrfs
Where "/dev/sda3" is the healthy disk.
And press Ctrl-D.
For non-boot disks in the proxmox shell enter:
mount -o degraded /dev/nvme0n1 /BTRFS
Where "/dev/nvme0n1" is the healthy...
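After mounting degraded, the missing disk would normally be replaced rather than left out permanently; a sketch with hypothetical device names and device id (the devid of the missing disk is shown by btrfs filesystem show):

```
# replace missing devid 2 with the new disk /dev/nvme1n1
btrfs replace start -B 2 /dev/nvme1n1 /BTRFS
```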
I followed this guide to temporarily change the GRUB.
The change was to add
rootflags=degraded
to the end of the line starting with linux.
This made it possible to overcome the problem of boot disks.
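For illustration, after pressing "e" on the boot entry in the GRUB menu, the edited line would look roughly like this; the kernel version and root device here are placeholders:

```
linux /boot/vmlinuz-<version>-pve root=/dev/sda3 ro quiet rootflags=degraded
```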
EDIT:
There is something strange, or rather (in my opinion) wrong.
Reported BTRFS volume capacity...
I'm testing the behavior of BTRFS in case of disk failure.
As a first test, I disconnect one of the two disks in "RAID 1".
I get errors whether the disk is a boot disk or not, but the errors are different.
In case of boot disk I have this screen:
Otherwise I have:
How can I fix this?
Ask if you need...
Ok, I enabled the no-subscription repository with the WebGUI and ran these commands:
apt update
apt install pve-edk2-firmware
Then I disabled the no-subscription repository with the WebGUI again.
Thank you
I managed to connect proxmox to influxdb2 (and to create graphs in grafana), the question was not about that.
In the information passed along, a lot is missing, such as data from devices like GPUs, but also the various temperatures of the server or more...
Is it possible to change the monitoring configuration for InfluxDB in Proxmox?
Things like decreasing the measurement interval, adding devices like GPUs, etc...
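As far as I know, the metric-server settings live in /etc/pve/status.cfg (also editable under Datacenter → Metric Server); a sketch for an InfluxDB 2 target, with placeholder values:

```
influxdb: influx2
    server 192.168.1.10
    port 8086
    influxdbproto http
    organization proxmox
    bucket proxmox
    token <your-api-token>
```

Note this controls where the metrics go, not which metrics are collected.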
I had the same problem; it was caused by a cable used for Ceph and as a secondary interconnect between two nodes.
Once the connection was re-established, it was possible to access the WebGUI again.