Get a backup first, then see if there are any available BIOS/firmware updates. Even if there are not, what you posted should not bother you too much.
With the large number of 3rd- and 4th-tier hardware platforms out there, backed by generic BMC software -...
In the future, if you do decide to mount a device that might potentially disappear - you should use one of the options discussed here:
https://unix.stackexchange.com/questions/53456/what-is-the-difference-between-nobootwait-and-nofail-in-fstab...
The USB-related line in your /etc/fstab is the 6th from the top, counting the empty line that precedes it.
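For reference, here is a sketch of what such an entry could look like with the `nofail` option discussed in the linked thread (the UUID and mount point are placeholders):

```text
# /etc/fstab - example entry for a removable USB disk (placeholder UUID/path)
# nofail: boot continues even if the device is absent
# x-systemd.device-timeout=10s: don't wait the default 90s for it to appear
UUID=1234-ABCD  /mnt/usb  ext4  defaults,nofail,x-systemd.device-timeout=10s  0  2
```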
Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
Hi @CptnBlues63 ,
Keep in mind that this is not PVE specific, but rather generic Linux administration. Any publicly available resource can guide you through this generically, given a sufficiently detailed prompt.
At a high level, either remove the...
It seems like you have a USB disk in your fstab/systemd, and that disk has either failed or is not available any more. This blocks the normal boot.
Are you not able to enter the root password to access the rescue shell?
You can follow this...
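If you can get into the rescue shell, a typical sequence looks like this (a sketch; adjust to your setup):

```shell
# In the rescue shell the root filesystem is often mounted read-only:
mount -o remount,rw /
# Comment out the offending USB entry in /etc/fstab (nano, vi, etc.):
nano /etc/fstab
# Then continue the normal boot:
systemctl default   # or simply: reboot
```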
Hi @RodolfoRibeiro , thank you for clarifying. This matches my initial understanding of your situation. SAS and iSCSI are transport and connectivity protocols. While the article I suggested uses iSCSI as an example, once you are beyond basic...
CloudInit is an industry-standard way to distribute templates that are compatible with all major cloud providers. While PVE is not a Cloud Infrastructure per se, it is a hypervisor that many cloud providers use. PVE has built-in support for...
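As a rough sketch, attaching a CloudInit drive to a VM looks like this (the VM ID, storage name, and credentials below are placeholders):

```shell
# Attach a CloudInit drive to VM 9000 on storage "local-lvm" (placeholders)
qm set 9000 --ide2 local-lvm:cloudinit
# Set initial user, SSH key, and network via CloudInit
qm set 9000 --ciuser admin --sshkeys ~/.ssh/id_rsa.pub
qm set 9000 --ipconfig0 ip=dhcp
```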
According to Veeam documentation https://helpcenter.veeam.com/docs/vbr/userguide/pve_system_requirements.html?ver=13 it is compatible with 9.1
Have you tried a different browser? A cloud VM with various browsers? Have you tried getting a new server/VM on the same network and accessing the BMC via CLI (if that's possible with OVH)?
If all else fails, reach out to OVH support - a client error...
Hi @tka222 ,
Given that your storage is FC based, the migration (presuming you mean live migration of a VM between PVE hosts) does not carry a lot of data.
Your best option is to create an LACP bond that provides a) redundancy for all traffic...
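A minimal /etc/network/interfaces sketch of such a bond (interface names and addresses are assumptions; your switch must be configured for LACP on those ports):

```text
auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2
    bond-miimon 100
    bond-mode 802.3ad
    bond-xmit-hash-policy layer2+3

auto vmbr0
iface vmbr0 inet static
    address 192.0.2.10/24
    gateway 192.0.2.1
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
```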
Hi @RodolfoRibeiro , welcome to the forum!
This topic is a common discussion point here on the forum. It is generally raised at least once a week, sometimes more.
To address your questions:
- Yes, many people run with this type of infrastructure...
In the PVE API here:
https://pve.proxmox.com/pve-docs/api-viewer/#/nodes/{node}/disks/lvmthin
and here:
https://pve.proxmox.com/pve-docs/api-viewer/#/nodes/{node}/disks/wipedisk
You can make a call to "Also wipe disks so they can be repurposed...
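From the CLI, the equivalent calls could look like this (node, pool, and disk names are placeholders; double-check them before running, as this destroys data):

```shell
# Remove the LVM-thin pool and also wipe the underlying disks (destructive!)
pvesh delete /nodes/pve1/disks/lvmthin/VRTX-SSD --cleanup-disks 1
# Or wipe a specific disk directly
pvesh put /nodes/pve1/disks/wipedisk --disk /dev/sdb
```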
Removing an LVM slice (if VRTX-SSD is LVM-based storage) is not enough to erase the data on the underlying disk. This has been discussed in the forum a few times, although I can't give you a link at this moment.
You can check by running "lsblk"...
If the pool ends at 249, then 240 is within the range. Pick an IP higher than 249.
Thank you for the update @unsichtbarre . You can mark the thread as Solved, to keep the forum tidy, by editing the first message and selecting an appropriate subject prefix.
Cheers
The file/location is not checked at the time of setting the option. It is possible that the config does not exist at the time of VM creation. One could dynamically generate the config on VM start. You will be notified at the time of VM start or...
You are the only person with a full view of, and access to, the system. You have shared with us a summary of what you believe the system to be, plus partial snippets.
There are a few possible theories; you can eliminate most of them by rebooting your host and...
Hi @unsichtbarre ,
The only explanation, based on the technical details you posted, is that you do indeed have a route between the two networks. You should be able to figure it out with the "ping" command, traceroute, and by checking ARP tables...
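For example (the addresses are placeholders; run this from a host on one network toward a host on the other):

```shell
ping -c 3 192.0.2.10        # is the remote side reachable at all?
traceroute 192.0.2.10       # which gateway forwards the traffic?
ip route get 192.0.2.10     # which route/interface the kernel selects
ip neigh show               # ARP table: locally resolved neighbors
```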
Hi @alexx,
there was a similar discussion recently here: https://forum.proxmox.com/threads/proxmox-with-48-nodes.174684