While running multipath -ll I discovered a typo in the blacklist section: I had a wwids .* key instead of wwid .*.
I fixed it and will see if it fixes the issue. If not, I will post more info.
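For anyone hitting the same thing, the corrected section looks roughly like this (the WWID in the exceptions list is a made-up placeholder):

blacklist {
    # the keyword must be "wwid"; my broken config had "wwids" here
    wwid ".*"
}
blacklist_exceptions {
    # placeholder WWID for the FC LUN that should stay multipathed
    wwid "36005076801234567890123456789abcd"
}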
Thank you,
Dennis
I have a 3-node cluster with Fibre Channel storage configured as shared LVM. It was installed using Proxmox 7 and was working great. Once we upgraded to Proxmox 8, I started to notice these messages on some of the virtual machines.
If I run a pvscan on host #1:
# pvscan
WARNING: Device...
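(For context: with multipath, LVM can see the same PV both on the raw /dev/sd* paths and on the /dev/mapper device, and duplicate-device warnings like this usually trace back to the multipath blacklist or the LVM filter. A global_filter sketch for /etc/lvm/lvm.conf that limits scanning to the multipath maps, to be adapted to your device layout:

# /etc/lvm/lvm.conf (sketch): accept multipath maps, reject everything else
devices {
    global_filter = [ "a|^/dev/mapper/|", "r|.*|" ]
}
)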
I understand.
In more detail, my setup is a PVE cluster running inside an IBM Blade with IBM fibre channel storage.
For backups we use QNAP NAS storage, running the latest version of the QTS operating system.
Nothing out of the ordinary. Is there anyone here using PBS (virtually) and QNAP as NFS...
An update...
I created a new PBS server with version 1.1-1 and a brand-new datastore.
I have been backing up to this datastore for two days now, no issues at all.
I have been able to restore everything that I have backed up so far.
Garbage collection has been running daily as usual, no issues.
Fabian,
Thank you for your help. Unfortunately I don't have the resources to provide PBS with local disks of that size.
Also, it was working perfectly for over two years while PBS was on version 1.x; this just started after upgrading PBS to 2.x.
I will follow your test suggestions and will also create a...
Fabian,
Backup servers are VMs inside Proxmox physical hosts. The datastore storage consists of NFS volumes from 3 different physical storage servers (2 QNAPs, 1 Supermicro).
As I mentioned, I have 3 datastores created recently and all of them have this issue. The first 24 hours after creating the...
I am having an issue with backups and restores on multiple datastores with different backup servers. The only thing they have in common is that the backup servers are on version 2.3-3.
Ever since I either upgraded to 2.3-3 or simply installed a new server on 2.3-3, my backups haven't been reliable. For example, I just...
So if I have a VM with 10 disks spread across different LVMs, when I go to restore, do I just select any of the LVMs and it will place each disk back on the original LVM it was stored on?
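From the CLI side, my understanding is that qmrestore makes the distinction explicit: passing --storage puts every disk on that one storage, while omitting it restores each disk to the storage recorded in the backup (assuming those storage IDs still exist). A sketch with placeholder paths and IDs:

# force all disks onto a single storage
qmrestore /mnt/backups/vzdump-qemu-100.vma.zst 100 --storage lvm-a

# omit --storage to put each disk back on its original LVM
qmrestore /mnt/backups/vzdump-qemu-100.vma.zst 100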
Hello guys:
I have a 3-node cluster using a shared LVM store (FC underneath). The cluster has four 3.3 TB storage LVMs.
I have one VM with multiple 500 GB disks; those disks are located on different LVMs.
The backup process runs smoothly, and luckily I have not needed a restore from it.
My...
I upgraded a cluster node running PVE 6.4-11 to PVE 7, and upon reboot I can no longer access the host via the network.
Here's my /etc/network/interfaces:
auto lo
iface lo inet loopback
auto eno1
iface eno1 inet manual
auto eno2
iface eno2 inet manual
iface enp0s26f0u2 inet manual
iface...
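(One thing worth checking after a major upgrade like this is whether the NIC names changed, since a bridge pointing at a name that no longer exists would produce exactly this symptom. Something along these lines, run from the console:

ip -br link                                            # names the kernel sees now
grep -E 'iface|bridge-ports' /etc/network/interfaces   # names the config expects

If the two disagree, updating /etc/network/interfaces to the new names and running ifreload -a should bring the host back.)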
I have been attempting, so far unsuccessfully, to convert a Windows Server 2000 machine with MS SQL Server 2000 on it to Proxmox.
The machine has three virtual disks: one 33 GB (where the OS and boot are), another 136 GB (where the databases are), and a 100 GB one (used for storing SQL backup dumps).
The MSSQL...
Dominic:
I ran the process again and it was successful.
I converted all the disks prior to assigning them to the VM.
The issue could have been that, while the second disk was being imported, I was assigning the previously imported disk to the VM, and the import process found the second LV in use...
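For anyone following along, the flow that worked looked roughly like this (VMID, file names, and storage ID are placeholders):

# import every converted disk first, attaching nothing yet
qm importdisk 100 os-33g.raw local-lvm
qm importdisk 100 db-136g.raw local-lvm
qm importdisk 100 dumps-100g.raw local-lvm
# only then attach the imported volumes to the VM
qm set 100 --ide0 local-lvm:vm-100-disk-0

Importing everything before attaching avoids the imported LVs being in use while later imports run.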
I was able to migrate it to Proxmox and boot.
I had to switch the boot order and place the ide1 (558 GB) disk as the first boot device, even though Windows said that the boot partition was C: (136 GB).
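(For reference, the same boot-order change from the CLI would be something like this, with a placeholder VMID:

qm set 100 --boot 'order=ide1;ide0'

ide1 goes first, so the VM boots from the 558 GB disk.)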
Regards,
Dennis
I also noticed another thing after upgrading to 6.4.
All my FC LUNs are 3.351 TB raw and 3.27 TB after formatting. They used to show as 3.27 TB on Proxmox 6.3, but after upgrading to 6.4 (and on new installs using 6.4) Proxmox shows the volumes at 3.60 TB, which is wrong.
I don't know if this might...
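(One possible explanation, purely speculative math on my part: 3.27 TiB × 1.0995 ≈ 3.60 TB, so the new number could come from the display switching between binary and decimal units rather than the volumes actually changing size.)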