Hello,
I've got a network routing issue that I cannot figure out. I'm running version 6.1-7. My Proxmox node has 4 NICs, and I created two bonds with two NICs each. My interface config is below. I have a VLAN 651 defined on my switch that I'm using for all NFS and iSCSI traffic to my NAS storage...
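For storage traffic like this, the usual pattern is a VLAN sub-interface on the bond with an address in the storage subnet. A sketch (VLAN 651 is from the post; the bond name and address are placeholders):

```
# VLAN 651 (the storage VLAN from the post) on top of the second bond
auto bond1.651
iface bond1.651 inet static
    address 192.168.65.10/24
```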
Thanks, I was able to do the pvcreate and vgcreate. At first, the pvcreate failed with the error:
root@sun1:/etc/lvm# pvcreate /dev/mapper/mpath0
Device /dev/mapper/mpath0 excluded by a filter.
Was able to clear that by running: wipefs -a /dev/mapper/mpath0
Then ran
pvcreate...
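The sequence described above, as a sketch (the device path is from the post; the volume-group name is a placeholder):

```shell
# Old filesystem/RAID signatures on the LUN make LVM's filter
# exclude the device; wipe them first (destructive!)
wipefs -a /dev/mapper/mpath0

# Then initialize the device as an LVM physical volume
pvcreate /dev/mapper/mpath0

# ...and create a volume group on it ("vg_iscsi" is a placeholder name)
vgcreate vg_iscsi /dev/mapper/mpath0
```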
I did try those commands, although I created a partition on the mpath0 device first, which created mpath0-part1. Is it necessary to partition the iSCSI volume first?
PVE didn't see anything and I could not create any LVM entries.
Do I have to run pvcreate and vgcreate on each node?
Thank you, I got a step closer. I created the multipath.conf file below and copied it to all six nodes, then ran systemctl restart multipath-tools.service on each node. This created /dev/mapper/mpath0, and now pvdisplay is clean. However, when trying to create the LVM through the web GUI, the "Base...
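The multipath.conf in the post is cut off; a minimal file following the usual Proxmox multipath pattern looks roughly like this (the WWID and alias are placeholders):

```
defaults {
    user_friendly_names yes
}

multipaths {
    multipath {
        # Placeholder WWID -- find the real one with:
        #   /lib/udev/scsi_id -g -u -d /dev/sdX
        wwid  "36001405xxxxxxxxxxxxxxxxxxxxxxxxx"
        alias mpath0
    }
}
```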
Proxmox version 6.1-7. I'm trying to get LVM over iSCSI working. My iSCSI server is running OmniOS using NappIt. I created the iSCSI share on the NappIt server and created my host target group, iSCSI target group, and view. The Proxmox cluster (six nodes) can add the iSCSI volume with no issues...
Thanks for the reply. I tried the following with no success:
root@sun1:~# iscsiadm -m session -o show
tcp: [1] 192.168.50.10:3260,1 iqn.2010-09.org.zfs-app:1583855287 (non-flash)
tcp: [2] 10.206.20.207:3260,1 iqn.2010-09.org.zfs-app:1583855287 (non-flash)
root@sun1:~# iscsiadm -m node -T...
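The iscsiadm command in the post is cut off; a common cleanup sequence for leftover node records (the target IQN is taken from the session listing above) would be:

```shell
# Log out of the target on this node...
iscsiadm -m node -T iqn.2010-09.org.zfs-app:1583855287 -u

# ...then delete the stored node record so it is not re-logged-in at boot
iscsiadm -m node -T iqn.2010-09.org.zfs-app:1583855287 -o delete
```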
Hello,
Running a six-node Proxmox cluster, version 6.1-7, and I've run into a strange issue when removing an iSCSI volume from the cluster. Adding the iSCSI volume to the cluster goes through fine; there are no errors and the system logs are clean. But then I decided to remove the iSCSI storage...
Figured it out! After hitting on the right Google search, I found this wiki page:
https://pve.proxmox.com/wiki/Network_Configuration
At the bottom of the page there is an example of using a VLAN with bond0 for the Proxmox VE management IP on a traditional Linux bridge, which is exactly what I needed. So my interfaces...
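The wiki's pattern looks roughly like this (NIC names, VLAN ID, and addresses are placeholders): the management IP sits on a bridge whose port is a VLAN sub-interface of the bond.

```
auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2
    bond-miimon 100
    bond-mode 802.3ad

# VLAN sub-interface of the bond carrying management traffic
auto bond0.100
iface bond0.100 inet manual

# Traditional Linux bridge holding the management IP
auto vmbr0
iface vmbr0 inet static
    address 10.10.10.2/24
    gateway 10.10.10.1
    bridge-ports bond0.100
    bridge-stp off
    bridge-fd 0
```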
My server has 4 NICs and the following network config currently works:
auto lo
iface lo inet loopback
auto eno3
iface eno3 inet manual
auto eno4
iface eno4 inet manual
auto eno1
iface eno1 inet manual
auto eno2
iface eno2 inet manual
auto bond0
iface bond0 inet manual
bond-slaves...
Found it. When I tried to bring up the bridge from the command line I got this:
ifup vmbr1
error: ignoring interface vmbr1. Only one object with attribute 'bridge-vlan-aware yes' allowed.
error: cannot find interfaces: vmbr1
Back in the web GUI I edited the vmbr1 interface and unchecked the "VLAN...
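Per the error message above, only one bridge in /etc/network/interfaces may carry bridge-vlan-aware yes on this version, so a working layout keeps that attribute on a single bridge (interface names and the address below are placeholders):

```
# The one VLAN-aware bridge
auto vmbr0
iface vmbr0 inet static
    address 192.168.1.10/24
    gateway 192.168.1.1
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094

# Second bridge, left non-VLAN-aware
auto vmbr1
iface vmbr1 inet manual
    bridge-ports eno2
    bridge-stp off
    bridge-fd 0
```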
New user to Proxmox, testing out a Proxmox install with a two-node cluster running 6.1-3. Everything was working great, and then something changed and my first node's Linux bridge vmbr1 deactivated. Now any VM assigned to that Linux bridge cannot see the network. What happened was I...