Hello,
I have the following:
iface eno2 inet manual

auto vmbr10
iface vmbr10 inet manual
    bridge_ports eno2
    bridge_stp off
    bridge_fd 0
Attach vmbr10 to the VM/CT and set the static IP (assuming you have multiple IPs available).
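As a sketch of that last step, the bridge can be attached and the address set from the Proxmox CLI. The VMID 100 and the addresses below are placeholders, not values from this thread; substitute your own:

```shell
# Container: attach net0 to vmbr10 and set a static IP directly
# (VMID, IP, and gateway are hypothetical examples).
pct set 100 -net0 name=eth0,bridge=vmbr10,ip=192.0.2.50/24,gw=192.0.2.1

# VM: attach the NIC to vmbr10; the static IP is then configured
# inside the guest OS itself.
qm set 100 -net0 virtio,bridge=vmbr10
```

These are host-specific admin commands, so run them on the Proxmox node as root.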
Hello,
I have the following:
eno1 - management
eno2 - vmbr1 - 10.10.1.{node_ip}/24
     - vmbr2 - 10.10.2.{node_ip}/24
     - vmbr3 - 10.10.3.{node_ip}/24
Is there any way for VMs on different nodes to communicate through vmbr2/vmbr3?
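For that to work, the bridges on each node must sit on the same layer-2 segment, i.e. eno2 (or a VLAN on it) must reach the same switch/VLAN from every node. A minimal sketch of one node's `/etc/network/interfaces`, assuming VLAN 102 is carried on eno2 to all nodes (the VLAN ID and address are assumptions, not from this thread):

```
auto eno2.102
iface eno2.102 inet manual

auto vmbr2
iface vmbr2 inet static
    address 10.10.2.1/24
    bridge_ports eno2.102
    bridge_stp off
    bridge_fd 0
```

With the same VLAN bridged on each node, VMs attached to vmbr2 on different nodes share one broadcast domain.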
-----
And the second question is about sharing...
Hello,
I tried to migrate one VM from a node to another and something went wrong :)
My OS HDD, which is a zvol from FreeNAS, appears as an empty disk with no partition table on it.
I tried to:
- fsck
- insert the CentOS CD > Troubleshooting > boot an existing system > No Linux partition is...
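Before reinstalling anything, it may be worth confirming on the FreeNAS side that the zvol itself still holds data and wasn't replaced or rolled back during the migration. A hedged sketch (the pool/dataset names are guesses; adjust them to your layout):

```shell
# List the zvol and any snapshots under the guessed dataset path.
zfs list -t all -r tank/vm-disks

# Check whether the zvol actually references data; a near-zero
# "referenced" value would suggest it is effectively empty.
zfs get volsize,referenced tank/vm-disks/vm-100-disk-0
```

If a pre-migration snapshot exists, rolling back to it is usually faster than repairing the guest.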
yep, but on the first node I had only one SSD, so I proceeded with the default ext4, and for the second and third nodes I have 2 x SSD (ZFS + RAID1), and I saw that I cannot have the same config.
maybe I will buy another SSD and reinstall the first node :) Can I reinstall the node from which the cluster was...
ok, thanks :)
and with the ZFS + RAID1 option on 2 HDDs I cannot have this storage type:

lvmthin: local-lvm
    thinpool data
    vgname pve
    content rootdir,images
right?
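Right: a ZFS install doesn't create an LVM thin pool. Instead, the Proxmox installer typically sets up a `zfspool` storage entry in `/etc/pve/storage.cfg`, which covers the same content types. A sketch of what that entry usually looks like (the default pool name `rpool/data` is an assumption about a standard install):

```
zfspool: local-zfs
    pool rpool/data
    content rootdir,images
    sparse 1
```

Thin provisioning comes from the `sparse 1` option here rather than from LVM-thin.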
@Stefan_R I think I will leave the node as it is and set a thinpool only on the others :)
but, if I select ext4 from the target HDD option menu, I will get the thinpool, but if the HDD with Proxmox on it fails, the node will fail
if I select RAID1 from the target HDD option menu I will be able to...
Can I do this on a single partition, or do I need a new physical disk? Both HDDs are in RAID1 (set in the install wizard) and Proxmox is installed on /dev/sda(b)3.
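A thin pool can be built on a single spare partition; it does not need a whole disk. A minimal sketch, assuming a free partition /dev/sdb4 exists (the partition, VG name, and storage ID below are hypothetical, and note that a pool on one partition is not mirrored like the rest of the RAID1 setup):

```shell
# WARNING: destroys any data on /dev/sdb4.
pvcreate /dev/sdb4
vgcreate vgdata /dev/sdb4

# Use all free space in the VG for the thin pool named "data".
lvcreate -l 100%FREE --thinpool data vgdata

# Register the pool as a Proxmox storage.
pvesm add lvmthin local-thin --vgname vgdata --thinpool data --content rootdir,images
```

After this, `local-thin` should appear as an LVM-Thin storage in the GUI.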
I've tried:

root@pmx:~# wipefs -a /dev/sdb4
root@pmx:~# wipefs /dev/sdb
DEVICE  OFFSET  TYPE  UUID  LABEL
sdb     0x200   gpt...