Node1: kernel 6.8.4-3
Node2: kernel 6.5.13-5 (pinned)
Both nodes are used as NFS storage servers for the compute nodes, so there is no load on them other than the NFS server. PVE is used as the OS to keep the infrastructure uniform; basically only its Debian part is used. Hardware on both nodes differs only in...
Sorry for the silly question: if I map physical interface names with systemd.link files, wouldn't that affect the vmbr interfaces? All my PVE networking is OVS-based, so, for example, I have eno1+eno2 in a bond and a vmbr bridge on top of it; the bridge has the same MAC as eno1.
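For reference, this is the kind of .link file I have in mind (the MAC and the new name are placeholders, not my real values). Since the bridge copies eno1's MAC, I would match on the permanent hardware address so the rename only ever applies to the physical port:

```
# /etc/systemd/network/10-lan0.link  (example values)
[Match]
PermanentMACAddress=aa:bb:cc:dd:ee:ff   # burned-in MAC of the physical NIC only

[Link]
Name=lan0                               # new persistent name to reference in the OVS bond
```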
Hi Chris, thanks for your answer!
Right now the datastore lives on LVM on remote iSCSI (a temporary solution), so my guess is that simply re-adding the iSCSI target to the new installation with the same paths would do the trick? Later the datastore will be migrated to local storage via LVM pvmove.
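In other words, something like this is what I am planning (IQN, portal and device names are just placeholders):

```
# Re-attach the existing iSCSI LUN on the fresh install
iscsiadm -m discovery -t sendtargets -p 192.0.2.10
iscsiadm -m node -T iqn.2004-01.example:pbs-datastore -p 192.0.2.10 --login
vgscan                         # the VG/LV on the LUN should show up again
vgchange -ay vg_datastore      # activate and mount the datastore at its old path

# Later: migrate to local SSD storage online
vgextend vg_datastore /dev/nvme0n1p1   # add the new local PV to the VG
pvmove /dev/sdb1 /dev/nvme0n1p1        # move extents off the iSCSI PV
vgreduce vg_datastore /dev/sdb1        # remove the iSCSI PV afterwards
```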
PBS and PVE are co-installed...
I have a PBS+PVE (VM with Bacula for file-level backup) server with one host LV for the datastore and another host LV passed as a physical disk to the Bacula VM.
The OS is installed on an mdadm RAID1 of HDDs; I want to do a clean installation on an SSD RAID on the same hardware server. Moving the PVE part to a fresh...
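For the Bacula VM I assume re-attaching the host LV as a raw disk on the new install is just a matter of (VMID and LV path are placeholders):

```
# Pass the existing host LV through to the Bacula VM again
qm set 101 -scsi1 /dev/vg_data/bacula-disk
```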
Compute nodes: 2x10Gbit LACP, NFS mounts with nconnect=16
Storage nodes: 4x10Gbit LACP, mdadm array of 8x8TB enterprise SSDs, NFS daemon (nfsd) thread count increased to 256 (config sketch below)
Network: OVS, because of the many different VM VLANs
Migration network is in a separate VLAN and CIDR on top of the OVS bond
I can almost max out fio against the underlying...
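For completeness, the NFS tuning mentioned above boils down to roughly this (hostname and export path are placeholders):

```
# Compute node side: 16 TCP connections per NFS server
mount -t nfs -o vers=4.2,nconnect=16 storage1:/export/vmdata /mnt/vmdata

# Storage node side: /etc/nfs.conf, then restart nfs-server
[nfsd]
threads=256
```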
I know it is a very controversial subject and generally a bad idea, but maybe my case is a little specific?
Very tight hardware resources and no budget at all.
Old ProLiant server, single PVE node, fully populated disk bay, HBA mode, the usual RAIDZ2 is ready, that part is trivial. It has an FC HBA card PtP...