You don't need any workaround; microsegmentation (aka firewalling between 2 VMs on the same L2 network) has been working for 10 years with the Proxmox firewall.
Defining rules directly on the SDN vnet is currently in development. (instead of defining them VM by VM, or through a security group)
yes, sure, simply use the firewall feature on the VM. It's done at bridge level, so it works between 2 VMs on the same network.
It needs to be done VM by VM currently.
but you can create security groups at datacenter level (a security group = a group of rules), then add the common security group to each VM; see the sketch below.
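A minimal sketch of what that could look like, assuming a hypothetical group name "common" and VM 100 (names and rules are placeholders, not an exact config):

# /etc/pve/firewall/cluster.fw (datacenter level)
[group common]
IN ACCEPT -p tcp -dport 22
IN ACCEPT -p icmp

# /etc/pve/firewall/100.fw (per-VM firewall)
[OPTIONS]
enable: 1

[RULES]
GROUP common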
with openvswitch, VLANs are managed directly in the VM NIC options.
in general, /etc/network/interfaces is only there to declare interfaces, the management IP and bridges.
once that's done, you generally don't need to change it anymore.
Then simply declare the VLAN tag on the VM NIC.
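As an illustration, a minimal /etc/network/interfaces sketch with OpenVSwitch (interface name, IPs and VLAN tags are placeholders):

# physical NIC attached to the OVS bridge
auto eno1
iface eno1 inet manual
    ovs_type OVSPort
    ovs_bridge vmbr0

# management IP on a dedicated VLAN interface (tag 10 here)
auto vlan10
iface vlan10 inet static
    address 192.0.2.10/24
    gateway 192.0.2.1
    ovs_type OVSIntPort
    ovs_bridge vmbr0
    ovs_options tag=10

auto vmbr0
iface vmbr0 inet manual
    ovs_type OVSBridge
    ovs_ports eno1 vlan10

The per-guest VLAN is then just the "VLAN Tag" field of the VM network device (e.g. bridge=vmbr0,tag=20 in the VM config).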
I'm using Arista switches as exit-node/router in my production. (also tested with NVIDIA/Mellanox switches + Cumulus Linux OS)
and yes, sure, this will allow you to have VXLAN on physical clients. (or even VLAN->VXLAN translation)
Hi,
here is my dlm.conf
# Enable debugging
log_debug=1
# Use sctp as protocol
protocol=sctp
# Delay at join
#post_join_delay=10
# Disable fencing (for now)
enable_fencing=0
I'm using protocol=sctp because I have multiple corosync links, and it's mandatory in that case.
then I format my block device with gfs2...
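As a sketch, a typical invocation could look like this (cluster name, filesystem name, journal count and device are placeholders):

# -p lock_dlm : use the DLM locking protocol
# -t <clustername>:<fsname> : lock table; the cluster name must match the corosync cluster name
# -j 3 : one journal per node that will mount the filesystem
mkfs.gfs2 -p lock_dlm -t mycluster:gfs2vol -j 3 /dev/sdb
mount -t gfs2 /dev/sdb /mnt/gfs2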
- backup /var/lib/pve-cluster/config.db
- install a new Proxmox with the same hostname / IP address
- boot
- systemctl stop pve-cluster
- copy back /var/lib/pve-cluster/config.db
- reboot
done
(if you are in a cluster, also copy back /etc/corosync/* )
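In shell form, the restore on the freshly installed node could look like this (assuming the backup was copied to /root/config.db, a hypothetical path):

# on the new node, installed with the same hostname / IP
systemctl stop pve-cluster
cp /root/config.db /var/lib/pve-cluster/config.db
# if the node was part of a cluster, also restore /etc/corosync/*
reboot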
ok.
>>When I try to add that NFS share on proxmox-server-2, it complains that those directories it wants to create are already there!
does it complain in the GUI at the moment when you are trying to add the storage?
you could try to add it manually in /etc/pve/storage.cfg directly.
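For example, a minimal NFS entry in /etc/pve/storage.cfg could look like this (storage name, server and export are placeholders):

nfs: mynfs
    server 192.0.2.50
    export /export/proxmox
    path /mnt/pve/mynfs
    content iso,vztmpl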
Well, you can't share the same NFS share between 2 non-clustered nodes!
imagine that you have 2 VMs with the same ID 100: you'll mix up disks with the same name, like vm-100-disk-0.raw for example.
same for vzdump backups.
only for ISOs and LXC templates it shouldn't be a problem
there is no need to rescan the drive inside the VM, the change should be auto-detected. (check with parted /dev/sdX to be sure)
are you sure that it's not simply the filesystem (or the LVM volume if you use it) that is not extended?
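As a sketch, assuming an ext4 filesystem on LVM inside the guest (device, VG and LV names are placeholders):

parted /dev/sdb print                  # confirm the guest sees the new disk size
pvresize /dev/sdb1                     # grow the LVM physical volume
lvextend -l +100%FREE /dev/vg0/root    # grow the logical volume
resize2fs /dev/vg0/root                # grow the ext4 filesystem (xfs_growfs for XFS)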
My plugin was using the old SDK, I hope there is a more modern REST API now ^_^.
My company is a big French NetApp partner, https://www.netapp.com/partners/partner-connect/groupe-cyllene/,
feel free to PM me if you need help/advice for a clean integration.
Nice!
you can have a look at my 13-year-old unmaintained netapp module and reuse code if you want ^_^
https://github.com/odiso/proxmox-pve-storage-netapp
I don't know for VHD, but for VMDK, it's possible to simply specify the last snapshot:
qemu-img convert -f vmdk -O qcow2 <top_file>.vmdk output.qcow2
(so it should work with the "qm disk import ..." command too)
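For example, something like this should work (VM ID and target storage are placeholders; <top_file> is the last snapshot file as above):

# import the top VMDK as a new unused disk of VM 100 on storage "local"
qm disk import 100 <top_file>.vmdk local --format qcow2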
1) because of the primary exit-node
2) the primary exit-node option is optional, but I remember that the GUI had a bug, trying to force a primary. I thought it was fixed.
@shanreich
https://lists.proxmox.com/pipermail/pve-devel/2024-February/061924.html
2b) we can't force the announcement of the VM IP...
I have seen this recently on a Dell PowerEdge server, where the profile in the BIOS was in economy mode,
and the governor was not exposed to the guest OS.
you should check in your workstation BIOS if there is something related to CPU governor, CPU profile, ...something like that
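As a quick sketch, you can check what the OS currently sees (standard Linux sysfs / cpupower, nothing Proxmox-specific):

# current CPU frequency governor, if exposed
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
# driver, available governors and frequency limits
cpupower frequency-info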