Hello @Maxime Laplante
I'm planning to go with a similar setup with an R820 (from techmikeny), but with some additions to speed up replication I/O:
- 1 x 10G Cisco Switch
- 1 x Dell X520-DA2 10GBE 2P Adapter / Per Server
If your primary concern is:
- more space on the cheap: you should go with as...
Since it's mounted with the nounix option, I can't change the permission attributes.
Here is the mount -a output:
//192.168.1.98/backup on /mnt/pve/sbox type cifs...
Yeah, I see that,
and actually it makes sense from a security point of view.
Is there a way to specify the uid and gid for the CIFS mount in storage.cfg?
Hello @wolfgang,
Here is the storage configuration for the CIFS share:
cifs: sbox
    path /mnt/pve/sbox
    server 192.168.1.98
    share backup
    content vztmpl,iso,backup
    maxfiles 5
    username backup
As you may guess, it's mounted under /mnt/pve/sbox as the root user, with the other mount...
Hello,
I couldn't find it in the documentation, but is there any configuration for changing the mount parameters of CIFS shares?
Currently the mounted CIFS share is owned by root and that can't be changed, so unprivileged containers are out of luck for writing.
If I go with a manual mount / fstab entry, will it break...
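For reference, the kind of manual fstab entry I have in mind looks roughly like this; the uid/gid values and the credentials file path are just placeholders (100000 being the mapped root of a default unprivileged container), so treat it as a sketch rather than a tested recipe:
# /etc/fstab -- CIFS share with an explicit owner (uid/gid are example values)
//192.168.1.98/backup  /mnt/pve/sbox  cifs  credentials=/root/.smbcredentials,uid=100000,gid=100000,vers=3.0,_netdev  0  0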
I ran into the very same problem using ZFS 0.7.6.
According to the ZFS FAQ, xattr=sa and dnodesize=auto are suggested, but they're only available when using ZFS datasets (not zvols).
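On a dataset, the settings from the FAQ would look roughly like this (the pool/dataset name is just an example):
# only possible on datasets, not zvols; "rpool/data" is a placeholder
zfs set xattr=sa rpool/data
zfs set dnodesize=auto rpool/data
zfs get xattr,dnodesize rpool/data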
@fabian can you confirm that using Ceph on top of ZFS is not recommended by Proxmox as well?
What about ingress traffic, do you guys have any automation for it?
Let's say we have 5 Proxmox nodes with a public network,
and the container / VM guests (the KVMs) running on a private LAN.
Maybe nginx / haproxy on the Proxmox nodes (something like the sketch below)?
Let's Encrypt certificate automation?
Thanks for the feedback.
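To make the question concrete, what I picture on each Proxmox node is a minimal reverse proxy along these lines; the hostname and the backend address are made up:
# /etc/nginx/sites-enabled/app.example.com (hypothetical vhost)
server {
    listen 80;
    server_name app.example.com;
    location / {
        # forward to the guest on the private LAN
        proxy_pass http://10.10.6.20:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}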
@morph027 are you using zfs for the underlying storage?
I'm in the middle of prep for a Swarm & k8s cluster; my nodes have 3 SSDs in a ZFS raidz1,
with lz4 compression as well.
If you have done any tests / measurements using ZFS, any suggestions for tuning?
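For context, the knobs I'm currently eyeing are roughly these; the dataset name is a placeholder and I haven't measured anything yet:
# hypothetical dataset holding the container volumes
zfs set atime=off rpool/containers
zfs set compression=lz4 rpool/containers
# check what the current settings and compression ratio look like
zfs get atime,compression,compressratio,recordsize rpool/containers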
As far as I know, the predictable network interface names can only be enabled / disabled with kernel boot options, either via the
GRUB_CMDLINE_LINUX_DEFAULT or GRUB_CMDLINE_LINUX option in the grub config.
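Disabling them usually looks something like this (keep whatever other options you already have on that line):
# /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet net.ifnames=0 biosdevname=0"
# then regenerate the grub config and reboot
update-grub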
Can you share the content of your /etc/default/grub file?
@yaboc it's very likely that you get a different provider for each test you run with speedtest-cli.
I suggest picking a dedicated server and running
iperf -s
on it, then on the test machines:
iperf -c <ip of your server> -m -i 2
... and compare the results.
@papanito sorry mate, but all the diagnostic outputs seem okay.
If you still can't find a solution, I can help you troubleshoot over SSH (if you want),
or you could go with a subscription to get support as well.
I created a secondary interface, vmbr1:0, and disabled broadcast messages for it.
It has a static MAC address on the vmbr1 bridge, and it's working so far.
auto enp1s0f1
iface enp1s0f1 inet manual
    mtu 9000

auto vmbr1
iface vmbr1 inet static
    # hypervisor #6
    address 10.10.6.1
    netmask...
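The static MAC mentioned above is pinned with an hwaddress line inside the vmbr1 stanza, something like the snippet below; the address itself is a made-up, locally administered one, and I'm assuming ifupdown applies hwaddress to bridges here:
iface vmbr1 inet static
    # rest of the stanza (address, netmask, bridge ports) stays as above
    hwaddress ether 02:00:00:aa:bb:06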
The bridge and routing seem okay.
What about the NAT POSTROUTING rules on the host:
iptables -t nat -S
By the way, you should be pinging both from and to 10.10.10.2 / 10.10.10.73, right?
In the dnsmasq config, add:
dhcp-option=option:router,10.10.10.2
Change the NAT rule so the output interface is eth0:
post-up iptables -t nat -A POSTROUTING -s '10.10.10.0/24' -o eth0 -j MASQUERADE
You don't need to add the route add line either.
So basically, in every VM, can we use something like this?
auto eth0
iface eth0 inet static
    address 10.10.x.y
    netmask 255.255.0.0
    gateway 10.10.0.1    <<< cluster-wide routed network
or
    gateway ${HYPERVISOR_VMBR0}
Hi all,
I'm trying to set up VM networking properly. Here is my topology:
3 Proxmox hypervisors:
- eth0 -> public ethernet (190.xxx.12.xxx)
- eth1 -> private LAN
node #1 /etc/network/interfaces:
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet dhcp
    address 191.xxx.102.11...