You know, as per the DNS specification a dot (".") cannot be included in the host part - maybe you should modify your naming scheme to be 'DNS compatible'
That's not a good idea at all - you will have data loss on a power outage or if the server crashes;
To avoid that you need a RAID controller with a BBU, and the hard disk write caches turned off;
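turning the on-drive write cache off can be done with hdparm - a sketch, assuming /dev/sda as a placeholder for your actual drives:

```shell
# disable the drive's volatile write cache so the BBU-backed
# controller cache is the only caching layer (device name is an assumption)
hdparm -W 0 /dev/sda

# verify the current write-cache setting
hdparm -W /dev/sda
```

on a hardware RAID controller the member disks are usually configured through the controller's own tool instead, so check the vendor utility first.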
Yes, you should really convince them to buy a hardware RAID controller - Areca or LSI 3ware are my...
If you don't bind your bridge to a physical interface your network packets will not leave your host;
He is performing NAT on the bridge, where all hosts from 192.168.1.0/24 are supposed to be NATted to 192.168.1.1, which is on the same Layer 2 network - so if it's not working with any ip in 192.168.1.0/24...
I assume you didn't read the whole thread - i already replied to Dietmar's post 5 days ago that his suggestion would be the best solution ;-)
Not necessarily, i know several setups where the backups are sent to a local NFS storage cluster which replicates its data to a remote datacenter like...
please make sure the links in your post are correct and working;
in a virtualization cluster the software brings up the vm on another node if the current node fails (or you do it manually) - this is only possible with shared storage;
if you prefer manual migrations between nodes you also...
Live migration is not possible without shared storage;
you need to backup/restore, or shut down and migrate the vm - the downtime depends on your hardware/network equipment;
for minimal downtime you need a shared storage
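on a PVE node the two variants could look roughly like this - a sketch, where vmid 100, the node name and the storage names are assumptions:

```shell
# with shared storage: live migration, vm keeps running
qm migrate 100 node2 --online

# without shared storage: backup on the source node ...
vzdump 100 --storage backup-nfs

# ... then restore the dump on the target node (path is a placeholder)
qmrestore /mnt/backup-nfs/dump/vzdump-qemu-100.vma 100
```

the second variant means the vm is down from the moment you stop it until the restore finishes.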
hmmm... not so easy... never had something like this - i would try the following:
eth0 == vmbr0 (85.114.132.x)
eth0:0 == vmbr1 (78.31.71.x)
eth0:1 == vmbr2 (192.168.1.1) == vm's (192.168.1.0/24)
the rest is all iptables to nat and forward your traffic properly
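the iptables part could be sketched like this - assuming eth0 carries the public ip and 192.168.1.0/24 is the vm subnet:

```shell
# enable routing on the host
echo 1 > /proc/sys/net/ipv4/ip_forward

# NAT the private vm subnet out via the public interface
iptables -t nat -A POSTROUTING -s 192.168.1.0/24 -o eth0 -j MASQUERADE

# forward vm traffic to the uplink and allow the replies back in
iptables -A FORWARD -s 192.168.1.0/24 -o eth0 -j ACCEPT
iptables -A FORWARD -d 192.168.1.0/24 -i eth0 -m state --state ESTABLISHED,RELATED -j ACCEPT
```

if you also want to reach services inside the vm's from outside you additionally need DNAT rules in the PREROUTING chain.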
maybe you don't need the bridge...
there are some questions....
you said you have two ip's 85.114.132.x and 78.31.71.x
is 78.31.71.x routed to 85.114.132.x or how has the isp assigned that ip to you?
what are 192.95.31.41 and 198.50.153.144, do you own them also?
i would try using eth0 and eth0:0, each with one public ip, where...
Zimbra Collaboration Server Starter, Standard or Pro Edition come with clustering and HA features - so simply install a Zimbra cluster in your PVE environment
for such important backups you want to 'preserve' i would recommend keeping them on a 'storage archive' and not on a regular backup storage: per definition, backups are recurring and therefore get overwritten, while archives are kept until someone manually deletes them;
that...
being able to define a max-backups limit on the storage side and override it per job with a lower value than the storage limit would be very cool, as it would significantly reduce the number of nfs mounts to the storage server and give you the flexibility you had before;
would be very nice if you could implement it...
that's what i would expect from a backup solution;
if you have a dedicated shared backup storage you will send all backups from all nodes to that storage, and the different backup jobs define which vm's are backed up and with which options;
what confuses me here is that each backup storage definition creates an...
i have had drbd running in active/active mode for over 3 years without problems, and yes, it's easy to control on which node the vm's are running - simply migrate the vm you want to move from one node to the other;
if you create a single drbd device in active/active mode you have control over each...
i also find the 'old' way of defining the max backups on the backup job much more suitable and much more flexible than the current one;
when you needed to change that for specific vm's it was fairly easy and logical - currently it's really inconvenient;
defining the same storage multiple times only for the...
if you don't need the native vlan inside the vm, you can create eth0.130 on the pve host and connect vmbr0 to it - then you don't need to tag the traffic inside the vm, just as you would put a regular computer into vlan130 via access mode on the switch;
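on the host this could be set up like so - a sketch, assuming vlan id 130 and that the bridge vmbr0 already exists:

```shell
# create the tagged subinterface for vlan 130 on the uplink
ip link add link eth0 name eth0.130 type vlan id 130
ip link set eth0.130 up

# attach the bridge to the tagged subinterface;
# guests on vmbr0 then see untagged vlan130 traffic
brctl addif vmbr0 eth0.130
```

to make it persistent you would put the equivalent into /etc/network/interfaces (bridge_ports eth0.130) instead of running the commands by hand.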
yes, the gateway is seeing the packet, but stacked vlan tags in the frame are normally not what you want;
in his case the outer tag comes from vlan50, where his guest is connected to the bridge on the native vlan - so his traffic ends up in the wrong vlan;
assuming he is running the os of the...