btw it would probably be better to run the backup locally and use the hook functionality of vzdump (see the vzdump(1) man page)
to rsync the data to your second server. NFS does not run very well over high-latency links.
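as a rough sketch (the backup-end phase and the TARFILE environment variable are documented in vzdump(1); the target host and path below are placeholders), such a hook could look like:

#!/bin/bash
# vzdump hook script, enabled with "script: /usr/local/bin/vzdump-rsync-hook"
# in /etc/vzdump.conf
phase="$1"
if [ "$phase" = "backup-end" ]; then
    # TARFILE points to the archive vzdump just wrote
    rsync -av "$TARFILE" root@second-server:/srv/backups/
fi
exit 0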
> Any idea please ?
Maybe you should start with the hints these commands gave you ?
pvecm add 10.51.0.11 root@px-node-4
--> node px-node-4 already defined
could it be a wrong IP address here ?
systemctl status corosync.service ?
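you could also check which nodes are already defined, on an existing member, with:
pvecm status
pvecm nodes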
in bridge mode (the PVE default), you do not need any iptables/firewalling on the PVE hosts to make the VM accessible on the LAN
check that:
* the NIC of your VM is located on the vmbr0 bridge:
qm config my_vmid | grep ^net0
net0: virtio=5A:99:B2:45:82:5A,bridge=vmbr0
* disable firewall on...
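if the net0 line carries a firewall=1 flag, one way to drop it (a sketch reusing the example values above; my_vmid is a placeholder) is to set the NIC again without the flag:
qm set my_vmid -net0 virtio=5A:99:B2:45:82:5A,bridge=vmbr0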
according to https://blog.remibergsma.com/2012/11/15/migrating-an-ip-address-to-another-server-clear-the-arp-cache-of-your-neighbors/
you could use arping to force your external nodes to register your new MAC address / IP address combination
this could go for instance in a /etc/rc.local on...
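for instance (a sketch assuming the iputils arping; the interface name and IP are placeholders):
arping -U -c 3 -I eth0 192.0.2.10
this sends a few gratuitous ARP packets announcing the MAC address now holding that IP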
> Is it possible to give each VM its own dedicated public IPv4 while simultaneously using an 802.3ad (LACP) bond of eno1 & eno2 as a bridge port?
yes, this is possible, look at https://pve.proxmox.com/pve-docs/chapter-sysadmin.html#_linux_bond
for an example of a bridge using a linux bond as...
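a sketch of what /etc/network/interfaces could look like (addresses are placeholders, adjust to your LAN):

auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2
    bond-miimon 100
    bond-mode 802.3ad

auto vmbr0
iface vmbr0 inet static
    address 192.0.2.10
    netmask 255.255.255.0
    gateway 192.0.2.1
    bridge_ports bond0
    bridge_stp off
    bridge_fd 0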
PVE 5.0 will include a tool to import external disk images to an existing VM.
So you will be able to do
qm create 501
qm importdisk 501 my_qcow2 my_storage_id
the disk will then appear as an "unused disk" in the GUI, and you can then attach it to a VirtIO block, SATA or SCSI controller
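the attach step can also be done on the command line, for instance (a sketch, the exact volume name depends on your storage type):
qm set 501 -scsi0 my_storage_id:vm-501-disk-1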
you could put the ifconfig snippet in /etc/rc.local
this will be executed once at boot, which should be enough to change the MAC address
output of anything you put in /etc/rc.local will be displayed on the console when you boot
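a minimal sketch of such a /etc/rc.local (interface name and MAC address are placeholders; the file must be executable):

#!/bin/sh
# change the MAC address once at boot
ifconfig eth0 hw ether 00:11:22:33:44:55
exit 0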
maybe the network is defective at the link level ?
are you seeing LOWER_UP when executing
ip link show dev eth0
where eth0 is the name of the device you're using for connecting your PVE host to the outside world
if you cannot ping the container, test from your outside machine located in the same LAN whether the container is answering ARP who-has requests
tcpdump -nnn -e -i vmbr0 arp and src host ip_of_the_container_you_try_to_join
you should see something like
10:08:25.423797 c2:cd:8b:f6:ab:cd > 0c:c4:7a:31:xx:xx...
If you have hostnames and not IPs in your corosync.conf, does the hostname resolution require access to a third machine which is outside your LAN ?
In that case I would recommend putting the hostname and IP in /etc/hosts.
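for instance (names and addresses are placeholders):
10.51.0.11  px-node-1
10.51.0.12  px-node-2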
if you suspect ARP to be an issue, you should have a look if ARP requests are sent and received properly
on a PC located on an access port on the same VLAN, when the problem occurs
* flush your ARP cache (here arp -ad on the FreeBSD system I am using)
* inspect with tcpdump if the whohas...
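for that last step, a sketch (the interface name is a placeholder):
tcpdump -nnn -e -i eth0 arp
this shows both the who-has requests and the is-at replies together with the MAC addresses involved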
hmm, it might be that the NFS mount point is not correctly detected, so the mount is retried and fails
so please remove the storage definition from the command line (it will keep the content untouched)
for instance
pvesm remove backupremote
stop processes accessing the share (you can list them with fuser...
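once nothing accesses it any more, you can unmount the stale mount point (PVE mounts NFS storages under /mnt/pve/<storage_id>):
umount /mnt/pve/backupremote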
> Go to promox interface for "activate" NFS Storages
After activating the storage, is the mount taking place ?
ie what is the output of
pvesm status
pvesm list templates
pvesm nfsscan nfs_server
For the FreeNAS backup part: you can only backup as fast as you can read :)
Make sure that FreeNAS is on a different network, and that you can read fast enough.
When you see vzdump output like:
INFO: status: 92% (47804776448/51539607552), sparse 12% (6186778624), duration 319...
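from such a line you can estimate the read rate: 47804776448 bytes in 319 seconds is about 47804776448 / 319 ≈ 150 MB/s (roughly 143 MiB/s)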
to use your internal LAN for cluster communication: use the private IP (10.0.*) address of each node when adding a cluster member
to use public internet addresses for your VM or CT: when adding a network interface for a VM or CT make it sure it is added to a bridge which is connected to the...
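for instance, when joining a node over the private LAN (the IP is a placeholder for an existing member's private address):
pvecm add 10.0.0.11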