Note the multicast traffic from our Layer 3 switch:
ifIndex 417
Octets Received 6673610456
Packets Received Without Errors 24005319
Unicast Packets Received 1431638
Multicast Packets Received 16965403
Broadcast Packets Received 5608278
Receive Packets Discarded 5125808...
Dietmar,
Sorry for the delay replying, I could not test until tonight.
Stopping rgmanager was mentioned on the wiki.
It does force migration of the HA KVMs: they stop on that node and then start on the other node:
fbc243 s009 ~ # qm list
VMID NAME STATUS MEM(MB)...
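For reference, a minimal sketch of that sequence, assuming a PVE 2.x node with the standard init scripts (checking the cluster with clustat is my addition, not from the original post):

```shell
# On the node you want to drain: stopping rgmanager forces
# relocation of the HA-managed VMs to another cluster node.
/etc/init.d/rgmanager stop

# Watch the cluster view; the HA services should show as
# started on the other node.
clustat

# Confirm the VMs are no longer running locally.
qm list
```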
After restoring the backup, delete the Network Device and add a new one.
Or do the restore from the CLI.
On PVE 1.9 we used to do this; I've not tried it on 2.1:
qmrestore --unique /bkup/pvebackup/vzdump-qemu-560-2011_07_24-11_30_05.tar <NEWID>
from the man page for qmrestore : --unique...
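A sketch of both options from the CLI, assuming the restored VM gets ID 561 and should use a virtio NIC on vmbr0 (those values are examples, not from the original post):

```shell
# Restore under a new VMID; --unique regenerates unique
# properties such as the NIC's MAC address.
qmrestore --unique /bkup/pvebackup/vzdump-qemu-560-2011_07_24-11_30_05.tar 561

# Or, after a plain restore, drop the old NIC and add a fresh
# one so a new MAC address is generated:
qm set 561 -delete net0
qm set 561 -net0 virtio,bridge=vmbr0
```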
3dm2 still has an important issue here: we can access the configuration page with Konqueror, but the email notification test crashes 3dm2, so 3dm2 may not send an email about a failed disk, etc.
For now, to get notified of issues, we run this 4x per hour from cron.d:
#!/bin/sh
# not...
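Since the script above is truncated, here is a rough sketch of what such a check could look like, assuming smartmontools and a working local mailer; the original setup is on a 3ware controller, so this smartctl-based version, the device names, and the recipient are all illustrative assumptions:

```shell
#!/bin/sh
# Run from cron.d: mail root if any disk reports a failing
# SMART health status. Device list and address are examples.
for dev in /dev/sda /dev/sdb; do
    if ! smartctl -H "$dev" | grep -q PASSED; then
        smartctl -H "$dev" | mail -s "SMART failure on $(hostname): $dev" root
    fi
done
```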
Since you will be purchasing two systems, consider running Proxmox on both and using DRBD. There is a good wiki page and plenty of help on the forum.
ZFS and NAS are great but do not provide high availability.
I just tried to add this to the wiki, but could not log in. I use this rather than the wiki page that I mostly wrote.
see post #24 at http://forum.proxmox.com/threads/7145-VNC-Console-keyboard-issue
I had too many errors with rsync and the KVM on sshfs.
The issues may have had nothing to do with sshfs, but the system I restored works without issues on DRBD and local storage.
So I give up on sshfs for now. Will use NFS instead.
Here were some of the issues:
[mntent]: warning: no...
May 30 12:59:58 starting migration of VM 8020 to node 'fbc240' (10.100.100.240)
May 30 12:59:58 copying disk images
May 30 12:59:58 starting VM 8020 on remote node 'fbc240'
May 30 12:59:59 starting migration tunnel
May 30 13:00:00 starting online/live migration on port 60000
May 30 13:00:02...
The mount and share work, thanks for the idea.
However, restoring an OpenVZ template does not work. If I remember correctly, sshfs and links do not get along. Here is some of the error message on PVE:
Creating container private area...
I think it would be good to allow sshfs for shared storage.
For us sshfs is more reliable than NFS. We use sshfs for user documents, with $HOME/Documents pointed at an sshfs mount on a KVM. We've done that for 3 years and have had very few issues.
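For reference, a sketch of how such a documents mount can be set up; the host name, remote path, and options are examples, not our exact configuration:

```shell
# Mount the KVM guest's document share over sshfs into the
# user's home. 'reconnect' retries the SSH session after
# network hiccups; ServerAliveInterval keeps the session alive.
sshfs -o reconnect,ServerAliveInterval=15 \
    user@docserver:/srv/documents "$HOME/Documents"

# Unmount when done.
fusermount -u "$HOME/Documents"
```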
Hello Marco
There is nothing to do or worry about when that message displays [that conclusion is from reading threads on this forum and mails on the pve-user mailing list]. It looks like one of those tar errors that should be a warning. My guess is that the devs will get to dealing...
I have a Supermicro model X7DAL-E+.
It does not have an onboard video card.
We used this for Proxmox 1.9 with an add-on video card and had display on the system console.
For Proxmox 2.1 kernel 12, we do not have video: after GRUB loads the kernel and gets to 'waiting for /dev/...' or...
OK, I see: the entire logical volume is snapshotted. I thought the snapshot was somehow made of just one of the directories under private.
So lvcreate --snapshot is used on /dev/mapper/pve-data.
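A sketch of that snapshot-backup cycle on the pve-data volume; the snapshot name, size, and mount point are example values:

```shell
# Create a snapshot of the whole pve-data logical volume.
lvcreate --snapshot --size 1G --name vzsnap /dev/mapper/pve-data

# Mount it read-only and back up from the frozen view.
mkdir -p /mnt/vzsnap
mount -o ro /dev/mapper/pve-vzsnap /mnt/vzsnap

# ... run the backup against /mnt/vzsnap ...

# Clean up: unmount and drop the snapshot.
umount /mnt/vzsnap
lvremove -f /dev/mapper/pve-vzsnap
```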