Here's /var/log/syslog during the error I noted above.
Timeline is as follows:
1. Nov 16 13:25:01 - VM 101 started
2. Nov 16 13:25:16 - VM 101 stopped
3. Bridge vmbr1 goes down immediately after the VM stops
4. Nov 16 13:25:27 - User execution of "service networking restart" on the proxmox...
I found a solution. Instead of creating a separate vmbr1 interface, I just gave vmbr0 two IPs, one of which is on the internal VM network. Now stopping a VM no longer takes down the entire bridge.
This should suffice for our use case (a software development lab).
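For reference, a minimal sketch of what that dual-IP vmbr0 stanza could look like in /etc/network/interfaces. The addresses and the eth0 port are made-up examples, not taken from my actual setup; the second IP is added with post-up rather than a second iface stanza, since ifupdown handles that more reliably:

```
# /etc/network/interfaces (sketch; example addresses)
auto vmbr0
iface vmbr0 inet static
    address 192.168.1.10
    netmask 255.255.255.0
    gateway 192.168.1.1
    bridge_ports eth0
    bridge_stp off
    bridge_fd 0
    # Second IP on the same bridge, for the internal VM network
    post-up ip addr add 10.10.10.1/24 dev vmbr0
    post-down ip addr del 10.10.10.1/24 dev vmbr0
```

With this, VMs on the internal network use 10.10.10.1 as their gateway, and there is no separate vmbr1 to go down.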
I'm noticing that when I stop a VM whose network interface is tied to an internal "private sub-network", it takes down the bridge as well, so any other VMs on that network lose connectivity too.
It doesn't matter whether the VM's network interface uses e1000 or virtio.
I'm using this...
@Nemesiz I plan on picking up servers from OVH with 128 GB of RAM, two 2 TB magnetic disks, two 300 GB SSDs, and dual Intel E5-2630v3 CPUs. The use case is a software development and test environment. Non-production.
I was going to run RAID1 on the magnetic disks, and L2ARC on one SSD and...
Thanks for your suggestions!
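A rough sketch of how that layout might be created with ZFS on Linux. The pool name and device paths are placeholders I've made up for illustration (in practice you'd use /dev/disk/by-id paths), and this covers only the parts mentioned above, the mirrored magnetic disks and the L2ARC on one SSD:

```
# Sketch only: pool name and device names are example placeholders
zpool create tank mirror /dev/sda /dev/sdb   # RAID1 (mirror) on the two 2TB magnetic disks
zpool add tank cache /dev/sdc                # L2ARC read cache on one of the SSDs
zpool status tank                            # verify the mirror and cache device show up
```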
I know that if I clustered these nodes I could control them from any node. I'm coming from a XenServer environment, where getting control of all nodes was as easy as typing an IP into a textbox in the client software. I'm reluctant to start making these nodes aware...
Ah. There might be some confusion here. I have more than one physical server. I have 7. But they're all used in a development/test environment, where high availability is not needed. Just to make sure we're all on the same page, as I understand it, Proxmox refers to "nodes" as physical machines...
Take a look at Clonezilla. You can boot from the Clonezilla CD and mirror your OS drive over the network (via NFS, etc.) or onto another physical drive.
http://clonezilla.org/
I'd like to manage all my nodes from one location (e.g. https://10.20.30.40:8006).
I know that if I clustered all my nodes I could do that. However, I'd rather not cluster these nodes, as it increases complexity and creates more work: I'd have to move all my VMs around so I can clear out the...
I'm a little confused. This is how I read this:
- You have a server with Proxmox installed already
- You have a kvm guest on the proxmox host
- You have a drive installed in the Proxmox host, and you can assemble and mount it with mdadm and vgchange.
- You're asking how to transfer all data on this...
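If that reading is right, the assemble-and-mount step would look something like the following. This is a generic sketch, with placeholder volume-group and logical-volume names, not the exact commands for your setup:

```
# Sketch: assemble the md RAID array and activate LVM on it
mdadm --assemble --scan      # scan superblocks and assemble any md arrays found
vgchange -ay                 # activate all detected LVM volume groups
lvs                          # list logical volumes so you can find the right one
mount /dev/<vg>/<lv> /mnt    # <vg>/<lv> are placeholders for your names
```

From there you could copy the data off with rsync or similar.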
FYI, in the future if you have trouble with the Java console, you can enable VNC directly to the VM by editing
/etc/pve/local/qemu-server/<VMID>.conf
Where <VMID> is your VM's ID, e.g. 101.
Then put this line in the conf file:
args: -vnc 0.0.0.0:<100+VMID>
Where <100+VMID> is your VMID + 100...
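As a worked example, for VMID 101 the display number is 100 + 101 = 201, so the conf file would contain:

```
# /etc/pve/local/qemu-server/101.conf
args: -vnc 0.0.0.0:201
```

Note that QEMU's -vnc option takes a display number, and the actual TCP port is 5900 + display, so here you'd point your VNC client at port 6101 on the host (and make sure your firewall allows it).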