We have a couple of multi node clusters running the latest 3.4 without any issues, and tried to re-install one of them, a 4 node cluster, with pve 4;
Base install was straightforward, but we ran into issues with quorum when creating the cluster - all nodes were set up identically but one node couldn't join the...
you should use ext3 as this is supported and recommended by proxmox;
i had ext4 on one host which was running fine for about 4 months, and then i ran into problems where backups did not work - they hung until the host was rebooted. there were some ext4 issues in the syslog and i switched back to ext3
what do you want to achieve?
you have eth0 and eth7 in the same subnet but on different bridges, and i assume you currently have your vm connected to vmbr0 and now want to add vmbr7070 to the same vm - that does not make sense to me;
if you need a second ip inside your vm from the same subnet, create...
there is no need for the files to be owned by root; for a restore the files only need to be readable - which is the case;
you can check on your nfs server which user this id belongs to and set it accordingly - i assume it's 'nobody'
i can't exactly remember right now, but i think there is an option in the idmapd.conf file to set a user mapping if...
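from memory, the relevant part of /etc/idmapd.conf looks roughly like this (option names from memory - please double check the idmapd.conf man page before relying on it):

[Mapping]
Nobody-User = nobody
Nobody-Group = nogroup

after changing it you'd have to restart the idmap daemon (on debian it is started via nfs-common, as far as i remember).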
do you not even have a single managed switch?
if you do, i would set up a port mirror on the interfaces to see what is happening - otherwise you can only guess;
a possible reason could be a broadcast storm on your network, causing this kernel panic and the adapter resets on the other nodes because of...
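if you want to get an idea without a managed switch, you could run something like this on one of the nodes and watch the packet rate (eth0 is just an example, use the interface connected to that network):

tcpdump -i eth0 -nn broadcast

during a real broadcast storm you'd see thousands of broadcast frames per second instead of a handful.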
i have the same issue - impossible to remember unless you have very detailed documentation somewhere;
the workaround i use: i have 4 nodes - all vm ids on node one start with 1, on node two all vm ids start with 2, ...
works fine for me
yes, you are right - that's currently not possible via the gui;
you also need to check for the vzquota data in /var/lib/vzquota and delete it, because it contains the old storage path:
vzquota drop <pveid>
a new one will be generated next time you start your vm;
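as a concrete sketch, assuming the container has id 101 (replace with your real id):

ls /var/lib/vzquota          # check whether quota data for the container is still there
vzquota drop 101             # drop the stale quota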
it seems you use venet interfaces - in that case you can only assign an ip address from the host interface's subnet;
with the use of veth (bridged mode) you can use networking inside the container as on a regular pc;
i only use veth interfaces with my containers, and that's how it looks in the...
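just as an illustration and not my exact config (mac addresses and the id are made up), a bridged veth entry in the openvz container config on proxmox (/etc/pve/openvz/<vmid>.conf, if i remember the path correctly) looks something like this:

NETIF="ifname=eth0,bridge=vmbr0,mac=36:3B:28:3C:6A:1E,host_ifname=veth101.0,host_mac=36:3B:28:3C:6A:1F"

inside the container you then configure eth0 like on any normal linux box.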
ahh, so that's the reason you are keeping this thread going all by yourself :-)
just kidding - provide more info so that someone is able to help you; just saying it's not working is not enough;
so, if you are really interested in someone's help, then come on and provide accurate, detailed info about your problem...
The Proxmox wiki is an excellent place to start - otherwise you need to provide more details for anyone to be able to help:
http://pve.proxmox.com/wiki/Installation
http://pve.proxmox.com/wiki/Documentation
http://pve.proxmox.com/wiki/Category:HOWTO
Proxmox uses pre-allocation when creating qcow2 images, so the image also uses the full size on the host system;
This prevents over-provisioning of your storage;
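if you want to verify that on your host, compare the virtual size with the actual size on disk - roughly like this (the path is just an example for local storage and vm id 101):

qemu-img info /var/lib/vz/images/101/vm-101-disk-1.qcow2
du -h /var/lib/vz/images/101/vm-101-disk-1.qcow2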
you may want to save everything in /etc/pve, as it also contains the openvz container configs, storage definitions and backup tasks
and check whether your local storage /var/lib/vz is on harddisk 1 and possibly contains data which needs to be saved
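a rough sketch of what i mean (paths and names are just examples):

tar czf /root/pve-config-backup.tar.gz /etc/pve    # configs, storage definitions, backup jobs
df -h /var/lib/vz                                  # see which disk the local storage sits on
du -sh /var/lib/vz/*                               # see if there is data worth saving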
wireshark/tcpdump on the pve host and/or the nfs target should give you more details about what's going on and where to look deeper
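for example, something along these lines on the pve host (interface and nfs server ip are just placeholders):

tcpdump -i eth0 -nn -w /tmp/nfs.pcap host 192.168.1.50 and port 2049

then open the capture in wireshark and look for retransmissions, resets or long gaps.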
i had similar issues on two pve hosts and found that the nic and the switch sometimes had a problem with autonegotiation at higher throughput; after fixing both, the issue didn't re-occur
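on the pve side you can check what the nic actually negotiated with something like (eth0 again just an example):

ethtool eth0    # look at the reported speed/duplex and whether auto-negotiation is on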
Yes, i use a two node drbd setup in active/active mode, with 2x 10GBit in bonding mode 1 (active/backup) and the 1 GBit interfaces for management and the bridges (also bonding mode 1);
If you wonder why not bonding mode 0 (balance-rr) - because i want to ensure the highest availability at minimum risk...
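for reference, a mode 1 bond with a bridge on top in /etc/network/interfaces looks roughly like this (interface names and the address are just examples, not my actual config):

auto bond0
iface bond0 inet manual
    bond-slaves eth0 eth1
    bond-mode active-backup
    bond-miimon 100

auto vmbr0
iface vmbr0 inet static
    address 192.168.1.10
    netmask 255.255.255.0
    bridge_ports bond0
    bridge_stp off
    bridge_fd 0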