Hello Alessandro,
In case of Proxmox you have 1 disk = 1 file
See some hints also at https://pve.proxmox.com/wiki/Xenmigrate
Not necessarily - the created IMG is in "raw" format, which is faster but consumes more disk space and does not offer linked clones and/or snapshots. Decide according to your...
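If you later want snapshots or linked clones, the raw image can be converted - a minimal sketch, assuming the disk file is named vm-100-disk-1.raw (file names and paths are placeholders, adjust to your storage):

```shell
# Convert a raw disk image to qcow2 (file names are placeholders):
qemu-img convert -f raw -O qcow2 vm-100-disk-1.raw vm-100-disk-1.qcow2

# Inspect the result:
qemu-img info vm-100-disk-1.qcow2
```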
Hello sourceminer
Are these just CEPH nodes or Proxmox nodes too?
Even though you wrote you "cannot ... find anything in the syslogs": post the log sequence around such an event and we may see more .....
How often does this happen?
Try to exchange the power connection between the "good" and "bad" servers - it...
Hello Jonathan
It depends on which filesystem you have on /dev/pve/data and /dev/pve/root, respectively.
* In case of ext(2,3,4):
- shrink the filesystem at /dev/pve/data by resize2fs
- shrink the logical volume by lvreduce - be careful not to make it smaller than the size the filesystem has been reduced to, see...
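The steps above could look roughly like this - a sketch, assuming /dev/pve/data holds an ext4 filesystem mounted at /var/lib/vz, with the sizes 100G/110G as placeholders. Note the LV is deliberately left a bit larger than the filesystem:

```shell
# Unmount and check the filesystem first (resize2fs refuses otherwise):
umount /var/lib/vz
e2fsck -f /dev/pve/data

# Shrink the filesystem to 100G:
resize2fs /dev/pve/data 100G

# Shrink the LV to 110G - safely above the new filesystem size:
lvreduce -L 110G /dev/pve/data

# Optionally let the filesystem grow to fill the reduced LV exactly:
resize2fs /dev/pve/data
mount /var/lib/vz
```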
Hello Bart,
Reading your post, it is not quite clear whether you upgraded from a 4.x version to the latest 4.4 or from 3.x.
However, something obviously went wrong during this process. What you can do to try to repair a failed upgrade:
apt-get -f install
and/or
dpkg --configure -a...
Hello LaxSlash1993,
You can probably install it, but not run it, because VirtualBox does not allow nested HW virtualization. I don't know which native OS you have - but why not consider the Linux Proxmox distro? It is a full Debian; you can install a desktop environment such as GNOME, KDE, LXDE etc. on it...
Hello,
Yes, indeed, Stéphane is right - in Proxmox VE 4.0 routing is deactivated by default (AFAIK in V3.4 it was active).
Note that the setting above has effect only after restart. In order to activate it immediately do
echo 1 > /proc/sys/net/ipv4/ip_forward
in addition
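To make forwarding survive a reboot, the usual place - assuming a stock Debian/Proxmox setup - is /etc/sysctl.conf or a file under /etc/sysctl.d/:

```
# /etc/sysctl.conf (or e.g. /etc/sysctl.d/99-forwarding.conf)
net.ipv4.ip_forward = 1
```

Running sysctl -p afterwards applies it without a reboot.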
Kind regards...
Hello woser,
In order to localize the problem, check:
Is backup storage BACKUP_P5 reliably accessible? If you use a network storage for backup try it with a local one instead.
Does it work using "suspend" or "stop" mode?
You are currently probably using the qcow2 format - did you try it with raw...
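One way to narrow it down - a sketch, assuming the guest is VM 100 and that a stock storage named "local" exists besides BACKUP_P5 - is to run vzdump by hand with each mode and storage and compare:

```shell
# Try each backup mode against the suspect storage...
vzdump 100 --mode snapshot --storage BACKUP_P5
vzdump 100 --mode suspend --storage BACKUP_P5

# ...and against a local storage for comparison:
vzdump 100 --mode stop --storage local
```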
Hello afrugone,
I would not put it exactly like this - LACP (you should rather use active-passive then) needs support on the switch side, and my experience is that you cannot trust all switches. Active-backup, on the other hand, is simple and does not need any special support.
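For reference, an active-backup bond in /etc/network/interfaces might look like this (interface names are examples; adjust to your NICs):

```
auto bond0
iface bond0 inet manual
    bond-slaves eth0 eth1
    bond-mode active-backup
    bond-miimon 100
```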
Understood -...
Hello Elgs Qian Chen
Do you mean to assign /64 addresses to the existing /48 (virtual) LAN? In this case you can use the same bridge/NICs.
You can assign as many addresses as you like to one NIC or bridge.
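A sketch of adding several /64 addresses from a /48 to one bridge - the documentation prefix 2001:db8:1::/48 and the bridge name vmbr0 are placeholders:

```shell
# Add two /64 addresses from the same /48 to one bridge:
ip -6 addr add 2001:db8:1:1::1/64 dev vmbr0
ip -6 addr add 2001:db8:1:2::1/64 dev vmbr0

# Check what is assigned:
ip -6 addr show dev vmbr0
```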
If you think I have misunderstood something, give more details about your request...
Hello godlike
Follow the steps described here: https://pve.proxmox.com/wiki/Proxmox_VE_2.0_Cluster#Remove_a_cluster_node
Remove all VMs in the Datacenter's "HA" tab.
After this, don't forget to click the "Activate" button in the "HA" tab.
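On the command line, the node removal from the wiki page above boils down to the following - run it on a remaining node after the node to be removed is powered off; "nodename" is a placeholder:

```shell
# Remove the (powered-off) node from the cluster:
pvecm delnode nodename

# Verify the remaining cluster members:
pvecm nodes
```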
Kind regards
Mr.Holmes
Hello afrugone,
Quite right!
Three physical LANs (and no VLANs)! Since you have 4 NICs you may bond two of them to one of these three.
The most critical is storage traffic; in order to keep it reliable I would use "active-backup".
Kind regards
Mr.Holmes
Hello SergeyM,
Networking from containers usually works without any problems.
If you post more details about your IP configuration, we can consider what in particular has to be improved.
To do so, call the following on the command line (assuming you use Proxmox VE 4.0; otherwise it would be...
Hello tycoonbob
Delete vmbr0 and assign the IPs directly to bond0.
Simply leave vmbr1 and vmbr2 without an IP address (make the changes in the GUI and restart the host).
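A sketch of the resulting /etc/network/interfaces (addresses and NIC names are placeholders):

```
auto bond0
iface bond0 inet static
    address 192.0.2.10
    netmask 255.255.255.0
    gateway 192.0.2.1
    bond-slaves eth0 eth1
    bond-mode active-backup
    bond-miimon 100

auto vmbr1
iface vmbr1 inet manual
    bridge-ports eth2
    bridge-stp off
    bridge-fd 0
```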
Kind regards
Mr.Holmes
Hello LeadGuit
I understand that each IP has its own physical NIC. The easiest way to implement the above is to bridge the NICs for IP2 and IP3 to different bridges, with no IP address on the host but IP2 and IP3 in the VMs.
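In /etc/network/interfaces this could be sketched as follows (NIC names are examples; the VMs then carry IP2 and IP3 themselves):

```
auto vmbr1
iface vmbr1 inet manual
    bridge-ports eth1
    bridge-stp off
    bridge-fd 0

auto vmbr2
iface vmbr2 inet manual
    bridge-ports eth2
    bridge-stp off
    bridge-fd 0
```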
Kind regards
Mr.Holmes
Hello endimion,
The node name is defined in /etc/hostname
In file /etc/hosts you should have a line like this (IP number and my... are examples):
192.168.99.99 mynodename.mydomain.tld mynodename pvelocalhost
Moreover, the GUI shows each subdirectory that can be found by
ls /etc/pve/nodes/...
Hello EvilMoe
Now a solution exists (I'd rather think it's just a workaround - the real problem is in the container template; however, it works):
http://forum.proxmox.com/threads/22770-fix-for-centos-7-container-networking
Kind regards
Mr.Holmes
Hello EvilMoe
There is obviously a bug in this template (missing access rights to the venet0 interface in the container?).
However: why not use veth? It works fine with this template too, and IMHO it should be preferred in any case.
Of course, OpenVZ should also correct the venet0 issue...
Hello PorxDevy,
In order to be sure that multicast is functioning correctly, also have a look at the packets sent/received on both nodes by
tcpdump -e -n -i eth0 | grep "239\.192\.181\.106"
Adapt the interface name and multicast IP in the above if needed!
Have a look whether all packets...
Hello ggsmarket
Obviously you upgraded recently but you did not perform a reboot afterwards. The system still runs with the old kernel. Reboot the Proxmox host!
If this does not resolve the problem, check the kernel module by
lsmod | grep kvm
What does your VM config file look like? Check...
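If the kvm modules turn out to be missing, it is also worth checking whether the CPU exposes hardware virtualization at all - a quick check (vmx = Intel, svm = AMD):

```shell
# Count virtualization flags in /proc/cpuinfo (0 means none exposed):
egrep -c '(vmx|svm)' /proc/cpuinfo

# If the flag is there but the module is not loaded (Intel example):
modprobe kvm_intel
```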
Hello totalimpact
For sure I prefer "method 1":
It is quite a clear structure and easy to handle. If a firewall is needed, use the Proxmox firewall settings!
The KVM built-in NAT: not recommended, since you cannot adjust anything.
Kind regards
Mr.Holmes