It will be as simple as any kernel installation on Debian :)
apt-get install kernelX.XX.XXX
I am not sure, though, whether the testing kernel will be available via the Proxmox repo or if you have to grab it from the webserver
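If it does show up in the repo, fetching it should look roughly like this (the package names and version below are my assumptions, not confirmed):

```shell
# Hypothetical package names - adjust to whatever the repo actually publishes
apt-get update
apt-cache search pve-kernel              # see which kernel packages the repo offers
apt-get install pve-kernel-2.6.35-1-pve  # hypothetical testing-kernel version
```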
Hi all,
today we upgraded three of our cluster nodes to Proxmox VE 1.8. After rebooting all three nodes we experienced the same issue we had after our last controlled shutdown, which was necessary because of work on the electrical network.
We have a NAS which provides two NFS exports:
One...
Ok, I posted in the OpenVZ support forum. If no one replies I will forward the question to the OpenVZ mailing list. It is really "funny" that someone provided an OpenVZ template which doesn't work in the first place. :P
I would care less if there were an OpenVZ template for OpenSuse with the...
Are there chances that a future Proxmox kernel will integrate DEVTMPFS as well?
Or does this feature always collide with CONFIG_VE?
The reason I am asking is the dependency of certain OpenVZ templates on DEVTMPFS.
Since the main target distribution for our server products is OpenSuse /...
This is not correct for Proxmox 1.8 if you migrate via the web interface. We have one "external" NIC / bridge connected to our test network plus one NIC / bond device connected to an internal storage network (4x1 GBit) with a dedicated switch. Proxmox only allows migration via the external NIC /...
Thanks for your quick reply! Yes, I know... I didn't request new features.
It is more that the "rest of the pack" has many of them, and hopefully Proxmox will have some of them soon too :D
Hi there,
while Proxmox provides a great feature set, I still have the feeling that it lacks several features which would make life easier for VM administrators. At least in typical QA environments.
From my perspective this includes (but is not limited to ;)):
VM Migration:
- Support for VM migration...
The issue was "solved" by restarting pvedaemon. It seems to me that some part of the cluster configuration was not synced correctly across all cluster nodes.
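For anyone hitting the same thing, a rough sketch of what "restarting pvedaemon" amounted to on our 1.x nodes (the cluster-status check afterwards is just my own sanity check, not required):

```shell
# Proxmox VE 1.x manages pvedaemon via its init script; run on each affected node
/etc/init.d/pvedaemon restart
# sanity check: list the cluster nodes and their sync state afterwards
pveca -l
```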
Ok, short update:
I have a VM up and running connected to the iSCSI target, but now I get the same error on certain hosts when I try to edit machine settings (e.g. adding another VHDD). This behaviour is limited to certain nodes within our cluster although open-iscsi was similarly installed...
Hi,
I do not want a VG for this iSCSI target. I want to use one iSCSI target dedicated to a separate machine as a block device. The strange thing is that I was able to create and start the VM on one cluster node while it still fails on all other cluster nodes, although all have the latest...
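In case it helps someone: a sketch of how one can compare the open-iscsi state on a working node versus a failing one (the portal IP and IQN below are made up, substitute your own):

```shell
# Hypothetical portal address and target name - replace with your own
iscsiadm -m discovery -t sendtargets -p 192.168.10.5
iscsiadm -m node -T iqn.2011-03.example:storage.target0 -p 192.168.10.5 --login
iscsiadm -m session   # a working node should show an established session here
```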
Proxmox gives error when creating VM with iSCSI target [solved]
Hi all,
I am currently trying to set up a VM with an iSCSI target as hard disk.
I can add the iSCSI target to storage without any issues, but as soon as I try to generate a KVM machine with the iSCSI target from the GUI I get the...
The NFS share is fully writable for root. The really strange thing is that a lot of other VM backups to the NFS share work fine, and then we experience the problem described.
When I start the same backup job manually the next morning it will most likely run without any problem. The...
Yes, more than enough. We are talking about a VM with something like 30 GB of HDDs and something like 3 TB available on the NFS share which is used for backup.
Hi all,
we have a strange problem when making backups of our KVM machines. We have a cluster consisting of mixed cluster nodes and a central SAN which stores the machine images via NFS.
When launching the backup with vzdump we get error messages like this:
proxmox-epr005:~# vzdump...
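The full command line is cut off above; for context, the job is roughly of this shape (the VMID and dump directory here are placeholders, not our real values):

```shell
# Placeholder VMID and path - the real values are in the truncated line above
vzdump --snapshot --dumpdir /mnt/pve/nfs-backup 101
```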