It will be as simple as any kernel installation on Debian :)
apt-get install kernelX.XX.XXX
I am not sure though whether the testing kernel will be available via the Proxmox repo or whether you have to grab it from the webserver.
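If it does land in the repo, a quick way to check is a plain apt search; Proxmox kernel packages normally follow the pve-kernel-* naming scheme (the exact package name here is my assumption):

  apt-get update
  apt-cache search pve-kernel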
Hi all,
today we upgraded three of our cluster nodes to Proxmox VE 1.8. After the reboot of all three nodes we experienced the same issue we had after our last controlled shutdown, which was necessary because of work that had to be done on the electrical network.
We have a NAS which provides two NFS exports:
One...
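Just for context, two exports like ours would typically look like this on the NAS side (paths and subnet are made up for illustration):

  # /etc/exports on the NAS
  /export/vm-images   192.168.100.0/24(rw,no_root_squash,sync)
  /export/vm-backup   192.168.100.0/24(rw,no_root_squash,sync)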
Ok, I posted in the OpenVZ support forum. If no one replies I will forward the question to the OpenVZ mailing list. It is really "funny" that someone provided an OpenVZ template which doesn't work in the first place. :P
I would care less if there were an OpenVZ template for OpenSuse with the...
Is there a chance that a future Proxmox kernel will integrate DEVTMPFS as well?
Or does this feature always collide with CONFIG_VE?
The reason why I am asking is the dependency of certain OpenVZ templates on DEVTMPFS.
Since the main target distribution for our server products is OpenSuse /...
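In case someone wants to verify this on their own node, the kernel build config shows it directly (standard Debian location for the config file, so this should apply to the Proxmox kernels too):

  grep DEVTMPFS /boot/config-$(uname -r)
  # CONFIG_DEVTMPFS=y means the feature is compiled in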
This is not correct for Proxmox 1.8 if you migrate via the web interface. We have one "external" NIC / bridge connected to our test network plus one NIC / bond device connected to an internal storage network (4x1GBit) with a dedicated switch. Proxmox only allows migration via the external NIC /...
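For illustration, this is roughly how the two networks are defined in /etc/network/interfaces on one of our nodes (interface names and addresses are placeholders):

  auto vmbr0
  iface vmbr0 inet static            # "external" bridge, the one Proxmox uses for migration
          address 10.0.0.11
          netmask 255.255.255.0
          bridge_ports eth0

  auto bond0
  iface bond0 inet static            # dedicated 4x1GBit storage network
          address 192.168.100.11
          netmask 255.255.255.0
          bond-slaves eth1 eth2 eth3 eth4
          bond-mode 802.3ad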
Thanks for your quick reply! Yes, I know...I didn't request new features.
It is more that the "rest of the pack" has many of them, and hopefully Proxmox will have some of them soon too :D
Hi there,
while Proxmox provides a great feature set, I still have the feeling that it lacks several features which would make life easier for VM administrators. At least in typical QA environments.
From my perspective this includes (but is not limited to ;)):
VM Migration:
- Support for VM migration...
The issue was "solved" by restarting pvedaemon. It seems to me that some part of the cluster configuration was not synced correctly across all cluster nodes.
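For anyone running into the same thing, the restart itself is just the standard init script on the affected node:

  /etc/init.d/pvedaemon restart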
Ok, short update:
I have a VM up and running connected to the iSCSI target, but now I get the same error on certain hosts when I try to edit machine settings (e.g. adding another VHDD). This behaviour is limited to certain nodes within our cluster although open-iscsi was similarly installed...
Hi,
I do not want a VG for this iSCSI target. I want to use one iSCSI target dedicated to a single machine as a block device. The strange thing is that I was able to create and start the VM on one cluster node while it still fails on all other cluster nodes, although all have the latest...
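Just to clarify what I mean by "as block device": after the open-iscsi login the LUN shows up as a plain device node on the host, which you can check with e.g.:

  ls -l /dev/disk/by-path/ | grep iscsi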
Proxmox gives error when creating VM with iSCSI target [solved]
Hi all,
I am currently trying to set up a VM with an iSCSI target as hard disk.
I can add the iSCSI target to storage without any issues, but as soon as I try to generate a KVM machine with the iSCSI target from the GUI I get the...
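For reference, the manual open-iscsi steps that make the target visible to a node look like this (portal IP and IQN are placeholders):

  iscsiadm -m discovery -t sendtargets -p 192.168.100.50
  iscsiadm -m node -T iqn.2011-01.com.example:vm01 -p 192.168.100.50 --login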
The NFS share is fully writable for root. The really strange thing is that a lot of other VM backups to the same NFS share work fine, and then we experience the problem described.
When I start the same backup job manually the next morning it will most likely run without any problem. The...
Yes, more than enough. We are talking about a VM with something like 30 GB of HDDs and something like 3 TB available on the NFS share which is used for backup.
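We double-checked with a plain df on the mounted share (the mount point below is just an example; Proxmox mounts NFS storage under /mnt/pve/<storage-id>):

  df -h /mnt/pve/nfs-backup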
Hi all,
we have a strange problem when making backups of our KVM machines. We have a cluster consisting of mixed cluster nodes and a central SAN for storing the images of the machines via NFS.
When launching the backup with vzdump we get error messages like this:
proxmox-epr005:~# vzdump...
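The invocation itself is the usual snapshot-mode call; schematically (VMID and dump directory are placeholders):

  vzdump --snapshot --dumpdir /mnt/pve/nfs-backup 101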
Yes, they run on the same kernel on the same host.
The new VM I set up yesterday is running on another host though, because I wanted to make sure that the host is not the problem.
Kernel host #1:
Linux proxmox-epr004 2.6.32-1-pve #1 SMP Fri Jan 15 11:37:39 CET 2010 x86_64 GNU/Linux
Kernel...
Thanks for replying!
Ok, let me be more specific:
1) The "other" Xen-converted VM runs fine with KVM/Proxmox with two cores.
2) After I changed the number of cores to 1 for the new machine and rebooted, the behaviour unfortunately didn't change. It was just a quick shot though, because I do not know...
Hi all,
we are currently experiencing a very strange behaviour with the combination of Win2k3 (Standard Edition) and Proxmox VE 1.5.
Our virtualisation cluster currently consists mainly of three IBM machines (8 cores / 32 GB RAM each / 2x500GB SAS drives locally) connected to a Thecus 8800...
Hey tom,
Acknowledged...my car will also not like it if the wheels are removed at full speed :)
I see your point, but I was more trying to pinpoint that the web frontend of the master should still work even when an attached storage device fails, unless it's the host HDD itself where Proxmox is...
Thanks for your reply! The major problem I see for now:
In order to access the cluster master web frontend again, you either have to power up your storage server or remove the storage server from your storage pool.
Since removing the storage is only possible from the cluster master, we run into...