/lib/modules/2.6.24/{build,source}
However, forget I said anything about that. It's easy enough to change.
The linux-2.6.24-openvz directory is only 46MB when cleaned, .git directory removed, tar'ed and bzip'ed.
Well that sounds easy enough. If that were the case I would suggest that...
Forget I said that part. For some reason I was thinking the path pointed to by /lib/modules/2.6.24/{build,source} was specified by the running kernel.
You do not reference the kernel source at all. Do we need the Debian source for a particular Debian kernel? Or do we need some vanilla kernel from kernel.org?
The README is not at all clear (understatement) and from it I cannot duplicate the exact kernel source used. Also, the binary kernel...
I don't think this person's request is unreasonable.
You are distributing a kernel licensed under the GPL. You are required to also provide the source. It doesn't matter if it's a vanilla Debian kernel or not. You still have to make available the source whenever you distribute GPLed code.
BTW...
Don't bother. Most wireless cards, including Broadcom cards, cannot see traffic for more than one MAC address. So even if you find 64-bit Windows XP drivers for the card that actually work with ndiswrapper, you won't be able to use it usefully with Proxmox VE. (You won't be able to successfully...
I should have mentioned that I don't think inetd is installed on Proxmox VE by default. So to use this you need to run
apt-get install openbsd-inetd
on the Proxmox VE server as root.
Each line added to the file represents a Proxmox VE virtual machine that you want to be able to connect to externally. In my example, VMs 101 through 108.
The first column is the port to listen on for a particular VM (the VM ID is in the penultimate column). The port is of course configurable.
The last column is...
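As a concrete illustration of the layout described above (the port number, VM ID, and password here are placeholders, not values from any real setup):

```
9101 stream tcp nowait root /usr/sbin/qm qm vncproxy 101 MyVncPassword
```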
Yes, that will work too if you really want to use SCSI instead of IDE. I seriously doubt there is any performance benefit to using SCSI, and I believe IDE is more stable in certain situations.
If your RAID array is hot-pluggable I would suggest that you use /dev/disk/by-id/<whatever> or...
(Assuming you are talking about a KVM VM.)
You can attach the RAID array to the Proxmox VE server as you would to any Linux box. Then edit the configuration file for your VM (/etc/qemu-server/xxx.conf) to include a line like this one:
ide1: /dev/disk/by-id/<ID of RAID array>
Of course if...
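To find a stable name for the array, you can list the persistent device names udev creates (a sketch; the exact entry name depends on your hardware):

```shell
# Persistent device names live under /dev/disk/by-id; pick the entry
# that matches your RAID array (the exact name varies by hardware)
ls -l /dev/disk/by-id/ 2>/dev/null || echo "no by-id entries found"
```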
I am not experiencing the DMA problem with IDE disks after long-term use on FreeBSD 6.2. I've had servers running for weeks with no issues, unless I reboot.
I am having the same issue on 32-bit FreeBSD 6.2.
Every time the system is rebooted the bootloader can't find the disk. Completely powering off the VM (rather than a simple reboot) then turning it back on works.
Perhaps a bug in KVM's BIOS implementation?
Proxmox VE doesn't like it if a disk image for a KVM virtual machine is not an exact (integral) number of megabytes.
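One way to avoid that is to create the raw image at an exact megabyte boundary in the first place; for example with GNU truncate (the file name here is just an illustration):

```shell
# Create a sparse raw disk image that is an exact number of
# megabytes (4096 MiB here); truncate's M suffix means mebibytes.
truncate -s 4096M vm-disk.raw
```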
/usr/share/perl5/PVE/Config.pm line 331 is
if ($line =~ m/^(\d+):([a-z]+):(\d+):(\S+):(\S+):(\S+):(\d+):(\d+):(\d+):(\d+):(\d+):(\d+):$/) {
and should be
if ($line =~...
I added this to my /etc/inetd.conf so that I can always connect to my VMs with an external viewer:
9001 stream tcp nowait root /usr/sbin/qm qm vncproxy 101 SecretPassword
9002 stream tcp nowait root /usr/sbin/qm qm vncproxy 102 SecretPassword
9003 stream tcp nowait root...