Would it be possible to have Proxmox manage custom invocations of KVM/QEMU? That is, Proxmox lists the system in the UI and can start/stop it and add the network bridges etc. as required, but the majority of the command line is entered by hand by the user (basically an extension of the args...
What you could do is mount the virtual filesystems read-only on the host through qemu-img or kpartx and scan them... Since this scanning would take place at the hypervisor level rather than on the potentially infected host, it should be more effective.
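As a minimal sketch of the scanning step (assuming the guest filesystem has already been mapped read-only via qemu-nbd/kpartx and mounted somewhere like /mnt/guest, and with an illustrative known-bad hash set - not a real antivirus engine):

```python
import hashlib
import os

def scan_tree(root, bad_sha256):
    """Walk a read-only mount of a guest filesystem and flag files whose
    SHA-256 digest appears in a known-bad set (illustrative sketch only)."""
    hits = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "rb") as f:
                    digest = hashlib.sha256(f.read()).hexdigest()
            except OSError:
                continue  # skip unreadable or special files
            if digest in bad_sha256:
                hits.append(path)
    return hits
```

In practice you would point a real scanner such as clamscan at the mount point; the value of doing it here is that the scanner runs outside the guest, so guest-resident malware cannot hide from it.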
I've had no luck using the new virtio SCSI support with Windows guests...
The driver loads and shows up as "RedHat Virtio SCSI Pass-through driver", but various problems occur when I try to use it.
If I use LVM-backed storage, the disk shows up inside the guest with the full capacity of my raw...
I have a heavily customised kernel which I use inside of KVM, based on the Gentoo hardened-sources (includes a number of additional security features)... The config is available at http://www.firenzee.com/config-4.0.5-hardened and if applied to a generic kernel source tree will just ignore the...
I guess the issue is that when migrating between nodes without shared storage (i.e. you have to migrate both the config and the data storage), where the target node has multiple storages added you cannot choose which storage to migrate the disks onto; I assume it just chooses the first available...
The 3.10.0 kernel seems to be missing the cciss driver (for older HP RAID controllers, P400 and earlier)... It has the new driver (hpsa) but not the older one, and the new driver doesn't support anything below the P410 card. Can this driver be added? It's still supported by the kernel, but is disabled.
Yes, that configuration will never work, because the assigned address and gateway are in different /64 blocks:
address is 2a02:238:f019:412::/64
gateway is 2a02:238:f019:401::/64
You either need to use a larger network size, or addresses in the same /64 range.
Ehm, if you add 2a02:238:f019:412:1:1:1:1/64 then you won't be able to reach your gateway of 2a02:238:f019:401:2a0:57ff:fe14:86f5, because it's not in the same /64; you would need a gateway in 2a02:238:f019:401:*
try:
ip addr add 2a02:238:f019:401::1/64 dev vmbr0
ping6...
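The /64 mismatch described above can be verified with a quick sketch using Python's ipaddress module (addresses taken from this thread):

```python
import ipaddress

# Address assigned in the guest vs. the provider's gateway (from the thread).
iface = ipaddress.ip_interface("2a02:238:f019:412:1:1:1:1/64")
gateway = ipaddress.ip_address("2a02:238:f019:401:2a0:57ff:fe14:86f5")

print(iface.network)             # 2a02:238:f019:412::/64
print(gateway in iface.network)  # False - the gateway is outside this /64

# Moving the host address into the gateway's /64 fixes reachability:
fixed = ipaddress.ip_interface("2a02:238:f019:401::1/64")
print(gateway in fixed.network)  # True
```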
You didn't add an IP, you added a route:
> ip -6 ro add 2a02:238:f019:412::/64 dev vmbr0
You never assigned an actual address to your interface, and you don't seem to have autoconf... You can't route traffic from the fe80:: addresses, as they're for link-local purposes only.
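That fe80:: addresses are link-local and hence unusable for routed traffic can be confirmed with the same ipaddress module (the address below is an illustrative EUI-64-style example, not one from the thread):

```python
import ipaddress

ll = ipaddress.ip_address("fe80::21a:2ff:fe14:86f5")  # illustrative link-local address
print(ll.is_link_local)  # True - valid only on the local segment
print(ll.is_global)      # False - not usable as a routed source/destination
```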
I seem to remember the original Proxmox feature used rsync to copy the disk images, which had considerable overhead because it had to:
1. copy the disk images
2. pause the VM
3. checksum the whole disk image on both sides in chunks to determine which parts had changed, then copy those
4. restart...
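The chunked-checksum step (3) can be sketched roughly as follows, with an illustrative 4 KiB chunk size (rsync's real delta algorithm is more sophisticated, using rolling checksums, but the idea is the same):

```python
import hashlib

def changed_chunks(old, new, chunk_size=4096):
    """Compare two disk images chunk by chunk and return the indices of
    chunks whose checksums differ - only those need to be re-copied."""
    changed = []
    length = max(len(old), len(new))
    for i in range(0, length, chunk_size):
        a = old[i:i + chunk_size]
        b = new[i:i + chunk_size]
        if hashlib.md5(a).digest() != hashlib.md5(b).digest():
            changed.append(i // chunk_size)
    return changed

# Only the second chunk differs, so only it would be transferred:
old = b"A" * 8192
new = b"A" * 4096 + b"B" * 4096
print(changed_chunks(old, new))  # [1]
```

The overhead the post describes comes from having to read and checksum the entire image on both sides while the VM is paused.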
Works very badly for me...
You have to invoke it manually from the CLI using the downloaded spiceproxy file, and once connected it's very slow - especially the mouse - and there are warning messages in the CLI about mouse acceleration not being available for this platform.
This is entirely the...
It appears that setting the console to a serial port fails unless I define a serial port in the config, which I have done manually by adding "serial0: socket"... Is there any way to add a serial port through the web interface?
I have several systems running Proxmox 3.0 with HP P400 cards...
I suggest adding the hwraid repository to apt in order to get the HP array configuration utility; see:
http://hwraid.le-vert.net
If you have two 10G servers, try running iperf between them.
Downloading from external mirror sites will rarely be that fast; most of them will be 100Mb or 1Gb at best, and could be heavily loaded by other users (your examples look like 100Mb).
Very useful, thanks for implementing this.
I was previously adding a serial port manually using args: and a bunch of minicom rc files (one per VM), but this had its drawbacks - e.g. cloning the image didn't change the socket name, etc...
Any ETA on when these changes will make their way to...
You can configure a serial console for KVM guests; I add the following to /etc/pve/qemu-server/VMID.conf:
args: -serial unix:/var/run/qemu-server/VMID.serial,server,nowait
And I have a number of corresponding minicom rc files:
root@proxmox:/etc/minicom# cat minirc.119
pu port...
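As an alternative to minicom, attaching to such a socket can be sketched with Python's socket module. The path below mirrors the args: line above with a concrete VM ID of 119 (matching the example rc file name); both are illustrative:

```python
import socket

def read_serial(path, nbytes=256):
    """Connect to a qemu serial UNIX socket and read whatever the guest
    has written so far (e.g. a console login banner)."""
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    try:
        s.connect(path)
        return s.recv(nbytes)
    finally:
        s.close()

# e.g. read_serial("/var/run/qemu-server/119.serial")
```

Because the socket path contains the VM ID, a tool like this avoids the cloning drawback mentioned elsewhere in the thread only if it derives the path from the VM's own config rather than hard-coding it.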
It would appear that Hyper-V is doing write caching in the posted benchmark... While this undoubtedly improves I/O performance significantly, it carries an extremely high risk of causing data corruption!
KVM 1.4 has a few new features: x-data-plane for improved virtio block performance, and the "q35" machine type, which is supposed to become the default in the future...
Is it possible/planned for proxmox to introduce support for these features?
In particular, q35 support is intended to be the...