Alright, so this is a bit more complicated.
[udo@veteris ~]$ curl -v -XPOST -k -H "$(<csrftoken)" -b "PVEAuthCookie=$(<csrftick)" https://rambler:8006/api2/json/nodes/rambler/qemu/133/status/start | jq '.'
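For completeness, here is a hedged sketch of the full two-step API flow this command belongs to, assuming the token files were produced by an earlier call to the /access/ticket endpoint (host, node and VMID come from the command above; the password is a placeholder). Notably, the .../status/start endpoint needs no POST payload, only the auth cookie and the CSRFPreventionToken header:

```shell
#!/bin/sh
PVE="https://rambler:8006/api2/json"

# 1. obtain an auth ticket plus CSRF token (password is a placeholder)
AUTH=$(curl -sk -d 'username=root@pam' -d 'password=PASSWORD' \
       "$PVE/access/ticket" || true)
TICKET=$(printf '%s' "$AUTH" | jq -r '.data.ticket')
CSRF=$(printf '%s' "$AUTH" | jq -r '.data.CSRFPreventionToken')

# 2. start the VM: a POST with no body, authenticated via the cookie and
#    the CSRFPreventionToken header
curl -sk -X POST \
     -b "PVEAuthCookie=$TICKET" \
     -H "CSRFPreventionToken: $CSRF" \
     "$PVE/nodes/rambler/qemu/133/status/start" || true
```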
Well, what kind of POST payload would I send to this endpoint? Anyhow, this is what happens when I do the same request using POST:
$ curl -X POST --insecure --cookie "$(<cookie)" https://rambler:8006/api2/json/nodes/rambler/qemu/133/status/start | jq '.'
Starting via the API didn't work either:
[udo@veteris ~]$ curl --insecure --cookie "$(<cookie)" https://rambler:8006/api2/json/nodes/rambler/qemu/133/status/start | jq '.'
Alright, I tried with Chromium in private mode (I usually use FF). The only thing you see in the console is some complaints about missing files. The errors appear right after logging in, so they are not related to clicking on "start" or anything like that.
Will try with the API in a second
glad to see I'm not the only one in this boat :)
As with Daniel, using root@pam to start the VMs doesn't work here either, but starting them via pvesh works.
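Since pvesh works where the GUI fails, here is a sketch of that CLI call for the VM from the curl examples in this thread (node "rambler" and VMID 133 are taken from those commands; the call is guarded with `command -v` so the line is harmless on machines without pvesh):

```shell
# pvesh maps HTTP verbs onto subcommands; "create" issues a POST, which is
# what the .../status/start endpoint expects
NODE=rambler
VMID=133
command -v pvesh >/dev/null && pvesh create "/nodes/$NODE/qemu/$VMID/status/start" || true
```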
And, completely unrelated: the forum upgrade from a couple of minutes ago messed up the posting icons in Firefox:
Hi,
I upgraded one of our clusters from 5.4 to 6.2 just yesterday and unfortunately cannot start VMs on iSCSI storage via the UI.
The logs don't show much; all I see there is "Error: start failed: QEMU exited with code -1".
Researching the forum, I read that errors like these are likely...
I'll do that, yes.
I'm still investigating whether nested Hyper-V is usable for my workload, however.
Unfortunately, my tests have not been very promising in terms of performance so far, at least not when using it for Docker on Windows.
So finally I've found a solution to get nested Hyper-V virtualization up and running on a virtualized Windows 10 Pro 1809 installation.
As it seems, one additional qemu CPU flag is required to hide the hypervisor from Windows:
* hypervisor: this needs to be turned off as a CPU flag, i.e. "-cpu...
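The flag list above is truncated, so as a rough sketch only: on Proxmox, a CPU flag override like this would typically end up in the VM's config file via the raw args: option. The flag names below are assumptions reconstructed from the surrounding posts, not the author's verified setting:

```
# /etc/pve/qemu-server/<vmid>.conf (sketch; flag names assumed)
# "-hypervisor" clears the CPUID hypervisor bit so Windows believes it runs
# on bare metal; "+vmx" exposes Intel VT-x to the guest to allow nesting
args: -cpu host,+vmx,-hypervisor
```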
I've since also tried to reinstall with as few traces of virtualization as possible (no virtio drivers etc.), but again to no avail.
The docs mention a "hidden" option for CPU
... but setting it to 1 doesn't change anything (except adding "kvm=off" to qemu's -cpu switch), so I am quite lost now ...
Thanks for the idea, but turning off KVM hardware virtualization leaves the VM taking some 10 minutes just to display the login screen and another 10 minutes to reach the ordinary desktop, so that's unusable.
In order to run Docker for Windows, I'm trying to set up nested virtualization for a Windows 10 guest on our up-to-date 5.3-11 installation.
I've followed the wiki describing the required steps https://pve.proxmox.com/wiki/Nested_Virtualization and
I've searched both the net and the forum, but...
Jumping into this thread a little bit late, I'm the "real" Udo Rader who developed the patch for LIO :)
@raku:
My company has deployed patched versions of Proxmox to a small number of our development servers and so far has not seen the performance degradation you describe.
Can be a number of...
@shantanu no worries, I've already overcome this issue by finding a suitable boot USB stick.
Those were my first attempts to get my feet wet with Proxmox, so installing a PXE server just to give Proxmox a test drive on one computer would have been a bit of overkill.
Hi,
If I see it right, Proxmox currently (5.1) does not really support ZFS over iSCSI on up-to-date versions of Linux. The "Linux" support for ZFS over iSCSI found in Proxmox today means IET, which has unfortunately been deprecated for quite a while.
I've looked into how IET support has been...
Hopefully there are easier ways to achieve this; unfortunately, none of them worked for me ...
The process I described actually looks more complicated than it is. I repeated it yesterday for a couple of servers, and if you know what to do, it doesn't take much more than 10 minutes per server.
As I have had to fight that beast as well and finally won the battle, I thought I'd share my findings.
First, as laid out previously, the easiest way to get it up and running at Hetzner is to ask the staff to put in a USB stick with Proxmox 5.1 on it and then access your server via a LARA...
Thanks for that code fragment.
It indeed looks like none of the five or six USB thumb drives I've played with have that flag set, so that's why the installation doesn't start.
Given the large number of repeated questions about why installations from USB sticks don't work, maybe removing...
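For anyone hitting the same wall, one way to inspect this, assuming the flag in question is the "removable" attribute the kernel exposes per block device (an assumption on my part; the code fragment referred to above is not quoted here):

```shell
# print each block device together with its kernel-reported "removable"
# flag (1 = removable, 0 = fixed disk)
for dev in /sys/block/*; do
  [ -e "$dev/removable" ] || continue
  printf '%s removable=%s\n' "${dev##*/}" "$(cat "$dev/removable")"
done
```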
Just one additional thing I noticed: after I transfer the ISO to the USB drive using whatever tool (dd, imagewriter; I even tried the Windows-based etcher.io tool mentioned in the admin guide), then unplug and replug the USB stick, dmesg shows me these lines:
[ 7351.886579] usb 1-2...