Search results

  1. Error? messages - Guest Rip

    reviving this old thread, just to check if the messages I am seeing have the same origin. After a recent update to the most up2date 6.4 version, I keep getting a lot of messages like the ones below in the journal: Sep 15 10:46:11 concordia kernel: kvm [11085]: vcpu3, guest rIP...
  2. [SOLVED] random reboots

    It was the power supply failing. While the server never runs anywhere near high load, occasional spikes were enough to trigger the failure.
  3. clusterwide startup order

    I'm trying to figure out how to have a cluster-wide startup order. Initially I thought the "Start and Shutdown Order" mentioned in "11.3.6. Automatic Start and Shutdown of Containers" of the PVE Admin guide was the way to go, but it turned out that this "Startup Order" is only taken into...
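
    For reference, the "Start and Shutdown Order" mentioned above is a per-guest option in the VM/CT configuration. A minimal sketch of where it lives, assuming an illustrative container ID and example values rather than anything from the thread:

      # mark container 101 for autostart, start it first, and wait 30s
      # before the next guest is started
      pct set 101 --onboot 1 --startup order=1,up=30

    The thread asks how to coordinate this kind of ordering across a whole cluster rather than a single node.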
  4. [SOLVED] random reboots

    final update: this has all turned out to be completely non-Proxmox related. Instead, my nice little home server died ungracefully from a hardware failure ... Sorry for the noise :)
  5. [SOLVED] random reboots

    it has just happened 5 times within a very short period, and 8 times in total today ... Dec 13 11:00:58 foundation kernel: Command line: BOOT_IMAGE=/vmlinuz-5.4.78-2-pve root=UUID=6ad2877d-c58a-4337-8968-b432410fb376 ro quiet intel_iommu=on Dec 13 11:00:58 foundation kernel: Linux version...
  6. [SOLVED] random reboots

    I only learned today that my home proxmox server reboots frequently. Sometimes it's sufficient to just start it to have the proxmox server go down and reboot. Unfortunately, I only activated the journal today, so I can't say how long this has been happening. After activating persistent logging...
  7. implications of qemu "cache=unsafe"

    Thx. What I am interested in is the "written to disk out of order and asynchronously" part. Obviously, at some point the pending data must be written, whether a guest's flush command gets ignored or not, for example if the cache is full or once the VM terminates. Which...
  8. implications of qemu "cache=unsafe"

    Like the title says, I am trying to understand the implications of using cache=unsafe for some VMs. Don't you worry, I did my homework and I am well aware of what the PVE documentation says about it (ie https://pve.proxmox.com/wiki/Performance_Tweaks), as well as numerous other sources (eg...
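
    For context, the cache mode under discussion is a per-disk option in the VM configuration. A minimal sketch, assuming an illustrative VMID and volume name not taken from the thread:

      # redefine virtual disk scsi0 with cache=unsafe: guest flush requests are
      # ignored and data may reach the physical disk out of order
      qm set 100 --scsi0 local-lvm:vm-100-disk-0,cache=unsafe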
  9. LSI SAS2308 SCSI controller: Unsupported SA: 0x12

    it runs on the original HP firmware in IT mode, nothing fancy here :) But as we use those controllers in a couple of servers, I noticed that the controller in question has a slightly outdated firmware. I'll update to the latest firmware version coming from HP and report back if that changes...
  10. LSI SAS2308 SCSI controller: Unsupported SA: 0x12

    yes, that's what I am using. The proxmox node is serving iSCSI requests using LIO, and to be even more precise, we're talking about ZFS over iSCSI.
  11. LSI SAS2308 SCSI controller: Unsupported SA: 0x12

    We're using one of our proxmox nodes as an iSCSI target. The node is equipped with an HP H220/LSI SAS2308 SCSI controller with a number of disks attached to it. During high load (particularly when cloning or when the backups run), the kernel ring buffer gets spammed with tons of "Unsupported...
  12. after upgrading from 5.x to 6.x, iSCSI VMs can only be started via commandline

    yes, your patch fixes the problem! Thanks for the swift correction, much appreciated!!
  13. after upgrading from 5.x to 6.x, iSCSI VMs can only be started via commandline

    alright, so this is a little bit more complicated. [udo@veteris ~]$ curl -v -XPOST -k -H "$(<csrftoken)" -b "PVEAuthCookie=$(<csrftick)" https://rambler:8006/api2/json//nodes/rambler/qemu/133/status/start | jq '.' % Total % Received % Xferd Average Speed Time Time Time...
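
    The command above is truncated; as a sketch, the fully authenticated form of such a start request is a POST carrying both the ticket cookie and the CSRF header (assuming, as in the excerpt, that "csrftoken" holds a complete "CSRFPreventionToken: <value>" header line and "csrftick" holds the ticket value):

      curl -k -X POST \
        -H "$(<csrftoken)" \
        -b "PVEAuthCookie=$(<csrftick)" \
        https://rambler:8006/api2/json/nodes/rambler/qemu/133/status/start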
  14. after upgrading from 5.x to 6.x, iSCSI VMs can only be started via commandline

    well, well, what kind of POST payload would I send to this endpoint? Anyhow, this is what happens when I do the same request using POST: $ curl -X --insecure --cookie "$(<cookie)" https://rambler:8006/api2/json/nodes/rambler/qemu/133/status/start | jq '.' % Total % Received % Xferd...
  15. after upgrading from 5.x to 6.x, iSCSI VMs can only be started via commandline

    starting via the API didn't work either: [udo@veteris ~]$ curl --insecure --cookie "$(<cookie)" https://rambler:8006/api2/json/nodes/rambler/qemu/133/status/start | jq '.' % Total % Received % Xferd Average Speed Time Time Time Current Dload...
  16. after upgrading from 5.x to 6.x, iSCSI VMs can only be started via commandline

    alright, I tried with chromium in private mode (I usually use FF). The only things you see in the console are some complaints about missing files. The errors appear right after logging in, so they are not related to clicking on "start" or something. Will try with the API in a second
  17. after upgrading from 5.x to 6.x, iSCSI VMs can only be started via commandline

    glad to see I'm not the only one in this boat :) As with daniel, using root@pam to start the VMs doesn't work here either, but starting them via pvesh works. And, completely unrelated, the forum upgrade from a couple of minutes ago messed up the posting icons in firefox:
  18. after upgrading from 5.x to 6.x, iSCSI VMs can only be started via commandline

    Hi, I've upgraded one of our clusters from 5.4 to 6.2 only yesterday and unfortunately cannot start VMs that use iSCSI storage via the UI. The logs don't show much; all I see there is Error: start failed: QEMU exited with code -1. Researching the forum, I read that errors like these are likely...
  19. [SOLVED] Windows 10 (1809): nested virtualization does not work

    I'll do that, yes. But I'm still investigating whether nested Hyper-V is usable for my workload. Unfortunately, my tests have not been very promising in terms of performance so far, at least not when using it for Docker on Windows.
  20. [SOLVED] Windows 10 (1809): nested virtualization does not work

    so finally I've found a solution to get nested Hyper-V virtualization up and running on a virtualized Windows 10 Pro 1809 installation. It seems one additional qemu cpu flag is required to hide the hypervisor from Windows: * hypervisor: this needs to be turned off as a cpu flag, ie "-cpu...
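
    The excerpt is cut off before the full flag. As a sketch of the idea (clearing the "hypervisor" CPUID bit so Windows no longer detects that it is virtualized), one hypothetical way to pass such a flag is the VM's args option; the VMID and exact flag placement are assumptions, not the poster's exact setup:

      # append a custom -cpu line that hides the hypervisor bit from the guest
      qm set 100 --args '-cpu host,-hypervisor'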