Search results

  1.

    Cluster crash on update from 6.1-8 to 6.2-10

    As we did not get the help we wished for, we decided to disable the HA manager. We removed all VM resources from the HA manager and also deleted the HA group. Now there is still one question: is it normal that the status is still "active" for the LRM, or do we need a further step to fully disable...
  2.

    Bulk migration parallel jobs vs. max-workers

    Short question about the meaning of two options: In the web interface under Cluster -> Options I have the option "Maximal Workers/bulk-action". From observation this value defines the maximum number of parallel bulk migrations. But in the interface for bulk migrations I have the option "Parallel...
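    For reference, the datacenter-wide limit from that dialog can be sketched in /etc/pve/datacenter.cfg (an assumption: `max_workers` is the config key behind the "Maximal Workers/bulk-action" field; the value 4 is arbitrary):

    ```
    # /etc/pve/datacenter.cfg -- sketch; max_workers caps parallel bulk-action tasks per node
    max_workers: 4
    ```

    The "Parallel jobs" field in the bulk-migration dialog would then be a per-invocation setting, while `max_workers` is the cluster-wide ceiling.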
  3.

    Installed qemu-server (max feature level for 5.0 is pve0) is too old to run machine type 'pc-i440fx-5.0+pve1'

    OK... so this is not possible with the web interface. Is there a list of "possible valid machine types" somewhere?
  4.

    Installed qemu-server (max feature level for 5.0 is pve0) is too old to run machine type 'pc-i440fx-5.0+pve1'

    But the machine type on both the new and the old node is only "Default (i440fx)". The only other option in the web interface is q35. So I don't know how to change it to something else, something more "compatible".
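    If the goal is to pin a specific machine version rather than the default, the VM config file accepts an explicit `machine` line that the web interface does not expose for i440fx versions (a sketch; the placeholder `<vmid>` and the version 4.1 are assumptions, not from the thread):

    ```
    # /etc/pve/qemu-server/<vmid>.conf -- hypothetical VM; pins an older i440fx machine version
    machine: pc-i440fx-4.1
    ```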
  5.

    Cluster crash on update from 6.1-8 to 6.2-10

    Hi, unfortunately the problem occurred again yesterday. This time, luckily, only on one node. After we had no problems at all from Friday to Monday, a node spontaneously rebooted again yesterday, 15 minutes and one intentional reboot after the update to 6.2-10. I don't understand...
  6.

    Installed qemu-server (max feature level for 5.0 is pve0) is too old to run machine type 'pc-i440fx-5.0+pve1'

    I have nearly the same problem now. I can't migrate a VM from a node running 6.2-10 (node7) to a node running 6.1-8 (node13). Same error message: TASK ERROR: Installed QEMU version '4.1.1' is too old to run machine type 'pc-i440fx-5.0+pve0', please upgrade node 'node13' Upgrading node13 is for...
  7.

    [SOLVED] UI Bug at CPU cores with 6.2-10

    The config file contains the right data:

        root@node5:~# cat /etc/pve/local/qemu-server/175.conf
        agent: 1
        bootdisk: scsi0
        cores: 20
        memory: 26624
        name: srv.example.com
        net0: virtio=D2:1B:17:ff:ff:ff,bridge=vmbr0,firewall=1
        numa: 0
        onboot: 1
        ostype: l26
        scsi0: HDD:vm-175-disk-0,size=600G
        scsihw...
  8.

    [SOLVED] UI Bug at CPU cores with 6.2-10

    After upgrading to 6.2-10, the hardware page of some VMs reports only 1 core for the affected VM, even though they are running with 20 cores. The unapplied change is not related to this issue, and the VM had 20 cores before too. There are affected and unaffected VMs on the same host. Is this a known...
  9.

    Cluster crash on update from 6.1-8 to 6.2-10

    I have just made the text a bit more precise, maybe that helps. Can you explain to me what is supposed to have occupied the free 7G (while I am installing an update)? Here are two excerpts from the monitoring. Total traffic of all Proxmox nodes: And here directly from the switches: The...
  10.

    Cluster crash on update from 6.1-8 to 6.2-10

    I admit that this is the most obvious explanation. Unfortunately, it is probably not the right one here. Whether the setup itself is wise or not is another question. The setup consists of a 10G network that is not even a quarter utilized. During the outage, and also before...
  11.

    Cluster crash on update from 6.1-8 to 6.2-10

    corosync:

        logging {
          debug: off
          to_syslog: yes
        }
        nodelist {
          node {
            name: node10
            nodeid: 6
            quorum_votes: 1
            ring0_addr: 2a0b:20c0:2000:60:42a6::8350
          }
          node {
            name: node11
            nodeid: 7
            quorum_votes: 1
            ring0_addr: 2a0b:20c0:2000:60:3efd::d6f4
          }
          node {...
  12.

    Cluster crash on update from 6.1-8 to 6.2-10

    Hey, yesterday I started to upgrade our cluster. Most of the 13 nodes were running 6.1-8, but two or three newer ones already used 6.2-10, as they were added in the last weeks or had already been upgraded. No problem so far. But yesterday I wanted to upgrade another node (node5), and after running...
  13.

    [SOLVED] Problem with CPU flags

    Ah, OK... then the docs were a little misleading in this case; "exact the same" sounds like it would be more comparable if you look at /proc/cpuinfo... I guess :)
  14.

    [SOLVED] Problem with CPU flags

    We have a small problem here with the CPU type option for our Proxmox VMs. We currently run PVE 6.1-8. The problem is that if we select "host" as CPU type, the flags aren't the same, but they should be. Is there an easy explanation for that? From the docs: > If you want an exact match, you can set...
  15.

    How to apply ceph.conf change on a running cluster

    I want to set the following options for all OSDs in a running cluster in a persistent way. How do I apply the new ceph.conf to the running cluster and all daemons? I'm not quite sure if I need to set it manually via some config command or if there is a way to deploy the ceph.conf change...
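    The persistent half of this can be sketched in the cluster-wide ceph.conf that PVE manages (a sketch; `osd_max_backfills` is only a placeholder option, not the one from the thread):

    ```
    # /etc/pve/ceph.conf -- sketch; read by OSD daemons on (re)start
    [osd]
    osd_max_backfills = 2
    ```

    For already-running daemons, `ceph tell osd.* injectargs '--osd_max_backfills=2'` can push the same value without a restart; injected values do not survive a daemon restart, so the file (or the newer `ceph config set osd ...` mechanism on recent Ceph releases) still needs to carry them.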
  16.

    [SOLVED] Configure vm shutdown timeout

    Thanks for the hint with the VM options. I didn't expect this option under "Start/Shutdown order"; I searched for a global option, or for an option that's named exactly like that. I guess there's no way to set a custom default for that?
  17.

    [SOLVED] Configure vm shutdown timeout

    So there is no way to set a default for the web interface? Most of our colleagues manage their VMs through it, so this would be quite nice.
  18.

    [SOLVED] Configure vm shutdown timeout

    After searching the forum I read that the hard timeout before a VM is killed is 3 min. In our setup this is by far not enough for some hosts. So my question is whether there is any option to increase this timeout (yes, shutting down the VM from the inside is an option too). Killed databases aren't that great.
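    The per-VM setting the replies point to can be sketched as a `startup` line in the VM config (a sketch; `<vmid>`, the order, and the 600 s value are arbitrary, and it is my understanding, not confirmed by the thread, that the GUI's "Shutdown timeout" field maps to the `down` parameter):

    ```
    # /etc/pve/qemu-server/<vmid>.conf -- hypothetical VM
    startup: order=1,down=600
    ```

    For a one-off shutdown, `qm shutdown <vmid> --timeout 600` also accepts an explicit timeout on the command line.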