Search results

  1. Ceph HDD pool shows different sizes

    Oh, this thread was in German. The Storage tab only shows the space that is still available, not the pool _size_. At least that is what I just cross-checked, and that is how it is on our setup.
  2. Ceph HDD pool shows different sizes

    I think your first screenshot is from the Storage tab? I checked this here: there you see the free space in the brackets. Look at the Ceph tab instead; it should show 176 TB, maybe.
  3. LAN network speeds are fine, but Proxmox and VM's have slow internet speeds.

    Hey, for TCP connections there are a lot of parameters that can cause this sort of trouble. I once observed through Wireshark that a server with problems did not scale the TCP window size correctly; the connection was therefore limited to 400 kbit/s. It was a typo in the sysctl settings for the tcp...
  4. LAN network speeds are fine, but Proxmox and VM's have slow internet speeds.

    Have you double-checked the MTU settings? Here (in Germany), on DSL the MTU is usually 1492, for example, instead of the 1500 used on local Ethernet.
  5. LAN network speeds are fine, but Proxmox and VM's have slow internet speeds.

    Hey, maybe you can test different settings for the offload features in the NIC driver. I have no concrete advice, but you often find hints about problems with offloading & virtualization.
  6. BIOS 440BX for Windows XP

    Hey, I use "normal XP" virtualized on Proxmox for legacy software as well. It runs out of the box and is stable. I tried the nested approach too; it works, but there is no need. Before Proxmox I used VirtualBox with a web GUI, phpvirtualbox IIRC. If you are using industry software and need USB or a good timing...
  7. [SOLVED] slow migrations

    Hey, I sometimes see the same behavior. I think it has to do with memory fragmentation on the target node.
  8. Proxmox USB hard disk mount error

    Honestly, I cannot really give any advice here that might not make things even worse. It also depends on what data is on the disk and how bad it would be to lose it. I would probably shut down the physical machine. Then unplug the disk. Then...
  9. Proxmox USB hard disk mount error

    To me it looks as if sometimes the hypervisor and sometimes the VM accessed the disk: USB does not match sda; "hangs for days"; passed through; it worked on another system. All hints that one thing or another may have gotten mixed up here, or was attempted.
  10. Proxmox USB hard disk mount error

    A really silly question, but is sda1 actually correct? Perhaps the filesystem was created directly on sda, without a partition? What does the partition table say, for example via sfdisk -l /dev/sda? Regards
  11. How to change a ProxMox system disk

    If you have an NFS share, Clonezilla can back up to it. Then swap the system SSD and restore from NFS with Clonezilla.
  12. Ceph acting up after host reinstall

    ceph osd df tree can also help. Maybe the OSDs are not distributed that evenly across the racks/hosts.
  13. Proxmox VE 7.1 released!

    virtio-scsi, Windows (10) & Linux guests, older and newer kernels. One very old machine with a 2.2 kernel tested, with IDE. But no virtio-blk and no SATA.
  14. Proxmox VE 7.1 released!

    Me :-) But because of the many messages here about this problem we have only migrated a few VMs to the new 7.1 so far; all of them without any errors.
  15. Ceph acting up after host reinstall

    It should be ceph.conf -> /etc/pve/ceph.conf, not ceph.conf -> ../ceph.conf. I believe I have already fallen into that trap myself once.
  16. [SOLVED] CEPH IOPS dropped by more than 50% after upgrade from Nautilus 14.2.22 to Octopus 15.2.15

    Hello, could it be a RAM bottleneck? I read somewhere that the default "per OSD buffer" was increased. Just a guess.
  17. CEPH Health warnings - how to resolve?

    I agree with itNGO and tom and don't understand your requirements. If it is only a test cluster, you can try playing with a single-host rule. That may fit your needs if you want to learn Ceph.
  18. ZFS pool lost after power outage

    Hey, have you checked whether PVE has already mounted the pool?
  19. Ceph min_size for large clusters

    Hi! Then you have only one copy of your data left in case of problems. min_size does not refer to monitors but to data copies, i.e. the nodes that serve data via OSDs. You must distinguish between the hypervisor cluster and the Ceph cluster / OSD nodes. If you are planning to lose more than one node at a time, then...
  20. Diagnosing slow ceph performance

    Maybe a silly question, but did you enable jumbo frames on your switch?
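A few of these results lend themselves to quick checks. For result 3 (the mis-typed TCP sysctl), a minimal sketch of how one might verify window scaling on a Linux host; the exact key that was mistyped in that thread is not shown in the snippet, so this only covers the standard setting and assumes overrides live in the usual sysctl locations:

    # Window scaling should report 1; a typo that disables it can cap
    # throughput on high-latency links, as described in result 3.
    sysctl net.ipv4.tcp_window_scaling

    # Review the TCP-related overrides for typos (assumption: overrides
    # live in /etc/sysctl.conf or /etc/sysctl.d/ as on a default Debian/PVE install).
    grep -r "net.ipv4.tcp" /etc/sysctl.conf /etc/sysctl.d/ 2>/dev/null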
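For result 10, a hedged sketch of the suggested partition-table check; /dev/sda is taken from that post and may differ on another system:

    # Print the partition table, as suggested in result 10
    sfdisk -l /dev/sda

    # If no partitions are listed, see whether a filesystem was created
    # directly on the whole disk instead of on /dev/sda1
    blkid /dev/sda /dev/sda1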
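For result 15, a small sketch for verifying the ceph.conf symlink on a Proxmox VE node; the assumption that the link lives at /etc/ceph/ceph.conf matches a standard PVE Ceph setup but is not stated in the snippet itself:

    # On a PVE node the link should resolve to the clustered config in /etc/pve
    ls -l /etc/ceph/ceph.conf
    readlink -f /etc/ceph/ceph.conf   # expected: /etc/pve/ceph.conf

    # If it points at ../ceph.conf instead, recreate it (back up first if unsure)
    ln -sfn /etc/pve/ceph.conf /etc/ceph/ceph.conf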
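And for result 20, a minimal end-to-end jumbo-frame check; the interface name and peer address are placeholders, and the 8972-byte payload assumes an MTU of 9000 minus the 20-byte IP and 8-byte ICMP headers:

    # Confirm the MTU configured on the storage interface (name is an example)
    ip link show dev eno1

    # Ping with the don't-fragment flag; if the switch does not pass jumbo
    # frames, this fails while a normal ping still works
    ping -M do -s 8972 -c 3 192.168.1.2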