Hi,
I've sorted everything out: it all works on the VM now, the web server runs without errors, and I can reach it from my notebook over the local network.
Thanks anyway for your help <3
I wrote a small shell script to detect the CPU's supported x86-64 micro-architecture level:
#!/usr/bin/env bash
# cputest.sh — Detect x86-64 micro-arch levels (v2/v3/v4) and show a tidy summary.
# Options:
# --no-color disable ANSI colors
# --full also print...
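Since the full script is cut off above, here is a minimal, simplified sketch of the detection logic it describes, deriving the micro-arch level from the flag lists in the x86-64 psABI (this stand-in is not the actual cputest.sh and omits its color/summary options):

```shell
#!/usr/bin/env bash
# Simplified sketch (not the full cputest.sh): derive the x86-64 micro-arch
# level from /proc/cpuinfo. Flag lists follow the x86-64 psABI levels.
flags=" $(grep -m1 '^flags' /proc/cpuinfo | cut -d: -f2) "
v2="cx16 lahf_lm popcnt pni sse4_1 sse4_2 ssse3"
v3="avx avx2 bmi1 bmi2 f16c fma abm movbe xsave"
v4="avx512f avx512bw avx512cd avx512dq avx512vl"
# Succeed only if every flag in the list appears in the cpuinfo flags line.
has_all() { for f in $1; do [[ "$flags" == *" $f "* ]] || return 1; done; }
level=1
if has_all "$v2"; then level=2
  if has_all "$v3"; then level=3
    if has_all "$v4"; then level=4; fi
  fi
fi
echo "x86-64-v$level"
```

Each level is a strict superset of the previous one, so the checks nest: a CPU is only tested for v3 once v2 has passed.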
Yes, the 7.8 GBit/s is fully saturated, i.e. a 10 GB model is in VRAM in about 1.5 s. But since Ollama evicts the model again after 5 minutes of inactivity, the effect is that on the first request the response is delayed by around...
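The 5-minute unload is just Ollama's default keep-alive and can be raised. `OLLAMA_KEEP_ALIVE` is a documented Ollama setting; the systemd drop-in below assumes the stock Linux service install, so adjust the unit name if yours differs:

```shell
# Raise Ollama's model keep-alive (default: 5 minutes) via a systemd override.
sudo systemctl edit ollama.service
#   [Service]
#   Environment="OLLAMA_KEEP_ALIVE=1h"   # a duration, or -1 to never unload
sudo systemctl restart ollama.service
```

With `-1` the model stays in VRAM indefinitely, which trades memory for consistently fast first responses.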
Hi Heracleos. I can confirm that this works in PVE 9.0.10.
This is the procedure I followed. Note, I am not using Ceph.
Before:
Backed up all VMs and containers in the cluster
Shut down all VMs and containers in the cluster
Checked pvecm status...
So weirdly, I re-copied my test VM and it got exactly the same throughput. It has 2 disks: a 1 GiB and a 50 GiB disk. With the original version the VM transfers in ~20 minutes, and with the modified importer I see the exact same...
Sorry for my English.
I also lose my motherboard's network card when I add a card to my small personal server. It seems to be disabled (no more LED); I think it's the same problem...
@t.lamprecht @Max Carrara
Is this a viable path to increasing the import speed? I'm not sure if there's been any public discussion of the limitations of the ESXi import tool.
Examples of creating and cloning a VM in my PVE server
# journalctl | grep create
Aug 08 23:08:20 pmx1 pvedaemon[3798182]: <root@pam> starting task UPID:pmx1:0002Fxxx:15AA4xxx:68966xxx:qmcreate:101:root@pam:
Aug 08 23:08:21 pmx1...
Hi everyone,
my current setup is: encrypted Proxmox 8 installation on Debian 12, one disk (RAID1), everything working OK.
Instead of upgrading to Proxmox 9, I'm considering a setup change:
new setup:
disk1 (hw raid1) - encrypted Proxmox 9 on...
It definitely smells like an MTU/fragmentation issue... but I started with defaults (when the problem first occurred), set everything I could find over to 1280 (problem still there) and now I've set it all back to defaults. No real improvement...
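One way to pin down where fragmentation breaks is a do-not-fragment ping sized exactly to the MTU under test. The payload must leave room for the 20-byte IPv4 header and 8-byte ICMP header; the target address below is a placeholder, not a host from this thread:

```shell
# Largest ICMP payload that fits a given MTU without fragmentation:
# MTU minus 20 bytes (IPv4 header) minus 8 bytes (ICMP header).
mtu=9000
payload=$((mtu - 28))   # 8972 for MTU 9000, 1472 for MTU 1500
echo "ping -M do -c 3 -s $payload 192.0.2.10"   # 192.0.2.10 is a placeholder
```

If the MTU-9000 probe fails while the MTU-1500 one succeeds, some hop on the path (a switch, bridge, or vSwitch) is not passing jumbo frames.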
Thanks for your input. I had the same issue where my backups would start and just halt at 0 after the recent upgrade to 8.14.4. I have been running around trying to figure out why they hang. I couldn't find any other reasons as to why my...
I had the exact same setup, and a recent update must have broken the MTU setting, because I have had mine set to 9000 for a while and my backups had never failed before. I set mine to 1500 and they're working great. Thanks for posting your resolution.
For what it's worth, I've found the problem. It's Synology - specifically, if you're running their Virtual Machine Manager (and a VM on it), it installs and enables Open vSwitch, and while you can set MTU=9000 on the interfaces of the NAS, the...
Hello,
I have a cluster of 3 nodes with Ceph, based on Minisforum MS-01 PCs. For the cluster and Ceph traffic I run a network ring over the Thunderbolt ports. That has worked without problems on v8 for a good year. Today...
Yes, this is expected with the PG autoscaler. With a mostly empty pool, Ceph starts low (often 32 PGs) and grows PGs as data increases unless you guide it with bulk, target_size_ratio/bytes, or a pg_num_min.
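As a sketch, those autoscaler hints look like this. The pool name `rbd` is a placeholder for your pool; the commands themselves are standard Ceph CLI, but check the autoscaler's current plan for your cluster before changing anything:

```shell
# Inspect what the autoscaler currently plans for each pool:
ceph osd pool autoscale-status

# Hint that the pool will eventually be large, so PGs are pre-split up front:
ceph osd pool set rbd bulk true

# Or state the expected share of cluster capacity, or an absolute size:
ceph osd pool set rbd target_size_ratio 0.5
ceph osd pool set rbd target_size_bytes 1T   # alternative to the ratio

# Or simply enforce a floor on the PG count:
ceph osd pool set rbd pg_num_min 64
```

With one of these hints set, the autoscaler sizes pg_num for the expected data rather than the current (nearly empty) usage.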