Important at this point:
ALL repositories must point to bookworm, including ceph.list in /etc/apt/sources.list.d.
Even if I don't want to change the Ceph release at all.
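A quick way to check for leftover bullseye entries, and the corresponding switch, could look like this (a sketch only; the sed line is commented out on purpose, and you should back up the list files first):

```shell
# List any repository entries still pointing at bullseye
# (should print nothing once everything is on bookworm)
grep -rn bullseye /etc/apt/sources.list /etc/apt/sources.list.d/ 2>/dev/null || true

# Switch all list files to bookworm in one go (uncomment after backing up):
# sed -i.bak 's/bullseye/bookworm/g' /etc/apt/sources.list /etc/apt/sources.list.d/*.list
```

The same substitution works for ceph.list, since the Ceph repositories also carry the Debian codename in their line.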
Thanks Falk, that puts my mind at ease!
All patches are applied / everything is up to date.
One server I still have to tackle separately, since its network interfaces are still named "ethN".
Then I can take this on calmly and at my leisure.
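For the server whose interfaces are still named "ethN", one common approach (my suggestion, not something from the thread) is to pin each name to the NIC's MAC address with a systemd .link file before upgrading, so a kernel or udev change cannot reorder the names. File name and MAC address below are placeholders:

```ini
# /etc/systemd/network/10-eth0.link  (placeholder name and MAC)
[Match]
MACAddress=aa:bb:cc:dd:ee:ff

[Link]
Name=eth0
```

One such file per NIC; the file must sort before the distribution's default 99-default.link to take effect.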
An existing, configured PBS (Proxmox Backup Server in...
To pick up the thread again:
I have a cluster with 10 hosts.
4 servers serve Ceph exclusively; 3 (very powerful) servers run VMs and additionally serve the same Ceph cluster; 3 servers run VMs exclusively.
For the upgrade 7 -> 8, no Ceph upgrade is planned for now (it...
Are these "crash files" of any importance? They are somewhat old. Yes, there seemed to be some problems, but the number of files does not increase.
Is it risky to simply delete them?
Hmm, there is an easier way. I had the same problem.
To do this (at least in Firefox), open the downloads list. There, right-click the virt-viewer file and select "Always open similar files". It is then usually already associated with virt-viewer.
Sorry for entering this thread in 2022.
Same problem with my vzdump for one VM.
Version 7.2, latest Proxmox updates. The VM is a Debian Bullseye with guest tools.
Suggestion: in my VM I mount another partition for /var/www. Might this cause the problem?
Suggestion: is there a limit for the...
Questions:
The latest version of PBS is installed; I can't find proxmox-file-recovery - not in the GUI (of the PBS? Do the PBS and PVE have to be in the same cluster?) nor as a command. And I can't find it in the repository....
Would it be on the PBS or the PVE cluster?
It seems I am testing / trying too many different things too often!
I was astonished to see a "full" backup starting again after doing things: move disk (with and without changing the on-disk format; local, NFS, Ceph, iSCSI), migrate, vzdump on a stopped VM, etc.
Fabian, thanks for your answers.
I...
Somewhat clearer - but further questions:
If I do a "move disk" in PVE, the on-disk format might change. When the next backup starts, would this again be a full backup?
What happens after migrating a VM to another host?
Probably depending on the on-disk format? qcow2 vs. raw?
I noticed a complete read of the VM if it is in raw format! (Here on Ceph.) This read takes a long time, even though there are only a few bytes to write. Dedup etc. looks fine, by the way.
Probably worth a new thread...
I've got a strange situation in my ceph-cluster.
Running 3 MONs, version 14.2.9.
I don't know where this pool comes from:
device_health_metrics
Health:
Reduced data availability: 1 pg inactive
pg 39.0 is stuck inactive for 506396.988991, current state unknown, last acting []
ceph health detail...
Update
Some performance tests:
pveperf
CPU BOGOMIPS: 47995.04
REGEX/SECOND: 3133624
HD SIZE: 7.07 GB (/dev/mapper/pve-root)
BUFFERED READS: 235.98 MB/sec
AVERAGE SEEK TIME: 0.10 ms
FSYNCS/SECOND: 322.44
DNS EXT: 90.98 ms
DNS INT: 0.49 ms
rados...
Hi folks,
given are 3 nodes:
each node 10 Gbit network
each node 8 enterprise spinners of 4 TB
each node 1 enterprise NVMe of 1 TB
each node 64 GB RAM
each node 4-core CPU -> 8 threads, up to 3.2 GHz
pveperf of the CPU:
CPU BOGOMIPS: 47999.28
REGEX/SECOND: 2721240
each node latest Proxmox, of course...
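For the layout above, the raw and usable Ceph capacity can be sketched quickly (assuming the default replication of size=3, which is my assumption, not stated in the post):

```shell
# Rough capacity math for the 3-node layout above
raw_tb=$((3 * 8 * 4))        # 3 nodes x 8 spinners x 4 TB each = 96 TB raw
usable_tb=$((raw_tb / 3))    # with 3x replication: 32 TB usable
echo "raw: ${raw_tb} TB  usable: ${usable_tb} TB"
```

In practice one would also stay well below full utilization (Ceph warns near its default fill ratios), so the realistically usable space is lower still.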