Please help!
Can anyone help me? The corrupt file is the first snapshot of this pool.
Can I clear the error if I delete all snapshots? The VM itself should then still keep working and the pool would come back online, right? The 2 disks are Samsung EVO 850s with 500 GB each.
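For what it's worth, a minimal sketch of that cleanup, assuming the pool is called rpool and using placeholder dataset and snapshot names:

zfs list -t snapshot -o name,used rpool/data/vm-100-disk-0
zfs destroy rpool/data/vm-100-disk-0@first%last   # the % syntax destroys a whole range of snapshots
zpool clear rpool
zpool scrub rpool   # the permanent-error list usually only empties after a clean scrub

Whether the VM keeps working depends on whether the damaged blocks are referenced only by the snapshot; if the live data touches them too, restoring from backup is the safer route.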
Hello chriskirsche!
chriskirsche said:
Did you also do write tests on your pool?
I would be keen to see those results. Maybe it has the same root cause.
Do you mean this?
on the HDD mirror, WD Red EFRX 4 TB:
root@pve:~# pveperf /datastore
CPU BOGOMIPS: 55998.56
REGEX/SECOND: 4131560...
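pveperf mainly measures fsyncs per second; an actual write test on the pool could look roughly like this (path and sizes are examples, and /dev/zero is used for simplicity even though ZFS compression can flatter the numbers):

dd if=/dev/zero of=/datastore/write_test.bin bs=1M count=4096 conv=fdatasync status=progress
rm /datastore/write_test.bin
# or, if fio is installed, a sequential write with a final fsync:
fio --name=seqwrite --directory=/datastore --rw=write --bs=1M --size=2G --end_fsync=1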
Hello mmenaz!
mmenaz said:
OK, seems that the "hardware" read speed is fine for the 4 TB WD.
I don't understand how the VM 100 and 301 disks are configured:
a) vm 100 should have all disks named
In my last post I wrote that I had corrected the VM disk setting. The VM is now called 301 and it is the...
Hello mmenaz! Thanks for your answer. I have added the information again.
I only have one VM running on Proxmox. The various disks (vm-100-disk and vm-301-disk) in the figure above were created because the data was copied from the slow hard disks. I have corrected the configuration. Instead of...
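For anyone reading along, a hypothetical example of such a correction (the storage ID virt_hdd and the disk numbers are assumptions, not taken from this thread):

zfs rename datastore/virt_hdd/vm-100-disk-0 datastore/virt_hdd/vm-301-disk-0   # align the zvol name with the VM ID
qm set 301 --scsi0 virt_hdd:vm-301-disk-0   # point the VM config at the renamed volume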
But this table shows which drives are suitable for ZFS: https://www.computerbase.de/2020-06/nas-festplatten-wd-red-plus-pro-smr/
SMR is the culprit, isn't it? But my hard drives are EFRX...
My receipt from 2019 says:
2x 4000GB WD Red WD40EFRX Intellipower 64MB 3.5" (8.9cm) SATA 6Gb/s
2x 2000GB...
@mmenaz
on the SSD:
root@pve:/# pveperf
CPU BOGOMIPS: 55998.56
REGEX/SECOND: 4106875
HD SIZE: 378.24 GB (rpool/ROOT/pve-1)
FSYNCS/SECOND: 227.63
DNS EXT: 38.21 ms
DNS INT: 20.47 ms (mtk.local)
zpool datastore on the WD Red:
root@pve:/# pveperf datastore...
@H4R0
root@pve:/# smartctl -a /dev/sdc
smartctl 7.1 2019-12-30 r5022 [x86_64-linux-5.4.78-2-pve] (local build)
Copyright (C) 2002-19, Bruce Allen, Christian Franke, www.smartmontools.org
=== START OF INFORMATION SECTION ===
Model Family: Western Digital Red
Device Model: WDC...
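Beyond the identity section, a self-test and the reallocation counters are usually more telling, for example:

smartctl -t long /dev/sdc        # runs inside the drive; takes several hours on a 4 TB disk
smartctl -l selftest /dev/sdc    # check the result once the test has finished
smartctl -A /dev/sdc | grep -Ei 'realloc|pending|uncorrect'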
I have a newly installed Proxmox server 6.3.3 with 2x WD Red 4 TB.
They previously ran in the same configuration as a mirror under PVE 6.1.2 for 12 months at good speed.
Now it is extremely slow. The backup of the VM to disk is at 52% after 10 hours. Previously the backup took at most 4...
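To narrow down a slowdown like this, the usual first checks are something along these lines (the device name in the last command is a placeholder):

zpool status -v datastore    # errors, degraded vdevs, running scrub/resilver
zpool list -v datastore      # capacity and fragmentation; a nearly full pool gets slow
zpool iostat -v datastore 5  # per-disk pattern; one ailing disk drags down the whole mirror
smartctl -a /dev/sdX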
So after trying several times, this was successful:
datastore = source
datastore2 = destination
send only the very first snapshot:
********************************
zfs send -v datastore/virt_hdd/vm-301-disk-1@snap1 | pv | zfs recv -dvF datastore2
(where snap1 is the very first snapshot taken...
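The remaining snapshots can then follow incrementally; in this sketch, snapN is a placeholder for the newest snapshot, and -I carries every intermediate snapshot in one stream:

zfs send -v -I datastore/virt_hdd/vm-301-disk-1@snap1 datastore/virt_hdd/vm-301-disk-1@snapN | pv | zfs recv -dv datastore2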
I have now done the following:
1.) shut down vm100
2.) disabled zfs-auto-snapshot for both zpools:
zfs set com.sun:auto-snapshot=false datastore
zfs set com.sun:auto-snapshot=false datastore2
3.) sent the zvol with all associated snapshots via send/recv to... (a sketch of such a replication send follows below)
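A replication stream does that in one step; a sketch, assuming the newest snapshot is called @latest:

zfs send -Rv datastore/virt_hdd/vm-100-disk-0@latest | pv | zfs recv -dvF datastore2   # -R includes all snapshots of the zvol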
Hello!
I have the following problem, or rather question:
Production system:
Proxmox v6.3 with zfs-auto-snapshots
Zpool: "datastore"
Z-Vol: "datastore/virt_hdd/vm-100-disk-0"
the associated snapshots via "zfs-auto-snapshot":
Now I have plugged in 2 HDDs as RAID1 and created a second zpool named...
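Such a mirrored pool is typically created along these lines (the device paths are placeholders):

zpool create -o ashift=12 datastore2 mirror /dev/disk/by-id/<disk1> /dev/disk/by-id/<disk2>
zpool status datastore2   # should show both disks as a single mirror vdev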
Yes, that makes sense. I have not tested it yet, but it seems logical to me. A pity; I should have chosen the same processor when buying the second system...
Hello sysfy323!
I copied the virtual hard disk with the activated Windows 2008R2 from the zvol on PVE1 to machine PVE2 again, as a *.raw file via the dd command. There I entered this *.raw file into the same *.conf as on PVE1 (with cpu=kvm64). Unfortunately, after starting the VM normally...
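The dd step described there would look roughly like this (the target path is an assumption):

dd if=/dev/zvol/datastore/virt_hdd/vm-100-disk-0 of=/mnt/transfer/vm-100-disk-0.raw bs=1M status=progress

Instead of editing the *.conf by hand on PVE2, qm importdisk <vmid> /mnt/transfer/vm-100-disk-0.raw <storage> would attach the image through the Proxmox tooling (VM ID and storage ID are placeholders).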