Search results

  1. Some Newbie questions....!

    Hi Michael, it depends on the SSDs. ZFS on Linux is very flexible, but not really fast... with SSD-only raids it's OK, but with HDDs it depends on your I/O workload. An SSD or NVMe for journaling and cache can be very helpful. But use the right SSD (enterprise grade - look up which SSD is usable for...
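
    The snippet is truncated; as a rough sketch of what adding such log and cache devices looks like (pool name "tank" and the device paths are placeholders, not from the post):

      # Add a mirrored SLOG (sync-write log) and an L2ARC cache device to pool "tank"
      zpool add tank log mirror /dev/nvme0n1p1 /dev/nvme1n1p1
      zpool add tank cache /dev/nvme0n1p2
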
  2. Separate backup node – storage type/system considerations

    Hi, that is only partly a backup... But you could use znapzend by Tobias Oetiker: https://github.com/oetiker/znapzend With it you can create a plan for which snapshots should exist on the destination (not on the source). That way you only need (plenty of) space on the...
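
    A sketch of such a plan, following znapzend's documented znapzendzetup syntax (dataset names, backup host, and retention schedules are placeholders):

      # Keep hourly snapshots for a week locally, but a much longer history on the backup host
      znapzendzetup create --recursive --tsformat='%Y-%m-%d-%H%M%S' \
        SRC '7d=>1h' tank/vmdata \
        DST:a '7d=>1h,30d=>1d,1y=>1w' root@backuphost:backup/vmdata
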
  3. Trouble with bnx2 after upgrade to PVE 6 due to config issue inside VM

    Hi Spirit, I use OVS for the network and it was a double VLAN tag. To be safe I will try the bnx option. Udo
  4. Trouble with bnx2 after upgrade to PVE 6 due to config issue inside VM

    Hi Spirit, since then I know not to use VLAN tagging on an already tagged VM NIC - so the issue doesn't occur (but if it gets fixed, that will be much better!). Udo
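
    For context, the per-NIC tag that can collide with a tag already set on the OVS bridge port is configured like this (VM ID and VLAN are illustrative, not from the thread):

      # Setting tag= here while the bridge port also tags the traffic yields a double VLAN tag
      qm set 123 -net0 virtio,bridge=vmbr0,tag=100
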
  5. Error on upgrade of Proxmox from version 5.4-13 to 6.x

    Hi, if this isn't a cluster, the pvecm output is normal. What is the output of pve5to6? Udo
  6. lvm/dmsetup nightmares

    Hi, if you are sure that the device is not in use, you can follow this guide I wrote for myself:

      dmsetup info /dev/sata/vm-210-disk-1
      Name:              sata-vm--210--disk--1
      State:             ACTIVE
      Read Ahead:        256
      Tables present:    LIVE
      Open count:        6
      Event number:      0
      Major...
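
    The guide is cut off here; the usual continuation of such a cleanup, assuming the mapping is confirmed unused (Open count at 0), would be along these lines:

      # Check what holds the mapping, then remove the stale device-mapper entry
      dmsetup deps /dev/sata/vm-210-disk-1
      dmsetup remove sata-vm--210--disk--1
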
  7. [SOLVED] How to add a VM to a pool with the API?

    Hi Chris, thanks, works like a charm. Udo
  8. [SOLVED] How to add a VM to a pool with the API?

    Hi, I simply want to add a newly created VM to a pool with pvesh (like "pvesh add pools/Dev -members 123"). OK, pvesh doesn't know add - but my attempts with set were not successful either. What is the right syntax? Udo
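
    Chris's answer is not part of the snippet; given that pool membership goes through PUT /pools/{poolid} with a vms parameter in the PVE API, the working call was presumably along these lines:

      # Add VM 123 to the existing pool "Dev"
      pvesh set /pools/Dev -vms 123
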
  9. Virtual disk very slow

    Hi, die "bloße Aussage" von LnxBil kommt nach meiner Meinung daher, weil "messungen" mit dd nicht für alle Storage-Type vergleichbar sind. So wird bei zfs z.b. komprimiert - und mit dd aus /dev/zero auf ein zfs-Filesystem zu schreiben, bringt zwar tolle Werte, aber die haben nichts mit der...
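
    A sketch of a benchmark that sidesteps the compression pitfall, using fio with incompressible random buffers (file path and sizes are illustrative):

      # Sequential write of random (incompressible) data, fsynced at the end
      fio --name=seqwrite --filename=/tank/testfile --rw=write --bs=1M \
          --size=4G --refill_buffers --end_fsync=1
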
  10. Virtual disk very slow

    Hi, I would guess that it is either the USB 3 performance or the RAID performance itself. And 5400 rpm drives in a RAID 1 are pretty much the worst case... Udo
  11. Performance test (ZFS) between PVE 5.4 and PVE 6.0

    Hi, @guletz: I will try different volblocksizes later. With the new kernel the test takes 40m27.5s and the load looks much better.

      pveversion -v
      proxmox-ve: 6.0-2 (running kernel: 5.0.21-1-pve)
      pve-manager: 6.0-6 (running version: 6.0-6/c71f879f)
      pve-kernel-5.0: 6.0-7
      pve-kernel-helper: 6.0-7...
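
    For the volblocksize experiments mentioned above, note that the property is fixed at creation time; a sketch with placeholder dataset names:

      # volblocksize cannot be changed afterwards - create a new zvol and migrate the data
      zfs get volblocksize rpool/data/vm-100-disk-0
      zfs create -V 32G -o volblocksize=16k rpool/data/vm-100-disk-1
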
  12. Performance test (ZFS) between PVE 5.4 and PVE 6.0

    Hi Stefan, thanks for the feedback. I will run the test again, but I need some time. Udo
  13. High (100%) ZVOL CPU usage when doing VM import from backup.

    Hi, if I understand you right, you read the compressed data from the same ZFS pool (inside a VM) to which you write the output? If you look at the throughput: (14001635328 bytes read in 75 sec minus 7000817664 bytes read in 31 sec) = 7000817664 bytes in 44 sec, and 7000817664 / 44 / (1024*1024) gives about 151MB/s. (this is read...
  14. Installation failure: unable to create bios_boot partition

    Hi, I've installed PVE 6 successfully on a 6 * 6TB 4Kn HDD raidz2 with the help of a single disk (with 512b sector size) and manual reconfiguration (takes some time). If you want, I can post the HowTo... Udo
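
    The HowTo itself is not included in the snippet; the BIOS boot partition step it alludes to is typically done along these lines (device path is a placeholder):

      # 1 MiB BIOS boot partition (type EF02) at the start of the disk, rest for ZFS (BF01)
      sgdisk -a1 -n1:24K:+1000K -t1:EF02 /dev/disk/by-id/ata-EXAMPLE
      sgdisk     -n2:0:0        -t2:BF01 /dev/disk/by-id/ata-EXAMPLE
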
  15. Installation failure: unable to create bios_boot partition

    Hi Davide, if you look here: https://www.seagate.com/enterprise-storage/exos-drives/exos-e-drives/exos-7e8/ you see some models with a 4k sector size and some with emulated 512 bytes. I assume you have a 4k model too. Regarding raid-0: are your backups not important? If one disk dies - all backups...
  16. Installation failure: unable to create bios_boot partition

    Hi Davide, I have the same issue with different 6TB disks. It looks like the PVE installer can't handle native 4k sectors correctly! Do your disks have 4k sectors (4Kn) or emulated 512b (512e)? Udo BTW, ZFS raid 0 is a bad idea.
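
    One way to check which variant a disk reports, with a placeholder device path:

      # Logical vs. physical sector size: 512/4096 means 512e, 4096/4096 means 4Kn
      blockdev --getss --getpbsz /dev/sdb
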
  17. Install Ceph nodes (not Proxmox)

    Hi, sorry - you must be measuring caching! With a 5-OSD HDD cluster on a 1Gb network you will never ever get 80-100MB/s throughput in a VM / single thread. OK, with replica 3 and 5 OSDs you could get 100/(5/3) = 60MB/s inside a VM at 100MB/s per OSD - but I don't think you will reach such values! Try...
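
    The suggestion is truncated; a common way to measure raw cluster throughput below the VM layer is rados bench (pool name is a placeholder):

      # 60-second write benchmark, keeping the objects for a follow-up sequential read test
      rados bench -p testpool 60 write --no-cleanup
      rados bench -p testpool 60 seq
      rados -p testpool cleanup
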
  18. Install Ceph nodes (not Proxmox)

    Hi, documentation: https://docs.ceph.com/docs/master/install/manual-deployment/#adding-osds I would NOT run ceph-mons outside the PVE cluster, only OSD nodes (which is fine if you have enough resources). Especially the ceph-mons should all run the same (and the newest) version - OSDs are not so critical...
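
    On the external OSD node, the linked manual-deployment steps boil down to something like this (device path is a placeholder; the node needs the cluster's ceph.conf and bootstrap-osd keyring):

      # Create a BlueStore OSD on a raw device and register it with the cluster
      ceph-volume lvm create --data /dev/sdb
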
  19. Install Ceph nodes (not Proxmox)

    Hi, if you look here: https://docs.ceph.com/docs/jewel/start/hardware-recommendations/ you see that you need a minimum of 3GB of free RAM per OSD. And newer versions need more RAM - see also here: https://unix.stackexchange.com/questions/448801/ceph-luminous-osd-memory-usage?rq=1 Second - Linux uses...
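
    Since Luminous/Mimic the per-OSD memory budget is tunable; a sketch of capping it (the 4 GiB value is illustrative, not a recommendation from the post):

      # Target resident memory per OSD daemon, in bytes (here 4 GiB)
      ceph config set osd osd_memory_target 4294967296
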
  20. Install Ceph nodes (not Proxmox)

    Hi, you can simply install an OS with the same Ceph version and use the same ceph.conf as the PVE cluster... BUT I would not recommend running any Ceph services with 1GB!! You will get a lot of trouble (I don't know your testing setup, but I'm quite sure you will run into trouble). Udo