Search results

  1. ssd how big?

    hello, I have 2x4TB WD Gold in zfs raid1 and 64GB RAM, and I want to buy an SSD for log and cache. How big should it be? Must it have the same capacity as rpool (4TB)? (a sketch for this follows the list)
  2. how to import another rpool zpool?

    ok, I have imported the rpool from the old HDDs as rpool1 and now I have...
    root@pve-klenova:~# zfs list
    NAME               USED  AVAIL  REFER  MOUNTPOINT
    rpool              115G  3,40T   104K  /rpool
    rpool/ROOT        8,84G  3,40T    96K  /rpool/ROOT
    rpool/ROOT/pve-1...
  3. how to import another rpool zpool?

    I have installed Proxmox on new HDDs, 2x4TB zfs raid1... after that I connected the old 4x1TB HDDs (zfs raid10) and booted... during boot I have to manually import rpool (the new 2x4TB HDDs) by ID, and now I want to somehow import the VMs from the old 4x1TB HDDs... how to do it, please? (see the import sketch after this list)
  4. [SOLVED] zfs mount problems after filling filesystem

    Can somebody describe step by step how to do it? I cannot boot into emergency mode with the Proxmox install USB... it is unable to find the boot disk automatically.
  5. upgrade from 5.0 to 5.1 failed please help

    root@pve-klenova:~# systemctl status zfs-mount.service
    ● zfs-mount.service - Mount ZFS filesystems
       Loaded: loaded (/lib/systemd/system/zfs-mount.service; enabled; vendor preset: enabled)
       Active: failed (Result: exit-code) since Sat 2018-05-05 13:43:14 CEST; 25s ago
         Docs: man:zfs(8)...
  6. upgrade from 5.0 to 5.1 failed please help

    hello, I got a serious problem when upgrading from 5.0 to 5.1... I have (see the sources.list sketch after this list):
    root@pve-klenova:~# cat /etc/apt/sources.list
    #deb http://ftp.sk.debian.org/debian stretch main contrib
    deb http://download.proxmox.com/debian/pve stretch pve-no-subscription
    # security updates
    deb http://security.debian.org...
  7. [SOLVED] Proxmox Upgrade Failed from 5.0 to 5.1

    hello, I got a serious problem when upgrading from 5.0 to 5.1... I have:
    root@pve-klenova:~# cat /etc/apt/sources.list
    #deb http://ftp.sk.debian.org/debian stretch main contrib
    deb http://download.proxmox.com/debian/pve stretch pve-no-subscription
    # security updates
    deb http://security.debian.org...
  8. log off from web management remotely

    I was working on Proxmox from a PC with shared access (different people have access to it), and I forgot to log off from the Proxmox web GUI... now I am at a different place and I am afraid that someone could harm my VMs... how can I log that PC off from my Proxmox? I am logged in from another place... thank you...
  9. zfs raid 10 to zfs raid 1

    ok, so my only option is to reinstall? Is it reasonable to have system and storage on different zfs pools? I can have the system on rpool on 2x1TB WD Red SATA2 mirrored and the VM data/storage on the /vmdata pool, which is 2x4TB WD Gold SATA3... or I can have system and storage on the 2x4TB WD Gold drives and...
  10. zfs raid 10 to zfs raid 1

    I think I don't understand...
  11. zfs raid 10 to zfs raid 1

    but if I simply detach drives from the raid10, at the end I will have 2x HDD in striped mode, or am I wrong?
  12. vms lags during vm cloning

    can you please advise me a good 4-port SATA3 PCIe 2.0 controller? or post a link please, thank you...
  13. zfs raid 10 to zfs raid 1

    so simply
    zpool offline rpool /dev/sda2
    zpool offline rpool /dev/sdb2
    Please advise me, I can't make a mistake... I need to detach the drives in mirror-0, but both drives in there are bootable... how do I make the other disk bootable too? (see the bootloader sketch after this list)
  14. zfs raid 10 to zfs raid 1

    edit: the other zfs pool, vmdata, with 2x4TB WD GOLD is empty, so I can use it as backup, but HOW? :) please... should I add a new storage in Datacenter, and after that how can I make a backup before the raid10-to-raid1 conversion (see the backup sketch after this list)... or I can install a fresh new Proxmox on the 2x4TB WD Gold drives and...
  15. zfs raid 10 to zfs raid 1

    yes, I need to convert raid 10 to raid 1, and after that I will remove two of the four drives from the server permanently...
  16. zfs raid 10 to zfs raid 1

    I have 4x 1TB WD RED drives in a raid 10 zfs pool:
    root@pve-klenova:~# zpool status
      pool: rpool
     state: ONLINE
      scan: resilvered 29.2G in 1h16m with 0 errors on Sun Dec 10 15:01:57 2017
    config:
            NAME        STATE     READ WRITE CKSUM
            rpool...
  17. Poor ZFS performance On Supermicro vs random ASUS board

    copy from rpool to vmdata inside a VM is about 13MB/s, max 30MB/s
    copy from rpool to another partition on rpool is about 30MB/s
    copy from vmdata to another partition on vmdata is about 57MB/s
  18. Poor ZFS performance On Supermicro vs random ASUS board

    how can I run a fio test on the vmdata zfs pool? (see the fio sketch after this list) I have:
    root@pve-klenova:~# cat testdisk
    # This job file tries to mimic the Intel IOMeter File Server Access Pattern
    [global]
    description=Emulation of Intel IOmeter File Server Access Pattern
    [iometer]...
  19. Poor ZFS performance On Supermicro vs random ASUS board

    ok, I bought another 2x4TB WD GOLD drives and made a mirror pool, and performance is still totally bad:
    root@pve-klenova:~# pveperf /vmdata
    CPU BOGOMIPS:      38400.00
    REGEX/SECOND:      429228
    HD SIZE:           3596.00 GB (vmdata)
    FSYNCS/SECOND:     119.18
    DNS EXT:           62.68 ms
    DNS INT...
  20. vms lags during vm cloning

    I have removed the two SATA2 500GB drives, so only the 4x 1TB WD RED remain, and pveperf was absolutely the same...
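
Command sketches

For result 1 (SSD for log and cache): a ZFS SLOG only needs to hold a few seconds of synchronous writes, and an L2ARC should stay modest relative to RAM, so neither has to match the 4TB pool. A minimal sketch, assuming the SSD shows up as /dev/disk/by-id/ata-EXAMPLE-SSD (a hypothetical name) and using example partition sizes:

    # Partition the SSD: a small log partition, the rest as cache (sizes are only examples).
    sgdisk -n1:0:+16G -t1:bf01 -n2:0:0 -t2:bf01 /dev/disk/by-id/ata-EXAMPLE-SSD

    # Attach the partitions to the existing pool.
    zpool add rpool log   /dev/disk/by-id/ata-EXAMPLE-SSD-part1
    zpool add rpool cache /dev/disk/by-id/ata-EXAMPLE-SSD-part2

    # The pool should now show "logs" and "cache" sections.
    zpool status rpool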
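
For results 2 and 3 (importing the old rpool next to the new one): a minimal sketch, assuming the old pool is renamed to rpool1 on import (as in the thread) and that a VM disk is a zvol with a hypothetical name like rpool1/data/vm-100-disk-1.

    # Both pools are named "rpool", so list importable pools and import the old one
    # by its numeric ID under a new name.
    zpool import
    zpool import -f <numeric-id-of-old-pool> rpool1

    # Copy one VM disk from the old pool to the new one (dataset names are hypothetical);
    # zfs send needs a snapshot as its source.
    zfs snapshot rpool1/data/vm-100-disk-1@move
    zfs send rpool1/data/vm-100-disk-1@move | zfs receive rpool/data/vm-100-disk-1

    # Then point the disk line in the VM config (/etc/pve/qemu-server/100.conf)
    # at the storage backed by the new pool.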
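
For results 5-7 (failed 5.0 to 5.1 upgrade): a sketch of a typical no-subscription repository setup for Proxmox VE 5.x on Debian stretch; the exact Debian mirror is an example and can differ.

    # /etc/apt/sources.list
    deb http://ftp.debian.org/debian stretch main contrib
    deb http://download.proxmox.com/debian/pve stretch pve-no-subscription
    # security updates
    deb http://security.debian.org stretch/updates main contrib

    # Then upgrade with a full dist-upgrade, not a plain "apt upgrade".
    apt update
    apt dist-upgrade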
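
For result 13 (making the other disk bootable): on a legacy-BIOS Proxmox VE 5.x ZFS install, the installer usually gives every pool member the same partition layout, so a disk that is already part of the pool generally only needs GRUB written to it. A minimal sketch with a hypothetical device name:

    # /dev/sdc is assumed to be a pool member that still lacks a bootloader.
    grub-install /dev/sdc
    update-grub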
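
For result 14 (backing up before the raid10-to-raid1 change): a minimal sketch that registers a dump location on the empty vmdata pool and runs vzdump into it; the dataset and storage names are made up for the example.

    # Create a dataset for dumps and add it as a directory storage for backups.
    zfs create vmdata/backup
    pvesm add dir vmdata-backup --path /vmdata/backup --content backup

    # Back up one guest (VMID 100 is only an example) ...
    vzdump 100 --storage vmdata-backup --mode snapshot --compress lzo

    # ... or all guests at once.
    vzdump --all --storage vmdata-backup --mode snapshot --compress lzo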
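
For result 18 (running fio against the vmdata pool): one option (an assumption about the job file) is to add directory= and size= lines to the [global] section of the existing "testdisk" job so fio creates its test files on the pool; alternatively, run an ad-hoc job as sketched below with example parameters (O_DIRECT is left out because older ZFS releases do not support it).

    # Ad-hoc 80/20 random read/write test on the pool mountpoint.
    fio --name=vmdata-test --directory=/vmdata --size=4G --bs=4k \
        --rw=randrw --rwmixread=80 --iodepth=32 --ioengine=libaio \
        --runtime=60 --time_based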
