Search results

  1. Storage format benchmarks

    That is correct, and you are right: the best way to do ZFS is with your RAID controller or HBA in "IT mode", presenting standalone disks to the OS. I did not have time to do that, though; maybe on another round.
  2. Proxmox or VMWare, would like to get some more arguments

    I don't know if ESX + Veeam offers the "by the minute" snapshotting abilities of pve-zsync; depending on your scenario, this can also be handy. In general I think anything VMware can do, Proxmox can do as well, but with a bit more flexibility, while VMware needs 3rd-party apps like Veeam to add...
  3. Auto backup with NAS4Free storage

    Not sure what problem you are referring to. Run "zfs list" to check the zpools and their mount points, and run "zpool status -v" to see the member disks in each pool. The default PVE install will make 2 ZFS pools, and a portion of it can be directly accessed as storage for the file system, the...
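    (The checks described there, as a minimal sketch; pool and dataset names will of course differ per system:)

      # List ZFS datasets with their used/available space and mount points
      zfs list

      # Show each pool's health and the member disks in every vdev
      zpool status -v
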
  4. Auto backup with NAS4Free storage

    No, if you are thinking of that: again, save yourself trouble and re-install with ZFS. The installer can RAID it for you:
  5. Auto backup with NAS4Free storage

    Never mind... I wouldn't waste all that time; ZFS can make any sort of RAID you need, no need to re-install. https://www.zfsbuild.com/2010/06/03/howto-create-zfs-striped-vdev-pool/ Assuming sda is your main PVE OS drive... sdb-sdd are 3 other drives you have: zpool create backuptank /dev/sdb...
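    (A rough sketch of the kind of command that truncated example is pointing at; the pool name and device paths are only examples, and listing several whole disks after the pool name makes a striped pool with no redundancy:)

      # Striped pool across three whole disks - fast, but no redundancy
      zpool create backuptank /dev/sdb /dev/sdc /dev/sdd

      # Alternative with redundancy: single-parity raidz across the same disks
      zpool create backuptank raidz /dev/sdb /dev/sdc /dev/sdd

      # Verify the layout and the default mount point (/backuptank)
      zpool status -v backuptank
      zfs list backuptank
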
  6. Storage format benchmarks

    Spare hardware lying around here, so I decided to benchmark some different local (not SAN) storage formats and attempt to show the pros/cons of each; help me out if I'm missing any important points. Test bed: Dell R730xd, H730P RAID card (array specs at the bottom), 2x Xeon E5-2683 / 64 GB RAM, 1x...
  7. Install /root on usb

    Personally I have a couple of machines with the PVE OS and VM data on the same medium... but I always try to avoid that scenario. For instance, what if today I am using ZFS for pve-zsync options, but tomorrow decide ZFS is too slow and want to move up to raw devices on LVM? If my OS is separate, I have the...
  8. Install /root on usb

    In that spirit, many (e.g. Dell) servers come with dual SD slots to run ESXi... so certainly we could use the same principle with 2 of the mentioned USB SSDs in a ZFS raid or mdadm RAID, or make an occasional manual clone of the USB and keep a cold spare on the shelf, even better than RAID. You...
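    (A minimal sketch of the mdadm variant of that idea, assuming the two USB SSDs show up as /dev/sdx and /dev/sdy, which are hypothetical names:)

      # Mirror the two USB SSDs into a single md device
      mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdx /dev/sdy

      # Record the array so it assembles automatically at boot
      mdadm --detail --scan >> /etc/mdadm/mdadm.conf
      update-initramfs -u
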
  9. Install /root on usb

    Actually, he COULD use a USB drive, if it was one of these: https://www.sandisk.com/home/usb-flash/extremepro-usb Another option to use USB would be a USB-to-SATA adaptor, dock, or drive bay with a 2.5" SSD.
  10. Auto backup with NAS4Free storage

    As said by fortechitsolutions, you need to either combine all those extra disks into a single array in your VM host; once in an array, probably your easiest route is to format the array with some file system, i.e. mkfs.ext4 /dev/md0 (assuming you made an mdadm array), then mkdir /mnt/backups, mount...
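    (Spelled out as a sketch, assuming the array already exists as /dev/md0 and /mnt/backups is the chosen mount point:)

      # Format the array and mount it where the backup jobs can reach it
      mkfs.ext4 /dev/md0
      mkdir -p /mnt/backups
      mount /dev/md0 /mnt/backups

      # Optional: add an fstab entry so the mount survives a reboot
      echo '/dev/md0 /mnt/backups ext4 defaults 0 2' >> /etc/fstab
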
  11. virtualbox install guide fail - can't access web gui

    Verify the web server is listening: run "apt install net-tools", then "netstat -al | grep 8006". It should give you results like this: tcp 0 0 0.0.0.0:8006 0.0.0.0:* LISTEN. You might try restarting the web server: "service pveproxy restart". And either you made a typo, or you are...
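    (The same checks collected into one sequence; the netstat flags are narrowed to numeric TCP listeners, and port 8006 with the pveproxy service are the stock Proxmox defaults:)

      # Install net-tools and confirm something is listening on the GUI port
      apt install net-tools
      netstat -tln | grep 8006

      # If nothing is listening, restart the web proxy and check again
      service pveproxy restart
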
  12. virtualbox install guide fail - can't access web gui

    You may have the wrong URL. The page should be at https://ipaddress:8006 Other than that... need more info: are you running 4.4 stable? If you are running one of the betas, you might need to set up the testing repository and run updates to patch a potential bug. On the pinging, yes, most likely...
  13. Bug in 5.0b2 disk move

    That worked, thanks Tom.
  14. Bug in 5.0b2 disk move

    I have a 5 GB VM on ZFS storage. I moved the drive live to another ext4 directory store on the same host using the GUI (~1 TB free on storage). The task got to 100% in ~3 minutes, but it continues 3 hours later, still at 100%; the data transfer size keeps increasing by very small amounts, and the...
  15. USB install better options (needed!)

    I have to agree with this; the instructions in the wiki are terrible, suggesting I download a 75 MB utility (OSForensics) which is totally cryptic in how it works. That is ridiculous when Rufus is only 2 MB. Thankfully the Rufus DD method works, confirmed on 5.0 beta2.
  16. OVS adding VLANs without reboot

    Can someone verify my understanding of OVS vs the old Linux network stack: if I set a Linux bridge to VLAN-aware, I could add as many VMs with different VLAN IDs as I want, at any time I want, without rebooting the host. Correct me on this part: on OVS, the documentation seems to state I have to...
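    (For reference, a VLAN-aware Linux bridge in /etc/network/interfaces looks roughly like this; the interface names are assumptions. Once it is in place, each VM NIC just gets its VLAN tag in the VM's network device settings, with no host changes or reboots:)

      auto vmbr0
      iface vmbr0 inet manual
          bridge_ports eth0
          bridge_stp off
          bridge_fd 0
          bridge_vlan_aware yes
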
  17. VLAN trunking not working

    I am at a loss; for some reason I cannot get trunking to work from my VMs to my switch. I made VLAN 11 on my switch, made the port eth1 connects to a member, and set the port mode to "TagALL/Trunk". In Proxmox I made the following: iface eth1 inet manual, auto vmbr1, iface vmbr1 inet manual...
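    (One way to narrow this kind of problem down is to watch the physical NIC for tagged frames while pinging from the VM; if no 802.1Q-tagged packets appear, the tag is being lost on the Proxmox side rather than on the switch. A diagnostic sketch, not something from the original thread:)

      # Print link-level headers for VLAN 11 traffic leaving/entering eth1
      tcpdump -e -n -i eth1 vlan 11
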
  18. 10GB network performance issue

    Yes, I tried netcat also, and a raw nc file transfer is giving 690 MB/s, but a ZFS send through netcat only gets 133 MB/s (might as well be 1 Gbit LAN)... also tried mbuffer, which comes out around 230 MB/s. Example: zfs send rpool/data/vm-161-disk-1@rep_test161_2016-09-08_21:38:57 | nc -q 0 -w 20 pve2...
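    (Filled out as a sketch of both ends; the port number and the target dataset are assumptions, and netcat flag syntax differs slightly between the traditional and OpenBSD variants:)

      # On the receiving host (pve2): listen and pipe the stream into zfs receive
      # (the parent dataset tank/backup must already exist)
      nc -l -p 7777 | zfs receive tank/backup/vm-161-disk-1

      # On the sending host: stream the snapshot to the listener
      zfs send rpool/data/vm-161-disk-1@rep_test161_2016-09-08_21:38:57 | nc -q 0 -w 20 pve2 7777
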
  19. 10GB network performance issue

    So I already installed the HPN patch with the NONE cipher option; supposedly that turns off all encryption except the initial handshake: scp -c NONE vm-116-disk-1.qcow2 pve2:/mnt/data2/ That gives me 340-370 MB/s. I don't understand why it's still so slow if no encryption is happening; I guess I...
  20. 10GB network performance issue

    Shoot, sorry, my fault; edited to MB... 330 MB/s still sounds low to me on a server that has 1 GB/s+ of local performance on the filesystem. I cp'd 35 GB between 2 arrays on the same RAID controller in 56 seconds = 625 MB/s, which is worst case because it's killing the RAID card cache to read/write to...