Search results

  1. 1 SSD and 2 HDD - best storage setup?

    The NVMe is large enough to hold all the VM data - the problem is that it is a single disk. What would you recommend as a robust mirror technology: ZFS, md, or maybe something else?
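    A minimal sketch of the two mirror options asked about above (the disk names /dev/sda and /dev/sdb and the pool name "tank" are placeholders, not taken from the thread):

    ```shell
    # Option A: ZFS mirror - create a pool named "tank" from the two HDDs
    zpool create tank mirror /dev/sda /dev/sdb

    # Option B: md (Linux software RAID) mirror, with a filesystem on top
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb
    mkfs.ext4 /dev/md0
    ```

    Note that ZFS handles both the mirroring and the filesystem in one layer, while md only provides the block device and leaves the filesystem choice open.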
  2. 1 SSD and 2 HDD - best storage setup?

    Yes, I think this is the best approach. The problem is how the mirror should be created out of the HDDs. I can: 1. Use md. Time-consuming, but a proven working solution. Not recommended by PVE itself, nor supported in the ISO-based installer. 2. Use ZFS. Rumors claim I should put /boot on non-ZFS...
  3. 1 SSD and 2 HDD - best storage setup?

    If I have only these disks in the server: 1 x NVMe SSD and 2 x SATA HDDs, and no RAID card, what would be the best setup to maximize VM (no containers, only VMs) storage and server speed? 1. HDDs in a mirror (an md-based one, it seems), and the SSD as a single disk for VMs (yes, we'll do backups to the HDDs, and...
  4. PVE 5.2 on ZFS, use as plain disk only?

    Let me share my thinking :) First of all, I have a 2-HDD blade host to which I can add neither a h/w RAID card, nor an extra disk, nor even an SD card to boot from. This is the 'given' configuration; I simply cannot change it at will. I once saw a software RAID break down, and it gave me the idea that I want...
  5. PVE 5.2 on ZFS, use as plain disk only?

    ZFS is very good, really (I just love it on storage, and ZFS is designed for many things, including filesystem storage, too), and using it as intended is the perfect approach, but this is not my question. What I need is to host the VMs' disks (qcow2 files) on a plain filesystem over ZFS - there...
  6. PVE 5.2 on ZFS, use as plain disk only?

    I have a freshly installed host running PVE 5.2 on two HDDs set up as ZFS RAID1. I have always used a plain filesystem to store VM disks, so I have no plans to use the rpool/data pool - what if I remove its definition from storage.cfg? I can then mount this pool (or delete it and create another...
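    Removing the storage entry and reusing the dataset as a plain mounted filesystem might look roughly like this (the storage ID local-zfs and the mountpoint path are assumptions, not taken from the post):

    ```shell
    # Remove the PVE storage definition that points at rpool/data
    pvesm remove local-zfs

    # Reuse the dataset as an ordinary mounted filesystem for qcow2 files
    zfs set mountpoint=/var/lib/vmdata rpool/data
    ```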
  7. Cluster and https certs

    Thank you for the explanations! Frankly, I use the cluster as a single point of control, not for HA or anything like that. But occasional migration is something that may be useful (even offline), so I'll consider this. Somehow I missed that while learning about clusters. But then, if I say have several nodes...
  8. Cluster and https certs

    Looks like I found a well-known pitfall I wasn't aware of: I set up several hardware nodes independently on PVE 5.2, put them into DNS (let me call them host01.mydomain.com, host02.mydomain.com etc. for the sake of example) and even got LE certs to access them over https without warnings. That...
  9. PVE 5: VM on mounted zfs?

    Yes, familiarity is exactly my case, but I will explore the option of accessing zvols via the path you cited. May I please ask your opinion on a different part of the game, the performance? ZFS is a great thing, but it needs to be tuned very well to show perfect performance, and PVE won't do that...
  10. PVE 5: VM on mounted zfs?

    In PVE 5.2 I can add, say, a couple of HDDs as a ZFS-based mirror. Then the ZFS pool can be added as storage. But the ZFS pool can just as well be mounted as a general disk. So say I added the ZFS pool as ZFS storage, or added it as dir storage. What are the pros and cons of the two options? ZFS pool...
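    The two options compared above can be registered via pvesm roughly like this (the storage IDs, pool name and path are illustrative, not from the post):

    ```shell
    # Option 1: use the pool directly as ZFS storage (one zvol/dataset per guest disk)
    pvesm add zfspool tank-zfs --pool tank

    # Option 2: mount the pool and add it as a directory storage (qcow2 files)
    pvesm add dir tank-dir --path /tank
    ```

    The zfspool type gives raw zvols with snapshot support at the ZFS level; the dir type keeps portable qcow2 files that are easy to copy around, which is the trade-off the post is asking about.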
  11. Cluster just to allow migration

    I have several servers (nodes) and would like to bring them under single management. It'll be a very poor man's setup, and the single management should be only for migrating VMs at will (a very rare case). Right now these nodes are set up independently, and each hosts very specific VMs (very few...
  12. LSI 2208 and TRIM - live without?

    Really interesting post, but my problem is this: I can only see the 'whole' volume from the OS (like /dev/sdb as exported by the RAID card, not the SSDs one by one), so I'm not sure whether changing the HPA will result in a "real" change to the disks' HPA. Moreover, my RAID reports this: # hdparm -N /dev/sdb /dev/sdb: SG_IO...
  13. LSI 2208 and TRIM - live without?

    Here is the setup I'm trying to build: I have an LSI 2208 RAID controller with cache and BBU, and 4 SSDs connected to it. I cannot change the firmware, so HBA mode is not an option, and I also don't want to lose the cache and BBU. I would like to use a RAID10 setup with these 4 SSDs. The problem is that the 2208 does not...
  14. SSDs: volume with h/w RAID or ZFS?

    So strange, https://forum.proxmox.com/threads/zfs-trim-and-over-provisioning-support.32854/ says ZFS does support TRIM as of now. I can't check right away. UPD: My bad, looks like there is no support. But can I do fstrim from crontab, or will no TRIM be passed to the physical disks at all?
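    The fstrim-from-crontab idea mentioned above would be a config fragment like this (the path and schedule are illustrative; it only helps if discard requests actually reach the physical disks through every layer in between):

    ```shell
    # /etc/cron.d/fstrim - run a weekly TRIM on the root filesystem at 03:00 Sunday
    0 3 * * 0  root  /sbin/fstrim -v /
    ```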
  15. SSDs: volume with h/w RAID or ZFS?

    This is the weakest part. But I don't yet know whether the LSI supports TRIM either - something I need to check too. OK, looks like RAID is the only option I have. But is there any proof of the lack of TRIM support in ZFS? I expected it to be there right from the start.
  16. SSDs: volume with h/w RAID or ZFS?

    Exactly. This is the reason for the question. PVE detects these disks as disks made by the LSI vendor, and no SMART data etc. can be seen. So using ZFS is a bit of a weak idea, isn't it? But the LSI won't care about their SSD nature, nor can it do ZFS data integrity checks.
  17. SSDs: volume with h/w RAID or ZFS?

    Having at my hands a nice modern server with 4 SSDs on board, I still can't find the best way to utilize it. I can use the built-in h/w RAID controller (an LSI card with cache and BBU, not sure of the model) and build, say, a RAID10 out of these disks. Or I can export these disks as 4 independent volumes...
  18. Install with ext3/4 over lvm from iso?

    Perfect! Exactly what I need!
  19. Install with ext3/4 over lvm from iso?

    No no no, PVE over Debian results in a somewhat less nice system than the PVE ISO does, so no way. Switching lvmthin to lvm won't give me ext4 on top of it, will it?
  20. Install with ext3/4 over lvm from iso?

    I believe it was up to 3.x that the PVE setup from ISO resulted in an ext3 filesystem over LVM. Nowadays we get LVM-thin storage, which may be nicer, but it is hard to copy VM disks by simply copying their disk files. So the question is, is there any easy way to install 5.1 from the ISO the way it was long...
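    A rough manual equivalent of the old layout, done after a default install, would be an ext4 LV carved from the existing volume group (the VG name pve, the LV name, size, and mountpoint are assumptions, not from the post):

    ```shell
    # Create a plain LV in the "pve" VG and put ext4 on it for qcow2 files
    lvcreate -L 200G -n vmdata pve
    mkfs.ext4 /dev/pve/vmdata
    mount /dev/pve/vmdata /var/lib/vmdata
    ```

    The resulting directory can then be added as a dir storage, which restores the easy file-level copying of VM disks the post is after.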