Search results

  1. Shared Storage for a PVE Cluster

    If Ceph is not an immediate option (due to the 4-node minimum requirement), what is the preferred method for shared storage with a PVE cluster and iSCSI? Guletz - you had mentioned dual CentOS with default write-intent bitmaps enabled? Also GlusterFS?
  2. proxmox HA on shared SAN storage

    So, is ZFS over iSCSI a workable configuration? If not, what would be the preferred shared storage for an HA cluster (other than Ceph)?
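    For reference, ZFS over iSCSI is defined as a storage entry in Proxmox's /etc/pve/storage.cfg. A minimal sketch, with a hypothetical storage ID, portal address, target IQN, and pool name (substitute your own values):

    ```
    # /etc/pve/storage.cfg -- hypothetical values throughout
    zfs: san-zfs
            portal 192.0.2.10                       # iSCSI portal on the storage box
            target iqn.2003-01.org.example:tank     # target IQN exported by the box
            pool tank                               # ZFS pool on the storage side
            iscsiprovider LIO                       # one of: comstar, istgt, iet, LIO
            blocksize 4k
            content images
            sparse 1
    ```

    Proxmox then creates a zvol per VM disk on the remote pool over SSH and attaches it via iSCSI, which is what makes per-VM snapshots and thin provisioning possible with this storage type.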
  3. reinstall a proxmox

    Did you re-install on the exact same server hardware?
  4. Shared Storage for a PVE Cluster

    Yes. I was thinking we would be using dual iSCSI storage to avoid a single point of failure. But I did want to include FreeNAS as one of the options. So, is the PVE <> iSCSI <> FreeNAS setup currently not a stable configuration? What about 2 Dell MD3xx0i devices or using a Synology solution? I...
  5. Shared Storage for a PVE Cluster

    Yes. This is what I thought. There are additional physical hardware and infrastructure considerations for Ceph. I will discuss this option with the customer, but they seem to be a bit reticent to move much beyond the straight and narrow. Even if Ceph is unquestionably the "best" choice, they may...
  6. Shared Storage for a PVE Cluster

    Thanks, Dietmar. I will ask the client if they would consider Ceph. They have been talking about a NAS or SAN with the DELL Powervault, so I am not sure from a hardware perspective. I assume we would need separate machine(s) running Linux to create a Ceph shared storage instance that would be...
  7. Shared Storage for a PVE Cluster

    We are preparing a proposal for a client and would like to recommend the best option for shared storage for their cluster(s). I have been looking through the forum during the past couple of days and found several threads on this topic. Unfortunately, I have not determined that the most...
  8. Proxmox VE 5.1 2nd ISO release

    OK. This will work for machines with commercial subscriptions. Can servers without a subscription be updated in-place with the ISO?
  9. Proxmox VE 5.1 2nd ISO release

    Can this new ISO be used to load the new system in-place over the top of an existing installation or is a backup and restore required? How will it impact the current OS partition and ZFS pool?
  10. Hampton's tutorial on ZFS RAIDZ + SSD

    Did some more looking around and found the ZFS Tips and Tricks wiki. From that I assume that Hampton's "zpool create" command should not have referenced the /dev/x devices as the targets, and should also have included the "compression=on" switch. Is an alignment (ashift) value of 12 advisable? Based on the...
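    A pool-creation command along the lines the poster describes (stable disk-by-id paths rather than /dev/sdX, ashift=12 for 4K-sector alignment, compression enabled) might look like this. The pool and device names are hypothetical placeholders, and the commands assume real, empty disks, so this is a sketch rather than something to paste verbatim:

    ```shell
    # Hypothetical device names; use /dev/disk/by-id/... paths, not /dev/sdX,
    # so the pool survives device reordering across reboots.
    # ashift=12 aligns the pool to 4 KiB physical sectors.
    zpool create -o ashift=12 -O compression=lz4 tank raidz1 \
        /dev/disk/by-id/ata-DISK1 \
        /dev/disk/by-id/ata-DISK2 \
        /dev/disk/by-id/ata-DISK3 \
        /dev/disk/by-id/ata-DISK4
    ```

    Note that compression is a dataset property (set here at creation time with -O), not a pool-level flag, and lz4 is generally the recommended algorithm over the older "on" default.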
  11. Hampton's tutorial on ZFS RAIDZ + SSD

    Guletz - on SSDs, I found this TOPIC. Within it, you said: “I have several servers that use consumer SSDs (including 120 GB Kingston) for ZFS cache and ZIL, and all are usable even now. I also have some Proxmox nodes with consumer SSDs for the Proxmox OS. But the /tmp and /var/log are...
  12. Hampton's tutorial on ZFS RAIDZ + SSD

    Thank you, Guletz. For this particular Dell server, it is difficult to install/set up a multi-disk array for the PVE OS separate from the 4-disk physical array. It is a 1U chassis that does not have many expansion options, so I may be stuck with a single disk (albeit not ideal). Would a 120 GB...
  13. Best Storage config for 4-disk server

    I had not planned to use a cluster initially, but that is a very good point. I did find that additional connections can be added to this server with an adaptor. Along with a Molex-to-SATA power splitter, this would allow me to add an SSD to the available ports on the mainboard. I can...
  14. Hampton's tutorial on ZFS RAIDZ + SSD

    Hello, I wanted to get some feedback on Hampton's published tutorial - http://ghost2-k4kfh.rhcloud.com/proxmox-with-zfs-raidz-ssd-caching/ We are looking to create very similar systems with a single 60 GB SSD and 4 physical drives, and this procedure seems like a good fit. So I am curious as to...
  15. Best Storage config for 4-disk server

    OK. This was the basis of my other question about moving VMs around easily. Since I will have several of these servers running PVE, I was thinking it would be as follows: 1 - Server or storage crash 2 - Restore backups of VMs to a different PVE server (a few min, depending on size?) 3 - VMs...
  16. Best Storage config for 4-disk server

    One other option - use a couple of the open SATA ports on the Dell C1100 mainboard to connect two SSDs and have PVE handle the RAID 1 on those, as well as ZFS RAID 10 on the four 500 GB HDDs in the drive bays. Would that be too taxing for the system?
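    A ZFS striped-mirror ("RAID 10") pool over the four data disks described above could be created along these lines; the pool name and device paths are hypothetical, and the command assumes real, empty disks:

    ```shell
    # ZFS "RAID 10": two mirrored pairs striped together.
    # Device paths are hypothetical; use stable /dev/disk/by-id names.
    zpool create -o ashift=12 vmdata \
        mirror /dev/disk/by-id/ata-HD1 /dev/disk/by-id/ata-HD2 \
        mirror /dev/disk/by-id/ata-HD3 /dev/disk/by-id/ata-HD4
    ```

    With four 500 GB disks this yields roughly 1 TB usable (half the raw 2 TB), and the pool tolerates one failed disk per mirror pair, trading capacity for better random I/O and faster resilvering than RAIDZ.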
  17. Best Storage config for 4-disk server

    OK. I had forgotten about the mezzanine card option in the C1100. Thanks much for the reminder, Digitaldaz! So, doing some more research, I came upon this very helpful thread - https://community.spiceworks.com/topic/316943-raid-controller-for-dell-poweredge-c1100?page=1 Basically it said...
  18. Best Storage config for 4-disk server

    Yes. I agree, but we are in a bit of a quandary with these Dell C1100 1U servers. It is difficult to find a workable secondary storage device for the PVE OS itself, as the 1U chassis provides limited options. I may be able to get a PCI Express (NVMe) SSD installed with a riser card, however...
  19. Best Storage config for 4-disk server

    I guess the question still remaining is whether it makes sense or is advisable to physically separate the PVE OS storage from the VM storage? This is standard practice for VMware and Microsoft Hyper-V.
  20. Best Storage config for 4-disk server

    Yeah, unfortunately, these DELL C1100s do not have built-in optical drives or any front bays for the device.