Search results

  1. Move or not move Proxmox VE boot to SSD on existing system upgrade?

    Backups are done by a Proxmox Backup Server running on a Synology Plus series NAS (with HDDs in it). Of course, they cannot store live data on the same NAS, and they cannot afford another NAS yet.
  2. Move or not move Proxmox VE boot to SSD on existing system upgrade?

    The 8 servers went to 8 separate schools which do not have other server-side components (like a NAS for bulk storage), and the school district's budget is tight, so they cannot afford much more hardware. This is also why the HDD mirror was built in the first place instead of (server-grade) SSDs. As I...
  3. Move or not move Proxmox VE boot to SSD on existing system upgrade?

    I don't think that 2 mirrored HDDs benefit much from SSD special devices in this case. If I put small blocks on an SSD to speed up VMs, I create a relatively complicated ZFS pool with little to no benefit compared to a separate SSD mirror pool. But thank you for your suggestion!
  4. Move or not move Proxmox VE boot to SSD on existing system upgrade?

    I will likely take this road, as it is the easiest and fastest to implement. Thank you!
  5. Move or not move Proxmox VE boot to SSD on existing system upgrade?

    A client of ours (a school) built 8 identical Proxmox VE servers with only 2 HDDs in each (in a ZFS mirror). The performance is ... slow. They have now decided to buy 2 additional SSDs for each of the 8 servers, and we need to help them upgrade the machines. My question is which method is better for long...
  6. Understanding CEPH in a 3 Node Cluster with 12 OSDs

    Thank you for this clear example, and for this very understandable clarification (for me)! Until now I thought that Ceph would re-create missing replicas on any free space in the cluster. But that makes sense: if the failure domain is 'host', then one host will not store 2 replicas. I don't...
  7. Understanding CEPH in a 3 Node Cluster with 12 OSDs

    I don't want to argue with you, I only want to understand what you said and why. So, with 3 nodes, Ceph automatically selects the failure domain as host, not OSD. You say this is not designed to lose a complete node... The calculation I gave, which you say is not correct, was that the OP must calculate with a...
  8. Understanding CEPH in a 3 Node Cluster with 12 OSDs

    No, Ceph will self-heal and start to create a 3rd copy of everything on the remaining 2 nodes (if noout was not set in Ceph). So, with 3 nodes, a maximum of ~66% is usable, because if a complete node is lost, the remaining ~33% of capacity will be used for the 3rd replicas. And with a 3:2 setting, the pool...
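    The ~66% figure in this snippet can be sketched as a toy calculation (a hypothetical illustration of the post's rule of thumb, not official Ceph sizing guidance; the function name is my own): to leave the surviving nodes enough free space to absorb a lost node's data during self-healing, raw usage should stay below (nodes - 1) / nodes.

    ```python
    def max_raw_fill_ratio(nodes: int) -> float:
        """Fraction of raw cluster capacity that can be filled while still
        leaving the surviving nodes room to re-create the lost node's
        replicas (the rule of thumb described in the post above)."""
        if nodes < 2:
            raise ValueError("need at least 2 nodes to survive a node loss")
        return (nodes - 1) / nodes

    # With 3 nodes, ~66% of raw capacity is usable; the remaining ~33%
    # is headroom for re-creating 3rd replicas after a node failure.
    print(round(max_raw_fill_ratio(3), 2))  # → 0.67
    ```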
  9. Proxmox installation "Trying to detect country"

    In my opinion, Proxmox VE is the best virtualization platform for small and medium businesses, and of course for home-labbers. We switched from VMware ESXi to Proxmox VE on all customer servers (about 30 now) around the 5.4-6.1 era and never looked back. Proxmox is superior in all ways compared to...
  10. Proxmox installation "Trying to detect country"

    I'd like to add some information on this. I encountered this error with the 8.1-1 ISO today. This server is also connected to a VLAN-capable switch, but only the default untagged VLAN is available on that port; no tagged VLANs are present. The installer got an IPv4 address from the DHCP server, and...