Search results

  1. ZFS over iSCSI for HA

    OK, why not use the redundant NAS also for images?
  2. ZFS over iSCSI for HA

    Hmm, you could put DRBD underneath for replication, but it has its own pitfalls. Can you spread the disks out equally across the nodes? Then CEPH would be an option.
  3. ZFS over iSCSI for HA

    You can do that of course, but it is not HA then! Think about it: the server with the disks is a single point of failure.
  4. Freeze during and after backup

    What is the target for the backup? I had some trouble with NFS as a target; the NFS client in the Linux OS tends to eat up resources. It looks like CIFS works more reliably in this case.
  5. VLAN, Bond and OVS

    Did you use the OVS packages from the Proxmox repo or those from Debian Stretch? The packages from Stretch are not working!
  6. VLAN, Bond and OVS

    Is tag=50 really correct for vlan519? That sounds strange. Also, are the switch ports correctly configured with all those 300 VLANs tagged on them?
  7. Proxmox VE & HA Storage Recommendation

    Yes, use the other 1 GBit for corosync and/or management; don't use it for migration traffic (corosync likes low latency). Ah, one more point: does your RAID controller support a real HBA mode? I hope it's not one of those MegaRAID controllers which force you to use a RAID0 for a single disk. HBA mode...
  8. Proxmox VE & HA Storage Recommendation

    > All the VMs will be public facing; we don't need local networking inside the VMs. So I don't see the need for 10 GBit for public WAN traffic, since for this cluster I am only giving a 100 MBit WAN uplink that will become 1 GBit when I fill the 100 MBit up. OK, so 1 GBit for the outside net is fine. >...
  9. Proxmox VE & HA Storage Recommendation

    Sorry, hit the post button before editing was complete: -> in the minimum configuration, use 2 VLANs for CEPH (public + private), so you can easily segregate the traffic physically later without a hard reconfiguration. The part with the minimum config is doubled, as I wanted to move it up...
  10. Proxmox VE & HA Storage Recommendation

    Sounds good, but with the following recommendations: in an ideal world you would: -> segregate CEPH traffic physically -> segregate corosync traffic physically (1 GBit is OK) -> if possible, also segregate traffic for other storage protocols (NFS/CIFS) from other traffic physically -> if possible...
  11. Moving cluster to new location

    Hmm, I haven't tried it that way until now; I always had all the VLANs available across the locations (with very low latencies), so it was like a reboot (I just had to set noout for the CEPH OSDs to avoid CEPH rebalancing; see the sketch after this list). But as long as the cluster IPs do not change, it sounds like your proposal...
  12. Moving cluster to new location

    Do you have the same IPs at the new location, and are the two locations connected with not too much latency? Then it's easy: just move node by node without changing anything. If the IPs will change and/or the latency is too large for proper cluster operation, then it's more complicated, as corosync...
  13. proxmox adds localhost ipv6 entries to /etc/hosts inside the containers

    There is no need to disable this. There is no harm in this entry.
  14. Low filetransfer performance in Linux Guest (ceph storage)

    You have 1 OSD per node, probably just an HDD? And 3 nodes, so just 3 OSDs? What network speed? CEPH needs low latency and, ideally, many OSDs, so the slow filesystem is no surprise. Replace the HDDs with enterprise SSDs, use more OSDs per node, and use at least 10 GBit in the backend...
  15. pvestatd often reports storage offline with CIFS

    Hi, we very often see the problem that pvestatd erroneously says that a CIFS storage (we use it for backups and ISOs) is offline: root@gar-ha-cfw01a:/usr/share/perl5/PVE/Storage# systemctl status pvestatd ● pvestatd.service - PVE Status Daemon Loaded: loaded...
  16. Cluster creation (no bond supported?)

    LACP works well; of course the switches have to be configured appropriately. We run two clusters with LACP without any quirks. (What is not working: any bonding mode other than 1 or 4 -> depending on the switches, you can run into any network hell you can think of.) Some more points: at least if...
  17. Hyperconverged Infrastructure with Proxmox and Ceph

    It's the same "rule of three": 5 * 4 TB = 20 TB -> / 3 = 6.7 TB -> * 0.6 = nearly 4 TByte, and the same calculation for the SATA (see the worked sketch after this list).
  18. Hyperconverged Infrastructure with Proxmox and Ceph

    With CEPH, objects will be stored 3 times (erasure coding is currently not supported with Proxmox, as I remember). Also, you should not fill pools to more than 60 percent, so for images you will have: (12 TByte / 3) * 0.6 = 2.4 TByte SSD storage, (48 TByte / 3) * 0.6 = 9.6 TByte SATA storage...
  19. Hyperconverged Infrastructure with Proxmox and Ceph

    Ceph is very latency dependent. I would avoid a SATA-only pool; it is badly slow. Plan at least SSD journals for the SATA OSDs. It is possible to create separate pools, but you have to edit the CRUSH map by hand.
  20. Hyper converged or not

    Both solutions are OK, but installing Proxmox on the storage nodes would give you nice, simple management and installation for CEPH, as well as a common GUI. If you also put the storage nodes in the cluster, you gain: -> a common GUI -> the possibility to use the storage nodes for some VMs with...
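
A minimal sketch of the noout step mentioned in the "Moving cluster to new location" results above, assuming the commands are run on a cluster node with a working CEPH admin keyring; the two ceph flags are standard Ceph CLI, the ordering around the move is only an illustration:

    # Tell CEPH not to mark stopped OSDs "out" (and so not to start rebalancing) while nodes are down:
    ceph osd set noout
    # ... shut the nodes down, move them, bring them back up at the new location ...
    # Once all OSDs are back up and in, restore normal behaviour:
    ceph osd unset noout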
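
A worked sketch of the "rule of three" capacity estimate from the "Hyperconverged Infrastructure with Proxmox and Ceph" results above: raw capacity divided by the replication factor of 3, then multiplied by the 60 percent fill limit. The numbers come from the snippets themselves; the bc one-liners are only an illustration:

    # usable ~= raw / replication_factor * max_fill
    echo "5 * 4 / 3 * 0.6" | bc -l    # 5 x 4 TB OSDs -> ~4 TByte usable
    echo "12 / 3 * 0.6" | bc -l       # 12 TByte SSD  -> 2.4 TByte usable
    echo "48 / 3 * 0.6" | bc -l       # 48 TByte SATA -> 9.6 TByte usable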
