Search results

  1. How do YOU do Disaster Recovery ?

    I should have prefaced the post with the following disclaimer and clarifications: 1) We will NOT be syncing from Proxmox to the NAS using ZFS-based sync. I would use PVE-Sync for host-to-host replication. We will however back up the userdata (VMs) (and now also the Host...
  2. How do YOU do Disaster Recovery ?

    Any chance you are referring to the following script? It seems to be based on the following forum thread:
  3. How do YOU do Disaster Recovery ?

    Disaster Recovery; how do you handle that at your org? Background: planning on replacing an SBS/Terminal Server solution (Windows-based) at an SMB as a favor. They currently use software called ShadowProtect SPX on their Windows servers, backing up to a NAS incrementally and in full, and to USB HDDs...
  4. rapid ssd wear out!

    Kingston SSDNow V300 120GB, SATA (SV300S37A/120G) is rated for 64 TBW. IMHO these types of SSDs are not suitable as a caching device. Edit: you use twice as many OSDs in the rapid-wearout server. Basically you have created 4 times as many writes to the rapid-wearout server's OSDs compared to the...
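To put that 64 TBW rating in perspective, here is a back-of-the-envelope daily write budget; the 3-year service life is an assumed figure for illustration, not something stated in the post:

```python
# Daily write budget for a drive rated at 64 TBW (terabytes written).
# The 3-year service life is an assumed figure for illustration.
tbw = 64
years = 3
days = years * 365

gb_per_day = tbw * 1000 / days  # decimal TB -> GB
print(f"write budget: {gb_per_day:.0f} GB/day")  # roughly 58 GB/day
```

A busy journal or cache device can exceed that budget many times over, which is why consumer drives in that role wear out so quickly.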
  5. Usable space on Ceph Storage

    Been less than 20 months, it's fine :D Q1: Replicated pool? Q2: 8 OSDs per node? Q3: Same failure domain? (As in host/node as opposed to OSD) Q4 (if Q1, Q2 and Q3 = yes): did you set size == 3 and min_size == 1 for said replicated pool? Is that your only pool? What settings do those...
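For reference, the usable capacity of a replicated Ceph pool is roughly the raw capacity divided by the replication size. The node and OSD counts below are assumed example values, not the poster's actual cluster:

```python
# Usable capacity of a Ceph replicated pool: raw capacity / replication size.
# Node count, OSDs per node, and OSD size are assumed example values.
nodes = 3
osds_per_node = 8
osd_tb = 2.0
size = 3  # replicas kept per object

raw_tb = nodes * osds_per_node * osd_tb
usable_tb = raw_tb / size
print(f"raw: {raw_tb} TB, usable: {usable_tb} TB")  # raw: 48.0 TB, usable: 16.0 TB
```

In practice you also want to leave fill headroom so the cluster can rebalance after a failure, so the realistically usable figure is lower still.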
  6. Rule of thumb, how much ARC for ZFS ?

    That is good to know. But I use cache=none almost exclusively on that machine. For anyone wondering, here is the wiki article:
  7. Can not login to proxmox from web interface

    You probably want to report that to the bugtracker ...
  8. CPU performance comparison Win10-Ubuntu

    Or more Windows Server VMs :p Agree though, that is a rather sad state of affairs.
  9. Rule of thumb, how much ARC for ZFS ?

    Well, I was trying to minimize the RAM footprint of the ZFS pool. So the rule of thumb then is: [base of 2 GB] + [1 GB/TB of storage] + [5 GB/TB for dedupe]. In my case that would be 5 GB of ARC. What is your planned maximum size? That is actually what I meant. Should have asked "What is the...
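The rule of thumb quoted above can be sketched as a small function; the helper name and the second example pool size are made up for illustration:

```python
# ZFS ARC sizing rule of thumb: 2 GB base + 1 GB per TB of pool storage,
# plus 5 GB per TB when deduplication is enabled.
def arc_estimate_gb(pool_tb, dedup=False):
    base = 2.0                              # fixed baseline
    metadata = 1.0 * pool_tb                # ~1 GB per TB of storage
    ddt = 5.0 * pool_tb if dedup else 0.0   # dedup table overhead
    return base + metadata + ddt

# A 512 GB (~0.5 TB) pool with dedup on, matching the case above:
print(arc_estimate_gb(0.5, dedup=True))  # 5.0
```

This is only a starting point; actual ARC demand depends on the working set, and Proxmox lets you cap it via the zfs_arc_max module parameter.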
  10. Rule of thumb, how much ARC for ZFS ?

    Nope. Storage pools: I have a ZFS pool called SSD running on SSDs that is used for VM OS data. <-- need advice for ARC allocation. VM data resides on a RAID-6 HDD pool provided by a hardware RAID controller with cache + BBU. Proxmox sits on an SSD LVM. Edit: just want to know how much RAM I'll need to...
  11. Rule of thumb, how much ARC for ZFS ?

    I have a 512 GB pool based on SSDs. Compression: on. Deduplication: on. Used only for VM OS data. How much ARC do I realistically need with Proxmox's implementation of ZFS? Edit: background info (not relevant to the problem): the system sits on an LVM-thin SSD pool. VM data sits on a RAID-6...
  12. website config issue

    When I hit , I get redirected straight to Latest Chrome. Maybe they fixed it in the 40 minutes? :p Edit: but the wiki is throwing some (cannot access database) errors.
  13. Proxmox VE on Debian Jessie with zfs - Hetzner

    I'm on OSX too, so that is perfect. Filing it for future reference. Thank you.
  14. Proxmox VE on Debian Jessie with zfs - Hetzner

    Which VNC viewer did you use (just out of curiosity)?
  15. Hardware/Concept for Ceph Cluster

    For the 10G network you are looking at this config from the Proxmox wiki: For the 1G network gear (assuming it is not jumbo-frame capable) you go down this route: you take this openvswitch bond approach, assuming your network gear is not LACP capable. If you DO NOT want to use Openvswitch for...
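The bond mentioned above might look like the following /etc/network/interfaces fragment. This is a sketch based on the Debian openvswitch-switch ifupdown integration; the interface names (eno1, eno2), bridge name, and address are assumptions, and balance-slb is the bond mode that works without LACP-capable switches:

```
# Sketch of an OVS bond for non-LACP-capable switches (names are assumptions)
auto bond0
iface bond0 inet manual
    ovs_bridge vmbr0
    ovs_type OVSBond
    ovs_bonds eno1 eno2
    ovs_options bond_mode=balance-slb

auto vmbr0
iface vmbr0 inet static
    address 192.168.1.10/24
    ovs_type OVSBridge
    ovs_ports bond0
```

Check the Proxmox wiki page the post refers to for the authoritative configuration.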
  16. Hardware/Concept for Ceph Cluster

    Sorry for the late reply, was on vacation. Disclaimer: I am not a network guy either (I just have to re-engineer our 200+ Ceph servers on a weekly basis, to get the most out of our 3k+ spinners and 2k+ flash drives, and have them talk to our geo-redundant (among others) Proxmox clusters). But I think you...
  17. 3 node ceph build

    Using an SSD for both OS and OSD is not stupid in and of itself. It just makes things more complex, to the point where it can become nonsensical. Best practice for OSDs is to use storage of the same size and performance characteristics. If you were to mix sizes and write speeds, you'd...
  18. Hardware/Concept for Ceph Cluster

    Just so I understand this correctly, you performed the test as follows: Test 1: 3 nodes with 3 SSD OSDs acting as their own journal. Test 2: 2 nodes with 3 SSD OSDs acting as their own journal, plus 1 node with 3 SSD OSDs having the journal on a P3600? If so, then you are right, there will only be a...

