Search results

  1. Q

    Create multiple OVS bridges on a single interface without rebooting?

    AFAIK (and don't quote me on this), the only way for a network change made via the GUI to take effect is to reboot the corresponding node. I'm not sure I understand the issue correctly, but it sounds like you have x amount of clients you want to offer this option to. I'd probably go...
  2. Q

    [Solved] Bad IO performance with SSD over NFS from vm

    Which vDisk caching mode did you choose in the Proxmox GUI?
  3. Q

    [Solved] Bad IO performance with SSD over NFS from vm

    Hey, have a look at the following: https://forum.proxmox.com/threads/io-scheduler-with-ssd-and-hwraid.32022/#post-158763 and pay special attention to gkovacs' posts. See if it makes any difference when doing these changes on the guest. My VMs do not need as much IO, but making the change to noop...
  4. Q

    ZFS: replace disks with smaller one

    I think there is some confusion as to SSD and same series/vendor usage, which somehow got adapted from the HDD "best practice", where people suggest you use same-capacity drives from different batches / lines / series / vendors, so as to mitigate a complete batch being bad and you losing all your...
  5. Q

    Proxmox Network configuratio: 2ETHs with Vlans and bond for VMs + 1NIC for management

    I assume you looked up this nifty resource? https://pve.proxmox.com/wiki/Open_vSwitch#Example_2:_Bond_.2B_Bridge_.2B_Internal_Ports
  6. Q

    SSD endurance evaluation

    Not sure about estimating, but you can test it (once you have the hardware, e.g. during your burn-in tests): http://www.samsung.com/us/business/oem-solutions/pdfs/SSD-Sales-Presentation.pdf (pages 10 and 11). The procedure would be to measure NAND writes via SMART and then divide it by the actual data... (a worked sketch of this arithmetic follows after the results list)
  7. Q

    SSD endurance evaluation

    Samsung SSD PM863 480GB, SATA (MZ-7LM480E/MZ-7LM480Z) rating: TBW: 700TB. For completeness' sake (since I ran the numbers last night - prices are prosumer): Samsung SSD PM863 480GB 310€ - 0.44€ / TBW, Intel SSD DC S3610 480GB 415€ - 0.11€ / TBW, Intel SSD DC P3700 400GB 750€ - 0.10€ / TBW, Intel...
  8. Q

    SSD endurance evaluation

    Intel SSD DC S3610 480GB, 2.5", SATA (SSDSC2BX480G401), TBW: 3.7PB, source: http://www.intel.com/content/dam/www/public/us/en/documents/product-specifications/ssd-dc-s3610-spec.pdf You probably also want to read up on https://en.wikipedia.org/wiki/Write_amplification My gut feeling says that...
  9. Q

    Can Install Proxmox with other OS in the same computer?

    Disclaimer: unless you use Proxmox for testing, the following is not advised. It works quite well, but you cannot run both systems concurrently in native mode. Get 2 disks, let's call them sda and sdb. Install Proxmox as usual via the installer. Make sure your BIOS auto-boots from the disk...
  10. Q

    What would you do with this?

    Wasn't aware you could still do that in Proxmox 4. Do you still need to do the quorum trick (as in setting quorum votes manually so that <all quorum votes> / <number of hosts> = odd) so you can still access the GUI?
  11. Q

    What would you do with this?

    These are your options: get a 3rd "Mac/PC/scrap hardware" box and set up a 3-node Proxmox 4 cluster - see: https://pve.proxmox.com/wiki/Proxmox_VE_4.x_Cluster; use FreeNAS as backup or to host VMs on; or get a 3rd box and set up a 3-node Proxmox 4 cluster powered by Ceph - see...
  12. Q

    Best way to expose an shared NFS dir to each container?

    For your use case, B) is the only option (as in sharing a single NFS-Share between multiple VMs/containers, aka the "Guest") - AFAIK. What I described as A) is the process you choose on the "HOST" when you want to host the VM/CT's virtual disks (vDisks) on the NFS-Share (as opposed to your local...
  13. Q

    Best way to expose an shared NFS dir to each container?

    Question for understanding: A) Are you looking to expose containers to an external NFS-Share added to Proxmox via the Datacenter -> Storage view? B) Are you looking to expose containers to an external NFS-Share by mounting said NFS-Share on multiple containers? If A), then you are doing it...
  14. Q

    Virtualize Remix Os

    Check out this entry: https://pve.proxmox.com/wiki/Pci_passthrough - specifically the part about GPU passthrough.
  15. Q

    IO Scheduler with SSD and HWRaid

    Sorry, what I should have said is, I shall try it on my lab setup before using it in production (I test every change before I f*** up a working system; you can't blame anyone but yourself that way). I think for CentOS 7 this is only the case for disks you have passed through via virtio or of virtio...
  16. Q

    IO Scheduler with SSD and HWRaid

    I just stumbled upon this little piece of info: https://wiki.debian.org/SSDOptimization They actually recommend deadline. I shall test noop and report back :)
  17. Q

    IO Scheduler with SSD and HWRaid

    This is quite interesting. So if I read this right: if you have an SSD-based system, you run the noop scheduler? Another question: say you have a CentOS 7 VM and execute the command you describe. Let's further say you got the output: noop [deadline] cfq - what is it you are looking at? (a small example of reading that output follows after the results list)
  18. Q

    Optimum ZFS configuration with 8 slots

    Here is a RAID calculator for you; it does include ZFS-based RAID levels: http://wintelguy.com/raidcalc.pl https://www.servethehome.com/raid-calculator/ ZFS AFAIK supports: RAID0, RAID1, RAID10, RAID-Z1, RAID-Z2, RAID-Z3. Whenever you are doing parity calculations (e.g. the last 3 levels mentioned)... (a quick capacity sketch follows after the results list)
  19. Q

    [SOLVED] Working with ZFS - Caching Strategies ?

    Update: got a good deal over at Hetzner (2x 240GB SSD + 4x 3TB HDD), set up exactly like before, no IO delay anymore. I'll consider this solved now.
  20. Q

    SSD Setup with huge write loads

    My point: Samsung Pro 256GB models will only last you 32 months :p (150 TBW rating). The obvious choices here would be the file server, logging on pfSense (lots of small IO - check "write amplification SSD" on Google) or swap usage on the Windows machines. Check per month (average), see if it has...
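
The SSD endurance threads above (results 6-8 and 20) all boil down to the same arithmetic: divide a drive's rated TBW by how fast you write to it to estimate its lifetime, divide price by rated TBW to compare cost per terabyte written, and divide NAND writes by host writes (both from SMART) to get the write-amplification factor. Below is a minimal Python sketch of those three calculations; the TBW ratings and prices come from the quoted posts, while the monthly write rate and SMART counters are made-up example values.

```python
# Endurance arithmetic from the SSD threads above. Ratings/prices are the
# ones quoted in the posts; write rates and SMART counters are assumptions.

def cost_per_tb_written(price_eur: float, rated_tbw_tb: float) -> float:
    """Euros paid per TB of rated write endurance."""
    return price_eur / rated_tbw_tb

def months_of_life(rated_tbw_tb: float, tb_written_per_month: float) -> float:
    """How long the rated endurance lasts at a given write rate."""
    return rated_tbw_tb / tb_written_per_month

def write_amplification(nand_writes_tb: float, host_writes_tb: float) -> float:
    """NAND writes divided by host writes, both read from SMART counters."""
    return nand_writes_tb / host_writes_tb

print(round(cost_per_tb_written(310, 700), 2))    # Samsung PM863 480GB  -> 0.44
print(round(cost_per_tb_written(415, 3700), 2))   # Intel DC S3610 480GB -> 0.11

# A 150 TBW consumer drive written at ~4.7 TB/month lasts roughly 32 months,
# matching the "Samsung Pro 256GB will only last 32 months" remark.
print(round(months_of_life(150, 4.7), 1))

# Example write-amplification factor from hypothetical SMART values.
print(write_amplification(nand_writes_tb=12.0, host_writes_tb=8.0))  # -> 1.5
```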
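
The IO scheduler threads (results 15-17) revolve around checking which scheduler a disk is using. On a Linux guest such as the CentOS 7 VM mentioned there, the bracketed entry in /sys/block/<device>/queue/scheduler is the one currently active, so "noop [deadline] cfq" means deadline is in use. A small sketch of reading that file, with "sda" as an example device:

```python
# Print the available and active IO schedulers for a block device.
# The bracketed name in the sysfs file is the scheduler currently in use,
# e.g. "noop [deadline] cfq" means deadline is active.

from pathlib import Path

def io_schedulers(device: str = "sda") -> tuple[list[str], str]:
    text = Path(f"/sys/block/{device}/queue/scheduler").read_text().strip()
    available = text.replace("[", "").replace("]", "").split()
    active = text[text.index("[") + 1:text.index("]")]
    return available, active

if __name__ == "__main__":
    available, active = io_schedulers("sda")  # "sda" is just an example device
    print(f"available: {available}, active: {active}")
```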
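
For the ZFS RAID levels listed in result 18, the rule of thumb behind those calculators is that mirrors keep only one copy's worth of capacity while RAID-Z1/Z2/Z3 give up one, two or three drives' worth of space to parity per vdev. A back-of-the-envelope sketch, using the 8-slot box from that thread with assumed 4 TB drives:

```python
# Rough usable capacity for the ZFS layouts mentioned above: RAID1 keeps one
# drive's worth of data, RAID10 keeps half, RAID-Z1/Z2/Z3 lose 1/2/3 drives
# to parity. Real-world figures are lower once ZFS overhead is accounted for.

PARITY = {"raid-z1": 1, "raid-z2": 2, "raid-z3": 3}

def usable_tb(level: str, drives: int, drive_tb: float) -> float:
    if level == "raid0":
        return drives * drive_tb
    if level == "raid1":
        return drive_tb                 # every drive mirrors the same data
    if level == "raid10":
        return drives * drive_tb / 2
    return (drives - PARITY[level]) * drive_tb

# Example: an 8-slot chassis filled with 4 TB drives (assumed sizes).
for level in ("raid10", "raid-z1", "raid-z2", "raid-z3"):
    print(level, usable_tb(level, 8, 4.0), "TB usable before ZFS overhead")
```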
