Search results

  1. Proxmox6 LVM SSD Cache

    WOW, this just saved my life. Live production servers couldn't come online after an emergency reboot caused by load. I stressed a bit, made coffee, found this post, ran it, and all is good again. Thank you!
  2. Live Migration fails with localdisks?

    It works perfectly if I leave it on NFS - I can then move the disk back to local-lvm after migration on the new node. If I try migrating while any VM still has local-lvm attached, it fails completely. This is what I see on the server I am currently migrating to: root@pve-6:/etc/pve/qemu-server#...
  3. Live Migration fails with localdisks?

    I'm not too concerned as I'm just playing with it to get a good feel for migrations on test servers atm. I just want to practice and get the process perfected. I think moving the disks to nfs-server, then migrating the VM, then moving the disks back is going to work fine. May just take a little...
  4. Live Migration fails with localdisks?

    Tested it again by moving disks from local-lvm -> nfsserver and now it works fine:
    2019-11-25 09:14:43 use dedicated network address for sending migration traffic (10.0.0.136)
    2019-11-25 09:14:43 starting migration of VM 101 to node 'pve-6' (10.0.0.136)
    /dev/sdc: open failed: No medium found...
  5. Live Migration fails with localdisks?

    root@pve-1:~# qm migrate 101 pve-6 --online --with-local-disks --migration_type insecure --migration_network 10.0.0.0/24
    /dev/sdc: open failed: No medium found
    /dev/sdd: open failed: No medium found
    2019-11-25 08:55:28 use dedicated network address for sending migration traffic (10.0.0.136)...
  6. Reinstall a Cluster Node

    Thought so. Had to confirm to be sure :) Thanks a lot.
  7. Reinstall a Cluster Node

    We want to set up a cluster again, but there is one question we are puzzled about, per the documentation here: As said above, it is critical to power off the node before removal, and make sure that it will never power on again (in the existing cluster network) as it is. If you power on the...
  8. Extending lvm-thin size after converting raid

    I have successfully converted a HW RAID 6 partition to RAID 5 for more space:
    a0 PERC H700 Integrated encl:1 ldrv:1 batt:good
    a0d0 2791GiB RAID 5 1x6 optimal
    I'm now trying to grow the lvm-thin volume pve/data with the extra space. Anyone know the exact steps? Trying not to break...
  9. Proxmox6 LVM SSD Cache

    I am experiencing this too. Anyone solve it yet?
  10. LXC loadavg

    OK, thanks :)
  11. LXC loadavg

    Any reason this is not made the default in new Proxmox versions yet?
  12. Dell R610 with Consumer SSD disks in RAID 10

    Was hoping it would be possible. I guess it's safer to just build a new server with new disks, add it to the cluster, and live-migrate the VMs across, then remove the old disks from the other server and rebuild. Better safe than sorry. Really want to get rid of those consumer disks :)
  13. Dell R610 with Consumer SSD disks in RAID 10

    Hi. We have two old servers that have been running consumer-grade 1TB SSD disks for the last year with no issues whatsoever. However, we would now like to put in Intel DC enterprise SSDs. Since these are RAID 10 servers, I'd like to know whether, if we remove one SSD and insert in its place a different SSD of the same size but...
  14. Some guidance on storage

    Probably just greed for speed. I guess it does OK. Thanks for the comment.
  15. Some guidance on storage

    I have a server with the following: an H700 controller (512MB version) with BBU in writeback mode, and 8 x SAS HGST Enterprise 10k drives in RAID 10. Will it help performance if I follow this guide and add 2 SSDs into the mix for LVM caching? https://blog.jenningsga.com/lvm-caching-with-ssds/ Thoughts? or...
  16. Locking down Proxmox Interface

    I would like to lock down SSH and the Proxmox interface (port 8006) using the PVE firewall. Anyone have exact steps to follow? I don't want to lock myself out, as this particular server is not in the office but in a DC, and I'm too lazy to drive over if anything goes awry :)
  17. Which is better? ssd-caching or os and mysql on SSD

    Found an old PERC 6/iR which is in IT mode; that should work best for ZFS, though. Will test with all 3 RAID controllers, but I assume the H700 in HW RAID with BBU in writeback will provide the best speeds.
  18. Which is better? ssd-caching or os and mysql on SSD

    A PERC 6/i is being used, but we do have some H700s around that I think I should switch this server to. We also have SAS 10k disks which we could probably use and test with.
  19. Which is better? ssd-caching or os and mysql on SSD

    Whoops, I'm an idiot. I meant, for the first option, " x 2 TB SATA disks" for data.
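For the lockdown question in result 16, the PVE firewall is configured in /etc/pve/firewall/cluster.fw. The fragment below is a hypothetical example, not confirmed steps from the thread: 203.0.113.0/24 stands in for your management subnet. To avoid locking yourself out of a remote box, write the ACCEPT rules first with `enable: 0`, verify you can still reach SSH and port 8006 from the allowed subnet, and only then flip `enable` to 1 (once enabled, traffic not matched by a rule falls through to the default input policy, which drops).

```
[OPTIONS]
enable: 1

[RULES]
IN SSH(ACCEPT) -source 203.0.113.0/24
IN ACCEPT -source 203.0.113.0/24 -p tcp -dport 8006
```

Keeping an out-of-band console (iDRAC/IPMI) session open while testing is cheap insurance against the drive to the DC.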
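The workaround discussed in results 2-5 (move the local-lvm disks to shared NFS storage, live-migrate, then move them back) can be sketched as a short script. VM 101, node pve-6, and the nfsserver storage come from the thread; the disk slot scsi0 and the `run` dry-run helper are assumptions — check `qm config 101` for your actual disk names before running anything.

```shell
# Dry-run helper: prints each command instead of executing it.
# Change the body to "$@" to run the commands for real.
run() { echo "+ $*"; }

VMID=101       # VM ID from the thread; adjust to yours
TARGET=pve-6   # destination node
DISK=scsi0     # assumed disk slot -- verify with "qm config $VMID"

# 1. Move the disk from local-lvm to the shared NFS storage
run qm move_disk "$VMID" "$DISK" nfsserver --delete 1
# 2. Live-migrate; with all disks on shared storage, --with-local-disks is not needed
run qm migrate "$VMID" "$TARGET" --online
# 3. After migration, optionally move the disk back to local-lvm on the new node
run ssh "$TARGET" qm move_disk "$VMID" "$DISK" local-lvm --delete 1
```

The dry-run default is deliberate: print the plan first, then re-run with the helper executing for real once the storage and disk names check out.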
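For result 8 (growing pve/data after the RAID 6 to RAID 5 conversion freed up space), the usual LVM sequence is: grow the partition holding the PV, resize the PV, then extend the thin pool. This is a hedged sketch, not the thread's confirmed answer — the device /dev/sda and partition 3 are assumptions; confirm the real PV with `pvs` first, and snapshot or back up before touching a production pool.

```shell
# Dry-run helper: prints commands instead of executing them.
run() { echo "+ $*"; }

PV=/dev/sda3   # assumed PV backing the pve VG -- confirm with "pvs"

# 1. If the PV sits on a partition, grow that partition to fill the bigger RAID volume
run growpart /dev/sda 3
# 2. Tell LVM the physical volume got bigger
run pvresize "$PV"
# 3. Hand the new free space to the thin pool
run lvextend -l +100%FREE pve/data
```

Note that a much larger thin pool may also need more metadata space (`lvextend --poolmetadatasize`); check `lvs -a` for the pool's metadata usage afterwards.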
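The LVM-caching idea from result 15 (the linked jenningsga guide) roughly amounts to adding the SSD to the pve volume group and attaching a cache volume to pve/data. This is only an illustrative sketch: /dev/sdX and the 100G cache size are placeholders, and whether a thin pool can be cached this way depends on your LVM version — test on a non-production box first.

```shell
# Dry-run helper: prints commands instead of executing them.
run() { echo "+ $*"; }

SSD=/dev/sdX   # placeholder SSD device -- substitute your real disk

# 1. Add the SSD to the pve volume group
run pvcreate "$SSD"
run vgextend pve "$SSD"
# 2. Create a cache volume on the SSD and attach it to pve/data
run lvcreate -L 100G -n datacache pve "$SSD"
run lvconvert --type cache --cachevol pve/datacache --cachemode writethrough pve/data
```

Writethrough mode is the safer default here: a failing consumer SSD then costs you the cache, not the data, which matters given the disks being discussed in results 12-13.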
