Search results

  1.

    lxc /proc/loadavg implemented?

    Wrong title, please delete.
  2.

    Importing Version 3.1 VMs into a 5.1 Installation

    Maybe you just have VMs. Note: CT = OpenVZ, VM = KVM. If you are administering a server you need to get to know these abbreviations :)
  3.

    lxc /proc/loadavg implemented?

    https://github.com/lxc/lxcfs/issues/13 Seems it has finally been done? :) I hope so. Hopefully all our loadavg issues will soon be gone. Commit: https://github.com/lxc/lxcfs/commit/b04c86523b05f7b3229953d464e6a5feb385c64a
  4.

    Importing Version 3.1 VMs into a 5.1 Installation

    I am busy doing it at this very moment and see no issues with my migration. I'm using Proxmox 3.4.16 on one server, restoring the OpenVZ containers to LXC and the KVM ones into Proxmox 5.2.1, and it works fine. No issues atm.
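    A rough sketch of that migration path, assuming vzdump backups copied over from the old host (archive names and VMIDs below are only placeholders):

        # restore an OpenVZ backup as an LXC container on the new host
        pct restore 100 /var/lib/vz/dump/vzdump-openvz-100-backup.tar.gz
        # restore a KVM backup on the 5.x host
        qmrestore /var/lib/vz/dump/vzdump-qemu-101-backup.vma.lzo 101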
  5.

    Proxmox 3.4-13: KVM host completely freezes every couple of days. Nothing in logs

    I noticed something weird on our CloudLinux servers too, although they do not go down. It may help if you can check it as well: install iperf and test the network performance. On the CentOS, Debian and Ubuntu KVM servers on the same node this is the performance we get: [ ID] Interval...
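    A minimal way to run that kind of check, assuming the classic iperf2 package (the IP below is a placeholder):

        # on one end (e.g. the Proxmox host)
        iperf -s
        # on the other end (e.g. inside a KVM guest)
        iperf -c 192.168.1.10 -t 30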
  6.

    Numa question

    We just added an extra processor to each server (Intel Xeon E5-2620), so each server now has 2 of these with 64GB memory. We have a few VMs on them, e.g. one with 14GB memory, 6 cores and 2 sockets selected (which we assume is now using both processors, as it now states 12 cores are being used). However...
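    A sketch of how such a VM could be set from the CLI (VMID 100 and the exact values are just examples; the same settings exist in the GUI):

        # 2 sockets x 6 cores = 12 vCPUs, 14GB RAM, NUMA enabled
        qm set 100 --sockets 2 --cores 6 --memory 14336 --numa 1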
  7.

    Performance pveperf vs hdparm

    Never mind, reinstalled using ext4.
  8.

    Performance pveperf vs hdparm

    Getting this with pveperf on RAID 10 with BBU and writeback and 6x 1TB enterprise SATA HDDs: root@vz-cpt-1:~# pveperf CPU BOGOMIPS: 57595.92 REGEX/SECOND: 1879039 HD SIZE: 35.44 GB (/dev/mapper/pve-root) BUFFERED READS: 60.59 MB/sec AVERAGE SEEK TIME: 12.64 ms...
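    For the pveperf vs hdparm comparison in the thread title, the two are roughly run like this (device and path are placeholders for the actual setup):

        # pveperf benchmarks the filesystem holding the given path
        pveperf /var/lib/vz
        # hdparm reads straight from the block device, bypassing the filesystem
        hdparm -tT /dev/sda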
  9.

    Restore VM slowdown Host

    Nope, larger ones are still a problem; load goes sky high. Anyway, for now, while I do more testing, I'm going to stick to LVM on Proxmox 3.4. For me it happens only on Proxmox versions using LVM-thin as default. I don't use ZFS, only HW RAID controllers on our servers, and every server has the same issue except...
  10.

    Create an additional lvm thin disk

    Ok, tried 960G and it works fine: lvcreate -L 960G -n ssd-data vmdata Logical volume "ssd-data" created. lvconvert --type thin-pool vmdata/ssd-data WARNING: Converting logical volume vmdata/ssd-data to thin pool's data volume with metadata wiping. THIS WILL DESTROY CONTENT OF LOGICAL VOLUME...
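    The sequence quoted above, plus registering the pool as Proxmox storage, roughly sketched (names follow the post; the pvesm step is an assumption, the wiki also shows editing /etc/pve/storage.cfg directly):

        lvcreate -L 960G -n ssd-data vmdata
        lvconvert --type thin-pool vmdata/ssd-data
        # make the thin pool usable as VM/CT storage (assumed pvesm syntax)
        pvesm add lvmthin ssd-thin --vgname vmdata --thinpool ssd-data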
  11.

    Create an additional lvm thin disk

    Thanks. The method here does work: https://pve.proxmox.com/wiki/Storage:_LVM_Thin It's just the space allocation: how much space must be left free for extents? Not sure how to work that out. If I set it to 800GB it works fine, but I want the full space (976.95G) and I don't want any errors...
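    One hedged way around the sizing question is to let LVM do the extent math instead of picking an absolute size; exactly how much to hold back for the pool metadata is the open question in this thread:

        # show the extent size and number of free extents in the volume group
        vgdisplay vmdata
        # allocate by percentage of free extents, leaving some headroom for metadata
        lvcreate -l 95%FREE -n ssd-data vmdata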
  12.

    EDAC sbridge errors after upgrade to 5.1

    I"m seeing this too : EDAC sbridge: Couldn’t find mci handler tons of it.
  13.

    Create an additional lvm thin disk

    Trying to create an additional disk but I'm a bit lost by the term extents. Here is what I did: sgdisk -N 1 /dev/sdb Creating new GPT entries. The operation has completed successfully. pvcreate --metadatasize 250k -y -ff /dev/sdb1 Physical volume "/dev/sdb1" successfully created. vgcreate...
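    The truncated commands appear to follow the Storage: LVM Thin wiki page linked above; a sketch of the full preparation sequence (volume group name taken from the later posts). An LVM extent is simply the fixed-size allocation unit, 4 MiB by default, that the volume group is divided into:

        sgdisk -N 1 /dev/sdb
        pvcreate --metadatasize 250k -y -ff /dev/sdb1
        vgcreate vmdata /dev/sdb1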
  14.

    Restore VM slowdown Host

    I couldn't deal with being on an older version of Proxmox, so I set up a new server again but added a second processor. I think the problem is solved: 24 x Intel(R) Xeon(R) CPU E5-2620 v3 @ 2.40GHz (2 Sockets). I was only restoring small VMs though, and all looks better. Going to try larger ones...
  15.

    lvm ssd cache with hwraid and battery backup

    One thing, though, I cannot get my head around: why does the LSI controller have something called CacheCade SSD Caching? Is that still slower than using BBU with writeback? I've never enabled it before for testing, so I was wondering what that option is and why it would be there if BBU in writeback...
  16.

    Recommendations on setup

    We want to move away from pure SSD as it's costly. Would this work well? 2 x 1TB SSD disks in RAID 1 (for the Proxmox OS, the cPanel OS and MySQL for the KVMs), 4 x 2TB enterprise SATAs in RAID 10 (for the /home directory, where the PHP and static files live). Note we have a HW RAID controller with BBU using...
  17.

    lvm ssd cache with hwraid and battery backup

    Was wondering the same but wasn't so sure yet. Thanks.
  18.

    lvm ssd cache with hwraid and battery backup

    Hi, I have 2x SSDs and 4x HDDs, currently set up using LVM with the SSDs in RAID 1 and the 4 SATA drives in RAID 10. I am using a HW RAID controller with battery backup in writeback mode. The question is: can I still create an SSD cache using the SSDs?
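    A minimal sketch of what adding an lvmcache on top of that layout might look like (device names, sizes and the LV to be cached are placeholders; whether it helps on top of a BBU-backed controller is exactly the open question in this thread):

        # the SSD RAID 1 array as seen by the OS, e.g. /dev/sdb
        pvcreate /dev/sdb
        vgextend vmdata /dev/sdb
        # create a cache pool on the SSD and attach it to an existing data LV
        lvcreate --type cache-pool -L 100G -n ssd-cache vmdata /dev/sdb
        lvconvert --type cache --cachepool vmdata/ssd-cache vmdata/data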
