I wish I could say I have some "custom" PVE, but no, mine is a standard one too.
I'm quite curious how I can restart only the graph module without restarting the whole host, just to check.
Hello,
it's been ages since I saw this for the first time (I believe it was on 1.8), and yet today, on 3.3, it's still there.
When I use the PVE web GUI I see graphs in the server's time zone, while the logs below are in my local one. If the server is near me (that is, in my TZ) then these timestamps are the same so no...
But if I have a lot of free memory, wouldn't it be wise to keep some spare swap space made of zram (thus a fast one, not like an ordinary hard drive, even in RAID)? I understand that if I have free memory then I shouldn't need swap at all, but as my host has 128 GB of RAM and I assign 126 GB to VMs I got...
Funny thing: I tried to set up 4 zram block devices of 1.5 GB each, and added them as swap with high priority. I set vm.swappiness = 10. After some hours with this config, I see these numbers:
# free -m
total used free shared buffers cached
Mem: 128902...
Yes, it looks like that. But given that I use the same number of VMs, all of them are configured once and never touched, and the sum of all RAM assigned is less than the total RAM installed in the host machine, and I still ended up 'out of RAM', which gave me swap usage, then I really...
Sounds quite correct! :(
Just out of curiosity: might it be a good idea to implement swap made of zram block device(s)? It's common practice on a notebook with large RAM (which is not uncommon today) and an SSD disk: to prevent SSD wear-out and speed things up a bit it's a good idea...
You're right. I saw 6 GB some time ago, so it appears to fluctuate.
But anyway: why is even 1 GB of swap used while I have much more RAM sitting 'free'? Are there any approaches to emptying swap completely? Lowering the overall RAM assignment, zram?
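In case it helps anyone reading along: the only brute-force way I know to empty swap is to cycle it off and on (root only, and it only works if there is enough free RAM to absorb the swapped-out pages). Inspecting per-device usage first is cheap:

```shell
#!/bin/sh
# Show which swap devices are actually holding pages.
cat /proc/swaps

# Current swappiness (how eagerly the kernel swaps; 10 was my setting).
cat /proc/sys/vm/swappiness

# Root-only: push everything back into RAM, then re-enable swap.
# Guarded so the script is safe to run as a normal user.
if [ "$(id -u)" -eq 0 ]; then
    swapoff -a && swapon -a
fi
```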
Recently I got a chance to install PVE 3.3 on quite a capable server (dual 6-core Xeons, 128 GB RAM, SSD disks). Since it has a lot of RAM I set swap to only 8 GB, thinking I'd never need any (and I hate to spend SSDs on swap, too). I created and ran some VMs, and as I finished I have 126...
I tried to use the http://pve.proxmox.com/wiki/Quick_installation boot params to use the disk the way I like, and here is what I'm missing: there are maxroot, swapsize, maxvz and minfree params. Is the sum of those sizes the whole size of the storage? Or simply maxroot+swapsize+maxvz=total size, and minfree is...
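For reference, what I'm typing at the installer boot prompt looks something like this (values in GB; the exact semantics of how these four parameters add up is precisely what I'm asking about, so treat the numbers as a guess):

```
linux ext4 maxroot=20 swapsize=8 maxvz=80 minfree=16
```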
I got a chance and swapped the SSDs for SAS disks, which gives me slower storage, but at least I don't have to worry about whether I'm doing it right.
Sad to say, but my choices were too fuzzy for any serious use: mdadm is unsupported by the PVE team (I actually read that long ago, back in the 1.8 era, and remember it...
As I can see now, I need to know:
1. Does the kernel support TRIM? (for the PVE kernel the answer is "YES")
2. Does device-mapper in the PVE distro support TRIM?
3. Does LVM in the PVE distro support TRIM?
4. Does mdadm in the PVE distro support TRIM?
The only one I'm sure about is the 1st; the answers to 2..4 seem to be...
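What I've been doing to probe this is poking at sysfs: a block device's queue reports a non-zero discard_granularity when the kernel/driver stack is willing to pass TRIM down to it. A small helper sketch (the real-device paths in the comment are only illustrative, device names vary):

```shell
#!/bin/sh
# supports_trim QUEUE_DIR
# Returns 0 if the block device queue at QUEUE_DIR advertises discard (TRIM)
# support, i.e. its discard_granularity file exists and is non-zero.
supports_trim() {
    queue_dir=$1
    gran_file="$queue_dir/discard_granularity"
    [ -r "$gran_file" ] || return 1
    gran=$(cat "$gran_file")
    [ "$gran" -gt 0 ] 2>/dev/null
}

# Typical real usage (commented out; adjust the device name):
#   supports_trim /sys/block/sda/queue && echo "sda: TRIM ok"
```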
We've already disconnected the SSDs from the Adaptec and connected them directly to the SATA ports on the motherboard (we're simply not sure whether the Adaptec card is SSD-aware). Now I need to choose how to build a TRIM-enabled mirror with software tools.
I can:
1. use single SSD, put LVM on it, create VG and use that...
Oh, I've got more bad news here: "The good news is that in the 2.6.36 kernel, discard support was added to the dm layer! So any kernel 2.6.36 or later will support TRIM."
So I can only hope that TRIM support in dm is also backported in the PVE distro. And I simply don't know how to...
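If dm discard is indeed present, the one LVM-side knob I know of is issue_discards in /etc/lvm/lvm.conf: it makes LVM send discards to the PV when extents are freed (e.g. on lvremove); filesystem-level TRIM would still go through the discard mount option or fstrim. A sketch of flipping it, run here against a throwaway copy rather than the real config (on a real host you'd edit /etc/lvm/lvm.conf as root):

```shell
#!/bin/sh
# Enable issue_discards in an lvm.conf-style file.
# Demonstrated on a temporary copy so the script is safe to run anywhere.
set -e

conf=$(mktemp)
cat > "$conf" <<'EOF'
devices {
    # When set to 1, LVM issues discards to the PV when LV space is released.
    issue_discards = 0
}
EOF

sed -i 's/issue_discards = 0/issue_discards = 1/' "$conf"
grep 'issue_discards' "$conf"
```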