Questions while setting up a new Proxmox server

Hi,

still looking for some advice on setting up my new Proxmox server on a rented server.

Hardware: Intel Core i7-4770, 32 GB DDR3 RAM, 2TB SATA HDD

This is how my current configuration looks:

My PVE (3.2 stable with support subscription):
Code:
proxmox-ve-2.6.32: 3.2-129 (running kernel: 2.6.32-30-pve)
pve-manager: 3.2-4 (running version: 3.2-4/e24a91c1)
pve-kernel-2.6.32-30-pve: 2.6.32-130
pve-kernel-2.6.32-26-pve: 2.6.32-114
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.5-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.5-1
pve-cluster: 3.0-12
qemu-server: 3.1-16
pve-firmware: 1.1-3
libpve-common-perl: 3.0-18
libpve-access-control: 3.0-11
libpve-storage-perl: 3.0-19
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-6
vzctl: 4.0-1pve5
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 1.7-8
ksm-control-daemon: 1.1-1
glusterfs-client: 3.4.2-1
VG setup
Code:
# vgdisplay
  --- Volume group ---
  VG Name               vg1
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  6
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                3
  Open LV               3
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               1,69 TiB
  PE Size               4,00 MiB
  Total PE              442338
  Alloc PE / Size       179200 / 700,00 GiB
  Free  PE / Size       263138 / 1,00 TiB
  VG UUID               Sd2mdO-WduR-Ooaf-CBXS-yzjT-55PA-28QBtp

  --- Volume group ---
  VG Name               vg0
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  3
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               2
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               132,87 GiB
  PE Size               4,00 MiB
  Total PE              34015
  Alloc PE / Size       33792 / 132,00 GiB
  Free  PE / Size       223 / 892,00 MiB
  VG UUID               0OkfY2-1wPb-tl7h-YKW7-i96w-vrZo-xfvgDZ
LV setup
Code:
# lvdisplay
  --- Logical volume ---
  LV Path                /dev/vg1/vz
  LV Name                vz
  VG Name                vg1
  LV UUID                cPzeKh-DjYs-p9aO-0dl0-IL11-XykV-x3HFPP
  LV Write Access        read/write
  LV Creation host, time [removed]
  LV Status              available
  # open                 1
  LV Size                100,00 GiB
  Current LE             25600
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:2

  --- Logical volume ---
  LV Path                /dev/vg1/backup
  LV Name                backup
  VG Name                vg1
  LV UUID                5QS8c1-Jqnd-T2ip-P9de-O4dH-EI6p-nqYM5H
  LV Write Access        read/write
  LV Creation host, time [removed]
  LV Status              available
  # open                 1
  LV Size                300,00 GiB
  Current LE             76800
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:3

  --- Logical volume ---
  LV Path                /dev/vg1/vm-102-disk-1
  LV Name                vm-102-disk-1
  VG Name                vg1
  LV UUID                un17YT-5KHc-k9al-oTLa-i5nu-jntO-NhCaM2
  LV Write Access        read/write
  LV Creation host, time [removed]
  LV Status              available
  # open                 1
  LV Size                300,00 GiB
  Current LE             76800
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:4

  --- Logical volume ---
  LV Path                /dev/vg0/root
  LV Name                root
  VG Name                vg0
  LV UUID                XWMuUx-elxy-kfSY-vjua-jqez-9Jtv-8UUu5k
  LV Write Access        read/write
  LV Creation host, time [removed]
  LV Status              available
  # open                 1
  LV Size                100,00 GiB
  Current LE             25600
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:0

  --- Logical volume ---
  LV Path                /dev/vg0/swap
  LV Name                swap
  VG Name                vg0
  LV UUID                QXxYr0-5qjJ-7kEG-AJf7-nJ16-0LTr-vzqONF
  LV Write Access        read/write
  LV Creation host, time [removed]
  LV Status              available
  # open                 1
  LV Size                32,00 GiB
  Current LE             8192
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:1
fstab of host for VGs
Code:
/dev/vg0/root  /  ext3  defaults 0 0
/dev/vg0/swap  swap  swap  defaults 0 0
/dev/vg1/vz  /var/lib/vz  ext3  defaults 0 0
/dev/vg1/backup /pvebackup ext3 defaults 0 0
pve storage.cfg
Code:
dir: local
        path /var/lib/vz
        content iso,vztmpl,rootdir
        maxfiles 0

dir: backup
        path /pvebackup
        content backup
        maxfiles 0

lvm: images
        vgname vg1
        content images

As you can see, I used ext3 as the filesystem on the host (since I read it delivers the best performance) and LVM to manage the disk space and provide storage for my KVM VMs. VM 102 is only a test VM so far.

The host system will be used to run 3-4 KVM VMs (planned so far: 2 x Debian, 1 x Win7).

Here are some questions I still have for that project.

1. Is the setup of the PVE host system in a good state performance-wise?

2. When I set up a Debian KVM VM, how should I partition it? Let the installer use the entire disk directly, or let it use the entire disk with another LVM layer on top? ext3 or ext4?

3. I have another server with some running KVM VMs, all of them using local storage (directory) with raw disk images. Can I move a full backup of them to the new server and restore them to a VM using the LVM storage?


Thanks a lot for your advice.
 
1. Is the setup of the PVE host system in a good state performance-wise?
Since you have followed the recommendations it looks fine.
2. When I set up a Debian KVM VM, how should I partition it? Let the installer use the entire disk directly, or let it use the entire disk with another LVM layer on top? ext3 or ext4?
Using LVM on top of LVM gives a small performance penalty, but using LVM inside the VM makes it very easy to increase the space later on. I would personally choose ext4 for VMs, but YMMV.
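For illustration, growing such a VM later would look roughly like this (a sketch with example names; the VM ID, disk name and guest VG/LV names are placeholders, and if the guest PV sits on a partition, that partition has to be enlarged first):
Code:
# On the PVE host: enlarge the VM's virtual disk (example VM ID and disk)
qm resize 101 virtio0 +20G

# Inside the Debian guest, once the PV has the extra space:
pvresize /dev/vda                    # only if the PV sits directly on the disk
lvextend -L +20G /dev/vg/var         # grow the chosen LV
resize2fs /dev/vg/var                # grow ext4 online to fill the LV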
3. I have another server with some running KVM VMs, all of them using local storage (directory) with raw disk images. Can I move a full backup of them to the new server and restore them to a VM using the LVM storage?
It should be possible using dd. See http://blog.allanglesit.com/2011/03/linux-kvm-converting-raw-disk-images-to-lvm-logical-volumes/
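A rough sketch of that approach (all names and sizes are examples; the target LV must be at least as large as the raw image, and the VM has to exist on the new server so the volume can be attached to it):
Code:
# On the new server: create an LV big enough for the source image
lvcreate -L 50G -n vm-103-disk-1 vg1

# Copy the raw image onto the LV
dd if=/path/to/vm-103-disk-1.raw of=/dev/vg1/vm-103-disk-1 bs=1M

# Finally reference the volume (images:vm-103-disk-1) from the VM's config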
 
Hello,
thank you for your help.

I have some follow up questions:

4. Is it possible to assign CPU cores (threads) to a specific KVM VM? I want to assign 4 cores to one VM and another 4 to the other two VMs. I am fully aware these are only threads, not cores.

5. I am having problems using SPICE under Win7 x64. I downloaded virt-viewer-x64-0.6.0.msi and am getting the error "Unable to connect to graphic server D:\Downloads\[filename].vv".

Here's a debug log:
Code:
C:\>(remote-viewer.exe:5112): remote-viewer-DEBUG: Insert window 0 00000000009740A0
(remote-viewer.exe:5112): remote-viewer-DEBUG: Couldn't load configuration: No such file or directory
(remote-viewer.exe:5112): remote-viewer-DEBUG: Insert window 1 0000000000974760
(remote-viewer.exe:5112): remote-viewer-DEBUG: Insert window 2 0000000000974FA0
(remote-viewer.exe:5112): remote-viewer-DEBUG: fullscreen display 0: 0
(remote-viewer.exe:5112): remote-viewer-DEBUG: fullscreen display 1: 0
(remote-viewer.exe:5112): remote-viewer-DEBUG: fullscreen display 2: 0
(remote-viewer.exe:5112): remote-viewer-DEBUG: Opening display to d:\Downloads\Yj_lhuXG.vv
(remote-viewer.exe:5112): remote-viewer-DEBUG: Guest d:\Downloads\Yj_lhuXG.vv has a spice display
(remote-viewer.exe:5112): remote-viewer-DEBUG: After open connection callback fd=-1
(remote-viewer.exe:5112): remote-viewer-DEBUG: Opening connection to display at d:\Downloads\Yj_lhuXG.vv
(remote-viewer.exe:5112): remote-viewer-DEBUG: New spice channel 00000000009CBBD0 SpiceMainChannel 0
(remote-viewer.exe:5112): remote-viewer-DEBUG: notebook show status 00000000009CF090
(remote-viewer.exe:5112): remote-viewer-DEBUG: notebook show status 00000000009CF330
(remote-viewer.exe:5112): remote-viewer-DEBUG: notebook show status 00000000009CF5D0

(remote-viewer.exe:5112): GSpice-WARNING **: Socket I/O timed out
(remote-viewer.exe:5112): remote-viewer-DEBUG: main channel: failed to connect
(remote-viewer.exe:5112): remote-viewer-DEBUG: Disposing window 00000000009740A0

(remote-viewer.exe:5112): remote-viewer-DEBUG: Disposing window 0000000000974760

(remote-viewer.exe:5112): remote-viewer-DEBUG: Disposing window 0000000000974FA0

(remote-viewer.exe:5112): remote-viewer-DEBUG: Set connect info: (null),(null),(null),-1,(null),(null),(null),0
 
By cores, do you mean CPU cores in the host? If so, this is not possible.

Concerning your Windows question: I don't use Windows, so I cannot help you there.
 
Hi,
yes, I meant the CPU cores. I read some information about KVM being able to set CPU affinity for VMs. So this is not possible using Proxmox, I guess.
Thanks again for your fast answer.
 
I hardly see the point in this, since it goes against the fundamental principle behind virtualization. The same applies to passing hardware through to VMs, since it effectively breaks the possibility of migrating them.
 
Although it is not possible to assign a specific CPU core to a specific VM, it is possible to assign a specific CPU (on a multi-CPU motherboard) to a group of VMs. I have never done it, but I vaguely remember seeing it somewhere. As mir said, those VMs will not be able to migrate to another node since they will be physically tied to that CPU.
I do not believe there is even a remote possibility of assigning specific cores to specific VMs. It indeed defeats the purpose of virtualization.
What is the reason you are thinking that way? To ensure those VMs always have enough resources?
 
Yes, that was indeed the reason: so one VM cannot interfere with the CPU "power" of the other VMs. I had that setup some years ago on a XenServer.

Googling "kvm cpu affinity" or "xen cpu affinity" gives some results that might have misled me.
 
Yes, that was indeed the reason: so one VM cannot interfere with the CPU "power" of the other VMs. I had that setup some years ago on a XenServer.

Googling "kvm cpu affinity" or "xen cpu affinity" gives some results that might have misled me.

I don't think this is a cause for concern at all. As long as your physical CPU does not max out, it is all OK. The CPU will only max out when more cores have been assigned to many VMs and all VMs consume CPU simultaneously. It is very unlikely that all VMs will have 100% CPU utilization, though of course your case may be different. With what you already proposed, assigning 4 cores to 1 VM and the other 4 cores to 2 VMs, you will really have no issue at all even if they are running at 100% CPU utilization, because the total assigned core count is still 8.
 
Yes,
what I was trying to achieve with that earlier Xen setup was the following: back then I also had 4 cores with 8 threads, which gave you 8 CPU IDs: 0-7.
What I did on that Xen host was assign one VM to CPU IDs 0-3 and the other 2 VMs to 4-7. The goal was to never let the other 2 VMs interfere with the CPU power of the one VM that had cores 0-3.
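For comparison, the rough KVM equivalent of that Xen pinning would be taskset on the qemu process, outside of anything Proxmox manages (a sketch only; the VM ID and pidfile path are assumptions, and the pinning is lost when the VM restarts):
Code:
# Pin the KVM process of VM 101 to host CPU IDs 0-3 (example VM ID)
PID=$(cat /var/run/qemu-server/101.pid)
taskset -pc 0-3 $PID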

Nevertheless, if this is not possible in Proxmox anyway, I can live with that. I'll keep running some tests with my new setup and might bring some more questions.

If anyone has more advice on my questions that were not answered, I am happy to hear more :)
 
Here is what I am sure about:

1- If your processor has 8 logical cores, distributed as 4 physical cores with 8 threads, then each VM will see every thread as a core (the physical hardware is not really like that, but the same happens without virtualization: every OS sees a processor thread as a core).
2- On the other hand, if you have 8 processor threads in total, you can have several VMs with 8 cores configured in each one without a hardware problem, although it may affect performance through process overload.

Here is what I am unsure about:
- Whether a processor that supports virtualization knows how to distribute its resources efficiently and balance them between its cores to get the best performance.

If someone knows whether a processor that supports virtualization can distribute its resources efficiently and balance them between its cores for the best performance, please report it here.
 
I think, like a few other threads in this forum, this one is also in the process of being hijacked to discuss the nature and mechanics of CPUs. :)

Simply put, a CPU with virtualization technology knows how to efficiently distribute load across multiple cores/threads, regardless of how many VM cores are assigned. It is possible to assign a total of, let's say, 20 cores to multiple VMs on an 8-core processor and actually run all VMs without issue. The only time an issue will occur is when all VMs try to operate at maximum load. Regardless of the number of virtual cores assigned, the physical CPU will always try to rebalance the total load the way it sees fit.
I am not sure if there is a formula for how many virtual cores should be assigned on a physical CPU, but I think it depends on each scenario and thus will vary.

@Okumba, to answer your question about SPICE on x64: well, I do not have an answer. I have also had this issue for several months now. I tried many things, but I simply cannot use SPICE on Windows 64-bit. It works great on 32-bit. I know there are other threads going about this issue, but I don't think anybody has found a real solution. It is not a firewall-related issue, since I opened all possible ports without any result.
 
Simply put, a CPU with virtualization technology knows how to efficiently distribute load across multiple cores/threads, regardless of how many VM cores are assigned. It is possible to assign a total of, let's say, 20 cores to multiple VMs on an 8-core processor and actually run all VMs without issue.

Wow symmcom, I didn't know that. Do you have an official web link with this information?

@Okumba
Here is some information that will be useful for you (see cpulimit and cpuunits):
https://pve.proxmox.com/wiki/Manual:_vm.conf
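A minimal sketch of how those two options could look in a VM's config, going by that wiki page (the VM ID and values are examples, not recommendations):
Code:
# /etc/pve/qemu-server/101.conf (excerpt, example values)
cores: 4
cpulimit: 4       # cap this VM at the equivalent of 4 cores of CPU time
cpuunits: 2048    # relative scheduling weight; higher = bigger share under contention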
 
