Actually, when I made node1 go down and tried to move the VM config file from /etc/pve/nodes/node1/qemu-server to node2/qemu-server, I got permission denied.
So how can I "start" the VM on node2 if I can't copy the config?
How should I do a proper MANUAL migration from dead node1 to online node2?
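(For anyone hitting the same wall: /etc/pve is the pmxcfs cluster filesystem, and it goes read-only when the surviving node loses quorum — hence the permission denied. A sketch of the usual recovery, assuming a VMID of 100 as an example; only do this while node1 really is dead:)

```shell
# /etc/pve goes read-only without quorum; tell the surviving node to
# expect only one vote so it becomes quorate again (node1 must stay down!)
pvecm expected 1

# now the move works; VMID 100 is just an example, use your own
mv /etc/pve/nodes/node1/qemu-server/100.conf \
   /etc/pve/nodes/node2/qemu-server/

# start the VM on node2
qm start 100
```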
DRBD is...
I noticed that even if I create a qcow2 image for a new machine of, let's say, 50GB, it creates a 50GB file.
When I run the conversion described below, the image shrinks from 50GB to something like 1KB, and then grows again while I install Linux on it.
Anyway, this is the way to save...
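(What's going on above is sparse allocation: the file's apparent size and the blocks actually written to disk are two different numbers, and `qemu-img convert` skips the unwritten blocks, which is why the image collapses to almost nothing. The same effect can be seen with plain coreutils; the filename is made up for the example:)

```shell
# Demonstrate sparse allocation with plain coreutils (same idea as a
# freshly created image: big apparent size, almost no blocks allocated).
truncate -s 50G disk.img                    # make a 50 GiB sparse file
stat -c 'apparent: %s bytes' disk.img       # prints apparent: 53687091200 bytes
du -k disk.img                              # allocated KB on disk: ~0
rm -f disk.img
```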
Hello guys, I have a few questions.
I have a 2-node Proxmox cluster, newest 3.2 PVE, with no external storage, just the free available space on each node as /dev/mapper/pve-data.
0. Is Ceph the best way to do shared storage on the drives that are in node1 and node2, with completely no other external NAS ...
You mean it doesn't work with containers?
No problem, I will be using only VMs with Debian Wheezy 7 on them.
But I need to configure VE to survive one machine failing, and without any other external NAS :)
Wish me luck :P
Yeah, but external storage will also have a bigger chance of failure than two servers :)
I don't want to use an external NAS.
Tell me if I am correct: can Ceph be used as shared storage on those two servers?
So each of them will have a copy of the same VMs, synced in real time?
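(For exactly two nodes with no external box, the usual answer is DRBD rather than Ceph: Ceph really wants at least three nodes for its monitor quorum, while DRBD mirrors a block device between two nodes synchronously, so each node holds a real-time copy. A minimal resource sketch — the hostnames, IPs, and backing devices below are placeholders, not your actual setup:)

```text
# /etc/drbd.d/r0.res -- minimal sketch; node names must match `uname -n`,
# addresses and backing disks are placeholders
resource r0 {
    protocol C;                        # synchronous replication
    on node1 {
        device    /dev/drbd0;
        disk      /dev/pve/drbd-backing;
        address   10.0.0.1:7788;
        meta-disk internal;
    }
    on node2 {
        device    /dev/drbd0;
        disk      /dev/pve/drbd-backing;
        address   10.0.0.2:7788;
        meta-disk internal;
    }
}
```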
RAM...
Hello guys,
There is so much info on Google, but I can't find a suitable solution.
Tell me, is this possible with Proxmox 3.3:
I have two identical hardware servers, Dell PowerEdge R620.
Now I want to use High Availability like this:
- there is no shared storage, so there is only disk space on node1...
Tell me, is ext3 really that much slower than ext4?
[ 7.768907] EXT4-fs (dm-2): mounting ext3 file system using the ext4 subsystem
[ 7.806558] EXT4-fs (dm-2): mounted filesystem with ordered data mode. Opts: (null)
[ 7.811829] EXT4-fs (sda1): mounting ext3 file system using the ext4...
I can't find anything in the RAID configuration utility in the BIOS, but I will try once more to connect the disk directly to the motherboard, without the RAID controller, and check performance again.
If the RAID somehow disables the cache built into the disk, and I connect this disk directly to the motherboard without RAID...
ALL powered OFF:
root@proxmox:~# pveperf
CPU BOGOMIPS: 54277.44
REGEX/SECOND: 1760548
HD SIZE: 94.49 GB (/dev/mapper/pve-root)
BUFFERED READS: 69.04 MB/sec
AVERAGE SEEK TIME: 25.58 ms
FSYNCS/SECOND: 14.30
DNS EXT: 57.26 ms
DNS INT: 0.73 ms...
Other results, from when I have 4 VMs powered on.
In about 8-10 hours I will run the same tests on my Proxmox with all VMs powered OFF.
As I see it, the system alone works pretty well, but when the VMs get powered on the whole system slows down; perhaps there is something wrong with one VM.
Maybe...
I checked, and I don't even have a BBU option in the RAID; this controller doesn't support a BBU.
So again: hdparm -tT /dev/sda actually gives pretty good results, but when I start a VM, /dev/sda is 95% busy at only 7MB/s read, so what the heck?
P.S. - I even checked performance WITHOUT RAID, so with...
Hi guys, I have a very big problem with my Proxmox.
I have two 7200rpm WD Red 2TB drives, and my disk /dev/sda is 50-90% busy all the time (I see this in atop).
The disks are connected via an INTEL RAID SASWT4I.
Tell me, isn't my FSYNCS/SECOND: 6.48 too low?
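(For scale: on a healthy spinning disk with its write cache working, pveperf typically reports from a few hundred up to a few thousand FSYNCS/SECOND, so 6.48 suggests every sync is going straight to the platters. You can approximate the same measurement with dd, which forces each write to stable storage before continuing; the filename is made up:)

```shell
# 200 x 4 KiB synchronous writes: oflag=dsync makes each write hit
# stable storage before dd continues, similar to pveperf's fsync test;
# the rate dd reports at the end is the number to compare between setups
dd if=/dev/zero of=synctest.bin bs=4k count=200 oflag=dsync
rm -f synctest.bin
```

If the rate is miserable, it is worth checking whether the drive's write cache is enabled at all, e.g. with `hdparm -W /dev/sda`.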
What can i do ?
My system:
Linux...