Hi,
I'm setting up a hook script for vzdump with mail notification enabled.
I don't get the same messages on the console and in the e-mail.
That's a problem, because all the job-start messages are missing (lines:
INFO: HOOK: start job-start
INFO: HOOK-ENV...
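For reference, a minimal hook sketch; the phase names follow the example vzdump-hook-script.pl shipped with pve-manager, but the output format here is illustrative, not taken from the original setup:

```shell
#!/bin/sh
# Minimal vzdump hook sketch. vzdump calls the script with the phase as
# the first argument; phase names follow the example vzdump-hook-script.pl
# shipped with pve-manager. The echo format is illustrative only.
hook() {
    phase="$1"
    case "$phase" in
        job-start|job-end|job-abort)
            echo "HOOK: $phase"
            ;;
        backup-start|backup-end|backup-abort|log-end|pre-stop|pre-restart)
            # per-VM phases also receive the backup mode and the VMID
            echo "HOOK: $phase mode=$2 vmid=$3"
            ;;
    esac
}
# a real hook file would end with:  hook "$@"
```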
Thanks for your answer.
But I can't proceed the way you propose (I first need to test the servers on 2.1...), and a backup/restore would take too much time, which means downtime...
I wanted both versions to access the same array, but of course not to use the same VM (and the same LV) at the same time
I would...
Thanks Tom for your answers.
1) I'll check
2) When you tell me to be careful, you mean careful not to start the same machine twice, right? I only manage a few VMs (20), and I'll manually move the conf file of each machine (after having stopped it), so there should be no problem.
I was afraid...
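The conf-file move can be sketched like this; the 1.x config path, destination directory, and VMID below are assumptions, not details from the thread:

```shell
# Sketch: after stopping a VM on the old node, move its config out of the
# active directory so that node can no longer start it.
move_conf() {
    vmid="$1"; src="$2"; dst="$3"
    mkdir -p "$dst"
    mv "$src/$vmid.conf" "$dst/"
}
# hypothetical per-VM step on the 1.x node (paths/VMID are examples):
#   qm stop 101
#   move_conf 101 /etc/qemu-server /root/moved-confs
```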
Hi,
I'm working on a migration from 1.7 to 2.1.
I'm really impressed by the improvements I can see between the two versions! Great job, guys!
I've got several questions:
1) I've mapped the old NFS storage I use to host my .iso files. It connects on 2.1, I can see the allocated space and so on...
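For comparison, a storage.cfg entry for such an NFS ISO store normally looks like the fragment below; the storage ID, server address, and export path are placeholders, not the real values:

```
nfs: old-iso-store
        path /mnt/pve/old-iso-store
        server 192.168.1.10
        export /export/iso
        content iso
        options vers=3
```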
I've migrated my VM from Xen (source) to KVM. The kernel was a custom one, so some precautions were needed:
http://forum.proxmox.com/threads/391-Migration-of-servers-to-Proxmox-VE?p=21252#post21252
if you use a fully virtualized VM (without a specific kernel):
* create a VM via Proxmox with...
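If the disk has to be carried over too, a raw Xen image can be converted for KVM with qemu-img; this is a generic sketch with made-up paths, not the procedure from the linked thread:

```shell
# Convert a raw Xen disk image to qcow2 for the new KVM VM.
# Source and target paths in the usage comment are hypothetical examples.
xen_to_kvm_disk() {
    qemu-img convert -f raw -O qcow2 "$1" "$2"
}
#   xen_to_kvm_disk /var/lib/xen/images/web01.img \
#       /var/lib/vz/images/101/vm-101-disk-1.qcow2
```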
Hi everybody ;)
I've got a performance and disk-space problem on my SAN (DataCore SANmelody), due to the fact that I've created/deleted too many VMs on the iSCSI LVM.
DataCore suggests that we zero the disks before doing lvremove, to tell the SAN the space is really available and to allow SANmelody to...
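The zero-then-remove step can be sketched as below; the LV path is an assumption, and dd stops on its own at the end of a block device:

```shell
# Overwrite a volume with zeros so the SAN can mark its blocks as free,
# then remove it. conv=fdatasync forces a flush before dd exits.
zero_lv() {
    dev="$1"; count="$2"
    dd if=/dev/zero of="$dev" bs=1M ${count:+count=$count} conv=fdatasync
}
# hypothetical LV name:
#   zero_lv /dev/vg0/vm-101-disk-1
#   lvremove -f /dev/vg0/vm-101-disk-1
```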
seems to be due to the write cache on the iSCSI target
with cache:
mwhinpo:/home/whinpo# dd if=/dev/zero of=/mnt/arf/bigfile bs=1024k count=1000
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 3.22445 s, 325 MB/s
mwhinpo:/home/whinpo#
without cache...
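A note on the numbers: with that default dd invocation, most of the 325 MB/s is the Linux page cache. Re-running it with oflag=direct (bypass the cache) or conv=fdatasync (flush before dd reports its timing) gives the real target speed; a sketch:

```shell
# Write test that gives realistic numbers: mode "direct" opens the file
# with O_DIRECT and bypasses the page cache; anything else flushes with
# conv=fdatasync before dd prints its throughput.
dd_write() {
    out="$1"; count="${2:-1000}"
    case "${3:-fdatasync}" in
        direct) dd if=/dev/zero of="$out" bs=1024k count="$count" oflag=direct ;;
        *)      dd if=/dev/zero of="$out" bs=1024k count="$count" conv=fdatasync ;;
    esac
}
# same test as above, cache bypassed:
#   dd_write /mnt/arf/bigfile 1000 direct
```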
Hi,
I've noticed that scp'ing a file between 2 Linux VMs doesn't go faster than 100 Mbit/s.
I've checked my network, my cards... but everything was fine.
I've tested with 2 VMs on the same node: same speed...
Then I decided to dd directly to the disk of a Linux VM...
and the speed is...
just had the same problem today...
trying to restart my brand-new VM hosting the NFS backups...
the GUI was frozen, no way to do qm list or to list /mnt/pve...
the VMs were running without issue, btw...
the only way I found was to modify storage.cfg on one node where nothing was running...
I don't understand why you put Dell twice in your multipath.conf...?
I think it should appear there only once...
if you look at the doc I've put in the wiki, you'll see what I have in my multipath.conf for a DataCore system...
http://pve.proxmox.com/wiki/ISCSI_Multipath
stop multipath, remove one...
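For reference, each storage vendor normally gets exactly one device block in multipath.conf; the values below are placeholders (check the output of `multipath -ll` for the real vendor/product strings, and see the wiki page above for the DataCore variant):

```
devices {
    device {
        vendor               "DataCore"   # placeholder
        product              "SANmelody*" # placeholder
        path_grouping_policy failover
    }
}
```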