Hello everyone! Please help me understand what causes this BSOD.
Windows Server 2003 R2 Standard x64 SP2 with all updates, including the specific Microsoft hotfixes for termdd.sys and winsrv.dll that address BSODs.
The server runs Terminal Services and has Citrix XenApp 5.0 for Windows Server 2003 installed.
Citrix application...
Hi! I'm also from Russia! I use Proxmox in production and also ran into random BSODs on Win Srv x64 R2 SP2. I did as you advised... we'll see. Let's keep in touch :) Maybe we can help each other. Is there a Russian Proxmox community?
Hi!
GFS2 started working, but with a bunch of errors. A node does not shut down properly; it hangs on unmounting, and it boots normally only if the other node is running. I think GFS2 would work in a cluster of three nodes, but I have only two. In the end I decided not to use GFS2 and to use LVM as advised...
Hi All !
Please help! I'm a newbie in Linux.
I ran an LMbench test on my Proxmox host and on a VM.
Please tell me whether these results are normal:
pve-n1 - Proxmox node
centos-pg - CentOS KVM VM
L M B E N C H 2 . 0 S U M M A R Y
------------------------------------
Basic system...
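In case someone wants to reproduce these numbers: LMbench 2.0 is normally run from its source tree, roughly like this (the directory name is an assumption, adjust to wherever you unpacked it):

cd lmbench-2.0
make results   # interactively configures, then runs the full benchmark suite
make see       # prints the summary tables, like the one above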
Hi, thanks! Now I actually understand.
But I can't understand why I previously wrote /dev/mapper/mpath0 in the LVM filter, yet now the PV shows up as /dev/disk/by-id/dm-name-mpath0.
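For reference, /dev/mapper/mpath0 and /dev/disk/by-id/dm-name-mpath0 are just two udev aliases for the same device-mapper device, so LVM may display whichever one it scans first. A minimal lvm.conf filter sketch that accepts only the multipath map and rejects everything else (the mpath0 name is taken from your setup):

# /etc/lvm/lvm.conf - accept the multipath device, reject all other paths to the LUN
filter = [ "a|^/dev/mapper/mpath0$|", "r|.*|" ]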
And I cannot decide what to write in fstab:
/dev/store/storelvm /mnt/vol1 gfs2 noatime,nodiratime 0 2
Is that line in fstab...
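For what it's worth, GFS2 entries in fstab are usually given fsck pass 0, since fsck.gfs2 must never run automatically at boot on a cluster filesystem, and often the _netdev option so the mount waits for the cluster network. A sketch, assuming your device and mount point:

/dev/store/storelvm /mnt/vol1 gfs2 noatime,nodiratime,_netdev 0 0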
Thank you for the response!
I understand that CTs can be stored on local storage or on NFS, but I bought a SAN to get high performance.
If I use local storage, I cannot get sufficient capacity and availability: if one node fails, I can lose the CTs stored on its local storage.
I solved this issue. The cluster name comes from /etc/cluster/cluster.conf, right?
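A minimal sketch showing where the name lives, using the "lso" name from earlier (node names are assumptions):

<!-- /etc/cluster/cluster.conf (fragment) -->
<cluster name="lso" config_version="1">
  <clusternodes>
    <clusternode name="pve-n1" nodeid="1"/>
    <clusternode name="pve-n3" nodeid="2"/>
  </clusternodes>
</cluster>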
But now a new problem:
root@PVE-N3:/mnt# mount -t gfs2 -o noatime /dev/sdb /mnt/data
node not a member of the default fence domain
error mounting lockproto lock_dlm
How do I add the node to the fence domain?
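In case it helps others: lock_dlm refuses to mount until the node has joined the fence domain. On the Red Hat cluster stack the usual sequence is something like the following (init script names may differ by distribution):

/etc/init.d/cman start   # starts cman and the fenced daemon
fence_tool join          # join the default fence domain
fence_tool ls            # verify this node is now listed as a member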
Thanks! I will try at the weekend and write about the results.
Now I'm trying to set up GFS2 on Proxmox VMs in VirtualBox. I have some questions: what cluster name can I use? In your example it is "proxprod"; I used "lso".
This is what I did:
Created a virtual HDD, /dev/sdb.
root@PVE-N3:/mnt# mkfs.gfs2 -t...
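For comparison, a full invocation typically looks like the line below: -t takes <clustername>:<fsname>, where the cluster name must match cluster.conf ("lso" above), -p selects the DLM lock protocol, and -j creates one journal per node. The filesystem name here is an assumption:

root@PVE-N3:/mnt# mkfs.gfs2 -p lock_dlm -t lso:gfsvol -j 2 /dev/sdb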
And for offline-only OpenVZ migration, what filesystem and storage configuration should be used?
I already have an IBM DS3512 SAN with a shared LUN via SAS HBA, and I want to use it as shared storage for VMs and VZ containers.
In this thread http://forum.proxmox.com/threads/8644-OpenVZ-Online-Container-Migration-Fails
rpuglisi : "I installed and configured GFS, defined it in Proxmox storage, created a container and was able to successfully migrated a container online."
I will also try GFS ...
Hello! Can you help me with setting up GFS on a shared LUN on a SAN? I have the same problem with OpenVZ.
https://forum.proxmox.com/threads/11473-File-system-type-on-shared-storage-%28SAN-IBM-DS3512%29
Hello! Please help: how do I use OpenVZ with online/offline migration on a shared LUN on a SAN via SAS?
https://forum.proxmox.com/threads/11473-File-system-type-on-shared-storage-%28SAN-IBM-DS3512%29
I have the same problem with OpenVZ. How did you solve this? My nodes connect to the SAN over SAS. I want to test using ext3 on top of LVM on the connected LUN, or using GFS2.
https://forum.proxmox.com/threads/11473-File-system-type-on-shared-storage-%28SAN-IBM-DS3512%29
What kind of storage type (iSCSI, NFS, local shared directory, LVM group) and where should I store the VZ containers to get support for "live" migration, or at least offline migration?
Let me remind you that I have a connection to the SAN with two SAS HBA adapters on each server, and my servers see the LUN...
Thanks for the quick response! Let me ask a few more questions about the shared drive.
Sorry for my English.
Does this mean that live migration of OpenVZ is impossible? If it is possible, how?
What connection options to the shared LUN on the storage system allow live migration?
QEMU can live migrate...
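For KVM this works because the disk image stays on the shared LUN and only the guest's RAM is transferred. A sketch of preparing the LUN as a shared LVM volume group (device and VG names are assumptions):

pvcreate /dev/mapper/mpath0          # initialise the multipath device as a PV
vgcreate san_vg /dev/mapper/mpath0   # create a VG visible to every node on the SAN
# then add san_vg in the Proxmox storage configuration as type LVM with "shared" enabled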
Hello!!
I have two IBM x3650 servers connected to an IBM System Storage DS3512 with SAS HBA adapters. I configured multipath-tools, and each server has access to the same shared LUN on the storage.
On this LUN I created an ext3 filesystem and mounted it on each server at /mnt/vol1. In the Proxmox web-management interface I added...
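A quick sanity check that both servers really see the same LUN is to compare the WWID reported on each node (output format varies by multipath-tools version):

multipath -ll        # the WWID at the start of each map should match on both servers
ls -l /dev/mapper/   # shows the mpath device the filesystem was created on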