Yeah, it's a production cluster with 4 nodes running 1.9. We are awaiting new HP servers along with an HP SAN. We will build the new cluster with 2.x and migrate the VMs to it.
We are currently testing 2.x in our lab on old equipment. But until then, the production servers are...
Hello,
The server went down again.... with the same output in the logs.
It's an Adaptec 6405 on a SuperMicro board (almost new, with dual X5672 processors). I did a firmware update on the RAID card a few months ago; I will check tomorrow whether any other updates are available for the card and the motherboard...
A few VMs went down on our master node, and I found the following in /var/log/syslog just before they went down:
Apr 27 19:29:49 co-ve-001 kernel: aacraid: Host adapter abort request (1,0,0,0)
Apr 27 19:29:49 co-ve-001 kernel: aacraid: Host adapter abort request (1,0,0,0)
Apr 27 19:29:49...
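When this happens I also dump the controller's view of things with Adaptec's arcconf tool, roughly like this (a sketch, assuming arcconf is installed and the card is controller 1; adjust the numbers for your setup):
co-ve-001:~# arcconf getconfig 1 AD      # controller, firmware and battery status
co-ve-001:~# arcconf getconfig 1 LD      # logical drive state and cache settings
co-ve-001:~# arcconf getlogs 1 EVENT     # controller event log around the time of the aborts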
OK guys, here is the final setup:
1x HP StorageWorks Smart Array P2000 G3 FC/iSCSI SAN w/ 12x 300GB SAS 15K SFF
1x HP ProLiant DL385 G7 Server, 2x Opteron 6238 w/ 24GB RAM & HDDs (+ our current SuperMicro nodes (4x))
2x Cisco Switch WS-C2960S-24TS-L (4x SFP)
+ APC UPS/PDU Kit
All of this for...
For my situation, buying is a must, for sure! We are a startup and can't afford stress and long downtime if something happens. So far we have Dell, HP, Supermicro, QNAP... what else?
Things to consider: we need Fibre Channel, and I/O performance is a must!
Hello,
We have a $30K budget to expand our current setup and I'm looking for advice about the storage. Right now we have 4 servers running as a Proxmox VE 1.9.x cluster with the following specs:
1U SuperMicro Barebone
Dual Intel Xeon X5672 Processors
48GB RAM
4 X 320GB 15K
Adaptec 6405, RAID-10...
Bump,
I'm interested in the same setup. Right now we use 4 nodes and all VMs are located on each node's local filesystem. I have a budget of $20,000 to expand our setup and I would like to move ALL VMs onto a NAS or SAN. In brief, I want CPU/RAM processing handled by the nodes and external storage for...
Thanks for your reply. Last night I modified the cache setting and enabled it, and everything is OK now.
co-ve-001:~# pveperf
CPU BOGOMIPS: 141029.42
REGEX/SECOND: 977842
HD SIZE: 94.49 GB (/dev/mapper/pve-root)
BUFFERED READS: 571.11 MB/sec
AVERAGE SEEK TIME: 4.36 ms...
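For anyone hitting the same issue, the change was along these lines (a minimal sketch, assuming arcconf, controller 1 and logical drive 1; only turn on write-back if the controller has a healthy battery/flash backup unit):
co-ve-001:~# arcconf getconfig 1 LD                    # check current read/write cache settings
co-ve-001:~# arcconf setcache 1 logicaldrive 1 ron     # enable the read cache
co-ve-001:~# arcconf setcache 1 logicaldrive 1 wb      # enable write-back caching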
FIXED: Adaptec 6405 RAID Controller Performance Issue
Hello everyone,
Right now we're having a performance issue with one of our nodes. VMs (KVM) are really slow, crashing, etc...
Server Specs & Data
24 x Intel(R) Xeon(R) CPU X5670 @ 2.93GHz
4 x SAS 6Gb/s, hardware RAID5 (Adaptec 6405)
96GB DDR3 1333MHz...
OK, I see, it's late for you now! Do you prefer to talk on Skype or something? Then, if we can fix this, I'll write a debrief about the problem -> solution.
Hey Udo, thanks for your time BTW!
Well, right now with the current configuration on the nodes (we have 4 nodes), I can migrate a VM (configured as part of a cluster, so 2 NICs, 1 public and 1 private) to another node and the VM is still able to talk with the other VMs in the virtual cluster.
Let's say...
You mean on the host?
Keep in mind, I need all the hosts (nodes) to be able to talk to each other because I have VMs located on different nodes that talk to each other. Let's say I have a VM cluster with 2 HAProxy and 2 web servers: I deploy the web servers on nodes 1 and 2 and they are...
Situation
A few VMs have 2 NICs, one for the public network and one for the LAN, which uses the vmbr10 bridge. A database server is currently hosted on the Proxmox host and needs to be moved to a physical server outside of the Proxmox host.
The Proxmox hosts (nodes) are configured as a...
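To give an idea of the layout (a sketch only; eth1 and the addresses are placeholders, not our exact values), vmbr10 on a node looks roughly like this in /etc/network/interfaces, with a physical port patched into the same LAN switch the new database server would sit on:

auto vmbr10
iface vmbr10 inet static
        address 10.10.10.1
        netmask 255.255.255.0
        bridge_ports eth1
        bridge_stp off
        bridge_fd 0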
Humph, I have the same problem: I installed PVE with the default configuration, then assigned vmbr0 to a KVM guest; adding an IP address to the KVM guest is OK, but when I add an alias (eth0:0) within the guest, it doesn't work at all.
While pinging the guest from the outside, I can see the ICMP...
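For reference, the alias inside the guest is defined in the usual Debian way (a sketch; the addresses are placeholders, not our real ones):

# /etc/network/interfaces inside the KVM guest
auto eth0
iface eth0 inet static
        address 192.0.2.10
        netmask 255.255.255.0
        gateway 192.0.2.1

auto eth0:0
iface eth0:0 inet static
        address 192.0.2.11
        netmask 255.255.255.0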