Hi,
So I've been slowly working my way through issues in our Proxmox deployment.
Referring mainly to this subtopic:
https://forum.proxmox.com/threads/windows-vm-rdp-issues.25619/
But my main set of issues for backstory is here:
https://forum.proxmox.com/threads/proxmox-ceph-issue.25525/
In...
Q7. Answered above. Zenoss.
Q8. Wouldn't it be nice to have a bigger budget, but it is what it is. I have to get 300 VMs up on Proxmox servers and don't have the funds for toys like Cisco switches. So I get to suffer. :)
Q11. Do you mean traffic in or out? Or across to each other...
We use cloned, manually installed copies of Windows VMs, because we are slammed. :)
I inherited a global IT system that never had an IT director. 3 countries, 4 main sites. It's been a fun first year. And they only know Windows.
It is what it is.
I actually have a Zenoss system ready for...
Well, I think I figured it out: not the weird RDP issue above, but the overall issue.
I changed:
Computer Configuration > Windows Settings > Security Settings > Network List Manager Policies > Unidentified Networks.
Basically telling it to just connect, you stupid Windows box. ;)
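For anyone searching later, the key value in that policy (a sketch; the dialog wording may vary slightly by Windows version) is the location type, which we set to Private so Windows stops treating the unidentified network as Public and blocking access:

    Unidentified Networks > Properties
        Location type: Private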
Then I fixed...
Hi,
We keep experiencing random RDP issues where users cannot connect to Windows Server 2008 R2 in a Proxmox VM.
So I went into the console to try and connect directly, and while the IT user worked fine, the end user did this (see picture).
After about 5 minutes I was able to connect...
In order:
Yes.
Yes.
Yes. 14 OSDs.
We can add OSD drives; however, they cannot be used as shared storage because of the 7-monitor limit in Ceph.
We do not use SSDs, FWIW.
The issue isn't that we can't have more OSDs; it's that we can't use shared storage for HA, because each of the first 7 nodes requires a...
Hi, sorry for the delay; I get crazy busy. :)
Right so far.
We actually have 2 clusters of 4 nodes, one per C6100 (so two physical C6100 servers), running in HA mode.
They have 3x1TB 7200 RPM drives each.
Each HA node
No. We have 5 racks. Generally 2 clusters (as above) per rack. Each switch...
Hi,
The theoretical limit is 32 nodes for a Ceph HA cluster.
We have 7 nodes, and our OSD setting is 2 minimum, 3 maximum.
We can't add additional OSD drives beyond 7 nodes, but we can have more nodes.
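For reference, if that "2 minimum, 3 maximum" is the pool replica count, it would normally live in ceph.conf as something like the sketch below (assuming cluster-wide defaults rather than per-pool overrides):

    [global]
    osd pool default size = 3      # replicas kept per object
    osd pool default min size = 2  # replicas needed to keep serving I/O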
What are we doing wrong?
Is this, in fact, 'wrong'?
Thanks.
Yes. They effectively shut down our m200 VPN, which we use for internal company access to our servers.
When the C6100 was brought online, everything went down. Took it offline, everything went back up.
Each NIC has a 1Gb port for regular traffic, and we have Dell PowerConnects for 10Gb SFP+...
Hi,
The one that blew up lost its HA config. When rebooted, it came back, then lost it again. It flooded the network with so much traffic that the firewall locked up. So we killed it. It was just 1 server.
Today was different; after checking the conf files, everything looked fine, so I looked at the...
Hi,
We're running Ceph HA systems, basically a 4-node setup of C6100s.
We had one blow up and take the entire datacenter down (our 5 racks).
In the meantime, two other Ceph clusters have blown up. However, they have not taken our DC down.
I'd rather not have this happen, obviously, so could...