For what it is worth, moving the client to something slightly newer than CentOS 5 should be more or less straightforward and painless. My vague recollection is that moving services over from a CentOS 5.x box to a 6.x box is not really a big deal; at least not as much change as when compared to...
Hi, for what it is worth: I did a small Proxmox-on-Cisco-hardware deployment a few years ago, and everything was fine, except for the minor detail that the on-board LSI RAID was the lowest-end, 'nearly-but-not-quite-fakeraid' controller I have ever seen. IO performance was more or less horrible out of the box. I...
Hi, small fun question: are you open to the idea of spinning up another VM, possibly Linux of some kind, that is otherwise similar to the Windows host (ie, same resource allocation, same underlying VM storage, etc)? The reason I ask is that in my experience, Win10 as an OS is sometimes "inconsistent as...
Hi, just to confirm
- live migration works as expected as a starting baseline?
- the ceph storage pool is tagged as type 'shared' in the storage manager WebUI ?
In my experience, if you start with <cluster of 3 nodes> and <live migrate works as expected as starting baseline> then <HA works...
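That baseline check can be sketched from the CLI as well as the WebUI. A hedged sketch, assuming a standard PVE 4.x+ cluster; the VMID (100) and target node name (node2) are placeholders, not anything from the original thread:

```shell
# Confirm the 3-node cluster has quorum
pvecm status

# Confirm the ceph pool is flagged shared in the storage config
# (look for a 'shared 1' line under the relevant storage entry)
cat /etc/pve/storage.cfg

# Establish the live-migration baseline by hand
qm migrate 100 node2 --online

# Only then enroll the VM with the HA manager and watch it
ha-manager add vm:100
ha-manager status
```

If the manual `qm migrate` fails, fix that first; HA layered on top of broken migration only makes debugging harder.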
Footnote for what it is worth.
Not sure what you mean by "NFS and Proxmox is hard". This is about one of the easier storage configs possible in Proxmox (aside from local storage, maybe). ie,
- set up an NFS export on your filer storage unit which has lots of disk. Allow the Proxmox host(s) RW...
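A minimal sketch of those steps; the export path, subnet, server IP, and storage ID ('filer-nfs') are all placeholders for your own environment:

```shell
# --- On the filer ---
# Add an export line to /etc/exports, e.g.:
#   /export/proxmox  10.0.0.0/24(rw,no_root_squash,sync)
# then re-read the exports table:
exportfs -ra

# --- On a Proxmox node ---
# Attach the export as a storage usable for disk images, ISOs, and backups:
pvesm add nfs filer-nfs --server 10.0.0.5 --export /export/proxmox \
    --content images,iso,backup

# Sanity check: the new storage should show up as active
pvesm status
```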
Hi, I just looked; the current nodes I've got at OVH are using the X10SDV-TLN4F motherboard.
The other ones were a bigger config I used last year, but I don't appear to have that motherboard documented, and I no longer have access.
Broadly speaking, I think - any standard server motherboard (ie...
Hi, small note for what it is worth (?) HA config in Proxmox 4.X is an entirely different beast from 3.X in my experience. In testing I did last year with a 3-node Supermicro cluster at a remote hosting centre (OVH), it just worked: no special fencing hardware needed, no fussing about how to configure...
Hi, for what it is worth: (a) dogma says that, despite what logic might otherwise suggest, an LSI RAID card running a bunch of single-disk "RAID0" volumes is **!!NOT!!** the same thing as converting the LSI RAID card to a JBOD-mode card in which all disks are indeed attached as single drives. Go...
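For reference, the JBOD conversion on LSI cards is usually done with `storcli`. A hedged sketch only: exact syntax varies by controller and firmware, not all LSI firmwares expose JBOD mode at all, and the controller/enclosure/slot numbers below (c0, e252, s0) are examples, not yours:

```shell
# Check whether this controller supports/has JBOD mode enabled
storcli64 /c0 show jbod

# Enable controller-wide JBOD mode (destructive to existing VD configs --
# back up and check your firmware docs first)
storcli64 /c0 set jbod=on

# Flip an individual drive (enclosure 252, slot 0 in this example) to JBOD
storcli64 /c0/e252/s0 set jbod
```

On cards whose firmware has no JBOD mode, the single-disk RAID0 workaround is the only option, with the caveats dogma describes.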
Hi, are you sure that it does not work? I've got at least a few client sites deployed with Proxmox 4.X where I ssh to the site and use an SSH tunnel forward, old style; I'm just hitting the VNC port on the Proxmox host as always and it works dandy for me. Maybe you are doing something...
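The tunnel setup is just standard SSH local forwarding. A sketch, with hypothetical host names and an example VNC display number; Proxmox serves its noVNC console through the web UI on port 8006, while classic per-VM VNC ports (59xx) only exist if you expose them in the VM config:

```shell
# Forward the Proxmox web UI (and its noVNC console) through SSH:
ssh -L 8006:localhost:8006 root@proxmox-host
# then browse to https://localhost:8006 on your workstation

# Old style: forward a raw VNC console port (5901 here, i.e. display :1,
# assuming the VM was configured to listen there) and point a VNC client at it:
ssh -L 5901:localhost:5901 root@proxmox-host
```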
Hi, the hardware was a remote site hosting provider (OVH) so I can't answer with 100% confidence, but based on my understanding of their environment, it was (a) supermicro nodes (b) 2 x 1gig interfaces cabled to 1gig switches, is where public internet routing came in and (c) 2 x 10gig...
Hi, alas I think an nginx config review exceeds my brain today (I'm doing 'fun forum posts' to pass the time while I'm sick, sorry). So from a functional perspective I might suggest,
-- rule out any LXC issue by spinning up a small KVM-based host (Debian or whatever you prefer) and transplanting, precisely...
I don't think there should be any problem just because the nginx instance happens to live in an LXC container. Can you simplify to debug, for example by proxying something else more boring first (a hello-world web page?) as a light-bulb test? (ie, is your reverse proxy itself well configured, etc?)
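The light-bulb test can be sketched like this; the port numbers, paths, and config filename are examples only, not your actual setup:

```shell
# 1. Stand up a trivial hello-world backend on port 8080
#    (any static file server works; python3 is just convenient):
mkdir -p /tmp/hello && echo 'hello world' > /tmp/hello/index.html
cd /tmp/hello && python3 -m http.server 8080 &

# 2. Drop in a minimal nginx site that proxies to it:
cat > /etc/nginx/conf.d/lightbulb.conf <<'EOF'
server {
    listen 80;
    location / {
        proxy_pass http://127.0.0.1:8080;
    }
}
EOF
nginx -t && nginx -s reload

# 3. Fetch through the proxy:
curl -i http://localhost/
```

If the boring backend proxies fine, the nginx/LXC layer is healthy and the problem is specific to the real upstream; if even this fails, the proxy config itself is the place to look.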
Tim