Thanks for your reply and sorry for the late response. After digging more (and screwing up a few things while trying to move the cluster's ip subnets to something that is compatible with the office, should I ever need to move the machines there) I've restarted the project with fresh proxmox...
I'm just in the middle of trying to implement OVS after reinstalling my 4 node test cluster. The production cluster will be for a small ISP that has a bunch of vlans. I'm used to working with Cisco switches and was hoping to create a 10G trunk through all the nodes, starting/finishing on...
Hi there everyone
I'm still pretty new to proxmox and debian but do come from a background where I've used a lot of cisco switches and dabbled in linux.
I have a 4 node hyperconverged cluster running 3 separate networks for corosync and ceph, with the 'outside' access happening on bridged...
How did this go? I have the same sort of setup but with 4 machines and am hoping I can bridge the interfaces to form a 10GbE ring. I know it would be better with a switch in this case, but I don't have enough 10GbE SFP+ ports available at the moment.
I'm having a bit of trouble following this. I speak Cisco... any chance someone has done a writeup comparing these configs to what they would look like on a managed switch? I checked the 'VLAN aware' box hoping it would be treated as a trunk and I would be able to pass tagged data from the...
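For anyone else translating from IOS: a rough side-by-side sketch. The interface names and VLAN IDs below are made up for illustration, but a VLAN-aware Proxmox bridge behaves much like a trunk port, and each guest NIC picks its tag in the VM's network settings:

```
! Cisco side: a standard 802.1Q trunk on the uplink
interface GigabitEthernet0/1
 switchport mode trunk
 switchport trunk allowed vlan 10,20,30

# Proxmox side (/etc/network/interfaces): the rough equivalent
auto vmbr0
iface vmbr0 inet manual
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094
```

Tagged frames pass through the bridge untouched; limiting `bridge-vids` is the closest thing to `switchport trunk allowed vlan`.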
So while messing around trying to figure out which drive was really dead (I might have a bad backplane) my node decided to flip out and wouldn't take any drives in any bays and I had to reinstall everything.
Any tips on dumping an entire node? I reinstalled and have the drives all going...
@tburger Thanks! That worked quite well! I have a Proxmox node running off a 4GB memory stick. ESXi was on it previously and I was curious to see how it would go, but it almost immediately got very full after running some updates.
That makes a lot of sense now that I'm seeing it all laid out like that.
lvdisplay shows nothing
vgdisplay shows the drive I'm trying to dispose of
pvdisplay also shows the drive
I ran:
vgremove <name>, which said it was successful.
pvremove <name>, which said it wasn't found (I'm guessing...
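A likely cause of the "not found": vgremove takes the VG name, but pvremove wants the device path, not a name. The usual teardown order is the reverse of creation. A sketch with placeholder names (run only against the disk you actually mean to wipe):

```
# Tear down LVM metadata in reverse order of creation.
# <vg> and /dev/sdX are placeholders -- substitute your own.
lvremove <vg>         # remove logical volumes first (skip if lvdisplay shows none)
vgremove <vg>         # then the volume group
pvremove /dev/sdX     # then the PV label -- note: device path, not a VG name
wipefs -a /dev/sdX    # finally clear leftover signatures so the disk shows as unused
```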
I'm mucking around in the home lab testing stuff with ceph/proxmox and was wondering if there are any recommendations for RAID card replacements that just pass the drives through directly. I'm pretty new to the server hardware scene, so this probably sounds pretty elementary to most.
I grabbed an M1015...
I've only ever set one up that needed to be externally available, and I just used a dedicated NIC. If you only have one physical port, have you looked at using a VLAN? Upstream you'll need a managed switch or a router capable of routing that VLAN to your gateway.
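Something like this in /etc/network/interfaces would do it (addresses and interface names here are invented, adjust to your network): management sits untagged on the bridge, and the externally reachable guests get a VLAN tag on their virtual NIC instead of a second physical port.

```
# Single-NIC node: VLAN-aware bridge carries both untagged mgmt and tagged guest traffic
auto vmbr0
iface vmbr0 inet static
    address 192.168.1.10/24
    gateway 192.168.1.1
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094
```

Then set the VLAN tag on each guest's network device in the GUI; the upstream switch port has to be a trunk carrying that VLAN.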
I may have to shut down and dd my install to a bigger drive, or find a way to move some stuff off of it. I used an 8GB drive and by the time everything was partitioned (using default settings) I only had 24% space left, which makes some parts of the setup complain (like ceph throwing an error about a...
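The offline clone itself is the easy part; the device names below are examples and both disks need to be unmounted (e.g. booted from a live USB) before copying:

```
# Clone the small install disk onto the bigger one, then grow partitions/LVs afterwards
dd if=/dev/sdX of=/dev/sdY bs=4M status=progress conv=fsync
```

Afterwards the partition table still describes the old 8GB layout, so the last partition and any LVM volumes on it have to be grown separately.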
Nothing in particular. It's a little 32GB Ultra which had ESXi on it previously. I'm not in a production environment, and since it's all about training I actually don't mind things going wrong from time to time, so I can experience how HA failover etc. happens and what it takes to recover...
The virtio drivers are added via Device Manager in Windows. If you add the virtio ISO as another virtual CD/DVD drive in the hardware list, then just tell Windows to update the driver via Device Manager and point it at the appropriate directory of the virtio pile of stuff. Here's...
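Attaching the ISO can also be done from the node's shell instead of the GUI. A sketch, assuming VM ID 100 and the ISO already uploaded to the `local` storage under that filename:

```
# Attach the virtio-win ISO as a second CD-ROM on VM 100 (ID and path are examples)
qm set 100 --ide3 local:iso/virtio-win.iso,media=cdrom
```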
I run one of my nodes (Dell R510) from a USB stick but haven't quite figured out how to get the BIOS to boot consistently from it. The H700 RAID controller, which I have yet to replace with a SAS adapter (the one I bought was 2mm too long to fit in the same slot as the H700), seems to always want to...
Hi there,
While adding some drives as OSDs for a ceph cluster, I accidentally made one into a LVM under Disks → LVM (the drive in question being /dev/sdc). Since there was no delete option available, I then proceeded to delete the logical volume using lvremove Plexbulk (<-- the name of the LVM...
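For getting a disk like that back into a state where ceph will accept it as an OSD, ceph ships a cleanup command that handles the LVM leftovers in one go. A sketch, using the /dev/sdc from the post above (destructive, so double-check the device first):

```
# Wipe LVM structures and partition/filesystem signatures so the disk
# can be re-used as an OSD; --destroy also removes the VG/LV metadata
ceph-volume lvm zap /dev/sdc --destroy
```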