Is there a way to enable NPIV or vHBA on Proxmox, or is this on the roadmap? I need this for an expansion I have planned, and SCSI device passthrough is unlikely to be sufficient.
The plan is to build a new SAN host with two distinct vSAN arrays, and connect them both to the Fibre Channel...
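For what it's worth, on the kernel side I believe NPIV vports can be created through sysfs as long as the HBA supports it; something like the following (host number and WWPN/WWNN values are placeholders, and the exact string format depends on the driver):

    # Does the HBA expose NPIV? (host3 is a placeholder)
    cat /sys/class/fc_host/host3/max_npiv_vports
    cat /sys/class/fc_host/host3/npiv_vports_inuse

    # Create a virtual port with a chosen WWPN:WWNN pair (format is driver-dependent)
    echo "21:00:00:1b:32:aa:bb:cc:20:00:00:1b:32:aa:bb:cc" > /sys/class/fc_host/host3/vport_create

    # The vport shows up as a new fc_host; its LUNs could then be handed to the VM
    # like any other block device. Remove it again with:
    echo "21:00:00:1b:32:aa:bb:cc:20:00:00:1b:32:aa:bb:cc" > /sys/class/fc_host/host3/vport_delete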
I'm looking to run a second PBS server separate from my main one, and have it power up in the middle of the night (I've got this working), then immediately kick off a sync job, then a verify, then a prune/GC, and then, if the start-up time was the scheduled one, also automatically shut down when...
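Roughly what I have in mind for the post-boot chain, as a sketch; "main", "backups" and "offsite" are placeholder remote/datastore names, and I'd hook it in via a systemd unit or a cron @reboot entry:

    #!/bin/bash
    # Secondary PBS: pull from the primary, verify, garbage-collect, then power off.
    set -e

    # Pull everything from the primary PBS (the remote must already be configured)
    proxmox-backup-manager pull main backups offsite

    # Verify the local datastore
    proxmox-backup-manager verify offsite

    # Garbage-collect (pruning can be left to a prune schedule on the datastore)
    proxmox-backup-manager garbage-collection start offsite

    # Only power off if this boot was the scheduled wake-up, not a manual one --
    # e.g. compare the boot time against the schedule window before calling:
    systemctl poweroff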
I really like Proxmox's web GUI and the way the installer handles configuring ZFS for boot, with both mirroring and ESP configuration, plus disk status and SMART, Ceph, the firewall, and the whole general UI for managing a lot of basic things. I would use it for a select subset of my infrastructure in my...
So I've discovered one potential workaround, which is not ideal but is workable. If you go to a VM's "Hardware" tab and click on one of the virtual drives, you can use "Move disk" at the top. Using this to move the disks from the ZFS VM storage pool over to the OS root pool and then back...
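The same shuffle can be scripted from the CLI, something like this (VMID, disk slot, and storage names are just examples):

    # Move the disk to the root pool and drop the old copy, then move it back
    qm move_disk 100 scsi0 rpool-root --delete 1
    qm move_disk 100 scsi0 local-zfs --delete 1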
So I'm migrating my cluster to something a little smaller, moving from a Ceph cluster down to a single node with ZFS. One major feature I like from Ceph is that when I add a new drive or twelve, the data gets automatically rebalanced across the cluster.
Part of my goal is to spend minimal money...
That makes an odd sort of sense, and sounds like a bug, or something that should be warned about in the UI somewhere. Anyway, I know what to do next time.
Okay, so the Ceph manual tells me to mark an OSD as "out" and then let the pool rebalance the data away from it before removing it, so that removing it should not require any further rebalancing. So I did this: I marked an OSD as out, I added two OSDs to replace it, and I let the...
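For reference, the CLI equivalent of that procedure should be roughly the following (using OSD 7 as a placeholder):

    # Mark the OSD out and wait for backfill to finish (watch "ceph -s")
    ceph osd out 7

    # Once the cluster is healthy again, confirm nothing still depends on it
    ceph osd safe-to-destroy osd.7

    # Stop the daemon and remove the OSD
    # (on PVE this last step can also be done with "pveceph osd destroy 7")
    systemctl stop ceph-osd@7
    ceph osd purge 7 --yes-i-really-mean-it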
So uh, I'm sure I know why this happened, but here goes.
I was having some communication issues on one node after a disaster recovery and reinstall. I checked all the config, but VLANs on this VM just wouldn't communicate outside the host. So I decided to attempt to reboot the switch. (Note...
So I set up a step-ca ACME certificate authority to give Proxmox and other things valid internal certificates, so I can manage trust using internal domain names. This shouldn't be too much of a stretch. Here's the thing: I can't upload the root CA to Proxmox to be able to register.
When I go to...
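One approach I'm looking at (paths, account/contact names, and the directory URL below are placeholders, and I'm assuming pvenode's --directory option) is to add the step-ca root to the host's trust store first and then register against the custom directory:

    # Trust the internal root CA on the PVE host
    cp root_ca.crt /usr/local/share/ca-certificates/internal-root-ca.crt
    update-ca-certificates

    # Register an ACME account against the step-ca directory
    pvenode acme account register internal admin@example.internal \
        --directory https://ca.example.internal/acme/acme/directory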
I could have been clearer. There are two use cases for the 'partial join' I'm looking for.
The first is allowing physical hosts to export some Ceph services (mostly OSD, though MON or MDS should be possible) while keeping them on a pure Debian distro to avoid having issues with...
Running Proxmox on a physical node is, a bit begrudgingly, fine. That said, one of my goals is to have several of the VMs using the Ceph cluster, and running PVE on those is quite overkill. I'll test out installing pve-cluster on the VMs and making sure they get 0 votes (they shouldn't...
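If that route works, I assume the relevant bit in /etc/pve/corosync.conf would just be a nodelist entry with zero votes, something like this (name and address are invented):

    nodelist {
      node {
        name: ceph-vm1          # example VM node name
        nodeid: 4
        quorum_votes: 0         # contributes no vote to the cluster quorum
        ring0_addr: 10.0.0.14   # example address
      }
      # ...existing node entries unchanged...
    }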
I'm running a PVE cluster with Ceph as my main storage, as is the norm these days, but for reasons that may or may not be common, I don't want to run PVE on all my guests or on several of my physical machines. I would, however, like to be able to install just enough software to have Proxmox VE...
I've got Proxmox + OVS set up in testing now. It's working wonderfully already, and I've got another Debian system about to go into Proxmox testing with LACP as well. Having configured both on plain Linux and on Proxmox, I definitely think OVS should be the default.
Linux trunking, bonding, and bridging are outdated, awkward to configure, and don't handle VLANs or virtual interfaces very well. Given today's infrastructure requirements, I think Proxmox 4 should switch to Open vSwitch by default instead of the legacy tooling. It does everything that the...
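For comparison, this is roughly what the OVS side looks like in /etc/network/interfaces (NIC names, VLAN tag, and addresses are examples; the exact stanza keywords depend on the ifupdown flavour in use):

    auto bond0
    iface bond0 inet manual
        ovs_type OVSBond
        ovs_bridge vmbr0
        ovs_bonds eno1 eno2
        ovs_options bond_mode=balance-tcp lacp=active

    auto vmbr0
    iface vmbr0 inet manual
        ovs_type OVSBridge
        ovs_ports bond0 mgmt0

    # Management IP as an internal port tagged on VLAN 10
    auto mgmt0
    iface mgmt0 inet static
        address 192.168.10.2/24
        gateway 192.168.10.1
        ovs_type OVSIntPort
        ovs_bridge vmbr0
        ovs_options tag=10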
So after creating a test image as a ZFS image on a test Proxmox 4 install, I was unable to remove said image. I had used Debian 8 and LVM inside the guest. I later discovered that the host (Proxmox) had to run vgchange -an volgroupname and vgexport volgroupname to be able to destroy the image on ZFS.
No...
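For anyone hitting the same thing, the host-side cleanup looks roughly like this (VG and dataset names are placeholders):

    # The guest's VG sits on the zvol and gets activated on the host,
    # so deactivate and export it before destroying the volume
    pvs                                    # find the PV on the /dev/zd* device
    vgchange -an guestvg
    vgexport guestvg
    zfs destroy rpool/data/vm-100-disk-1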