Howdy,
So we've got a relatively small Ceph NVMe cluster consisting of 4x nodes, each with a Samsung datacenter 3.8TB M.2 SSD inside. Each node has 4x 10G connections: 2x in LACP for normal traffic, 2x in LACP for Ceph traffic. Connected using a pair of S4048-ON switches.
We're looking at...
Yes, all of my code waits for the command to finish.
Here's the full error it gives me:
failed to stat '/mnt/pve/optanestor/images/9005/vm-9005-cloudinit.qcow2'
TASK ERROR: disk image '/mnt/pve/optanestor/images/9005/vm-9005-cloudinit.qcow2' already exists
Hmm, alright. How can I get that UPID...
I think I might be in the wrong here. For context, this is my process (rough code sketch after the list):
- Create VM via API, wait via polling UPID
- Call custom API connector on node which uses `qm set` to apply the instance metadata
- Start VM via API, wait via polling UPID (this is where things fail, with the `already exists`...
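In code, the flow is roughly the sketch below. I'm using the proxmoxer Python client purely as an illustration (the node name, VMID, credentials, and the cloud-init drive placement are placeholders/guesses); the point is that every POST returns a UPID and I poll the task until it reports stopped/OK before moving on:

```python
import time
from proxmoxer import ProxmoxAPI  # illustration only; any client hitting the same endpoints behaves the same

def wait_for_task(prox, node, upid, timeout=300):
    """Poll /nodes/{node}/tasks/{upid}/status until the task stops, then check its exit status."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        status = prox.nodes(node).tasks(upid).status.get()
        if status["status"] == "stopped":
            if status.get("exitstatus") != "OK":
                raise RuntimeError(f"task {upid} failed: {status.get('exitstatus')}")
            return
        time.sleep(1)
    raise TimeoutError(f"task {upid} still running after {timeout}s")

prox = ProxmoxAPI("pve1.example.com", user="root@pam", password="xxx", verify_ssl=False)
node, vmid = "pve1", 9005  # placeholders

# 1. Create the VM; the POST returns a UPID for the async create task.
#    (Guessing the cloud-init drive is attached at create time; "optanestor" is the storage from the error.)
upid = prox.nodes(node).qemu.post(vmid=vmid, name="test-vm", memory=2048, cores=2,
                                  ide2="optanestor:cloudinit")
wait_for_task(prox, node, upid)

# 2. My custom connector applies the instance metadata via `qm set` here (omitted).

# 3. Start the VM and wait on that task too -- this is the step that fails with "already exists".
upid = prox.nodes(node).qemu(vmid).status.start.post()
wait_for_task(prox, node, upid)
```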
I've been experiencing what I believe to be the same issue, except I can replicate it reliably. I'm creating a VM via the API, configuring it, then starting it, and my start tasks fail semi-regularly. I can reduce the chances of it happening by using a 5-second delay, but this is suboptimal as...
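A retry with a short backoff around the start call would at least avoid one blind fixed sleep, though it still feels like papering over a race. Rough sketch (the `start_vm` callable is a stand-in for whatever issues the start request and waits on its UPID):

```python
import time

def start_with_retry(start_vm, attempts=5, base_delay=1.0):
    """Retry the start call with a linear backoff instead of sleeping a fixed 5 seconds up front."""
    for attempt in range(1, attempts + 1):
        try:
            return start_vm()  # e.g. a lambda that POSTs .../status/start and polls the task
        except Exception:
            if attempt == attempts:
                raise
            time.sleep(base_delay * attempt)  # 1s, 2s, 3s, ... between attempts
```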
That's not quite what I'm asking. I'm asking whether I should attach my `vlan20` interface to `vmbr0` or `bond0`. Both configurations seem to work, but I haven't been able to find any documentation on possible issues with having a vlan *and* a bridge attached to one interface (bond0).
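To make it concrete, the two layouts I'm weighing look like this in /etc/network/interfaces (addresses made up):

```
# Option A: VLAN interface on the VLAN-aware bridge
auto vmbr0.20
iface vmbr0.20 inet static
    address 10.20.0.11/24

# Option B: VLAN interface directly on the bond, alongside the bridge
auto bond0.20
iface bond0.20 inet static
    address 10.20.0.11/24
```

Option B is the one where bond0 ends up with both the bridge and a .20 sub-interface hanging off it, which is the combination I can't find documentation on.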
Hello,
So I've got a few servers that are connected to our Ethernet switch via an LACP bond. It's basically like:
vmbr0 (vlan aware) -> bond0 -> eno1, eno2
We need to add a vlan interface in order to access our storage network VLAN on the bond. My question is, do I attach the vlan...
Braindead moment. Someone assigned a machine to that management IP and didn't document it, causing an IP conflict. Changed that machine's IP and everything works great now.
I'm having a weird issue and I've been unable to find a solution online, but I have a feeling someone here might know the solution.
I've got a PVE 7.1-7 box running right now, and I have its management interface assigned as `vlan10` (the main VLAN on my network). We'll call the hypervisor...
Hey guys, I'm in a bit of a pickle.
I'll start by going over my setup:
- Storage server running TrueNAS with 10Gb networking
- ZFS pools
- One pool with 12x 2TB drives in raidz2 (storage B)
- Another pool with 3x 400GB SSDs in a stripe (storage A)
- Single hypervisor
- Boot disk is 400GB NVMe...