Hello,
Can a single PBS instance with a single drive as a datastore be configured to manage backups from two non-clustered Proxmox hosts with potentially overlapping VM IDs?
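For what it's worth, a hedged sketch of one way this can work: recent PBS versions support datastore namespaces, which keep backups from hosts with overlapping VMIDs separate within one datastore. Assuming a datastore named store1 and PBS 2.2+ / PVE 7.2+ (storage names, server, user, and namespace names below are illustrative, not from the post):

```shell
# On each PVE host, point at the same PBS datastore but a different
# namespace, so vm/100 from host A cannot collide with vm/100 from host B.
pvesm add pbs pbs-backups \
    --server pbs.example.lan \
    --datastore store1 \
    --namespace hostA \
    --username backup@pbs \
    --fingerprint AA:BB:...   # replace with your PBS server fingerprint
```

On the second host, the same command with --namespace hostB would give each host its own backup tree on the shared datastore.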
Hello,
I'm looking to test out vGPU on a little Lenovo P360 I have with an A2000 GPU. Right now in the BIOS I have the GPU set to Auto. When I do this I only get the VGA device for the onboard iGPU, as that is what's plugged in and used during configuration. If I enable or force the GPU, or the...
Hello,
I recently upgraded from Proxmox 7 to 8. After the process, every time I boot the host I get question marks for storage and VMs. Running service pveproxy restart && service pvestatd restart after reboot fixes the problem until the next reboot. Seems like some strange timing thing or the service...
Curious if it's possible to dice up a 4x4 NVMe card and pass only 2 of the drives through. They come up with isolated IDs, so I assume I could, but Proxmox crashes when I try to start the VM. It works if I pass all 4, and then I can see all 4 in the VM, but I want to assign just a pair if I can.
I set up a second node and temporarily added it to a cluster. I then removed it per the docs and am trying to log in to it as a standalone node. I can SSH in as root fine, but I can't log in as root under PAM via the console UI. I have tried the standard things like resetting the password, but it...
After sort of getting bored, I wanted to try something new, and I wasn't super happy with a 3-node Ceph cluster even on a 10GbE storage backend, although the flexibility was very nice. Things still felt slightly laggy, so I decided to venture out. Prior to moving on I was running OMV 5.x and...
I have 3 hosts with a couple of 10GbE interfaces on them. I am having an issue getting a VM on a KVM host to run at full 10GbE on a VLAN. It runs at 1GbE in my tests, but only on VLAN 8. If I assign the VM an IP in the same subnet as the host's IP, it gets the full 10GbE. So I am not sure what I...
I have 3 hosts, all set up the same way.
I can do iperf from host to host and get ~10Gb/s pretty consistently:
[ 3] 0.0- 1.0 sec 1.09 GBytes 9.38 Gbits/sec
[ 3] 1.0- 2.0 sec 1.09 GBytes 9.38 Gbits/sec
[ 3] 2.0- 3.0 sec 1.09 GBytes 9.39 Gbits/sec
But when I try to do it from a VM to a...
I have an HDD pool for Ceph consisting of 12x 10TB disks spread across 3 nodes, so 4x 10TB in each.
In datacenter summary I see this:
but in datacenter >> ceph >> performance I see this:
The above "Usage" seems accurate, as I am using 3-way replication and currently have about...
Templates aren't complete VMs, so I'm not sure how to manage them in terms of making sure they're always available. It looks like I can assign them to HA, but that seems odd to me, as it has options like "started" in there. If I do make a template HA, will that mean the template will come up on a different...
Hello,
I have a 3-node Ceph cluster with 4 OSDs in each node, in a 3/2 configuration, for a 12-disk, 120TB HDD pool.
If I have a power outage, and I lose my UPS batteries, what will happen?
1. The servers shut off at different times because of lack of power.
2. I get to the hosts in time and shut them all...
Hello,
I have a new VLAN 8. This network supports network boot and TFTP settings. Clients that network boot from a KVM host tagged on VLAN 8 get the right IP from DHCP, then go to fetch pxelinux.0 from my NAS. This works too, as the file is downloaded, but then it just...
I'm curious about the difference here.
The Summary shows 86TB of storage, but Ceph Usage shows 120TB total. The Ceph figure is much closer to the full raw capacity of all the drives. The amount used in the Summary also seems far too high.
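One possible source of the mismatch (a guess on my part, not a confirmed explanation for the 86TB figure): Ceph's usage view reports raw capacity, while the space actually usable by clients with 3-way replication is roughly a third of that. A quick arithmetic sketch, assuming the 12x 10TB OSD pool described above:

```python
# Rough capacity arithmetic for a replicated Ceph pool.
# The raw/size estimate below ignores overhead and full ratios,
# so real "max avail" numbers will come out somewhat lower.
osd_count = 12
osd_size_tb = 10
replica_size = 3          # the pool's 3-way replication

raw_tb = osd_count * osd_size_tb      # what the Ceph view tends to show
usable_tb = raw_tb / replica_size     # rough client-visible capacity

print(raw_tb)     # 120
print(usable_tb)  # 40.0
```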
Hello,
I am trying to understand how to optimize my Ceph pools and how to associate PGs with pools correctly. I have the following:
12 OSDs in HDD pool in 3 hosts
9 OSDs in NVMe pool in 3 hosts
3 OSDs in SSD pool in 3 hosts
Each pool is 3/2 with the default of 128 PGs.
If I used this...
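For reference, a sketch of the common rule-of-thumb PG sizing (target roughly 100 PGs per OSD, divided by the replica count, rounded up to a power of two). This mirrors the usual pgcalc guidance rather than any official formula, and the per-pool results below assume the OSD counts listed above:

```python
import math

def suggested_pg_count(osds: int, replicas: int, target_per_osd: int = 100) -> int:
    """Rule-of-thumb PG count: (OSDs * target) / replicas,
    rounded up to the next power of two."""
    raw = osds * target_per_osd / replicas
    return 2 ** math.ceil(math.log2(raw))

# Pools from the post above (all 3/2, so replicas=3):
print(suggested_pg_count(12, 3))  # HDD pool  -> 512
print(suggested_pg_count(9, 3))   # NVMe pool -> 512
print(suggested_pg_count(3, 3))   # SSD pool  -> 128
```

By this rule of thumb, the 128-PG default would only fit the small SSD pool; the larger pools would want more PGs to keep data evenly spread across OSDs.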
I have been tooling around quite a bit with Ceph and Proxmox over the last few weeks, and really, it is pretty awesome. It's taken my lab of a mishmash of machines and hosts and made them somewhat valuable again, and made it much easier to manage my needs and experiments. I mean, it's another level of...
I messed around with creating new device classes and rules, which works great. But in Host >> Disks >> Usage it's not showing the correct usage for the NVMe drives:
It should show osd.20:
Host1 >> Ceph >> Configuration
item osd.20 weight 0.909