Hello,
I wanted to write yesterday already but didn't get around to it... thanks for the link! I went through it and was able to get access to the quarantine, but not the admin GUI :)
Chris
Hello,
Sorry if this question has already been asked and answered; I tried to search but did not find anything. I just recently put a PMG into use, inside a DMZ network. Only using it for inbound mail and so far it works really well. My users get the daily spam summary, including the links to either white-/ or...
Hello,
Yes, I edited the apt file on all 3 nodes and installed cephadm and ceph-volume. Works now. Perhaps it would be good to have this fixed, though (if possible).
Thanks,
Chris
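For reference, the workaround described above roughly amounts to pointing apt at the Proxmox Ceph Quincy repository and installing the missing packages. This is a sketch, assuming a PVE 7.x (Bullseye) node and the no-subscription Ceph repo; the exact file the poster edited is not specified.

```shell
# Add the Proxmox Ceph Quincy repository (assumed to be the missing piece):
echo "deb http://download.proxmox.com/debian/ceph-quincy bullseye main" \
  > /etc/apt/sources.list.d/ceph.list
apt update
# ceph-volume provides the binary the OSD creation step complained about:
apt install cephadm ceph-volume
```

After this, `/usr/sbin/ceph-volume` should exist and OSD creation from the UI should proceed.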
Hi all,
Upgraded all nodes to PVE 7.2 and wanted to deploy a Ceph cluster. I set up the cluster from the UI with version "17.2.0 Quincy". Installed managers and monitors on all 3 nodes; now adding an OSD returns "binary not installed: /usr/sbin/ceph-volume".
I noticed that the repository on the...
Is there any update on this? Having the cloud-init images located on RBD also breaks cloud-init (it no longer gets applied). Moving the cloud-init image back to local storage fixes this.
Have you tried raising the min_size for your pools? I think the issue is that you have min_size set to 1, meaning the pool keeps serving I/O with only 1 replica available. Had the same issue some time ago with a test pool, and as soon as I set min_size to 2, it wasn't an issue anymore.
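Checking and changing these settings is a one-liner per pool. A quick sketch (the pool name "rbd" is just an example, substitute your own):

```shell
# Inspect the current replication settings for the pool:
ceph osd pool get rbd size
ceph osd pool get rbd min_size
# Keep 3 copies, and only serve I/O while at least 2 replicas are available:
ceph osd pool set rbd size 3
ceph osd pool set rbd min_size 2
```

Note that `size` controls how many copies are stored, while `min_size` controls how many must be available before the pool stops accepting I/O.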
Hi all,
I love PBS, it works so well. The only thing I am dealing with now is that the prune options for datastores do not seem to work. I have set a prune setting to keep the last 7 backups. However, when I go to the datastore I see VMs with 10 copies (the backup runs daily). Prune + GC job is set...
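One way to debug a setup like this is a prune dry run, which shows what the policy would keep and remove without deleting anything. A sketch, assuming a backup group `vm/100` on a datastore named `store1` (both names are examples):

```shell
# Preview the effect of a keep-last=7 policy on one backup group:
proxmox-backup-client prune vm/100 \
  --repository root@pam@localhost:store1 \
  --keep-last 7 --dry-run
```

If the dry run lists the right snapshots for removal but extra copies remain on disk, the scheduled prune job itself (or the following GC run) is the thing to check.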
Been working on this for over a day now. I am getting started with linked clones and cloud-init, and I am having an issue with the first boot of linked clones: cloud-init simply doesn't run at all if the VM is a linked clone. After the clone, I make sure to change the IP and such, then press...
Hi all,
I am currently using Ceph with the following setup:
- 2x DL360 Gen9, 1x DL380 Gen9
- 1x Xeon E5-2690 v3
- 128GB DDR4 ECC (2133 MHz)
- 5x Intel S4510 SSD as OSDs
- 4x 10Gbps uplink with LACP
So, 3 nodes in total right now with a total of 15 OSDs.
I have been handed an "old"...
I have never used the LXC feature in Proxmox; I've only ever hosted VMs. But yes, for a VM you could also just pass the GPU through to it. However, you would probably want two GPUs (one for the VM, one for the host OS). Then pass through the one for your VM. You then need to install the Nvidia drivers (either...
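The passthrough step itself is short once IOMMU is enabled on the host. A minimal sketch (the PCI address `01:00` and VM ID `100` are just examples; look up your own):

```shell
# Find the GPU's PCI address on the host:
lspci -nn | grep -i nvidia
# Pass the whole device through to VM 100 (q35 machine type is recommended
# for PCIe passthrough; x-vga=1 if the GPU should act as the VM's primary display):
qm set 100 -machine q35 -hostpci0 01:00,pcie=1,x-vga=1
```

This assumes the usual prerequisites (IOMMU enabled in BIOS and kernel cmdline, and the GPU bound to vfio-pci rather than the host driver).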
You just need to add the Plex repository and then install with apt. :) For me it has simplified a lot of things, also since I use HW acceleration with the GPU.
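Concretely, that looks roughly like this inside the container (a sketch using Plex's documented Debian repository; the keyring path is just a convention):

```shell
# Fetch Plex's signing key and register the repository:
curl -fsSL https://downloads.plex.tv/plex-keys/PlexSign.key \
  | gpg --dearmor -o /usr/share/keyrings/plex.gpg
echo "deb [signed-by=/usr/share/keyrings/plex.gpg] https://downloads.plex.tv/repo/deb public main" \
  > /etc/apt/sources.list.d/plexmediaserver.list
apt update && apt install plexmediaserver
```

Updates then arrive through the normal `apt upgrade` cycle, which is a large part of the simplification mentioned above.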
I would say you have too many cores assigned... It's a quad-core CPU and you are leaving no cores for the hypervisor itself, meaning the hypervisor is fighting the VM for cores. Any reason you don't just run Plex directly on the Proxmox host?
Hi all,
I have a 3-node Ceph cluster, consisting of HPE Gen9 servers. It's been running well since I set it up, and I really enjoy the "no single point of failure" aspect.
Now, during the installation I was using some S3700 100GB drives for the boot ZFS mirrors, however for one of the hosts, I...
Is there any way to change the location of the WAL after OSD creation? Would it be smart to put the WAL for all OSDs onto a single NVMe drive or should these be mirrored?
Hey, sure, no problem. I did not brick my own, as I followed the guide I linked to before. I now have a fully working H310 Mini Mono in my R720xd :) Sure, I'd be happy to help; the guide really says it all, though.