I'm going to try the SPAN port, based on your recommendation =).
I don't have a separate NIC available.
Hence, I have a single server running Proxmox, with two 10GBase-LR cables going into it.
The plan is to use the first network connection as the main VM bridge interface, and the second...
I have an existing NAS server running FreeNAS 11.2-U6.
The motherboard is a SuperMicro A2SDi-H-TP4F, and I have:
8 x 12TB HDDs
1 x 2TB HP EX950 M.2 NVMe drive
1 x Intel Optane PCIe SSD
I'm also running bhyve to host some Ubuntu VMs - however, these have not proven very stable.
Hence, I'm...
Is there any way to split up the VM traffic, Corosync and Ceph networks onto separate VLANs from Proxmox?
(I couldn't seem to find anything in the Proxmox GUI about creating a new interface with a specific VLAN tag, but I'm not sure if I missed it somewhere.)
I could then potentially apply QoS...
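For reference, one way to do this outside the GUI is to define VLAN sub-interfaces in /etc/network/interfaces. This is only a sketch - the interface name (eno1), the VLAN IDs (50 for Corosync, 60 for Ceph) and the addresses are all placeholders for illustration:

```
# /etc/network/interfaces (excerpt) - hypothetical VLAN IDs and addresses

auto eno1.50
iface eno1.50 inet static
    address 10.10.50.2/24
    # Corosync traffic on VLAN 50

auto eno1.60
iface eno1.60 inet static
    address 10.10.60.2/24
    # Ceph traffic on VLAN 60
```

After editing, the interfaces can be brought up with `ifup eno1.50` (or a network reload), and Corosync/Ceph can then be pointed at the respective addresses.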
We have a 3-node HA cluster running Proxmox/Ceph.
I know the current recommendation is to have separate physical network ports for VM traffic, Ceph and Corosync traffic.
We currently do this with a 4-port SFP+ NIC (Intel X710-DA4).
However, we're looking at moving to 100 Gbps. The NIC in the...
I just want to follow up that I was able to do this successfully!
The two types of disks I am using for my two Ceph pools are:
Intel Optane 900P (480GB)
Samsung 960 EVO (1TB)
To be honest - both disks are actually NVMe disks.
However, I am cheating a bit - I used Ceph to change the device...
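In case it helps anyone else, the device-class override can be done with the Ceph CLI. This is a sketch - the OSD ID and class name here are illustrative, not the exact values used above:

```shell
# Remove the automatically detected device class, then assign a custom one
ceph osd crush rm-device-class osd.0
ceph osd crush set-device-class nvme osd.0

# Verify the new class assignment
ceph osd crush tree --show-shadow
```

Note that Ceph refuses to change a class that is already set, which is why the `rm-device-class` step comes first.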
I'm keen to take another shot at this. What do you guys think of this?
I have a main Proxmox server that runs all my normal VMs.
Then I have a separate server, also running Proxmox that will be used for network analysis.
What I'm thinking of is - I will have one cable for the normal VM...
Hi,
Do you know if this "auto live migration", or "automated scheduling" is still on the cards for Proxmox? (Similar to VMware DRS, or oVirt's scheduling features)
This other thread mentions a discussion on the PVE mailing list - but I can't seem to find the thread?
I don't see the feature on...
Yup, I definitely was on Google authentication - to be honest, I found it super convenient. Ah well.
The privacy reasons - was that something to do with GDPR? Or something else?
We have two Proxmox clusters that are geographically quite far apart.
They are both behind NAT-ing firewalls, and hence on different Layer 2 networks.
What is the best way to synchronise the user databases between them?
Is it safe to synchronise /etc/pve/user.cfg between them?
Or is there...
I'm happy to continue communication here - if you are still willing to help me? =) That way hopefully other people can benefit from the knowledge as well!
(I would also love to help improve the Proxmox docs as well, but that's another story).
Got it. So the browser client should be the one...
Ah great - thanks! I can confirm it works.
I was thrown off, because the /etc/pve/domains.cfg file on another server had sections for pam and pve:
pam: pam
        comment Linux PAM standard authentication
...
pve: pve
        comment Proxmox VE authentication server
However, it seems these aren't...
@dcsapak - Hmm, are you saying that the proxy needs to do something special to pass a cookie on? Or could the cookie somehow be tied to a specific IP address or machine?
(Is there any chance this cookie behaviour has changed in Proxmox 5.4 vs Proxmox 6.0)?
What could I try to diagnose this...
Hi,
I am attempting to set up LDAP authentication in Proxmox 6.0.
Previously, on Proxmox 5.4 - I had to edit the /etc/pve/domains.cfg file, in order to add the new LDAP realm - as per the Proxmox documentation at https://pve.proxmox.com/wiki/User_Management#pveum_authentication_realms - e.g...
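For context, a minimal LDAP realm stanza in /etc/pve/domains.cfg looks roughly like the sketch below - the realm name, server and base DN are placeholders, not the actual values from my setup:

```
ldap: example-realm
        server1 ldap.example.com
        base_dn ou=People,dc=example,dc=com
        user_attr uid
        comment Example LDAP realm
```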
Instead of Cloudflare access, I also tried with Google IAP as well.
That simply proxies the connections from a load-balancer sitting within GCP.
When I do that, I get an error:
Connection error 504: Gateway Timeout
In the access.log file, I still see an HTTP 401:
34.83.155.61 - -...
Also - this is the cloudflared (HTTPS proxy) logs at the same time:
{"CF-RAY":"5069cb9deae8cec8-LAX","level":"debug","msg":"POST https://localhost:8006/api2/extjs/access/ticket HTTP/1.1","time":"2019-08-15T01:28:29-07:00"}
{"CF-RAY":"5069cb9deae8cec8-LAX","level":"debug","msg":"Request Headers...
I've set up a new Proxmox 6.0 cluster with three nodes. Version info is here:
root@example-vm01:/var/log/pveproxy# pveversion
pve-manager/6.0-5/f8a710d7 (running kernel: 5.0.18-1-pve)
I'm using Cloudflared as a proxy to provide SSO in front of Proxmox. This was previously working on a separate...
Also - if I list the OSD hierarchy - they're all class "ssd".
root@vwnode1:~# ceph osd crush tree --show-shadow
ID CLASS WEIGHT  TYPE NAME
-2   ssd 4.01990 root default~ssd
-4   ssd 1.33997     host vwnode1~ssd
 0   ssd 0.10840         osd.0
 1   ssd 0.10840         osd.1
 2   ssd 0.10840...
Sorry, I'm a bit confused =(
To be clear - you're saying that the only way to do this is to use device classes, right?
I had tried creating OSDs on the first set of disks, then creating a Ceph Pool. Afterwards, I added OSDs on the other set of disks - but it seems to have simply integrated...
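For anyone hitting the same issue: with the device-class approach, the usual pattern is to create one CRUSH rule per class and point each pool at its rule, rather than relying on the order in which OSDs were created. A sketch (the rule and pool names are placeholders):

```shell
# One replicated CRUSH rule per device class
ceph osd crush rule create-replicated nvme-rule default host nvme
ceph osd crush rule create-replicated ssd-rule default host ssd

# Point each pool at the matching rule (pool names are illustrative)
ceph osd pool set fast-pool crush_rule nvme-rule
ceph osd pool set bulk-pool crush_rule ssd-rule
```

Without a class-restricted rule, Ceph treats all OSDs under the default root as one placement target, which would explain the "simply integrated" behaviour.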