I contacted the service owner, and they are changing the service to meet the spec. I tested on their dev instance and it works great, so problem solved. Thank you for your help.
Thank you for the pointer to the spec. That will help in my discussion with the service owners. I am hoping they can just make /.well-known/openid-configuration work, which I think should not break any other users of the service, and should make it work with Proxmox.
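For anyone following along, a quick way to sanity-check a discovery URL is to fetch it and pretty-print the JSON (the URL below is our CAS instance's current non-standard location; the jq-free json.tool pipe is just one way to make the output readable):

```shell
# Fetch the OIDC discovery document and pretty-print it.
# Note the /cas/oidc prefix: the spec-compliant location would be
# /.well-known/openid-configuration at the issuer root instead.
curl -s https://cas.ucdavis.edu/cas/oidc/.well-known/openid-configuration \
  | python3 -m json.tool
```

If the service owners add the root-level path, the same check with the shorter URL should return the identical document.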
Thanks for the pointer, and the comment to that bug. I will follow along and see what they say. I will also check with my CAS administrators when they are back from vacation to see what they say.
CAS is a centrally run service and I am just one user of hundreds, so I really doubt it. Also, if they changed it, it would likely break other existing users of the service.
Our entire response is actually publicly available (https://cas.ucdavis.edu/cas/oidc/.well-known/openid-configuration)...
My workplace has used CAS for web SSO for many, many years now. In the last couple of years they added OIDC, but they put the discovery URL a couple of levels below the top: /cas/oidc/.well-known/openid-configuration. I put https://cas.ucdavis.edu/cas/oidc in for the issuer URL, but I get the...
This morning it looked like the cluster had completely shattered. `pvecm nodes` on vm6 looked like:
Node Sts Inc Joined Name
1 M 10000 2015-11-30 11:16:42 vm1
2 X 9880 vm2
3 M 10000 2015-11-30 11:16:42 vm3
4 M 10000...
We run our back-end storage for all VMs on NFS via InfiniBand, the same InfiniBand the cluster uses to talk to itself. If we had major communication problems the VMs would have stopped working.
The output of all the commands looks good (these were all run on the node I moved the VM to, though...
I tried to do a live migration of a VM a couple of hours ago and it failed (full output attached):
ERROR: failed to clear migrate lock: no such VM ('104')
It was a live migration from a Proxmox node named vm6 to a node named vm4. The GUI shows VM 104 in the list for vm4 (Proxmox node), but displays...
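In case it helps anyone hitting the same error later: my understanding is that a stale migrate lock can be cleared manually with `qm unlock` on whichever node actually holds the VM's config file. A sketch of what I mean (the config path is the standard /etc/pve location; run this only once you're sure no migration is still in flight):

```shell
# Find which node owns the config: this file only exists on the owning node.
ls /etc/pve/qemu-server/104.conf

# On that node, clear the stale 'migrate' lock and check the VM's state.
qm unlock 104
qm status 104
```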
Done! I had to look up how to do that. Not as trivial as clicking a "Marked Solved" button in the message that actually solved it. Thanks for your help.
I misread the license for AOMEI Partition Assistant. It's only free for use on non-server OSs. Since mine is 2012R2 it would not work, so I ended up going the gdisk route. This is the sequence that worked for me:
SSH to Proxmox node
modprobe nbd max_part=8
qemu-nbd --connect=/dev/nbd0...
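For anyone finding this thread later, here is a fuller sketch of the steps above. The raw image path is my guess at where the zstor2-images:158/cl-print.raw volume lands on an NFS storage mount (the usual /mnt/pve layout); adjust it for your storage, and only do this with the VM powered off:

```shell
# Load the network block device module so we can map the image to /dev/nbd0.
modprobe nbd max_part=8

# Attach the VM's raw disk image (path assumed; check your storage config).
qemu-nbd --connect=/dev/nbd0 /mnt/pve/zstor2-images/images/158/cl-print.raw

# In gdisk, the recovery/transformation menu (r) has option g to convert
# the GPT partition table to MBR; write with w when the layout looks right.
gdisk /dev/nbd0

# Detach the image when done.
qemu-nbd --disconnect /dev/nbd0
```

After disconnecting, booting the VM in BIOS mode should work, since 2012R2 needs an MBR disk to boot without UEFI.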
The other options look like they have lots of possibilities to mess up and cause more issues, so I'm going to go with the simple option. Looks like there is free Windows software (AOMEI Partition Assistant) that can do it. I can attach the disk to be converted to another Windows VM. Ah the power...
The VM has:
boot: dc
bootdisk: sata0
[snip]
sata0: zstor2-images:158/cl-print.raw,format=raw
I forgot to mention that I'm using Proxmox 3.4, if that makes a difference.
We are working on transitioning a department off Hyper-V onto Proxmox, but have hit a snag with a 2012R2 server that has a GPT partition table. It gets stuck in a boot loop indicating: "No bootable device. Retrying in 1 seconds." Per the FAQ I checked the partition table, and one of the partitions...