Search results

  1. Proxmox login returns HTTP 401 with valid password

    Instead of Cloudflare Access, I also tried Google IAP. That simply proxies the connections from a load-balancer sitting within GCP. When I do that, I get an error: Connection error 504: Gateway Timeout. In the access.log file, I still see an HTTP 401: 34.83.155.61 - -...
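
    For anyone debugging this, one way to rule the proxy layer out is to request a ticket from the Proxmox API directly on the node, using the standard /api2/json/access/ticket endpoint. A minimal sketch - the password is a placeholder, and -k skips verification of the self-signed certificate:

      # Request an auth ticket straight from pveproxy, bypassing the proxy in front
      curl -k -d "username=root@pam" --data-urlencode "password=PASSWORD" \
          https://localhost:8006/api2/json/access/ticket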
  2. Proxmox login returns HTTP 401 with valid password

    Also - these are the cloudflared (HTTPS proxy) logs from the same time:

      {"CF-RAY":"5069cb9deae8cec8-LAX","level":"debug","msg":"POST https://localhost:8006/api2/extjs/access/ticket HTTP/1.1","time":"2019-08-15T01:28:29-07:00"}
      {"CF-RAY":"5069cb9deae8cec8-LAX","level":"debug","msg":"Request Headers...
  3. Proxmox login returns HTTP 401 with valid password

    I've set up a new Proxmox 6.0 cluster with three nodes. Version info is here:

      root@example-vm01:/var/log/pveproxy# pveversion
      pve-manager/6.0-5/f8a710d7 (running kernel: 5.0.18-1-pve)

    I'm using Cloudflared as a proxy to provide SSO in front of Proxmox. This was previously working on a separate...
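
    A minimal cloudflared origin configuration for this kind of setup might look like the sketch below, assuming a classic (pre-named-tunnel) cloudflared; the hostname is a placeholder, and no-tls-verify is needed because pveproxy serves a self-signed certificate:

      # /etc/cloudflared/config.yml (sketch)
      hostname: proxmox.example.com
      url: https://localhost:8006
      no-tls-verify: true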
  4. How to create multiple Ceph storage pools in Proxmox?

    Also - if I list the OSD hierarchy - they're all class "ssd".

      root@vwnode1:~# ceph osd crush tree --show-shadow
      ID CLASS WEIGHT  TYPE NAME
      -2   ssd 4.01990 root default~ssd
      -4   ssd 1.33997     host vwnode1~ssd
       0   ssd 0.10840         osd.0
       1   ssd 0.10840         osd.1
       2   ssd 0.10840...
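
    If the goal is to split pools by drive type, the usual approach is one CRUSH rule per device class. A sketch, assuming the classes end up named ssd and nvme:

      # One replicated rule per device class:
      # arguments are rule name, CRUSH root, failure domain, device class
      ceph osd crush rule create-replicated ssd-rule default host ssd
      ceph osd crush rule create-replicated nvme-rule default host nvme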
  5. How to create multiple Ceph storage pools in Proxmox?

    Sorry, I'm a bit confused =( To be clear - you're saying that the only way to do this is to use device classes, right? I had tried creating OSDs on the first set of disks, then creating a Ceph Pool. Afterwards, I added OSDs on the other set of disks - but it seems to have simply integrated...
  6. Proxmox 6.0 - unable to create Ceph OSDs (Unable to create a new OSD id)

    I can confirm this worked. To clarify, ceph auth get client.bootstrap-osd simply prints out the key information; you actually need to redirect it to the correct location (this got me at first, haha):

      root@vwnode1:~# ceph auth get client.bootstrap-osd > /var/lib/ceph/bootstrap-osd/ceph.keyring...
  7. [SOLVED] Ubuntu VM installation problems

    Hmm, weird - it still doesn't work for me. This is with the Ubuntu 19.04 Server ISO, on Proxmox 6.0 (Beta 1). I tried this ISO: ubuntu-19.04-live-server-amd64.iso (MD5 sum "9a659c92b961ef46f5c0fdc04b9269a6"). Note that I can use Alt + Left, or Alt + Right to switch to a different TTY - but...
  8. [SOLVED] Ubuntu VM installation problems

    I just appear to have hit this issue as well, trying to install a new VM under Proxmox 6.0 (beta 1) with Ubuntu 19.04. Is this a temporary fix, or is this the permanent workaround? Is it an issue on Ubuntu's side, or on our side? Can the Default display be made to work with Ubuntu? UPDATE - Wait, I...
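
    If the workaround being discussed is switching the VM away from the Default display, the CLI equivalent would be something like the following sketch (the VM ID is a placeholder):

      # Switch the VM's display adapter from the default to standard VGA
      qm set 100 --vga std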
  9. Understanding how migration and failover work in Proxmox/Ceph cluster?

    Right - so it will be a new boot of that VM. Curious - is there any method, or scenario under which it could be seamlessly migrated over, without a restart? Is such a thing possible under Proxmox (or elsewhere)?
  10. Understanding how migration and failover work in Proxmox/Ceph cluster?

    Hi, Say I have a Proxmox cluster, with Ceph as the shared storage for VMs. Our VMs are mostly running Windows, and clients access them via RDP. To confirm - migrating a running VM from one node to another should be fairly quick, and the VM stays running for the whole period - so an RDP session...
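
    For reference, an online (live) migration keeps the guest running while its RAM is streamed to the target node; with Ceph as shared storage, only the memory state has to move. A sketch, with placeholder VM ID and node name:

      # Live-migrate VM 100 to node2 without shutting it down
      qm migrate 100 node2 --online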
  11. How to create multiple Ceph storage pools in Proxmox?

    We have three servers running a 3-node Proxmox/Ceph setup. Each has a single 2.5" SATA drive for Proxmox, and then a single M.2 NVMe drive and a single Intel Optane PCIe NVMe drive. I'd like to use both NVMe drives for *two* separate Ceph storage pools in Proxmox. (I.e. one composed of the three M.2...
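
    One way to get two pools out of the two drive types is to give the Optane OSDs their own device class, then pin each pool to a class-specific CRUSH rule. A sketch - the OSD id, class, rule, and pool names are all placeholders:

      # Re-tag an Optane OSD with its own device class
      ceph osd crush rm-device-class osd.3
      ceph osd crush set-device-class optane osd.3
      # One rule per class, then one pool per rule
      ceph osd crush rule create-replicated optane-rule default host optane
      ceph osd pool create optane-pool 128 128 replicated optane-rule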
  12. Proxmox 6.0 - unable to create Ceph OSDs (Unable to create a new OSD id)

    Thank you! I will try this tonight as soon as I get home - I really want to get this working. So basically I just run that one command, and then the ceph-volume command should work as is? Do you think it might make sense to add an option in the Proxmox Ceph GUI to specify the number of OSDs per...
  13. Proxmox 6.0 - unable to create Ceph OSDs (Unable to create a new OSD id)

    I have just installed Proxmox 6.0 beta on a 3-node cluster. I have set up the cluster, and also set up Ceph Managers/Monitors on each node. I’m now at the stage of creating OSDs - I’m using Intel Optane drives, which benefit from multiple OSDs per drive. However, when I try to run the command to...
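
    The command in question is presumably the ceph-volume batch invocation used to carve several OSDs out of one fast device; as a sketch, with an illustrative count and device path:

      # Create four OSDs on a single NVMe/Optane device
      ceph-volume lvm batch --osds-per-device 4 /dev/nvme0n1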
  14. Proxmox won't install - "A volume group called pve already exists".

    I had to do this again recently - in case anybody else reads this, using wipefs -a /dev/<devicename> also did the trick. There is more info about the command in this post: https://forum.proxmox.com/threads/recommended-way-of-creating-multiple-osds-per-nvme-disk.52252/
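
    For anyone landing here, running wipefs without options first is a safe, read-only way to see which signatures it would remove; the device name is a placeholder:

      # List existing filesystem/RAID/LVM signatures (read-only)
      wipefs /dev/sdb
      # Erase them so the installer no longer sees the old "pve" volume group
      wipefs -a /dev/sdb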
  15. Proxmox VE 6.0 beta released!

    I just tried to install using ZFS on a Samsung M.2 NVMe drive - however, it would not boot into Proxmox VE after installation. It simply took me to a screen that said “Reboot into firmware interface”. When I re-did the installation using ext4, I was able to boot successfully. Does...
  16. API to read QEMU VM creation time or uptime?

    Thank you for the detailed answer! I would never have discovered this otherwise. (Maybe I should document it somewhere?) I used the info you provided to search the source code - it seems part of the logline is constructed in pve-common/src/PVE/Tools.pm. 1. One question - what is “dtype”? Are the...
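
    On the uptime half of the question, the standard API does expose it per running VM; a sketch using pvesh, with placeholder node name and VM ID:

      # status/current includes an "uptime" field (in seconds) for a running VM
      pvesh get /nodes/node1/qemu/100/status/current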
  17. Proxmox 6.0 - release date? Preview builds? (Debian Buster is out 6th July, 2019)

    Hi, I saw that Debian Buster is coming out in a few days! =) https://lists.debian.org/debian-devel-announce/2019/06/msg00003.html I read that the next version of Proxmox is 6.0 - and it will be based on Debian Buster, and have Ceph Luminous in it. Is there any idea of when Proxmox 6.0 is...
  18. API to read QEMU VM creation time or uptime?

    So I’m tailing /var/log/messages. When I start a VM, I see:

      Jul 3 23:06:17 syd1 pvedaemon[617005]: <root@pam> starting task UPID:syd1:000A4DBA:0112844F:5D1CA849:qmstart:108:root@pam:

    When I shut down a VM:

      Jul 3 23:07:20 syd1 pvedaemon[617005]: <root@pam> starting task...
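
    The colon-separated hex fields in the UPID include the task start time as a Unix timestamp, so the log line can be cross-checked in bash:

      # 5D1CA849 is the hex start-time field from the UPID above;
      # it decodes to Jul 3 23:06:17 2019 in the node's timezone
      date -d @$((16#5D1CA849))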
  19. Configuring hookscripts to run for all VMs? Or for all VMs cloned from an image template?

    We have a 3-node HA Proxmox cluster, with Ceph. So could we store the hookscript on CephFS? Or is some external SMB storage a better idea? And you're saying if we add it to the config text file by hand on the image, then create an image template from that via the GUI, it should propagate...
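
    For reference, attaching a hookscript to a single VM (or to the template before cloning it) looks like this sketch from the CLI; the storage and script name are placeholders, and the storage must allow the "snippets" content type:

      # Attach a hookscript stored on the "local" storage to VM 100
      qm set 100 --hookscript local:snippets/hook.pl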
  20. Image templates on CephFS shared storage only appear on one node?

    Ah got it - but say we lose node 1 - then we no longer have access to the image templates (even though they're stored in CephFS, which would still be available across the other 2 nodes). And if we make copies of those image templates on each machine - then it'd use up 3x the amount of...
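
    If the CephFS mount was added to each node as a plain directory storage, marking that storage as shared tells the cluster that every node sees the same files, so the templates show up everywhere without extra copies. A sketch of the relevant /etc/pve/storage.cfg stanza - the name, path, and content types are placeholders:

      dir: cephfs-templates
              path /mnt/pve/cephfs/templates
              content images,iso
              shared 1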