Search results

  1. Proxmox 6.0 - unable to create Ceph OSDs (Unable to create a new OSD id)

    I can confirm this worked. To clarify, ceph auth get client.bootstrap-osd simply prints out the key information, you actually need to redirect it to the correct location (this got me at first, haha): root@vwnode1:~# ceph auth get client.bootstrap-osd > /var/lib/ceph/bootstrap-osd/ceph.keyring...
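
    Spelled out as a minimal sketch of the fix described above (paths exactly as in the post):

        # Write the bootstrap-osd key to the location ceph-volume expects
        ceph auth get client.bootstrap-osd > /var/lib/ceph/bootstrap-osd/ceph.keyring

        # Sanity-check that the keyring is now in place
        cat /var/lib/ceph/bootstrap-osd/ceph.keyring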
  2. [SOLVED] Ubuntu VM installation problems

    Hmm, weird - it still doesn't work for me. This is with the Ubuntu 19.04 Server ISO, on Proxmox 6.0 (Beta 1). I tried this ISO: ubuntu-19.04-live-server-amd64.iso (MD5 sum "9a659c92b961ef46f5c0fdc04b9269a6"). Note that I can use Alt + Left, or Alt + Right to switch to a different TTY - but...
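
    For reference, checking the ISO against the quoted MD5 sum is a one-liner (filename as given in the post):

        # Should print 9a659c92b961ef46f5c0fdc04b9269a6 if the download is intact
        md5sum ubuntu-19.04-live-server-amd64.iso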
  3. [SOLVED] Ubuntu VM installation problems

    I just appear to have hit this issue as well, trying to install a new VM under Proxmox 6.0 (beta 1) with Ubuntu 19.04. Is this a temporary fix, or is this the permanent workaround? Is it an issue on Ubuntu's side, or on our side? Can the Default display be made to work with Ubuntu? UPDATE - Wait, I...
  4. Understanding how migration and failover work in Proxmox/Ceph cluster?

    Right - so it will be a new boot of that VM. Curious - is there any method or scenario under which it could be seamlessly migrated over, without a restart? Is such a thing possible under Proxmox (or elsewhere)?
  5. Understanding how migration and failover work in Proxmox/Ceph cluster?

    Hi, Say I have a Proxmox cluster, with Ceph as the shared storage for VMs. Our VMs are mostly running Windows, and clients access them via RDP. To confirm - migrating a running VM from one node to another should be fairly quick, and the VM stays running for the whole period - so an RDP session...
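
    For context, the same live migration can also be triggered from the shell; a minimal sketch, assuming a running VM with ID 100 and a target node named node2 (both placeholders):

        # Migrate the VM without shutting it down (100 and node2 are placeholders)
        qm migrate 100 node2 --online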
  6. How to create multiple Ceph storage pools in Proxmox?

    We have three servers running a 3-node Proxmox/Ceph setup. Each has a single 2.5" SATA drive for Proxmox, and then a single M.2 NVMe drive and a single Intel Optane PCIe NVMe drive. I'd like to use both NVMe drives for *two* separate Ceph storage pools in Proxmox. (I.e. one composed of the three M.2...
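
    One common way to split pools across drive types is CRUSH device classes; a sketch, assuming the OSDs have been tagged with classes named nvme and optane (class names, pool names, and PG counts are all placeholders):

        # One CRUSH rule per device class (names are examples)
        ceph osd crush rule create-replicated nvme-rule default host nvme
        ceph osd crush rule create-replicated optane-rule default host optane

        # One pool pinned to each rule (PG counts are examples)
        ceph osd pool create pool-nvme 128 128 replicated nvme-rule
        ceph osd pool create pool-optane 128 128 replicated optane-rule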
  7. Proxmox 6.0 - unable to create Ceph OSDs (Unable to create a new OSD id)

    Thank you! I will try this tonight as soon as I get home, I really want to get this working. So basically I just run that one command, and then the ceph-volume command should work as is? Do you think it might make sense to add an option in the Proxmox Ceph GUI to specify the number of OSDs per...
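
    For reference, the ceph-volume invocation for several OSDs on one drive is its batch mode; a sketch, with the device path and OSD count as placeholders:

        # Carve one NVMe drive into 4 OSDs (device and count are placeholders)
        ceph-volume lvm batch --osds-per-device 4 /dev/nvme0n1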
  8. Proxmox 6.0 - unable to create Ceph OSDs (Unable to create a new OSD id)

    I have just installed Proxmox 6.0 beta on a 3-node cluster. I have set up the cluster, and also set up Ceph Managers/Monitors on each node. I’m now at the stage to create OSDs - I’m using Intel Optane drives, which benefit from multiple OSDs per drive. However, when I try to run the command to...
  9. Proxmox won't install - "A volume group called pve already exists".

    I had to do this again recently - in case anybody else reads this, using wipefs -a /dev/<devicename> also did the trick. There is more info about the command in this post: https://forum.proxmox.com/threads/recommended-way-of-creating-multiple-osds-per-nvme-disk.52252/
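
    A cautious way to use it is to preview first; a sketch, with the device name as a placeholder:

        # Without options, wipefs only lists the signatures it sees (/dev/sdX is a placeholder)
        wipefs /dev/sdX

        # -a actually erases them (destructive!)
        wipefs -a /dev/sdX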
  10. Proxmox VE 6.0 beta released!

    I just tried to install using ZFS on a Samsung M.2 NVMe drive - however, it would not boot into Proxmox VE after installation. It simply took me to a screen that said “Reboot into firmware interface”. However, when I re-did the installation using ext4 - I was able to boot successfully. Does...
  11. API to read QEMU VM creation time or uptime?

    Thank you for the detailed answer! I would never have discovered this otherwise. (Maybe I should document it somewhere?) I used the info you provided to search the source code - it seems part of the logline is constructed in pve-common/src/PVE/Tools.pm. 1. One question - what is “dtype”? Are the...
  12. Proxmox 6.0 - release date? Preview builds? (Debian Buster is out 6th July, 2019)

    Hi, I saw that Debian Buster is coming out in a few days! =) https://lists.debian.org/debian-devel-announce/2019/06/msg00003.html I read that the next version of Proxmox is 6.0 - and it will be based on Debian Buster, and have Ceph Luminous in it. Is there any idea of when Proxmox 6.0 is...
  13. API to read QEMU VM creation time or uptime?

    So I’m tailing /var/log/messages. When I start a VM, I see: Jul 3 23:06:17 syd1 pvedaemon[617005]: <root@pam> starting task UPID:syd1:000A4DBA:0112844F:5D1CA849:qmstart:108:root@pam: When I shut down a VM: Jul 3 23:07:20 syd1 pvedaemon[617005]: <root@pam> starting task...
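
    The fifth colon-separated field of that UPID is the task start time as a hex Unix timestamp; a sketch of decoding it, assuming the UPID:node:pid:pstart:starttime:type:id:user: layout holds:

        upid='UPID:syd1:000A4DBA:0112844F:5D1CA849:qmstart:108:root@pam:'

        # Pull out the hex start time and convert it to a date
        start_hex=$(echo "$upid" | cut -d: -f5)
        date -d @$((16#$start_hex))   # -> Jul 3 2019, matching the log line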
  14. Configuring hookscripts to run for all VMs? Or for all VMs cloned from an image template?

    We have a 3-node HA Proxmox cluster, with Ceph. So could we store the hookscript on CephFS? Or is some external SMB storage a better idea? And you're saying if we add it to the config textfile by hand on the image, then create an image template from that via the GUI, it should propagate...
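
    If hand-editing the template's config is the route taken, it is a single line; a sketch, where the storage name, script name, and VMID 9000 are all placeholders:

        # /etc/pve is a cluster filesystem, so the change propagates to all nodes
        # (storage, script name, and VMID are placeholders)
        echo 'hookscript: local:snippets/on-lifecycle.sh' >> /etc/pve/qemu-server/9000.conf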
  15. Image templates on CephFS shared storage only appear on one node?

    Ah got it - but say we lose node 1 - then we no longer have access to the image templates (even though they're stored in CephFS, which would still be available across the other 2 nodes). And if we make copies of those image templates on each machine - then it'd use up 3x the amount of...
  16. Configuring hookscripts to run for all VMs? Or for all VMs cloned from an image template?

    I have been trying to find a way to script things for VM creation/shutdown, and hookscripts seem close to what I have been looking for! https://pve.proxmox.com/pve-docs/pve-admin-guide.html#_hook_scripts However, in our workflow - we have multiple image templates set up (e.g. Windows 7, Windows...
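
    Per the linked admin guide, a hookscript is attached per-VM and must live on a storage that allows the snippets content type; a minimal sketch, with the VMID and script name as placeholders:

        # Attach the script; PVE then invokes it around VM start and stop
        # (100 and on-lifecycle.sh are placeholders)
        qm set 100 --hookscript local:snippets/on-lifecycle.sh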
  17. Image templates on CephFS shared storage only appear on one node?

    Hi, I have a 3-node Proxmox and Ceph cluster. I am using Ceph RADOS for VM storage, and also CephFS to store some image templates. I can see the image templates under the first server - however, they are not seen under servers 3 and 4. I would have thought since CephFS is shared storage...
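
    One thing worth checking in this situation is whether the CephFS storage definition is restricted to certain nodes; a sketch, with the storage and node names as placeholders:

        # Inspect the cluster-wide storage definition
        cat /etc/pve/storage.cfg

        # Make the storage available on every node (names are placeholders)
        pvesm set cephfs --nodes node1,node2,node3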
  18. Installed Qemu agent on Windows machine, but Proxmox doesn't recognise it?

    Hi, I'm running a 3-node Proxmox 5.4 cluster. I've set up some VMs with Windows 7, Windows 8.1 and Windows 10. I've installed the guest-agent package from the VirtIO driver disk, in the guest-agent directory. If I go into services.msc, I do see that the Qemu agent service is started...
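
    Two things are worth checking here: whether the agent option is enabled on the VM, and whether the agent actually answers; a sketch, with the VMID as a placeholder:

        # Enable the agent option (takes effect after a full VM power cycle; 101 is a placeholder)
        qm set 101 --agent 1

        # If this errors out, PVE cannot talk to the agent inside the guest
        qm agent 101 ping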
  19. New Windows 8.1 VM always pegs CPU at 100% under Proxmox (Windows 7 and 10 are fine)

    Of course - here is the config file from /etc/pve/qemu-server - we tried creating two Windows 8.1 instances, and they both exhibit the same symptoms. The only difference between them is the version of virtio drivers installed: root@syd1:/etc/pve/qemu-server# cat 101.conf agent: 1 bootdisk: scsi0...
  20. New Windows 8.1 VM always pegs CPU at 100% under Proxmox (Windows 7 and 10 are fine)

    From Task Manager, it is svchost.exe - and I suspect it's the update process. I'm not sure what to do beyond that though.
