I can confirm this worked.
To clarify, ceph auth get client.bootstrap-osd simply prints out the key information; you actually need to redirect it to the correct location (this got me at first, haha):
root@vwnode1:~# ceph auth get client.bootstrap-osd > /var/lib/ceph/bootstrap-osd/ceph.keyring...
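For anyone else landing here, this is the full sequence I mean - just a rough sketch, with /dev/nvme0n1 as a placeholder device path:
# Write the bootstrap-osd key where ceph-volume expects to find it
ceph auth get client.bootstrap-osd > /var/lib/ceph/bootstrap-osd/ceph.keyring
# After that, OSD creation should work (placeholder device)
ceph-volume lvm create --data /dev/nvme0n1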
Hmm, weird - it still doesn't work for me. This is with the Ubuntu 19.04 Server ISO, on Proxmox 6.0 (Beta 1).
I tried this ISO:
ubuntu-19.04-live-server-amd64.iso (MD5 sum "9a659c92b961ef46f5c0fdc04b9269a6").
Note that I can use Alt + Left, or Alt + Right to switch to a different TTY - but...
I just appear to have hit this issue as well, trying to install a new VM under Proxmox 6.0 (beta 1) with Ubuntu 19.04.
Is this a temporary fix, or is this the permanent workaround? Is it an issue on Ubuntu's side, or on our side?
Can the Default display be made to work with Ubuntu?
UPDATE - Wait, I...
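If it helps anyone following along, the workaround as I understand it (treat this as my assumption) is to switch the VM's display away from Default before booting the installer - 100 is just a placeholder VM ID:
# Placeholder VM ID - change the display type from Default to standard VGA
qm set 100 --vga std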
Right - so it will be a new boot of that VM.
Curious - is there any method or scenario under which it could be seamlessly migrated over, without a restart? Is such a thing possible under Proxmox (or elsewhere)?
Hi,
Say I have a Proxmox cluster, with Ceph as the shared storage for VMs. Our VMs are mostly running Windows, and clients access them via RDP.
To confirm - migrating a running VM from one node to another should be fairly quick, and the VM stays running for the whole period - so an RDP session...
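For reference, this is the kind of live migration I mean (the VM ID and target node name are placeholders):
# Migrate VM 108 to node syd2 while it keeps running
qm migrate 108 syd2 --online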
We have three servers running a 3-node Proxmox/Ceph setup.
Each has a single 2.5" SATA for Proxmox, and then a single M.2 NVNe drive and a single Intel Optane PCIe NVMe drive.
I'd like to use both NVMe drives for *two* separate Ceph storage pools in Proxmox. (I.e. one composed of the three M.2...
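Roughly what I have in mind, as a sketch - the class, rule and pool names are placeholders, and I'm assuming each OSD gets tagged with a custom device class first:
# Placeholder OSD IDs - clear the auto-assigned class, then tag by drive type
ceph osd crush rm-device-class osd.0 osd.1 osd.2 osd.3 osd.4 osd.5
ceph osd crush set-device-class optane osd.0 osd.1 osd.2
ceph osd crush set-device-class m2nvme osd.3 osd.4 osd.5
# One replicated CRUSH rule per class, then a pool on each rule
ceph osd crush rule create-replicated optane-rule default host optane
ceph osd crush rule create-replicated m2-rule default host m2nvme
ceph osd pool create optane-pool 128 128 replicated optane-rule
ceph osd pool create m2-pool 128 128 replicated m2-rule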
Thank you! I will try this tonight as soon as I get home; I really want to get this working.
So basically I just run that one command, and then the ceph-volume command should work as is?
Do you think it might make sense to add an option in the Proxmox Ceph GUI to specify the number of OSDs per...
I have just installed Proxmox 6.0 beta on a 3-node cluster.
I have setup the cluster, and also setup Ceph Managers/Monitors on each node.
I’m now at the stage of creating OSDs - I’m using Intel Optane drives, which benefit from multiple OSDs per drive. However, when I try to run the command to...
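For context, this is the sort of command I mean (the device path and OSD count are placeholders):
# Split one NVMe device into 4 OSDs
ceph-volume lvm batch --osds-per-device 4 /dev/nvme0n1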
I had to do this again recently - in case anybody else reads this, using wipefs -a /dev/<devicename> also did the trick.
There is more info about the command in this post:
https://forum.proxmox.com/threads/recommended-way-of-creating-multiple-osds-per-nvme-disk.52252/
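To spell out what I did - roughly this, with the device path as a placeholder (and triple-check you have the right disk before wiping it):
# Destroy any leftover LVM/Ceph metadata, then clear the remaining signatures
ceph-volume lvm zap /dev/nvme0n1 --destroy
wipefs -a /dev/nvme0n1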
I just tried to install using ZFS on a Samsung M.2 NVMe drive - however, it would not boot into Proxmox VE after installation.
It simply took me to a screen that said “Reboot into firmware interface”.
However, when I re-did the installation using ext4, I was able to boot successfully.
Does...
Thank you for the detailed answer! I would never have discovered this otherwise. (Maybe I should document it somewhere?)
I used the info you provided to search the source code - it seems part of the log line is constructed in pve-common/src/PVE/Tools.pm.
1. One question - what is “dtype”? Are the...
Hi,
I saw that Debian Buster is coming out in a few days! =)
https://lists.debian.org/debian-devel-announce/2019/06/msg00003.html
I read that the next version of Proxmox is 6.0 - and it will be based on Debian Buster, and have Ceph Nautilus in it.
Is there any idea of when Proxmox 6.0 is...
So I’m tailing /var/log/messages.
When I start a VM, I see:
Jul 3 23:06:17 syd1 pvedaemon[617005]: <root@pam> starting task UPID:syd1:000A4DBA:0112844F:5D1CA849:qmstart:108:root@pam:
When I shutdown a VM:
Jul 3 23:07:20 syd1 pvedaemon[617005]: <root@pam> starting task...
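For anyone else reading these lines, here's how I ended up picking the UPID apart - this is based on my own reading of Tools.pm, so take the field names as my assumption:
# Example UPID from the qmstart line above
upid='UPID:syd1:000A4DBA:0112844F:5D1CA849:qmstart:108:root@pam:'
IFS=':' read -r _ node pid pstart starttime dtype id user _ <<< "$upid"
# pid/pstart/starttime are hex; dtype is the task type, id here is the VMID
echo "node=$node type=$dtype vmid=$id user=$user started=$(date -d @$((16#$starttime)))"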
We have a 3-node HA Proxmox cluster, with Ceph.
So could we store the hookscript on CephFS? Or is some external SMB storage a better idea?
And you're saying if we add it to the config text file by hand on the image, then create an image template from that via the GUI, it should propagate...
Ah got it - but say we lose node 1 - then we no longer have access to the image templates (even though they're stored in CephFS, which would still be available across the other 2 nodes).
And if we make copies of those image templates on each machine - then it'd use up 3x the amount of...
I have been trying to find a way to script things for VM creation/shutdown, and hookscripts seem close to what I have been looking for!
https://pve.proxmox.com/pve-docs/pve-admin-guide.html#_hook_scripts
However, in our workflow - we have multiple image templates setup (e.g. Windows 7, Windows...
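For reference, the bare-bones shape I'm experimenting with - the script name, storage name and VM ID are all placeholders, and the phase names come from the admin guide:
#!/bin/bash
# Placeholder hookscript, e.g. saved as snippets/guest-hook.sh on a shared storage
vmid="$1"; phase="$2"
case "$phase" in
    pre-start)  echo "VM $vmid is about to start" ;;
    post-start) echo "VM $vmid has started" ;;
    pre-stop)   echo "VM $vmid will be stopped" ;;
    post-stop)  echo "VM $vmid has stopped" ;;
esac
exit 0
Attached to a VM with something like this (assuming the storage has the "snippets" content type enabled):
qm set 108 --hookscript cephfs:snippets/guest-hook.sh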
Hi,
I have a 3-node Proxmox and Ceph cluster.
I am using Ceph RADOS for VM storage, and also CephFS to store some image templates.
I can see the image templates under the first server - however, they are not seen under servers 3 and 4:
I would have thought since CephFS is shared storage...
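For comparison, this is roughly what I'd expect the entry in /etc/pve/storage.cfg to look like (names are placeholders) - the first thing I'd check is whether there's a "nodes" line restricting it to a single server:
cephfs: cephfs-templates
        path /mnt/pve/cephfs-templates
        content iso,vztmpl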
Hi,
I'm running a 3-node Proxmox 5.4 cluster.
I've setup some VMs with Windows 7, Windows 8.1 and Windows 10.
I've installed the guest-agent package from the VirtIO driver disk, in the guest-agent directory:
If I go into services.msc, I do see that the Qemu agent service is started...
Of course - here is the config file from /etc/pve/qemu-server - we tried creating two Windows 8.1 instances, and they both exhibit the same symptoms. The only difference between them is the version of the VirtIO drivers installed:
root@syd1:/etc/pve/qemu-server# cat 101.conf
agent: 1
bootdisk: scsi0...
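In case it's useful context, this is the quick check I run from the host to see whether the agent inside the guest is actually reachable (101 being the VM from the config above):
# Exits cleanly if the QEMU guest agent inside VM 101 answers
qm agent 101 ping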