Are you running the recommended VirtIO-Win drivers for Windows systems from Red Hat?
I believe there is a balloon driver if your system allows memory ballooning, and there may be other drivers depending on your setup.
Link to the latest folder...
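On the host side, ballooning also has to be enabled on the VM itself. A minimal sketch, assuming a hypothetical VM ID of 100; the `balloon` value is the minimum memory the VM can be shrunk to, and it only works once the VirtIO balloon driver is installed in the guest:

```
# Hypothetical VM ID; requires the VirtIO balloon driver inside the guest.
qm set 100 --memory 4096 --balloon 1024
# Verify the current setting:
qm config 100 | grep balloon
```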
Thank you for the information. I figured this is something you were already working on. Namespaces would be great.
I'm already using the namespace feature to connect my standalone Ceph cluster to my Proxmox VE nodes.
That does bring up one other question though. I've noticed that while a...
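For reference, a sketch of how such an external connection with a namespace can look in /etc/pve/storage.cfg; the storage ID, monitor address, pool, and namespace names here are all placeholders, not my actual values:

```
rbd: ext-ceph
        content images,rootdir
        krbd 0
        monhost 10.0.0.1
        pool rbd
        namespace pve
        username admin
```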
Greetings, and thank you for sharing such a great product.
I have run into something in my home lab that I think others may run into in a more production-oriented environment.
I have a Proxmox Backup Server running as a VM on a standalone physical server, which is running a single instance...
Thank you! That fixed my QM CREATE issues.
The commands were part of an Ansible playbook and role(s) I have been putting together for automating Proxmox VM creation. I had them buried inside an Ansible shell command.
One other factor that may come into play:
My Proxmox hosts use Open vSwitch networking, configured per the Proxmox guide, rather than the default Linux bridging.
The problem still exists with QM CREATE: it builds the VM but errors out on any "net" option. As above, I can still add networking with the QM SET command after the VM is created.
Here is the output:
root@HOST:~# qm create 209 --name sm4pxe-209 --boot order=scsi0;net0...
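My guess at the cause (an assumption, but easy to demonstrate without qm): the unquoted `;` in `order=scsi0;net0` is a command separator to the shell, so everything from `net0` onward never reaches `qm` at all:

```shell
# Minimal demonstration: print what the last argument looks like after
# the shell has parsed the line.
last_arg() { shift $(($# - 1)); printf '%s\n' "$1"; }

last_arg --boot 'order=scsi0;net0'   # quoted: prints order=scsi0;net0
last_arg --boot order=scsi0          # what reaches the command unquoted;
                                     # the shell then tries to run 'net0 ...'
                                     # as a separate command
```

If that is what is happening, the fix on the qm side is simply quoting the value, e.g. `--boot 'order=scsi0;net0'`, or escaping the semicolon.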
My mistake, sorry. The original failures were using the QM CREATE command, but probably around a month ago now.
Let me retest the QM CREATE with the "net" option and see if it is still an issue.
I actually do a version of what you are thinking about, with around 30 different VLANs on two HA pairs of pfSense routers set up in a layered DMZ / Internal network stack. The Edge pair of pfSense routers handles personal DMZ needs like home IoT devices, game consoles, and friends' phones, and the internal...
I have run into a situation when creating VMs using the qm config command from the command line.
The QM CONFIG command fails when I include any "net" options to create the VM NICs as part of VM creation. However, once the VM has been created, the QM SET command does work successfully with the...
I wish the instructions had worked. Those were the exact instructions I used when trying to connect to a native Ceph Octopus cluster.
Like I said above, I was able to authenticate successfully, but could not actually use any of the storage. Neither RBD nor FS worked.
I'll take another shot...
I'm wondering if there is any chance we will be able to connect to external Ceph RBD and FS sources?
Currently, connecting to an external Octopus cluster authenticates, but the storage cannot actually be used. I have not tried that same configuration with a Nautilus cluster...
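For anyone trying to reproduce this, the setup I used was roughly the following; the monitor address, storage ID, and pool are placeholders, and the external cluster's keyring has to be copied under /etc/pve/priv/ceph/ with the storage ID as the filename:

```
# Placeholders throughout; the keyring filename must match the storage ID.
mkdir -p /etc/pve/priv/ceph
scp root@ceph-mon:/etc/ceph/ceph.client.admin.keyring /etc/pve/priv/ceph/ext-rbd.keyring
pvesm add rbd ext-rbd --monhost 10.0.0.1 --pool rbd --username admin --content images
```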
I also ran into not being able to create bluestore OSDs today. About two days ago I created a test cluster and everything worked normally. Today, when I created a test cluster, I was unable to create OSDs, either by CLI or GUI.
Eventually I tried creating a filestore OSD and that worked. Next...