According to this post https://forum.proxmox.com/threads/virtio-scsi-vs-virtio-scsi-single.28426, virtio uses a single controller per disk, just like VirtIO SCSI single.
As to which one is "faster", no idea.
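If anyone wants to benchmark the two against each other, the controller type is just a VM option; a quick sketch (VM ID 100 is a placeholder):

  qm set 100 --scsihw virtio-scsi-pci     # one VirtIO SCSI controller shared by all SCSI disks
  qm set 100 --scsihw virtio-scsi-single  # a dedicated controller per disk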
This is fixed.
There were several issues.
First was updating Ansible to the latest version so the Proxmoxer pip module can talk to the Proxmox 6.x API when creating VMs.
Second was that the behavior of "connection: local" used playbook-wide has changed since Ansible 2.8.5. I had to add...
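For context, the playbook-wide pattern in question looks roughly like this (only a sketch; the inventory group and VM name are placeholders):

  - hosts: proxmox             # placeholder group; the API calls run from the control node
    connection: local          # this is the behavior that changed around 2.8.5
    gather_facts: no
    tasks:
      - name: VM create
        proxmox_kvm:
          api_host: "{{ api_host }}"
          api_user: "root@pam"
          api_password: "{{ api_password }}"
          node: "{{ node }}"
          name: testvm         # example VM name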
I forgot the command-line kung-fu to tell urllib3 to ignore the self-signed certs on the Proxmox hosts. I'm getting the same error as this post https://learn.redhat.com/t5/Automation-Management-Ansible/Ansible-proxmox-kvm-module/td-p/3935
Anyone remember the command-line or environment variable to...
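For reference, two knobs in that area (the playbook name below is just an example, and I'm not certain either is the one I'm forgetting): PYTHONWARNINGS only silences urllib3's warning, while validate_certs: no on the task actually skips verification.

  # only hides urllib3's "Unverified HTTPS request" warning for this run:
  PYTHONWARNINGS="ignore:Unverified HTTPS request" ansible-playbook create-vm.yml

  # actually skip cert verification, per task:
  #   proxmox_kvm:
  #     ...
  #     validate_certs: no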
Well, the fun continues.
Before I did a clean install of the 6.0 beta, I backed up my Ansible playbooks from Proxmox VE 5.4, which worked.
So I did an 'apt-get install ansible' and an 'apt-get install python-pip', then a 'pip install proxmoxer'.
When I run the playbook to create a VM, I get the following error...
Can someone else confirm that /usr/sbin/ceph-disk exists? It shows up in 'apt-file list ceph-osd' but not in /usr/sbin.
I checked my other 2 nodes and there's no ceph-disk on those either.
I also did an 'apt-get install --reinstall ceph-osd', but still no ceph-disk.
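In case it helps, the exact checks I ran (nothing beyond what's described above):

  apt-file list ceph-osd | grep ceph-disk     # shows /usr/sbin/ceph-disk in the package contents
  ls -l /usr/sbin/ceph-disk                   # missing on all 3 nodes
  apt-get install --reinstall ceph-osd        # still no /usr/sbin/ceph-disk afterwards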
I'm running a proof-of-concept full-mesh 3-node Ceph cluster with identical servers. An odd number of nodes is needed for quorum. It's currently running Proxmox 5.4, but I will be installing the 6.0 beta for its performance optimizations.
Since an odd number of nodes is needed for a quorum, create either a 3 or 5 node...
The mini mono H310 supposedly can be flashed to IT mode https://www.youtube.com/watch?v=Y1Xi5NZRlXM and https://www.reddit.com/r/homelab/comments/bkxszi/flashing_the_h310_mono_mini_to_it_mode/
Here are some that are already flashed to IT mode...
I believe WD Reds are SATA drives? If so, you may want to confirm whether write caching is enabled with the 'hdparm' command.
I don't use SATA; I use SAS drives, which use the 'sdparm' command instead.
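For reference, the quick checks (device names are just examples):

  hdparm -W /dev/sda           # SATA: "write-caching = 1 (on)" means the cache is enabled
  sdparm --get=WCE /dev/sda    # SAS: "WCE 1" means the write cache is enabled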
I don't have Dells, but I do have 8-drive SunFires just as old as the R610s. They don't have 10GbE, but they do have a full-mesh 1GbE setup https://pve.proxmox.com/wiki/Full_Mesh_Network_for_Ceph_Server
Since you can configure Ceph via the GUI, I suggest you go with that...
I did everything through the GUI.
The steps I did were:
1) Create CephFS first https://pve.proxmox.com/wiki/Manage_Ceph_Services_on_Proxmox_VE_Nodes#pveceph_fs. This will create two pools called cephfs_data and cephfs_metadata
2) Create RBD by clicking on Datacenter -> Storage -> Add -> RBD. It should...
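For anyone who prefers the shell, a rough CLI equivalent of those GUI steps (I did everything in the GUI, so take the exact pveceph/pvesm flags and the pool/storage names as assumptions):

  # 1) CephFS: needs an MDS, then creates the cephfs_data / cephfs_metadata pools and the filesystem
  pveceph mds create
  pveceph fs create --name cephfs --add-storage
  # 2) RBD: create a pool and register it as RBD storage for VM disks
  pveceph pool create vm-pool
  pvesm add rbd vm-rbd --pool vm-pool --content images,rootdir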
Thanks to morph027, the MAC address is returned upon VM creation. Here's my Ansible VM Create task:
tasks:
  - name: VM create
    proxmox_kvm:
      api_user: "root@pam"
      api_password: "{{ api_password }}"
      api_host: "{{ api_host }}"
      node: "{{ node }}"
      name: "{{...
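To actually capture the MAC, I register the result and dump it; the variable name is arbitrary, and exactly which key the MAC shows up under can vary by module version, so I just print the whole structure first:

      # ...remaining proxmox_kvm parameters...
    register: vm_create

  - name: Show what proxmox_kvm returned (the MAC address is in here)
    debug:
      var: vm_create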
For the PERC, delete any existing RAID volumes. This will make all the drives "Unassigned" and the PERC should pass the drives through.
I'm using 15K RPM SAS drives as well.
Anyone successfully used the Ansible module of proxmox_kvm to get a MAC address of a VM for a PXE install?
Here is the task code:
- name: Get Facts
  proxmox_kvm:
    api_user: "root@pam"
    api_password: test
    api_host: test
    node: test
    name: test
    validate_certs: no...