Yes, we have read about this type of experience in the other thread. It looks like, when using iSCSI LUNs, qemu accesses the SCSI disk (the LUN exposed through iSCSI) directly.
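Just to illustrate what I mean by direct access (a sketch, not my actual command; the portal address and IQN are placeholders), qemu can open an iSCSI LUN itself through libiscsi with an iscsi:// URL:
kvm -drive file=iscsi://192.168.0.10/iqn.2000-01.com.synology:nas.Target-1/0,if=virtio,cache=none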
I tried, but it is not as simple as that. In cluster mode, it seems that the LVs only appear when PVE starts the VM:
root@proxmox3:~# /usr/bin/kvm -id 100 -chardev socket,id=qmp,path=/var/run/qemu-server/100.qmp,server,nowait -mon chardev=qmp,mode=control -vnc...
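As a side note, this is how I check whether an LV is already active on a node, and how to activate it by hand without starting the VM (a sketch; vg100 / vm-100-disk-1 are placeholder names, plain LVM commands assumed):
root@proxmox3:~# lvs -o lv_name,vg_name,lv_attr vg100          # the 5th attribute character is 'a' when the LV is active
root@proxmox3:~# lvchange -ay /dev/vg100/vm-100-disk-1         # activate the LV manually on this node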
To be completely clear, do you agree with:
virtio-blk = VirtIO disk in the Proxmox Hardware tab + VIRTIO controller in the Proxmox Options tab
virtio-scsi = SCSI disk in the Proxmox Hardware tab + VIRTIO controller in the Proxmox Options tab
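To illustrate what I mean, this is roughly how the two choices end up in the VM config file (a sketch on my side; the storage/volume name and size are placeholders):
# virtio-blk: the disk itself is a VirtIO block device
virtio0: mystorage:vm-100-disk-1,size=32G
# virtio-scsi: a SCSI disk attached to the VirtIO SCSI controller
scsihw: virtio-scsi-pci
scsi0: mystorage:vm-100-disk-1,size=32G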
NEW VM : 110 (SCSI / VIRTIO / SeaBIOS) : sees 100G LUN (bad)...
spirit: virtio = disk type SCSI and hardware type = VIRTIO, can you confirm?
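For what it's worth, here is a quick way to check from inside the guest whether it sees the whole 100G LUN or only the LV it should get (a sketch; lsblk is assumed to be available in the guest):
root@guest:~# lsblk -o NAME,SIZE,TYPE,MODEL    # a 100G disk here means the guest was given the raw LUN, not the LV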
My test Debian VM uses the default partitioning: no UEFI leads to an msdos partition table, and no LVM is used.
Now : I changed the drive to scsi-blk. Problem gone. Installed qemu-guest-agent, still no problem.
root@proxmox1:~# cat /etc/pve/qemu-server/100.conf
agent: 1
bios: seabios
boot: cdn
bootdisk: virtio0
cores: 4
ide2: none,media=cdrom
memory: 2048
name: vm1
net0...
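Side note on agent: 1 above: to check that the guest agent really answers, I poke the qga socket that PVE creates for the VM (a sketch; socat is assumed to be installed on the host, and the socket path following the VM id is an assumption on my side):
root@proxmox1:~# echo '{"execute":"guest-ping"}' | socat -t5 - UNIX-CONNECT:/var/run/qemu-server/100.qga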
madsmao: can you give your feedback on your tests with "lxc.cap.drop: syslog" and any side effects, please? I find this very interesting.
Thank you.
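For context, this is the kind of change I mean (a sketch; CT 101 is just an example container id, and the raw LXC key is appended to the container config on the PVE host):
root@proxmox1:~# echo 'lxc.cap.drop: syslog' >> /etc/pve/lxc/101.conf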
Bad luck, I use virtio-blk.
I tested with virtio-scsi and the results are bad, see https://forum.proxmox.com/threads/problem-with-iscsi-virtio-ovmf.26192/
If I missed something, OK, but my current feeling is that the OVMF BIOS integration is not right (i.e. not production ready). I spent several hours testing...
Hello, here is my setup:
Synology server, 2x 100G LUNs exposed via a common target
Proxmox 4.1-1 in a cluster of 3 servers
New VM:
bios: ovmf
bootdisk: scsi0
cores: 4
ide2: local:iso/debian-8.2.0-amd64-netinst.iso,media=cdrom
memory: 2048
name: vm1
net0: virtio=3A:34:66:31:36:37,bridge=vmbr0...
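For completeness, the storage side in /etc/pve/storage.cfg looks roughly like this (a sketch from memory; portal, IQN, VG name and LUN id are placeholders, with the LVM storage sitting on top of the iSCSI LUN):
iscsi: syno
        portal 192.168.0.10
        target iqn.2000-01.com.synology:nas.Target-1
        content none
lvm: lvm-syno
        vgname vg100
        base syno:0.0.0.scsi-<lun-id>
        shared 1
        content images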
Hello,
I am asking whether PVE has an admin fencing mechanism to avoid the LVM metadata being manipulated on two nodes at the same time.
Reading this: http://www.tldp.org/HOWTO/html_single/LVM-HOWTO/
section: "LVM is not cluster aware"
I am sharing a PV/VG over an iSCSI LUN (untick "use directly" in...
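To illustrate the risk (just a way to observe it, not a fix): the VG metadata carries a sequence number, so each node's view can be compared; vg100 is a placeholder name:
root@proxmox1:~# vgs -o vg_name,vg_seqno vg100
root@proxmox2:~# vgs -o vg_name,vg_seqno vg100    # if the sequence numbers differ, the two nodes do not see the same metadata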
Hello,
I installed debian-8.2.0-amd64-netinst.iso on a KVM guest with UEFI boot (OVMF BIOS).
When the guest starts, the BIOS doesn't boot from the EFI partition (Debian doesn't start; the BIOS drops to the fallback EFI command line).
In the BIOS, I can start Debian when I use "boot from file"...
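In case it helps someone, the workaround I would try from inside the installed guest is to copy GRUB's EFI binary to the removable-media fallback path that OVMF looks at when it has no boot entry (a sketch; paths assume a standard Debian EFI install with the ESP mounted at /boot/efi):
root@debian:~# mkdir -p /boot/efi/EFI/boot
root@debian:~# cp /boot/efi/EFI/debian/grubx64.efi /boot/efi/EFI/boot/bootx64.efi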