So qemu-system-x86 and pve-qemu can't coexist? (And can't be made to coexist by some linking trickery?)
If so, then that's something I'll have to accept.
I just wanted to have the tools on the PVE hosts rather than having to keep them on a separate system and copy over the VM disk files whenever I need to work...
I have to repeat: I DON'T want to use libvirt.
I know, there is no use case for libvirt with PVE.
libvirt came into the discussion because you said it is a dependency of libguestfs-tools (and this is all about using libguestfs-tools).
You mentioned libvirt is the culprit why libguestfs-tools is...
I know libvirt can't handle pve managed VMs.
BUT libguestfs-tools should be able to handle the VMs' disk files (in their different formats), as PVE isn't using anything PVE-specific for them.
And libguestfs-tools is exactly for this kind of use case (among others that 'might' not work).
But e.g...
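To illustrate the kind of host-side workflow I mean (a sketch only; the VM ID and disk path are hypothetical, adjust them to your storage layout):

```shell
#!/bin/sh
# Sketch: inspecting a stopped PVE guest's disk image with libguestfs-tools.
# VM ID 100 and the path below are hypothetical examples.
DISK="/var/lib/vz/images/100/vm-100-disk-1.qcow2"

if command -v virt-filesystems >/dev/null 2>&1 && [ -f "$DISK" ]; then
    # List the filesystems inside the image (works for raw, qcow2, vmdk, ...).
    virt-filesystems -a "$DISK" --long
    # For manual repair work, an interactive read-only shell into the guest:
    # guestfish --ro -a "$DISK" -i
else
    echo "libguestfs-tools not available or image missing; skipping"
fi
```

Nothing in there touches libvirt or the PVE management stack; it only opens the disk file directly, which is why I expect it to work on a PVE host.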
Thank you. Sorry for asking again, and if that's 'obvious' (but I am not a Linux 'expert').
If you don't use libvirt at all, why can't it be installed side by side with Proxmox, and why does it conflict then?
I understand that I might not be able to manage Proxmox VMs with...
Thank you for quick reply.
Out of curiosity: may I ask whether it can be made 'compatible', or what is/will be the root cause of the 'incompatibility'? PVE is using no special formats for the disks.....
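Just to show what I mean by "no special formats" (a sketch; the path is a hypothetical example from default local storage):

```shell
#!/bin/sh
# Sketch: a PVE disk image is a plain qemu format that qemu-img understands.
# The path below is a hypothetical example; adjust to your storage.
DISK="/var/lib/vz/images/100/vm-100-disk-1.qcow2"

if command -v qemu-img >/dev/null 2>&1 && [ -f "$DISK" ]; then
    qemu-img info "$DISK"   # reports format (qcow2/raw/...), virtual size, etc.
    # Standard tooling can even convert it for use elsewhere, e.g.:
    # qemu-img convert -O raw "$DISK" /tmp/vm-100-disk-1.raw
else
    echo "qemu-img not available or image missing; skipping"
fi
```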
Hi eddyah,
As I would like to have libguestfs-tools available as well, could you please respond if:
- after reinstalling proxmox-ve to fix it, were libguestfs-tools then available/usable/operational?
- did you find other broken things afterwards?
Otherwise I will step back from...
I can add some more/new observations from my side.
While testing an install of debian-netinst-8.8 into a new VM and simultaneously watching atop on the node where the VM is installing:
A) using QCOW2 disk format for IDE/SATA/SCSI/VirtIO
Looking at the atop values immediately shows
- disk(s) being busy...
Any news on this?
As I am having the same problem on several Proxmox installs with ZFS (2-node clusters, v4.latest, AND even with the new v5.0!).
Servers are Dell (different models: R815, R630, R410, ...) with different HBAs, disk drives/brands and even CPU architectures (AMD, Intel).
Symptom(s)...
Is anyone aware of something on the horizon to get Proxmox and current Gluster versions working together again?
I ran into the same issue(s) (on test servers, so nothing bad happened), but would like to go with the 3.7 or 3.8 stable/latest as soon as possible without having to 'downgrade' Gluster.
Is...