Have you checked for any potential issues with the node's backing storage device(s), RAID or whatever else you are using?
Any clues in the system logs?
Hello,
I've got a quick question regarding LXC containers and the possibility of adding persistent mount options for the rootfs filesystem (ext4). Just wanted to clarify that I am *not* using ZFS as a backend for my LXC containers (not sure if that plays a role in having these settings set...
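For illustration, if your PVE version supports it, a persistent mount option on the rootfs would go into the container's config; the sketch below is an assumption (the `mountoptions` key, vmid 101, and the storage/volume names are made up — check `man pct` for your release):

```
# /etc/pve/lxc/101.conf -- hypothetical example; the mountoptions key
# may not exist in all PVE versions
rootfs: local-lvm:vm-101-disk-1,size=8G,mountoptions=noatime
```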
I noticed that the PVE 5.0 beta ships with the DRBD 8.4.7 kernel module (which is fine). Would it be possible to install the drbd9 kernel module instead, for people who want to test that one?
Sorry for bringing back this post, but I have a quick question. I understand that Linbit is now responsible for maintaining the storage plugin, but what about GUI integration for things like snapshots, for example? Does the PVE team have any plans to integrate that into the GUI? I have also noticed that...
Hello,
I read that SPICE currently supports smartcard passthrough. My question is if this is supported in PVE as well?
Is it possible to enable it in a guest VM, similar to what we do to enable USB support (usb:spice)?
I would like to pass through both built-in and USB card readers. I have...
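For reference, the SPICE USB redirection mentioned above is configured per VM in /etc/pve/qemu-server/&lt;vmid&gt;.conf; a sketch (vmid 108 is just an example), with whether smartcard passthrough can be wired up similarly being exactly the open question here:

```
# /etc/pve/qemu-server/108.conf (example vmid)
# Each "usbN: spice" line adds one SPICE USB redirection channel
usb0: spice
usb1: spice
```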
Hello,
I tried to create an LXC container after applying the latest pve (no-subscription) updates, but it seems that it no longer offers the option to select Ceph RBD storage in the 'create CT' wizard.
I have a single ceph pool which I export as 2 separate storages on pve (one...
- I changed the content types to cover LXC and images accordingly.
- I copied the keyring to the correct file name.
Everything is working perfectly fine now, many thanks to all!
Your help is much appreciated.
Thanks for the clarification. Any ideas how to accomplish this? Can't find it in the wiki...
My current Ceph config is:
rbd: cephstor1
    monhost 192.168.149.115;192.168.149.95;192.168.148.65
    pool rbdpool1
    username admin
    content images,rootdir
Tried to add a second entry:
rbd...
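For illustration, a second storage entry over the same pool might look like the sketch below in /etc/pve/storage.cfg (the storage name cephstor2 and the split of content types are assumptions, not from the post):

```
rbd: cephstor2
    monhost 192.168.149.115;192.168.149.95;192.168.148.65
    pool rbdpool1
    username admin
    content rootdir
```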
Thanks for the info. Yes, I actually have a 'shared' rbdpool with KRBD enabled in order to run LXC containers.
I will disable KRBD for the existing pool and create another pool with KRBD enabled for the LXC containers.
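The split described above can be expressed in /etc/pve/storage.cfg via the krbd flag; a minimal sketch, where the storage and pool names (kvm-rbd, lxc-rbd, lxcpool1) are made-up and the monhost/username lines from the existing entry would be repeated:

```
rbd: kvm-rbd
    pool rbdpool1
    krbd 0
    content images

rbd: lxc-rbd
    pool lxcpool1
    krbd 1
    content rootdir
```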
I did some further tests:
- Restarted all nodes
- Tried to snapshot when VM is powered off --> Result: success (both in creating and destroying snapshot)
- Tried to snapshot (including RAM) when VM is powered on --> Result: fail (VM 108 qmp command 'savevm-start' failed - failed to open...
Hello,
I have the following situation after the last upgrade to proxmox-ve 4.3-66.
proxmox-ve: 4.3-66 (running kernel: 4.4.19-1-pve)
pve-manager: 4.3-3 (running version: 4.3-3/557191d3)
pve-kernel-4.4.19-1-pve: 4.4.19-66
lvm2: 2.02.116-pve3
corosync-pve: 2.4.0-1
libqb0: 1.0-1
pve-cluster...
Hello,
Just installed PVE 4.2 on top of a blank Debian Jessie.
I followed this link from the wiki in order to do that.
Initially I could access the web interface without any issues, but after installing openvswitch on the same machine I no longer can, although I can still access the machine normally via SSH...
A ZVOL is a block device, like /dev/sda for example.
It is not a filesystem like ext3 or ext4, so you can dd a raw image directly to the block device, just as you would with a normal HDD.
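As a safe, self-contained sketch of that dd workflow (a regular file stands in for the ZVOL here; on a real host the of= target would be the device node under /dev/zvol/, e.g. /dev/zvol/rpool/vm-100-disk-1, which is an assumed name):

```shell
#!/bin/sh
set -e
# Stand-in for the ZVOL block device (on a real host: /dev/zvol/<pool>/<vol>)
truncate -s 4M zvol-standin.img
# A fake 2 MiB raw disk image to copy
dd if=/dev/urandom of=image.raw bs=1M count=2 2>/dev/null
# The actual operation from the post: raw image -> block device
dd if=image.raw of=zvol-standin.img bs=1M conv=notrunc 2>/dev/null
# Verify the first 2 MiB of the target match the image
cmp -n $((2*1024*1024)) image.raw zvol-standin.img && echo "copied OK"
```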
Hello list,
I've upgraded a standalone node from PVE 3.4 to 4.1 by following the guide in Wiki (https://pve.proxmox.com/wiki/Upgrade_from_3.x_to_4.0).
Everything went smoothly until the point where I restarted the node.
During startup I'm getting the following message which is flooding the...
Hello,
PVE 4.0 ships with DRBD9 already included in the pve kernel.
Is it possible to use drbd 8.4.6 (compiled from source) instead?
I'm trying to compile it against 4.2.3-2-pve but I get the following error:
The PVE headers and build-essential are already installed.
Thanks
I've run some tests with zfs+drbd and OpenVZ containers.
It seems that OpenVZ containers can be stored only on an ext3/ext4-formatted volume.
That means you cannot use this DRBD resource in dual-primary mode, since that would corrupt the ext3/ext4 filesystem on top of it.
If you need to use...
Can't answer that since I don't use vz, but you can easily test it. If vz works with LVM and DRBD, then it shouldn't matter whether you have ZFS or hardware RAID below DRBD.