My goal is to avoid having many bridges, because if I miss one bridge on one host, some VMs may not work when migrated to that host. One bridge for all VLANs would be preferable.
Hi,
I am sure this has been asked many times before. I believe I configured it as described in the manuals, but it does not work.
We have VLAN 201 in trunk mode on the switch, and I want to use it in a guest via a bridge that is not itself tagged:
PVE network:
a bridge over the bond with no tags, and a separate VLAN for the...
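A minimal sketch of what I mean, in /etc/network/interfaces (bond0 over eno1/eno2; the interface names are placeholders, not from my actual hosts):

auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2
    bond-miimon 100
    bond-mode 802.3ad

auto vmbr0
iface vmbr0 inet manual
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094

The idea being that the guest NIC carries tag=201, rather than the bridge itself being tagged.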
What happens when I reboot the node or migrate the container to another host? Isn't there a way to import a manually created vm-XXX-disk-N as if it had been created through the GUI?
Hello,
I need to use RBDs with a custom object size, different from the default of 4 MiB (order 22).
While it is possible to create one from the command line:
rbd -p poolName create vm-297-disk-1 --size 16G --object-size 16K
I don't know how to import it so that it becomes available inside an LXC container at some mount point.
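My best guess so far, to frame the question (a sketch, assuming the pool is also configured as a PVE storage named poolName; the mount path is made up):

pct rescan                                             # should pick up vm-297-disk-1 as an unused volume
pct set 297 -mp0 poolName:vm-297-disk-1,mp=/mnt/data   # attach it as a mount point

but I am not sure this is the intended way.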
Hi,
I have a privileged container that needs to access devices on the host machine.
I added the following to its LXC config:
lxc.cgroup.devices.allow: c 196:* rwm
With kernel 5.4.x and PVE 6 this was enough to access the devices inside the LXC container after executing:
mknod /dev/dahdi/ctl c 196 0
However after upgrading...
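If the upgrade in question is to PVE 7, it defaults to a pure cgroup v2 layout, where the cgroup v1 key above is silently ignored. A sketch of what I expect the cgroup2 equivalent to look like (the bind-mount line is my assumption about exposing /dev/dahdi, not something confirmed):

lxc.cgroup2.devices.allow: c 196:* rwm
lxc.mount.entry: /dev/dahdi dev/dahdi none bind,optional,create=dir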
Confirmed: I had the same problem and resolved it with your recommendation.
The issue is that even a fresh install creates those lines, effectively leaving the machine with no network access.
I've read the manual many times, but it is easy to miss minor details when doing anything for the first time. That's why I am trying to plan ahead and also to collect the community's advice. Thank you all!
Thank you for the detailed answer. My general plan is exactly the same.
Happy to say that your plan matches my outline 1:1, plus all the valuable advice on not using any additional options, and further details.
This was something not clear to me and I felt it is important.
In regard to removing OSDs...
Re-reading your answer, I believe something was not very clear. By adding and removing OSDs I mean adding new, additional drives, not literally extracting one drive from chassis X and inserting it into chassis Y, and so forth.
Thanks for the opinion, but this was chosen deliberately, for reasons that are good enough for us. Other than that, I agree with the SPOF concern. Data transfer is not an issue, especially since we are not in a hurry and can extract one OSD at a time. Let's focus on the topic if you wish to contribute to the...
Yes to all questions and comments. This is the chosen and approved design, and we have to deal with it as it already exists. Do you have experience with adding and removing OSDs, using "noout" and triggering a rebalance?
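For reference, my understanding of the usual flow, so we are talking about the same thing (standard Ceph/PVE commands; the device path and OSD id are placeholders):

ceph osd set noout            # keep CRUSH from marking OSDs out during the work
pveceph osd create /dev/sdX   # add the new drive as an OSD
ceph osd unset noout          # allow rebalancing to begin
ceph -s                       # watch recovery/rebalance progress

and for removal: ceph osd out <id>, wait for the rebalance to finish, then pveceph osd destroy <id>.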
Hello,
We operate a cluster with 10+ nodes, one of which serves as a SAN running Ceph. It has 20+ disks (OSDs) inside, with one monitor and one manager installed on the SAN.
The rest of the nodes are data nodes: 4-bay chassis with 2 disks installed in ZFS RAID1 (mirror) mode.
We have scheduled...
That was one of the first things I checked. The scsi-*** IDs are the same during install and at boot time.
It was as simple as the /etc/zfs/zpool.cache file missing. Once I created it and updated the initramfs, the system started booting properly and mounting rpool.
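For anyone hitting the same thing, the fix boils down to (pool name rpool as in my case):

zpool set cachefile=/etc/zfs/zpool.cache rpool   # (re)create the cache file
update-initramfs -u -k all                       # rebuild the initramfs so it includes the cache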
I think this is some kind of bug in...