Search results

  1. VLAN in guest not working

    My goal is to avoid having many bridges, because if I miss one bridge on one host, some VMs may not work when migrated to that host. One bridge for all VLANs would be preferable.
  2. VLAN in guest not working

    Hi, I am sure this has been asked many times before. I believe I configured it as in the manuals, but it does not work. We have VLAN 201 in trunk mode on the switch and I want to use it in a guest via a non-tagged bridge. PVE network: bridge over the bond with no tags and a separate VLAN for the... (See the VLAN-aware bridge sketch after this list.)
  3. RBD with custom object size

    What happens when I reboot the node or migrate the container to another host? Isn't there a way to import manually created vm-XXX-disk-N volumes as if they were created through the GUI?
  4. RBD with custom object size

    Hello, I need to use RBDs with a custom object size, different from the default of 22 (4MB). While it is possible to create one on the command line with: rbd -p poolName create vm-297-disk-1 --size 16G --object-size 16K, I don't know how to import it to make it available in LXC as a mount point. (See the sketch after this list.)
  5. IOMMU groups - ACS patch

    Hi, I have enabled the IOMMU: intel_iommu=on iommu=pt pcie_acs_override=downstream,multifunction. I tried with "downstream", "multifunction", and "downstream,multifunction", and added the necessary module options: echo "options vfio_iommu_type1 allow_unsafe_interrupts=1" >... (See the IOMMU group listing after this list.)
  6. Operation not permitted in privileged container

    OMG thank you very much! That did it. I spent hours today digging and did not find it :-(
  7. Operation not permitted in privileged container

    Hi, I have a privileged container that needs to access devices on the host machine. I added the following in lxc.conf: lxc.cgroup.devices.allow: c 196:* rwm. This was enough on kernel 5.4.x and PVE 6 to access devices in LXC when executing: mknod /dev/dahdi/ctl c 196 0. However, after upgrading... (See the cgroup2 sketch after this list.)
  8. Proxmox Mail Gateway 7.0 released

    The upgrade in my LXC container failed: PostgreSQL got messed up and all databases disappeared. Also, the 7.0 LXC template is not found; only 6.4 is available at the moment.
  9. [SOLVED] Upgrade to Proxmox 7 - Bond (LACP) Interface not working anymore

    I don't have ifupdown2 installed; perhaps that is the difference and the reason.
  10. [SOLVED] Upgrade to Proxmox 7 - Bond (LACP) Interface not working anymore

    Confirmed: I had the same problem and resolved it with your recommendation. The issue is that even a fresh install creates those lines, effectively leaving the machine with no network access.
  11. Scheduled downtime of large CEPH node

    I've read the manual many times, but it is easy to miss some minor details when doing anything for the first time. That's why I am trying to plan ahead and also to collect community advice. Thank you all!
  12. Scheduled downtime of large CEPH node

    Thank you for the detailed answer. My general plan is exactly the same. Happy to say that your plan matches my outline 1:1, plus all the valuable advice on not using any additional options and more details. This was something that was not clear to me, and I felt it was important. In regard to removing OSDs...
  13. Scheduled downtime of large CEPH node

    Re-reading your answer, I believe something was not very clear. By adding and removing OSDs I mean new additional drives, not literally extracting one drive from chassisX and inserting it into chassisY and so forth.
  14. Scheduled downtime of large CEPH node

    Thanks for the opinion, but this was chosen deliberately, for reasons that are good enough for us. Other than that, I agree with the SPOF concern. Data transfer is not an issue, especially since we are not in a hurry and can extract one OSD at a time. Let's focus on the topic if you wish to contribute to the...
  15. Scheduled downtime of large CEPH node

    Yes to all questions and comments. This is the chosen and approved design, and we have to deal with it as it already exists. Do you have experience with adding and removing OSDs, using "noout" and triggering a rebalance? (See the noout sketch after this list.)
  16. Scheduled downtime of large CEPH node

    Hello, we operate a cluster with 10+ nodes, where one of them serves as a SAN with Ceph. It has 20+ disks (OSDs) inside, with one monitor and one manager installed on the SAN. The rest of the nodes are data nodes with 4-bay chassis and 2 installed disks in ZFS RAID1 mode. We have scheduled...
  17. [SOLVED] PMG 6.3 --> 6.4: Upgrade performed properly?

    During the apt upgrade there are prompts about the conf files for clamav and freshclam. Do you suggest installing the package-provided versions or keeping the custom ones? (See the conffile sketch after this list.)
  18. Installation of 6.4 fails on ZFS root

    That was one of the first things that I checked. The scsi-*** ids are the same during install and at boot time. It is as simple as the /etc/zfs/zpool.cache file being missing. Once I created it and updated the initramfs, the system started booting properly and mounting rpool. I think this is some kind of bug in... (See the zpool.cache sketch below.)
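
For results 1-2, a minimal sketch of the single-bridge approach, assuming a PVE VLAN-aware bridge on top of an LACP bond; the interface names (eno1, eno2, bond0, vmbr0) and the VID range are illustrative:

    # /etc/network/interfaces (fragment): one VLAN-aware bridge carries all VLANs
    auto bond0
    iface bond0 inet manual
        bond-slaves eno1 eno2
        bond-miimon 100
        bond-mode 802.3ad

    auto vmbr0
    iface vmbr0 inet manual
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094

The guest then carries its tag on the virtual NIC itself (e.g. tag=201 on net0 in the VM config), so no per-VLAN bridge is needed on any host.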
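
For result 4, a sketch of one way to make the manually created RBD image usable as an LXC mount point; the pct set call assumes the PVE storage ID matches the pool name (here poolName), that container 297 exists, and /mnt/data is an illustrative target path:

    # create the image with a non-default object size (the default is 4M, order 22)
    rbd -p poolName create vm-297-disk-1 --size 16G --object-size 16K

    # reference the pre-existing volume as a mount point of container 297
    pct set 297 -mp0 poolName:vm-297-disk-1,mp=/mnt/data

Because the volume follows the vm-<VMID>-disk-<N> naming scheme, PVE should treat it like a GUI-created disk afterwards, but this is an assumption, not a documented import path.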
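
For result 5, a common way to check whether the ACS override actually split the groups is to list every PCI device with its IOMMU group; a small shell sketch:

    # print each PCI device together with its IOMMU group number
    for d in /sys/kernel/iommu_groups/*/devices/*; do
        g=${d#/sys/kernel/iommu_groups/}; g=${g%%/*}
        printf 'group %s: ' "$g"
        lspci -nns "${d##*/}"
    done

If pcie_acs_override took effect, the devices intended for passthrough should sit in their own groups instead of sharing one large group.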
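
For results 6-7, the fix is presumably the cgroup v2 spelling of the allow rule, since PVE 7 switched to a unified cgroup hierarchy; a hedged config sketch (196 is the dahdi major number from the original post):

    # /etc/pve/lxc/<CTID>.conf
    # cgroup v1 spelling that worked on PVE 6 / kernel 5.4, ignored under cgroup v2:
    # lxc.cgroup.devices.allow: c 196:* rwm
    # cgroup v2 spelling for PVE 7:
    lxc.cgroup2.devices.allow: c 196:* rwm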
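
For results 11-16, the flag mentioned in result 15 is Ceph's noout cluster flag; setting it before planned downtime stops the cluster from marking OSDs out and triggering a rebalance while the node is down:

    # before taking the node down
    ceph osd set noout
    # ...maintenance / reboot...
    # afterwards, allow out-marking and rebalancing again
    ceph osd unset noout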
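
For result 17, the conffile prompts can also be answered non-interactively via dpkg options; which one you want depends on whether the local clamav/freshclam changes should survive:

    # keep the locally modified config files
    apt full-upgrade -o Dpkg::Options::="--force-confold"
    # or take the maintainer's new versions instead
    apt full-upgrade -o Dpkg::Options::="--force-confnew"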
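
For result 18, the fix described in the snippet maps to two commands; rpool is the PVE root pool name from the post:

    # regenerate the missing cache file for the root pool
    zpool set cachefile=/etc/zfs/zpool.cache rpool
    # rebuild the initramfs so the cache file is included at boot
    update-initramfs -u -k all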
