Search results

  1. Configure VLAN on VMs without using bridges

    A common scenario when you need to bind a VM to a particular VLAN is to create a bridge on the host node and use that bridge interface in the Network section of the VM. Is there any way to avoid creating bridge interfaces at all? The Network options page only allows bridge-type interfaces to be chosen. So...
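    For context, a minimal sketch of the conventional approach the question is trying to avoid - a VLAN-aware bridge on the host plus a tag on the VM's virtual NIC. The interface name eno1, bridge name vmbr0, VLAN tag 100 and MAC address are placeholders, not taken from the thread:

        # /etc/network/interfaces (host)
        auto vmbr0
        iface vmbr0 inet manual
            bridge-ports eno1
            bridge-stp off
            bridge-fd 0
            bridge-vlan-aware yes
            bridge-vids 2-4094

        # VM config, e.g. /etc/pve/qemu-server/100.conf
        net0: virtio=DE:AD:BE:EF:00:01,bridge=vmbr0,tag=100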
  2. Large %sy and %si values in top's output

    Running LXC on 7.1, using ceph@ssd as RBD. The container runs RabbitMQ at 5-10K rps. What could be wrong with such large system and interrupt values? top - 17:01:09 up 5 days, 15:03, 1 user, load average: 77.19, 93.43, 96.93 Tasks: 1755 total, 28 running, 1727 sleeping, 0 stopped, 0 zombie...
  3. [SOLVED] CVE-2022-0185

    I clearly understand that; I just wanted to give a reminder about the issue.
  4. Support of lazytime for mount points

    Thank you a lot. I will check and report any issues.
  5. [SOLVED] CVE-2022-0185

    Are there any solutions other than setting kernel.unprivileged_userns_clone to 0? Or has a fixed kernel version been released?
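    For reference, the mitigation mentioned above would look roughly like this; the file name under /etc/sysctl.d/ is an arbitrary choice:

        # apply immediately
        sysctl -w kernel.unprivileged_userns_clone=0

        # persist across reboots
        echo 'kernel.unprivileged_userns_clone = 0' > /etc/sysctl.d/99-disable-unpriv-userns.conf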
  6. Support of lazytime for mount points

    How can I enable the lazytime mount option for LXC containers? The current mount options show only noatime and nosuid when checking the Mount point. P.S. Perhaps it can be done by directly editing the .conf file under /etc/pve/lxc?
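    A hedged sketch of what such direct editing could look like, assuming a PVE version that accepts lazytime as a mountoptions value; the container ID, storage name and size below are placeholders, and multiple options are separated by ';':

        # /etc/pve/lxc/101.conf
        rootfs: local-lvm:vm-101-disk-0,size=8G,mountoptions=lazytime;noatime

        # or via the CLI instead of editing the file by hand
        pct set 101 --rootfs 'local-lvm:vm-101-disk-0,size=8G,mountoptions=lazytime;noatime'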
  7. [SOLVED] Create OSD using GPT partitions

    Looks like I solved it. The key problem was placing the correct data into the /etc/pve/priv/ceph.client.bootstrap-osd.keyring and /var/lib/ceph/bootstrap-osd/ceph.keyring files. So you just need to get the auth data for client.bootstrap-osd (the real key is replaced): # ceph auth ls // skipped and screened...
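    A hedged sketch of that keyring step, assuming the bootstrap-osd key already exists in the cluster; whether both copies are really needed depends on the setup:

        # export the bootstrap-osd credentials into the files mentioned above
        ceph auth get client.bootstrap-osd -o /var/lib/ceph/bootstrap-osd/ceph.keyring
        cp /var/lib/ceph/bootstrap-osd/ceph.keyring /etc/pve/priv/ceph.client.bootstrap-osd.keyring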
  8. [SOLVED] Create OSD using GPT partitions

    Update: looks like I need to carefully read the Cephx Config Reference.
  9. [SOLVED] Create OSD using GPT partitions

    Hi, I'm trying to create BlueStore-backed OSDs using partitions rather than whole disks. Assuming that /dev/sda4 will be the future OSD (placed on an HDD), I created a separate partition on an SSD for DB/WAL. First try: # pveceph osd create /dev/sda4 -db_dev /dev/sdc1 -db_size 54 unable to get device info...
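    As a possible workaround (not necessarily what was used in the thread), ceph-volume can usually consume partitions directly where the pveceph wrapper refuses them; the device names below are taken from the post:

        # sketch: BlueStore OSD on a partition, DB/WAL on a separate SSD partition
        ceph-volume lvm create --bluestore --data /dev/sda4 --block.db /dev/sdc1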
  10. Rootless Docker inside unprivileged LXC container

    Hi, has anybody succeeded in running rootless Docker (as an ordinary user, not root) inside an unprivileged LXC container? I followed the official guide: https://docs.docker.com/engine/security/rootless/. Installation failed with the following message: dockerd-rootless.sh[355]: [rootlesskit:parent]...
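    Not a confirmed fix for the error above, but a commonly checked prerequisite is that the container has nesting and keyctl enabled (the container ID is a placeholder):

        # in /etc/pve/lxc/<CTID>.conf
        features: nesting=1,keyctl=1

        # or equivalently from the host shell
        pct set <CTID> --features nesting=1,keyctl=1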
  11. High latency on recently added SSD OSDs

    Hi, I added several dozen SSDs to the Ceph cluster and found that Proxmox reports an Apply latency of 200 to 500 ms for them. I checked with iostat - zero activity. What could be wrong with them? P.S. No migration is in progress - they are linked to a separate 'root' container.
  12. New install on mSATA

    Try adding nomodeset to the kernel boot settings.
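    On an installed Debian/PVE system booted via GRUB that typically means editing the GRUB defaults and regenerating the config (for the installer itself, the boot entry can be edited once at the boot menu instead):

        # /etc/default/grub
        GRUB_CMDLINE_LINUX_DEFAULT="quiet nomodeset"

        # then apply it
        update-grub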
  13. drbdmanage license change

    Looks like Linbit chose the path Apple once took: no other players. Still, I did manage to build a master + 2 slaves DRBD cluster in production.
  14. Resize VM disk on Ceph

    Hi, some notes: 1. Resizing from the Proxmox UI failed with the message 'VM 102 qmp command failed - VM 102 qmp command 'block_resize' failed - Could not resize: Invalid argument'. 2. I successfully resized the image on Ceph with 'qemu-img resize -f rbd rbd:rbd/vm-102-disk-1 48G'. 3. But the Proxmox UI and VM...
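    A hedged follow-up to the notes above: after resizing the RBD image outside of Proxmox, the size recorded in the VM config can usually be refreshed with qm rescan, or the whole operation can be done through qm resize in the first place (the disk name scsi0 is a placeholder, not from the post):

        # refresh disk sizes recorded in the VM config after an out-of-band resize
        qm rescan --vmid 102

        # or do the resize through Proxmox in one step
        qm resize 102 scsi0 48G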
  15. [SOLVED] all vms down and lvm-thin not backupable

    Please paste the output of cat /proc/mdstat.
  16. Ceph pool may be deleted easily in UI

    https://bugzilla.proxmox.com/show_bug.cgi?id=1043
  17. Ceph pool may be deleted easily in UI

    Hi, I found a simple way to render all CTs/VMs unusable: simply delete a Ceph pool via Ceph->Pools->Remove. There are no warnings or locks, even if the pool is in use. Running pve-manager/4.2-15/6669ad2c (running kernel: 4.4.10-1-pve). Regards, Alex
  18. [SOLVED] Journal was not prepared with ceph-disk

    For those interested in creating the OSD journal on a separate partition (not a whole disk), here are the steps (assume sda3 and sdb3 are 5 GB partitions for the journal, and sdc and sdd are the disks for OSD data): 1. Create a partition of the correct size. If using fdisk, the partition's size should be 10483712 sectors...
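    Step 1 can also be done with sgdisk instead of fdisk; a sketch, assuming partition 3 on /dev/sda and the usual Ceph journal partition type GUID (worth double-checking against the Ceph docs for your release):

        # create a 5 GB partition and mark it with the Ceph journal type code
        sgdisk --new=3:0:+5G --typecode=3:45b0969e-9b03-4f30-b4c6-b4b80ceff106 /dev/sda
        partprobe /dev/sda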
  19. [SOLVED] Journal was not prepared with ceph-disk

    Very strange/bad. Even running ceph-disk prepare --fs-type xfs --cluster ceph --cluster-uuid 908ceb45-91b6-4c31-8ede-00acab17c9ef --journal-dev /dev/sdc /dev/sda3 deletes /dev/sda3 :(
  20. [SOLVED] Journal was not prepared with ceph-disk

    Hi, I use Proxmox 4.2 and wonder if things are ok: # pveceph createosd /dev/sdc -journal_dev /dev/sda3 create OSD on /dev/sdc (xfs) using device '/dev/sda3' for journal Caution: invalid backup GPT header, but valid main header; regenerating backup header from main header...
