Search results

  1. Passthrough USB Bluetooth adapter to LXC

    OK - thank you for the very fast answer.
  2. Passthrough USB Bluetooth adapter to LXC

    Hi, I'm trying to pass a Baseus BA04 USB 2.0 Bluetooth 5.1 adapter:
      root@pve1:~# lsusb | grep -i bluetooth
      Bus 002 Device 003: ID 0a12:0001 Cambridge Silicon Radio, Ltd Bluetooth Dongle (HCI mode)
      root@pve1:~# ls -l /dev/bus/usb/002/003
      crw-rw-r-- 1 root root 189, 130 Sep 30 14:43...
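
    For reference, a minimal sketch of the container config such a USB passthrough usually needs - the container ID (101) is an assumption, while the major number 189 and the device path come from the ls -l output above:

      # /etc/pve/lxc/101.conf -- container ID is an assumption
      # allow the USB character device (major 189, per the crw-rw-r-- listing)
      lxc.cgroup2.devices.allow: c 189:* rwm
      # bind-mount the host device node into the container
      lxc.mount.entry: /dev/bus/usb/002/003 dev/bus/usb/002/003 none bind,optional,create=file

    Keep in mind the bus/device numbers (002/003) can change after a replug or reboot.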
  3. Pass hard disk to container - startup problem

    Tried, but still no luck:
      lxc_map_ids: 3672 newgidmap failed to write mapping "newgidmap: gid range [1000-1001) -> [1000-1001) not allowed": newgidmap 464103 0 100000 1000 1000 1000 1 1001 101001 64535
      lxc_spawn: 1791 Failed to set up id mapping.
      __lxc_start: 2074 Failed to spawn container "103"...
  4. Pass hard disk to container - startup problem

    Hello, I'm trying to pass two mounted hard disks to a container (103). When I start it, I get:
      lxc_map_ids: 3672 newuidmap failed to write mapping "newuidmap: uid range [1000-1001) -> [1000-1001) not allowed": newuidmap 458165 0 100000 1000 1000 1000 1 1001 101001 64530
      lxc_spawn: 1791 Failed to set...
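
    The "not allowed" part of the error is usually the key: the idmap the container requests is echoed back in the message, so the container config itself looks fine, but the host does not permit root to delegate uid/gid 1000. A minimal sketch of the usual fix, assuming the goal is to map uid/gid 1000 inside the container to 1000 on the host:

      # append to /etc/subuid AND /etc/subgid on the host, so root may map id 1000
      # (the id comes from the "[1000-1001) not allowed" range in the error)
      root:1000:1

    After that, starting the container again should get past lxc_map_ids.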
  5. Proxmox three node cluster - ceph - got timeout

    All software reinstalled and works fine; thanks for the support.
  6. Proxmox three node cluster - ceph - got timeout

    I tried to do it as stated: https://forum.proxmox.com/threads/reinstall-ceph-on-proxmox-6.57691/page-2#post-300278 but no luck. It's a clean cluster; I will reinstall Proxmox on all 3 nodes.
  7. Proxmox three node cluster - ceph - got timeout

    Is there any "manual" on how to uninstall Ceph from Proxmox? (I will try to search by myself anyway)
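
    For what it's worth, a minimal sketch of the usual teardown on PVE 6+, assuming the Ceph data can be discarded (pveceph purge destroys the node's Ceph configuration):

      # on each node: stop the Ceph services, then purge the configuration
      systemctl stop ceph-mon.target ceph-mgr.target ceph-osd.target
      pveceph purge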
  8. Proxmox three node cluster - ceph - got timeout

    I also found this (I don't know if it causes the problem) in /var/log/ceph/ceph-mon.pve1.log:
      2022-01-26T17:28:38.760+0100 7fd0145f2580 -1 monitor data directory at '/var/lib/ceph/mon/ceph-pve1' does not exist: have you run 'mkfs'?
      2022-01-26T17:28:49.005+0100 7f7ae6332580 0 set uid:gid to...
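
    That "does not exist: have you run 'mkfs'?" line suggests the monitor's data directory was never created (or was wiped). A hedged sketch of recreating the monitor with the standard PVE tooling, assuming the rest of the cluster is still reachable:

      # remove the broken monitor entry, then create it again on this node
      pveceph mon destroy pve1
      pveceph mon create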
  9. Proxmox three node cluster - ceph - got timeout

    It was dead:
      ceph-mon@pve1.service - Ceph cluster monitor daemon
           Loaded: loaded (/lib/systemd/system/ceph-mon@.service; disabled; vendor preset: enabled)
          Drop-In: /usr/lib/systemd/system/ceph-mon@.service.d
                   └─ceph-after-pve-cluster.conf
           Active: inactive (dead)
    I started...
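
    Since the unit also shows "disabled", enabling it keeps it running across reboots (the unit name comes from the status output above):

      systemctl enable --now ceph-mon@pve1.service
      systemctl status ceph-mon@pve1.service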
  10. Proxmox three node cluster - ceph - got timeout

    Hello, I have a three-node Proxmox cluster:
      - OptiPlex 7020, Xeon E3-1265L v3, 16GB
      - 120GB SSD for the OS, 512GB NVMe for Ceph
      - 1GbE network for "external" access, dual 10GbE network (for the cluster)
    The network is connected as stated here: https://pve.proxmox.com/wiki/Full_Mesh_Network_for_Ceph_Server as routed...
  11. 10GbE cluster network(s) - 3 nodes without switch

    I did this as in the manual; I will post another thread in the appropriate forum. Thank you for the support.
  12. 10GbE cluster network(s) - 3 nodes without switch

    I checked all network settings, the nodes were rebooted, and ping works between all machines. I know I'm messing with a thread about networking, but the problem is that I cannot run Ceph. There were no errors during installation; I did everything as in the doc, but Ceph gives me a timeout error; this is my...
  13. 10GbE cluster network(s) - 3 nodes without switch

    Here you are - pve1:
      auto lo
      iface lo inet loopback

      iface eno1 inet manual
          mtu 9000

      auto enp1s0f0
      iface enp1s0f0 inet static
          address 192.168.20.10/24
          mtu 9000
          up ip route add 192.168.20.30/32 dev enp1s0f0
          down ip route del 192.168.20.30/32

      auto enp1s0f1...
  14. 10GbE cluster network(s) - 3 nodes without switch

    OK - second option: https://pve.proxmox.com/wiki/Full_Mesh_Network_for_Ceph_Server - the routed setup works great. I have a bridge on the 1GbE card for external access and dual 10GbE for the mesh cluster on each node. All cards have MTU set to 9000. But Ceph gives me an error on each node:
      root@pve2:~# pveceph status...
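
    A few hedged first checks for a pveceph timeout on a routed mesh (the 192.168.20.x addresses come from the interfaces config above; adjust per node):

      # do the monitor/network entries in ceph.conf match the mesh subnet?
      grep -E 'network|mon' /etc/pve/ceph.conf
      # can this node reach its peers over the mesh?
      ping -c1 192.168.20.10
      # does the cluster answer at all?
      ceph -s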
  15. 10GbE cluster network(s) - 3 nodes without switch

    Hi, I made a fresh installation on 3 cluster nodes with the network as follows. Is there any option to set up a cluster? If not, is there anything I can do to set up a cluster? Now each node has hosts entries pointing to the other two nodes, and at the OS level communication is working properly, but setting up the cluster seems...
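
    A minimal sketch of cluster creation with the standard tooling, assuming the nodes already resolve and reach each other (the cluster name and address are placeholders):

      # on the first node
      pvecm create mycluster
      # on each of the other nodes, pointing at the first node
      pvecm add 192.168.20.10
      # verify quorum
      pvecm status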
  16. Dual NIC passthrough

    Thanks for your answer, but unfortunately when I pass 04:00, pfSense cannot find any network interfaces... (it should work, since it does when I pass the second port only)
  17. Dual NIC passthrough

    Hello, I have an ASRock J4105M mobo with Proxmox 5.3 and an Intel dual-port network card which I would like to pass through to pfSense.
      04:00.0 Ethernet controller: Intel Corporation 82571EB Gigabit Ethernet Controller (rev ff)
      04:00.1 Ethernet controller: Intel Corporation 82571EB Gigabit Ethernet Controller...
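
    A sketch of how passing the whole device is usually written (VM ID 100 is an assumption; omitting the function number is meant to hand over all functions, i.e. both ports - though the reply above reports pfSense still found no interfaces that way):

      # /etc/pve/qemu-server/100.conf -- VM ID is an assumption
      hostpci0: 04:00
      # or per function:
      # hostpci0: 04:00.0
      # hostpci1: 04:00.1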
  18. Is this right hardware for passthrough?

    I bought an ASRock J4105M (micro ATX) with 3 PCIe slots (2 x x1 and 1 x x16, logically x1), and passthrough for the network card and a Dell PERC H310 RAID card works properly. It acts as a NAS/router/home automation combo ;-) It also works with a PCIe-to-NVMe adapter as the system disk.
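
    For completeness, the usual prerequisite for any of this passthrough on an Intel board like the J4105M - a hedged sketch assuming a GRUB-based install:

      # /etc/default/grub
      GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"
      # then: update-grub && reboot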
  19. OpenMediaVault installation in LXC with attached HW block device

    Inside the container there are /dev/sdb and /dev/sdc. Maybe the problem is that my drives are formatted as whole disks (/dev/sdb and /dev/sdc) and do not contain any partitions. I would like not to reformat them, since they contain a large amount of data.
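
    One hedged alternative, if OMV inside the container cannot use the raw disks: mount the whole-disk filesystems on the host and pass the paths through as mount points (the vmid and mount paths are placeholders - a filesystem written to the whole disk mounts like any partition):

      # on the host
      mount /dev/sdb /mnt/disk1
      mount /dev/sdc /mnt/disk2
      # hand the mounted paths to container <vmid> as bind mount points
      pct set <vmid> -mp0 /mnt/disk1,mp=/srv/disk1
      pct set <vmid> -mp1 /mnt/disk2,mp=/srv/disk2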
