How to run a QDevice in Docker

baron164

Jan 16, 2024
I'm looking into running a QDevice for my Proxmox cluster within Docker on my Synology NAS. I've never used Docker before, so this is still new to me. I've downloaded a few images from the registry, and while I can get at least one of them to build and run, I can't seem to connect to it. I'm wondering if anyone here has experience running a QDevice in Docker and can point me in the right direction.

I've had the most success running this image: https://hub.docker.com/r/bcleonard/proxmox-qdevice/
This is the YAML config I've been using.
YAML:
version: "3.5"
services:
  qdevice:
    container_name: qdevice
    image: 'bcleonard/proxmox-qdevice'
    build:
      context: ./context
      dockerfile: ./Dockerfile
    hostname: qdevice
    restart: unless-stopped
    volumes:
      - /volume1/docker/qdevice/qnetd/corosync-data:/etc/corosync
      - /sys/fs/cgroup:/sys/fs/cgroup:ro
    ports:
      - '2222:22'
      - '5403-5412:5403-5412/udp'
    networks:
      - qdevice-net
    environment:
      - NEW_ROOT_PASSWORD=Password1
networks:
  qdevice-net:
    name: qdevice-net
    driver: bridge

I can reach port 2222, but I can't actually connect to it over SSH. No data is being written to the corosync-data folder either, so while the container is running, I don't think it's doing anything. Any help would be greatly appreciated.
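
A few checks that may help narrow this down (a sketch assuming the container name qdevice from the compose file above, with <nas-ip> standing in for the Synology's address):

Code:
# Is the container running, and what is it logging?
docker ps --filter name=qdevice
docker logs qdevice

# Does qnetd answer its status query inside the container?
docker exec qdevice corosync-qnetd-tool -s

# Can the remapped SSH port be reached from a PVE node?
ssh -p 2222 root@<nas-ip>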
 
qnetd is 5403 TCP (not UDP).

(NB: I have no idea what's inside a Docker container built by a stranger, and I would not want to run one that way myself.)
 
I checked that, and from what I was reading, TCP is supposedly implied and doesn't need to be specifically called out in the config file. So the way it's written "should" allow for both TCP and UDP.
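
For what it's worth, my reading of Compose is the opposite: leaving the protocol off defaults to TCP, but once you write /udp, only UDP is published for that mapping. Since qnetd listens on 5403/TCP, a mapping along these lines (a sketch reusing the ports from the compose file above) may be what's needed:

YAML:
    ports:
      - '2222:22'                    # SSH, remapped
      - '5403:5403/tcp'              # corosync-qnetd
      - '5403-5412:5403-5412/udp'    # original UDP range, kept as-is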

This is my first time messing with Docker, and I haven't figured out how to build something like this entirely from scratch yet, though that looks like how I'll need to proceed. My other option was to install the VMM component on the Synology and run a full Proxmox VM in order to add a third host. I figured running just a QDevice in Docker would be the more streamlined approach, so I'm giving it a try.
 
I've had the most success running this image: https://hub.docker.com/r/bcleonard/proxmox-qdevice/
I wonder why you haven't used the author-provided docker-compose.yml.


This is my first time messing with Docker, and I haven't figured out how to build something like this entirely from scratch yet.
From the looks of it, you technically built the image yourself. The Dockerfile looks OK, though a little long and unnecessarily multi-stepped for my taste.
 
@baron164 , did you ever get this to work without using port 22? I'm not getting my Proxmox node to connect to it on port 2222, even though I can connect to it manually.
 
Have you tried configuring the setup on the PVE side with ~/.ssh/config, so that you have a default configuration for the QDevice host?
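
For example, a minimal ~/.ssh/config sketch on the PVE node (with <nas-ip> as a placeholder for the Synology's address), so that any SSH connection to the QDevice host picks up the remapped port automatically:

Code:
# /root/.ssh/config on each PVE node -- <nas-ip> is a placeholder
Host <nas-ip>
    Port 2222
    User root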
 
FYI, I've had immediate success implementing this QDevice Docker container:
https://raymii.org/s/tutorials/Proxmox_VE_7_Corosync_QDevice_in_Docker.html

I liked the macvlan approach, and (although unnamed within the article) it seems the author is using the same NAS device brand that I have here. I made a few modifications to the Dockerfile (adding some packages I find useful: ss, ping, etc.):

Code:
FROM debian:bullseye
# Keep debconf non-interactive during the build
RUN echo 'debconf debconf/frontend select teletype' | debconf-set-selections
RUN apt-get update && apt-get dist-upgrade -qy && apt-get install -qy --no-install-recommends systemd systemd-sysv corosync-qnetd openssh-server iproute2 iputils-ping net-tools mc wget nano && apt-get clean && rm -rf /var/lib/apt/lists/* /var/log/alternatives.log /var/log/apt/history.log /var/log/apt/term.log /var/log/dpkg.log
# Allow root logins over SSH (the PVE-side qdevice setup connects as root)
RUN sed -i 's/#PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config
# Change password to something secure
RUN echo 'root:password_CHANGE_IT' | chpasswd
RUN chmod 1777 /var/run
RUN chown -R coroqnetd:coroqnetd /etc/corosync/
# Mask mount units that cannot work inside a container
RUN systemctl mask -- dev-hugepages.mount sys-fs-fuse-connections.mount
RUN rm -f /etc/machine-id /var/lib/dbus/machine-id
# Second stage: copy the whole filesystem to squash the layers above into one
FROM debian:bullseye
COPY --from=0 / /
ENV container docker
# systemd expects SIGRTMIN+3 for a clean shutdown
STOPSIGNAL SIGRTMIN+3
VOLUME [ "/sys/fs/cgroup", "/run", "/run/lock", "/tmp" ]
# Healthy when qnetd answers its status query
HEALTHCHECK CMD corosync-qnetd-tool -s
EXPOSE 5403
# Boot systemd as PID 1 so corosync-qnetd and sshd run as services
CMD [ "/sbin/init" ]

I'll next try to build the image using debian bookworm sources.

Also, I'm not building the image on the Syno. I've deployed a Docker-enabled Debian VM on which I do such things, then export the image and import it on the Syno -- note that the Docker "macvlan" network needs to exist before you place your container in it, obviously.
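
A sketch of that workflow, with the image tag, subnet, gateway, and parent interface all as placeholder values:

Code:
# On the build VM: build and export the image
docker build -t qnetd:bullseye .
docker save qnetd:bullseye -o qnetd-bullseye.tar

# On the Syno: import the image, then create the macvlan network
# before attaching the container to it
docker load -i qnetd-bullseye.tar
docker network create -d macvlan \
    --subnet=192.0.2.0/24 --gateway=192.0.2.1 \
    -o parent=eth0 qnetd-macvlan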

Cheers,
m.
 
I agree with you on this, although here it is deployed within an OOB network solely reachable by the PVE nodes, with absolutely no breakout outside of the given VLAN itself. Hence whoever logged onto that container would most probably have compromised the PVE nodes and/or the Syno device first, and in all honesty I'd wonder exactly what they'd be after on a qdevice container...

Furthermore, from the originally discussed container image in this thread, https://github.com/bcleonard/proxmox-qdevice/blob/master/Dockerfile:

Code:
RUN sed -i 's/#PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config
COPY set_root_password.sh /usr/local/bin/set_root_password.sh
RUN chown root.root /usr/local/bin/set_root_password.sh \
    && chmod 755 /usr/local/bin/set_root_password.sh

Though yes, please share with us what you'd do instead.
 
I agree with you on this, although here it is deployed within an OOB network solely reachable by the PVE nodes, with absolutely no breakout outside of the given VLAN itself.

No worries, but I was just told (in this forum post) that "anyone can read this thread", so we had better not tell these things to everyone. :D

Hence whoever logged onto that container would most probably have compromised the PVE nodes first, and in all honesty I'd wonder what they'd be after on a qdevice container...

They would use it as a jump host, that's always handy.

BTW, I also agree that with the current default setup of PVE, it might as well use telnet, so you are not doing anything worse. But lots of people run QDs externally, e.g. in a public cloud. That is also why I literally brought it up to change the docs.
 
