Disk partition using cloud-init config

nbanik
Apr 22, 2024
Hello Experts,

I'm using cloud-init config to partition the disk like below:

YAML:
resize_rootfs: false

disk_setup:
  /dev/vda:
    table_type: 'mbr'
    layout:
      - 25
      - 50
      - 25
    overwrite: true
fs_setup:
  - label: root_fs
    filesystem: 'ext4'
    device: /dev/vda
    partition: vda1
    overwrite: true
  - label: home_disk
    filesystem: 'xfs'
    device: /dev/vda
    partition: vda2
    overwrite: true
  - label: var_disk
    filesystem: 'xfs'
    device: /dev/vda
    partition: vda3
    overwrite: true

runcmd:
  - [ partx, --update, /dev/vda ]
  - [ mkfs.xfs, /dev/vda2 ]
  - [ mkfs.xfs, /dev/vda3 ]
  - [ partprobe ]
  - parted /dev/vda set 1 boot on p

mounts:
  - ["/dev/vda1", "/"]
  - ["/dev/vda2", "/home"]
  - ["/dev/vda3", "/var"]

But it is not reflected in the OS.

OS version: Debian GNU/Linux 12.


In the OS I see a different partition layout:

Code:
root@testdebian:~# lsblk
NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
sr0      11:0    1    4M  0 rom
vda     254:0    0   10G  0 disk
├─vda1  254:1    0  9.9G  0 part /
├─vda14 254:14   0    3M  0 part
└─vda15 254:15   0  124M  0 part /boot/efi
vdb     254:16   0    2G  0 disk

root@testdebian:~# df -h
Filesystem      Size  Used Avail Use% Mounted on
udev            965M     0  965M   0% /dev
tmpfs           197M  3.0M  194M   2% /run
/dev/vda1       9.7G  1.2G  8.1G  13% /
tmpfs           984M     0  984M   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
/dev/vda15      124M   12M  113M  10% /boot/efi

/etc/fstab entry:

Code:
root@testdebian:~# cat /etc/fstab
PARTUUID=5da14e57-2f23-4358-b0a5-02975f3ef3c6 / ext4 rw,discard,errors=remount-ro,x-systemd.growfs 0 1
PARTUUID=4966f7ee-71c6-422f-b710-7eb411ef6087 /boot/efi vfat defaults 0 0
/dev/vda1       /       auto    defaults,nofail,x-systemd.requires=cloud-init.service,_netdev,comment=cloudconfig       0       2
/dev/vda2       /home   auto    defaults,nofail,x-systemd.requires=cloud-init.service,_netdev,comment=cloudconfig       0       2
/dev/vda3       /var    auto    defaults,nofail,x-systemd.requires=cloud-init.service,_netdev,comment=cloudconfig       0       2

I want to partition my disk so that the root fs (/) gets 25%, /home 50%, and /var 25%.

How can I achieve this by cloud-init?

Regards.
 
I think there is some confusion in your process.
Cloud-init runs when the OS is already installed, so it can do final touches. You seem to be expecting it to rearrange the boot partition with the root filesystem on it. That's not going to happen.
If you want the root disk to have a specific organization, you will need to create the template/image yourself.

The configuration stanza that you are trying to apply should work fine on an additional, i.e. data, disk.


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
Thanks for clarifying.

I was trying to set up an additional disk (vdb) like below:

YAML:
resize_rootfs: false
disk_setup:
    /dev/vdb:
        table_type: 'mbr'
        layout:
        - 50
        - 50
        overwrite: true

fs_setup:
    - label: DATA1
      filesystem: 'xfs'
      device: '/dev/vdb'
      partition: vdb1
      overwrite: true
    - label: DATA2
      filesystem: 'xfs'
      device: '/dev/vdb'
      partition: vdb2
      overwrite: true

runcmd:
  - [ partx, --update, /dev/vdb ]
  - [ mkfs.xfs, /dev/vdb1 ]
  - [ mkfs.xfs, /dev/vdb2 ]
  - [ partprobe ]

mounts:
    - [vdb1, /data1, auto, "defaults,discard", "0", "0"]
    - [vdb2, /data2, auto, "defaults,discard", "0", "0"]

But it is not working. The OS boot is stuck:
[screenshot: console hang during boot]

Could you please help?

Regards.
 
I don't think it's stuck because of your cloud-init. I saw this last week and ended up downloading a newer official cloud image. The boot problem went away.
You can test this by booting without any custom configuration. Also, in some cases the instance booted after a prolonged wait.

Other than that, if the partitioning still doesn't work, check the cloud-init log line by line after boot.
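For reference, the standard cloud-init log locations on a Debian/Ubuntu guest (paths from cloud-init's defaults):

Code:
less /var/log/cloud-init.log          # module-level debug log
less /var/log/cloud-init-output.log   # stdout/stderr of runcmd and other modules
cloud-init status --long              # overall result after boot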

Good luck


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 

The first boot was OK. I've configured power_state like below:

YAML:
power_state:
    delay: now
    mode: reboot
    message: Rebooting after cloud-init completion
    condition: true

Then it got stuck forever.

Where is the cloud-init log?
 
It's hard to say without analyzing all details and steps. My advice is to establish a baseline:
Take an official image and boot/reboot it with a minimal setup, then keep adding configuration until you break it. You can always loop-mount the resulting disk and analyze the logs offline.
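A hedged sketch of that offline step, assuming a raw disk image (for qcow2 you would attach it with qemu-nbd instead of losetup; the filename here is hypothetical):

Code:
losetup --find --show --partscan debian-disk.raw   # attach image, prints e.g. /dev/loop0
mount /dev/loop0p1 /mnt                            # mount the guest root partition
less /mnt/var/log/cloud-init.log                   # inspect the logs offline
umount /mnt
losetup -d /dev/loop0                              # detach when done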

Good luck


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
Hi bbgeek17,

I found below log in /var/log/cloud-init-output.log

Code:
/var/lib/cloud/instance/scripts/runcmd: 2: partx --update /dev/vdb: not found
/var/lib/cloud/instance/scripts/runcmd: 3: mkfs.ext4 /dev/vdb1: not found
/var/lib/cloud/instance/scripts/runcmd: 4: mkfs.ext4 /dev/vdb2: not found

Runcmd:
Code:
runcmd:
  - [ partx --update /dev/vdb ]
  - [ mkfs.ext4 /dev/vdb1 ]
  - [ mkfs.ext4 /dev/vdb2 ]

But all the commands work inside the VM:

Code:
root@testdebian:/var/log#  partx --update /dev/vdb
root@testdebian:/var/log# mkfs.ext4 /dev/vdb1
mke2fs 1.47.0 (5-Feb-2023)
Discarding device blocks: done                           
Creating filesystem with 262144 4k blocks and 65536 inodes
Filesystem UUID: 6ee33107-5580-45a8-aa7a-cc3d163bdb80
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376

Allocating group tables: done                           
Writing inode tables: done                           
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done

root@testdebian:/var/log# mkfs.ext4 /dev/vdb2
mke2fs 1.47.0 (5-Feb-2023)
Discarding device blocks: done                           
Creating filesystem with 261888 4k blocks and 65536 inodes
Filesystem UUID: 8d65a118-2d43-43ad-a8da-4dbbe7a2cdbf
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376

Allocating group tables: done                           
Writing inode tables: done                           
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: done

root@testdebian:/var/log#

Please help.
 
The commands work once the VM is booted because your interactive shell has PATH set; cloud-init executes them in a very different environment.

We can analyze it as follows:
- "not found" can refer to two things: a) the command, b) the device
- a simple way to exclude one is to run the command on a non-existent device: partx --update /dev/sdxyz
- the resulting error is: partx: stat of /dev/sdxyz failed: No such file or directory
- so now we are pretty sure that "not found" refers to the actual "partx"
- the most likely cause is that PATH is not set
- the solution is to use absolute paths in all your command executions; see the sketch below
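For example, a minimal sketch of those commands with absolute paths, keeping each argument as its own list element (the list-vs-string distinction comes up again later in this thread):

Code:
runcmd:
  - [ /usr/bin/partx, --update, /dev/vdb ]
  - [ /usr/sbin/mkfs.ext4, /dev/vdb1 ]
  - [ /usr/sbin/mkfs.ext4, /dev/vdb2 ]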

Give it a try


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
I tried with full paths and am still getting the error below:
Code:
/var/lib/cloud/instance/scripts/runcmd: 2: /usr/bin/partx --update /dev/vdb: not found
/var/lib/cloud/instance/scripts/runcmd: 3: /usr/sbin/mkfs.ext4 /dev/vdb1: not found
/var/lib/cloud/instance/scripts/runcmd: 4: /usr/sbin/mkfs.ext4 /dev/vdb2: not found

Please help.
 
Start adding debug commands:
- [ which, part ]
- [ which, mkfs.ext4 ]
- [ /usr/bin/lsblk ]

etc


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
Now I tried with the below config:
Code:
runcmd:
  - [ which, partx ]
  - [ which, mkfs.ext4 ]
  - [ /usr/bin/lsblk ]
  - [ /usr/bin/partx --update /dev/sdxyz ]
  - [ /usr/bin/partx --update /dev/vdb ]
  - [ /usr/sbin/mkfs.ext4 /dev/vdb1 ]
  - [ /usr/sbin/mkfs.ext4 /dev/vdb2 ]

Output in /var/log/cloud-init-output.log

Code:
/usr/bin/partx
/usr/sbin/mkfs.ext4
NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
sr0      11:0    1    4M  0 rom
vda     254:0    0   10G  0 disk
├─vda1  254:1    0  9.9G  0 part /
├─vda14 254:14   0    3M  0 part
└─vda15 254:15   0  124M  0 part /boot/efi
vdb     254:16   0    2G  0 disk
├─vdb1  254:17   0    1G  0 part
└─vdb2  254:18   0 1023M  0 part
/var/lib/cloud/instance/scripts/runcmd: 5: /usr/bin/partx --update /dev/sdxyz: not found
/var/lib/cloud/instance/scripts/runcmd: 6: /usr/bin/partx --update /dev/vdb: not found
/var/lib/cloud/instance/scripts/runcmd: 7: /usr/sbin/mkfs.ext4 /dev/vdb1: not found
/var/lib/cloud/instance/scripts/runcmd: 8: /usr/sbin/mkfs.ext4 /dev/vdb2: not found


What is the difference between

- [ which, part ]
and, without the comma,
- [ which part ]
 
One is a List, the other is a String. From the cloud-init example config:
#cloud-config

# run commands
# default: none
# runcmd contains a list of either lists or a string
# each item will be executed in order at rc.local like level with
# output to the console
# - runcmd only runs during the first boot
# - if the item is a list, the items will be properly executed as if
# passed to execve(3) (with the first arg as the command).
# - if the item is a string, it will be simply written to the file and
# will be interpreted by 'sh'
#
# Note, that the list has to be proper yaml, so you have to quote
# any characters yaml would eat (':' can be problematic)
runcmd:
- [ ls, -l, / ]
- [ sh, -xc, "echo $(date) ': hello world!'" ]
- [ sh, -c, echo "=========hello world=========" ]
- ls -l /root
# Note: Don't write files to /tmp from cloud-init use /run/somedir instead.
# Early boot environments can race systemd-tmpfiles-clean LP: #1707222.
- mkdir /run/mydir
- [ wget, "http://slashdot.org", -O, /run/mydir/index.html ]
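Applied to the commands in this thread: an entry like [ /usr/bin/partx --update /dev/vdb ] is a one-element list, so cloud-init passes the whole string to execve(3) as the program name, and no program literally named "/usr/bin/partx --update /dev/vdb" exists, hence "not found". A sketch of the two working forms:

Code:
runcmd:
  # list form: each argument is a separate element
  - [ /usr/bin/partx, --update, /dev/vdb ]
  # string form: the whole line is handed to sh
  - /usr/sbin/mkfs.ext4 /dev/vdb1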



Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
By the way, I now tried with the below commands:

Code:
runcmd:
  - which  partx
  - which mkfs.ext4
  - /usr/bin/lsblk
  - /usr/bin/partx --update /dev/sdxyz
  - /usr/bin/partx --update /dev/vdb
  - /usr/sbin/mkfs.ext4 /dev/vdb1
  - /usr/sbin/mkfs.ext4 /dev/vdb2

It seems the formatting works.

But the automatic mount does not work:

Code:
mounts:
    - [/dev/vdb1, /data1, ext4, "defaults,discard", "0", "0"]
    - [/dev/vdb2, /data2, ext4, "defaults,discard", "0", "0"]


After booting, I manually ran mount -a, and then it worked without error.

So why is the mount not working?
 
Here is the error in /var/log/cloud-init.log:
Code:
2024-05-20 18:17:14,312 - subp.py[DEBUG]: Running command ['mount', '-a'] with allowed return codes [0] (shell=False, capture=True)
2024-05-20 18:17:14,320 - cc_mounts.py[WARNING]: Activate mounts: FAIL:mount -a
2024-05-20 18:17:14,320 - util.py[WARNING]: Activate mounts: FAIL:mount -a
2024-05-20 18:17:14,320 - util.py[DEBUG]: Activate mounts: FAIL:mount -a
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/cloudinit/config/cc_mounts.py", line 617, in handle
    subp.subp(cmd)
  File "/usr/lib/python3/dist-packages/cloudinit/subp.py", line 335, in subp
    raise ProcessExecutionError(
cloudinit.subp.ProcessExecutionError: Unexpected error while running command.
Command: ['mount', '-a']
Exit code: 32
Reason: -
Stdout:
Stderr: mount: /data1: wrong fs type, bad option, bad superblock on /dev/vdb1, missing codepage or helper program, or other error.
               dmesg(1) may have more information after failed mount system call.
        mount: /data2: wrong fs type, bad option, bad superblock on /dev/vdb2, missing codepage or helper program, or other error.
               dmesg(1) may have more information after failed mount system call.
2024-05-20 18:17:14,327 - subp.py[DEBUG]: Running command ['systemctl', 'daemon-reload'] with allowed return codes [0] (shell=False, capture=True)
2024-05-20 18:17:14,631 - cc_mounts.py[DEBUG]: Activate mounts: PASS:systemctl daemon-reload
2024-05-20 18:17:14,631 - handlers.py[DEBUG]: finish: init-network/config-mounts: SUCCESS: config-mounts ran successfully


Maybe it is trying to mount even before the partitions are formatted as ext4: the log shows config-mounts running in the init-network stage, while runcmd (and the mkfs commands in it) only executes in the final stage.

What do you think?
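A quick way to confirm that ordering on the guest (the module lists live in cloud-init's default config; a sketch):

Code:
grep -A15 '^cloud_init_modules' /etc/cloud/cloud.cfg    # disk_setup and mounts run here
grep -A10 '^cloud_final_modules' /etc/cloud/cloud.cfg   # scripts-user, which executes runcmd, runs here
cloud-init analyze show                                 # per-stage timing of the last boot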
 
My advice is to review the logs, try various things, run cloud-init manually and keep working on it.
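For the "run cloud-init manually" part, a hedged sketch using cloud-init's own CLI (exact module naming may vary by version):

Code:
cloud-init clean --logs                                # reset state so modules run again on next boot
cloud-init single --name cc_mounts --frequency always  # re-run just the mounts module now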

The conversation in this thread has long since gone beyond the scope of "Proxmox VE: Installation and configuration".

Good luck


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
Thanks for your kind help.

I added the lines below:

Code:
power_state:
    delay: now
    mode: reboot
    message: Rebooting after cloud-init completion
    condition: true

After the cloud-init-triggered reboot, everything is OK.
 
Can you please share your final YAML that works?
 
Final YAML file. This is for a Debian VM.

YAML:
#cloud-config

resize_rootfs: false
disk_setup:
    /dev/vdb:
        table_type: 'mbr'
        layout:
        - 50
        - 50
        overwrite: true

fs_setup:
    - label: DATA1
      filesystem: 'ext4'
      device: '/dev/vdb'
      partition: vdb1
      overwrite: true
    - label: DATA2
      filesystem: 'ext4'
      device: '/dev/vdb'
      partition: vdb2
      overwrite: true

runcmd:
  - which  partx
  - which mkfs.ext4
  - /usr/bin/lsblk
  - /usr/bin/partx --update /dev/sdxyz
  - /usr/bin/partx --update /dev/vdb
  - /usr/sbin/mkfs.ext4 /dev/vdb1
  - /usr/sbin/mkfs.ext4 /dev/vdb2

mounts:
    - [/dev/vdb1, /data1, ext4, "defaults,discard", "0", "0"]
    - [/dev/vdb2, /data2, ext4, "defaults,discard", "0", "0"]

hostname: testdebian
package_upgrade: true
packages:
  - qemu-guest-agent
  - net-tools

timezone: Asia/Dhaka
users:
  - default
  - name: vagrant
    passwd: "$6$JwfNX94exjrdBozM$FEYyyVh4ZLnrGMfd9tZh8ajqPM60IJIjc4Z7ATEL7MZZ8aAqmVO8WH7HBaqp3IXNSaRQNQ33LuHfQq4hQxQDN."
    groups: [adm, cdrom, dip, plugdev, lxd, sudo]
    lock-passwd: false
    chpasswd: { expire: False }
    sudo: ALL=(ALL) NOPASSWD:ALL
    shell: /bin/bash
    ssh_authorized_keys:
    - ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3ozAcjgaphMeWnFoOak0CwbL4CdExtWjI6BjoPISVQcriDbl7W7ai12kLDrZOs14dyC0IgSvHyEv/3qb7f7QW8/kLFYfQo4CB2XLlZCoKu+CReoXJy6ILHSmBfAhXRgqjfrudQCW2zzzOHgp4OJrk3tpAjnpVRqI1HfMhULRS0s0yfCXs2bQvMoxBkJRQ58R+wN8huYk/boWu5vco4aT0JzjPtKccjEbvVnx8+L6yOu0q3hVv/9xC5C3xjb6ucTc8I81o6xDmcJj95bDNTI6AHY2dkcOPgZ6ONITXN6Nozi6I1bJYOGYZhkppPGsbCgOCjszXd+OZhyVMzeeYPrZViXVyMf0K7mssXlTrm23bdqyOLqssDrazHVRWngq2ZWRkSk5iRMtoQTYPGXav5tsZcVR98mZsZyHsSkTBJ00ncSQ2DgesDDrWTtZOtOfIZ6Aq8Lt5gb23gsnFftDRYRrihh2dps2ma4p3Ob1ey5DXP5DpQrzA/zL5A7dhtZEcnZs= vagrant@JumpHost

power_state:
    delay: now
    mode: reboot
    message: Rebooting after cloud-init completion
    condition: true

ssh_pwauth: True
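A hedged aside on the reboot workaround: in cloud-init's default module ordering, disk_setup and fs_setup run in the init stage before the mounts module, while runcmd only executes in the final stage. So a variant that relies on fs_setup alone (no mkfs in runcmd) should let the first boot mount cleanly without the power_state reboot, assuming fs_setup accepts the partition device directly as documented:

YAML:
#cloud-config
resize_rootfs: false
disk_setup:
    /dev/vdb:
        table_type: 'mbr'
        layout:
        - 50
        - 50
        overwrite: true

fs_setup:
    - label: DATA1
      filesystem: 'ext4'
      device: '/dev/vdb1'   # the partition itself, so no runcmd mkfs is needed
      overwrite: true
    - label: DATA2
      filesystem: 'ext4'
      device: '/dev/vdb2'
      overwrite: true

mounts:
    - [/dev/vdb1, /data1, ext4, "defaults,discard", "0", "0"]
    - [/dev/vdb2, /data2, ext4, "defaults,discard", "0", "0"]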
 
Thank you
 
