Hi, I'm trying to use Proxmox with the Debian Bookworm cloud image and cloud-init.
But when I start the VM, it gets stuck during boot for about two minutes while it checks for a network connection: Job systemd-networkd-wait-online.service/start running (#s / no limit).
It seems Debian bookworm...
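In case it helps others hitting the same hang: the two-minute wait usually means systemd-networkd-wait-online is blocking on an interface that never comes up. One common workaround (the interface name and timeout here are my example values, not from the original post) is a systemd drop-in that restricts the wait:

```ini
# /etc/systemd/system/systemd-networkd-wait-online.service.d/override.conf
[Service]
# Clear the packaged ExecStart, then wait only for one interface,
# with a short timeout instead of "no limit"
ExecStart=
ExecStart=/lib/systemd/systemd-networkd-wait-online --interface=eth0 --timeout=30
```

Alternatively, `systemctl disable systemd-networkd-wait-online.service` skips the check entirely, at the cost of services starting before the network is up.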
Thanks, that makes more sense. For my current disk the path is then /dev/disk/by-path/pci-0000:00:05.0-scsi-0:0:0:1.
I'm running the Debian cloud image as stated, and it uses cloud-init 20.4.1, but the following doesn't seem to work:
disk_aliases: # Found a bug saying the name should be disk not...
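For what it's worth, current cloud-init documents the key for the disk_setup module as `device_aliases`, not `disk_aliases`, which may be the renaming the bug report refers to. A minimal sketch, assuming the by-path device from earlier in this thread and a hypothetical alias name `my_data`:

```yaml
# Sketch only: key name per current cloud-init (cc_disk_setup) docs;
# "my_data" is a placeholder alias, the device path is from this thread.
device_aliases:
  my_data: /dev/disk/by-path/pci-0000:00:05.0-scsi-0:0:0:1
disk_setup:
  my_data:
    table_type: gpt
    layout: true
    overwrite: false
```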
For those of you using Zoho as an SMTP server: I managed to get it working with the following:
# See /usr/share/postfix/main.cf.dist for a commented, more complete version
smtpd_banner = $myhostname ESMTP $mail_name (Debian/GNU)
biff = no
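Since the rest of the config is cut off above: a typical Zoho relay setup in main.cf looks roughly like this (the host and port are Zoho's standard SMTP endpoint, and the sasl_passwd path is the usual Debian convention; these lines are my sketch, not the original poster's exact config):

```ini
# Relay all outgoing mail through Zoho's authenticated SMTP endpoint
relayhost = [smtp.zoho.com]:587
smtp_sasl_auth_enable = yes
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
smtp_sasl_security_options = noanonymous
smtp_tls_security_level = encrypt
```

The credentials then go in /etc/postfix/sasl_passwd (one line: `[smtp.zoho.com]:587 user@example.com:password`), hashed with `postmap /etc/postfix/sasl_passwd` before restarting Postfix.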
I checked the /var/log/cloud-init-output.log file, and it still says ci-info: no authorized SSH keys fingerprints found for user peter. and ssh-ed25519: ..... peter@.... (the dots are just redacted content).
And yes, I copied the content from my id_ed25519.pub, but I added a - in front and a : after...
I'm trying to use cloud-init to add my SSH key. I generated an ed25519 key (id_ed25519) on my desktop PC, and after that I added the following to my cloud-init.yml:
users:
- name: peter
passwd: "..." # to enable auth using password until I get my key to work
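For reference, the key belongs under `ssh_authorized_keys` as a plain YAML list item, pasted exactly as the single line in id_ed25519.pub appears (no extra leading `-` inside the string, no trailing `:`). A minimal sketch, where the key body and comment are placeholders:

```yaml
users:
  - name: peter
    # paste the one line from ~/.ssh/id_ed25519.pub unchanged:
    ssh_authorized_keys:
      - ssh-ed25519 AAAA... peter@desktop
```

Note the `-` here is YAML list syntax starting the item, not part of the key itself; the key string must begin with `ssh-ed25519`.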
Hi, thanks for your quick reply.
One way to check whether the file is run is to open a terminal immediately after the VM starts, i.e. qm start 100 && qm terminal 100, and then watch the console messages. I have attached a file with the output (not sure I was able to capture all of it). I can see some cloud...
Hi, I have created the following script to create a VM with a cloud-init config:
echo "Loading variables"
echo "VMID: $VMID"
echo "STORAGE: $STORAGE"
echo "IMAGE: $IMAGE"...
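For context, the rest of such a script usually follows this shape. The variables are the ones echoed above; the VM name and the exact qm options are my sketch of a typical flow, not the original poster's script:

```sh
# Sketch: create the VM, import the cloud image, attach it, add a
# cloud-init drive, and point it at a custom user-data snippet.
qm create "$VMID" --name cloudinit-test --memory 2048 --net0 virtio,bridge=vmbr0
qm importdisk "$VMID" "$IMAGE" "$STORAGE"
qm set "$VMID" --scsihw virtio-scsi-pci --scsi0 "$STORAGE:vm-$VMID-disk-0"
qm set "$VMID" --ide2 "$STORAGE:cloudinit" --boot order=scsi0
qm set "$VMID" --cicustom "user=local:snippets/cloud-init.yml"
```

These commands only run on a Proxmox host; the snippet for --cicustom must live on a storage with the "snippets" content type enabled.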
I think I managed to find the correct way: qm set $VMID --delete unused0. It would have been nice if qm set $VMID --delete scsi0 had printed out unused0 so that I had a hint. (This also seems to delete the ZFS volume, so I don't have to delete it separately any more.)
Hi, I'm running Proxmox 7.2-7 and trying to remove a VM disk from the CLI.
I have tried the following:
qm set $VMID --delete scsi0
zfs destroy -f $STORAGE/vm-$VMID-disk-0
The first command seems to mark the disk as unused, and the second removes the zfs volume, but the UI still believes the disk is...
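Putting the answer from earlier in this thread together, the CLI removal is two qm set --delete calls: the first detaches the disk (it becomes an unusedN entry), the second removes that entry and, per the earlier post, the ZFS volume with it:

```sh
# Detach the disk; it reappears in the config as unused0
qm set "$VMID" --delete scsi0
# Remove the unused entry (this also deletes the backing ZFS volume)
qm set "$VMID" --delete unused0
```

This avoids the separate `zfs destroy`, which is what left the UI out of sync with the actual state.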
After mapping the video and render groups into the container, the device seems to be detected.
This was done by running getent group video | cut -d: -f3 and getent group render | cut -d: -f3 on both the host and in the container, and mapping the groups using the config file and /etc/subuid and...
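The GID lookup above is just parsing an /etc/group-style line; `cut -d: -f3` picks the third colon-separated field. A self-contained illustration (the group line is a typical example, real GIDs vary per system):

```shell
# Simulate what "getent group video" prints (format: name:pw:gid:members)
line="video:x:44:root"
# Extract the GID field, exactly as in the command from the post
gid=$(printf '%s\n' "$line" | cut -d: -f3)
echo "$gid"   # → 44
```

The host GID and container GID are what you then map against each other in the LXC config and /etc/subgid.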
Hi, I have a machine with an Intel® Core™ i5-4690K CPU, which has Intel Quick Sync according to Ark.
I have installed the i965-va-driver-shaders package and can see that something is detected by running vainfo:
error: can't connect to X server!
libva info: VA-API version 1.10.0
libva info: Trying to...
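A side note on the output above: the "can't connect to X server" line is usually harmless noise when running vainfo over SSH or on a headless host. You can point it at the DRM render node directly instead (renderD128 is the usual first render node; adjust the path if yours differs):

```sh
vainfo --display drm --device /dev/dri/renderD128
```

This queries VA-API through the kernel DRM interface without needing an X session.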
I'm 99% sure that I enabled/disabled the correct service, as the command systemctl --failed reflects the change correctly after a reboot.
But disabling the service at least removes/hides the problem.
The big question is: if the ZFS pool fails for some reason, will some other service end up in a failed state?
I renamed the zpool.cache file and enabled the service again, and did a restart.
I'm guessing this is the interesting part?
Dec 17 10:12:56 proxmox kernel: i915 0000:00:02.0: [drm] Cannot find any crtc or sizes
Dec 17 10:12:56 proxmox systemd: Created slice system-lvm2\x2dpvscan.slice.
Hi again. I tried renaming /etc/zfs/zpool.cache to /etc/zfs/zpool.cache.backup, but sadly the failed service persists after a reboot.
But disabling the service worked, and the /asgard directory is still there, so that seems to have done the trick.
I'm a bit confused about what I did to cause this...
Is there any way to fix this issue properly? I'm planning on adding monitoring, and this would cause it to report an error...
I tried adding ZFS_INITRD_PRE_MOUNTROOT_SLEEP='15' and ZFS_INITRD_POST_MODPROBE_SLEEP='15' to /etc/default/zfs and then running update-initramfs -u, but that didn't help.
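One more thing worth trying, assuming the failing unit is zfs-import-cache.service and the pool is named asgard (after the /asgard mountpoint mentioned above): regenerate the cache file instead of deleting it, then rebuild the initramfs so the boot environment picks up the fresh copy:

```sh
# Rewrite /etc/zfs/zpool.cache from the currently imported pool
zpool set cachefile=/etc/zfs/zpool.cache asgard
# Rebuild the initramfs so early boot sees the new cache
update-initramfs -u -k all
# Re-enable the import service and reboot to verify
systemctl enable zfs-import-cache.service
```

A stale or empty zpool.cache is a common cause of this unit failing while the pool itself imports fine later in boot.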