Search results

  1. Bug in issuing wildcard certificate with Proxmox ACME

    Hi, I also filed an issue (https://bugzilla.proxmox.com/show_bug.cgi?id=5719). I did manage to pull a wildcard certificate with the suggested patches to the DNS_NAME_FORMAT schema. There is of course still something weird happening: you need to add two domain entries - the.domain.tld -...
  2. Bug in issuing wildcard certificate with Proxmox ACME

    So I'd like to issue a valid LE wildcard certificate for my PBS instance. This is especially useful to hide detailed information about the infrastructure from the public Let's Encrypt domain (certificate transparency) logs. I have a working infrastructure for rfc2136 (dns-01) challenge handling through an alias domain...
  3. Kernel 6.8.4-2 causes random server freezing

    There are other threads [1] with crash dumps, which narrow down the issue to blk_flush_complete_seq, which in turn calls blk_flush_restore_request. This function has seen recent activity [2], namely a fix for a NULL pointer dereference. As far as I can see, this has just been scheduled to land in 6.10-rc1...
  4. [SOLVED] Reproducible Proxmox Crash on Kernel 6.8.4-2-pve: KVM force termination renders Web UI, SSH, and all running VMs Unresponsive

    I wonder if this has something to do with https://lore.kernel.org/all/20240501110907.96950-9-dlemoal@kernel.org/
  5. Migrate backup to another datastore

    I have achieved moving VMs between existing datastores:
    * create a remote "localhost"
    * add a sync job on the target datastore, pulling from localhost's source datastore with a group filter covering only the desired VM
    * run-now the sync job; after that, remove the temporary sync job...
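
    A minimal CLI sketch of these steps with proxmox-backup-manager, assuming placeholder datastore names (source/target), a hypothetical sync user sync@pbs, and a PBS version with group filter support; the post describes the GUI workflow, so treat this as an approximation:

      # remote pointing back at this very server
      proxmox-backup-manager remote create localhost --host 127.0.0.1 \
          --auth-id sync@pbs --password 'SECRET' --fingerprint 'SERVER_FINGERPRINT'
      # sync job on the target datastore, pulling only the desired VM's backup group
      proxmox-backup-manager sync-job create move-vm100 --store target \
          --remote localhost --remote-store source --group-filter group:vm/100
      # run the job once ("Run now" in the GUI), verify the data, then clean up
      proxmox-backup-manager sync-job remove move-vm100
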
  6. 2-node cluster second host hard-resets on first host reboot

    This is actually what I expected too. That's why I migrated my kind-of-important VMs manually over to the second node before rebooting the first node. I expected the cluster to become read-only (no management input possible) but not to die completely. I also don't have shared resources within the VMs...
  7. 2-node cluster second host hard-resets on first host reboot

    Yes, you seem to be totally right. Fencing really kills the host. There was HA configured on a VM template... Not only is that solved, thanks for pointing out the slim QDevice. There is of course a 3rd node in progress, but this really is the newly available(?) fix for that. If I remember right...
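
    For reference, a hedged sketch of adding such a QDevice vote to a two-node cluster; the external host's IP is a placeholder and that machine needs the corosync-qnetd package:

      apt install corosync-qnetd      # on the external (third) machine
      apt install corosync-qdevice    # on both cluster nodes
      pvecm qdevice setup 192.0.2.10  # run once on one cluster node
      pvecm status                    # should now show the additional QDevice vote
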
  8. 2-node cluster second host hard-resets on first host reboot

    TL;DR: if one host is rebooted, the second host hard-resets. No logs in dmesg/messages. I have a little setup running on ThinkCentres, with NVMe formatted as LVM. This setup serves a few home applications and is mainly for educational purposes. A playground, so to speak. Both hosts have 16GB RAM...
  9. [SOLVED] Cannot remove Snapshot (VM is locked (snapshot-delete))

    Happened to me today after stopping a manual snapshot task because I forgot to uncheck RAM while it was writing RAM to disk. The VM remained killed and locked. The snapshot also remained in the snapshot list, but NOW was not shown as a child. After `qm unlock`, trying to remove the snapshot resulted in >...
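
    A hedged sketch of the usual recovery path; VMID and snapshot name are placeholders, and --force drops the snapshot from the config even if removing the disk snapshots fails:

      qm unlock 100                          # clear the stale snapshot-delete lock
      qm listsnapshot 100                    # confirm the leftover snapshot is still listed
      qm delsnapshot 100 mysnapshot --force
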
  10. VMs lose network connectivity after a few minutes

    Turns out disabling multicast_snooping on the Proxmox host has solved the connectivity issues so far: `echo -n 0 > /sys/class/net/*/bridge/multicast_snooping`
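
    A hedged way to make that setting survive reboots, assuming a single bridge named vmbr0: add a post-up line to the existing bridge stanza in /etc/network/interfaces.

      iface vmbr0 inet static
          # ...existing bridge options stay as they are...
          post-up echo 0 > /sys/class/net/vmbr0/bridge/multicast_snooping
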
  11. Access Denied creating CIFS share between Proxmox and TrueNAS Scale

    I have a similar problem, being unable to connect to Samba/CIFS shares. From what I see in smbd's log on the storage side, pvesm always tries to connect as user nobody, regardless of what --username is supplied to cifsscan. This doesn't look right. On the other hand, cifsscan is able to list...
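
    For reproducing the scan, a hedged example of the call in question; hostname and credentials are placeholders:

      pvesm scan cifs nas.example.lan --username backupuser --password 'SECRET'
      # then compare against the connecting user shown in smbd's log on the TrueNAS side
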
  12. VMs lose network connectivity after a few minutes

    This issue still persists. What I've configured above was:
    host:
    - vmbr0 VLAN-aware with enp1s0 as slave
    vm:
    - net3 on vmbr0, tag 97
    - net4 on vmbr0, tag 98
    In this case net3 and net4 stop forwarding packets after about a minute. What I've tried now, which runs stable:
    host:
    - vmbr0 without VLAN awareness...
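
    For context, a hedged sketch of the first (VLAN-aware) variant; interface names, addresses and MACs are placeholders:

      # /etc/network/interfaces on the host
      auto vmbr0
      iface vmbr0 inet static
          address 192.0.2.2/24
          gateway 192.0.2.1
          bridge-ports enp1s0
          bridge-stp off
          bridge-fd 0
          bridge-vlan-aware yes
          bridge-vids 2-4094

      # matching NIC entries in the VM config (e.g. /etc/pve/qemu-server/100.conf)
      net3: virtio=BC:24:11:00:00:01,bridge=vmbr0,tag=97
      net4: virtio=BC:24:11:00:00:02,bridge=vmbr0,tag=98
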
  13. VMs lose network connectivity after a few minutes

    Hi. I'm running Proxmox on a Rock Pi X which hosts a few tiny routers and Telegraf instances. After a recent dist-upgrade from basically 5.4.98-1-pve to 5.4.106-1-pve on the host, one OpenWrt (Gluon) based VM shows unstable network connectivity. The VM has 5 NICs configured, two of which...
  14. VM in a DMZ

    I've recently learned the solution to this. If VMs have to be connected to multiple VLANs, don't create a bridge for every VLAN on the host. A Linux bridge is by design also an interface through which the host itself is reachable. Just use one bridge with "VLAN aware" enabled and only use this one bridge as...
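
    A hedged sketch of attaching VM NICs to that single VLAN-aware bridge; VMID and tags are placeholders:

      qm set 100 --net0 virtio,bridge=vmbr0,tag=30   # DMZ VLAN
      qm set 100 --net1 virtio,bridge=vmbr0,tag=40   # LAN VLAN
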
  15. Removing NFS at datacenter level

    I had a very similar experience with a CIFS share at the datacenter/storage level not unmounting after unchecking "Enabled". Before also filing a bug I want to make sure my expectation is correct: a storage/mount should go away after being disabled or removed - correct? My current solution is to manually...
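
    The manual workaround as a hedged one-liner; PVE mounts storages under /mnt/pve/<storage ID>, and the ID here is a placeholder:

      umount /mnt/pve/mycifsshare    # after disabling or removing the storage
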
  16. Ping with unprivileged user in LXC container / Linux capabilities

    As some people say, root inside a container could be considered "safe", as it's already running in user context on the host. Sadly, that is the best answer to this on the internet. So in my case of Telegraf in a container: run it as root and it is able to ping.
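
    A hedged sketch of that workaround inside the container, assuming the package ships a systemd unit named telegraf.service:

      systemctl edit telegraf      # add an override containing:
      #   [Service]
      #   User=root
      #   Group=root
      systemctl restart telegraf
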
  17. Ping with unprivileged user in LXC container / Linux capabilities

    Same here, while trying to get Telegraf working with the native ping plugin. After setcap, user telegraf inside the container is able to execute ping (legacy, screen scrape). This workaround does not work for Telegraf's native ping implementation. Even after also applying setcap to the telegraf binary...
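
    The setcap workaround described above, as a hedged sketch; the paths assume the usual Debian/InfluxData package locations inside the container:

      setcap cap_net_raw+ep /usr/bin/ping       # lets user telegraf run the ping binary
      setcap cap_net_raw+ep /usr/bin/telegraf   # not sufficient for the native ping plugin, as noted
      getcap /usr/bin/ping /usr/bin/telegraf    # verify the capabilities were applied
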
  18. VM in a DMZ

    I have to disagree that an interface/bridge which simply has no IP assigned is not listening or reachable via its connected L2 network. Why? By default, the kernel always sticks its plug into a Linux bridge. - Interfaces (vmbrX) still pick up IPv6 addresses if the network they switch announces one...
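
    To illustrate, a hedged sketch of keeping such an "IP-less" bridge from auto-configuring IPv6; the bridge name is a placeholder:

      sysctl -w net.ipv6.conf.vmbr1.disable_ipv6=1   # no link-local/SLAAC address on the bridge
      sysctl -w net.ipv6.conf.vmbr1.accept_ra=0      # ignore router advertisements on that segment
      ip -6 addr show dev vmbr1                      # should list no addresses afterwards
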
  19. SSD Wearout S.M.A.R.T - N/A

    Sent a PATCH to pve-devel@proxmox.com.
  20. SSD Wearout S.M.A.R.T - N/A

    Hello, I'm currently puzzled by "N/A" values in the Wearout column of the Disks overview in the Proxmox GUI. The SMART values are actually there. A little reverse engineering and the (bug) is found: `get_wear_leveling_info` looks for the vendor name in the model string of the...
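
    To confirm that the SMART data is present even though the GUI shows N/A, a hedged check; the device path is a placeholder:

      smartctl -a /dev/sda | grep -i -E 'wear|percent'
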
