Hi,
I also filed an issue: https://bugzilla.proxmox.com/show_bug.cgi?id=5719
I did manage to pull a wildcard certificate with the suggested patches to the DNS_NAME_FORMAT schema.
There is of course still something weird happening. You need to add two domain entries:
- the.domain.tld
- ...
So I'd like to issue a valid LE wildcard certificate for my PBS instance. A wildcard is especially useful for keeping detailed infrastructure hostnames out of the public LE certificate logs.
I have a working infrastructure for rfc2136 (dns-01) challenge handling through an alias domain...
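For context, rfc2136 challenge handling boils down to a dynamic update of the _acme-challenge TXT record in the alias zone. A minimal sketch with nsupdate, assuming a TSIG key file; all names and the token are placeholders:

# the cert domain has a CNAME for _acme-challenge pointing into this alias zone
nsupdate -k /etc/acme/tsig.key <<'EOF'
server ns1.alias-zone.tld
update add _acme-challenge.alias-zone.tld. 60 TXT "TOKEN_FROM_ACME"
send
EOF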
There are other threads [1] with crash dumps which narrow the issue down to blk_flush_complete_seq, which in turn calls blk_flush_restore_request. That function has seen recent activity [2], namely a fix for a NULL pointer dereference. As far as I can see, this fix has only just been scheduled for 6.10-rc1...
I have managed to move VMs between existing datastores:
* create a remote "localhost"
* add a sync job on the target datastore, pulling from localhost's source datastore with a group filter that covers only the desired VM
* run-now the sync job, and afterwards remove the temporary sync job...
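Roughly, on the CLI (a hedged sketch; datastore names and credentials are placeholders, the exact parameters may differ between PBS versions, and I triggered the run itself via "Run now" in the GUI):

# point PBS at itself as a "remote"
proxmox-backup-manager remote create localhost --host 127.0.0.1 --auth-id 'sync@pbs' --password 'xxx' --fingerprint '<fp>'
# pull only one backup group from the source datastore into the target
proxmox-backup-manager sync-job create move-vm100 --store target-store --remote localhost --remote-store source-store --group-filter 'group:vm/100'
# after running it once, remove the temporary job again
proxmox-backup-manager sync-job remove move-vm100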
This is actually what I expected too. That's why I migrated my kind-of-important VMs manually over to the second node before the first node's reboot. I expected the cluster to become read-only (no management input possible), but not to die completely. I also don't have shared resources within the VMs...
Yes, you seem to be totally right. Fencing really kills the host. There was HA configured on a VM template...
Not only is that solved, thanks for pointing out the slim QDevice. There is of course a 3rd node in progress, but this really is the newly available(?) fix for that. If I remember right...
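For anyone finding this later, the QDevice setup is roughly (the third machine's IP is a placeholder):

# on the small third machine that only provides the vote:
apt install corosync-qnetd
# on both cluster nodes:
apt install corosync-qdevice
pvecm qdevice setup <qnetd-ip>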
TL;DR: if one host is rebooted, the second host hard-resets. No logs in dmesg/messages.
I have a little setup running on ThinkCentres, with NVMe formatted as LVM. This setup serves a few home applications and is mainly for educational purposes. A playground, so to say. Both hosts have 16GB RAM...
Happened to me today after I stopped a manual snapshot task while it was writing RAM to disk (I had forgotten to uncheck RAM).
The VM remained killed and locked. The snapshot also remained in the snapshot list, but NOW was not shown as its child.
After `qm unlock`, trying to remove the snapshot resulted in
>...
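For reference, the usual cleanup path would be something along these lines (vmid and snapshot name are placeholders; --force drops the snapshot from the config even if removing the disk snapshots fails):

qm unlock <vmid>
qm delsnapshot <vmid> <snapname> --force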
Turns out that disabling multicast_snooping on the Proxmox host has solved the connectivity issues so far.
# a glob as a redirect target only expands to a single file, so loop instead:
for f in /sys/class/net/*/bridge/multicast_snooping; do echo 0 > "$f"; done
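To make this persist across reboots, one option (assuming the bridge is vmbr0) is a post-up hook in the bridge's stanza in /etc/network/interfaces:

# appended to the existing "iface vmbr0 inet static" block:
post-up echo 0 > /sys/class/net/vmbr0/bridge/multicast_snooping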
I have a similar problem, being unable to connect to samba/cifs shares.
From what I see in smbd's log on the storage side, pvesm always tries to connect as user nobody, regardless of which --username is supplied to cifsscan. This doesn't look right.
On the other hand, cifsscan is able to list...
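To rule out the credentials themselves, they can be checked directly against the server (hostname and username are placeholders):

# verify the account outside of pvesm; prompts for the password
smbclient -L //storage.example -U myuser
# then compare with what the scan reports
pvesm cifsscan storage.example --username myuser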
This issue still persists.
What I've configured above was:
host:
- vmbr0 vlan aware with enp1s0 as slave
vm:
- net3 vmbr0 tag 97
- net4 vmbr0 tag 98
In this case net3 and net4 stop forwarding packets after about a minute.
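For anyone debugging the same thing, the bridge's view can be inspected with iproute2 while it happens; this is just a diagnostic idea:

# which VLANs each bridge port currently forwards
bridge vlan show
# whether the VM's MAC is still learned on the right port
bridge fdb show br vmbr0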
What I've tried now, which runs stably:
host:
- vmbr0 without vlan aware...
Hi.
I'm running Proxmox on a rockpi-x which hosts a few tiny routers and telegraf instances.
After a recent dist-upgrade from basically 5.4.98-1-pve to 5.4.106-1-pve on the host, one OpenWrt (Gluon) based VM shows unstable network connectivity.
The VM has 5 NICs configured, two of which...
I've recently learned the solution to this.
If VMs have to be connected to multiple VLANs, don't create a bridge for every VLAN on the host. A Linux bridge is by design also an interface through which the host itself is reachable.
Just use one bridge with "vlan aware" enabled and only use this one bridge as...
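A minimal sketch of such a single vlan-aware bridge (addresses and interface names are assumptions):

# /etc/network/interfaces on the host
auto vmbr0
iface vmbr0 inet static
    address 192.0.2.10/24
    gateway 192.0.2.1
    bridge-ports enp1s0
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094

The VLAN is then selected per VM NIC, e.g. `qm set 100 --net0 virtio,bridge=vmbr0,tag=97`.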
I had a very similar experience with a CIFS share at the datacenter/storage level not unmounting after unchecking "Enabled".
Before also filing a bug, I will make sure my expectation is correct:
A storage/mount has to go away after being disabled or undefined - correct?
My current solution is to manually...
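In case it helps, checking and cleaning up by hand would look something like this (the storage ID is a placeholder; PVE mounts CIFS storages under /mnt/pve/<storage>):

# is the share still mounted although the storage is disabled?
findmnt /mnt/pve/mycifs
# if so, unmount it manually
umount /mnt/pve/mycifs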
As some people say, root inside a container could be considered "safe", as it's already running in user context on the host.
Sadly, that's the best answer to this on the internet.
So in my case of telegraf in a container, run it as root and it is able to ping.
Same here, while trying to get telegraf working with its native ping plugin.
After setcap, the user telegraf inside the container is able to execute ping (the legacy, screen-scraping method).
This workaround does not work for telegraf's native ping implementation, even after also applying setcap to the telegraf binary...
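For reference, the setcap workaround mentioned above looks roughly like this (binary paths inside the container are assumptions):

# lets unprivileged users open raw sockets with the ping binary
setcap cap_net_raw+ep /bin/ping
# the same applied to telegraf itself - which did not help its native plugin
setcap cap_net_raw+ep /usr/bin/telegraf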
I have to disagree that an interface/bridge which simply has no IP assigned is not listening or reachable on its connected L2 network.
Why?
By default, the kernel always plugs itself into a Linux bridge.
- Interfaces (vmbrX) still pick up IPv6 addresses if the network they switch announces one...
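To keep such a plain L2 bridge from picking up addresses, something like this should work (the bridge name is an assumption):

# ignore router advertisements and disable IPv6 on the bridge entirely
sysctl -w net.ipv6.conf.vmbr1.accept_ra=0
sysctl -w net.ipv6.conf.vmbr1.disable_ipv6=1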
Hello,
I'm currently puzzled by the "N/A" values in the Wearout column of the Disks overview in the Proxmox GUI.
The SMART values are actually there.
A little reverse engineering and the (bug) is found: `get_wear_leveling_info` looks for the vendor name in the model string of the...
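The raw attribute can be checked directly, even when the GUI shows N/A (the device node is an example):

# look for the wearout-related SMART attributes by hand
smartctl -A /dev/sda | grep -i -e wear -e percent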