Hello
I'm having problems creating firewall rules using aliases that are created dynamically.
I started by creating a simple zone and a vnet on top of that. The zone is using integrated IPAM and PowerDNS for name registration and resolution.
The vnet...
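For reference, a minimal sketch of a manually defined alias and a rule using it in /etc/pve/firewall/cluster.fw (the alias name and addresses here are hypothetical; dynamically created SDN aliases are referenced in rules the same way):

```
[ALIASES]

webnet 10.10.0.0/24  # hypothetical alias for the vnet subnet

[RULES]

IN ACCEPT -source webnet -p tcp -dport 80
```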
Feature requests should be filed on bugzilla.proxmox.com. Since this is the community forum, things might get missed by staff members. New Bugzilla entries, however, are guaranteed to get noticed.
Hello Proxmox Devs,
I know PDM is early, but here are some of the top features I'd like to see added that I have not seen on the roadmap. Hopefully others can agree with at least one of the points below.
Ability to access shell/console of both...
Hi,
you can check the host's and guest's system logs from around the time the issue happened. Please also share the output of pveversion -v and qm config ID with the numerical ID of the VM.
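A sketch of the commands involved, to be run on the Proxmox host (timestamps and the VMID 100 are placeholders):

```shell
# Inspect the host's system log around the time the issue happened
journalctl --since "2024-01-01 10:00" --until "2024-01-01 11:00"

# Package versions of the Proxmox VE stack
pveversion -v

# Configuration of the affected VM (replace 100 with the actual VMID)
qm config 100
```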
Let me explain it a bit more.
What I want is not exactly to separate Ceph RBD from CephFS. I want to give VMs access as Ceph clients from a different network.
On proxmox nodes I have a 10GbE dual NIC. One port dedicated to Ceph (public and...
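For context, Ceph separates client-facing and replication traffic via two settings in ceph.conf; a minimal sketch (the subnets here are placeholders, not the poster's actual networks):

```
# /etc/ceph/ceph.conf
[global]
    public_network  = 10.10.10.0/24   # network Ceph clients (e.g. VMs) use to reach MONs/OSDs
    cluster_network = 10.10.20.0/24   # OSD-to-OSD replication and heartbeat traffic
```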
Nope, no luck so far.
Node with the NFS-Server runs with the old Kernel for now.
The strange thing is I didn't get the stale file handle error on the other nodes, only on the one that is itself exporting the shares.
Ok.
Did you use any special parameters, or install anything on the servers, that doesn't come from the PVE ISO installer?
Did the server already work on another OS without any problems?
Did you look at the hardware logs for over-temperature events or manual hard resets?
Hi @sdettmer,
systemd versions >= 242 require nesting to be able to create Linux namespaces, which are used to isolate services. There are still distros that do not use systemd that you could use for your containers.
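Enabling nesting for a container can be done with `pct` on the host; a sketch, assuming a hypothetical CTID of 100:

```shell
# Enable the nesting feature so systemd >= 242 inside the container
# can create the namespaces it uses for service isolation
pct set 100 --features nesting=1

# Restart the container for the change to take effect
pct stop 100 && pct start 100
```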
Hi!
I'm the biggest proponent of ceph, it's really an amazing product and technology. Multi-node redundancy, self-healing and rebalancing is really fantastic.
Please note though that it has a very different performance profile than other...
Thanks, the crashes stopped. I am using this as a baseline; switching to VirtIO SCSI (which I had been avoiding to try to keep "VM" identifiers out of the guest) rendered Windows unbootable due to a missing driver, and I didn't care enough to fix it since it's a single-purpose VM, so I...
Ok, decision made: one more 4TB WD Red SA500 ordered. Paid about the same amount for it as I did in May '25 for three of them :mad:.
I checked the installation process on a nested test system. It offers to build a ZFS Raid10 to install the system...