Hello Proxmox Devs,
I know PDM is early, but here are some of the top features I'd like to see added that I haven't seen on the Roadmap. Hopefully others agree with at least one of the points below.
Ability to access shell/console of both...
Hi,
you can check the host's and guest's system logs from around the time the issue happened. Please also share the output of pveversion -v and qm config ID, with ID being the numerical ID of the VM.
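For example, if the issue happened around 12:00 today and the VM has ID 100 (both placeholders, adjust to your case):

journalctl --since "12:00" --until "13:00"    # host log around the incident
pveversion -v
qm config 100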
Let me explain it a bit more.
What I want is not exactly to separate Ceph RBD from CephFS. I want to give VMs access as Ceph clients from a different network.
On the Proxmox nodes I have a dual-port 10GbE NIC. One port is dedicated to Ceph (public and...
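What I'm imagining, roughly (monitor addresses 10.0.0.1-3, client.vmuser and pool vmpool are all placeholders): create a restricted key on the cluster with

ceph auth get-or-create client.vmuser mon 'allow r' osd 'allow rw pool=vmpool'

and give the VM an /etc/ceph/ceph.conf pointing at the monitors, e.g.

[global]
mon_host = 10.0.0.1, 10.0.0.2, 10.0.0.3

plus the generated keyring. As far as I understand, clients only ever talk to the public network, so routing that subnet to the VMs should be enough.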
Nope, no luck so far.
The node with the NFS server runs with the old kernel for now.
The strange thing is I didn't get the stale file handle error on other nodes, only on the one that is itself exporting the shares.
Ok.
What special parameters or installs did you add to the servers that don't come from the PVE ISO installer?
Did the server already run another OS without any problems?
Did you look at the hardware logs for over-temperature events or a manual hard reset?
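For example (assuming the servers have a BMC reachable via IPMI):

ipmitool sel list        # BMC event log: temperatures, resets, power events
journalctl -k -b -1      # kernel messages from the previous boot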
Hi @sdettmer,
systemd versions >= 242 require nesting to be able to create Linux namespaces, which are used to isolate services. There are still distros that do not use systemd that you could use for your containers.
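If you do want to stay on a systemd-based distro, enabling nesting for an existing container is a one-liner (CT ID 101 as a placeholder; restart the container afterwards):

pct set 101 --features nesting=1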
Hi!
I'm the biggest proponent of Ceph; it's really an amazing product and technology. Multi-node redundancy, self-healing, and rebalancing are really fantastic.
Please note though that it has a very different performance profile than other...
Thanks, the crashes stopped. I am using this as a baseline; switching to VirtIO SCSI (I was avoiding it to try to avoid "VM" identifiers) rendered Windows unbootable due to a missing driver, and I didn't care enough to fix it since it's a single-purpose VM, so I...
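For reference, the usual fix I skipped goes roughly like this (VM ID 100 and storage local-lvm are placeholders; virtio-win ISO attached): let Windows install the vioscsi driver from a temporary disk before moving the boot disk over.

qm set 100 --scsihw virtio-scsi-pci
qm set 100 --scsi1 local-lvm:1    # temporary 1 GiB disk; Windows installs the SCSI driver for it
# boot Windows, install the driver from the virtio-win ISO,
# then detach scsi1 and reattach the boot disk on the SCSI bus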
Ok, decision made, one more 4TB WD Red SA500 ordered. Paid about the same amount for it as I did in May 25 for three of them :mad:.
I checked the installation process on a nested test system. It offers to build a ZFS RAID10 to install the system...
You can use
zpool status -v
lsblk -f
to check, but it really seems like the Universe storage is formatted for ZFS right now, not LVM.
Again, you can wipe the disks and re-create an LVM storage using these disks if that's what you want.
Proxmox...
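A rough sketch of that, assuming the pool behind the Universe storage is named universe and sits on /dev/sdb (double-check the device names first, this destroys all data on them):

zpool destroy universe
wipefs -a /dev/sdb
pvcreate /dev/sdb
vgcreate universe /dev/sdb
pvesm add lvm universe --vgname universe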
I didn't say that I don't use ZFS at all in Proxmox.
ZFS is on the two 600 GB disks where the VM is located.
The Linux RAID is not ZFS.
I don't think the problem is in the different names.
I changed them everywhere and it's still the same.
If it is all happening on the same host, you could consider using vmbr interfaces without a bridge port, or SDN simple zones without a gateway. Those two should technically be the same under the hood.
That way you can have a completely isolated...
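For example, an isolated bridge in /etc/network/interfaces could look like this (vmbr9 and the address are placeholders):

auto vmbr9
iface vmbr9 inet static
    address 10.99.99.1/24
    bridge-ports none
    bridge-stp off
    bridge-fd 0

Guests attached to vmbr9 can reach each other but nothing else, since no physical port is enslaved.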
If you need to use the same subnet multiple times, you need to utilize VRFs to separate them on the PVE host. This functionality is currently not implemented for Simple Zones. When using NAT this way, you'd also need a way of discerning return...
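As a hand-rolled sketch of the VRF approach with ifupdown2 (all names and addresses are placeholders, and this is outside what the SDN GUI manages):

auto vrf_tenant1
iface vrf_tenant1
    vrf-table auto

auto vmbr10
iface vmbr10 inet static
    address 192.168.100.1/24
    bridge-ports none
    vrf vrf_tenant1

Each duplicate subnet gets its own VRF and bridge, so the overlapping routes live in separate routing tables.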