Btw, which EPYC model do you have installed? I am just reading up on it again so this is from memory and probably not technically 100% correct.
EPYC Naples (1 at the end of the 4-digit product code) has NUMA nodes because they had to group the chiplets, and not every chiplet has the same fast...
which command did you use to get that output?
On an AMD EPYC 7302P 16-Core Processor
I get the following:
# dmesg | grep -i numa
[ 0.006392] No NUMA configuration found
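If you want to cross-check that without digging through dmesg, lscpu (and numactl, if installed) also report the NUMA layout; on EPYC Rome the result depends on the "NUMA nodes per socket" (NPS) BIOS setting. On a single-node setup it looks roughly like this:

# lscpu | grep -i numa
NUMA node(s):        1
NUMA node0 CPU(s):   0-31
# numactl --hardware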
Too bad but I guess you need to use the machine in production and cannot tinker around for too long.
It would have been interesting to know what the cause was. I doubt that it was the same problem that LTT encountered in that video because you are far from having that many really fast SSDs...
Well, you have it in your container config that it will use vmbr0 as the bridge to connect to any other network.
Your network config needs to be adapted then.
An example config that you would also get if you use the installer or GUI to set up your network would be:
iface eno1 inet manual
auto...
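A complete version of that, roughly (address, gateway and NIC name are placeholders for your own values):

auto lo
iface lo inet loopback

iface eno1 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.10/24
        gateway 192.168.1.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0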
yes
yep
Having the IP set on the bridge port and not the bridge is not a default setup. That might cause some issues, but I am not sure; the web UI should still be reachable. After a regular install with the installer, the IP on the bridge is the setup you have out of the box. Not sure if it would be the...
AFAIR the number of max backups is set for each storage. Do you back up all VMs to the same storage or to different ones which might have a different max backup setting?
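If you want to check that quickly, the retention setting lives in /etc/pve/storage.cfg (also visible under Datacenter -> Storage in the GUI). On older setups it is the maxfiles option, on newer ones a prune-backups line; roughly like this, with the storage name and path being just examples:

dir: backup-store
        path /mnt/backup
        content backup
        maxfiles 3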
The first is the running kernel, the second one is the version of the PVE packages.
You might want to install updates. Those...
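For reference, roughly how I would check the two and pull in updates (as root; make sure the right repositories are configured first):

# uname -r
# pveversion -v
# apt update && apt dist-upgrade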
As another note, bridges do not need to have an IP set. So if the NIC which is configured as bridge port should only be used for the VMs and not to access the Proxmox UI, just don't configure an IP address.
You can also create "internal" networks by not setting a bridge port.
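For example, a purely host-internal bridge would look roughly like this (the address is optional and just a placeholder; it is only needed if the host itself should be reachable on that internal network):

auto vmbr1
iface vmbr1 inet static
        address 10.10.10.1/24
        bridge-ports none
        bridge-stp off
        bridge-fd 0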
Without a config of the container at hand I have to guess a bit, but I do think that your vmbr0 config is missing a bridge port.
bridge-ports none is not using any of the physical NICs.
Can you try to set bridge-ports to either eno1 or eno2, depending on which physical NIC the containers...
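Assuming eno1 is the right one, the change in /etc/network/interfaces would roughly be swapping bridge-ports none for bridge-ports eno1 in the vmbr0 stanza and then applying it:

# ifreload -a

(ifreload needs ifupdown2; otherwise restart networking or reboot.)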
Some things I would try:
Install some monitoring tool, especially to know how much of the RAM is going towards ZFS's ARC (the cache in RAM) and to see some stats from the disks (avg write delay, queue, ...); other system stats might give insight as well (a quick ARC check is sketched after this post).
How did you configure the disks of the VM...
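To give an idea what I mean with the ARC: a quick way to see its current and maximum size (arc_summary usually comes with the ZFS tools; the raw values from arcstats are in bytes):

# arc_summary | head -n 25
# grep -E '^(size|c_max)' /proc/spl/kstat/zfs/arcstats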
I have been running ZFS on root for years now on my Arch-based laptops/desktops and for quite a while on my Proxmox servers.
You should consider that whatever problems you read about here in the forum come from a small fraction of the roughly 300,000 installations out there, most of which are working happily.
Given these restrictions, your idea is probably an okay one. Depending on the disk layout and RAID capabilities you will face different issues. ZFS on HW RAID is a bad idea, and PVE only offers software RAID via ZFS.
You would probably have to install Debian first on an md RAID to get redundancy...
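The Debian installer's partitioner can create the md RAID during the install; before putting PVE on top I would verify it is healthy (the device name is a placeholder):

# cat /proc/mdstat
# mdadm --detail /dev/md0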
Do you have an entry in your /etc/hosts file with your node and the IPv6 address? AFAIK the spiceproxy is checking that to determine on which IP to listen.
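Something along these lines in /etc/hosts, with hostname and address being placeholders for your own:

2001:db8::10    pve1.example.com pve1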
This is a feature that is built on top of ZFS snapshots and the send/receive mechanism of ZFS. It is not enabled automagically. You have to enable it per VM and define to which node it should be replicated and the interval.
Luckily, because I only replicate the important VMs in my 2 node...
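If you prefer the CLI over the VM's Replication tab, it is roughly this (VM ID, target node name and schedule are placeholders; see man pvesr for the details):

# pvesr create-local-job 100-0 pve2 --schedule '*/15'
# pvesr status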
Hmm, let's talk about replication and HA within a cluster first. This is what Proxmox can do out of the box, either with shared storage (Ceph, NFS, Samba, iSCSI, ...) or with replication (ZFS), though the latter works best in a two-node cluster because AFAIK you can only replicate to one other...