I have a mixed cluster of single and dual socket machines. Does migrating VMs from the dual socket machines to single socket machines with NUMA enabled have any negative effects?
This is a dual socket AMD EPYC system; NUMA is enabled, but only for the single VPS I am testing with, as I had not set it before noticing this issue.
Running the test sysbench --test=memory --memory-block-size=4G --memory-total-size=32G run on the Proxmox host and on virtual machines shows...
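For reference, NUMA can be toggled per VM from the CLI; a minimal sketch (VM ID 100 is just a placeholder):

qm set 100 --numa 1
# with NUMA on, it helps to match the guest topology to the host, e.g.
qm set 100 --sockets 1 --cores 4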
I seem to have an issue with one of my machines (https://pastebin.com/zK4KZL9r): it keeps crashing. Memory is near full, yet it's only swapping a few GB.
The following settings are in use:
vm.swappiness=20
# Keep at least 10GB reserved for the ZFS ARC
options zfs zfs_arc_min=10737418240
# Set to use 20GB Max
options zfs zfs_arc_max=21474836480
##...
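For what it's worth, modprobe options only take effect after rebuilding the initramfs and rebooting; the ARC limits can also be changed at runtime. A sketch using the same 10GB/20GB values:

# apply immediately, without a reboot
echo 10737418240 > /sys/module/zfs/parameters/zfs_arc_min
echo 21474836480 > /sys/module/zfs/parameters/zfs_arc_max
# make the modprobe options stick across reboots
update-initramfs -u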
It will boot up as either enp133s0f or enp129s0f; what would cause the name to keep changing?
4: enp133s0f0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether 00:10:18:c3:a1:80 brd ff:ff:ff:ff:ff:ff
5: enp133s0f1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop...
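A common fix for unstable interface names (a sketch, not from the post; the MAC is taken from the output above and the name eth-lan0 is arbitrary) is pinning the name with a systemd link file such as /etc/systemd/network/10-persistent-net.link:

[Match]
MACAddress=00:10:18:c3:a1:80

[Link]
Name=eth-lan0

Run update-initramfs -u afterwards so the rule is also present at early boot, then update /etc/network/interfaces to the new name.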
One of my servers is taking a very long time to migrate data. As you can see from this screenshot, it does eventually complete.
syslog shows the following; is it a sign of a bad disk? nvme2n1p1 & nvme3n1p1 are in the ZFS pool.
May 13 16:45:16 HOME1 zed: eid=90 class=delay pool='zfs' vdev=nvme3n1p1...
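To check whether those delay events point at a failing drive, the pool and the NVMe health data can be queried directly (a sketch; the pool and device names are taken from the log line above):

# per-vdev read/write/checksum error counters
zpool status -v zfs
# NVMe SMART data: media errors, wear level, temperature
smartctl -a /dev/nvme3n1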
Prune & GC jobs run daily, and a verify job runs daily.
Second picture is the total disk space of PBS; it just keeps going up and up even though no more backups are being added (maximum of 3).
Last part of the image is my ZFS pool size hosting the virtual machines; it's not going up...
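One PBS detail worth keeping in mind (general behaviour, not from the post): garbage collection only frees chunks that have been unreferenced for roughly 24 hours, so space is reclaimed a day after pruning, not immediately. GC can be triggered and watched from the CLI; a sketch assuming the datastore is named backup:

proxmox-backup-manager garbage-collection start backup
proxmox-backup-manager garbage-collection status backup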
I am trying to figure out why my Proxmox node crashed this morning and entered read-only mode. Below is the syslog; does anything stand out?
https://pastebin.com/Tj5cYPUk
A reboot fixed it, and the disks are nowhere near full.
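For a crash like this, the journal from the previous boot is often more telling than plain syslog; a generic sketch:

# errors and worse from the boot before the reboot
journalctl -b -1 -p err
# kernel messages only, looking for the remount-read-only trigger
journalctl -b -1 -k | grep -i -e 'read-only' -e 'i/o error'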
Our public IPs are 202.55.21.xxx. I am running the following to switch the cluster from public to internal IPs:
killall -9 corosync
systemctl stop pve-cluster
systemctl stop pvedaemon
systemctl stop pvestatd
systemctl stop pveproxy
sed -i 's/202.55.21/10.0.10/g' /etc/corosync/corosync.conf...
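One caveat with this approach (standard procedure, not shown above): the authoritative copy is /etc/pve/corosync.conf, and its config_version must be incremented on every change. With pve-cluster stopped, the cluster filesystem can be mounted in local mode to edit it; a sketch:

# start pmxcfs writable in local mode
pmxcfs -l
# edit /etc/pve/corosync.conf: swap the IPs and bump config_version, then:
killall pmxcfs
systemctl start pve-cluster pvedaemon pvestatd pveproxy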
My node needs to be rebooted; how can I bulk hibernate (suspend to disk) lots of VMs?
On a side note, are there any risks of data loss with hibernation, or any known issues?
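A minimal loop for the bulk part (a sketch; it assumes every running VM should be hibernated and that your PVE version supports qm suspend --todisk):

for vmid in $(qm list | awk '$3 == "running" {print $1}'); do
    qm suspend "$vmid" --todisk 1
done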
When I create a template from a cloud image, I run the following:
qm create 8110 --name Ubuntu20.04 --memory 2048
qm importdisk 8110 focal-server-cloudimg-amd64.img local
cd /var/lib/vz/images/8110
qemu-img convert -O qcow2 vm-8110-disk-0.raw vm-8110-disk-0.qcow2
qm set 8110 --scsihw...
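Depending on the PVE version, the separate qemu-img convert step can be skipped by importing directly in the target format (a sketch reusing the same IDs):

qm importdisk 8110 focal-server-cloudimg-amd64.img local --format qcow2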
I would like to sync my ZFS datastore (root@prox1) to my remote storage server (root@storage), preferably every 15 minutes. It must only sync new data, just like ZFS replication (which is local only???).
root@prox1:~# zfs list
NAME USED AVAIL REFER MOUNTPOINT
zfs...
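For what it's worth, zfs send/receive is not local-only; incremental sends can be piped over SSH. A minimal sketch, with the dataset (zfs/data), target (backup/data), and snapshot names all assumed:

# on prox1: snapshot, then send only the delta since the last sync
zfs snapshot zfs/data@sync-new
zfs send -i zfs/data@sync-prev zfs/data@sync-new | ssh root@storage zfs receive backup/data

Tools like pve-zsync or syncoid wrap exactly this pattern and can be scheduled from cron every 15 minutes.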
Wondering if anyone has tested live migration with the host CPU flag between different AMD EPYC (7001 series) models.
It would be good to know what works and what doesn't if anyone has test results.
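The usual fallback when hosts differ (general advice, not test results) is a shared baseline model instead of host, e.g.:

qm set 100 --cpu EPYC

VM ID 100 is a placeholder; the EPYC model only exposes features common to the generation, which makes live migration between mismatched hosts safer at some performance cost.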
I want to block virtual machines from being able to connect to the Proxmox interfaces on https://10.0.12.100:8006, for example. I've only tried the following, which blocked all access, not just the VMs.
[RULES]
IN ACCEPT -i vmbr1 -source 10.0.12.0/24 -log nolog
If possible I want it to...
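A narrower rule would drop only traffic to the GUI port instead of matching the whole subnet; a sketch for the datacenter firewall (/etc/pve/firewall/cluster.fw), with the IP and bridge taken from the post:

[RULES]
IN DROP -i vmbr1 -source 10.0.12.0/24 -dest 10.0.12.100 -p tcp -dport 8006 -log nolog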
I am seeing a bunch of errors when my system boots. I am not sure what they all mean, except for the /net/ lines below.
asrock X570D4U
NIC: Solarflare SF432-1012
OS: Proxmox installed on top of Debian 11
https://pastebin.com/Qe9HJgYv
Nov 19 11:29:52 ENT1 systemd-sysctl[820]: Couldn't write '1' to...
I have about 100 virtual machines that I back up daily, and I experience terrible IO performance during verifications, which take 10+ hours.
The disks used are 4x WUH721414AL5201 in RAID 10 (hardware RAID)...
I'm shopping for newer, upgraded hardware and am stuck on whether to buy single or dual socket with regard to performance.
When looking at CPU benchmarks, I have noticed that dual CPU systems always score lower than two separate single-CPU nodes.
For example if you look at...
I am attempting to convert from OpenVZ (ploop) to Proxmox LXC
Export:
vzctl stop 1145 && vzdump 1145 --bwlimit 9999999999999
Import:
pct restore 2136 /var/lib/vz/dump/vzdump-1145.tar
The following errors are shown:
recovering backed-up configuration from '/var/lib/vz/dump/vzdump-1145.tar'...