Thanks - and this shouldn't cause any problems? I can't really see why swap would need to be used... not when we've got a spare 100GB of RAM kicking about.
Chris.
Hi,
I'm running a hyperconverged cluster (Proxmox & Ceph) across three nodes. I've noticed the swap usage is creeping up. Each node has 192GB of RAM and 8GB of swap.
I've had swappiness reduced to 10 ever since the system was installed.
Looking at what is actually eating the swap, it's a few KVM...
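For reference, a rough sketch of how to see what's actually sitting in swap and confirm the swappiness setting (the per-process loop is generic, nothing Proxmox-specific):
# current swappiness and overall swap use
cat /proc/sys/vm/swappiness
free -h
# rough per-process swap usage, sorted descending (VmSwap comes from /proc/<pid>/status)
for p in /proc/[0-9]*/status; do
    awk '/^Name:/ {n=$2} /^VmSwap:/ {print n, $2, $3}' "$p"
done 2>/dev/null | sort -k2 -rn | head
# swappiness made persistent via sysctl (file name is just an example)
echo 'vm.swappiness = 10' > /etc/sysctl.d/80-swappiness.conf
sysctl --system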
Hi,
Our backup server's RAID array hit a bit of an issue this morning, and it ultimately caused PBS to crash.
Now I can't view the datastore and my journal is giving the following errors -
Jun 24 04:01:52 pbs proxmox-backup-proxy[191]: GET...
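For reference, a minimal set of checks once the array is back (service names are the PBS defaults; the datastore path is just an example):
journalctl -u proxmox-backup-proxy -b --no-pager | tail -n 50
systemctl status proxmox-backup-proxy proxmox-backup
proxmox-backup-manager datastore list
findmnt /mnt/datastore/backup    # example path - confirm the backing RAID mount is actually present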
Hi,
I've just upgraded to 8.2 and, after a little trouble with networking (interface names changed), I'm up and running and testing out SDN.
I'd like to create an isolated subnet, 192.168.100.0/24, that uses a specific upstream so that only machines within this subnet can talk to each other...
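A rough sketch of what that could look like via pvesh - the paths and parameter names here are from memory, so treat them as assumptions and double-check with 'pvesh usage' or the SDN panel:
pvesh create /cluster/sdn/zones --type simple --zone isolated
pvesh create /cluster/sdn/vnets --vnet vnet100 --zone isolated
pvesh create /cluster/sdn/vnets/vnet100/subnets --subnet 192.168.100.0/24 --type subnet --gateway 192.168.100.1
pvesh set /cluster/sdn    # apply the pending SDN configuration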
So for anyone searching for this, the solution effectively just overwrites the chosen CPU model to use the host CPU directly. This will introduce migration problems etc. if your machines aren't running the same CPUs with the same updates. You're best off writing your own CPU model...
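For anyone wanting to go the custom-model route, a minimal sketch of what that looks like (section layout from memory, so verify against the admin guide; the model name and flags are just placeholders):
# /etc/pve/virtual-guest/cpu-models.conf
cpu-model: my-baseline
    reported-model Skylake-Server
    flags +aes;+avx2
    phys-bits host
    hidden 0
# then point the VM at it (note the custom- prefix; VMID is an example)
qm set 101 --cpu custom-my-baseline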
Hi,
I was just wondering if there is any particular benefit in configuring a VM to use multiple sockets, e.g. 2 sockets, 5 cores, vs assigning it 1 socket, 10 cores?
I'm running some benchmarks trying to squeeze as much performance as I can and it seems that 1 socket, 10 cores is a little more...
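For reference, the two layouts being compared boil down to (VMID 101 is just an example):
qm set 101 --sockets 1 --cores 10
# vs
qm set 101 --sockets 2 --cores 5
# NUMA emulation can also be toggled for the comparison
qm set 101 --numa 1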
Sorry to dig this one up, but what exactly is it doing behind the scenes? Is it setting the CPU model to host rather than the chosen CPU model?
Thanks!
I always enable maintenance mode just in case, but I thought failover already happened automatically for HA resources that have additional nodes defined with lower priority.
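For reference, the maintenance-mode toggle is just (node name is an example, and this needs a reasonably recent ha-manager):
ha-manager crm-command node-maintenance enable pve01
# ...updates/reboot...
ha-manager crm-command node-maintenance disable pve01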
Oh really? I thought it'd just install/update packages that are provided by the Proxmox repos.
Morning. That file doesn't seem to exist, but I do have history files in /var/log/apt. The most recent history file is below...
So pve02 & pve03 are running 6.2.16-12-pve, while pve01 is running 6.2.16-10-pve.
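A rough sketch of bringing the odd one back in line, one node at a time:
uname -r                          # running kernel on each node
apt update && apt full-upgrade    # pull the newer pve kernel onto pve01
reboot                            # boot into 6.2.16-12-pve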
So after spending the day testing, we're still no further forward. We relocated eno2 & eno4 on nodes 2 & 3 to an entirely different, freshly configured switch with no success. Speeds still appear to be throttled.
We've also tried changing the bond-lacp-rate to fast, and tested with the layer2+3 & layer3+4 hash policies...
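For context, a minimal sketch of a bond stanza with those settings in /etc/network/interfaces (interface names taken from the posts above, the rest are typical values), plus checking what the bond actually negotiated:
auto bond0
iface bond0 inet manual
    bond-slaves eno2 eno4
    bond-mode 802.3ad
    bond-miimon 100
    bond-lacp-rate fast
    bond-xmit-hash-policy layer3+4
# what actually negotiated
cat /proc/net/bonding/bond0
ethtool eno2 | grep -i speed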
Just checked the driver versions, in case it's of any help...
modinfo i40e | grep ver
filename: /lib/modules/6.2.16-10-pve/kernel/drivers/net/ethernet/intel/i40e/i40e.ko
description: Intel(R) Ethernet Connection XL710 Network Driver
srcversion: F4CBEC026738F03F2EDD1D1
vermagic...
Such a shame, I really had high hopes for that working! After applying the configuration and rebooting all nodes:
chris@pve03:~$ cat /proc/cmdline
BOOT_IMAGE=/boot/vmlinuz-6.2.16-10-pve root=/dev/mapper/pve-root ro quiet intel_idle.max_cstate=0 intel_iommu=on iommu=pt
Yet I'm still really...
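For what it's worth, a quick way to check whether the C-state limit actually took effect (standard sysfs paths):
cat /sys/devices/system/cpu/cpuidle/current_driver         # which cpuidle driver is active
cat /sys/module/intel_idle/parameters/max_cstate           # should read 0 with intel_idle.max_cstate=0 on the cmdline
grep . /sys/devices/system/cpu/cpu0/cpuidle/state*/name    # idle states still exposed to cpu0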
After 'failed' migrations I get this when running updates / updating grub -
Generating grub configuration file ...
WARNING: VG name ubuntu-vg is used by VGs ydaT6c-EZ7p-ku2w-1etM-k29V-vwBx-J5BUQk and 2TvggJ-6Xok-Zdpw-Gfnc-Pl1n-pJcx-ul7DDg.
Fix duplicate VG names with vgrename uuid, a...
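In case it helps anyone else hitting this, the rename by UUID boils down to (UUID taken from the warning above; the new name is just an example - make sure you know which VG is which first):
vgs -o vg_name,vg_uuid     # confirm which UUID belongs to which ubuntu-vg
vgrename ydaT6c-EZ7p-ku2w-1etM-k29V-vwBx-J5BUQk ubuntu-vg-old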
We are running Intel, currently configured with only the max C-state limit set -
GRUB_DEFAULT=0
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_idle.max_cstate=0"
GRUB_CMDLINE_LINUX=""
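For completeness, changes to /etc/default/grub only take effect after regenerating the boot config and rebooting:
update-grub          # or: proxmox-boot-tool refresh on systemd-boot installs
reboot
cat /proc/cmdline    # confirm intel_idle.max_cstate=0 made it in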
Thank you - I will give your suggestions a go...
Thanks for your reply.
We don't have a /etc/kernel/cmdline file on the nodes. What should be in this file?
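For reference, /etc/kernel/cmdline is only used on installs that boot via proxmox-boot-tool with systemd-boot (typically ZFS root); on a GRUB/LVM install like this (root=/dev/mapper/pve-root) it's normal for it not to exist, and /etc/default/grub is the place to edit. Where it does exist it's a single line of kernel options, e.g. (root dataset as an example):
root=ZFS=rpool/ROOT/pve-1 boot=zfs quiet intel_idle.max_cstate=0
# applied with
proxmox-boot-tool refresh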
Ceph config is as below:
[global]
auth_client_required = cephx
auth_cluster_required = cephx
auth_service_required = cephx
cluster_network = 10.10.50.111/24...
Hi,
I've got a three node Proxmox cluster with a dedicated Ceph OSD network on a 10GBit link. It's been working great on our provider's switching equipment, but they've recently made some rack changes and we're now plugged into new switches.
During this change, I took the opportunity to update the Proxmox...
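A rough sketch of the sanity checks that fit here after the re-cabling (the peer IP on the 10.10.50.x OSD network is just an example):
iperf3 -s                      # on one node
iperf3 -c 10.10.50.112 -P 4    # from another node, across the OSD network
ceph -s                        # overall cluster health
ceph osd perf                  # per-OSD commit/apply latency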
Similar situation then. We've run a lot of migrations on Proxmox 6 & 7, and the upgrade to 8 is actually the first time I've ever seen migrations complete with warnings.
So I think there might be something introduced with the release of 8.
Chris.
Interesting, I have seen those same errors in some other migrations I was making.
So in your case, did this cause any real problems, or was it just throwing up some errors?
Are you running/have you recently upgraded to Proxmox 8?
Chris.