Hello!
We use HA for our VMs. According to the documentation, the "Start at boot" option is ignored once a VM is managed by HA. What happened now: when a machine was migrated to a new host due to problems, but is still present on another host (a kind of...
While testing the 'new' possibility of setting up tags for different categories, we've stumbled upon the fact that when we set up tags on the datacenter level and color-override them accordingly, those tags are then available for selection on LXC/VMs, but the predefined colors are not correctly...
I have just set up two new "Tiny Mini Micro" servers (HP EliteDesk 800 G4 / i5-8500) and installed Proxmox. Identical systems. One system is stuck at the bootloader screen.
Problem: the "menu timeout set to" value is constantly counting up without any limit.
When pressing any key the...
I've set up a fresh Proxmox 8 box at a Hetzner DC and have a new kind of problem with it that I've never had before. After setting up the box, I pinged external hosts from time to time to check connectivity and link latency - which was fine. But while pinging servers, waiting for the reply made...
We've recently (yesterday) updated our test cluster to the latest PVE version. While rebooting the system (the upgrade finished without any incidents), all OSDs on each system crashed:
** File Read Latency Histogram By Level [default] **
2023-01-30T10:21:52.827+0100 7f5f16fd1700 -1 received...
Since Proxmox 7.3 introduced LXC 5.0, I'm wondering when it will be possible to make use of LXC's feature of un-/tagging VLANs on veth devices:
veth.vlan.id
veth.vlan.tagged.id
Also, IMHO it might be useful to add the rx/tx queue settings to the advanced tab:
veth.n_rxqueues
veth.n_txqueues
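For illustration, in a raw LXC container config these keys would look roughly like the sketch below. This is an assumption about how the upstream LXC 5.0 options are spelled in a container config file (values and the network index are made up), not something Proxmox currently writes to its own configs:

```
# Hypothetical raw LXC config sketch -- illustrative values only
lxc.net.0.type = veth
lxc.net.0.link = vmbr0
# untagged (native) VLAN for the veth device
lxc.net.0.veth.vlan.id = 100
# additional tagged VLANs (key may be repeated)
lxc.net.0.veth.vlan.tagged.id = 200
lxc.net.0.veth.vlan.tagged.id = 300
# number of rx/tx queues for the veth pair
lxc.net.0.veth.n_rxqueues = 4
lxc.net.0.veth.n_txqueues = 4
```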
Good Morning everyone!
Background: We had been running without errors for weeks prior to yesterday's upgrade to 7.2-7. Since our upgrade from 7.1-12 to 7.2-7, including the upgrade of Ceph to 16.2.9, we are no longer able to snapshot our LXC containers while they are running. This is...
Hello,
tonight we've had quite the outage.
The cluster had been healthy and not overloaded.
NVMe/SSD disks are all fine, 2-4% wearout.
It all started with:
2022-06-22T01:35:34.335404+0200 mgr.PXMGMT-AAA-N01 (mgr.172269982) 2351345 : cluster [DBG] pgmap v2353839: 513 pgs: 1 active+clean+laggy...
Hey guys,
we've bought some new hardware and are seeing this kernel panic on several machines (they all have identical HW, but only some panic):
[ 20.699939] ------------[ cut here ]------------
[ 20.700608] kernel BUG at mm/slub.c:306!
[ 20.701277] invalid opcode: 0000 [#1] SMP NOPTI
[...
Hello,
over the weekend we experienced a massive crash of one of our Proxmox clusters. Out of nowhere, an entire cluster crashed, all at the same time. Here is an excerpt from the messages log:
Feb 24 07:25:59 PX20-WW-SN06 kernel: [1448261.497103] cfs_loop[12091]: segfault at 7fbb0bd266ac ip...
Hello!
We run a large Ceph/Proxmox cluster, and at the moment I can't find any starting point for solving our performance problems. iostat shows extremely high utilization on individual RBDs; when I run a Ceph benchmark in a pool, one particular RBD gets completely hammered...
Hello everyone,
I'd like to ask for a best-practice solution for our current setup. We're running a Proxmox cluster across 3 datacenters, on hypervisors with SD cards (32 GB each). Our problem is as follows:
After testing LXC containers for our production environment, we've figured out that even...
Hello everyone,
I'd like to ask for help with a problem I recently ran into. We're running 3 Proxmox clusters across 3 datacenters. Backup routines run at night for all 3 clusters. Backups are done via CIFS and also NFS. From time to time I run into a problem where the...
Dear Devs,
This time I need some help :) In KVM/libvirt it is common to be able to set the system/BIOS/product information. The problem we encounter is that this information can span multiple lines (one line for each part of the information). Proxmox only allows a single string. For licensing purposes we...
The manual states for creating an OSD with filestore:
If you want to use a dedicated SSD journal disk:
pveceph createosd /dev/sd[X] -journal_dev /dev/sd[Y]
But we're missing a certain flag here:
-bluestore 0
So it should be:
pveceph createosd /dev/sd[X] -bluestore 0 -journal_dev /dev/sd[Y...
In PVE 5.1, Bluestore is the default for Ceph. I'm wondering why we got an extra "Bluestore" flag in the GUI - shouldn't this be "Filestore" instead, to select the legacy option?
Just wondering, because this flag doesn't change anything at all.
Hey guys,
we're running Proxmox (4) on multiple Lenovo Flex x240 M5 nodes with dual E5 CPUs and 10GbE. From time to time, multiple nodes just go haywire. Maybe someone can point me toward a solution.
Complete Bootlog: https://pastebin.com/raw/Wz2Ky8v8
Is this maybe related to...
Hey guys,
Despite being an enterprise customer with dozens of machines, I've tried to replicate a setup we run from time to time:
Enterprise Hardware
Xeon 1650v2
4x 2 TB SATA
Softraid 10-F2
LUKS Full-Encryption
Using standalone setups, our VMs achieve the same performance as our hosts using...
Hey guys, here is a tricky one:
We've set up a Proxmox (4.4) cluster of 14 machines and wanted to join more servers into it. The cluster is running fine; there is no problem at all. The new servers were connected to the very same VLAN the currently running cluster operates in.
Difference: The...
Dear Members,
we're planning to integrate a Proxmox/Ceph instance into our production environment. After testing Proxmox for several weeks, we think this might be a good idea for some of our services to rely on.
We've accomplished the following so far:
Setup 2 Bladecenters in two...