Search results

  1. fstrankowski

    Proxmox HA & "Start at boot"

    Hi! We use HA for our VMs. According to the documentation, the "Start at boot" option is ignored once a VM is managed by HA. What has now happened: when a machine was moved to a new host due to problems, but is still present on another host (a kind of...
  2. fstrankowski

    [SOLVED] [TAGS] Datacenter wide tag color override not applied to vms/lxc

    While testing the 'new' possibility of setting up tags for different categories, we've stumbled upon the fact that when we set up tags at the datacenter level and color-override them accordingly, those tags are available for selection on LXC/VMs, but the predefined colors are not correctly...
  3. fstrankowski

    [SOLVED] Stuck in "menu timeout set to"

    I have just set up two new "Tiny Mini Micro" servers (HP EliteDesk 800 G4 / i5 8500) and installed Proxmox. Identical systems. One system is stuck at the bootloader screen. Problem: the "menu timeout set to" value is constantly counting up (increasing) without any limit. When pressing any key the...
  4. fstrankowski

    [SOLVED] Proxmox 8 | Sporadic ICMP / Scheduler Problems

    I've set up a fresh Proxmox 8 box at a Hetzner DC and have a kind of problem with it I've never had before. After setting up the box, I pinged outside hosts from time to time to check connectivity and link latency, which is fine. But while pinging servers, waiting for the reply made...
  5. fstrankowski

    Proxmox 7.3.3 / Ceph 17.2.5 - OSDs crashing while rebooting

    We recently (yesterday) updated our test cluster to the latest PVE version. While rebooting the system (the upgrade finished without any incidents), all OSDs on each system crashed: ** File Read Latency Histogram By Level [default] ** 2023-01-30T10:21:52.827+0100 7f5f16fd1700 -1 received...
  6. fstrankowski

    Proxmox 7.3 (LXC 5.0) using veth with multiple tagged and untagged VLANs

    Since Proxmox 7.3 introduced LXC 5.0, I'm wondering when it will be possible to make use of LXC's feature for tagging/untagging VLANs on veth devices (veth.vlan.id, veth.vlan.tagged.id). IMHO it would also be useful to expose the rx/tx queue settings (veth.n_rxqueues, veth.n_txqueues) on the Advanced tab. [A config sketch follows after this list.]
  7. fstrankowski

    Proxmox 7.1-12 > 7.2-7 Upgrade from Ceph 16.2.7 to Ceph 16.2.9 Snapshot Problems

    Good morning everyone! Background: we had been running without errors for weeks prior to yesterday's upgrade to 7.2-7. Since our upgrade from 7.1-12 to 7.2-7, including the upgrade of Ceph to 16.2.9, we are no longer able to snapshot our LXC containers while they are running. This is...
  8. fstrankowski

    CEPH Outage "active+clean+laggy" resulted in task kmmpd-rbd*:7998 blocked

    Hello, tonight we had quite the outage. The cluster had been healthy and not overloaded, and the NVMe/SSD disks are all fine (2-4% wearout). It all started with: 2022-06-22T01:35:34.335404+0200 mgr.PXMGMT-AAA-N01 (mgr.172269982) 2351345 : cluster [DBG] pgmap v2353839: 513 pgs: 1 active+clean+laggy...
  9. fstrankowski

    Kernel Panic with our new Cluster

    Hey guys, we've bought some new hardware and get this kernel panic on several machines (although they all have the same HW, only some panic): [ 20.699939] ------------[ cut here ]------------ [ 20.700608] kernel BUG at mm/slub.c:306! [ 20.701277] invalid opcode: 0000 [#1] SMP NOPTI [...
  10. fstrankowski

    Crash of an entire 12-node Proxmox cluster / segfault in cfs_loop

    Hello, over the weekend we experienced a massive crash of one of our Proxmox clusters. Out of nowhere an entire cluster went down, all nodes at the same time. Here is the excerpt from the messages log: Feb 24 07:25:59 PX20-WW-SN06 kernel: [1448261.497103] cfs_loop[12091]: segfault at 7fbb0bd266ac ip...
  11. fstrankowski

    [SOLVED] Performance Problems / Ceph

    Hi! We run a large Ceph / Proxmox cluster and at the moment I unfortunately cannot find an approach to solving our performance problems. iostat shows extremely high utilization on individual RBDs; when I run a Ceph benchmark in a pool, one particular RBD hits a wall... [A sketch of such a benchmark follows after this list.]
  12. fstrankowski

    [BEST PRACTICE] - Proxmox / Ceph / LXC (Question)

    Hello everyone, I'd like to ask for a best-practice solution for our current setup. We're running a Proxmox cluster across 3 datacenters, on hypervisors booting from SD cards (32 GB each). Our problem is as follows: after testing LXC containers for our production environment, we've figured out that even...
  13. fstrankowski

    LXC-Backups Hang (via NFS and CIFS)

    Hello everyone, I'd like to ask for help with a problem I recently ran into. We're running 3 Proxmox clusters across 3 datacenters. Backup jobs run at night for all 3 clusters, over both CIFS and NFS. From time to time I run into a problem where the...
  14. fstrankowski

    [BUG] qm.conf / qemu-server / BIOS-Settings

    Dear devs, this time I need some help :) In KVM/libvirt it is common to be able to set the system/BIOS/product information. The problem we encounter is that this information can span multiple lines (one line for each part of the information), while Proxmox only allows a single string. For licensing purposes we... [A single-line example follows after this list.]
  15. fstrankowski

    [BUG]? Misleading Manual

    The manual states, for creating an OSD with filestore: "If you want to use a dedicated SSD journal disk: pveceph createosd /dev/sd[X] -journal_dev /dev/sd[Y]". But a certain flag is missing here: -bluestore 0. So it should be: pveceph createosd /dev/sd[X] -bluestore 0 -journal_dev /dev/sd[Y... [Shown in full after this list.]
  16. fstrankowski

    [BUG]? Weird Config Flag in PVE-5.1

    In PVE 5.1, BlueStore is the default for Ceph. I'm wondering why we got an extra "Bluestore" flag in the GUI; shouldn't this be "Filestore" instead, to select the legacy option? Just wondering, because this flag doesn't change anything at all.
  17. fstrankowski

    Massive Problems with Proxmox on Lenovo Flex x240 m5

    Hey guys, we're running Proxmox (4) on multiple Lenovo Flex x240 m5 nodes with dual E5 CPUs and 10GbE. From time to time several nodes just go nuts. Maybe someone can point me toward a solution. Complete boot log: https://pastebin.com/raw/Wz2Ky8v8 Is this maybe related to...
  18. fstrankowski

    Test / LUKS / SoftRaid-10-F2 / Slow VM Performance

    Hey guys, despite being an enterprise customer with dozens of machines, I've tried to replicate a setup we run from time to time: enterprise hardware, Xeon 1650v2, 4x 2 TB SATA, software RAID 10 (far-2 layout), LUKS full encryption. Using standalone setups, our VMs achieve the same performance as our hosts using... [A sketch of this stack follows after this list.]
  19. fstrankowski

    [SOLVED] Proxmox 4 / Cluster over MPLS

    Hey guys, here is a tricky one: we've set up a Proxmox (4.4) cluster of 14 machines and wanted to join more servers to it. The cluster is running fine; there is no problem at all. The new servers were linked into the very same VLAN the currently running cluster operates in. The difference: the...
  20. fstrankowski

    Medium to Large-Scale Deployment Questions

    Dear members, we're planning to integrate a Proxmox/Ceph instance into our production environment. After testing Proxmox for several weeks, we think it might be a good idea for some of our services to rely on it. We've managed to achieve the following so far: set up 2 blade centers in two...
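
Regarding entry 6: a minimal sketch of what the requested settings would look like in a native LXC 5.0 network config. The bridge name, VLAN IDs, and queue counts are placeholder assumptions; the point of the thread is that Proxmox does not yet expose these keys.

        # Native LXC >= 5.0 syntax; assumes vmbr0 is a VLAN-aware bridge
        lxc.net.0.type = veth
        lxc.net.0.link = vmbr0
        lxc.net.0.veth.vlan.id = 10            # untagged (native) VLAN, placeholder ID
        lxc.net.0.veth.vlan.tagged.id = 20     # tagged VLAN; the key may be repeated
        lxc.net.0.veth.n_rxqueues = 4          # rx queue count, placeholder value
        lxc.net.0.veth.n_txqueues = 4          # tx queue count, placeholder value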
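
Regarding entry 11: the in-pool benchmark described is presumably something along the lines of rados bench; a sketch, with the pool name and duration as placeholders:

        # Write benchmark against one pool for 60 seconds, keeping objects for later read tests
        rados bench -p testpool 60 write --no-cleanup
        # Remove the benchmark objects afterwards
        rados -p testpool cleanup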
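
Regarding entry 14: Proxmox does let you set SMBIOS type-1 fields via qm, but only as single-line values; a sketch, where the VMID and field values are placeholders:

        # qemu-server accepts one flat string of key=value pairs
        qm set 100 --smbios1 manufacturer=ACME,product=Widget,serial=ABC123

The thread's complaint is that multi-line values, as libvirt's sysinfo block allows, cannot be expressed in this one-string format.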
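
Regarding entry 15: laid out in full, the corrected invocation proposed in the post (with /dev/sd[X] and /dev/sd[Y] as placeholders for the data and journal devices) is:

        pveceph createosd /dev/sd[X] -bluestore 0 -journal_dev /dev/sd[Y]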
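
Regarding entry 18: a hypothetical reconstruction of the described storage stack, assuming four SATA disks sda-sdd and default LUKS parameters:

        # RAID 10 with the far-2 layout across four disks
        mdadm --create /dev/md0 --level=10 --layout=f2 --raid-devices=4 /dev/sd[abcd]
        # Full encryption on top of the array
        cryptsetup luksFormat /dev/md0
        cryptsetup open /dev/md0 crypt_md0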
