Latest activity

  • P
    Had this on a host too, and the following helped in my case: 1. Download the Proxmox 9 ISO, write it to a USB stick, and boot from it. 2. Then select Rescue or Recovery in the USB stick's boot menu. (I can't remember the...
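    For context, a rough sketch of what the follow-up in the rescue shell might look like; the device and volume names below are assumptions for illustration, not from the original post:
        # Mount the installed root (default LVM layout assumed) and chroot in
        mount /dev/pve/root /mnt
        mount /dev/sda2 /mnt/boot/efi          # assumed EFI system partition
        for d in /dev /proc /sys; do mount --bind $d /mnt$d; done
        chroot /mnt
        # Refresh the bootloader from inside the install, then reboot
        proxmox-boot-tool refresh
        exit
        reboot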
  • Y
    I recorded the entire boot sequence and console updates a few minutes after. https://youtu.be/C9GJ2WJ-1x8
  • F
    So you have an RBD pool called vmpool and now you want to put CephFS into that pool? That doesn't work and doesn't make sense, to me at least. Would you like to elaborate on what you want to achieve?
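    For reference, CephFS on PVE normally gets its own data and metadata pools rather than reusing an RBD pool; a minimal sketch, assuming a standard hyperconverged setup:
        # Create a metadata server on this node, then the filesystem itself
        pveceph mds create
        pveceph fs create --pg_num 128 --add-storage
        # This creates separate cephfs_data and cephfs_metadata pools;
        # the existing vmpool stays a plain RBD pool for VM disks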
  • tcabernoch
    tcabernoch reacted to SteveITS's post in the thread CEPH cache disk with Like.
    @tcabernoch IIRC from when I looked into it, using a disk for read cache was deprecated or otherwise not viable with Ceph. I just don’t recall the details now. It does have memory caching. Our prior Virtuozzo Storage setup did have that ability...
  • N
    ^ That's very interesting. I've been downloading a Kali Everything torrent without issue. 12+ GB, but it's not stressing it as far as speed goes. I'll do more testing with iperf3 later. It may be of interest that our MAC statuses and...
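    For anyone following along, a typical iperf3 check between two machines looks like this; the hostname is a placeholder:
        # On the receiving side
        iperf3 -s
        # On the sending side: 30-second run, then the reverse direction
        iperf3 -c node-b -t 30
        iperf3 -c node-b -t 30 -R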
  • tcabernoch
    tcabernoch replied to the thread CEPH cache disk.
    Thanks, Steve. The reason I wanted to do a DB/WAL disk is that the capacity SSDs are SATA ... so they connect at 6 Gb/s. This is a Gen13 Dell. They should have bought SAS for 12 Gb/s. Terrible original build choices. And I thought the speed I was...
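    For the record, DB/WAL placement on a faster device is chosen at OSD creation time; a sketch with assumed device names (it can't be changed in place later without rebuilding the OSD):
        # Capacity SATA SSD as the OSD, DB/WAL on an assumed NVMe device
        pveceph osd create /dev/sdb --db_dev /dev/nvme0n1 --db_dev_size 60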
  • tcabernoch
    tcabernoch reacted to SteveITS's post in the thread CEPH cache disk with Like.
    If you already have SSDs then I wouldn’t try to separate DB/WAL. Ceph capacity depends on replication, default is 3/2 so typically 1/3 of total space less overhead.
  • readyspace
    Quick checks: was the bind mount active before file creation? And are your LXCs unprivileged so UID/GID mapping could be hiding the file?
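    Sketched as commands, with a placeholder container ID and path:
        # Is the bind mount actually configured and active for the container?
        pct config 101 | grep ^mp
        # On the host, check numeric ownership in the bind source; unprivileged
        # containers shift IDs by 100000, so host UID 100000 is root inside
        ls -ln /mnt/shared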
  • M
    Hello everyone, after upgrading to Proxmox Mail Gateway 9.0, clicking the "Whitelist" link in the quarantine view gives me the following error: internal error, unknown action 'whitelist' at /usr/share/perl5/PMG/API2/Quarantine.pm...
  • tcabernoch
    tcabernoch reacted to readyspace's post in the thread Cluster Issues with Like.
    Hi, stale node entries or mismatched SSH keys can definitely cause cluster sync chaos. In addition, make sure the new node’s ring0_addr matches the existing subnet in /etc/pve/corosync.conf, and that /etc/hosts across all nodes correctly maps...
  • readyspace
    readyspace replied to the thread Cluster Issues.
    Hi, stale node entries or mismatched SSH keys can definitely cause cluster sync chaos. In addition, make sure the new node’s ring0_addr matches the existing subnet in /etc/pve/corosync.conf, and that /etc/hosts across all nodes correctly maps...
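    For illustration, the nodelist entry for the new node in /etc/pve/corosync.conf should look roughly like this; names and addresses are placeholders:
        node {
            name: pve-new                 # must match the node's hostname
            nodeid: 4
            quorum_votes: 1
            ring0_addr: 192.168.10.14     # same subnet as the existing nodes
        }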
  • S
    SteveITS replied to the thread CEPH cache disk.
    If you already have SSDs then I wouldn’t try to separate DB/WAL. Ceph capacity depends on replication, default is 3/2 so typically 1/3 of total space less overhead.
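    A worked example of that arithmetic, with made-up numbers: three nodes with 4 TB of OSDs each give 12 TB raw, and at size=3 replication roughly 12 / 3 = 4 TB usable, less overhead and the ~85% nearfull headroom. The live numbers come from:
        # MAX AVAIL in the pool listing already accounts for replication
        ceph df
        # Confirm the pool's replication settings (3/2 = size/min_size)
        ceph osd pool get vmpool size
        ceph osd pool get vmpool min_size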
  • tcabernoch
    tcabernoch replied to the thread Cluster Issues.
    Clue there ... you wiped it clean. And then you probably rejoined it with the same name ... Did you delete /etc/pve/nodes/OLD-NODE-YOU-NUKED before rejoining the rebuilt machine? Did you comment out the old ssh key in...
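    A sketch of the cleanup being hinted at, run on a surviving cluster node; the node name and IP are placeholders, and this assumes the dead node is powered off:
        # Remove the dead member from the cluster, then its stale config
        pvecm delnode old-node
        rm -r /etc/pve/nodes/old-node
        # Drop the old host key so the rebuilt machine's new key is accepted
        ssh-keygen -R old-node
        ssh-keygen -R 192.168.10.13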
  • R
    ramonmedina replied to the thread Keep copies of emails..
    Resurrecting this thread - I'm migrating to PMG from the free (but now defunct) EFA Email Filter Appliance in my home lab, and I have deployed the SpamTitan appliance at work. My hope was to also migrate work to PMG pending how well it works for...
  • readyspace
    Hi, you will need to setup using Proxmox bridges and VLAN tagging. Please try this... Create one bridge (e.g. vmbr0) for WAN (CHR ether1). Create another VLAN-aware bridge (e.g. vmbr1) for LAN and VLANs (CHR ether2). Attach VLAN interfaces (10...
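    A minimal /etc/network/interfaces sketch of that layout; the physical NIC names and VLAN range are assumptions:
        # WAN bridge for CHR ether1
        auto vmbr0
        iface vmbr0 inet manual
            bridge-ports enp1s0
            bridge-stp off
            bridge-fd 0
        # VLAN-aware LAN bridge for CHR ether2
        auto vmbr1
        iface vmbr1 inet manual
            bridge-ports enp2s0
            bridge-stp off
            bridge-fd 0
            bridge-vlan-aware yes
            bridge-vids 2-4094
    The CHR then tags VLANs itself on ether2, or a VM NIC can be pinned to one VLAN with the tag= option on its net0 line.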
  • tcabernoch
    tcabernoch replied to the thread CEPH cache disk.
    So, after all of that questing, testing, and inquiry, I exposed it to some real-world load. I restored a whole bunch of client VMs to the cluster and ran them like hell. And then I completely filled the datastore till it stopped working, to see...
    • 1760322424113.png
    • 1760323412944.png
  • readyspace
    Hi, you’re likely hitting a routing and proxy ARP issue caused by multiple gateways and Hetzner’s MAC-bound IP setup. At Hetzner, each additional IP or subnet must be assigned to a unique virtual MAC and attached to the VM NIC — you can’t just...
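    Concretely, after requesting a virtual MAC for the additional IP in Hetzner Robot, pin it to the VM's NIC; the MAC, VM ID, and bridge are placeholders:
        # Attach the Hetzner-issued virtual MAC to the VM's first interface
        qm set 100 --net0 virtio=00:50:56:00:AB:CD,bridge=vmbr0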
  • readyspace
    Make sure your router → Proxmox host → LXC container network path is open. The usual blockers are service binding, firewalls, or missing NAT rules on the Proxmox host. Ask them to check that the game server is listening on the IP, that...
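    A few of those checks, sketched with placeholder addresses and a placeholder port:
        # Inside the container: is the server actually listening?
        ss -tulnp | grep 27015
        # On the host, if the container sits behind NAT, forward the port in
        iptables -t nat -A PREROUTING -i vmbr0 -p udp --dport 27015 \
            -j DNAT --to-destination 10.10.10.50:27015
        # And make sure no PVE firewall rule drops it
        pve-firewall status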
  • A
    alarsson replied to the thread pvestatd segfaults.
    FWIW, I am also now seeing the same issue on one of my MS-01s since upgrading from 8.4 to 9.0.10 with kernel 6.14. No other daemons are segfaulting. I suspected bad RAM, but memtest86 isn't finding anything.
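    For anyone triaging the same thing, the faulting library usually shows up in the kernel log; a couple of starting points, sketched:
        # Look for the segfault details (faulting address and library)
        journalctl -k | grep -i segfault
        # Restart the daemon and watch whether it falls over again
        systemctl restart pvestatd
        systemctl status pvestatd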