Recent content by PigLover

  1. PCI nuc

It's definitely not an enterprise-class device, but the NUC 12 PCIe card is no slouch either: i9-12900, 3x M.2 slots (PCIe Gen4), 10GbE + 2.5GbE LAN, 2x Thunderbolt 4. If I were building a business I'd probably use traditional servers. But you could build one heck of a cluster out of these...
  2. Ceph is not configured to be really HA

In order to remain HA, Ceph requires you to supply enough "spare" resources to absorb a failure. You need enough free disk space on each host to absorb the loss of your largest OSD on that host. Further, in a cluster with replica 3, you really should have at least 4 hosts in...
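As a rough sketch of that sizing rule (the OSD sizes below are made-up numbers, not from the post): to let the host re-replicate after losing its largest OSD, keep at least that OSD's capacity free on the host.

```shell
# Hypothetical OSD sizes in GB on one host - substitute your own.
osd_sizes="4000 4000 8000"
largest=0
total=0
for s in $osd_sizes; do
  total=$((total + s))
  if [ "$s" -gt "$largest" ]; then largest=$s; fi
done
# To absorb the loss of the largest OSD, keep at least that much free:
echo "keep >= ${largest} GB free of ${total} GB raw on this host"
```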
Building a silent/fanless server

If you're up for a BYO project - take a look at the Akasa Euler series of heatsink chassis. They have a good choice of compatible mini-ITX motherboards and you can build performance to match your price target...
Why it's so difficult to change network config without restarting the host (hypervisor)

    +1. The single most annoying thing about Proxmox.
  5. Disk spin down

There is a better resolution discussed here (post #6 in the thread): https://forum.proxmox.com/threads/hdd-never-spin-down.53522/#post-322522
  6. ODROID H2+ Realtek RTL8125B Network Interface Issues

You might want to install the drivers using the dkms method (from the .deb file) rather than using the direct installer (it's described on Odroid's wiki). If you install it directly with the driver compile script then you'll lose the driver and have to re-install any time you do an update that...
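A sketch of the dkms route (the .deb file name is a placeholder - get the real package from Odroid's wiki). The point of dkms is that it rebuilds the module automatically on every kernel update, which the direct compile script does not:

```shell
# Prerequisites for building kernel modules via dkms:
sudo apt install dkms build-essential "linux-headers-$(uname -r)"
# Install the Realtek r8125 dkms package (file name is hypothetical):
sudo apt install ./r8125-dkms.deb
# The module should now appear here, marked as installed for your kernel:
dkms status
```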
  7. Got some questions about my future server computer

If your primary issue is Plex transcoding then the dual 2670s might not be the best choice. It will be a world better than what you got on the Synology, but transcoding really works best on a GPU. Either (a) add a good Nvidia GPU to this project and pass it through to the Plex VM for transcoding...
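If you go the GPU route, attaching the card to the Plex VM on Proxmox looks roughly like this (VM id 100 and PCI address 01:00.0 are placeholders; IOMMU must already be enabled on the host):

```shell
# Find the GPU's PCI address on the Proxmox host:
lspci | grep -i nvidia
# Pass it through to the VM (id and address are placeholders):
qm set 100 --hostpci0 0000:01:00.0,pcie=1
```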
Gigabit NICs running at 100Mb/s

GigE ports running at 100Mbps is often a symptom of a cable with one or more bad pairs, or a bad connector on the NIC or switch. Have you tested/swapped them?
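A quick way to check what the link actually negotiated (the interface name eth0 is a placeholder):

```shell
# Show negotiated speed/duplex for the interface:
ethtool eth0 | grep -E 'Speed|Duplex'
# A gigabit NIC stuck at "Speed: 100Mb/s" on a known-good switch port
# usually points at the cable; force re-negotiation after swapping it:
sudo ethtool -s eth0 autoneg on
```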
  9. Why Debian?

But why do you care? You stated your belief that Proxmox is just "an expensive packaging of KVM and a couple of open-source tools". You've trashed on their choice to make it Debian based (despite your denials - yes, you came here to trash talk them). It's clear you have no desire to use...
  10. Why Debian?

    Nobody at Proxmox "forces others to use their distro". They provide a comprehensive package for a purpose. Lots of people seem to like it. It happens to be built on Debian. If you don't want to use it you are free not to do so. You said: - "I use as I please": good. An expression of...
  11. Proxmox VE CEPH cluster build

You should double check that assumption. Assuming you are buying new, current prices for 25GbE NICs/switches are only a small margin higher than 10GbE. Of course this changes if you have a significant installed base to leverage (existing switches or 10GbE NIC inventory). But check current pricing...
  12. [SOLVED] Previously used zfs drives are not available

    Odd. Last suggestion is to reboot the Proxmox host now that they are wiped and see if it clears anything that is still stuck. After that I'm at a loss.
  13. [SOLVED] Previously used zfs drives are not available

    Try doing a "vgscan" to re-scan the LVM cache. They originally had LVM data on them (lsblk in post #3) and it may still be registered in the LVM cache.
  14. [SOLVED] Previously used zfs drives are not available

    You shouldn't need to wipe the disks using dd - just clean out the MBR & GPT tables, all of the backup GPT copies and any LVM data. "sgdisk --zap-all <device>" should do it. After you've done that - or if you used the "dd" wipe - you have to get the system to re-trigger the device info for the...
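Putting the steps above together (/dev/sdX is a placeholder - triple-check the device name before running this, it is destructive):

```shell
# Wipe the MBR, the GPT and the backup GPT copy in one go:
sudo sgdisk --zap-all /dev/sdX
# Ask the kernel to re-read the (now empty) partition table:
sudo partprobe /dev/sdX
# Wait for udev to finish re-processing the device:
sudo udevadm settle
```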
  15. ceph mgr segmentation fault

Possibly this: https://tracker.ceph.com/issues/42026 I was seeing this same fault periodically on a non-Proxmox (K8s/rook) install of Ceph. It's fixed in Octopus (Ceph 15.x).
