Search results

  1.

    Proxmox HA and Ceph

    May want to post your question at /r/ceph since they can answer DB/VM questions.
  2.

    shell-based GUI Virtual Machine Conversion Tool

    May want to post it on GitHub/GitLab. And you can put that on your resume, LOL.
  3.

    3node brand new ceph cluster vs 5node mixed ceph cluster

    Ceph really wants identical hardware to spread the load equally. You can do mixed, but the slowest machine will be the bottleneck.
  4.

    What specs do I need for a new proxmox server?

    Seems like the issue is RAM. I have a bunch of 12th- and 13th-gen Dells using E5 CPUs. They have between 256GB and 512GB RAM. No issues. I do recommend a clean install. Back up first, though.
  5.

    Proxmox 8 Ceph Quincy monitor no longer working on AMD Opteron 2427

    Yeah, yeah, I know. EOL CPU. Ceph was working fine under Proxmox 7 on the same CPU. I tried both the pve7to8 upgrade and a clean install of Proxmox 8. Both got the 'Caught signal (illegal instruction)' error when attempting to start up a Ceph monitor. It's either pointing to a bad binary or...
  6.

    Proxmox VE 8.0 released!

    Did a clean install of Proxmox 8 using the no-subscription Quincy repository. Still got the 'Caught signal (illegal instruction)' message. It points to either a bad recompile or the Ceph monitor binaries no longer being supported on the AMD Opteron 2427 CPU. Ceph was working fine under Proxmox 7.
  7.

    Proxmox VE 8.0 released!

    My next step was to re-create the monitors manually by disabling the service and removing the /var/lib/ceph/mon/<hostname> directory. Then I ran 'pveceph mon create'. After a while it timed out. Running 'journalctl' on the failed monitor service shows the following: Jun 25 13:29:03 pve-test-7-to-8...
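    For reference, the sequence described above looks roughly like this (the node name and the exact monitor directory name are placeholders; adjust for your host):

      # stop and disable the broken monitor on this node
      systemctl stop ceph-mon@pve-test.service
      systemctl disable ceph-mon@pve-test.service

      # remove the old monitor data directory (the name under /var/lib/ceph/mon/ varies)
      rm -rf /var/lib/ceph/mon/ceph-pve-test

      # recreate the monitor, then check its log if it times out again
      pveceph mon create
      journalctl -u ceph-mon@pve-test.service -b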
  8.

    Proxmox VE 8.0 released!

    Ran 'pve7to8 --full' on a 3-node Ceph Quincy cluster; no issues were found. Both PVE and Ceph were upgraded, and 'pve7to8 --full' mentioned a reboot was required. After the reboot, I got a "Ceph got timeout (500)" error. "ceph -s" shows nothing. No monitors, no managers, no mds. Any suggestions...
  9.

    KVM to Proxmox convert initramfs failed boot centos7

    I used this guide https://unixcop.com/migrate-virtual-machine-from-vmware-esxi-to-proxmox-ve for migrating ESXi VMs to Proxmox. You still need to run dracut to include all the drivers before migration. I use the 'qm importdisk' command on the .vmdk file itself, which is the metadata file which...
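    Roughly, the two steps mentioned above look like this (the VM ID, storage name, and file paths are placeholders):

      # inside the CentOS 7 guest, before migration: rebuild the initramfs with all drivers
      dracut --force --no-hostonly /boot/initramfs-$(uname -r).img $(uname -r)

      # on the Proxmox host: import the .vmdk (the metadata file) into the VM's storage
      qm importdisk 101 /mnt/migration/centos7.vmdk local-lvm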
  10.

    4 identical nodes | I need some recommendations

    I have a 4-node R630 setup, but using SAS drives and no SSDs. The default Ceph replication of 3/2 (size 3, min_size 2) works fine. I can do maintenance on 1 node while the other 3 still have quorum. Technically I should have a QDevice for production, but since this setup is a test cluster, I'm OK with just 4 nodes.
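    For reference, the 3/2 replication mentioned above maps to the pool's size/min_size settings, which can be checked or adjusted like this (the pool name is a placeholder):

      ceph osd pool get vm-pool size        # number of replicas, default 3
      ceph osd pool get vm-pool min_size    # replicas required for I/O, default 2
      ceph osd pool set vm-pool min_size 2  # adjust if needed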
  11.

    Using a Proxmox role to restrict VMs to a specific network?

    Got some summer interns incoming and they will have access to a Proxmox cluster. Is there a Proxmox role to restrict VMs to a specific network? Obviously they will need access to the management network but VMs will be on a different network for Internet access. If no such option is...
  12.

    Re-importing existing OSDs after Proxmox reinstall additional steps?

    After a 3-node test Ceph cluster refused to boot following a lengthy power outage, I reinstalled Proxmox. After creating the Ceph monitors on each host, I ran 'ceph-volume lvm activate --all' on the existing OSDs, which completed without errors. However, still no OSDs. I'm guessing the new Ceph monitors...
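    The activate-and-check sequence described above looks roughly like this, run on each reinstalled node (this covers only the part already mentioned, not the missing extra steps being asked about):

      ceph-volume lvm list            # show the existing OSD volumes ceph-volume can see
      ceph-volume lvm activate --all  # start OSD daemons from those volumes
      ceph osd tree                   # check whether the OSDs register with the new monitors
      ceph -s                         # overall cluster health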
  13.

    [SOLVED] Migration from vCenter 7 to Proxmox VE

    The old-school way is to use the 'qemu-img convert' command. The new-school way is to use Clonezilla over the network. Plenty of blogs/videos on how to do it.
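    A minimal sketch of the old-school qemu-img route (file names, formats, and the VM ID are placeholders):

      # convert the VMware disk to qcow2 (or raw) ...
      qemu-img convert -p -f vmdk -O qcow2 vm-disk.vmdk vm-disk.qcow2

      # ... then import it into the target VM's storage on the Proxmox host
      qm importdisk 102 vm-disk.qcow2 local-lvm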
  14.

    Hardware Compatibility List > Component Level?

    Proxmox uses Debian as the OS. The hardware support info over at debian.org should tell you what is supported. Debian just runs the vanilla Linux kernel, IMO. Proxmox does offer the 5.x LTS kernel and the 6.x kernel for newer hardware.
  15.

    Dell Server Hardware Advice

    If buying used, I prefer 13th-gen Dells, specifically the R730xd, in which you can use the rear drives as RAID-1 OS boot drives. If looking for something newer, then 14th-gen Dells, like the R740xd. I would not get 15th-gen Dells until the Linux drivers for that hardware have matured.
  16.

    Ceph HDDs slow

    I use SAS HDDs in production. I use the following optimizations, learned through trial and error: set write cache enable (WCE) to 1 on the SAS drives (sdparm -s WCE=1 -S /dev/sd[x]); set the VM cache to none; set the VM to use the VirtIO SCSI single controller and enable the IO thread and discard options...
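    A rough sketch of those tweaks (the device name, VM ID, and volume name are placeholders):

      # enable the write cache on a SAS drive
      sdparm -s WCE=1 -S /dev/sdb

      # VirtIO SCSI single controller with IO thread, discard, and cache=none on the disk
      qm set 100 --scsihw virtio-scsi-single
      qm set 100 --scsi0 ceph-pool:vm-100-disk-0,iothread=1,discard=on,cache=none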
  17.

    vSphere migration to Proxmox

    When vSphere/ESXi 7 came out, VMware/Dell dropped official production support for 12th-gen Dells. I switched the entire fleet of 12th-gen Dells to Proxmox running ZFS and/or Ceph after flashing the disk controllers to IT-mode. No issues. I did use "qemu-img convert" initially but just learned...
  18.

    Btrfs vs ZFS on RAID1 root partition

    I've had ZFS RAID-1 fail before on OS boot drives. It shows the zpool as degraded, but it still boots. BTRFS RAID-1 is another story: if a drive fails, you are toast if using just 2 drives. Use it when you don't care about booting.
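    For the ZFS case, a degraded boot mirror can be checked and repaired roughly like this (rpool is the default Proxmox pool name; the disk IDs are placeholders):

      zpool status rpool    # shows the mirror as DEGRADED, but the pool stays online and bootable
      zpool replace rpool old-disk-id /dev/disk/by-id/new-disk-id
      # note: on a boot drive the replacement disk also needs partitioning and proxmox-boot-tool setup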
  19.

    micro server Cluster with Ceph

    Best bang for the buck is used enterprise servers. Take a look at used Dell 13th-gen servers. You can optionally upgrade the internal NICs to 10GbE (fiber, copper, or both). The PERC H330 can be configured for HBA/IT-mode, or you can get the PERC HBA330. I like the R730xd and can use the rear drives as...