Search results

  1. Ceph vs ZFS - Which is "best"?

    Yes, as Proxmox staff said in post #12, performance gains are hard to quantify, and database loads need a lot of performance. So if you only have a small cluster, don't use Ceph. Personally, I recommend distributing PostgreSQL itself and using local disks; ZFS RAIDZ1 is a good choice. You can do it like this...
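
    A minimal sketch of the suggested pool layout, assuming three spare disks (the pool name tank and the device names are placeholders):

        # create a single-vdev RAIDZ1 pool from three whole disks (placeholder names)
        zpool create tank raidz1 /dev/sda /dev/sdb /dev/sdc
        # confirm the pool layout and health
        zpool status tank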
  2. Create Partition For New Disk

    If you want to use ZFS on root, it's a big project. You can follow this guide: https://openzfs.github.io/openzfs-docs/Getting%20Started/Debian/Debian%20Bullseye%20Root%20on%20ZFS.html
  3. Unraid VM install HDD/SSD passthrough

    I use an "LSI MegaRAID SAS 2008"; in its web GUI the real HDD model shows as "TOSHIBA_MG06ACA8". Can you switch to another SCSI controller to test the Unraid VM?
  4. ZFS or ?????

    You should be in rescue mode. It's not a ZFS problem, because you have only one system disk, a 2.5" spinning-rust drive. You could add another drive of the same capacity (a different model can also work) and reinstall PVE in software RAID1 mode. But a single NVMe on ZFS makes data loss easy...
  5. Ceph vs ZFS - Which is "best"?

    If you hope to get a lot of performance, Ceph needs many nodes and many OSDs (HDD or SSD). But Ceph has some "magic technology" that can help you pull up performance, like cache tiering, which can insert a cache pool between the clients and the backend; similarly, the persistent writeback cache moves remote writes to the near end.
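
    A minimal sketch of attaching such a cache tier, assuming a fast pool named cache-pool layered over a backing pool named data-pool (both names are placeholders; note that cache tiering is deprecated in recent Ceph releases):

        # attach the cache pool on top of the backing pool (placeholder pool names)
        ceph osd tier add data-pool cache-pool
        # absorb writes in the cache pool and flush them to the backing pool later
        ceph osd tier cache-mode cache-pool writeback
        # route client I/O for data-pool through the cache pool
        ceph osd tier set-overlay data-pool cache-pool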
  6. Proxmox 7.1 and vGPU server

    I found this: https://www.reddit.com/r/Proxmox/comments/rxhx2e/deskpool_vdi_for_proxmox_with_vgpu/
  7. [SOLVED] No quorum error

    Oh, you only have 2 nodes and 1 of them has died. See https://pve.proxmox.com/pve-docs/pve-admin-guide.html#pvecm_separate_node_without_reinstal

        # stop the cluster stack
        systemctl stop pve-cluster corosync
        # restart the cluster filesystem in local mode
        pmxcfs -l
        # remove the corosync configuration
        rm /etc/corosync/*
        rm /etc/pve/corosync.conf
        # stop the local-mode pmxcfs and bring pve-cluster back up standalone
        killall pmxcfs
        systemctl start pve-cluster
  8. Proxmox Ceph HCI + Shared Storage via iscsi or NFS

    My English is poor. Do you mean the iSCSI or NFS storage in Proxmox, and you want to access it from outside? Yes, but you must use the CLI to configure it :D
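
    A minimal sketch of adding such storage from the CLI with pvesm, assuming a hypothetical NFS server at 192.168.1.10 exporting /export/pve:

        # register the NFS export as a storage named "nfs-store" (server and path are placeholders)
        pvesm add nfs nfs-store --server 192.168.1.10 --export /export/pve --content images,backup
        # verify the storage is active
        pvesm status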
  9. [SOLVED] pct create --rootfs parameter syntax

    If you use local to store the CT image, you need to enable this feature first. In /etc/pve/local/lxc/xxx.conf you can see the parameter, which looks like "rootfs: local:100/vm-100-disk-0.raw,size=8G". You can also use the search: https://forum.proxmox.com/tags/pct-create/
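
    A minimal sketch of the matching pct create call, assuming a hypothetical VMID 100 and a Debian template already downloaded to local (the template filename is a placeholder):

        # create CT 100 with an 8 GiB root filesystem allocated on the "local" storage
        pct create 100 local:vztmpl/debian-12-standard_12.7-1_amd64.tar.zst \
            --rootfs local:8 --hostname ct100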
  10. [SOLVED] No quorum error

    On the running node's CLI, input the command below; xxxx is the dead node's name, which you can find on the left of the running node's web GUI.

        pvecm delnode xxxx
  11. PVE and Ceph on Mellanox infiniband parts, what is the current state of support?

    Yep, IPoIB mode needs CPU single-core performance, and the E5-2682 v4 CPU already carries a lot of VM load. On a brand-new setup, the bandwidth test shows about 40 Gbps for IPoIB on 56 Gbps FDR InfiniBand.
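
    A quick way to reproduce that kind of bandwidth test, assuming iperf3 is installed on both nodes (10.10.10.2 stands in for the remote node's IPoIB address):

        # on the receiving node
        iperf3 -s
        # on the sending node: four parallel streams for 30 seconds
        iperf3 -c 10.10.10.2 -P 4 -t 30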
  12. Ceph OSD Performance is Slow ?

    In the SUSE Enterprise Storage 7 documentation there are two different views: https://documentation.suse.com/ses/7/html/ses-all/storage-bp-hwreq.html#storage-bp-net-private If you do not specify a cluster network during Ceph deployment, it assumes a single public network environment. While Ceph...
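
    For reference, a minimal sketch of the two settings in /etc/pve/ceph.conf, with placeholder subnets:

        [global]
            # client and monitor traffic (placeholder subnet)
            public_network = 10.0.0.0/24
            # OSD replication and heartbeat traffic (placeholder subnet)
            cluster_network = 10.0.1.0/24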
  13. Ceph OSD Performance is Slow ?

    In my opinion, this low performance is expected; Ceph needs quantity to improve performance, and your cluster has only 3 nodes, the default minimum requirement. By the way, I do not recommend any bonding of the Ceph traffic; you can use one link for the public network and another for the cluster network. I manage a cluster...
  14. Mellanox ConnectX-3 - no mlx4_en, only IB

    You should configure it manually in the CLI; here is some code:

        apt update && apt upgrade -y
        # opensm provides the InfiniBand subnet manager; pve-headers are needed for building modules
        apt install opensm htop nload pve-headers -y
        # load the IP-over-InfiniBand module now and on every boot
        modprobe ib_ipoib
        echo "ib_ipoib" >> /etc/modules-load.d/infiniband.conf
        systemctl enable opensm
        systemctl restart opensm
        ## add in...
  15. PVE and Ceph on Mellanox infiniband parts, what is the current state of support?

    Yes, I use a 56 Gbps InfiniBand card and switch, working in IPoIB mode. Use the CLI to configure the IP over InfiniBand, and use the GUI to configure Ceph. It works, but bandwidth is about 20 Gbps.
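
    A minimal sketch of such an IPoIB stanza in /etc/network/interfaces, assuming a hypothetical interface name ibs5 and a placeholder address:

        auto ibs5
        iface ibs5 inet static
            address 10.10.10.1/24
            # connected mode allows a large MTU, which helps IPoIB throughput
            pre-up echo connected > /sys/class/net/ibs5/mode
            mtu 65520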
