I have 3 nodes with different host types and disk counts on hand, and I am wondering whether I can build Ceph with them. Here is what I have:
node 1: 2TB 2.5" 7.2K SATA disk x 10
node 2: 6TB 2.5" 7.2K SATA disk x 4
node 3: 1TB 2.5" 7.2K SATA disk x 20
Assuming I don't care about performance and recovery...
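You can build a cluster from this, but one consequence is worth working out up front: with the default replicated size=3 and a host failure domain on exactly 3 hosts, every object gets one copy on each node, so usable capacity is bounded by the smallest node, not by the raw total. A back-of-envelope sketch using the disk sizes from the post:

```shell
# Raw vs. usable capacity for the three nodes (sizes from the post above).
awk 'BEGIN {
  n1 = 2*10;          # node 1: 10 x 2TB = 20 TB
  n2 = 6*4;           # node 2:  4 x 6TB = 24 TB
  n3 = 1*20;          # node 3: 20 x 1TB = 20 TB
  raw = n1 + n2 + n3;
  min = (n1 < n2 ? n1 : n2); min = (min < n3 ? min : n3);
  # size=3 across 3 hosts: one replica per host, so the smallest host caps usage
  printf "raw=%dTB usable@3x<=%dTB\n", raw, min;
}'
```

So roughly 64 TB raw but at most ~20 TB usable before the smallest node fills up; the mixed disk sizes also mean CRUSH will send 6x the writes to a 6TB OSD as to a 1TB one.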
I have heard that PVE does not include an NFS RGW in its Ceph implementation, and that it must be set up manually.
I would like to provide a Ganesha-powered NFS RGW with my Ceph cluster.
Is there a step-by-step guide we can follow?
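I haven't seen an official PVE guide either, but the general shape of a Ganesha RGW export is fairly standard: install nfs-ganesha with the RGW FSAL, point it at your ceph.conf, and export a pseudo path. A minimal sketch (the export ID, user, keys, and client name are all placeholder assumptions, not values from the post):

```
# /etc/ganesha/ganesha.conf (sketch; credentials and names are placeholders)
EXPORT {
    Export_ID = 1;
    Path = "/";
    Pseudo = "/rgw";
    Access_Type = RW;
    Protocols = 4;
    FSAL {
        Name = RGW;
        User_Id = "nfsuser";
        Access_Key_Id = "ACCESSKEY";
        Secret_Access_Key = "SECRETKEY";
    }
}
RGW {
    ceph_conf = "/etc/ceph/ceph.conf";
    name = "client.rgw.nfsgw";
}
```

The RGW user and its keys come from `radosgw-admin user create`; Ganesha then presents the user's buckets as directories under the pseudo path.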
I have this statement in [global] config:
osd journal size = 5120
But when I create a bluestore OSD with a journal, it always partitions the SSD with only 1GB:
How can I enlarge the default 1GB size when creating a bluestore OSD?
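As far as I know, `osd journal size` only applies to filestore; BlueStore ignores it, and the partition you are seeing is the DB device, whose size is controlled by a different option (given in bytes). A sketch, with 10 GiB as an assumed example size:

```
# ceph.conf sketch: BlueStore ignores "osd journal size";
# the DB partition is sized by this instead (value in bytes)
[osd]
bluestore_block_db_size = 10737418240
```

Set this before running the OSD creation command, since the partition is cut at creation time; existing OSDs keep their old DB size.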
Does anyone have experience with OVS+DPDK installation on PVE 5?
I just installed OVS from the PVE repository but cannot find a DPDK package for PVE. Is there an SOP document that can be followed step by step to finish the installation?
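I'm not aware of a PVE-specific DPDK package, but PVE 5 is based on Debian stretch, which does carry DPDK-enabled OVS packages in its own repository. A sketch of the usual Debian route (package names and the alternatives path are assumptions from Debian, not from PVE documentation):

```
# Install the DPDK-enabled vswitchd from the Debian repos (names assumed)
apt-get install dpdk openvswitch-switch-dpdk
# Switch the ovs-vswitchd alternative to the DPDK build
update-alternatives --set ovs-vswitchd \
    /usr/lib/openvswitch-switch-dpdk/ovs-vswitchd-dpdk
```

Note that DPDK also needs hugepages and compatible NIC drivers configured before OVS will bind ports to it, and mixing Debian's OVS with the PVE-packaged one may cause version conflicts, so test on a non-production node first.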
How can I identify a failed disk with Ceph?
Will the OSD daemon trigger the disk indicator light when it senses a disk failure in JBOD or IT mode?
Or should I use RAID-0 on every OSD disk to make sure the controller flashes the light for me?
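To my knowledge the OSD daemon itself never drives enclosure LEDs; it just marks the OSD down/out. The usual manual approach is to map the OSD back to its device and blink the slot yourself. A sketch, assuming osd.7 is the failed OSD, `/dev/sdk` is its (assumed) device, and the `ledmon` package is installed:

```
# Find which host and device back the failed OSD
ceph osd metadata 7 | grep -E 'hostname|devices'
# Blink the slot LED for that device (needs SES/SGPIO-capable enclosure)
ledctl locate=/dev/sdk
# Confirm the failure from SMART data
smartctl -a /dev/sdk
```

Per-disk RAID-0 is generally discouraged for Ceph (it hides SMART data and complicates replacement); JBOD/IT mode plus `ledctl` gives the same light without those downsides.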
Is anyone aware of this news?
https://www.servethehome.com/zfs-on-linux-0-7-7-disappearing-file-bug/
Is it safe to update PVE ZFS right now?
In case we must avoid it, how can I stay on 0.7.6 and update the other packages?
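If you do want to sit out this release, apt can hold just the ZFS packages while the rest of the system upgrades normally. A sketch (the package list is the usual stock-PVE set, but verify the names installed on your node first):

```
# Pin the ZFS packages at the currently installed 0.7.6 build
apt-mark hold zfsutils-linux zfs-initramfs zfs-zed
# Upgrade everything else as usual
apt-get update && apt-get dist-upgrade
# Verify which packages are held
apt-mark showhold
```

Remember to `apt-mark unhold` the same packages once a fixed ZFS release lands in the PVE repository.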
I have installed 3 new PVE servers with Intel X540-T2 interfaces, and the network connection worked well without SR-IOV via the physical interface.
I am trying to enable VFs as follows:
Modify the /etc/default/grub line as:
--------------------------
GRUB_CMDLINE_LINUX="root=ZFS=rpool/ROOT/pve-1 boot=zfs...
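Beyond the GRUB change, two more steps are usually needed before VFs appear: the IOMMU must be enabled on the kernel command line, and the VFs have to be created explicitly. A sketch (the interface name `enp3s0f0` is an assumption; use the name of your X540 port):

```
# Append to the existing GRUB_CMDLINE_LINUX value (do not replace it):
#   intel_iommu=on
update-grub
reboot
# After reboot, create 2 VFs on the port via sysfs
echo 2 > /sys/class/net/enp3s0f0/device/sriov_numvfs
# The VFs should now show as "vf 0", "vf 1" entries here
ip link show enp3s0f0
```

The sysfs setting does not persist across reboots on its own; people typically re-apply it from /etc/network/interfaces or a systemd unit.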
I ran an apt-get update/upgrade this Monday; here is the result:
pveversion -verbose
proxmox-ve: 4.2-48 (running kernel: 4.4.6-1-pve)
pve-manager: 4.2-2 (running version: 4.2-2/725d76f0)
pve-kernel-4.4.6-1-pve: 4.4.6-48
lvm2: 2.02.116-pve2
corosync-pve: 2.3.5-2
libqb0: 1.0-1...
Is there any success story about PVE 4 with Infiniband FDR (40Gbps) NICs/switches?
I am interested in building a PVE Ceph cluster with IB, but cannot find any reference design. I would also like to use the Mellanox ConnectX-3 VPI functionality with PVE but cannot find related documentation.
Is there a lack of...
Is it possible to set up an HA NFS server within an existing Proxmox cluster?
My intended design would be:
3-node Proxmox cluster
two nodes running ZFS with DRBD for both cluster storage and KVM virtualization
3rd node running KVM as a virtualization node and for quorum only (no local storage)
Maybe joint...
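For the two storage nodes, the usual way to get a replicated block device on top of ZFS is a DRBD resource backed by zvols. A minimal DRBD 8 resource sketch (node names, zvol path, and addresses are all assumptions for illustration):

```
# /etc/drbd.d/r0.res (sketch; hostnames, disks and IPs are placeholders)
resource r0 {
    protocol C;                       # synchronous replication
    on pve1 {
        device    /dev/drbd0;
        disk      /dev/zvol/rpool/drbd0;
        address   10.0.0.1:7788;
        meta-disk internal;
    }
    on pve2 {
        device    /dev/drbd0;
        disk      /dev/zvol/rpool/drbd0;
        address   10.0.0.2:7788;
        meta-disk internal;
    }
}
```

The NFS export and its service IP would then fail over between the two storage nodes, while the third node only contributes corosync quorum.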