I have 3 nodes on hand, each with a different host type and disk count, and I am wondering whether I can build a Ceph cluster with them. Here is what I have:
node 1: 2TB 2.5" 7.2K SATA disk x 10
node 2: 6TB 2.5" 7.2K SATA disk x 4
node 3: 1TB 2.5" 7.2K SATA disk x 20
In case I don't care about performance and recovery...
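Yes, mixed nodes can form a cluster, but with the default 3-way replication and a per-host failure domain the balanced usable space is capped by the smallest node. A quick back-of-the-envelope sketch (assuming size=3, one replica per host):

```shell
# Raw capacity per node from the disk lists above (TB):
node1=$((2 * 10))   # 10 x 2TB = 20 TB raw
node2=$((6 * 4))    #  4 x 6TB = 24 TB raw
node3=$((1 * 20))   # 20 x 1TB = 20 TB raw

# With one replica per host, the smallest node is the bottleneck.
min=$node1
[ "$node2" -lt "$min" ] && min=$node2
[ "$node3" -lt "$min" ] && min=$node3
echo "balanced usable capacity ~ ${min} TB raw per replica"
```

So roughly 20 TB of raw space per replica is addressable before the smallest node fills up; the extra 4 TB on node 2 sits idle under this CRUSH rule.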
I have heard that PVE does not include an NFS RGW in its Ceph implementation, and that it must be set up manually.
I would like to provide a Ganesha-powered NFS RGW with my Ceph cluster.
Is there a step-by-step guide that we can follow?
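I don't know of an official PVE guide, but the usual shape of a Ganesha export backed by the RGW FSAL looks roughly like this (a sketch only; the RGW instance name, user, and keys are placeholders you must replace with your own):

```
# /etc/ganesha/ganesha.conf (sketch; all IDs and keys are placeholders)
RGW {
    ceph_conf = "/etc/ceph/ceph.conf";
    name = "client.rgw.gateway";
    cluster = "ceph";
}

EXPORT {
    Export_ID = 1;
    Path = "/";
    Pseudo = "/rgw";
    Access_Type = RW;
    Squash = No_Root_Squash;
    FSAL {
        Name = RGW;
        User_Id = "nfs-user";
        Access_Key_Id = "ACCESS_KEY_PLACEHOLDER";
        Secret_Access_Key = "SECRET_KEY_PLACEHOLDER";
    }
}
```

The S3 user and its keys would come from `radosgw-admin user create` on the cluster; Ganesha then talks to RGW through librgw rather than over HTTP.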
I have this statement in my [global] config:
osd journal size = 5120
But when I create a BlueStore OSD with a journal, it always partitions the SSD with only 1GB:
How can I enlarge the default 1GB size when I create a BlueStore OSD?
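As far as I understand, `osd journal size` only applies to FileStore; BlueStore sizes its DB/WAL partitions from separate options, so something like the following would be needed instead (sizes are in bytes, and the 5 GiB / 1 GiB values here are examples, not recommendations):

```
[global]
# BlueStore ignores "osd journal size"; set DB/WAL sizes explicitly.
# Example values: 5 GiB DB, 1 GiB WAL.
bluestore_block_db_size = 5368709120
bluestore_block_wal_size = 1073741824
```

These take effect at OSD creation time, so existing OSDs would need to be recreated to pick up the new sizes.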
Does anyone have experience with an OVS+DPDK installation on PVE 5?
I just installed OVS from the PVE repository but cannot find a DPDK package for PVE. Is there an SOP document that can be followed step by step to finish the installation?
I cannot boot with 4.15.15; the kernel gets stuck reading my LVM root volume:
(Dell R730xd + H730P mini)
BTW, I cannot find 4.15.17 in the no-subscription repo.
How can I skip 4.15.15 and go straight to 4.15.17?
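One approach I can think of (an untested sketch; the package name pattern is an assumption, so check it with `apt-cache search pve-kernel-4.15.15` first) is to blacklist the broken kernel with an apt pin, so the next upgrade pulls whatever newer kernel lands in the repo while never reinstalling 4.15.15:

```
# /etc/apt/preferences.d/skip-kernel-4.15.15 (hypothetical file name)
Package: pve-kernel-4.15.15-*
Pin: version *
Pin-Priority: -1
```

Once 4.15.17 actually appears in the no-subscription repo, it can also be installed explicitly; verify the exact package name with `apt-cache search pve-kernel-4.15.17` before doing so.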
I understand that I can always flash the light manually.
But will a Ceph OSD flash the light automatically in HBA/Non-RAID mode in case of a disk failure?
I would like an automatic light-up feature that tells the operator which disk to replace just by reviewing the enclosure panel every day, and don't...
How can I identify a failed disk with Ceph?
Will the OSD daemon trigger the disk indicator light when it senses a disk failure in JBOD or IT mode?
Or should I use RAID-0 on every OSD disk to make sure the controller flashes the light for me?
Is anyone aware of this news?
https://www.servethehome.com/zfs-on-linux-0-7-7-disappearing-file-bug/
Is it safe to update PVE zfs right now?
In case we must avoid it, how can I stick with 0.7.6 and update the other packages?
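Until a fixed build lands, one way to do that (a sketch; verify the actual package list on your node with `dpkg -l | grep -E 'zfs|zpool'`, since the names below are assumptions) is to pin the ZFS packages at 0.7.6 so a dist-upgrade skips 0.7.7 but still updates everything else:

```
# /etc/apt/preferences.d/pin-zfs-0.7.6 (hypothetical file name)
Package: zfsutils-linux zfs-initramfs zfs-zed libzfs2linux libzpool2linux
Pin: version 0.7.6*
Pin-Priority: 1001
```

A priority above 1000 keeps the pinned version even when a newer one is available; remove the file once the fixed release is in the repo.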
Thanks for the reply, wolfgang:
It is no problem to manage it manually, but I don't know the correct parameter syntax. I have tried appending the following to the VM conf:
virtio0: VMDisk:vm-103-disk-1,cache=writeback,size=32G,l2-cache-size=4194304
but the PVE manager refused to start this VM.
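PVE validates the per-disk option list, so an unknown key like `l2-cache-size` on the `virtio0:` line is rejected. A workaround I would try (untested; `drive-virtio0` is how PVE names the drive internally, which you can confirm with `qm showcmd 103`) is passing the QEMU property through the raw `args:` line instead:

```
# /etc/pve/qemu-server/103.conf (sketch; per-disk line kept valid,
# the qcow2 cache option goes through QEMU's -set mechanism)
virtio0: VMDisk:vm-103-disk-1,cache=writeback,size=32G
args: -set drive.drive-virtio0.l2-cache-size=4194304
```

Note that `l2-cache-size` only applies to qcow2-backed drives, so it would have no effect on a raw LVM or ZFS volume.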
Hmmm... I just found out I am the poor guy who installed an EOL Gluster 3.5, and only discovered that after pushing the cluster into production...
Now I face a huge risk upgrading Gluster to 3.8+ on my cluster, with no SOP to follow...
Where is the upgrade docs for pve 4.4?
I have tried to change the number your way but got only 16 VF ports and 2 VF ports:
I have reviewed this doc:
https://github.com/pavel-odintsov/ixgbe-linux-netmap/tree/master/ixgbe-3.23.2.1
It says:
So I believe max_vfs=16,16 corresponds to each PF port.
Is there anything I have...
In addition, here is the result of lspci for Ethernet on the two servers:
and my /etc/udev/rules.d looks like this:
I don't know why it does not honor the rules and instead renames the PF to something strange.
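One thing worth checking: on PVE 5 (a systemd-based system), rename rules dropped into /etc/udev/rules.d can lose the race against the kernel's predictable-name logic, especially once VFs start appearing. A systemd `.link` file is the native alternative; a sketch, where the file name, interface name, and MAC address are all placeholders for your own values:

```
# /etc/systemd/network/10-pf0.link (hypothetical; match on the PF's MAC)
[Match]
MACAddress=aa:bb:cc:dd:ee:01

[Link]
Name=pf0
```

Since early renaming happens from the initramfs, running `update-initramfs -u` and rebooting is likely needed before the new name sticks.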
I have installed 3 new PVE servers with Intel X540-T2 interfaces, and the network connection worked well without SR-IOV via the physical interface.
I tried to enable VFs with the following:
Modify the /etc/default/grub line as:
--------------------------
GRUB_CMDLINE_LINUX="root=ZFS=rpool/ROOT/pve-1 boot=zfs...
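For reference, a complete version of that line as I would write it; the `ixgbe.max_vfs` parameter is an assumption based on the ixgbe doc linked above, and `intel_iommu=on` is needed for SR-IOV to work at all:

```
# /etc/default/grub (sketch; keep your existing root/boot options first)
GRUB_CMDLINE_LINUX="root=ZFS=rpool/ROOT/pve-1 boot=zfs intel_iommu=on ixgbe.max_vfs=16,16"
```

After editing, `update-grub` and a reboot are required; `lspci | grep -c "Virtual Function"` then shows how many VFs actually came up.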
I ran an apt-get update/upgrade this Monday; here is the result:
pveversion -verbose
proxmox-ve: 4.2-48 (running kernel: 4.4.6-1-pve)
pve-manager: 4.2-2 (running version: 4.2-2/725d76f0)
pve-kernel-4.4.6-1-pve: 4.4.6-48
lvm2: 2.02.116-pve2
corosync-pve: 2.3.5-2
libqb0: 1.0-1...
Are there any success stories about PVE 4 with an Infiniband FDR (40Gbps) NIC/switch?
I am interested in building a PVE Ceph cluster with IB, but cannot find any reference design. I would also like to use the Mellanox ConnectX-3 VPI functionality with PVE, but cannot find any related documentation.
Is there a lack of...
Is it possible to set up an HA NFS server within an existing Proxmox cluster?
My design idea would be:
3 nodes in a Proxmox cluster
two nodes run ZFS with DRBD for both cluster storage and KVM virtualization
the 3rd node runs KVM as a virtualization node and provides quorum only (no local storage)
Maybe joint...