Just to mention my own experience with P420i,
I had been running Ceph with OSDs on Raid0 for a couple of years with no problem.
When recently building a new cluster I decided to experiment with HBA mode.
I can confirm that with PVE 8 the hpsa driver is used by Linux automatically if the...
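For anyone wanting to check which driver has actually bound to the controller, something like this should show it (the exact PCI description string will vary by model):

lsmod | grep hpsa
lspci -k | grep -iA3 'smart array'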
I have a few of these DL380p, currently running a Ceph cluster with OSDs on RAID0 and very happy with it. I am about to build a new cluster and was planning to use ZFS replication rather than Ceph. I was encouraged to read your first post and less so your second post.
One question in your...
I have been experimenting with ceph RBD mirror, and had some initial difficulty understanding the network requirements and how to meet them in my specific environment. Since first posting I have made some progress so I am updating this post to reflect what I have learned in the hope that it...
If you are using the P420i as a RAID controller, then you must create logical drives for any disks to be detected by Linux.
So your RAID 1 is fine, but if you want additional individual disks you must create a RAID0 for each.
Maybe this is already understood? It wasn't clear to me.
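Something like the following with ssacli should do it (slot 0 and drive 1I:1:3 are only examples here, check your own IDs with the show commands first):

ssacli ctrl all show config
ssacli ctrl slot=0 pd all show
ssacli ctrl slot=0 create type=ld drives=1I:1:3 raid=0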
"As others told you, putting zfs in a guest is not a "supported" solution"
Nowhere on this thread has anyone said any such thing.
Other than yours, the only other responses are from two members who both see it as a viable solution, one of whom is actually doing the same thing.
Anyway...
With respect, this is one of the primary use cases for virtualization, i.e. being able to leverage diverse best-in-class applications and preserve legacy investments, while benefitting from a consistent approach to backup and high availability.
So it really depends what you mean by...
I have had two apps that use ZFS (both because they are running on BSD):-
pfsense and truenas.
I have since moved pfsense to dedicated redundant bare metal because it's hard to do remote diagnosis on a sick cluster when you are logged in via a firewall running on said cluster - lol
I am...
Just to update this after running for six months with a multi-virtual-disk RAIDZ.
It's NOT a good idea.
Online backups leave the RAIDZ in an inconsistent state, which means a backup restore always involves subsequently having to fix a corrupted ZFS pool.
Backup with the VM shutdown is fine...
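For anyone hitting the same thing, the clean-up inside the guest after a restore would be something along these lines ('tank' is just a placeholder pool name):

zpool status -v tank
zpool scrub tank
zpool status tank    # check the scrub result once it completes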
Installed this today, initially on top of a clean Debian 12 install (since in the past with PVE 6/7 I never succeeded with the PVE installer ISO).
This time, however, I found Debian 12 was generating an arbitrary HDD enumeration, a real PITA when you come to set up Ceph OSDs, so I decided to give the...
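As an aside, one way around the unstable device names is to pick disks by their persistent IDs when creating the OSDs, something like this (the by-id name below is purely illustrative):

ls -l /dev/disk/by-id/ | grep -v part
pveceph osd create "$(readlink -f /dev/disk/by-id/ata-EXAMPLEMODEL_SERIAL123)"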
Me too: clean install of PVE 8 today, and seeing the same error.
I am on the no-subscription repo and not too keen on building a kernel, and reading this thread it seems noexec=off is not a viable interim workaround?
FYI I have a hyper-converged setup of 3 identical nodes (DL380p Gen8), all with GPU...
Hi,
I am running a cluster with Ceph Bluestore and some guest VMs that use the ZFS file system.
To date I have thought it prudent to set up a virtual RAIDZ in these VMs, i.e. provide a minimum of three virtual disks to the guest.
The primary reason for using ZFS is features such as compression and snapshots and...
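Inside the guest that looks something like this (assuming the three virtual disks show up as /dev/vdb, /dev/vdc and /dev/vdd):

zpool create -o ashift=12 tank raidz /dev/vdb /dev/vdc /dev/vdd
zfs set compression=lz4 tank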
So it looks like every time the links toggle, it is after a pmxcfs 'received log' message:
May 5 10:28:15 mfscn03 pmxcfs[2044]: [status] notice: received log
May 5 10:29:06 mfscn03 corosync[2067]: [KNET ] link: host: 1 link: 0 is down
May 5 10:29:06 mfscn03 corosync[2067]: [KNET ] host: host: 1...
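A quick way to line the two message types up is something like this (assuming the standard pve-cluster and corosync units):

journalctl -b -u pve-cluster -u corosync | grep -E 'received log|link:.*is (down|up)'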
Just to follow up on this: after adding one of the other networks into the corosync config as suggested, things are considerably improved.
Now I only see very occasional messages indicating that an alternative corosync network path has been selected.
e.g.
May 4 16:15:47 mfscn03 corosync[2067]: [KNET...
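For anyone else doing this, it boils down to adding a second ringX_addr for each node in /etc/pve/corosync.conf and bumping config_version; roughly like this (the addresses are made up, and only one node entry is shown):

nodelist {
  node {
    name: mfscn03
    nodeid: 3
    quorum_votes: 1
    ring0_addr: 10.0.0.3
    ring1_addr: 10.10.0.3
  }
  # the other node entries get a matching ring1_addr
}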
Thanks for the response.
PBS is a physical server hanging off the 10G Nexus switch; each node has a 10G connection to the switch, as does the server hosting PBS and NFS, but the problem appeared with PBS before I added NFS.
The corosync network was previously also used for occasional remote management, so...
Hi,
I have a classic three node hyper-converged cluster.
Each node is identical HP DL380p with 96GB memory.
12 disk bays as follows:-
1 x SATA 320GB HDD (PVE host OS)
1 x SATA 120GB SSD (Ceph WAL)
10 x 1TB HDD (OSDs)
Network and interfaces are as follows:
1 Gbe corosync (single port on...
Thanks, yes it was quite straightforward in the end. I modified the global section to redefine the cluster network. I destroyed and recreated the monitors one at a time (maybe this wasn't necessary?) but it was only after restarting the OSDs that I saw traffic starting to flow on the cluster...
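For anyone following along, the change in /etc/pve/ceph.conf amounted to something like this (the subnets are placeholders for my actual networks), followed by restarting the OSDs:

[global]
    cluster_network = 10.10.10.0/24
    public_network = 10.10.20.0/24

systemctl restart ceph-osd.target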
Hi,
I have a 3 node hyper-converged setup which was created with a combined cluster and public network.
I would like to add a separate cluster network, but am unsure if and how this can be done on an existing installation.
Any tips would be appreciated.
OK solved.
For the benefit of anyone else: I had upgraded all nodes, but the upgrade for this particular node must have failed.
The following dirs were missing
/lib/modules/5.3.18-3-pve/kernel/kernel
/lib/modules/5.3.18-3-pve/kernel/lib
/lib/modules/5.3.18-3-pve/kernel/mm...
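In case it helps anyone else, reinstalling the matching kernel package should put those directories back, roughly like this (the exact package name for your kernel may differ, check with apt list --installed):

apt update
apt install --reinstall pve-kernel-5.3.18-3-pve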
I have a four node cluster, 3 Ceph nodes, plus a node hosting NFS for backups.
One of the three Ceph nodes has broken such that it can't access the NFS share.
The other two are fine.
All the nodes are identical hardware and the software is a PVE install on top of a Debian Buster minimal net...
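For reference, the basic checks from the broken node would look something like this (the NFS server address and export path are placeholders):

showmount -e 192.168.1.50
rpcinfo -p 192.168.1.50
mkdir -p /mnt/test && mount -t nfs 192.168.1.50:/backups /mnt/test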