I can't find it right now, but I think you're talking about somebody who compared the performance of kernel 4.10 from Proxmox 5.0 or 5.1 (I don't remember which) with kernel 4.15 on later Proxmox versions.
In fact, as far as I remember, he tried to use the 4.10 kernel on Proxmox 5.4 (or 5.3...
Thanks a lot, Bengt, that seems to fit my needs.
I've installed a Proxmox test cluster on some VMs using nested virtualization and I'll do some tests.
Kind regards,
Manuel
Hello,
After realising that it is not possible to create two Ceph clusters on a single Proxmox cluster, I'm looking for a way to have just one Ceph cluster but two Ceph pools, with one condition: each Ceph pool has to use exclusively the OSDs from selected Ceph nodes located in different containers.
I've seen...
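A rough sketch of the kind of CRUSH layout I have in mind, assuming two groups of nodes (the root, host, rule and pool names are made-up examples, not our real ones):

Code:
# Two separate CRUSH roots, one per group of nodes
ceph osd crush add-bucket group-a root
ceph osd crush add-bucket group-b root

# Move each host bucket under the root it belongs to
ceph osd crush move ceph-node1 root=group-a
ceph osd crush move ceph-node2 root=group-a
ceph osd crush move ceph-node3 root=group-b
ceph osd crush move ceph-node4 root=group-b

# One replicated rule per root, failure domain = host
ceph osd crush rule create-replicated rule-group-a group-a host
ceph osd crush rule create-replicated rule-group-b group-b host

# Create the two pools and bind each one to its rule
ceph osd pool create pool-a 128 128 replicated rule-group-a
ceph osd pool create pool-b 128 128 replicated rule-group-b

This way each pool should only place data on the OSDs under its own root.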
Hi again,
Now that I've added my new nodes to our Proxmox cluster, and after installing the Ceph packages, I've realised that, as I already have an initialised Ceph network and a ceph.conf file on the pmxcfs storage, my new Ceph nodes became part of the Ceph cluster. So the configuration I was...
It's good to know. At the moment we're using 10K rpm SAS disks as Ceph OSDs, 4 disks per node, and we haven't reached the 10GbE limit.
Perhaps in the future, if we switch to SSD disks, we will consider using 40 or 100GbE.
By the way, can you tell me a bit more about the 40GbE that you have...
I know you are not encouraging me to do anything! I'm glad to receive your opinion on this subject. Thanks!! ;)
In fact, the recommendation I'm talking about was not to use a 2x1Gbit NIC bond but to switch to 10GbE.
What I've read in the past is that a bond adds a layer of complexity that doesn't add much...
Thanks Bengt,
Reducing recovery time when an OSD has failed is a good point. Thanks, I was not aware of that.
I've had two little problems with failing OSDs and it is nice to know how to reduce recovery time and risk.
We are also using some FreeNAS servers for NFS and iSCSI storage. This was...
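As a note for myself, a small sketch of the commands I have in mind to influence recovery time and risk (the values are only examples, not tested recommendations for our cluster):

Code:
# Before planned maintenance, avoid automatic rebalancing while an OSD is down
ceph osd set noout
# ... do the maintenance, then re-enable it
ceph osd unset noout

# Temporarily raise recovery/backfill throttles so a failed OSD heals faster
# (example values only; higher values take bandwidth away from client I/O)
ceph tell 'osd.*' injectargs '--osd-max-backfills 2 --osd-recovery-max-active 4'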
Hello,
I've been using a Ceph storage cluster on a Proxmox cluster for a year and a half and we are very satisfied with the performance and the behaviour of Ceph.
That cluster runs on a 4-node Dell C6220 server with dual 10GbE NICs, which has been a very good server for us.
Now we've ordered a...
Hi,
As I pointed out before, using NFS as shared storage works fine. You can use it as shared storage, quickly move virtual machines between the nodes (live migration) and take snapshots.
You can also directly connect an iSCSI LUN (managed from Proxmox) to some of your virtual machines as a...
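For reference, a minimal /etc/pve/storage.cfg sketch of what I mean, with one NFS share for disk images and ISOs and one iSCSI target whose LUNs can be attached to VMs (the server address, export path and target IQN are placeholders, not our real values):

Code:
nfs: freenas-nfs
        server 192.168.1.10
        export /mnt/tank/vmstore
        path /mnt/pve/freenas-nfs
        content images,iso

iscsi: freenas-iscsi
        portal 192.168.1.10
        target iqn.2005-10.org.freenas.ctl:proxmox
        content images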
I mean from the Proxmox web interface. I'm mounting the iSCSI LUN from the Proxmox storage configuration and then I see the storage under every node.
As I'm not using it directly, because I've defined an LVM logical volume to use on my nodes, it would be good to hide the SCSI devices used by LVM.
The use of clvm...
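A sketch of the layout I have in mind for /etc/pve/storage.cfg, assuming the iSCSI storage is set to no content so its LUNs are not offered directly, with the LVM storage built on top of it (the IQN, volume group name and LUN identifier are placeholders):

Code:
iscsi: freenas-iscsi
        portal 192.168.1.10
        target iqn.2005-10.org.freenas.ctl:proxmox
        content none

lvm: freenas-lvm
        vgname vg_iscsi
        base freenas-iscsi:0.0.0.scsi-<lun-wwid>
        shared 1
        content images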
Hi again,
Regarding LVM over iSCSI, I'm also doing some tests and I have some questions on this subject:
1) If I use LVM storage over iSCSI, then every node in the cluster sees both the iSCSI storage and the LVM storage. Is it possible to hide the iSCSI storage from the nodes when...
Hi,
Until now, we have been using NFS as shared storage on a FreeNAS server. It's as reliable as your NFS server; in our case it works great.
It is easy to manage and you can take snapshots, so I think it is a good option to consider.
Now, as an improvement, we are going to add new servers to the...
Hi again,
I've been thinking for a while, reading and doing some tests.
I understand that it is better to have an independent NIC for every kind of traffic, and we will probably end up adding more NICs to our servers, but I would like to share one idea and get your opinion.
I've read that it is...
Thanks Alwin,
To prevent other traffic from interfering with corosync while keeping just a 2x1Gb bond in most of my nodes, would it help to define a dedicated VLAN for corosync? I would also define other VLANs for Ceph and FreeNAS (and of course for client traffic).
Without changing the 2x1Gb NIC, what...
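To make the idea concrete, a sketch of /etc/network/interfaces with the bond carrying tagged VLANs for corosync and for storage traffic (the interface names, VLAN IDs and addresses are invented for the example):

Code:
auto bond0
iface bond0 inet manual
        bond-slaves eno1 eno2
        bond-miimon 100
        bond-mode active-backup

# Corosync on its own tagged VLAN
auto bond0.10
iface bond0.10 inet static
        address 10.10.10.11
        netmask 255.255.255.0

# Ceph / FreeNAS storage traffic on another VLAN
auto bond0.20
iface bond0.20 inet static
        address 10.10.20.11
        netmask 255.255.255.0

# Client traffic on the untagged bridge
auto vmbr0
iface vmbr0 inet static
        address 192.168.1.11
        netmask 255.255.255.0
        gateway 192.168.1.1
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0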
Hi,
We've got a Proxmox cluster using FreeNAS for shared storage.
Most of the nodes have 2x1Gb NICs, and one has 2x1Gb + 2x10Gb NICs. We use a primary NAS that shares iSCSI resources (directly attached to some VMs from Proxmox) and NFS as KVM storage (disk images), and a secondary one for...