Search results

  1. PVE 5.4 - Nodes suddenly reboot - no entries in logs

    Are you using a dedicated corosync network? You say that the problem usually happens when doing backups. Regards, Manu
  2. Performance PVE 5.3.1 vs. 5.4.1

    I can't find it right now, but I think you're talking about somebody who compared the performance of kernel 4.10 from Proxmox 5.0 or 5.1 (I don't remember which) with kernel 4.15 on later Proxmox versions. In fact, as far as I remember, he tried to use the 4.10 kernel on Proxmox 5.4 (or 5.3...
  3. Ceph cluster suggestion

    Works fine! Thanks again.
  4. Ceph cluster suggestion

    Thanks a lot Bengt, that seems to fit my needs. I've installed a Proxmox test cluster on some VMs using nested virtualization and I'll do some tests. Kind regards, Manuel
  5. Ceph cluster suggestion

    Hello, After realising that it is not possible to create two Ceph clusters on a single Proxmox cluster, I'm looking for a way to have just one Ceph cluster but two Ceph pools, with one condition: each Ceph pool has to use exclusively OSDs from selected Ceph nodes allocated in different containers (see the CRUSH sketch after this list). I've seen...
  6. Ceph cluster suggestion

    Hi again, Now that I've added my new nodes to our Proxmox cluster, and after installing the Ceph packages, I've realised that, as I already have an initialized Ceph network and a ceph.conf file on the pmxcfs storage, my new Ceph nodes became part of the Ceph cluster. So the configuration I was...
  7. Ceph cluster suggestion

    It's good to know. At the moment, we're using 10K rpm SAS disks as Ceph OSDs, 4 disks per node, and we've not reached the 10GbE limits. Perhaps in the future, if we switch to SSD disks, we will consider using 40 or 100GbE. By the way, can you tell me a bit more about the 40GbE that you have...
  8. Ceph cluster suggestion

    I know you are not encouraging me to do anything! I'm glad to receive your opinion on this subject. Thanks!! ;) In fact, the recommendation I'm talking about was not to use 2x1Gbit NICs but to switch to 10GbE. What I've read in the past is that a bond adds a complexity layer that doesn't add much...
  9. Ceph cluster suggestion

    Thanks Bengt, Reducing recovery time when an OSD has failed is a good point. Thanks, I was not aware of that. I've had two little problems with failing OSDs and it is nice to know how to reduce recovery time and risk. We are also using some FreeNAS servers as NFS storage and iSCSI. This was...
  10. Ceph cluster suggestion

    Hello, I've been using a Ceph storage cluster on a Proxmox cluster for a year and a half and we are very satisfied with the performance and the behaviour of Ceph. That cluster is on a 4-node Dell C6220 server with dual 10GbE NICs, which has been a very good server for us. Now we've ordered a...
  11. Proxmox cluster with SAN storage

    Hi, As I pointed out before, using NFS as shared storage works fine. You can use it as shared storage, quickly move the virtual machines between the nodes (live migration), and do snapshots. You can also directly connect iSCSI LUNs (managed from Proxmox) to some of your virtual machines as a...
  12. Proxmox cluster with SAN storage

    I mean from the Proxmox web interface. I'm mounting the iSCSI LUN from Proxmox/Storage and then I see the storage under every node. As I'm not using it directly, because I've defined a logical LVM volume to use on my nodes, it would be good to hide the SCSI devices used by LVM. The use of clvm...
  13. Proxmox cluster with SAN storage

    Hi again, Regarding LVM over iSCSI, I'm also doing some tests and I have some questions on this subject (see the storage.cfg sketch after this list): 1) If I use an LVM storage over iSCSI, then every node in the cluster sees both the iSCSI storage and the LVM storage. Is it possible to hide the iSCSI storage from the nodes when...
  14. Proxmox cluster with SAN storage

    Hi, Until now, we have been using NFS as shared storage on a FreeNAS server. It's as reliable as your NFS server; in our case it works great. It is easy to manage and you can do snapshots, so I think it is a good option to consider. Now, as an improvement, we are going to add new servers to the...
  15. Mixed Proxmox + Ceph networking model

    Ok, we'll buy some more NICs and network switches. Thanks a lot, Manuel
  16. Mixed Proxmox + Ceph networking model

    Hi again, I've been thinking for a while, reading and doing some tests. I understand that it is better to have an independent NIC for every kind of traffic, and we will probably end up adding more NICs to our servers, but I would like to share one idea and get your opinion. I've read that it is...
  17. Mixed Proxmox + Ceph networking model

    Thanks Alwin, To avoid other traffic interfering with corosync, while keeping just a 2x1Gb bond in most of my nodes, would it help to define a dedicated VLAN for corosync? I would also define other VLANs for Ceph and FreeNAS (and of course for client traffic); see the network sketch after this list. Without changing the 2x1Gb NICs, what...
  18. Mixed Proxmox + Ceph networking model

    Hi, We've got a Proxmox cluster using FreeNAS for shared storage. Most of the nodes have 2x1Gb NICs, and one has 2x1Gb + 2x10Gb NICs. We use a primary NAS that shares iSCSI resources (directly attached to some VMs from Proxmox) and NFS as a KVM storage (disk images), and a secondary one for...
  19. Replication for Disaster Recovery solution (feature request)

    Great! Very glad to know. Best regards, Manuel Martínez.
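
Configuration sketches

Regarding result 5 (one Ceph cluster, two pools, each restricted to the OSDs of selected nodes): this kind of split is normally expressed with CRUSH rules rather than with separate clusters. The following is only a minimal sketch; the bucket names (groupA, groupB), node names, pool names and pg_num values are assumptions for illustration, not taken from the thread.

    # Create two CRUSH root buckets and move the hosts of each group under them
    ceph osd crush add-bucket groupA root
    ceph osd crush add-bucket groupB root
    ceph osd crush move nodeA1 root=groupA
    ceph osd crush move nodeA2 root=groupA
    ceph osd crush move nodeB1 root=groupB
    ceph osd crush move nodeB2 root=groupB

    # One replicated rule per root, using the host as failure domain
    ceph osd crush rule create-replicated rule-groupA groupA host
    ceph osd crush rule create-replicated rule-groupB groupB host

    # Each pool then places its data only on OSDs below its rule's root
    ceph osd pool create poolA 128 128 replicated rule-groupA
    ceph osd pool create poolB 128 128 replicated rule-groupB

Moving hosts out of the default root also takes them away from the default replicated_rule, so any existing pools should be checked before reorganising the CRUSH map.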
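
Regarding results 11-13 (LVM over iSCSI and hiding the raw iSCSI LUN from direct use): in /etc/pve/storage.cfg the iSCSI storage can be defined with "content none", with a shared LVM storage on top of it holding the guest disks. This is only a sketch; the storage IDs, portal address, target IQN and volume group name are placeholders, and the volume group is assumed to have been created on the iSCSI device beforehand.

    iscsi: san-lun
            portal 192.168.10.20
            target iqn.2019-04.org.example:storage.lun1
            content none

    lvm: san-lvm
            vgname vg_san
            shared 1
            content images,rootdir

With "content none" the iSCSI entry still appears in the storage tree, but it is not offered as a target for disk images; only the shared LVM storage is used for VM disks.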
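
Regarding result 17 (dedicated VLANs for corosync, Ceph and FreeNAS traffic over a 2x1Gb bond): a rough /etc/network/interfaces sketch for a PVE 5.x node is shown below. Interface names, VLAN IDs and addresses are invented for illustration, and VLAN support (8021q / the vlan package) is assumed; note that VLANs only separate traffic logically, they do not add bandwidth to the bond.

    auto bond0
    iface bond0 inet manual
            slaves eno1 eno2
            bond_miimon 100
            bond_mode active-backup

    # Dedicated VLAN for corosync cluster traffic
    auto bond0.50
    iface bond0.50 inet static
            address 10.10.50.11
            netmask 255.255.255.0

    # VLAN for Ceph / FreeNAS storage traffic
    auto bond0.60
    iface bond0.60 inet static
            address 10.10.60.11
            netmask 255.255.255.0

    # Untagged traffic stays on the bridge for management and guests
    auto vmbr0
    iface vmbr0 inet static
            address 192.168.1.11
            netmask 255.255.255.0
            gateway 192.168.1.1
            bridge_ports bond0
            bridge_stp off
            bridge_fd 0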
