Search results

  1. Docker LXC Unprivileged container on Proxmox 7 with ZFS

    I've been using Docker on LXC with Proxmox (6.4 and earlier versions) for 3 years. It works fine. The purpose of using Debian/Ubuntu LXC containers is that I need to give many machines to a group of about 20 students with just one physical server, and it would not be able to run 20 (or even more)...
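The setup described above hinges on enabling nesting for an unprivileged container so Docker can run inside it. A minimal sketch of what such a container's config might look like (the VMID 101, hostname, and resource sizes are illustrative, not from the thread):

```
# /etc/pve/lxc/101.conf -- illustrative VMID and sizes
# An unprivileged Debian container with nesting (and keyctl) enabled,
# which is what lets Docker run inside the container
arch: amd64
cores: 2
memory: 2048
hostname: student01
ostype: debian
rootfs: local-zfs:subvol-101-disk-0,size=8G
features: keyctl=1,nesting=1
unprivileged: 1
```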
  2. Proxmox 6.4 with auth_allow_insecure_global_id_reclaim=true causes backup problems

    I can confirm that I upgraded Ceph on all Ceph nodes. Some of the affected VMs were stopped during the backup, so the only possibility left is that there is some PVE service accessing Ceph on the failing nodes that I have not restarted. I'll try again and check it. Thanks
  3. Proxmox 6.4 with auth_allow_insecure_global_id_reclaim=true causes backup problems

    Hello, We updated our cluster from 6.2 to 6.4 a few months ago. After that, we had the warning message "mons are allowing insecure global_id reclaim". We found information about this issue in the forum...
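For context, the warning discussed in these two posts relates to the Ceph global_id reclaim issue (CVE-2021-20288, if I recall correctly). A hedged sketch of the usual remediation, assuming every Ceph daemon and client has already been upgraded; these are illustrative commands, not steps taken from the thread:

```
# Inspect which clients (if any) still reconnect insecurely
ceph health detail

# Only after every daemon and client is upgraded, turn the insecure
# reclaim path off; the "mons are allowing insecure global_id reclaim"
# HEALTH_WARN should then clear
ceph config set mon auth_allow_insecure_global_id_reclaim false
```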
  4. PVE 5.4 - Nodes suddenly reboot - no entries in logs

    Hi, Maybe you can just check the HA status to see if the nodes keep quorum during the backup. If I'm not wrong, when a node loses sync it simply reboots (if you use HA). You can also try to reduce the network load during the backups using the bwlimit option in /etc/vzdump.conf. Hope...
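The bwlimit knob mentioned above is a one-line change. A sketch of /etc/vzdump.conf (the 50000 KiB/s value is an arbitrary example, not a recommendation from the thread):

```
# /etc/vzdump.conf -- node-wide defaults for vzdump backups
# bwlimit is in KiB/s; ~50 MB/s here, purely illustrative
bwlimit: 50000
```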
  5. PVE 5.4 - Nodes suddenly reboot - no entries in logs

    Are you using a dedicated corosync network? You say that the problem usually happens when doing backups. Regards, Manu
  6. Performance PVE 5.3.1 vs. 5.4.1

    I can't find it right now, but I think you're talking about somebody who compared the performance of kernel 4.10 from Proxmox 5.0 or 5.1 (I don't remember which) with kernel 4.15 on later Proxmox versions. In fact, as far as I remember, he tried to use the 4.10 kernel on Proxmox 5.4 (or 5.3...
  7. Ceph cluster suggestion

    Works fine! Thanks again.
  8. Ceph cluster suggestion

    Thanks a lot, Bengt, that seems to fit my needs. I've installed a Proxmox test cluster on some VMs using nested virtualization and I'll do some tests. Kind regards, Manuel
  9. Ceph cluster suggestion

    Hello, After realising that it is not possible to create two Ceph clusters within one Proxmox cluster, I'm looking for a way to have just one Ceph cluster but two Ceph pools, with one condition: each Ceph pool has to use exclusively OSDs from selected Ceph nodes allocated in different containers. I've seen...
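What this post asks for, pinning each pool to the OSDs of a distinct set of nodes, is typically done with CRUSH rules rather than separate clusters. A hedged sketch, where groupA/groupB and all node and pool names are placeholders:

```
# Put each group of hosts under its own CRUSH root
ceph osd crush add-bucket groupA root
ceph osd crush add-bucket groupB root
ceph osd crush move nodeA1 root=groupA
ceph osd crush move nodeB1 root=groupB

# One replicated rule per root, with host as the failure domain
ceph osd crush rule create-replicated rule-a groupA host
ceph osd crush rule create-replicated rule-b groupB host

# Bind each pool to its rule so it only places data on its own nodes
ceph osd pool set poolA crush_rule rule-a
ceph osd pool set poolB crush_rule rule-b
```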
  10. Ceph cluster suggestion

    Hi again, Now that I've added my new nodes to our Proxmox cluster, and after installing the Ceph packages, I've realised that, as I already have an initialized Ceph network and a ceph.conf file on the pmxcfs storage, my new Ceph nodes become part of the Ceph cluster. So the configuration I was...
  11. Ceph cluster suggestion

    It's good to know. At the moment, we're using 10K rpm SAS disks as Ceph OSDs, 4 disks per node, and we've not reached the 10GbE limits. Perhaps in the future, if we switch to SSD disks, we will consider using 40 or 100GbE. By the way, can you tell me a bit more about the 40GbE that you have...
  12. Ceph cluster suggestion

    I know you are not encouraging me to do anything! I'm glad to receive your opinion on this subject. Thanks!! ;) In fact, the recommendation I'm talking about was not to use 2x1Gbit NICs but to switch to 10GbE. What I've read in the past is that a bond adds a complexity layer that doesn't add much...
  13. Ceph cluster suggestion

    Thanks Bengt, Reducing recovery time when an OSD has failed is a good point. Thanks, I was not aware of that. I've had two little problems with failing OSDs and it is nice to know how to reduce recovery time and risk. We are also using some FreeNAS servers as NFS storage and iSCSI. This was...
  14. Ceph cluster suggestion

    Hello, I've been using a Ceph storage cluster on a Proxmox cluster for a year and a half and we are very satisfied with the performance and the behaviour of Ceph. That cluster is on a 4-node Dell C6220 server with dual 10GbE NICs, which has been a very good server for us. Now we've ordered a...
  15. Proxmox cluster with SAN storage

    Hi, As I pointed out before, using NFS as shared storage works fine. You can use it as shared storage to quickly move virtual machines between the nodes (live migration) and take snapshots. You can also connect an iSCSI LUN (managed from Proxmox) directly to some of your virtual machines as a...
  16. Proxmox cluster with SAN storage

    I mean from the Proxmox web interface. I'm mounting the iSCSI LUN from Proxmox/storage and then I see the storage under every node. As I'm not using it directly, because I've defined a logical LVM volume to use on my nodes, it would be good to hide the SCSI devices used by LVM. The use of CLVM...
  17. Proxmox cluster with SAN storage

    Hi again, Regarding LVM over iSCSI, I'm also doing some tests and I have some questions on this subject: 1) If I use an LVM storage over iSCSI, then every node in the cluster sees both the iSCSI storage and the LVM storage. Is it possible to hide the iSCSI storage from the nodes when...
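Question 1) above, exposing only the LVM layer while hiding the raw iSCSI storage, is usually handled in /etc/pve/storage.cfg by giving the iSCSI entry no content types. A sketch, where the storage names, portal, and target are placeholders:

```
# /etc/pve/storage.cfg -- names, portal, and target are illustrative
# "content none" keeps the raw LUNs out of the disk-selection menus;
# only the LVM storage built on top of it is offered for VM disks
iscsi: san-base
        portal 192.168.10.20
        target iqn.2003-01.org.example:storage
        content none

lvm: san-lvm
        vgname vg_san
        shared 1
        content images
```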
  18. Proxmox cluster with SAN storage

    Hi, Until now, we have been using NFS as shared storage on a FreeNAS server. It's as reliable as your NFS server; in our case it works great. It is easy to manage and you can take snapshots, so I think it is a good option to consider. Now, as an improvement, we are going to add new servers to the...
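The NFS setup described here maps to a short storage.cfg entry. A sketch, with the server address and export path as placeholders:

```
# /etc/pve/storage.cfg -- server and export are illustrative
nfs: freenas-nfs
        server 192.168.10.5
        export /mnt/tank/proxmox
        content images,backup
```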
  19. Mixed Proxmox + Ceph networking model

    Ok, we'll buy some more NICs and network switches. Thanks a lot, Manuel