Recent content by MMartinez

  1. Ceph multi-public-network setup: CephFS on separate network

    Yes, thanks. That's what I'm going to do. Anyway, I'm considering the possibility of separating the Ceph public and private networks first, and then adding the VM interface to the public network. This way the setup will be more secure. Regards.
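
    A minimal sketch of that split in /etc/pve/ceph.conf, assuming example subnets 10.0.1.0/24 (public) and 10.0.2.0/24 (private):

        [global]
            # client traffic: MONs, MDS, RBD/CephFS clients
            public_network = 10.0.1.0/24
            # OSD replication/heartbeat traffic, kept isolated
            cluster_network = 10.0.2.0/24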
  2. Ceph multi-public-network setup: CephFS on separate network

    Thanks. It seems the problem is described in this "tip". I was trying to keep the Ceph private network isolated, so it is not routed. It looks like both public networks need to be routed and visible to each other. In that case, I will choose the other approach: add a second interface into the...
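
    For reference, ceph.conf accepts a comma-separated list of public subnets; a sketch with two hypothetical subnets (both must be routed and reachable from each other):

        [global]
            public_network = 10.0.1.0/24, 10.0.3.0/24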
  3. Ceph multi-public-network setup: CephFS on separate network

    Let me explain it a bit more. What I want is not exactly to separate Ceph RBD from CephFS; I want to give VMs access as Ceph clients from a different network. On the Proxmox nodes I have a dual-port 10GbE NIC: one port dedicated to Ceph (public and private) and the other shared by corosync...
  4. Ceph multi-public-network setup: CephFS on separate network

    Hi, I'm trying to set up a secondary public network on a Ceph cluster and I'm running into problems. I've read two related posts, but they cover slightly different cases, which is why I'm starting a new thread. https://forum.proxmox.com/threads/ceph-changing-public-network.119116/...
  5. Network configuration issue after kernel 6.8 upgrade

    I'll do that then. Thanks for your help! Kind regards, Manuel Martínez
  6. Network configuration issue after kernel 6.8 upgrade

    Thanks. That could be the reason, but the network interface naming scheme is the cause of the problem. Is there any way to avoid it? I've seen that I can pin the interface name (https://pve.proxmox.com/pve-docs/pve-admin-guide.html#network_override_device_names), but does it make sense? Regards, Manuel
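
    For reference, the linked docs pin a name with a systemd .link file; a minimal sketch, assuming a hypothetical MAC address and the target name net0:

        # /etc/systemd/network/10-pve-net0.link
        [Match]
        MACAddress=aa:bb:cc:dd:ee:ff
        Type=ether

        [Link]
        Name=net0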
  7. Network configuration issue after kernel 6.8 upgrade

    It's probably not needed. It's been working without this line with kernel 6.5 and earlier.
  8. Network configuration issue after kernel 6.8 upgrade

    Hi, I've run into a problem after upgrading the kernel to 6.8 on Proxmox 8.2 on a Dell C6420 node. After renaming the interfaces as described in Kernel 6.8 - "Known Issues & Breaking Changes" (https://pve.proxmox.com/wiki/Roadmap#Proxmox_VE_8.2), I've been able to use almost every network defined...
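
    As a sketch of the renaming step, assuming a hypothetical interface that changed from eno1 to eno1np0 after the upgrade:

        # compare current kernel names against the network config
        ip -br link
        grep -n 'eno1' /etc/network/interfaces
        # update the stanzas to the new name, then reload
        sed -i 's/\beno1\b/eno1np0/g' /etc/network/interfaces
        ifreload -a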
  9. Debian 12 LXC Image problem

    Hi, I have a similar problem with the Debian 11 and Debian 12 LXC templates on ARM64. When I choose these templates, before the container creation completes I get the message: extracting archive '/var/lib/vz/template/cache/debian-bullseye-20231124_arm64.tar.xz' Total bytes read: 344780800...
  10. CPU Sockets / NUMA

    I'm a bit confused about this subject. I understand that if a node has 2 physical sockets with 2 cores each, and you set up a VM with 4 vCPUs, then it is recommended to enable NUMA and configure the VM like the node, with 2 sockets and 2 cores. This way, the VM is aware of the physical specs of the...
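
    For a hypothetical VM 100, that topology would be set with something like:

        # 2 virtual sockets x 2 cores, NUMA awareness enabled
        qm set 100 --sockets 2 --cores 2 --numa 1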
  11. VM swap best practice

    Yes, I knew that, but I thought syncs to disk should be as quick as possible, and therefore swap would not be needed for that.
  12. VM swap best practice

    Using a dedicated swap disk on the VM would also be a good way to reduce the size and time of incremental backups with PBS, so it seems a win-win choice. Can setting the VM disk option cache=unsafe cause data to be swapped on the host? I wasn't aware of that. What other things can cause the use...
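
    A sketch of that setup for a hypothetical VM 100 on local-lvm, with the swap disk excluded from backups:

        # add a 4 GiB disk that vzdump/PBS will skip (backup=0)
        qm set 100 --scsi1 local-lvm:4,backup=0

        # inside the guest, assuming the new disk appears as /dev/sdb
        mkswap /dev/sdb
        swapon /dev/sdb
        echo '/dev/sdb none swap sw 0 0' >> /etc/fstab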
  13. VM swap best practice

    Hi, I'd like to ask about best practice for KVM guest configuration to avoid host swap usage and get the best performance. On virtualization systems I've liked to remove the swap partition on my VMs and give enough resources to the VMs and hosts, but this can sometimes be problematic, so I...
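
    One host-side knob (my assumption, not something confirmed in this thread): lowering vm.swappiness so the host prefers reclaiming cache over swapping:

        # on the Proxmox host: swap only under real memory pressure
        echo 'vm.swappiness = 10' > /etc/sysctl.d/99-swappiness.conf
        sysctl --system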
  14. NFS or SSHfs shared from host randomly hangs VM.

    We have experienced similar problems lately, using Proxmox 7.1 with kernel 5.13. We use both Ceph and NFS (FreeNAS 11) over dedicated 10 Gbps networks (one network for each type of storage). We have seen some disk corruption on different VMs, always on the NFS FreeNAS storage. There's nothing...
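
    A sketch of an NFS storage definition of this kind in /etc/pve/storage.cfg (storage name, server, and export are hypothetical):

        nfs: freenas
            server 10.10.10.10
            export /mnt/tank/proxmox
            content images
            options vers=3,hard,tcp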
  15. C6320, LSI2008 HBA and disk order

    No, I didn't. We decided to replace it with a C6420 with a PERC H330. It performs better than the C6320 and in most cases better than the C6220. The C6220 still outperforms the C6420 on cached writes, as it has an LSI 9265 with write cache, while the H330 on the C6420 does not. We solved this...