Recent content by Dan Nicolae

  1.

    Ceph Network - LACP Unifi USW-24 - Proxmox

    Hey. We have a 4-node Ceph cluster and a host that runs the virtual machines. Each server has 2 gigabit cards for the Ceph network. We would like to configure the Ceph network using LACP on a Unifi USW-24 switch. I did the configuration on the switch: 2 ports, LACP aggregating, auto-negotiate... (see the bond sketch after this list)
  2.

    Migration fails (PVE 6.2-12): Query migrate failed

    I just encountered a similar issue. Proxmox 6.4.1.3, up to date, VM running Ubuntu 20.04 with qemu-guest-agent enabled. During migration, the VM just stopped. I'll disable the guest agent on all VMs and see if the problem remains (a CLI sketch for this follows after the list).
  3.

    RAID0 ZFS over hardware RAID

    It's always a good idea to have backups. :) Multumesc / Thank you.
  4.

    RAID0 ZFS over hardware RAID

    I'll ask them to reconfigure the array; I know it's just crazy. But the question remains: is it OK to use RAID0 ZFS on a hardware RAID? :)
  5.

    RAID0 ZFS over hardware RAID

    Looks like it. It's not my creation. I'm just the software guy. :)
  6.

    RAID0 ZFS over hardware RAID

    Hello, everyone. I have to set up a Proxmox VE 6.2 cluster that uses local disks as storage for VMs (KVM). The local storage is a hardware RAID array (8 HDDs in RAID0 on a Dell PERC H70 mini). We would like to use the live migration feature with this local storage, which is available only with ZFS... (see the zpool sketch after this list)
  7.

    Error on Boot up: A start job is running for Activation of LVM2 logical volumes

    I have the same problem here. Boot is stuck on "A job is running..." Any suggestions are appreciated. Thanks.
  8.

    clean old kernels

    @fabian, I'm sure you know Proxmox better than I do. I haven't had any problems so far. Well, out of the whole command line, he can use, hopefully safely, just apt-get autoremove && apt-get autoclean :)
  9.

    Move 600Gig vm from 3.4 to 5

    Yes. Click on the node, click on the VM you want to move, go to Hardware, select the VM disk, click the Move disk button above, select the Target Storage (in this case, the NFS server), and hit Move disk (a CLI equivalent follows after this list).
  10.

    2 node cluster + CEPH

    POC Environment: can have a minimum of 3 physical nodes with 10 OSDs each. This provides 66% cluster availability upon a physical node failure and 97% uptime upon an OSD failure. RGW and Monitor nodes can be put on OSD nodes, but this may impact performance and is not recommended for...
  11.

    2 node cluster + CEPH

    The minimum for Ceph is 3 nodes, but that is not recommended for production. You should use at least 6 nodes, 2 OSDs each, and an enterprise SSD for the BlueStore DB. Hardware: 1 CPU core for each OSD, 1 GB RAM for each 1 TB of OSD, and 3 gigabit network cards: one for the Proxmox network, two for the Ceph network (bond); a worked sizing example follows after this list. Do...
  12.

    clean old kernels

    apt-get update && apt-get -y upgrade && apt-get -y dist-upgrade && apt-get -y autoremove && apt-get -y autoclean
  13.

    Auto Scalability Feature

    There is a plugin for WHMCS for that. I have no idea if and how it works. If there is someone on this forum that is using it, maybe they could share some info. ;)
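A minimal sketch for the LACP question in item 1, assuming the two Ceph NICs are named enp1s0 and enp2s0 and that 10.10.10.11/24 is a placeholder Ceph address (adjust names and addresses per node). This would be added to /etc/network/interfaces on each server, with the two switch ports already in one LACP aggregation group:

    # LACP (802.3ad) bond carrying the Ceph network
    auto bond0
    iface bond0 inet static
        address 10.10.10.11
        netmask 255.255.255.0
        bond-slaves enp1s0 enp2s0
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer2+3

After applying the change (ifreload -a with ifupdown2, or a reboot), cat /proc/net/bonding/bond0 should report the 802.3ad state of both links.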
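For the migration issue in item 2, disabling the guest agent per VM can also be done from the CLI; a sketch with a placeholder VMID of 100 (the setting takes effect on the next VM start):

    # turn off the QEMU guest agent option for VM 100
    qm set 100 --agent 0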
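For the RAID0-ZFS-over-hardware-RAID thread (items 3 to 6): ZFS generally wants direct access to the disks (HBA/IT mode) rather than a hardware RAID volume, so the following is only a sketch of what a single-device pool on the exposed array could look like, assuming the PERC volume appears as /dev/sdb and using placeholder pool and storage names:

    # create a pool on the hardware RAID volume (not the layout ZFS prefers)
    zpool create -o ashift=12 tank /dev/sdb
    zfs set compression=lz4 tank
    # register it in Proxmox as ZFS storage for VM disks
    pvesm add zfspool local-zfs --pool tank --content images,rootdir

For replication and migration between nodes, the same ZFS storage name should exist on every node.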
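The GUI steps in item 9 have a CLI equivalent; a sketch with placeholder values (VMID 100, disk scsi0, a target storage named nfs-storage; --delete 1 removes the source copy once the move succeeds):

    # move the VM disk to the NFS-backed storage
    qm move_disk 100 scsi0 nfs-storage --delete 1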
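A rough worked example of the sizing rules in item 11, for a hypothetical node with 2 OSDs of 4 TB each: reserve 2 CPU cores for the OSDs, about 2 x 4 = 8 GB RAM for the OSDs on top of what Proxmox and the monitors need, and 3 NICs per node (1 for the Proxmox network, 2 bonded for the Ceph network).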