Search results

  1. HW RAID or ZFS on Dell PowerEdge R630

    Depending on your setup, on Dell servers you can simply set drives to non-RAID on the PERC controller. This lets the drives you want in RAID use RAID 1, 6, or 10 while the other drives behave like JBOD. I found that setup to be the most flexible.
  2. Storage migration failed io_uring known to cause issues

    Anybody on this? I changed the drive settings to IO thread "on" + Async IO: native and can now move from Ceph to a local drive (a config sketch follows this list). I am not that clear on which option I should use. I primarily use external Ceph storage but at times move VMs to a local drive on an as-needed basis; local drives are usually...
  3. HW RAID or ZFS on Dell PowerEdge R630

    ZFS will eat more RAM. You could limit the RAM ZFS uses (an ARC sketch follows this list), but the RAM is what makes ZFS fast/faster than RAID. You can also do more with ZFS; Proxmox replication is one example. So out of the box ZFS is faster. One of the disadvantages is that ZFS can cause excessive wear on SSDs...
  4. Error while restoring VM, PVE version 8.3.0

    I am getting that message when moving VMs from the old PVE cluster to the new PVE cluster using PBS: restore image complete (bytes=34359738368, duration=150.76s, speed=217.35MB/s) can't deactivate LV '/dev/local2-2.4TB-HDD/vm-324056-disk-0': Logical volume local2-2.4TB-HDD/vm-324056-disk-0 in...
  5. Storage migration failed io_uring known to cause issues

    I am getting this error when trying to move a drive from Ceph storage to a PVE local hard drive: TASK ERROR: storage migration failed: target storage is known to cause issues with aio=io_uring (used by current drive) This started to happen on 8.2.x/8.3.0; I have a 7.3 and a 7.4 cluster and that...
  6. HELP with ver 8.3.0 , downgrade possibly reinstall

    I have not experienced any reboots on 6.8, and the new version 8.3.0 is on 6.8.x. I prefer stability over anything else. UdoB - if I understand what you are saying, it is that versions 8.1.x, 8.2.x and now 8.3.x are essentially small progressions with bug fixes for essential services and it...
  7. HELP with ver 8.3.0 , downgrade possibly reinstall

    I am putting 2 new clusters in; they are not in production (they will be in production next week) and I noticed new packages in the update section for the Enterprise Repo. I did the update, as I like to keep clusters updated to the newest firmware at least right before going live. After the update I notice...
  8. Proxmox 8.2 with Ceph , time server

    What are you doing for a time server for Ceph on Proxmox 8.2? (A chrony sketch follows this list.) I used to have two dedicated time servers for Ceph under Proxmox 6, otherwise I got clock skew errors. On Proxmox 7.x I tested Ceph with the default settings and for the last several years I have had no problems. Is this the case with 8.2? I see it is...
  9. Multiple Proxmox clusters on the same subnet

    I moved VMs from one cluster to the other with that setup before, no issues. As long as you don't have them running at the same time on two clusters it is fine. Thank you
  10. Multiple Proxmox clusters on the same subnet

    I am confident in the DHCP setup; the bigger you scale, the more flexibility is needed, especially when it comes to backup and secondary site availability. One DHCP server is on a VM with redundancy and the other one is on a redundant piece of hardware. Hosts are set up with no expiration, so only if...
  11. Ceph cluster with one node having no hard drives for specific pool.

    I have a 5-node Ceph cluster running under Proxmox. I am moving VMs from an old PVE/Ceph cluster to a new PVE/Ceph cluster. During this time my new 5-node Ceph cluster will not have all the drives installed. I will have only 8 SSDs, planning to put 2 per server, which will cover 4 servers; the...
  12. Multiple Proxmox clusters on the same subnet

    MAC addresses are set manually, and there is a DHCP reservation for all hosts. It is not possible to get the same MAC address. Thank you
  13. Multiple Proxmox clusters on the same subnet

    Just wanted to confirm: on version 8.2 we can have multiple PVE clusters on the same subnet as long as the "Cluster Name" is different, right? We are adding servers and creating a new cluster; we will have 5 PVE clusters on the same subnet for about 2 weeks, and after 2 weeks we will go down to 3...
  14. 10GBE cluster without switch

    As for Ceph, it is more challenging to install. You should at least go over this wiki document: https://pve.proxmox.com/wiki/Deploy_Hyper-Converged_Ceph_Cluster Because you are installing Ceph under Proxmox, you can do that from the web admin portal and add storage during the deployment (a CLI sketch follows this list). You need...
  15. 10GBE cluster without switch

    Yes, that is the idea. Ceph on high-capacity interfaces.
  16. 10GBE cluster without switch

    Sorry, yes, you can also use Ceph. Ceph is harder to configure, whereas replication is easy. I am not sure what the minimum number of storage drives (OSDs) is for Ceph, but I had replication working on just 4 drives with 2 nodes + a 3rd node to keep quorum - it is easy to set up. I...
  17. 10GBE cluster without switch

    In order to fail back you need external storage or replication (with ZFS drives; a replication sketch follows this list). I had a 3-node cluster with ZFS replication on and it worked great. The cluster network can really be on 1 Gbps interfaces; there is not much traffic there, depending on how much traffic your VMs will use to communicate...
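
A minimal sketch of the io_uring workaround mentioned in result 2, assuming a hypothetical VM ID 100 and a storage named "local-lvm" (adjust both): switch the disk's Async IO to native and enable IO thread, then retry the move. IO thread on a SCSI disk needs the VirtIO SCSI single controller.

    qm set 100 --scsihw virtio-scsi-single
    qm set 100 --scsi0 local-lvm:vm-100-disk-0,aio=native,iothread=1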
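
A minimal sketch of capping the ZFS ARC mentioned in result 3; the 8 GiB value is only an example, size it to your host. Persist the limit via module options and refresh the initramfs; the last line applies it at runtime.

    echo "options zfs zfs_arc_max=8589934592" >> /etc/modprobe.d/zfs.conf
    update-initramfs -u -k all
    echo 8589934592 > /sys/module/zfs/parameters/zfs_arc_max   # takes effect without a reboot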
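
A minimal sketch of pointing the nodes at dedicated time servers, as discussed in result 8; the hostnames are placeholders, and Proxmox VE 8 uses chrony by default.

    # in /etc/chrony/chrony.conf on every node
    server ntp1.example.com iburst
    server ntp2.example.com iburst
    # then: systemctl restart chrony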
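
The Ceph deployment in result 14 is done from the web UI as described; for reference, a rough CLI equivalent with the pveceph tool (the network CIDR, device, and pool name are placeholders):

    pveceph install
    pveceph init --network 10.10.10.0/24
    pveceph mon create
    pveceph osd create /dev/sdb
    pveceph pool create vm-pool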
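
A minimal sketch of the ZFS replication mentioned in results 16 and 17, assuming a hypothetical VM ID 100 and target node "pve2"; both nodes need a ZFS storage with the same name. The job can also be created in the GUI under Datacenter > Replication.

    pvesr create-local-job 100-0 pve2 --schedule "*/15"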