Search results

  1. Migrating Win2022 VM from Esxi (boot fail)

    Yes. Always use the VirtIO disk and networking. Get it booted, install the drivers from the VirtIO driver ISO. Add a temporary 1GB VirtIO disk as a second Hard Disk. Boot it up and make sure the drivers are working. Then power it off and remove the temporary 1GB Hard Disk. Change the boot disk over to VirtIO and adjust the...
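
    A rough CLI sketch of that sequence, assuming VM ID 100, a storage called local-lvm, and an existing boot disk attached as scsi0 with volume vm-100-disk-0 (all placeholders; the same steps work in the GUI under Hardware):

      # Add a temporary 1GB VirtIO disk so Windows loads the VirtIO storage driver
      qm set 100 --virtio1 local-lvm:1
      # ...boot Windows, install the drivers from the virtio-win ISO, then shut down...
      # Drop the temp disk, re-attach the boot disk as VirtIO and boot from it
      qm set 100 --delete virtio1,scsi0
      qm set 100 --virtio0 local-lvm:vm-100-disk-0
      qm set 100 --boot order=virtio0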
  2. Migrating Win2022 VM from Esxi (boot fail)

    Under Hardware, select the existing Hard Disk and click Detach. Double-click the unattached disk and use IDE for the controller type. Then go to Options and fix the Boot Order for IDE. Power it up.
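
    A rough CLI equivalent of that detach and re-attach, again assuming VM ID 100 and a boot volume named local-lvm:vm-100-disk-0 (placeholders):

      # Detach the existing boot disk (assumed here to be scsi0), re-attach it on IDE and boot from it
      qm set 100 --delete scsi0
      qm set 100 --ide0 local-lvm:vm-100-disk-0
      qm set 100 --boot order=ide0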
  3. Unable to add storage, RAID 50 4.6TB, Dell R720

    My personal opinion is you should just swap out the RAID card for an H310 HBA in IT mode and run ZFS. If you really want to learn what Proxmox is all about and get away from the legacy ESXi world, this is the way. Then you can use the new ESXi migration and pull some VMs over directly to test...
  4. Quick wear out on raid1 - Looking for suggestions

    A used pair of cheap Intel S3700 / S3710 400GB drives makes a great Proxmox ZFS mirrored boot pool, IMHO.
  5. Quick wear out on raid1 - Looking for suggestions

    IIRC, it helps to disable logging for services that you are not using.
  6. IP Address Change

    Assuming your PC is Windows, you can go into the IPv4 Properties and add a second IP address in the proper 10.0.x.x network long enough to change your Proxmox host to a 192.168.x.x address by editing the /etc/network/interfaces file.
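
    On the Proxmox side, a sketch of what the edited stanza in /etc/network/interfaces might look like, assuming the bridge is vmbr0 on NIC eno1 and the new address is 192.168.1.10/24 with gateway 192.168.1.1 (all placeholders):

      auto vmbr0
      iface vmbr0 inet static
              address 192.168.1.10/24
              gateway 192.168.1.1
              bridge-ports eno1
              bridge-stp off
              bridge-fd 0

    Apply it with a reboot (or ifreload -a if ifupdown2 is installed) and remember to update the node's entry in /etc/hosts to match.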
  7. Quick wear out on raid1 - Looking for suggestions

    You might want to use the Proxmox post-install helper script and turn off Cluster mode to save some wear and tear. https://tteck.github.io/Proxmox/
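
    If you prefer to do it by hand rather than rely on the helper script, the usual manual equivalent (my assumption of what the script does, not taken from it) is to stop the HA services a standalone node doesn't need:

      # Only sensible on a single, non-clustered node
      systemctl disable --now pve-ha-lrm pve-ha-crm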
  8. DR Strategy

    I love Synology units for what they are. Just make sure you actually test that functionality in the DR site. Might I suggest you set a backup speed limit in the Proxmox Datacenter / Options / Bandwidth settings, or in PBS under Traffic Control, as you may be maxing out IOPS or possibly network...
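
    One CLI-side way to cap backup bandwidth, as a sketch (the 102400 KiB/s value, roughly 100 MiB/s, is just a placeholder):

      # /etc/vzdump.conf -- node-wide default limit for vzdump backup jobs, in KiB/s
      bwlimit: 102400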
  9. DR Strategy

    Your initial question was: "Cluster DR would it be better if i run ceph"? YES. Of course running CEPH on the DR cluster is better than using a single Synology. You should really write out your RTO and RPO goals. Then design your DR plan around those goals.
  10. Trying to recreate ZFS volume - keep getting error {path "/mnt/datastore/backup1" already exists (400}

    The mountpoint folder /mnt/datastore/backup1 probably already exists. Remove the folder, then create the ZFS pool again.
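
    A sketch of the cleanup and re-creation, assuming a pool named backup1 mirrored across two placeholder disks:

      # rmdir only succeeds if the leftover mountpoint is empty, which is the safe case
      rmdir /mnt/datastore/backup1
      zpool create -m /mnt/datastore/backup1 backup1 mirror /dev/disk/by-id/DISK1 /dev/disk/by-id/DISK2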
  11. Boot fail to load Proxmox

    Did you have anything in your local node 2 /etc/fstab that's not mounting now? Drop to the recovery console and edit fstab: nano /etc/fstab. Years ago I occasionally had trouble booting Proxmox when it would time out mounting the ZFS boot rpool. Sometimes you can just type exit and it works...
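
    If the culprit is an fstab entry for a disk that is missing or slow to appear, marking it nofail (or commenting it out) lets the boot continue; a sketch of such a line, with a placeholder device and mountpoint:

      # /etc/fstab -- don't hang the boot if this disk is absent or slow
      UUID=xxxx-xxxx  /mnt/extra  ext4  defaults,nofail,x-systemd.device-timeout=10s  0  2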
  12. ZFS pool disk retrieve

    Is the ZFS pool mounted? Was this ZFS pool mirrored or RAIDZ with some redundancy in case of a drive failure? That is the point of ZFS. Check zpool status, zfs list, zpool import. If the drive is timing out, try adding extra cooling or give it time to cool off and try again. If you are not getting anywhere...
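
    A sketch of those checks, assuming the pool is named tank (placeholder):

      zpool status                           # is the pool imported and healthy?
      zfs list                               # are the datasets visible and mounted?
      zpool import                           # scan for pools that can be imported...
      zpool import -d /dev/disk/by-id tank   # ...then import the pool by name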
  13. DR Strategy

    Something seems way off balance here. Why do you have 8 storage nodes and 10 compute nodes? Just set up 10 compute nodes with storage and eliminate the 8 dedicated storage nodes. Are you going to fit all 8 storage nodes' worth of data on a single Synology in the DR cluster and expect that to be...
  14. [SOLVED] Recommendation for a san iscsi pro with redundancy

    The big difference with CEPH vs ZFS in the cluster is that your VM migration doesn't have to move the storage. With CEPH your data is on a shared pool and it's already available to all the nodes. If you have a small cluster and want to set up ZFS on each node, it will work. You will just have to...
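
    For the per-node ZFS case, the disks do get copied at migration time; a sketch of what that looks like from the CLI, assuming VM 100 and a target node named node2 (placeholders):

      # Live-migrate the VM and replicate its local (ZFS) disks to the target node
      qm migrate 100 node2 --online --with-local-disks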
  15. [SOLVED] Recommendation for a san iscsi pro with redundancy

    Go with a CEPH cluster. ZFS is great but CEPH is much better in a cluster.
  16. [SOLVED] Ceph performance degradation

    # ceph tell osd.12 bench
    {
        "bytes_written": 1073741824,
        "blocksize": 4194304,
        "elapsed_sec": 0.45615625700000001,
        "bytes_per_sec": 2353890377.524735,  (2300 MB/s)
        "iops": 561.21119917028784  (561 IOPS)
    }
    SAMSUNG MZPLL6T4HMLS-000MV 6.4TB...
  17. Maintenance mode GUI indication

    A follow-up picture. Maintenance mode only shows in Datacenter / HA after you have set up a group and added a VM as a resource to the group. My node maintenance script:
    ## Usage: enablemaint.sh node-1a (node name as first parameter here)
    ## Take a node offline, set CEPH not to failover while...
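
    The script body is cut off in this snippet; a minimal sketch of what a script like that typically does (my assumption, not the poster's actual code) would be to pause CEPH recovery and put the node into HA maintenance mode:

      #!/bin/bash
      ## Usage: enablemaint.sh node-1a (node name as first parameter)
      NODE="$1"
      # Keep CEPH from marking OSDs out and rebalancing while the node is offline
      ceph osd set noout
      # On recent PVE releases, ask the HA manager to drain the node and flag it as in maintenance
      ha-manager crm-command node-maintenance enable "$NODE"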
  18. Maintenance mode GUI indication

    Datacenter > HA does show if a node is in maintenance mode.
    root@nx1a:~# ha-manager status
    quorum OK
    master nx1d (active, Thu May 16 20:31:13 2024)
    lrm nx1a (idle, Thu May 16 20:31:17 2024)
    lrm nx1b (idle, Thu May 16 20:31:18 2024)
    lrm nx1c (maintenance mode, Thu May 16 20:31:19 2024)
    lrm nx1d...
  19. New Import Wizard Available for Migrating VMware ESXi Based Virtual Machines

    Reboot the ESXi host. I had better luck rebooting vs just restarting the services.
  20. New Import Wizard Available for Migrating VMware ESXi Based Virtual Machines

    On ESXi 7 it was set to 0, which I assumed meant disabled, but that doesn't work on ESXi 6.x. I guess the hunt for the best setting continues.