Search results

  1. Very large volume (10TB disk)

    So we are talking about PBS. Wouldn't all backups and their respective files be larger than 128K? If 4K and smaller are heading over to the special device, wouldn't the normal RaidZ2 be best set to a 512 or 1M record size? What other files are being stored on the RaidZ2 that are going to... (a ZFS tuning sketch follows these results)
  2. Very large volume (10TB disk)

    Record size for the ZFS storage target (not the special device). Cheers G
  3. [SOLVED] proxmox-file-restore failed?

    Hi all, not trying to hijack this thread, but I'm seeing the same issue here. The only problem is I'm not familiar with the language being spoken, so it's hard to follow in detail. @dcsapak, is there anything I can help with on this issue to assist with a fix? English please if possible - sorry :) Cheers G
  4. Very large volume (10TB disk)

    Interested in this thread, as we are currently working with PBS to find the best-fit special_small_blocks size. As a question, what block size did you set on your ZFS datastore - is it the default 128K block size? If it is the default, you'll find that it may perform better with a 1M block size for...
  5. Ceph Performance Understanding

    OK, I think I have worked out the issue: the rados benchmark isn't pushing the drives hard enough to reach the maximum IOPS available for reads/writes. I have performed some VM drive tests and can see much higher available IOPS in the VM benchmark using CrystalMark and then observing the IOPS... (a rados bench sketch follows these results)
  6. Ceph Performance Understanding

    Hey everyone, this may sound like a stupid question on this topic, but I would love some clarity on the numbers we are seeing. Let me provide some background to the results for clarity, as an example below: 3 x hosts, 256 GB RAM each, Intel v3/v4, 12 CPU, dual socket, mirror ZFS boot, 1 x OSD per host...
  7. 3-node Proxmox/Ceph cluster - how to automatically distribute VMs among nodes?

    We used to use DRS on our VMware clusters and found that it was too disruptive to Windows servers etc. vMotion would create a small pause when moving between hosts, and this creates issues for all RDS instances and user experience. Over time, for many workloads, we may do a manual DRS balance...
  8. Disk smart status no longer working

    Quick update: I've checked the Dell R630 and can confirm that the drive SMART status is reporting correctly in the GUI. Let me know how else I can help with this issue. Wondering if the HP HBA card has some specific driver requirements? Just throwing it out there. Cheers G
  9. Disk smart status no longer working

    Just to chime in, we have the same issue with all of our HP G9 servers using HBA 240ar controllers. Not sure if it's the same issue, as it's a fresh Proxmox install of 6.3 upgraded to 6.4. We also have a Dell we are about to test as well; something tells me it's an HP-specific issue. I'll update...
  10. [SOLVED] Mail notifications

    Hi @dcsapak, sorry, I can't find this anywhere in Proxmox. Are you able to provide more precise steps, as there is no section in PVE or in Datacenter that has Options > Notify? Thank you. G
  11. [SOLVED] CIFS Issue - error with cfs lock 'file-storage_cfg' (working now but shows question mark)

    Not sure, to be honest; I haven't played with it enough. At the end of the day we know it's an issue, and the fix is to create another username for the new share. Enjoy! Cheers G
  12. All the nodes and VMs are suddenly showing unknown

    Hi @jegan, did this ever get resolved? Seeing the same error in our environment on one specific VM: ERROR: Backup of VM 114 failed - Node 'drive-virtio0' is busy: block device is in use by block job: mirror. Cheers G
  13. [SOLVED] CIFS Issue - error with cfs lock 'file-storage_cfg' (working now but shows question mark)

    Hi all, thought I would add to this thread for completeness :) create storage failed: error with cfs lock 'file-storage_cfg': mount error: Refer to the mount.cifs(8) manual page (e.g. man mount.cifs). Another discovery I've made that triggers the same error is when you use the same username... (a storage.cfg check sketch follows these results)
  14. PMG Suitability and recommendations for customer / prospect

    Hi @Stoiko Ivanov, thanks for jumping in and answering some questions. I think PMG has outgrown this setup/design; there are MSPs and resellers that are, or would like to be, using it in a more robust way. The product itself is capable of managing mass amounts of email, as in @mmidgett's example...
  15. PMG Suitability and recommendations for customer / prospect

    Some good points there, @mmidgett. Is PMG able to be configured to send out via multiple IPs? I haven't explored this yet; it would be great to be able to manually make this change if there is a block issue due to spamming. Cheers G
  16. PMG Suitability and recommendations for customer / prospect

    May I ask why? What's the difference if you also send mail via a subdomain? More info please, to better understand the problem. Ta
  17. ZFS and Ceph on same cluster

    Thanks for confirming, @ph0x. I'm hoping someone with this same setup can comment and confirm as well. Cheers G
  18. ZFS and Ceph on same cluster

    Hi @ph0x, thanks for the reply. That's the reason for my question, specifically in regard to mixing and matching with Ceph. I'm looking for confirmation that it's usable in this way with a mix of ZFS and Ceph, using Proxmox-managed Ceph specifically as opposed to an external third-party Ceph cluster. Ta
  19. Strange slowness and micro interruptions (solved but want to share)

    Hi @mgiammarco, may I please ask, with your DL360 G9s running Ceph, what storage controller are you using? We are looking at repurposing similar hardware into a Ceph cluster and are still reviewing what the best controller from HP is. We have both H240 and 440 controllers available. Would love...
  20. ZFS and Ceph on same cluster

    Hi all, just wondering, is it possible to have a Proxmox cluster with both a local ZFS datastore per host and Ceph on the same cluster? Example: 5 or 6 hosts, 10 bays per host, 4 bays for a ZFS mirror, 6 bays for Ceph. Is it possible to have this mixed storage design in a single cluster? From a...
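
Results 1, 2 and 4 discuss tuning recordsize and special_small_blocks for a PBS datastore on ZFS. Below is a minimal sketch of that tuning, assuming a hypothetical dataset name (tank/pbs-datastore) and the 1M recordsize / 4K special_small_blocks values mentioned in the posts; it simply shells out to zfs(8) and is an illustration, not a confirmed recommendation.

```python
#!/usr/bin/env python3
"""Sketch: apply the recordsize / special_small_blocks tuning discussed above.

Assumptions (hypothetical, not from the original posts):
  - the PBS datastore lives on the dataset 'tank/pbs-datastore'
  - blocks of 4K and smaller should be routed to the special vdev
  - the main RaidZ2 data should use a 1M recordsize
"""
import subprocess

DATASET = "tank/pbs-datastore"  # hypothetical dataset name

def zfs(*args: str) -> str:
    """Run a zfs(8) command and return its stdout."""
    return subprocess.run(
        ["zfs", *args], check=True, capture_output=True, text=True
    ).stdout

# recordsize only affects newly written blocks; existing data keeps its old size
zfs("set", "recordsize=1M", DATASET)

# files/blocks <= 4K are stored on the special vdev instead of the RaidZ2 vdevs
zfs("set", "special_small_blocks=4K", DATASET)

# verify what is actually in effect
print(zfs("get", "-H", "-o", "property,value",
          "recordsize,special_small_blocks", DATASET))
```

Note that recordsize only applies to data written after the change; chunks already on disk keep their original block size.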
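
Results 5 and 6 note that the default rados benchmark was not pushing the drives hard enough to reach peak IOPS. The sketch below sweeps the -t (concurrent operations) setting of rados bench so the saturation point becomes visible; the pool name 'bench', the 4K block size and the thread counts are assumptions, and the pool should be a disposable one because --no-cleanup leaves benchmark objects behind for the read pass.

```python
#!/usr/bin/env python3
"""Sketch: sweep rados bench concurrency to find where the OSDs saturate.

Assumptions (not from the original posts): a throwaway pool named 'bench'
exists, 4K writes are the interesting I/O size, and 30 s per run is enough.
"""
import subprocess

POOL = "bench"        # hypothetical throwaway pool
RUNTIME = "30"        # seconds per run
BLOCK_SIZE = "4096"   # 4K objects to approximate small-block IOPS

for threads in (16, 32, 64, 128):
    print(f"--- rados bench write, -t {threads} ---")
    # --no-cleanup keeps the objects so a 'seq' read pass can follow
    subprocess.run(
        ["rados", "bench", "-p", POOL, RUNTIME, "write",
         "-t", str(threads), "-b", BLOCK_SIZE, "--no-cleanup"],
        check=True,
    )

# read the objects back with the highest concurrency tried above
print("--- rados bench seq read, -t 128 ---")
subprocess.run(["rados", "bench", "-p", POOL, RUNTIME, "seq", "-t", "128"],
               check=True)

# remove the benchmark objects when done
subprocess.run(["rados", "-p", POOL, "cleanup"], check=True)
```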
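
Results 11 and 13 report that reusing the same username for a second CIFS share triggers the cfs lock 'file-storage_cfg' mount error, and that the workaround is a separate username per share. The sketch below is a hypothetical pre-check that scans /etc/pve/storage.cfg for existing cifs entries and warns when a username is about to be reused; the simplified parsing and the check itself are assumptions, not an official Proxmox feature.

```python
#!/usr/bin/env python3
"""Sketch: warn if a new CIFS share would reuse a username already present
in /etc/pve/storage.cfg, which the thread above reports as a trigger for the
"error with cfs lock 'file-storage_cfg'" mount failure.

The parsing is a simplification of the storage.cfg format (section headers
like "cifs: <storeid>" followed by indented "key value" lines).
"""
import sys

STORAGE_CFG = "/etc/pve/storage.cfg"

def cifs_usernames(path=STORAGE_CFG):
    """Map existing CIFS storage IDs to their configured usernames."""
    users = {}
    current = None
    with open(path) as fh:
        for line in fh:
            if line.strip() and not line.startswith((" ", "\t")):
                # new section header, e.g. "cifs: backup-share"
                kind, _, storeid = line.strip().partition(":")
                current = storeid.strip() if kind == "cifs" else None
            elif current and line.strip().startswith("username"):
                users[current] = line.split(None, 1)[1].strip()
    return users

if __name__ == "__main__":
    if len(sys.argv) != 2:
        sys.exit("usage: check_cifs_user.py <username>")
    new_user = sys.argv[1]  # username you intend to use for the new share
    clashes = [sid for sid, user in cifs_usernames().items() if user == new_user]
    if clashes:
        print(f"'{new_user}' is already used by CIFS storage(s): {', '.join(clashes)}")
        print("Per the thread above, create a separate username for the new share.")
    else:
        print(f"No existing CIFS storage uses '{new_user}'.")
```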