Search results

  1. Grafana Proxmox Dashboards

    Checking to see if anyone is willing to share their Proxmox dashboards built on the external metrics. A lot of the ones on Grafana's site seem to be outdated. Thanks to anyone willing to share!
  2. msata ssd (mlc type) to usb 3.0

    Checking to see if anyone has tried using an mSATA SSD in a USB 3.0 converter as the Proxmox boot drive. 1. If so, have you had any problems? 2. Any endurance issues with this type of SSD? I am trying to free up another drive bay in several servers if this is a workable method. Thanks, Zombie
  3. SanDisk LB406S 400GB for ceph

    @tburger All good here! I have been a bit busy with a couple of extra projects on top of this one, so I have had to put this on the back burner for a hot minute. I saw a solution online (don't have the link right now) for getting the Ceph information into Grafana with a Ceph exporter...
  4. Ceph Octopus upgrade notes - Think twice before enabling auto scale

    Ok, so let me make sure I have this correct. Based on the link above, the number of placement groups should be (OSDs * 100) / replicas. For me, (21 * 100) / 3 = 700. So should I set it to 1024, since that is the next one up, or go to 512? Right now mine is set to 256, but if it recommends higher I...
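The arithmetic in the post can be sketched in a few lines of shell. The 21 OSDs and 3 replicas come from the post itself; the power-of-two rounding is the usual rule of thumb, and whether to round up to 1024 or down to 512 depends on expected pool growth:

```shell
# Rule-of-thumb PG count: (OSDs * 100) / replicas, rounded to a power of two
osds=21
replicas=3
raw=$(( osds * 100 / replicas ))          # 700 for this cluster
pg=1
while [ "$pg" -lt "$raw" ]; do pg=$(( pg * 2 )); done
echo "raw=$raw  next power of two up=$pg  nearest lower=$(( pg / 2 ))"
# prints: raw=700  next power of two up=1024  nearest lower=512
```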
  5. Ceph Octopus upgrade notes - Think twice before enabling auto scale

    So I am a little confused and maybe need some clarification. Prior to my upgrade I was running with the PGs set to 256 (3-node cluster, 7 OSDs per node), and now that I have upgraded, the autoscaler is recommending the PGs be set to 32. I have not enabled it yet because that seems like a big...
  6. SanDisk LB406S 400GB for ceph

    Do you know whether, if I enable the metric server collection in the Proxmox GUI to my InfluxDB, it would be possible to create a Grafana chart of just reads and writes, so it could be watched over a few weeks?
  7. SanDisk LB406S 400GB for ceph

    Thank you tburger! A lot of good information for me to process to determine exactly where the workload sits. Right now it appears to be leaning towards random access, but I am going to watch for a couple of weeks to make sure I do the right upgrades the first time. Thanks again!
  8. SanDisk LB406S 400GB for ceph

    Well, you are right that they would work, but I guess I should give some more of my thoughts. I would be upgrading from 10K SAS drives, so really any enterprise SSD will be a big improvement over my old drives. I guess between these SanDisk drives and drives like an Intel DC S3610 SSD, is there really...
  9. SanDisk LB406S 400GB for ceph

    Looking to see if anyone uses these disks in a Ceph cluster: SanDisk LB406S 400GB 2.5" SAS SSD, 6Gbps. If so, can you tell me whether they are performing well for you, or whether these drives should be avoided? Thanks
  10. Move vm-cloudinit drive

    Thank you, that does work, but I ended up having to use qm set --cicustom because I was using a custom config.
  11. Move vm-cloudinit drive

    So I am moving my Ceph OSDs from SAS drives to SSD drives. I was able to move the VMs to the new SSD Ceph pool with ease using the move disk option in the VM hardware. But there is no option to move the CloudInit drive (ide2) over to the new SSD Ceph pool. So right now in my HDD Ceph pool...
  12. Ceph nodes and non-ceph nodes

    I might have missed the mark, but I have a simple question to see if I'm correct or if I need to get a few more parts. Currently I have a three-node Ceph cluster that was deployed through the Proxmox GUI. I have 2 x 10 Gb ports, one for public and one for private Ceph traffic. Then I have 2...
  13. Cephfs usage as samba or something

    I am curious if you can point me in the right direction for this, or if I am even looking for the right thing. In short, I have a Ceph cluster with two different rule sets for HDD and SSD. I use my SSD pool to run all my VMs and big workloads. Since I have transitioned half of my SAS HDDs to...
  14. Pvesh command for snippets

    I am looking to see if there is a pvesh command to get the snippet storage available on either the node or the cluster. I have not found one, but I am just checking to see if I overlooked something. Thanks
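One possible approach (an assumption on my part, not something confirmed in the thread): the node-level storage listing in the PVE API accepts a content-type filter, so pvesh may be able to list storages that allow snippets. The node name `pve1` below is a placeholder:

```shell
# List storages on one node that support the "snippets" content type
# ("pve1" is a placeholder node name)
pvesh get /nodes/pve1/storage --content snippets

# The cluster-wide storage configuration can also be dumped and
# filtered by its "content" field:
pvesh get /storage --output-format json
```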
  15. [SOLVED] Cloud-init snippet user data

    UPDATE (SOLVED): If you have snippets enabled for whatever storage, it works when you clone a template and then run qm set --cicustom "user=STORAGENAME:snippets/user.yaml". But it doesn't work if you just run through the cloud-init creation of a cloud image. Then instead of...
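The working sequence described in the post, as a sketch. The storage name comes from the post, while the VM IDs and the verification step are placeholders and my own assumption:

```shell
# 1. Put the user-data file on a storage that has the "Snippets"
#    content type enabled ("STORAGENAME" as in the post).
# 2. Clone the template (900 and 123 are placeholder VM IDs),
#    then point cloud-init at the custom user data:
qm clone 900 123 --name test-vm
qm set 123 --cicustom "user=STORAGENAME:snippets/user.yaml"

# 3. Check what cloud-init will actually render for the guest:
qm cloudinit dump 123 user
```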
  16. [SOLVED] Cloud-init snippet user data

    Question to see if this will work, or if it works only one way. Right now, if I want to use the user data in snippets, I set it with qm set 900 --cicustom "user=proxmoxnfs-iso:snippets/user.yaml" after I clone a template, and it works fine. BUT what I have been trying to do, and it does not seem...
  17. [SOLVED] NFS problem after latest upgrade .. vzdump backup status: backup failed

    So I ran the above command on my zpool/VMS, and now when I create a container on the ZFS pool I get this error: TASK ERROR: unable to create CT 10222 - zfs error: filesystem successfully created, but not shared. I must be missing a setting, but this pool is local and not shared in the cluster. Any...
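If the "created, but not shared" error is coming from a sharenfs property that child datasets now inherit, one way to check and clear it (the pool name is from the post; whether this is the actual cause here is an assumption):

```shell
# Show whether zpool/VMS and its children carry an NFS sharing setting
zfs get -r sharenfs zpool/VMS

# For a purely local pool, turning sharing back off avoids the
# "filesystem successfully created, but not shared" failure:
zfs set sharenfs=off zpool/VMS
```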
  18. [SOLVED] NFS problem after latest upgrade .. vzdump backup status: backup failed

    I have the same problem. Is there a file where I need to set no_root_squash, or should I run the zfs command to set it? Thanks for any clarification.
  19. [SOLVED] Enable KRBD on non-KRBD Ceph Pool

    I would like to hear what others have to say about it too. Someone mentioned there were pros and cons to having KRBD enabled or not.