Search results

  1. I

    Windows VM Trimming

    That's fine. The setup is an experiment. Once I'm happy with it, I plan on swapping them out for Microns, which do have PLP and handle the load better. But thanks for the advice :)
  2. I

    Windows VM Trimming

    I am playing with it now, thank you. Side question though: does this not happen with Proxmox Backup Server too? At that scale, with 3 stripes of 12 drives on raidz, it would be hard for me to pick up by myself. How can one check if the padding is an issue in that array too?
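For what it's worth, the padding overhead can be estimated on paper. A minimal sketch in Python, assuming the usual raidz allocation rules (one set of parity sectors per stripe row, and every allocation rounded up to a multiple of parity+1); the 12-wide raidz2 geometry and ashift=12 (4K sectors) below are assumptions for illustration, not details from the thread:

```python
import math

def raidz_allocated_sectors(block_bytes, sector_bytes, width, parity):
    """Sectors a raidz vdev allocates for one volblocksize-sized block:
    data sectors plus parity per stripe row, with the whole allocation
    rounded up to a multiple of (parity + 1)."""
    data = math.ceil(block_bytes / sector_bytes)
    stripes = math.ceil(data / (width - parity))
    total = data + stripes * parity
    # raidz pads every allocation to a multiple of (parity + 1)
    return math.ceil(total / (parity + 1)) * (parity + 1)

# Hypothetical 12-wide raidz2 with ashift=12 (4K sectors):
for vbs in (8 * 1024, 16 * 1024, 64 * 1024):
    alloc = raidz_allocated_sectors(vbs, 4096, 12, 2)
    print(f"volblocksize {vbs // 1024:>3}K -> {alloc} sectors, "
          f"{alloc * 4096 / vbs:.2f}x raw usage")
```

Under these assumptions an 8K volblocksize burns 3x the raw space (2 data sectors, 2 parity, 2 padding), while 64K drops that to about 1.31x, which is one way to sanity-check whether padding explains the discrepancy.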
  3. I

    Windows VM Trimming

    It is defaulting to 8K, with a recordsize of 64K:

    ```
    root@pm2:~# zfs get volblocksize,recordsize zStorage
    NAME      PROPERTY      VALUE  SOURCE
    zStorage  volblocksize  -      -
    zStorage  recordsize    64K    local
    ```
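As an aside, the `-` shown for volblocksize at pool level is expected: volblocksize is a property of zvols, not of the pool's root filesystem, so it has to be queried on a VM disk's zvol directly. A sketch, with a hypothetical dataset name:

```
# List the zvols backing VM disks, then query one of them
zfs list -t volume -o name zStorage
zfs get volblocksize zStorage/vm-100-disk-0   # placeholder dataset name
```

Note that volblocksize can only be set when a zvol is created; changing it means recreating the disk (e.g. via a storage move).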
  4. I

    Windows VM Trimming

    Ok, that's interesting. What would you recommend I do to resolve the issue? I don't want a massive performance loss, but I assume I need to tweak it to reduce the loss?
  5. I

    Windows VM Trimming

    I just noticed the block size on ZFS is 128K; we use ReFS with a 64K block size. Would that difference be what's causing a lot of wasted space?
  6. I

    Windows VM Trimming

    I have added the output below. Unsure what to look for in this regard @Dunadan?

    ```
    root@pm2:~# zpool list -v
    NAME   SIZE  ALLOC  FREE  CKPOINT  EXPANDSZ  FRAG  CAP  DEDUP  HEALTH  ALTROOT
    rpool  135G  3.08G  132G  -...
    ```
  7. I

    Windows VM Trimming

    Hey Guys, I have scoured the forum trying to answer my question, with no success... I have 3 PM nodes running Windows VMs with ZFS-backed storage, with thin provisioning enabled. I noticed one ZFS array reporting 17TB used, but in the VM there's 20TB storage assigned, but...
  8. I

    Server Downing From Transfer

    Corosync is on the same switch, but on its own network range? I assume that's not an issue?
  9. I

    Server Downing From Transfer

    but the 10G is the main network. I would then rather move the 10G network to an internal range and the 1G network to the live IP-facing ranges :). But one question still remains: what's causing the reboot? Was it because the traffic was on the live IP range? Or was something else causing it...
  10. I

    Server Downing From Transfer

    I did not even know about that option, thank you! So now the question becomes: which IP range do I need to avoid sending the traffic over? Meaning, do I need to avoid using the corosync range, or avoid the live range, or what would you recommend?
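The option referenced here is presumably Proxmox VE's dedicated migration network, which is set in /etc/pve/datacenter.cfg. A minimal sketch, assuming (based on the ranges quoted in this thread, not stated in the post) that the 10G range should carry migration traffic so it stays off both the corosync range and the live-facing range:

```
# /etc/pve/datacenter.cfg -- the CIDR below is an assumption derived
# from the 10.0.0.2-7 addresses mentioned in the thread
migration: secure,network=10.0.0.0/24
```

With this set, bulk migration traffic is pinned to that subnet instead of whatever network the node names resolve to, which is the usual way to keep it away from corosync.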
  11. I

    Server Downing From Transfer

    Hey Guys, I'm hoping for some advice. I've got a cluster of 5 servers. Each has one 10Gb SFP+ port and two 1Gb network ports connected. The IP address ranges for each differ, i.e. 10G = 10.0.0.2-7, 1G = 10.0.1.2-7, 1G = 10.0.2.2-7. And I've set up corosync to use the last 1G port range while I use...
  12. I

    [SOLVED] Can't Remove Ceph Pool

    Perfect thank you Aaron :D
  13. I

    [SOLVED] Can't Remove Ceph Pool

    Hey Guys, I am having an issue. I have a stale Ceph pool that I can't remove from the server. I've removed all the nodes except the last one, but when I run `pveceph purge` on the server I am met with `Unable to purge Ceph! - remove pools, this will !!DESTROY DATA!!` but when I try to remove...
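For reference, a stale pool usually has to be deleted explicitly before the purge can succeed. A sketch of the typical sequence, hedged because the pool name below is a placeholder, not taken from the post:

```
# Pool deletion is disabled by default as a safety net; enable it first
ceph config set mon mon_allow_pool_delete true

# Destroy the stale pool (name is a placeholder) -- this DESTROYS its data
pveceph pool destroy <poolname>

# With all pools gone, the purge should go through
pveceph purge
```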
  14. I

    How To Re-Add Removed OSD

    Hey Guys, We made a buggerup and removed a hard drive from a server, then afterwards deleted the OSD using the UI. Only after doing so did we realize the OSD's data had not been copied over completely yet, so we need to remount it to recover the missing PG. How can we re-add the OSD back into Ceph, keep...
  15. I

    [SOLVED] Activity Log Inaccurate

    Hey Guys, Ok so I found the problem. The server had previously been compromised. I was able to resolve the hack but missed one cronjob they had added:

    ```
    * * * * * root bash -c 'for i in $(find /var/log -type f); do cat /dev/null > $i; done' >/dev/null 2>&1
    ```

    which ended up nulling out the logs every...
  16. I

    LXC Containers Backing Up Incredibly Slow

    I assume you mean to convert the LXC to a QEMU VM?
  17. I

    LXC Containers Backing Up Incredibly Slow

    Hey Fabian, are there any plans to get around this problem?
  18. I

    LXC Containers Backing Up Incredibly Slow

    Yes, all VMs and LXCs are on the Ceph storage
  19. I

    LXC Containers Backing Up Incredibly Slow

    Hey Matrix,

    ```
    root@c6:~# cat /etc/pve/storage.cfg
    dir: local
            path /var/lib/vz
            content vztmpl,iso,backup

    lvmthin: local-lvm
            thinpool data
            vgname pve
            content images,rootdir

    rbd: Default
            content images,rootdir
            krbd 0
            pool Default...
    ```
