Search results

  1. VM disk takes more disk after migrating it to LVM-thin storage

    You need to have the discard flag set on the disk before you run the fstrim command. If you don't have it enabled, the fstrim command will run but won't actually be passed through to the underlying disk. You need to make sure discard is enabled, the VM stopped and started, and then fstrim enable...
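    A minimal sketch of those steps, assuming VM ID 100 and a SCSI disk on LVM-thin (the ID and volume name are illustrative, not yours):

        # on the Proxmox host: enable discard on the disk
        qm set 100 --scsi0 local-lvm:vm-100-disk-0,discard=on
        # stop/start the VM so the setting takes effect
        qm stop 100 && qm start 100
        # inside the guest: trim all mounted filesystems
        fstrim -av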
  2. VM disk takes more disk after migrating it to LVM-thin storage

    If you enable discard, apply it to the VM, and then run fstrim inside the guest, it will clear down the space. This is a limitation of QEMU live migration.
  3. Partition on ceph corrupt

    It will ignore any differences between the two copies of the PG and bring the PG online, but the data within it will be corrupt and/or missing. But since you only have two copies of the data, as you were running a replication of 2, you don't have too many options.
  4. Advice on Setting Up Redundant RAID with SATA SSD for Proxmox Server

    This is impossible to answer without knowing your setup, but there is no reason why you wouldn't be able to use the same IP. However, going by your limited experience and the fact this is a hobby project, ZFS may not be the best option due to the extra complexity you will need to learn and manage...
  5. Advice on Setting Up Redundant RAID with SATA SSD for Proxmox Server

    You would need to wipe and start from fresh, as the underlying filesystem would need to change from whatever you're currently running to ZFS. You'd need to back up your VMs, wipe, and run through the installer to set up ZFS: https://pve.proxmox.com/wiki/ZFS_on_Linux
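    For the backup step, a minimal sketch with vzdump (the VM ID and storage name are placeholders):

        # back up VM 100 to a storage named 'backup' before wiping
        vzdump 100 --storage backup --mode snapshot --compress zstd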
  6. Partition on ceph corrupt

    No, it will allow you to repair and/or tell Ceph those are PGs with lost data and to continue. But I think at this point you have data that was in flight to the disks that has been lost, hence the PGs are in the state they are; this is one of the core downsides of a size of 2 vs the recommended 3.
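    A rough sketch of the commands involved (the PG ID 1.2a is a placeholder; whether to revert or delete unfound objects depends on what you can afford to lose):

        # ask Ceph to repair the inconsistent PG from its surviving copy
        ceph pg repair 1.2a
        # if objects are unfound, tell Ceph to give up on them and continue
        ceph pg 1.2a mark_unfound_lost revert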
  7. Issues with HP P420i and SMART

    Why are you trying to open it?
  8. Installation Proxmox (Dell r750 PowerEdge 24 disks) "Only shows my RAID"

    If you run with no RAID and a disk fails, you lose any files on that disk.
  9. Installation Proxmox (Dell r750 PowerEdge 24 disks) "Only shows my RAID"

    No problem as such; the only issue with making RAID 0s, instead of mixed mode with HBA, is that your underlying OS won't get full access to the disks and their health, as they are hidden behind the RAID. Smartctl can normally connect via RAID controllers but it takes some extra steps. What RAID controller is...
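    For example, smartctl's -d option can often reach disks behind a controller (the device paths and disk indexes below are examples, not your exact values):

        # HP Smart Array (e.g. P420i): query physical disk 0 behind the controller
        smartctl -a -d cciss,0 /dev/sda
        # LSI/Dell PERC (MegaRAID): query physical disk 0
        smartctl -a -d megaraid,0 /dev/sda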
  10. Partition on ceph corrupt

    It's not quite clear exactly what you did at the start when this all happened. Did you physically remove a disk whilst Ceph was online? All it takes is one write not having completed while you're running 2/1 and you have a corrupt PG. If you did something on one of the disks outside of Ceph...
  11. Is compute node + backup node + Q node setup ok as somewhat HA cluster?

    One question: you state you're aiming for 99% uptime, which is around 3.5 days a year of downtime. If that's the case, it sounds like an active node with backups/data sync to a secondary source would do, with the downtime of bringing up that secondary source being fine. If you were aiming for more 9s then your...
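    For reference, the arithmetic behind that figure:

        (1 - 0.99) × 365 days × 24 h ≈ 87.6 h ≈ 3.65 days of allowed downtime per year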
  12. Installation Proxmox (Dell r750 PowerEdge 24 disks) "Only shows my RAID"

    What you want is a RAID card that supports mixed mode, where some disks are passed through to the OS as an HBA would do and some are under RAID, e.g. RAID 5. If your current card does not support mixed mode, then you'll need to make separate RAID 0s for the remaining disks for them to appear in the OS.
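    As a hedged example, on a MegaRAID-based card single-disk RAID 0s can be made with storcli (the controller, enclosure, and slot IDs here are assumptions; check your own card's tooling):

        # create a single-disk RAID 0 so the disk shows up in the OS
        storcli /c0 add vd type=raid0 drives=252:4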
  13. Setting MTU to 9000 blocks access to node, but node can ping

    Ping by default won't be using the 9000 MTU, hence why it still works. You'll need to enable MTU 9000/jumbo frames on the switch config as well.
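    To actually test jumbo frames end to end, force full-size packets with the don't-fragment flag set (8972 = 9000 minus 28 bytes of IP/ICMP headers; the target IP is a placeholder):

        # fails if any hop on the path doesn't support MTU 9000
        ping -M do -s 8972 192.168.1.10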
  14. Ceph 17.2 Quincy Available as Stable Release

    I can see 17.2.4 is now live; however, Ceph released 17.2.5 as they missed some patches in 17.2.4. Has Proxmox backported these into 17.2.4, or should we expect a 17.2.5 soon?
  15. Ceph 17.2 Quincy Available as Stable Release

    Any update on when a new release will come out? Proxmox seems to still be on 17.2.1, and Ceph released 17.2.4 recently.
  16. discard=on - why not default ?

    discard=on, whilst as you say it can save quite a lot of storage as the disks are constantly kept thin, can put quite a large amount of IO onto the underlying disks if a guest suddenly removes or writes lots of data, as this all needs to be flushed. So I imagine this is why it's off by...
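    One common middle ground, instead of continuous discard, is a periodic trim inside the guest via the systemd timer, so the IO hit lands on a schedule:

        # trim weekly rather than discarding on every delete
        systemctl enable --now fstrim.timer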
  17. Design - Network for CEPH

    Ceph can only do one network each for the front and the back, so you won't be able to do redundancy unless you can do LACP across both physical switches.
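    A minimal sketch of such an LACP bond in /etc/network/interfaces (the interface names and address are placeholders):

        auto bond0
        iface bond0 inet static
            address 10.10.10.1/24
            bond-slaves eno1 eno2
            bond-mode 802.3ad
            bond-miimon 100
            bond-xmit-hash-policy layer3+4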
  18. Install HomeAsisstant in Proxmox

    How much RAM does the host machine have? How much RAM are you assigning to the VM?
  19. which storage for large VM's?

    If you want to use Ceph to store the files, you have the option of CephFS or RBD, as you state. The thing with RBD, or any FS on top of it, is that once you get big you get negatives such as long FS scans and many other things if you ever have an issue down the road; the benefit of CephFS is all that stuff is running in...
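    On Proxmox, CephFS can be set up via pveceph once a metadata server exists (a sketch assuming default names, not a full guide):

        # create a metadata server on this node, then the filesystem itself
        pveceph mds create
        pveceph fs create --name cephfs --add-storage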
  20. Extremely slow PVE - Ceph - PBS datacenter. Looking for advice.

    Need some more information to help. What do you have set up pool-wise on the disks? Replication? Erasure coding? How are you mounting the storage to the backup server? Is Ceph in a fully healthy state? If you do tests locally on a box running Proxmox/Ceph, what speeds do you get then?
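    For that local test, rados bench gives a quick baseline straight against the cluster (the pool name is a placeholder):

        # 60-second write benchmark, keeping objects for the read test
        rados bench -p testpool 60 write --no-cleanup
        # sequential read benchmark, then remove the test objects
        rados bench -p testpool 60 seq
        rados -p testpool cleanup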
