Recent content by TecScott

  1. Clear old tasks

    We have years' worth of tasks stored in /var/log/proxmox-backup/tasks/, which is taking up a considerable amount of disk space on the OS drive. I set 'proxmox-backup-manager node update --task-log-max-days 365' about a week ago, but it doesn't seem to have made any difference; I can still see...
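    A minimal sketch of the retention setting quoted above and where to check it; the node.cfg path assumes the standard PBS layout, and the task archive may only shrink once the next task-log rotation runs:

    ```
    # cap task-log retention to 365 days (the command from the post)
    proxmox-backup-manager node update --task-log-max-days 365

    # the option is persisted in the node configuration
    cat /etc/proxmox-backup/node.cfg

    # the task archive itself lives here; re-check its size after the next rotation
    du -sh /var/log/proxmox-backup/tasks/
    ```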
  2. Ceph - RBD Mirror unlink error

    Looking back over the history of when this started: we introduced a new node to PVE, installed Ceph the following day, then started creating OSDs on the node the day after that. The issue only appears to have been present after the OSDs were created (i.e. it was joined to the PVE...
  3. Ceph - RBD Mirror unlink error

    Just to add - this seemed to start with about 10 of the 150 jobs and seems to have spread. I've tried disabling mirroring on the disk and re-enabling it, but it doesn't seem to make any difference (no error on the initial mirror image enable or the first snapshot, but the second snapshot throws the same error...
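    For context, the disable/re-enable cycle described above looks roughly like this; the pool/image names are placeholders, and the snapshot listing is only there to check whether an old mirror snapshot is left behind:

    ```
    # pool/image names are hypothetical examples
    rbd mirror image disable rbd/vm-101-disk-0
    rbd mirror image enable rbd/vm-101-disk-0 snapshot

    # per-image mirroring state, plus any lingering mirror snapshots on the image
    rbd mirror image status rbd/vm-101-disk-0
    rbd snap ls --all rbd/vm-101-disk-0
    ```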
  4. Ceph - RBD Mirror unlink error

    Following the introduction of an additional node to the primary cluster, a lot of our snapshots appear to be failing to unlink the peer. RBD mirroring has been configured and running without issue for almost a year; however, more and more VMs have been throwing an error lately: Snapshot ID...
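    A couple of read-only, pool-level checks that are relevant to a failing peer unlink (the pool name 'rbd' is a placeholder); they show the registered peer and a per-image health breakdown without changing anything:

    ```
    # overall mirroring health with a per-image breakdown
    rbd mirror pool status rbd --verbose

    # mirroring mode and the configured peer(s) for the pool
    rbd mirror pool info rbd
    ```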
  5. Nested ESXi Virtualisation

    Thanks - it is attached via SATA, unfortunately; when attached via SCSI the disk doesn't appear at all in ESXi. Current settings: 4 GB RAM, 4 cores, host CPU, SeaBIOS, i440fx (tried q35), VMware PVSCSI controller (tried the default), SATA disk (tried SCSI), vmxnet3 NICs, Other OS type (tried Linux...
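    For illustration, a VM config matching the settings listed above might look roughly like this; the VMID, storage name, disk size and MAC address are placeholders:

    ```
    # /etc/pve/qemu-server/120.conf -- sketch only
    memory: 4096
    cores: 4
    cpu: host
    bios: seabios
    ostype: other
    scsihw: pvscsi
    # machine: q35        (the alternative machine type that was tried)
    sata0: local-lvm:vm-120-disk-0,size=200G
    net0: vmxnet3=DE:AD:BE:EF:00:01,bridge=vmbr0
    ```

    Nested virtualisation also has to be enabled on the PVE host (e.g. /sys/module/kvm_intel/parameters/nested reporting Y) for cpu: host to expose VMX to the ESXi guest.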
  6. Nested ESXi Virtualisation

    I've seen articles/posts regarding nested ESXi virtualisation, but I seem to have an issue with purple screens when writing data to a second hard drive. If I run the ESXi host on its own it runs without any issues; however, when copying data to the nested host it will randomly purple screen...
  7. Tape - Wrong Media Label

    Odd - I'm pretty sure the GUI prevents you from duplicating a label? We lost a tape and had to name the other tape 'Wednesday2' because we couldn't re-use the Wednesday label until it was destroyed from proxmox-tape. Is there a command to remove certain tapes using the UUID? If I wipe it using...
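    For what it's worth, a hedged sketch of the clean-up being asked about; as far as I know proxmox-tape media destroy takes the label text rather than a UUID, so the UUID question is something to confirm against the help output:

    ```
    # list known media with their label text, UUID and pool
    proxmox-tape media list

    # remove a medium from the inventory by its label text
    # (a UUID-based variant is an open question -- check `proxmox-tape media destroy --help`)
    proxmox-tape media destroy Wednesday2
    ```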
  8. Tape - Wrong Media Label

    There is only one tape with a label for each day (physically) - so one Monday, one Tuesday, one Wednesday, one Thursday. The second tape entry was only created when the job ran (from what I can understand), and it has the same label but a different UUID. I'm not...
  9. Tape - Wrong Media Label

    Thanks - I would expect the tape to be marked as writable/empty when the job starts (as the media pool allocation is set to always), whereas it doesn't seem to be doing that; I assume that because it's marked as full/expired it then does nothing?
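    A sketch of adjusting the pool policy being discussed here; the pool name and the exact option names are assumptions, so confirm them against proxmox-tape pool update --help:

    ```
    # allocation=always should hand out a writable tape for every run,
    # but a medium only becomes reusable once its retention has expired
    # (pool name 'daily' and the option names are assumptions for this sketch)
    proxmox-tape pool update daily --allocation always --retention overwrite
    ```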
  10. Tape - Wrong Media Label

    See the example of last night's job, which I cancelled at 7:50 AM today so I could reformat/relabel the tape: it does seem to detect the tape as writable, but then thinks the media is wrong?
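    The reformat/relabel step mentioned here is roughly the following; the drive name comes from the thread, the pool name is a placeholder, and on older PBS versions the first subcommand may still be called erase:

    ```
    # wipe the tape currently loaded in the drive
    proxmox-tape format --drive LTO8

    # write a fresh label and assign the medium back to its pool
    proxmox-tape label --label-text Tuesday --pool daily --drive LTO8
    ```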
  11. Tape - Wrong Media Label

    Thanks - just an FYI, there's currently a backup running on the tape labeled Tuesday (after formatting and re-labeling I'm able to run the job that was supposed to run this morning). Pool list: [...] Media list: [...] What we would previously do is have a Mon-Thu set of tapes, load each tape once a week, and it...
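    For reference, the two listings referred to above come from commands along these lines:

    ```
    # media pools with their allocation/retention settings
    proxmox-tape pool list

    # all known media with label text, UUID, pool and status
    proxmox-tape media list
    ```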
  12. Tape - Wrong Media Label

    What am I missing with the setup? We've got a single tape drive with multiple tapes used throughout the week. On the first occurrence they work without any issue; however, when they next run they decide the tapes have the wrong label: Checking for media 'Tuesday' in drive 'LTO8' wrong media...
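    One way to narrow down a 'wrong media' complaint is to compare what the drive actually reads against the inventory; a sketch, assuming the drive name from the error above:

    ```
    # read the label of the tape currently sitting in the drive
    proxmox-tape read-label --drive LTO8

    # then compare the reported label/UUID against `proxmox-tape media list`
    ```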
  13. Ceph OSD

    Struggling to understand the Proxmox logic then. Their KB states the recommendation is to put the DB/WAL on SSD or NVRAM for better performance, so are they suggesting an individual SSD for every OSD? Sounds like my best option is to stick with filestore...
  14. Ceph OSD

    The SSDs are 400 GB with 2 OSDs per node; I think around 100 GB is allocated for OS + swap, so there should be room for 30 GB per OSD. Do you know the commands to create this? The web UI won't allow me to select the SSD as the DB partition.
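    On the command question: a sketch of creating an OSD with its BlueStore DB on the SSD. The device paths are placeholders, a DB size option also exists (the exact name varies by version, see pveceph help osd create), and ceph-volume is shown as the lower-level alternative when only a partition or LV on the SSD is free:

    ```
    # OSD on the spinner, BlueStore DB on the SSD (paths are placeholders)
    pveceph osd create /dev/sdc --db_dev /dev/sdb

    # equivalent at the Ceph layer, handing it a specific partition/LV for the DB
    ceph-volume lvm create --data /dev/sdc --block.db /dev/sdb4
    ```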
  15. Ceph OSD

    Yeah, we can tolerate a node failure, so in the event an SSD died we'd look to replace the SSD or evict the node from the cluster and rebalance onto the remaining OSDs. Although it's less of a benefit, I'd imagine it's still worthwhile over pure slow disks, and I'd imagine the performance of...