Search results

  1. Nested ESXi Virtualisation

     Thanks - it is attached via SATA, unfortunately; when attached via SCSI the disk doesn't appear at all on ESXi. Current settings (sketched as a VM config after these results): 4GB RAM, 4 cores, host CPU, SeaBIOS, i440fx (tried q35), VMware PVSCSI controller (tried the default), SATA disk (tried SCSI), vmxnet3 NICs, Other OS type (tried Linux...

  2. Nested ESXi Virtualisation

     I've seen articles/posts regarding nested ESXi virtualisation, but I seem to have an issue with purple screens when writing data to a second hard drive. If I run the ESXi host on its own it runs without any issues; however, when copying data to the nested host it'll randomly purple screen...

  3. Tape - Wrong Media Label

     Odd - I'm pretty sure the GUI prevents you from duplicating a label? We lost a tape and had to name the other tape 'Wednesday2' because we couldn't re-use the Wednesday label until it was destroyed from proxmox-tape. Is there a command to remove certain tapes using their UUID (see the sketch after these results)? If I wipe it using...

  4. Tape - Wrong Media Label

     There is only one tape with a label for each day (physically) - so one Monday, one Tuesday, one Wednesday, one Thursday. The second tape that's been created was only created when the job ran (from what I can understand), and has created a new tape with the same label but a different UUID. I'm not...

  5. Tape - Wrong Media Label

     Thanks - I would expect the tape to be marked as writable/empty when the job starts (as the media pool allocation policy is set to 'always'), whereas it doesn't seem to be doing that, and I assume that because it's marked as full/expired it then does nothing?

  6. Tape - Wrong Media Label

     See the example of last night's job, which I cancelled at 7:50 AM today so I could reformat/relabel the tape: - So it does seem to detect it as writable, but then thinks the media is wrong?

  7. Tape - Wrong Media Label

     Thanks - just an FYI, there's currently a backup running on the tape labeled Tuesday (after formatting and re-labeling I'm able to run the job that was supposed to run this morning). Pool list: - Media list: [ - ] What we would previously do is have a Mon-Thurs, enter the tape once a week and it...

  8. Tape - Wrong Media Label

     What am I missing with the setup? We've got a single tape drive with multiple tapes used throughout the week. On the first occurrence they work without any issue; however, when they next run they decide the tapes have the wrong label: Checking for media 'Tuesday' in drive 'LTO8' wrong media...

  9. Ceph OSD

     Struggling to understand the Proxmox logic then. Their KB states the recommendation is to put the DB/WAL on SSD or NVRAM for better performance, so are they suggesting an individual SSD for every OSD? Sounds like my best option is to stick with filestore...

  10. Ceph OSD

      The SSDs are 400GB with 2 OSDs per node; I think around 100GB is allocated for OS + SWAP, so there should be room for 30GB per OSD. Do you know the commands to create this (see the sketch after these results), as the web UI won't allow me to select the SSD as the DB partition?

  11. Ceph OSD

      Yeah, we can tolerate a node failure, so in the event an SSD died we'd look to replace the SSD or evict the node from the cluster and re-balance onto the remaining OSDs. Although it's less of a benefit, I'd imagine it's still worthwhile over pure slow disks, and I'd imagine the performance of...

  12. Ceph OSD

      Does the SSD not still provide a faster cache/journal, though? With filestore we had multiple 7.2k 4TB disks per node and a 20GB journal on SSD. If we then move to bluestore and put both the OSD data and DB on the slower disk, I'd imagine performance would be impacted? Also, any idea what the CLI commands...

  13. Ceph OSD

      We currently have filestore OSDs on Proxmox with the journal partition on an SSD; this worked fine through the GUI and would create a new partition on the SSD. I'm trying to re-create the OSDs using bluestore now, so I've deleted one of the OSDs and in the GUI tried to create a new OSD...

  14. SWAP Usage

      All hosts have an uptime of around 10 days (we're in the process of upgrading from 5.3 to 6.1). I suspect the swap would grow again, given it's grown this much in only 10 days. The top processes on all servers are 4 KVM processes followed by 4 pvedaemon processes.

  15. SWAP Usage

      Any insight as to why SWAP is being used when RAM usage on the node is as low as 16% (see the swappiness sketch after these results)? I have one host, for example, sitting at 16% RAM usage and using 25MB of SWAP, another at 62% RAM usage using 2.8GB of SWAP, another at 64% RAM usage using 3.6GB of SWAP, another at 55% RAM usage using 1GB...

  16. Proxmox Upgrade

      Looking to perform the upgrade soon, any insight?

  17. Proxmox Upgrade

      Currently looking to upgrade from 5 to 6 with a hyper-converged Ceph environment. The pve5to6 script raises a warning about mon_host being bound to IP:port rather than just the IP (see the ceph.conf sketch after these results); however, the Ceph upgrade instructions state to do this after upgrading to Ceph 14 (after upgrading...

  18. Ceph Slow Requests

      At all times, the SWAP seems to have been a result of the swappiness setting (at 40% it'll start using SWAP?). The IO delay is always around 10% on each of the 4 hosts, however. Any recommendations (e.g. which logs, debug logs, etc.) to try to get to the bottom of what's causing this would be...

  19. Ceph Slow Requests

      The IO delay on all nodes seems to sit around 10% too. The SWAP usage is also consistent (i.e. it's not spiking to 3GB, it's consistently sitting around 3GB) even though RAM usage is 40-50%.
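
The settings listed in result 1 map onto a Proxmox VM configuration roughly like the sketch below. It is illustrative only: the VM ID (100), storage name (local-lvm), disk size, MAC address and bridge are assumptions, not values taken from the thread.

    # /etc/pve/qemu-server/100.conf (illustrative; ID, storage, size and MAC are assumed)
    memory: 4096
    cores: 4
    cpu: host
    bios: seabios        # SeaBIOS; machine type left at the i440fx default (q35 also tried)
    scsihw: pvscsi       # VMware PVSCSI controller (the default controller was also tried)
    sata0: local-lvm:vm-100-disk-0,size=32G   # disk attached via SATA rather than SCSI
    net0: vmxnet3=DE:AD:BE:EF:00:01,bridge=vmbr0
    ostype: other        # "Other" OS type (Linux also tried)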
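
Result 3 asks whether a tape can be removed from the inventory by UUID. Below is a hedged sketch of the proxmox-tape commands involved; the 'Wednesday' label comes from the thread, and whether the destroy subcommand accepts a UUID instead of the label text is worth verifying against its --help output.

    # List the media inventory, including label text, UUID and pool assignment
    proxmox-tape media list

    # Remove a media entry from the inventory by its label text
    proxmox-tape media destroy Wednesday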
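
Results 10 and 12 ask for the CLI equivalent of creating a bluestore OSD with its DB on an SSD. A sketch assuming PVE 6 / Ceph Nautilus: /dev/sdb (spinning disk) and /dev/sdd (SSD) are example device names, the 30GB DB size follows the figure mentioned in result 10, and the exact option names are worth checking against pveceph osd create --help.

    # Proxmox wrapper: create a bluestore OSD on /dev/sdb and carve a 30 GiB
    # RocksDB/WAL volume out of the SSD /dev/sdd
    pveceph osd create /dev/sdb --db_dev /dev/sdd --db_size 30

    # Roughly equivalent with plain Ceph tooling, if a partition on the SSD
    # has already been prepared for the DB:
    # ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/sdd1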
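
Results 14, 15, 18 and 19 all circle around swap being used while plenty of RAM is free. The kernel knob usually involved is vm.swappiness; the commands below are a generic sketch for inspecting and lowering it, not something prescribed in the threads.

    # Show the current value (the Linux default is 60)
    sysctl vm.swappiness

    # Lower the kernel's tendency to swap until the next reboot
    sysctl -w vm.swappiness=10

    # Make the change persistent, then reload sysctl settings
    echo 'vm.swappiness = 10' > /etc/sysctl.d/99-swappiness.conf
    sysctl --system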
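
Result 17 refers to the pve5to6 warning about mon_host entries carrying an explicit port. The edit it describes is made in /etc/pve/ceph.conf; the lines below are an illustrative before/after with placeholder IPs, and say nothing about the timing relative to the Nautilus upgrade, which is what the thread is actually asking about.

    # /etc/pve/ceph.conf -- before (flagged by pve5to6)
    #mon_host = 192.168.1.11:6789 192.168.1.12:6789 192.168.1.13:6789

    # after: ports removed so the monitors can advertise both the
    # v1 (6789) and v2 (3300) messenger ports under Nautilus
    mon_host = 192.168.1.11 192.168.1.12 192.168.1.13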
