Search results

  1. Proxmox unable to create new ceph OSDs

    Accessing through the web dashboard on :8007, I am unable to create new OSDs at this time. Here's the output of the GUI task: create OSD on /dev/sdad (bluestore) creating block.db on '/dev/sdi' Physical volume "/dev/sdi" successfully created. Volume group...
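
    For context on the task output quoted above, a rough CLI equivalent of that GUI action is sketched below; the device names simply mirror the excerpt, and the exact behaviour depends on the installed Proxmox VE/Ceph versions.

        # Rough CLI equivalent of the GUI task: a bluestore OSD on /dev/sdad
        # with its block.db placed on /dev/sdi
        pveceph osd create /dev/sdad --db_dev /dev/sdi

        # If the DB device still carries old LVM/partition metadata, OSD creation
        # can fail; wiping it first (destructive!) is the usual workaround.
        ceph-volume lvm zap /dev/sdi --destroy
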
  2. got unexpected end of tape

    Man, I am having terrible luck with PBS and tape archiving this week. When trying to catalog tapes, any that were full prior to my need to restore everything from tape are erroring out at the end of the tape: 2022-12-18T08:31:53-07:00: File 640: chunk archive for datastore 'store'...
  3. Unable to use Inventory function for tape library

    This appears to be unrelated to the other issue I am experiencing, wherein empty tapes which are part of a media set cause the inventory command to fail and stop: 2022-12-17T11:58:33-07:00: inventorize media 'JW0129L8' with uuid '7e2637fb-5bc9-4af8-9f09-417e58a6d80c' 2022-12-17T11:58:33-07:00...
  4. PBS - Recovery of Tape Archives from failed PBS (Host Disk Failure)

    I have a PBS install associated with a PVE cluster. To make a long story short, I am reinstalling PBS due to some pretty catastrophic data loss which impacted both PVE and PBS (ceph failure). I have a tape library which has all of my archives, but need to reinstall PBS, and do not have access...
  5. VZDump backups not removed when deleting VM/Container?

    I am migrating my backups over to PBS, and I am using the opportunity to clean up old containers+VMs. Previously I was using cephfs storage for backups, which has resulted in a ~15TB /mnt/pve/cephfs/dump directory. I noticed while deleting old machines that the associated VZDump backups are not...
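
    As a side note on the behaviour described in this thread, leftover VZDump archives generally have to be located and removed by hand once their guest is gone. A minimal sketch, assuming the directory from the excerpt and a purely hypothetical VMID of 999:

        # List leftover VZDump archives for a guest that no longer exists
        # (path from the excerpt; VMID 999 is only an example)
        ls -lh /mnt/pve/cephfs/dump/vzdump-*-999-*

        # Remove them once nothing else references that VMID
        rm -i /mnt/pve/cephfs/dump/vzdump-*-999-*
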
  6. Bug/Feature Request - Multipath scsi devices cause snapshots to fail due to existing disk

    Snapshots are failing in the use case where a disk is added to a VM multiple times to increase throughput/parallelism via iothread and multipathd. Here is a relevant portion of the config. Backups work well in this case because the extra disk lines are set to backup=0; however, snapshotting...
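
    The config portion referenced in the excerpt is not included in the search preview. Purely as an illustration, a hypothetical /etc/pve/qemu-server/<vmid>.conf using this pattern (storage name and volume invented here) might contain lines like:

        # Hypothetical excerpt: the same RBD volume attached three times, each
        # attachment with its own iothread, the extras excluded from backup
        scsihw: virtio-scsi-single
        scsi0: ceph-pool:vm-150-disk-0,iothread=1
        scsi1: ceph-pool:vm-150-disk-0,iothread=1,backup=0
        scsi2: ceph-pool:vm-150-disk-0,iothread=1,backup=0

    Inside the guest, multipathd then coalesces the duplicate attachments into a single multipath device for parallel IO.
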
  7. When will proxmox be adding ceph quincy to the experimental repo?

    When will proxmox be adding ceph quincy to the experimental repo? I accidentally upgraded to quincy near the start of the pandemic, struggled to get things working, but did become operational after a few weeks of mucking about. Can't roll back due to the structure changes, and have been waiting...
  8. Mellanox ConnectX-4 Priority Flow Control - Proxmox won't allow installation

    Hello, I am trying to enable Priority Flow Control (PFC) on my proxmox cluster in hopes of enabling RoCEv2 on ConnectX-3 and ConnectX-4 HCAs. This is a bit "in the weeds", but I am just a hobby user, so please bear with me as I am a little in over my head here in some areas. Nvidia makes it...
  9. pveceph mon create - failed -- pthread_mutex_lock?

    pveceph mon create monmaptool: ../nptl/pthread_mutex_lock.c:81: __pthread_mutex_lock: Assertion `mutex->__data.__owner == 0' failed. command 'monmaptool --clobber --addv rog '[v2:192.168.2.6:3300,v1:192.168.2.6:6789]' --print /tmp/monmap' failed: got signal 6 Haven't seen this error before...
  10. Cannot start VM - /dev/rbd/rbd missing, but /dev/rbd0 present?

    /dev/rbd1 kvm: -drive file=/dev/rbd/rbd/vm-150-disk-0,if=none,id=drive-scsi0,discard=on,format=raw,cache=none,aio=io_uring,detect-zeroes=unmap: Could not open '/dev/rbd/rbd/vm-150-disk-0': No such file or directory TASK ERROR: start failed: QEMU exited with code 1 I suspect this issue has...
  11. Cephfs - MDS all up:standby, not becoming up:active

    Like a dummy I accidentally upgraded to the ceph dev branch (quincy?), and have been having nothing but trouble since. This wasn't actually intentional; I was trying to implement a PR which was expected to bring my cluster back online after the upgrade to v7 (and ceph pacific). It did...
  12. 2x 56GbE Optimization, slow ceph recovery, and MLNX-OS S.O.S.

    I had this as a comment in another thread, but moved it here to its own thread. I have a three-node Ceph cluster whose presumed slow throughput I am diagnosing. Each node has a ConnectX-3 Pro 2-port QSFP+ adapter. While I was running them via ib_ipoib, in an attempt to get past the low...
  13. Follow-up: Multiple iothreads per disk?

    With regard to: https://forum.proxmox.com/threads/ceph-read-performance.25785/page-3 I don't see parallelization of disk IO on the roadmap, but recognize that it would be of substantial benefit to small-to-medium sized ceph clusters, which tend to have low single-threaded IO. Currently, I've...
  14. Ceph Health Warning: Module 'telemetry' has failed dependency: No module named 'requests';

    Hey, I've sort of ignored this health warning for a long while, but decided to try to take a crack at fixing it. I recognize the super-common python error, but upon review, my python installation(s) do indeed have requests installed, as shown below... I believe my use of anaconda3 for virtual...
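
    For reference, the manager daemon imports the distribution's Python modules rather than any conda environment, so the usual fix is installing the system package and reloading the module; a hedged sketch for a Debian-based PVE/Ceph node:

        # Install the system-wide requests module that ceph-mgr actually imports
        apt install python3-requests

        # Restart the manager and toggle the module so the dependency is re-checked
        systemctl restart ceph-mgr.target
        ceph mgr module disable telemetry
        ceph mgr module enable telemetry
        ceph health detail
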
  15. Proxmox hyper-converged (ceph) cascading failure upon single node crash/power loss

    Hello, I seem to be having trouble with failover/redundancy that I am hoping someone in the community might be able to help me understand. I have a four-node cluster for which I am working to ensure high availability of the VMs and containers being managed. This is a hobby cluster in my...
  16. Feature Request: Backup Staging Drive for SMR Capable Linear Writes

    Feature Request: SMR Capable Linear Writes Loving what proxmox is up to with PBS! One feature which would be immensely helpful would be the ability to safely use SMR drives without issue. To be transparent, I simply have some I would love to use, but the use-case for enterprise is clear...
  17. [SOLVED] Unable to migrate container due to /29 prefix in command?

    See: Simply changing the noted command to 192.168.2.0/24 returns: ip: '192.168.2.22' However, this task error is appearing in the GUI, and I can't see how to update the command. Help!
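
    The full thread is not visible here, but the CIDR that migration uses is normally taken from the datacenter-wide migration setting rather than from anything editable in the task itself, so a sketch of where one would adjust it (the subnet value is only an example, matching the excerpt):

        # /etc/pve/datacenter.cfg - migration type and network are configured here
        migration: secure,network=192.168.2.0/24

        # The current value can be checked from the CLI as well
        pvesh get /cluster/options
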
  18. Ceph Bluestore with Consumer SSD DBs?

    Hello, I am looking at reworking my ceph setup. I have 26 8TB SAS drives, and independently I have 16 inexpensive consumer SSD drives, all from just two models. Before I go through the setup, I wanted to double-check if it makes sense to use these ssds as bluestore db+wal devices, or if I am...
  19. Are SMR Drives acceptable as (no ZFS) Storage on Proxmox Backup?

    I have several 8TB Seagate backup drives which were purchased both before the SMR news broke in the drive world and prior to my knowledge of SMR drives. As I am about to install a proxmox backup server at my home cluster, I wanted to confirm if SMR drives would work as storage. Putting this...
  20. Identify slow IO, even impacting ramdisk?

    Here's a google sheet full of my testing: https://docs.google.com/spreadsheets/d/1JJhZqxVbF7KsF_uLEOd7tOf-cCyV24mY_4Uc3vOeVeY/edit#gid=194361085 When copying files to or from anywhere, I am limited to ~350MB/s transfer speed. By anywhere, I mean: To and From a Hardware Raid - 8 (cheap...
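
    One way to separate storage speed from the rest of the path when chasing a ceiling like the ~350MB/s above is to benchmark a tmpfs ramdisk directly; a small sketch (mount point, size, and fio parameters are just examples):

        # Create and mount a ramdisk, then write to it with a single thread
        mkdir -p /mnt/ramtest
        mount -t tmpfs -o size=4G tmpfs /mnt/ramtest
        fio --name=ramtest --directory=/mnt/ramtest --rw=write --bs=1M \
            --size=2G --numjobs=1 --ioengine=psync
        umount /mnt/ramtest

    If even this stays near the same figure, the bottleneck is more likely CPU, the copy tool, or the network path than the disks themselves.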
