Accessing through the web dashboard on port 8007, I am unable to create new OSDs at this time. Here's the output of the GUI task:
create OSD on /dev/sdad (bluestore)
creating block.db on '/dev/sdi'
Physical volume "/dev/sdi" successfully created.
Volume group...
Man, I am having terrible luck with PBS and tape archiving this week.
When trying to catalog tapes, any that were full before I needed to restore everything from tape error out at the end of the tape:
2022-12-18T08:31:53-07:00: File 640: chunk archive for datastore 'store'...
This appears to be unrelated to the other issue I am experiencing, wherein empty tapes which are part of a media set cause the inventory command to fail and stop:
2022-12-17T11:58:33-07:00: inventorize media 'JW0129L8' with uuid '7e2637fb-5bc9-4af8-9f09-417e58a6d80c'
2022-12-17T11:58:33-07:00...
I have a PBS install associated with a PVE cluster. To make a long story short, I am reinstalling PBS due to some pretty catastrophic data loss which impacted both PVE and PBS (ceph failure).
I have a tape library which has all of my archives, but need to reinstall PBS, and do not have access...
I am migrating my backups over to PBS, and I am using the opportunity to clean up old containers+VMs.
Previously I was using cephfs storage for backups, which has resulted in a ~15TB /mnt/pve/cephfs/dump directory.
I noticed while deleting old machines that the associated VZDump backups are not...
Snapshots are failing in the use case where a disk is added to a VM multiple times to increase throughput/parallelism via iothread and multipathd:
Here is a relevant portion of the config
Backups work well in this case, because the extra disk lines are set to backup=0, however, snapshotting...
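A hypothetical sketch of the kind of VM config in question (the storage ID, VM ID, sizes, and device names here are assumptions, not taken from the original post): the same volume attached on several SCSI devices, each with iothread=1, and the duplicate lines excluded from backup:

```
# illustrative excerpt of /etc/pve/qemu-server/100.conf
scsi1: ceph-pool:vm-100-disk-1,iothread=1,size=500G
scsi2: ceph-pool:vm-100-disk-1,iothread=1,backup=0,size=500G
scsi3: ceph-pool:vm-100-disk-1,iothread=1,backup=0,size=500G
```

With this layout a backup captures the disk only once (via scsi1), but a snapshot operation still sees the same volume referenced three times.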
When will Proxmox be adding Ceph Quincy to the experimental repo?
I accidentally upgraded to Quincy near the start of the pandemic and struggled to get things working, but did become operational after a few weeks of mucking about. I can't roll back due to the on-disk structure changes, and have been waiting...
Hello,
I am trying to enable Priority Flow Control (PFC) on my Proxmox cluster in hopes of enabling RoCEv2 on ConnectX-3 and ConnectX-4 HCAs.
This is a bit "in the weeds", but I am just a hobby user, so please bear with me as I am a little over my head here in some areas.
Nvidia makes it...
/dev/rbd1
kvm: -drive file=/dev/rbd/rbd/vm-150-disk-0,if=none,id=drive-scsi0,discard=on,format=raw,cache=none,aio=io_uring,detect-zeroes=unmap: Could not open '/dev/rbd/rbd/vm-150-disk-0': No such file or directory
TASK ERROR: start failed: QEMU exited with code 1
I suspect this issue has...
Like a dummy I accidentally upgraded to the Ceph dev branch (Quincy?), and have had nothing but trouble since.
This wasn't actually intentional; I was trying to implement a PR that was expected to bring my cluster back online after the upgrade to v7 (and Ceph Pacific).
--> It did...
I had this as a comment in another thread, but moved it here to its own thread.
I have a three-node Ceph cluster whose presumed slow throughput I am diagnosing.
Each node has a ConnectX-3 Pro 2-port QSFP+ card. While I was running them via ib_ipoib, in an attempt to get past the low...
With regard to:
https://forum.proxmox.com/threads/ceph-read-performance.25785/page-3
I don't see parallelization of disk IO on the roadmap, but recognize that it would substantially benefit small to medium sized Ceph clusters, which tend to have low single-threaded IO.
Currently, I've...
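As a sketch of the kind of client-side parallelism meant here, a minimal Python example follows. The read function is a stand-in, not a Ceph API; with a real backend, each worker's wait time overlaps with the others', which is where the throughput gain on low single-threaded-IO clusters would come from.

```python
from concurrent.futures import ThreadPoolExecutor

def fetch_all(read_fn, keys, workers=8):
    # Issue the reads concurrently instead of one at a time; pool.map
    # preserves the order of `keys` in the returned results.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(read_fn, keys))

# Usage with a stand-in read function:
print(fetch_all(lambda k: f"chunk-{k}", range(4)))
```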
Hey, I've sort of ignored this health warning for a long while, but decided to take a crack at fixing it.
I recognize the super common Python error, but upon review, my Python installation(s) do indeed have requests installed, as shown below...
I believe my use of anaconda3 for virtual...
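One quick way to untangle this kind of multiple-installation confusion is to check, from the exact interpreter in question, where the module resolves. This is a generic diagnostic sketch, not anything from the original post:

```python
import importlib.util
import sys

# Print which interpreter is running and where 'requests' resolves from.
# If system tooling invokes the system python3 while PATH points at an
# anaconda environment (or vice versa), the module can be installed in
# one interpreter and missing in the other.
print("interpreter:", sys.executable)
spec = importlib.util.find_spec("requests")
print("requests:", spec.origin if spec else "NOT importable here")
```

Running this with both the system `python3` and the anaconda one should show whether the two interpreters see different site-packages.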
Hello,
I seem to be having trouble with failover / redundancy that I am hoping someone in the community might be able to help me understand.
I have a four-node cluster, on which I am working to ensure high availability of the VMs and containers being managed.
This is a hobby cluster in my...
Feature Request: SMR Capable Linear Writes
Loving what proxmox is up to with PBS!
One feature that would be immensely helpful is the ability to safely use SMR drives.
To be transparent, I simply have some I would love to use, but the use-case for enterprise is clear...
See:
Simply changing the noted command to 192.168.2.0/24 returns:
ip: '192.168.2.22'
However, this task error is appearing in the GUI, and I can't see how to update the command.
Help!
Hello,
I am looking at reworking my Ceph setup. I have 26 8TB SAS drives, and independently I have 16 inexpensive consumer SSDs, spread uniformly across two models.
Before I go through the setup, I wanted to double check if it makes sense to use these ssds as bluestore db+wal devices, or if I am...
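For a back-of-the-envelope check on whether the SSDs are even large enough, here is a small sizing sketch. The ~4% figure is the commonly cited BlueStore block.db guideline (worth verifying against the current Ceph docs); the drive counts come from the post above.

```python
# Rough sizing sketch for sharing SSDs as block.db devices.
HDD_TB = 8    # data-device size from the post
HDDS = 26     # spinning OSD candidates
SSDS = 16     # consumer SSDs available

db_gb = HDD_TB * 1000 * 0.04     # ~4% of each data device, in GB
osds_per_ssd = -(-HDDS // SSDS)  # ceiling division: OSDs sharing one SSD

print(f"suggested block.db per OSD: ~{db_gb:.0f} GB")
print(f"OSDs per SSD (spread evenly): {osds_per_ssd}")
```

So each SSD would need roughly two 320 GB slices under that guideline, which is a useful sanity check before committing inexpensive consumer drives to the role.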
I have several 8TB Seagate backup drives, purchased both before the SMR news broke in the drive world and before I knew anything about SMR drives.
As I am about to install a proxmox backup server at my home cluster, I wanted to confirm if SMR drives would work as storage.
Putting this...
Here's a google sheet full of my testing:
https://docs.google.com/spreadsheets/d/1JJhZqxVbF7KsF_uLEOd7tOf-cCyV24mY_4Uc3vOeVeY/edit#gid=194361085
When copying files to or from anywhere, I am limited to ~350MB/s transfer speed.
By anywhere, I mean:
To and From a Hardware Raid - 8 (cheap...
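For comparison against numbers like those in the sheet, a crude single-stream copy benchmark can be sketched in a few lines of Python. This is purely illustrative and no substitute for fio or the spreadsheet's methodology; the buffer and file sizes are arbitrary choices:

```python
import os
import tempfile
import time

def copy_throughput(mib=32, bs=1 << 20):
    # Write `mib` MiB of random data to a temp file in `bs`-byte chunks,
    # fsync so the OS actually flushes it, and report the write rate.
    data = os.urandom(bs)
    with tempfile.NamedTemporaryFile(delete=False) as f:
        path = f.name
        t0 = time.perf_counter()
        for _ in range(mib):
            f.write(data)
        f.flush()
        os.fsync(f.fileno())
        t1 = time.perf_counter()
    os.unlink(path)
    return mib / (t1 - t0)  # MiB/s

print(f"write: {copy_throughput():.0f} MiB/s")
```

Pointing the temp file at each storage target in turn (hardware RAID, Ceph mount, etc.) would show whether the ~350MB/s ceiling follows the storage or the client.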