Hello Everyone,
I've searched through the forums here for a bit and have determined that I am having a space issue on my Proxmox host drive.
This is not an inode issue, as far as I can tell.
These drives are 2x 256GB SSDs running in a ZFS pool - ever since I upgraded to a recent Proxmox version to...
Hello All,
I've been working with Proxmox for a few weeks now, so please treat me as a beginner.
Coming to the point: whenever I try to create a new LV, a snapshot, or a backup, I get warnings...
WARNING: You have not turned on protection against thin pools running out of space...
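For reference, a minimal sketch of how this warning is usually addressed, assuming the default /etc/lvm/lvm.conf location; the threshold and percent values below are illustrative, not taken from this thread:

# /etc/lvm/lvm.conf (activation section)
activation {
    # start autoextending the thin pool once it reaches 80% usage...
    thin_pool_autoextend_threshold = 80
    # ...growing it by 20% of its current size each time
    thin_pool_autoextend_percent = 20
}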
Hello,
I have 3 drives of 279GiB each, installed as a raidz1. The usable space should therefore be ~558GiB (2x 279GiB), but I am only able to use around 393GiB.
If I look at Storage -> My Storage -> Summary, I can see that I am using almost 100% of the storage space, but there it says 536...
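A hedged sketch for narrowing this down (the pool name 'mypool' is a placeholder): zpool list reports raw capacity including parity, while zfs list reports what is actually writable after parity, slop space, and reservations, so the two numbers are expected to differ:

zpool list mypool                        # raw size: 3 x 279GiB, parity included
zfs list -o space mypool                 # actually usable space after parity
zfs get refreservation,quota -r mypool   # reservations that can hide capacity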
Dear,
We are running a Proxmox server on an HP Z600; there is a card with one M.2 5xxGB drive connected.
There are still 3 free slots for additional M.2 drives. When I insert a second one, the existing storage becomes unavailable in Proxmox, and my VMs are running on that storage.
How can I resolve this issue?
Thanks in...
Hello everyone,
I have an issue regarding ZFS and Proxmox storage.
I have 4x 4TB SATA disks installed as a ZFS RAIDZ1. Proxmox tells me that 10.24TiB are available, yet I can't create a 10TiB VM disk on it.
To show you where I think the problem comes from, I worked out the following...
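One common cause (an assumption here, since the post is cut off) is RAIDZ parity and padding overhead on zvols, which makes a 10TiB disk need more than 10TiB of pool space; a sketch for checking and for creating the disk thin-provisioned, with 'rpool/data/vm-100-disk-0' as a placeholder dataset name:

zfs get volblocksize rpool/data/vm-100-disk-0   # small blocks waste space on RAIDZ
# a sparse (thin) zvol skips the up-front refreservation for the full 10TiB:
zfs create -s -V 10T rpool/data/vm-100-disk-0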
Now I am planning to buy a new server dedicated to PBS, to back up around 50-100 VMs.
We keep growing, and I would like to add storage on demand.
Can I easily add storage to the main backup pool?
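Assuming the datastore sits on a ZFS pool (the post doesn't say), growing it is usually just a matter of adding another vdev; the pool and device names below are placeholders:

zpool add backup-pool mirror /dev/sdc /dev/sdd   # attach a new mirrored vdev
zpool list backup-pool                           # added capacity shows up immediately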
I have had some weird behaviour.
A couple of times I have gotten exclamation marks on the node and VMs of the Proxmox node my PBS was running from. I think this had to do with exceeding the allocated hard drive space, resulting in an I/O error.
I freed up some space on that particular...
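If the datastore did fill up, pruning old snapshots and then running garbage collection is the usual way to reclaim space; 'store1' is a placeholder datastore name:

proxmox-backup-manager garbage-collection start store1    # reclaim unreferenced chunks
proxmox-backup-manager garbage-collection status store1   # watch progress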
Hi everyone,
I am trying to figure out how to configure two separate clusters which I need to run. Each cluster will contain 5 nodes - 4 Dell R720s and a Supermicro 36-bay server for the storage - making it 8 Dells and 2 Supermicros in total. The Supermicros will house 20 x 14TB SATA drives per...
Hello everyone,
I have a 3-node cluster with 3 OSDs in each of these nodes.
My Ceph version is: 14.2.1 (9257126ffb439de1652793b3e29f4c0b97a47b47) nautilus (stable).
The pool has replica 3/2 with 128 PGs. As soon as I restore a VM from a backup stored on an NFS share, the...
Hello,
In my cluster consisting of 4 OSD nodes there is an HDD failure.
This currently affects 31 disks.
Each node has 48 HDDs of 2TB each connected.
This results in this crushmap:
root hdd_strgbox {
	id -17		# do not change unnecessarily
	id -19 class hdd	# do not change...
I just recently upgraded the drives in my ZFS pool. In the shell, zpool shows the correct amount of available space (about 11TB, 4x 4TB in raidz2), but Proxmox is only showing just under 7TB of available space in the pool. Is there a way to refresh or reset this? Because now my virtual...
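A sketch of the usual expansion steps after swapping in larger disks, with 'tank' and the device path as placeholders; until each replaced device is expanded, the pool keeps reporting the old size:

zpool set autoexpand=on tank     # let the pool grow into the new space automatically
zpool online -e tank /dev/sda    # or expand each replaced device in place
zpool list tank                  # SIZE should now reflect the larger disks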
I have 3 nodes with 2x 1TB HDDs and 2x 256GB SSDs each.
I have the following configuration:
One SSD is used as the system drive (LVM-partitioned, so about a third is used for the system partition and the rest is split into 2 partitions for the 2 HDDs' WALs).
The 2 HDDs are in a pool (the default...
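For context, a sketch of how an OSD with its WAL on an SSD partition is typically created on Proxmox; the device paths below are placeholders:

pveceph osd create /dev/sdb --wal_dev /dev/sda4   # HDD holds the data, WAL goes to the SSD partition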