Looking to redo storage configuration

Nov 19, 2021
Greetings!

I have been running my PVE setup for about two years now. As I've learned more about the setup, available options, and my overall lack of understanding, I've realized that a lot of initial configuration settings I chose to implement were not ideal or best practices.

I recently noticed that one of my drives is wearing out faster than the other (currently at 5% wearout while the other is at 0%), even though both drives were bought and installed in the system brand new. Part of this, I'm sure, is due to the distribution of VMs, but I'm assuming a major contributor is the InfluxDB VM and its constant writes. I wanted to move that VM off to an older, smaller-capacity, dedicated drive. However, I soon realized that during my initial setup I had rationalized using that NVMe drive as a dedicated swap drive. This is where I think I've made incorrect decisions and configurations due to my early lack of experience and understanding.
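(For reference, I have been checking the wear figures roughly with the commands below; the exact attribute names vary by vendor, and this assumes smartmontools and nvme-cli are installed.)

Code:
# SATA/SAS SSD: the wear attribute name varies by vendor
# (e.g. Wear_Leveling_Count, Percent_Lifetime_Remain, Media_Wearout_Indicator)
smartctl -a /dev/sdd | grep -i -E 'wear|percent'

# NVMe: "Percentage Used" in the SMART/health log
smartctl -a /dev/nvme0n1
nvme smart-log /dev/nvme0n1 | grep -i percentage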

I am looking to reclaim the NVMe drive and reallocate its space. Here is the info I can readily provide:

Code:
lsblk

NAME                          MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sdd                             8:48   1 931.5G  0 disk
├─sdd1                          8:49   1  1007K  0 part
├─sdd2                          8:50   1   512M  0 part
└─sdd3                          8:51   1   931G  0 part
  ├─pve-swap                  253:0    0     8G  0 lvm  [SWAP]
  ├─pve-root                  253:1    0    96G  0 lvm  /
  ├─pve-data_tmeta            253:2    0   8.1G  0 lvm 
  │ └─pve-data-tpool          253:32   0 794.8G  0 lvm 
  │   └─pve-data              253:33   0 794.8G  1 lvm 
  └─pve-data_tdata_corig      253:5    0 794.8G  0 lvm 
    └─pve-data_tdata          253:19   0 794.8G  0 lvm 
      └─pve-data-tpool        253:32   0 794.8G  0 lvm 
        └─pve-data            253:33   0 794.8G  1 lvm 
sde                             8:64   1 931.5G  0 disk
....
├─VMs-vm--145--disk--0        253:8    0    16G  0 lvm 
....
nvme0n1                       259:0    0 238.5G  0 disk
├─pve-CacheDataLV_cpool_cdata 253:3    0   225G  0 lvm 
│ └─pve-data_tdata            253:19   0 794.8G  0 lvm 
│   └─pve-data-tpool          253:32   0 794.8G  0 lvm 
│     └─pve-data              253:33   0 794.8G  1 lvm 
└─pve-CacheDataLV_cpool_cmeta 253:4    0  13.3G  0 lvm 
  └─pve-data_tdata            253:19   0 794.8G  0 lvm 
    └─pve-data-tpool          253:32   0 794.8G  0 lvm 
      └─pve-data              253:33   0 794.8G  1 lvm

Based on the lack of a pve-swap volume on that drive, I'm pretty sure I failed in my original intention of making it dedicated swap. It appears the whole drive has instead been pulled into the pve volume group and tied to the data thin pool. I'd like to remove the drive from that association and reconfigure it as dedicated storage for VMs-vm--145--disk--0, which is my InfluxDB LXC.
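In case it helps, I believe something like this would show exactly how those CacheDataLV volumes are tied to the data pool (I can post the output if useful):

Code:
lvs -a -o name,vg_name,segtype,lv_size,devices pve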

What would be the best way to go about that? What am I not considering before attempting this reconfiguration?
 
I have been running my PVE setup for about two years now. As I've learned more about the setup, available options, and my overall lack of understanding, I've realized that a lot of initial configuration settings I chose to implement were not ideal or best practices.
Yes, we all have been there.

However, I soon realized that during my initial setup I had rationalized using that NVMe drive as a dedicated swap drive.
It looks like you have used it as a caching device instead. That is also very good with respect to performance, yet it introduces another single point of failure (SPOF).
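Roughly, freeing the NVMe back up could look something like the sketch below. This is untested, the names (the nvme VG, nvmethin pool and nvme-thin storage ID) are just placeholders, and you should confirm with lvs -a which LV actually carries the cache before running anything. Make sure you have working, tested backups first.

Code:
# 1) Flush and drop the cache from the thin pool (can take a while, since dirty
#    cache blocks have to be written back; target pve/data or pve/data_tdata,
#    whichever lvs -a reports with a cache segment type)
lvconvert --uncache pve/data

# 2) Remove the now-unused NVMe PV from the pve volume group
vgreduce pve /dev/nvme0n1
pvremove /dev/nvme0n1

# 3) Give the NVMe its own VG with a thin pool and register it as PVE storage
pvcreate /dev/nvme0n1
vgcreate nvme /dev/nvme0n1
lvcreate -l 95%FREE -T nvme/nvmethin
pvesm add lvmthin nvme-thin --vgname nvme --thinpool nvmethin --content rootdir,images

# 4) Move the container's root disk onto the new storage (container stopped)
pct move_volume 145 rootfs nvme-thin

Afterwards you could also carve a small swap LV out of the new VG if you still want the dedicated swap you originally had in mind.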