Sorry for reposting; my last post got stuck in "Awaiting approval before being displayed publicly."
I'm running Proxmox on two 512 GB NVMe sticks in a ZFS RAID1 (mirror) pool (ashift=12). I have written a small script that measures the number of writes per hour (in MB) using the smartctl command (src: GitHub).
I have a...
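For anyone wanting to reproduce such a measurement: a minimal Python sketch, assuming smartctl's NVMe output format and the NVMe convention that one "data unit" is 1,000 512-byte blocks (512,000 bytes). The function names are mine for illustration, not from the original script.

```python
import re

# Per the NVMe spec, "Data Units Written" counts units of
# 1,000 512-byte blocks, i.e. 512,000 bytes each.
BYTES_PER_DATA_UNIT = 512_000

def parse_data_units_written(smartctl_output: str) -> int:
    """Extract the raw 'Data Units Written' counter from `smartctl -a` output."""
    m = re.search(r"Data Units Written:\s*([\d,]+)", smartctl_output)
    if not m:
        raise ValueError("Data Units Written not found")
    return int(m.group(1).replace(",", ""))

def mb_written(units_before: int, units_after: int) -> float:
    """MB written between two samples (sample one hour apart for MB/h)."""
    return (units_after - units_before) * BYTES_PER_DATA_UNIT / 1_000_000

# Two hypothetical samples taken an hour apart:
sample_t0 = "Data Units Written:                 1,000,000 [512 GB]"
sample_t1 = "Data Units Written:                 1,002,000 [513 GB]"
delta = mb_written(parse_data_units_written(sample_t0),
                   parse_data_units_written(sample_t1))
print(f"{delta:.0f} MB written")  # -> 1024 MB written
```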
My new install of PBS (latest 2.x) had been running fine for a few weeks, but in the last few days it suddenly became unavailable in PVE.
Upon rebooting the PBS, I realised my rpool was gone.
There have been no warnings about degradation, nothing about I/O delays, nothing.
Please see screen...
I think everything runs smoothly, but I would like to get confirmation that it really is OK. The setup has flash devices (Samsung and Kioxia NVMe, specifically) running directly on PCIe (i.e., without any kind of RAID controller, although a RAID controller + NVMe setup seems to be a rare case...
I feel like I'm not the first to need this, but I've searched forums, Reddit, and the internet as a whole to figure this out and just can't.
I've got four disks I've just pulled from a TrueNAS system and put into my Proxmox box. I was able to import the filesystem using zpool...
Today I found out that one of the HDDs in my zpool might be faulty. The scrub is running as we speak. The status shows 12 read and 9 checksum errors. I'd already replaced one of the drives a year ago because of this. The drives are Western Digital GOLD 8004FRYZ. SMART says PASSED, and dmesg...
I rebooted PVE and my pool was gone. I imported it with `zpool import tank`, but after that I get:
root@pve:~# zpool status tank
status: One or more devices could not be used because the label is missing or
invalid. Sufficient replicas exist for the...
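For anyone scripting around this state: a small Python sketch that flags devices in problem states from `zpool status` text. The state names checked (DEGRADED, FAULTED, UNAVAIL, OFFLINE, REMOVED) are standard ZFS states; the column-layout parsing is my assumption, not taken from the post.

```python
# Standard ZFS problem states a device or vdev can be in.
PROBLEM_STATES = {"DEGRADED", "FAULTED", "UNAVAIL", "OFFLINE", "REMOVED"}

def unhealthy_devices(status_text: str) -> list[tuple[str, str]]:
    """Return (name, state) pairs for pool members not in ONLINE state."""
    results = []
    for line in status_text.splitlines():
        parts = line.split()
        # Device lines look like: "  sdb  UNAVAIL  0  0  0  corrupted data".
        # Skip header lines such as "state: DEGRADED" (first token ends in ':').
        if (len(parts) >= 2 and not parts[0].endswith(":")
                and parts[1] in PROBLEM_STATES):
            results.append((parts[0], parts[1]))
    return results

sample = """\
  pool: tank
 state: DEGRADED
config:
        NAME        STATE     READ WRITE CKSUM
        tank        DEGRADED     0     0     0
          mirror-0  DEGRADED     0     0     0
            sda     ONLINE       0     0     0
            sdb     UNAVAIL      0     0     0  corrupted data
"""
print(unhealthy_devices(sample))
# -> [('tank', 'DEGRADED'), ('mirror-0', 'DEGRADED'), ('sdb', 'UNAVAIL')]
```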
Hello all, I am running a Proxmox server on version 7-1.2. A few days ago I experienced a power outage which caused the system to fail at boot. When I connected a monitor, I saw that ZFS triggered a kernel panic and the boot got stuck.
After that I booted into the recovery kernel, and...
So (again) a disk went bad ("FAULTED") in a ZFS pool.
The server is an HP ProLiant MicroServer Gen8 with 4 x 4 TB drives; the bad/new disk was/is the first/leftmost one in the row. No hardware RAID is configured, and the bays are not hot-plug.
I removed the faulted HD ("zpool offline").
The system boots fine with 3 HDs.
I have a Proxmox node running version 7.1-7 with 4 SSDs and 2 HDDs configured as ZFS storage. Everything was working perfectly until yesterday, when I rebooted the node (this was not my first reboot since setup), and now one of the SSD storages, named 'zfs_ssd_1', is not activated. The GUI shows that...
Virtual Environment 7.2-3
I have a home server that has been running Proxmox without any major issues for the past 1-2 years or so. I have two 4 TB HDDs in a ZFS mirror for a total of ~4 TB usable storage, and this is where I keep all of my VMs, ISOs, etc. Proxmox itself is installed on a...
Hello everyone, while setting up a new Proxmox node (7.3) I noticed the following behavior.
When creating the ZFS pool, the Proxmox installer ideally already uses the NVMe NGUID instead of the labels.
Afterwards, additional spare devices added after the installation are likewise via...
My current NAS setup has Proxmox VE running on bare metal and TrueNAS Scale running in a VM.
I created a zfs pool "appspool" from the UI: Datacenter -> Storage -> Add -> ZFS
I then created a TrueNAS Scale VM and passed the disk through with `qm set 900 --scsi2`...
I have an old-ish HP ProLiant MicroServer Gen8. It has 4x 3.5" bays (currently populated with 2x WD Reds = /dev/sda, /dev/sdb) + 1x old laptop 2.5" HDD (Proxmox OS only = /dev/sdc).
My setup is that I have the Proxmox installed on the single 2.5" HDD, where the whole disk was...
Hello gents, I would like to ask you for advice after hours of googling without success.
After restarting my home server while playing with a VPN, my one-month-old SSD, which I am using as a NAS (I know: one disk, NAS, ZFS... who does this?), stopped working.
I was unable to access the storage, so...
While working through this problem here, I realized that I needed to fix the layout of the partition in my rpool before proceeding.
/dev/sdb3 and /dev/sda3 are now mirrored partitions (correct), but sda2 should be an EFI boot partition. I'm trying to remove sda2 from the rpool so
What is the correct way to set up local ZFS pools on each node? I am not looking to share them or make them HA across nodes; I just want to use a local pool on each node for VMs, and use my shared storage (NFS) for HA across the cluster.
I have the cluster set up, and when I add a zpool to a node, it looks...
I have a simple problem: I have a Windows Server 2019 VM.
Space used on the VM's disk: 179 GB
Size of the VM disk: 1.65 TB
When I back up the VM with Proxmox, my backup size is 919 GB. Why?
I cannot defragment the disk in the VM.
For information, I use a zpool storage for the disk...
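One common explanation (an assumption here, not confirmed by the post): blocks the guest wrote and later deleted stay allocated on the storage until the guest issues TRIM/discard, and the backup has to read every allocated block, not just the currently used ones. A toy Python sketch of that accounting, with illustrative numbers:

```python
# Toy accounting for a thin-provisioned VM disk (illustrative numbers only;
# real figures depend on compression, discard support, and write history).
def allocated_gb(currently_used_gb: float,
                 written_then_deleted_gb: float,
                 trimmed_gb: float) -> float:
    """What the storage still holds: live data plus deleted-but-untrimmed data."""
    return currently_used_gb + written_then_deleted_gb - trimmed_gb

# The guest reports 179 GB used, but if ~740 GB was once written and deleted
# without discard ever reaching the storage, the backup still reads it all:
print(allocated_gb(179, 740, 0))  # -> 919.0
```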
So what I wanted to achieve is one central master VM disk, with multiple VMs using it in read-only mode.
My base VM, say VM 100, has Windows 10 and lots of other software installed, and my other VMs, VM 101 to VM 105, use the same disk image as the base Windows VM...
Here is a little Sunday morning story with a question regarding datastore emulation:
My homelab is growing, while I am evaluating options to shrink it down for power-consumption reasons. Growth is not always linear, and with different new (in this case: used) hardware come new...