Hello everyone,
I've been testing ZFS on NVMe drives.
Installed Proxmox on two NVMe drives:
Their serial numbers are:
7VQ09JY9
7VQ09H02
Filesystem: ZFS RAID1 (mirroring)
Rebooted into the installed Proxmox and checked:
zpool status -L
pool: rpool
state: ONLINE
config:
NAME STATE READ WRITE...
I'm new to ZFS on Proxmox.
One of my nodes has:
- 250 GB boot SSD (with /boot, local and local-lvm on it)
- 1 TB SSD with ZFS (added post-install), with one
Can I add another disk to simply extend the available space in the pool (without any extra redundancy; I have PBS-based backups)?
Am I correct that I...
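For what it's worth, extending a pool with a single-disk vdev is done with zpool add; a minimal sketch, assuming the pool is named tank and the new disk path is a placeholder (check zpool status for your actual pool name first):

```shell
# Confirm the current pool layout before changing anything
zpool status tank

# Add the new disk as a single-disk top-level vdev.
# Data striped onto it has no redundancy, and on most ZFS versions
# a top-level vdev cannot be removed again, so double-check the device path.
zpool add tank /dev/disk/by-id/ata-EXAMPLE_NEW_DISK
```

Using /dev/disk/by-id paths rather than /dev/sdX avoids surprises when device names shuffle between boots.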
Hey, I've become a huge Proxmox fan. I currently have two servers, one master and one slave; they are not configured as an HA cluster.
They both run independently of each other, but the main server backs up the VMs and LXCs to the slave once a night via ZFS sync...
Storage noob here. I am building a new single-node Proxmox server on a 2U chassis that has nine 3.5-inch 4 TB HDDs and six 2.5-inch 800 GB SSDs beyond what is used for Proxmox's boot image. The server will be running a mix of stateful container workloads, databases for stateless containers, and VMs for...
I have a DL380p Gen8 with a P420i controller in HBA mode. Two SSDs (OS) and six HDDs (data) are installed. I tried to install Proxmox on the two SSDs in RAID 1 (ZFS). I was able to select the two drives and started the installation, but I got the following error message: unable to create zfs...
Hello everyone,
I wanted to try the zpool-attach feature, which allows attaching a disk to an existing raidz config:
If the existing device is a RAID-Z device (e.g. specified as "raidz2-0"), the new device will become part of that RAID-Z group. A "raidz expansion" will be initiated, and once the...
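As the man page excerpt above describes, the expansion is kicked off by attaching the new disk to the raidz vdev itself; a sketch, assuming a pool named tank with a raidz2-0 vdev and a placeholder disk path:

```shell
# Attach a new disk to the existing raidz2 vdev; this starts a
# "raidz expansion" that rewrites the stripe layout in the background
zpool attach tank raidz2-0 /dev/disk/by-id/ata-EXAMPLE_NEW_DISK

# Monitor progress of the expansion
zpool status tank
```

Note that raidz expansion requires a recent OpenZFS release, and existing data keeps its old parity ratio until rewritten.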
Hello there,
I have some raw template images (e.g. debian11.img) that I want to import into my Proxmox VE instance to use as templates for creating KVM guests. I use a ZFS volume, local-zfs.
When I searched the web, I found this thread from 2015...
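The modern route (rather than the 2015-era workaround) is qm importdisk; a sketch, assuming a free VM ID of 9000 and the image in the current directory (both placeholders):

```shell
# Create an empty VM to hold the template (name/ID/settings are examples)
qm create 9000 --name debian11-tmpl --memory 2048 --net0 virtio,bridge=vmbr0

# Import the raw image onto the ZFS-backed storage;
# it shows up on the VM as an unused disk
qm importdisk 9000 debian11.img local-zfs

# Attach the imported disk and convert the VM into a template
qm set 9000 --scsi0 local-zfs:vm-9000-disk-0
qm template 9000
```

Cloning from the template then creates new KVM guests backed by local-zfs.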
Hi all,
We have a storage cluster running with NVMe drives using ZFS in a RaidZ2 pool.
The pool is 84 TB, and while it is nice to see an 84 TB scrub finish in 39 minutes while reaching speeds of up to 2.1 GB/s, I would like to slow it down a bit, or at least do something so as not to let the VMs...
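A scrub can be reined in via the OpenZFS module parameters, or simply paused during busy hours; a sketch, assuming current parameter names (verify against zfs(4) for your installed version, as these tunables have changed across releases):

```shell
# Lower the per-vdev scan bandwidth limit (bytes per txg sync interval)
echo $((1 * 1024 * 1024)) > /sys/module/zfs/parameters/zfs_scan_vdev_limit

# Reduce concurrent scrub I/Os issued per vdev
echo 1 > /sys/module/zfs/parameters/zfs_vdev_scrub_max_active

# Alternatively, pause the scrub during business hours and resume later
zpool scrub -p rpool   # pause
zpool scrub rpool      # resume where it left off
```

The pause/resume approach is the least invasive if the goal is just keeping VM latency steady at certain times.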
I've been skating by on a 4-disk raidz1 vdev for a few years, and recently had a disk die on me, which has led me toward migrating to raidz2 for the extra redundancy. My current zpool has the following disk configuration:
root@r720-poweredge-1:~# zpool status
pool: SAS-7.2k
state...
I have two containers A and B.
Container A #101 has only a rootfs:
arch: amd64
cores: 1
hostname: A
memory: 2048
nameserver: 127.0.0.1
net0: name=eth0,bridge=vmbr0,firewall=1,gw=x.x.x.1,hwaddr=xx:xx:xx:xx:xx:xx,ip=x.x.x.7/24,type=veth
onboot: 1
ostype: alpine
rootfs...
Hi, I have a cluster with three nodes: pve1 (LVM), pve2 (LVM), and pve3 (ZFS). pve3 is installed on ZFS RAID10, and I added rpool as zfs-data storage in the Datacenter menu. pve1 is the main cluster node. Both pve1 and pve2 are configured with LVM. When I try to migrate from pve2 (LVM) to pve3 (ZFS), I see an error...
Hello everyone.
I have a storage server running TrueNAS, and alongside it I have other servers running Proxmox. What I need is for the disks of both the containers and the virtual machines to be stored on the NAS server.
The problem is: if the share is NFS, I can create containers but not run...
Hello everyone,
I had a Proxmox 7 box running with 64 GB of RAM and a 32 GB ARC.
To switch to a volblocksize of 16k, I reinstalled Proxmox.
Now zfs_arc_max seems to be set to 8 GB.
Was the default changed in version 8?
The manual still says 50%...
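Whatever the installer default, the cap can be set back explicitly via the zfs_arc_max module parameter; a sketch, with 32 GiB as an example value (the path and parameter name are standard OpenZFS):

```shell
# Persist the limit across reboots (32 GiB in bytes)
echo "options zfs zfs_arc_max=34359738368" > /etc/modprobe.d/zfs.conf
update-initramfs -u -k all

# Or apply it immediately at runtime, no reboot needed
echo 34359738368 > /sys/module/zfs/parameters/zfs_arc_max
```

Since rpool is a root-on-ZFS pool, the initramfs rebuild matters: the module is loaded before the root filesystem is mounted.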
I would like to keep ZFS-based backups/snapshots at daily (1 month), weekly (5 weeks), and monthly (6 months) intervals on servers, so as to be able to readily roll back if the need arises.
How can this be accomplished in an automated fashion, so as to both achieve this and purge older...
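One common tool for exactly this is sanoid, which both takes snapshots and prunes old ones on a schedule; a sketch of /etc/sanoid/sanoid.conf, assuming a dataset named rpool/data (the dataset name is a placeholder, and the counts mirror the retention stated above):

```ini
[rpool/data]
        use_template = backup
        recursive = yes

[template_backup]
        daily = 31
        weekly = 5
        monthly = 6
        autosnap = yes
        autoprune = yes
```

zfs-auto-snapshot achieves much the same with per-interval cron jobs and a --keep count, if a lighter-weight option is preferred.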
Hi, I am trying to understand why the system behaves this way, to find out where the issue lies. System specifications and the steps taken in an attempt to fix it are below:
CPU: G4400T / I5 6500 (tried both)
RAM: 16GB (2x 8GB)
Motherboard: ASRock H110M-DGS R3.0 (BIOS: P7.4)
NetworkPVE: RTL8111 GbE...
Hello,
I have a ZFS dataset named "tank/work". I have passed it through to my LXC as a bind mount, so inside the LXC I can access the data under "/srv/work", including the snapshots and so on.
A snapshot is taken every 15 minutes with zfs-auto-snapshot. Great!
Now I would like...
I upgraded my hosts and removed all swap from the LXC guests, and this is the result.
Now I wonder if the problem was the code or the swap. I'm betting LXC doesn't like ZFS with swap.
Hi all!
I want to understand, at a high level, how to set up mirrored RAID using ZFS in Proxmox, and how to do it step by step. According to the documentation:
RAID1
Also called “mirroring”. Data is written identically to all disks. This mode requires at least 2 disks with the same size. The...
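The mirror described above can be sketched on the command line as follows (disk paths are placeholders; on Proxmox the same thing can be done in the GUI under Disks → ZFS):

```shell
# Create a mirrored pool from two equal-size disks
# (ashift=12 assumes 4K-sector drives)
zpool create -o ashift=12 tank mirror \
    /dev/disk/by-id/ata-DISK_A /dev/disk/by-id/ata-DISK_B

# Register the pool as Proxmox storage for VM disks and container rootfs
pvesm add zfspool tank-storage --pool tank --content images,rootdir

# Verify: both disks should show ONLINE under a mirror-0 vdev
zpool status tank
```

Writes then go to both disks identically, and the pool survives the loss of either one.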
Hello there.
Since I connected my nodes into a cluster, I've noticed this error on some Linux VMs:
I couldn't find any working solution for this. I suspect it has something to do with ZFS, since on the node where ZFS is not in use these VMs work without any issues.
Do you have any...