Hello everyone,
I think by now I roughly know what I would need to do, but I'm not 100% sure, so I'd like to use my forum lifeline.....
I've been playing around a lot with Proxmox and ZFS over the past few weeks, and I now have a pool where, right at the start, I made a stupid...
Hi there,
Current Situation:
We have a Nextcloud install with about 1000 users (though many are inactive). Everything currently runs on a Debian bare-metal server, with Caddy as a reverse proxy and Nextcloud (and other services) in Docker behind it. Our current server (4 cores, 32 GB RAM, no...
I have set up my volume to be thin-provisioned. I am trying to snapshot a VM to run updates, but when I do, Proxmox says I am out of space.
TASK ERROR: zfs error: cannot create snapshot 'VMStorage/vm-101-disk-2@Test': out of space
Here is the output of zfs list:
NAME...
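On thin-provisioned zvols, this error is often caused by a leftover `refreservation` rather than a genuinely full pool. A rough way to check, using the dataset names from the error above (the final `zfs set` is only appropriate if the disk really is meant to be thin):

```shell
# Show how much space the pool and the zvol actually consume/reserve
zfs list -o space VMStorage
zfs get refreservation,volsize,used VMStorage/vm-101-disk-2

# If refreservation is set but the volume is meant to be thin, dropping it
# lets the snapshot succeed -- at the usual thin-provisioning risk that the
# pool can later run out of space underneath the zvol.
zfs set refreservation=none VMStorage/vm-101-disk-2
```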
I have an HP DL380p Gen8 server with an H220 HBA card installed, because I want to pass my disks directly through to the operating system. Installing Proxmox in ZFS RAID 1 works without problems, and all disks are detected; however, when I then ... the server...
Situation:
- ZFS on NVMe, 1 GB/s NVMe read speed
- ARC (in RAM) size about 120 GB, fully warmed
pve-manager/8.1.4 (running kernel: 6.5.13-1-pve)
CPU: E5-2683 v4
This CPU has very poor per-core performance. This leads to:
- Windows 10 VM, VirtIO SCSI single, iothread, 128K block size, cache=none ->...
I am thinking about which ZFS configuration to use. I think RAIDZ2 would be great because it tolerates two failed drives, but some people were saying not to use it with more than 10 drives. Do you think 12 drives would still be fine?
Thank you!
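For context, the usual alternative to one wide vdev is splitting the same disks into two narrower RAIDZ2 vdevs. A sketch with placeholder device names (the pool name `tank` and the `/dev/disk/by-id/` paths are made up):

```shell
# Option A: one 12-wide raidz2 vdev (maximum usable capacity,
# but slower resilvers and the IOPS of a single vdev)
zpool create tank raidz2 \
  /dev/disk/by-id/disk01 /dev/disk/by-id/disk02 /dev/disk/by-id/disk03 \
  /dev/disk/by-id/disk04 /dev/disk/by-id/disk05 /dev/disk/by-id/disk06 \
  /dev/disk/by-id/disk07 /dev/disk/by-id/disk08 /dev/disk/by-id/disk09 \
  /dev/disk/by-id/disk10 /dev/disk/by-id/disk11 /dev/disk/by-id/disk12

# Option B: two 6-wide raidz2 vdevs (less usable space, but roughly
# double the IOPS and each resilver only involves 6 drives)
zpool create tank \
  raidz2 /dev/disk/by-id/disk01 /dev/disk/by-id/disk02 /dev/disk/by-id/disk03 \
         /dev/disk/by-id/disk04 /dev/disk/by-id/disk05 /dev/disk/by-id/disk06 \
  raidz2 /dev/disk/by-id/disk07 /dev/disk/by-id/disk08 /dev/disk/by-id/disk09 \
         /dev/disk/by-id/disk10 /dev/disk/by-id/disk11 /dev/disk/by-id/disk12
```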
I have 4 18 TB Seagate IronWolf Pro drives and I was wondering if I could use Exos drives of the same size together with them? They seem to be basically the same kind of drive, but I just want to make sure. My use case is running virtual machines. Thank you
Hi folks, quick question:
I have an (unprivileged) LXC with Nextcloud running on a ZFS pool, and I recently noticed that the Proxmox host has direct access to the data inside the LXC container.
(No mount points are involved.)
Is there a way to prevent that?
Regards, Mike
Hello everyone
I have been testing ZFS on NVMe drives.
Installed Proxmox on 2 NVMe drives:
They are SN:
7VQ09JY9
7VQ09H02
Filesystem: ZFS RAID 1 (mirroring)
Rebooted into the installed Proxmox and checked:
zpool status -L
pool: rpool
state: ONLINE
config:
NAME STATE READ WRITE...
I'm new to ZFS on Proxmox.
One of my nodes has:
- 250 GB boot SSD (with /boot, local and local-lvm on it)
- 1 TB SSD with ZFS (added post-install), with one
Can I add another disk just to extend the available space in the pool (without any extra redundancy; I have PBS-based backups)?
Am I correct that I...
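Assuming the pool is called `tank` (the actual pool name wasn't shown), adding a single-disk vdev does exactly that: the new disk is striped into the pool with no redundancy. A sketch:

```shell
# Adds the disk as a new top-level vdev; data is striped across vdevs.
# With no redundancy, losing either disk loses the whole pool,
# so this only makes sense with solid backups (e.g. PBS).
zpool add tank /dev/disk/by-id/ata-NEWDISK-SERIAL

# The new disk shows up as its own top-level vdev
zpool status tank
```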
Hey, I've become a huge Proxmox fan. I currently have 2 servers, one master and one slave; they are not configured as an HA cluster.
They both run independently of each other, but once a night the main server backs up the VMs and LXCs to the slave via ZFS sync...
Storage noob here. I am building a new single node proxmox server on a 2U server that has 9 3.5in 4TB HDDs and 6 2.5in 800GB SSDs beyond what is used for Proxmox's boot image. The server will be running a mix of stateful container workloads, databases for stateless containers, and VMs for...
I have a dl380p gen8 with a p420i controller in hba mode. 2 SSDs (OS) and 6 HDDs (DATA) are installed. I tried to install Proxmox on the two SSDs in RAID 1 (ZFS). I was able to select the two hard drives and started the installation, but I got the following error message: unable to create zfs...
Hello everyone,
I wanted to try the zfs attach feature, which allows attaching a disk to an existing raidz config:
If the existing device is a RAID-Z device (e.g. specified as "raidz2-0"), the new device will become part of that RAID-Z group. A "raidz expansion" will be initiated, and once the...
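Assuming OpenZFS 2.3 or newer (where raidz expansion landed) and placeholder device/pool names, the invocation described by that man page excerpt looks like:

```shell
# Attach a new disk to the existing raidz2-0 vdev; this kicks off a
# raidz expansion that rewrites existing data across the wider vdev.
zpool attach tank raidz2-0 /dev/disk/by-id/ata-NEWDISK-SERIAL

# Progress of the expansion is reported here
zpool status tank
```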
Hello there,
I have some raw template images (e.g. debian11.img) that I want to import into my Proxmox VE instance to use as templates for creating KVM VMs. I use a ZFS volume, local-zfs.
When I searched the web, I found this thread from 2015...
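One common approach (VMID 9000 and the VM options below are just placeholders) is to create an empty VM, import the raw image into local-zfs, attach the resulting disk, and convert the VM to a template:

```shell
# Create an empty VM shell (VMID 9000 is arbitrary)
qm create 9000 --name debian11-template --memory 2048 --net0 virtio,bridge=vmbr0

# Import the raw image as a disk on the ZFS storage
qm importdisk 9000 debian11.img local-zfs

# Attach the imported disk, make it bootable, and mark the VM as a template
qm set 9000 --scsi0 local-zfs:vm-9000-disk-0 --boot order=scsi0
qm template 9000
```

New VMs can then be created as linked clones of template 9000.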
Hi all,
We have a storage cluster running with NVMe drives, using ZFS in a RAIDZ2 pool.
The pool is 84 TB, and while it is nice to see an 84 TB scrub finish in 39 minutes while reaching speeds of up to 2.1 GB/s, I would like to slow it down a bit, or at least do something to not let VMs...
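Scrub throughput can be reined in with OpenZFS module tunables; the right values depend on the hardware, so treat these as examples rather than recommendations:

```shell
# Cap the amount of data queued for scanning per vdev (bytes per txg)
echo $((4 * 1024 * 1024)) > /sys/module/zfs/parameters/zfs_scan_vdev_limit

# Allow only one concurrent scrub I/O per vdev so guest I/O takes priority
echo 1 > /sys/module/zfs/parameters/zfs_vdev_scrub_max_active

# To persist across reboots, set the same values in /etc/modprobe.d/zfs.conf:
#   options zfs zfs_scan_vdev_limit=4194304 zfs_vdev_scrub_max_active=1
```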
I've been skating by on a 4-disk raidz1 device for a few years, and recently had a disk die on me, which has led me toward migrating to raidz2 for the extra redundancy. My current zpool has the following configuration of disks:
root@r720-poweredge-1:~# zpool status
pool: SAS-7.2k
state...
I have two containers A and B.
Container A #101 has only a rootfs:
arch: amd64
cores: 1
hostname: A
memory: 2048
nameserver: 127.0.0.1
net0: name=eth0,bridge=vmbr0,firewall=1,gw=x.x.x.1,hwaddr=xx:xx:xx:xx:xx:xx,ip=x.x.x.7/24,type=veth
onboot: 1
ostype: alpine
rootfs...
Hi, I have a cluster with three nodes: pve1(lvm)-pve2(lvm)-pve3(zfs). pve3 is installed on ZFS RAID10, and I added rpool as zfs-data storage in the Datacenter menu. pve1 is the main cluster node. Both pve1 and pve2 are configured with LVM. When I try to migrate from pve2(lvm) to pve3(zfs), I see the error...
Hello everyone.
I have a storage server running TrueNAS, and on the other hand I have other servers with Proxmox. What I need is for the disks of both the containers and the virtual machines to be stored on the NAS server.
But here's the problem: if the share is NFS, I can create containers but not run...
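For the NFS side, the export can be registered in Proxmox as shared storage via `pvesm`; a sketch, where the storage name, server address, and export path are all made up:

```shell
# Register the TrueNAS NFS export for both VM images and container root disks
pvesm add nfs truenas-nfs \
  --server 192.168.1.50 \
  --export /mnt/tank/proxmox \
  --content images,rootdir

# The new storage should appear as active
pvesm status
```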