Reading up on this, I've done a
zpool clear rpool
and got this:
ZFS has finished a resilver:
eid: 478916
class: resilver_finish
host: node01
time: 2020-04-14 08:38:33+0200
pool: rpool
state: ONLINE
scan: resilvered 118M in 0 days 00:00:41 with 0 errors on Tue Apr 14 08:38:33...
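If it helps anyone else, a minimal way to double-check the pool after the clear (standard zpool commands; rpool is the pool name from the report above):

# confirm the pool is back ONLINE and no errors remain
zpool status -v rpool
# review recent events from the ZFS kernel module (the resilver_finish above came from here)
zpool events -v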
Hi, on my Proxmox cluster one node threw this email.
The number of I/O errors associated with a ZFS device exceeded acceptable levels. ZFS has marked the device as faulted.
impact: Fault tolerance of the pool may be compromised.
eid: 445648
class: statechange
state: FAULTED
host...
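When a device gets FAULTED like this, a quick sanity check before (or instead of) clearing the errors is to look at the pool and the drive's SMART data (sdX below is just a placeholder for the affected disk):

zpool status -v rpool
smartctl -a /dev/sdX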
Hi, we have three servers, running Proxmox 6.
The three nodes are identical.
2 x 24 x Intel(R) Xeon(R) CPU E5645
More than 96 GB RAM per node.
1 x Samsung SSD 860 EVO 250GB (Proxmox installation)
1 x NVMe Samsung SSD 970 EVO 250GB (4 x 48 GB for DB/WAL)
4 x 2 TB 7200 rpm Western Digital...
Thanks. Upgraded with non-subscription packages and it works ok! :)
We're using spinning disks (4 x 2 TB) and a 250 GB NVMe as the DB disk, with Bluestore.
Can we use filestore (using commands) with Proxmox 6? We have read that filestore performs better with non-SSD disks.
Is this still valid? https://docs.ceph.com/docs/master/rados/configuration/bluestore-config-ref/#block-and-block-db
Can this be done with Proxmox commands?
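For reference, a rough sketch of creating a Bluestore OSD with its DB on the NVMe from the command line (disk names are placeholders; the exact pveceph option names may differ between Proxmox versions):

# Proxmox wrapper (Proxmox 6 / Ceph Nautilus)
pveceph osd create /dev/sdb --db_dev /dev/nvme0n1 --db_size 48
# or the plain Ceph tool, pointing block.db at a prepared NVMe partition
ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/nvme0n1p1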
We're testing Proxmox 6 with ZFS and CEPH. We have three nodes, all with four 2 TB disks, one SSD for Proxmox and an NVMe 250 GB disk for DB/WAL.
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda...
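One way to carve the 250 GB NVMe into four roughly 48 GB DB/WAL partitions is sgdisk (device name is a placeholder; adjust sizes as needed):

sgdisk --new=1:0:+48G /dev/nvme0n1
sgdisk --new=2:0:+48G /dev/nvme0n1
sgdisk --new=3:0:+48G /dev/nvme0n1
sgdisk --new=4:0:+48G /dev/nvme0n1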
It's no longer necessary. We created three Ubuntu VMs and shared a GlusterFS volume with Samba (on another VM, four in total, kept out of the cluster for performance). We moved all the data to this shared GlusterFS volume, formatted the NAS with FreeNAS and then moved everything back from GlusterFS to the NAS.
Thanks.
We have a three node cluster. Each node has four 2 TB disks set up as a ZFS RAID 10, giving a usable 3.63 TB pool on each node.
We need to format and install FreeNAS on our current Samba server. We need a shared folder with about 4 TB of free space.
How can I make a shared SMB folder with this...
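A minimal sketch of the two pieces involved on the ZFS side (pool, dataset and disk names are placeholders, not our actual layout):

# RAID 10 is just striped mirrors
zpool create tank mirror /dev/sda /dev/sdb mirror /dev/sdc /dev/sdd
# a dataset can then be exported over SMB via Samba, or with the ZFS property (needs samba installed)
zfs create tank/share
zfs set sharesmb=on tank/share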
Hi, I've used Proxmox 5.3 with zfs-zed receiving events generated by the ZFS kernel module with no problems at all.
I've reinstalled all nodes with Proxmox 5.4 and during the zfs-zed installation I got this.
apt-get install zfs-zed -y
Reading package lists... Done
Building dependency tree
Reading...
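In case it is useful, the checks worth running after the install (standard systemd commands plus the ZED config file shipped with the package):

systemctl status zfs-zed
journalctl -u zfs-zed -b
# the notification address for the alerts lives in the ZED config
grep ZED_EMAIL /etc/zfs/zed.d/zed.rc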
This is how we are doing it on a server with two different disks, but it may help you.
https://forum.proxmox.com/threads/different-size-disks-for-zfs-raidz-1-root-file-system.22774/#post-208998
Late ;). Yesterday I deleted the JSON and lock files, deleted all replication jobs and scheduled them again.
Apparently everything is working properly. I'll apply the patches manually just in case.
Thank you very much.
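For anyone else cleaning up stuck replication jobs, the pvesr tool can list and remove them before rescheduling (the job ID below is only an example):

pvesr list
pvesr status
pvesr delete 100-0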