I've installed PVE 6.3 on a Dell R620 with this disk config.
- 2 TB SATA 7k OS disk, ext4 with LVM, with 0 GB swap (we have 256 GB RAM) and 0 GB for VMs
- 2 TB SATA 7k disk, unused, wiped with the LSI utility and "sgdisk -Z"
- 4 x 2 TB enterprise SSDs (brand new, never used)
Once installed...
Hi, we are running a Proxmox 6.1 cluster with five nodes. On a node with low memory that had been running fine for more than 100 days, I limited the ZFS ARC by setting this maximum in /etc/modprobe.d/zfs.conf:
options zfs zfs_arc_max=4294967296
update-initramfs -u
reboot
Then I rebooted the server and...
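For reference, the full sequence looks like this (4294967296 is simply 4 GiB expressed in bytes; the sysfs path is the standard OpenZFS one for verifying the running value):

```shell
# Limit the ZFS ARC to 4 GiB (zfs_arc_max takes bytes)
ARC_MAX=$((4 * 1024 * 1024 * 1024))         # = 4294967296
echo "options zfs zfs_arc_max=${ARC_MAX}" > /etc/modprobe.d/zfs.conf
update-initramfs -u                         # bake the option into the initramfs
reboot
# After reboot, confirm the running limit:
cat /sys/module/zfs/parameters/zfs_arc_max  # should print 4294967296
```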
Hi, we have a cluster with five nodes. All nodes have Proxmox installed on SSD and 4 x 2 TB SATA (3.5" 7200 RPM) in ZFS RAID 10. All nodes have between 90 GB and 144 GB RAM.
On nodes 1 to 4 we have about 30-40 LXC containers, each running Moodle. All databases are on an external server.
All...
Hi, one node of my Proxmox cluster sent this email:
The number of I/O errors associated with a ZFS device exceeded acceptable levels. ZFS has marked the device as faulted.
impact: Fault tolerance of the pool may be compromised.
eid: 445648
class: statechange
state: FAULTED
host...
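Not part of the original post, but a typical triage for a FAULTED vdev looks like this (pool and device names below are placeholders, not values from the email):

```shell
zpool status -v                       # find the pool and the faulted device
# If the errors were transient (cabling, controller reset), clear them:
zpool clear rpool
# If the disk is genuinely failing, swap it and resilver
# (sdX = faulted disk, sdY = replacement -- both placeholders):
zpool replace rpool /dev/sdX /dev/sdY
zpool status rpool                    # watch resilver progress
```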
Hi, we have three servers, running Proxmox 6.
The three nodes are identical.
2 x Intel(R) Xeon(R) CPU E5645 (24 threads total)
More than 96 GB RAM per node.
1 x Samsung SSD 860 EVO 250GB (Proxmox installation)
1 x NVMe Samsung SSD 970 EVO 250GB (4 x 48 GB for DB/WAL)
4 x 2 TB 7200 rpm Western Digital...
We're testing Proxmox 6 with ZFS and Ceph. We have three nodes, each with four 2 TB disks, one SSD for Proxmox, and a 250 GB NVMe disk for DB/WAL.
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda...
We have a three-node cluster. Each node has four 2 TB disks in a ZFS RAID 10, giving a usable 3.63 TB pool per node.
We need to format our current Samba server and install FreeNAS on it. We need a shared folder with about 4 TB of free space.
How can I make a shared SMB folder with this...
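One possible approach, sketched here as an assumption rather than a confirmed setup: keep the existing ZFS pool and export a dataset over plain Samba instead of reinstalling with FreeNAS ("tank" and the paths below are placeholders):

```shell
# Create a dataset for the share on the existing pool
zfs create -o mountpoint=/srv/share tank/share
apt-get install -y samba
# Minimal share definition appended to the Samba config
cat >> /etc/samba/smb.conf <<'EOF'
[share]
   path = /srv/share
   browseable = yes
   read only = no
EOF
systemctl restart smbd
```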
Hi, I used Proxmox 5.3 with zfs-zed receiving events generated by the ZFS kernel module with no problems at all.
I've reinstalled all nodes with Proxmox 5.4, and when installing zfs-zed I got this:
apt-get install zfs-zed -y
Reading package lists... Done
Building dependency tree
Reading...
Hi, we have a three nodes cluster with ZFS storage.
We are using replication for our LXC servers. On node 1 we have about 67 LXC containers, replicating to another server.
We scheduled another replication and got these errors in the web UI:
file...
Hi, I have a cluster with five nodes. The nodes are running CTs.
If we enable replication between two nodes, it is very slow. A few weeks ago it was nearly instant. Now when replication starts, it says:
2019-03-11 08:21:00 501-1: start replication job
2019-03-11 08:21:00 501-1: guest => CT 501, running...
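When diagnosing slow pvesr jobs, these commands are a reasonable starting point (the job in the log above is 501-1; the snapshot check assumes the standard __replicate_* snapshot naming used by Proxmox replication):

```shell
pvesr status                         # all jobs, last sync time and state
# Replication stays incremental only while the matching __replicate_*
# snapshots exist on both source and target; if they are gone,
# pvesr falls back to a slow full send:
zfs list -t snapshot -o name | grep __replicate_
```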
Hi, we have two DELL servers running Proxmox.
- DELL R610 with four SAS 15k drives and PERC H700
- DELL PE1950 with two SAS 10k drives and PERC 6/i
Currently, since we can't pass the disks directly to Proxmox, we have
- 4 x single-disk RAID 0 volumes, with Proxmox on ZFS RAID 10
- 2 x single-disk RAID 0 volumes, with Proxmox...
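For comparison, on a controller that can expose plain disks, the ZFS "RAID 10" layout mentioned above is just a pool of stacked mirrors (device names are examples; /dev/disk/by-id paths are more robust than /dev/sdX):

```shell
# Striped mirrors ("RAID 10") over four disks, 4K-sector aligned
zpool create -o ashift=12 tank \
  mirror /dev/sda /dev/sdb \
  mirror /dev/sdc /dev/sdd
zpool status tank
```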
Hi, we have four servers running SolusVM that we want to migrate to Proxmox. All four servers have four 2 TB 7k drives, and the motherboard supports Intel soft RAID.
In terms of performance and data safety, what's best?
- Four independent AHCI drives: install Proxmox and make a ZFS RAID 10 with the...