Search results

  1. ZFS device fault for pool BUT no SMART errors on drive

     Reading up on this, I've done a zpool clear rpool and got this: ZFS has finished a resilver: eid: 478916 class: resilver_finish host: node01 time: 2020-04-14 08:38:33+0200 pool: rpool state: ONLINE scan: resilvered 118M in 0 days 00:00:41 with 0 errors on Tue Apr 14 08:38:33... (see the sketch after this list)
  2. ZFS device fault for pool BUT no SMART errors on drive

     Hi, on my Proxmox cluster one node threw this email: The number of I/O errors associated with a ZFS device exceeded acceptable levels. ZFS has marked the device as faulted. impact: Fault tolerance of the pool may be compromised. eid: 445648 class: statechange state: FAULTED host...
  3. (Hardware) - Upgrade Proxmox Ceph network from 1G to 10G

     But will this hardware work together well? I've never used 10G, SFP, or DAC cables.
  4. (Hardware) - Upgrade Proxmox Ceph network from 1G to 10G

     Thanks aaron, the switch is necessary; we will use it for other network equipment and it keeps the network config simpler, no?
  5. (Hardware) - Upgrade Proxmox Ceph network from 1G to 10G

     Hi, we have three servers running Proxmox 6. The three nodes are identical: 2 x 24 x Intel(R) Xeon(R) CPU E5645, more than 96 GB RAM per node, 1 x Samsung SSD 860 EVO 250GB (Proxmox installation), 1 x NVMe Samsung SSD 970 EVO 250GB (4 x 48 GB for DB/WAL), 4 x 2 TB 7200 rpm Western Digital...
  6. Shared crontab between nodes

     We're using Puppet on all VMs and servers on our network, but we want to keep our Proxmox nodes as simple as possible. Thanks.
  7. Shared crontab between nodes

     Yes, it helps. Thank you.
  8. Shared crontab between nodes

     Is it possible to have a shared crontab between nodes in the cluster? (see the sketch after this list)
  9. Can't create OSDs with shared disk for DB/WALL (NVME) in same host.

     Thanks. Upgraded with the no-subscription packages and it works OK! :) We're using spinning disks (4 x 2 TB) and a 250 GB NVMe as the DB disk, with BlueStore. Can we use FileStore (using commands) with Proxmox 6? We have read that FileStore performs better with non-SSD disks.
  10. Can't create OSDs with shared disk for DB/WALL (NVME) in same host.

     We have pve-manager 6.0-4. Can we patch manually or upgrade using the pve-no-subscription packages? Or create the OSDs using command line parameters...
  11. Can't create OSDs with shared disk for DB/WALL (NVME) in same host.

     Is this still valid? https://docs.ceph.com/docs/master/rados/configuration/bluestore-config-ref/#block-and-block-db Can it be done with Proxmox commands?
  12. Can't create OSDs with shared disk for DB/WALL (NVME) in same host.

     We're testing Proxmox 6 with ZFS and Ceph. We have three nodes, all with four 2 TB disks, one SSD for Proxmox, and a 250G NVMe disk for DB/WAL. NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT sda... (see the pveceph sketch after this list)
  13. Large shared SMB folder

     It's no longer necessary. We created three Ubuntu VMs and shared a GlusterFS volume over Samba (on another VM, four in total, outside the cluster for performance). We moved all the data to this shared GlusterFS volume, reformatted the NAS with FreeNAS, and moved everything back from GlusterFS to the NAS. Thanks. (see the sketch after this list)
  14. Large shared SMB folder

     We have a three-node cluster. Each node has four 2 TB disks set up as ZFS RAID 10, giving a usable 3.63 TB pool on each node. We need to format and install FreeNAS on our current Samba server. We need a shared folder with about 4 TB of free space. How can I make a shared SMB folder with this...
  15. License incompatible installing zfs-zed on Proxmox 5.4

     No, it's not configured. You don't recommend it for production use. Is it safe to run in production?
  16. License incompatible installing zfs-zed on Proxmox 5.4

     Hi, I've used Proxmox 5.3 with zfs-zed receiving events generated by the ZFS kernel module with no problems at all. I've reinstalled all nodes with Proxmox 5.4 and during the zfs-zed install I got this: apt-get install zfs-zed -y Reading package lists... Done Building dependency tree Reading...
  17. Proxmox ZFS mirror install with one disk initially?

     This is how we are doing it on a server with two different disks, but it may help you: https://forum.proxmox.com/threads/different-size-disks-for-zfs-raidz-1-root-file-system.22774/#post-208998 (see the zpool attach sketch after this list)
  18. Bulk migration will not work

     With Proxmox 5.4, bulk migrate runs in restart mode (I can't check other versions) and I can migrate multiple CTs to another node with replicated data.
  19. pve-replication-state.json too long - aborting

     Late ;). Yesterday I deleted the JSON and lock files, deleted all replication jobs, and scheduled them again. Apparently everything is working properly. I'll apply the patches manually just in case. Thank you very much. (see the sketch after this list)
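
Command sketches

For result 1, a minimal sketch of the commands described there: clearing the fault counters on the pool and then checking that the resilver completed. The pool name rpool is taken from the quoted event; the rest is generic ZFS usage.

    # Clear the error/fault counters so ZFS retries the device
    zpool clear rpool

    # Verify the device is back ONLINE and the resilver finished with 0 errors
    zpool status -v rpool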
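
For the shared-crontab question in result 8, one possible approach (a sketch, not an official Proxmox feature): keep the job script on the pmxcfs cluster filesystem under /etc/pve, which is replicated to every node automatically, and reference it from an ordinary cron entry on each node. The file name shared-cron.sh and the schedule are made up for illustration.

    # Put the job script on the cluster filesystem, e.g. /etc/pve/shared-cron.sh
    # (hypothetical name); pmxcfs replicates it to every node automatically.

    # On each node, add one identical cron entry that runs the shared script.
    # Files under /etc/pve are not executable, hence the explicit "sh".
    echo '*/15 * * * * root sh /etc/pve/shared-cron.sh' > /etc/cron.d/shared-cron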
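
For the DB/WAL thread (results 9-12), a sketch of how the OSDs could be created from the command line on Proxmox 6 once the fixed packages are installed. Device names and the 48 GB DB size are taken from the quoted setup; check the exact option spelling against pveceph help osd create on the node.

    # One OSD per spinning disk, all placing their RocksDB/WAL on the shared NVMe.
    # pveceph carves a logical volume of the requested size out of the NVMe
    # for each OSD (device names are examples).
    pveceph osd create /dev/sda --db_dev /dev/nvme0n1 --db_size 48
    pveceph osd create /dev/sdb --db_dev /dev/nvme0n1 --db_size 48
    pveceph osd create /dev/sdc --db_dev /dev/nvme0n1 --db_size 48
    pveceph osd create /dev/sdd --db_dev /dev/nvme0n1 --db_size 48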
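
For the SMB-folder thread (results 13-14), a rough sketch of the setup described in result 13: a replica-3 GlusterFS volume across the three Ubuntu VMs, mounted on a fourth VM and exported over Samba. Hostnames, brick paths, and the share name are examples.

    # On one of the three Gluster VMs: create and start a replica-3 volume
    gluster volume create shared replica 3 \
        gvm1:/bricks/shared gvm2:/bricks/shared gvm3:/bricks/shared
    gluster volume start shared

    # On the Samba VM: mount the volume and export it
    mkdir -p /mnt/shared
    mount -t glusterfs gvm1:/shared /mnt/shared

    # /etc/samba/smb.conf (minimal share definition)
    # [shared]
    #    path = /mnt/shared
    #    read only = no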
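
For result 17, a sketch of the usual way to start with a single-disk ZFS pool and turn it into a mirror later: attach the second disk to the existing vdev and let it resilver. Device names are examples; on a Proxmox root pool the partition table and bootloader of the first disk normally have to be replicated to the new disk first.

    # Attach the new disk (or partition) to the existing single-disk vdev;
    # the vdev becomes a two-way mirror and resilvers automatically.
    zpool attach rpool /dev/sda3 /dev/sdb3

    # Watch the resilver until the pool is ONLINE with both disks
    zpool status rpool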
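
For result 19, a sketch of the clean-up described there. The state file path is the one pve-replication normally uses; the lock file name is an assumption, so check the actual error message before removing anything.

    # Stop the replication runner while touching its state
    systemctl stop pvesr.timer

    # Remove the oversized state file (and its lock file, if present)
    rm -f /var/lib/pve-manager/pve-replication-state.json \
          /var/lib/pve-manager/pve-replication-state.lck

    systemctl start pvesr.timer
    # Then delete and recreate the replication jobs (GUI or "pvesr" CLI).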
