Search results

  1. Issues with HP P420i and SMART

    Just to mention my own experience with the P420i: I had been running Ceph with OSDs on RAID0 for a couple of years with no problem. When recently building a new cluster I decided to experiment with HBA mode. I can confirm that with PVE 8 the hpsa driver is used by Linux automatically if the...
  2. [SOLVED] issue with ZFS on HP smart array p420i

    I have a few of these DL380p, currently running a Ceph cluster with OSDs on RAID0 and very happy with it. I am about to build a new cluster and was planning to use ZFS replication rather than Ceph. I was encouraged to read your first post and less so your second post. One question in your...
  3. Ceph mirror network requirements

    I have been experimenting with ceph RBD mirror, and had some initial difficulty understanding the network requirements and how to meet them in my specific environment. Since first posting I have made some progress so I am updating this post to reflect what I have learned in the hope that it...
  4. ssacli raid error - with hp DL360P

    If you are using the P420i as a RAID controller, then you must create logical drives for any disks to be detected by Linux. So your RAID 1 is fine, but if you want additional individual disks you must create a RAID0 for each. Maybe this is already understood? It wasn't clear to me.
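    The per-disk RAID0 workaround described above can be done with ssacli; a minimal sketch, assuming the controller sits in slot 0 and a drive at address 1I:1:3 (both are placeholders, not taken from the thread - check your own box first):

    ```shell
    # Confirm the controller slot first (slot 0 below is an assumption)
    ssacli ctrl all show

    # List physical drives so you can pick out the unassigned ones
    ssacli ctrl slot=0 pd all show

    # Wrap one disk (here port 1I, box 1, bay 3 - a placeholder address)
    # in its own single-drive RAID0 logical drive; repeat per disk
    ssacli ctrl slot=0 create type=ld drives=1I:1:3 raid=0

    # The new logical drives then appear to Linux as ordinary block devices
    ssacli ctrl slot=0 ld all show
    ```

    Each single-drive logical drive shows up to Linux as its own /dev/sdX, which is what makes the individual disks usable as Ceph OSDs on this controller.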
  5. ZFS in Guests

    "As others told you, putting zfs in a guest is not a "supported" solution" Nowhere on this thread has anyone said any such thing. Other than yourself the only other responses are both from members who see it as a viable solution, and one of whom is actually also doing the same thing. Anyway...
  6. ZFS in Guests

    With respect, this is one of the primary use cases for virtualization, i.e. being able to leverage diverse best-in-class applications and preserve legacy investments, with the advantage of benefitting from a consistent approach to backup and high availability. So it really depends what you mean by...
  7. ZFS in Guests

    Yes, though ZFS would be more typical and provides features that facilitate, for example, shadow copies on the SMB shares, as you hinted already.
  8. ZFS in Guests

    I have had two apps that use ZFS (both because they run on BSD): pfSense and TrueNAS. I have since moved pfSense to dedicated redundant bare metal, because it's hard to do remote diagnosis on a sick cluster when you are logged in via a firewall running on said cluster - lol. I am...
  9. ZFS in Guests

    Just to update this after running for six months with a multi-virtual-disk RaidZ: it's NOT a good idea. Online backups leave the RaidZ in an inconsistent state, which means a backup restore always involves subsequently having to fix a corrupted ZFS pool. Backup with the VM shut down is fine...
  10. Proxmox VE 8.0 released!

    Installed this today, initially on top of a clean Debian 12 install (since in the past with PVE 6/7 I never succeeded with the PVE installer ISO). This time, however, I found Debian 12 was generating an arbitrary HDD enumeration, a real PITA when you come to set up Ceph OSDs, so I decided to give the...
  11. [SOLVED] kvm_nx_huge_page_recovery_worker message in log...

    Me too: clean install of PVE 8 today, and seeing the same error. I am on the no-subscription repo, not too keen on building a kernel, and reading this thread it seems noexec=off is not a viable interim workaround? FYI I have a hyper-converged setup, 3 identical nodes (DL380p Gen8), all with GPU...
  12. ZFS in Guests

    Hi, I am running a cluster with Ceph Bluestore and some guest VMs that use the ZFS file system. To date I thought it prudent to set up a virtual RaidZ in these VMs, i.e. provide a minimum of three virtual disks to the guest. The primary reason for using ZFS is features such as compression and snapshots and...
  13. corosync stability issue

    So it looks like every time the links toggle it is after a pmxcfs "received log" message:

    May 5 10:28:15 mfscn03 pmxcfs[2044]: [status] notice: received log
    May 5 10:29:06 mfscn03 corosync[2067]: [KNET ] link: host: 1 link: 0 is down
    May 5 10:29:06 mfscn03 corosync[2067]: [KNET ] host: host: 1...
  14. corosync stability issue

    Just to follow up on this: after adding one of the other networks into corosync as suggested, things are considerably improved. Now I see only very occasional messages indicating that an alternative corosync network path has been selected, e.g. May 4 16:15:47 mfscn03 corosync[2067]: [KNET...
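    For reference, the extra link mentioned above goes into /etc/pve/corosync.conf; a minimal sketch, assuming placeholder node names and subnets (mfscn01, 10.0.0.x and 10.0.1.x are illustrative, not taken from the thread):

    ```
    nodelist {
      node {
        name: mfscn01
        nodeid: 1
        ring0_addr: 10.0.0.1   # original corosync network
        ring1_addr: 10.0.1.1   # second network added as a fallback link
      }
      # ...one node block per cluster node...
    }
    totem {
      # with passive mode, knet uses link 0 and fails over to link 1
      link_mode: passive
    }
    ```

    When editing this file on a PVE cluster, the config_version value in the totem section must also be incremented or the change will not propagate.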
  15. corosync stability issue

    Thanks for the response. PBS is a physical server hanging off the 10G Nexus switch; each node has a 10G connection to the switch, as does the server that hosts PBS and NFS, but the problem appeared with PBS before I added NFS. The corosync network was previously also used for occasional remote management, so...
  16. corosync stability issue

    Hi, I have a classic three node hyper-converged cluster. Each node is an identical HP DL380p with 96GB memory and 12 disk bays as follows:

    1 x SATA 320GB HDD (PVE host OS)
    1 x SATA 120GB SSD (Ceph WAL)
    10 x 1TB HDD (OSDs)

    Network and interfaces are as follows: 1 GbE corosync (single port on...
  17. Add cluster network to existing system.

    Thanks, yes it was quite straightforward in the end. I modified the global section to redefine the cluster network. I destroyed and recreated the monitors one at a time (maybe this wasn't necessary?) but it was only after restarting the OSDs that I saw traffic starting to flow on the cluster...
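    The global-section change described above would look roughly like this in ceph.conf (both subnets are placeholders, not taken from the thread):

    ```
    [global]
        # existing combined network stays as the public network
        public_network = 192.168.1.0/24
        # newly added dedicated cluster (replication/heartbeat) network
        cluster_network = 10.10.10.0/24
    ```

    As the post observes, the OSDs only start using the new cluster network once they have been restarted.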
  18. Add cluster network to existing system.

    Hi, I have a 3 node hyper-converged setup which was created with a combined cluster and public network. I would like to add a separate cluster network, but am unsure if and how this can be done on an existing installation. Any tips would be appreciated.
  19. NFS Unknown symbol errors

    OK, solved. For the benefit of anyone else: I had upgraded all nodes, but the upgrade for this particular node must have failed. The following dirs were missing:

    /lib/modules/5.3.18-3-pve/kernel/kernel
    /lib/modules/5.3.18-3-pve/kernel/lib
    /lib/modules/5.3.18-3-pve/kernel/mm...
  20. NFS Unknown symbol errors

    I have a four node cluster: 3 Ceph nodes, plus a node hosting NFS for backups. One of the three Ceph nodes has broken such that it can't access the NFS share. The other two are fine. All the nodes are identical hardware, and the software is a PVE install on top of a Debian Buster minimal net...
