Search results

  1. PBS with Expander Backplanes and JBOD

    The problem with RAIDZ and PBS is really low performance; that is why you should always go with RAID10.
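
    A RAID10-style (striped mirror) ZFS pool for a PBS datastore can be sketched like this; the pool name and disk paths are assumptions, adjust them to your hardware:

    ```shell
    # Create a RAID10-style ZFS pool from two mirrored pairs.
    # Disk names are hypothetical -- replace with your own devices.
    zpool create -o ashift=12 pbspool \
      mirror /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2 \
      mirror /dev/disk/by-id/ata-DISK3 /dev/disk/by-id/ata-DISK4
    ```

    ZFS stripes across the mirror vdevs, so random-read IOPS scale with the number of pairs, which is what PBS workloads need.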
  2. PBS with Expander Backplanes and JBOD

    No, I'm talking about a special device in ZFS. It can speed up garbage collection. It is always good to have one, whether your ZFS pool is 10 TB or, in some cases, 200 TB.
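
    A special vdev holds pool metadata, which is what PBS garbage collection hammers. A minimal sketch of adding one, with hypothetical NVMe device names (special vdevs should be mirrored, since losing the special vdev loses the pool):

    ```shell
    # Add a mirrored special (metadata) vdev to an existing pool.
    # Pool and device names are hypothetical -- use your own NVMe disks.
    zpool add pbspool special mirror /dev/nvme0n1 /dev/nvme1n1
    ```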
  3. PBS with Expander Backplanes and JBOD

    With those backplanes it sometimes works and sometimes doesn't; you really need to test it with ZFS. I had some 40-disk backplanes work and some not. But with ZFS RAID10 and maybe two NVMe/SSD drives for caching, you're good to go.
  4. Opt-in Linux 6.14 Kernel for Proxmox VE 8 available on test & no-subscription

    I have one strange error: kernel 6.14, an LXC container with MySQL on it. /var/log/kern.log:3651:2025-04-10T10:35:40.646702+02:00 sp19 kernel: [236703.525309] audit: type=1400 audit(1744274140.640:2244): apparmor="DENIED" operation="create" class="net" namespace="root//lxc-1193_<-var-lib-lxc>"...
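
    A common workaround for such AppArmor denials in an LXC guest, assuming the container ID 1193 from the log, is to relax confinement in the container's Proxmox config. A sketch, not a security recommendation:

    ```shell
    # Hypothetical excerpt of /etc/pve/lxc/1193.conf.
    # Either enable nesting, which is often enough for services
    # touching network namespaces:
    #   features: nesting=1
    # or, as a last resort, disable AppArmor confinement entirely:
    #   lxc.apparmor.profile: unconfined
    # Then restart the container:
    pct stop 1193 && pct start 1193
    ```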
  5. Hetzner Storage Box as datastore

    If I'm not mistaken, the Hetzner Storage Box doesn't allow ACLs?
  6. Migrate ZFS to Ceph?

    Yes, if you have ZFS and Ceph on the same machines, just live-migrate the storage from one to the other. That's it.
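
    The live storage migration mentioned above can be done per disk with `qm disk move`; a sketch, where the VM ID 100, disk slot scsi0, and Ceph storage name `ceph-rbd` are all assumptions:

    ```shell
    # Move a running VM's disk from ZFS-backed storage to Ceph RBD,
    # deleting the source copy afterwards. IDs and names are assumed.
    qm disk move 100 scsi0 ceph-rbd --delete 1
    ```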
  7. How to make a linux Xorg based os recognize or use multiple screens without a graphic card?

    Could it be that these are virtual workspaces from Ubuntu or whatever Linux you use?
  8. How to make a linux Xorg based os recognize or use multiple screens without a graphic card?

    Virtual screens are usually created in the background with a TigerVNC server or Xvfb, depending on the use case.
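
    A minimal Xvfb sketch for a headless virtual screen; the display number and geometry here are arbitrary assumptions:

    ```shell
    # Start a virtual framebuffer X server on display :99
    # with one 1920x1080 24-bit screen, then point X clients at it.
    Xvfb :99 -screen 0 1920x1080x24 &
    export DISPLAY=:99
    ```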
  9. Most painless way to mass migrate Windows VMs from Vmware to proxmox?

    There is also Clonezilla over the network; look at that.
  10. Proxmox Server setup business - 3 nodes cluster - Suggested storage type

    There are two ways: 1. adding more disks to the current nodes, or 2. adding more nodes with more disks. The second is always better.
  11. Ceph RBD Storage Shrinking Over Time – From 10TB up to 8.59TB

    And maybe you are using CephFS to store something.
  12. Best practice for replacing all OSDs in CEPH cluster

    Destroying all OSDs node by node would be easiest.
  13. New Proxmox Cluster with Ceph

    This is an okay setup for me; usually we don't have more than four disks per NVMe for caching. If you need more than that, we usually recommend adding more NVMe caching drives. Yes, if this NVMe dies, all OSDs cached on it die, but this is acceptable in Ceph.
  14. Can I run vmware in proxmox?

    I have had ESXi running under Proxmox for more than 5 years, currently version 7.x.
  15. [SOLVED] Help! Ceph access totally broken

    That's okay if the backups are good. Who cares? Just demolish the Ceph cluster and bring it up again.
  16. [SOLVED] Help! Ceph access totally broken

    Can you make backups of the current VMs?
  17. [SOLVED] Help! Ceph access totally broken

    Reinstall Ceph. Not the whole Proxmox install, just Ceph inside it, and restore the backups from PBS or whatever you have. This is the usual procedure for it: systemctl stop ceph-mon.target systemctl stop ceph-mgr.target systemctl stop ceph-mds.target systemctl stop ceph-osd.target rm -rf...
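
    After stopping the Ceph services as above, Proxmox also ships a helper that strips the Ceph configuration from a node; a hedged sketch (destructive, only run with verified backups in hand):

    ```shell
    # Stop all Ceph daemons on this node first.
    systemctl stop ceph-mon.target ceph-mgr.target \
                   ceph-mds.target ceph-osd.target
    # Then remove the node's Ceph configuration (destructive!).
    pveceph purge
    ```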