Search results

  1. Proxmox with 48 nodes

    So your entire purpose for any cluster is "because I can"? It doesn't really matter how you configure your clusters for that.
  2. Proxmox with 48 nodes

    So before any answer would be applicable... why? What is your use case? Also, 40G for cluster traffic is effectively the same as 10G (same latency). You should be fine, but depending on the REST of your system architecture it will likely not be enough as your cluster gets larger (more prone...
  3. One zfs pool slows down all other zfs pools

    Not at all certain the evidence backs your conclusion. How many cores did you assign to the VM generating the job? urandom isn't free (see the sketch after the result list).
  4. Shared storage over Fibre-Channel / iSCSI with thin provisioning and snapshots

    Stupid question, but have you discussed this with Datacore?
  5. Dumb CEPH questions

    It's way too dumb ;) Since your CRUSH rule requires an OSD on three different nodes, and you have two nodes with excess capacity... the excess capacity is unused (see the capacity sketch after the result list). But I think you're going about this the wrong way. What is your REQUIRED usable capacity? Might need to replace one server, or get...
  6. New pve9 node - Storage set up help

    Read what it says carefully. PVE can use mdadm easily and with full tooling support, BUT YOU SHOULDN'T. I agree with their assessment ;) There are use cases for it. The converse of my original assertion is that there are cases where you MUST use it or you won't meet your minimum performance criteria...
  7. fc shared storage for all nodes

    https://forum.proxmox.com/threads/help-needed-shared-iscsi-storage-for-pve-cluster-multi-node-access.167382/post-777930
  8. New pve9 node - Storage set up help

    The "fastest" would be LVM-thick on mdraid10 or directly on individual devices. The thing you need to realize is that if the speed is above what you actually NEED, you are losing out on other features that may be of much more importance, namely snapshots, inline compression, active checksumming, etc., etc...
  9. LVM over shared iSCSI (MSA 2050) – Deleted VMs do not free up space – Any remediation?

    Correct, those are not supported options. It's not a bug; PVE doesn't have a mechanism to "talk" to your storage device. The only way to make sure this doesn't happen is to NOT thin provision your LUNs on the storage.
  10. it isn't working

    @bbgeek17 you're a saint.
  11. Low Budget Proxmox Server

    In what sense? The only real "issue" with this generation of CPU is its atrocious performance per watt, not performance in general. Intel's product portfolio is available tall (core speed) and wide (core count) to suit a wide variety of needs. More to the point, size the hardware to the...
  12. Installing doca-ofed on Proxmox 9.0

    You have a normal NIC, not a BlueField DPU. You don't need DOCA.
  13. iSCSI multipath with active/active dual-controller SAN

    You're right, of course; I misspoke (miswrote?). What I mean is one VLAN per physical interface on your host.
  14. iSCSI multipath with active/active dual-controller SAN

    Any of the paths can be used, but as @bbgeek17 pointed out, only one controller will ACTUALLY be serving a given LUN, so any traffic pointed at the other controller simply gets handed over the internal communication bus inside your SAN. This is not ideal, and your SAN documentation should give...
  15. TrueNAS Storage Plugin

    The problem with that statement is that there are too many variables to consider. When dealing with TrueNAS, you're dealing with their ZFS stack passing zvols to iscsitgt; this is a known quantity, but the zpool organization, makeup (number/type of devices), RAM allocated, CPU speed, network...
  16. Messed up my hypervisor when attempting to reduce ZFS writes

    Lesson for the future: don't mess with your production machine. It probably won't. You just won't be able to access various services, depending on the stuff you broke.
  17. Messed up my hypervisor when attempting to reduce ZFS writes

    Yeah, that's a bad way to go. Keep logs where they are and mount your ramdisk as a unionfs at boot. I would still do a periodic commit. Don't do that. Back up /etc/pve/storage.cfg, /etc/pve/lxc, and /etc/pve/qemu-server and reinstall as new; you can just write those back on (a backup sketch follows after the result list).
  18. [SOLVED] Ceph not working / showing HEALTH_WARN

    osd.1 appears to be unreachable. In case this is a networking issue, since you are paranoid about posting your actual IP addresses (or at least how they fall within their respective subnets), I can't really help you. That said, check the host of osd.1 to see if there are in-host issues (e.g. problems with the...
  19. performance of SAN nvme storage

    So it is, apologies. Your hypothesis is flawed. For one thing, what makes you think you SHOULD get a specific number on THIS specific test? Who are these "others" that you are comparing to? Maybe ask them what they are doing differently.
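
A quick illustration of the point in result 3, that generating random data costs real CPU time. This is a minimal Python sketch of my own, not from the thread; the sizes are arbitrary. Run it inside the benchmarking VM to see how much of the apparent "disk" throughput ceiling is actually the random-number generator.

```python
# Compare how fast this machine can *produce* zero bytes vs. random bytes,
# before any disk I/O is involved. On a VM with only a core or two, the
# urandom generation rate alone can cap a dd/fio-style benchmark.
import os
import time

SIZE = 256 * 1024 * 1024   # 256 MiB of test data
CHUNK = 1024 * 1024        # generate in 1 MiB chunks

def throughput(make_chunk):
    start = time.perf_counter()
    produced = 0
    while produced < SIZE:
        make_chunk(CHUNK)
        produced += CHUNK
    elapsed = time.perf_counter() - start
    return SIZE / elapsed / (1024 * 1024)   # MiB/s

zero_rate = throughput(lambda n: bytes(n))   # cheap: zero-filled buffer
rand_rate = throughput(os.urandom)           # CPU-bound: kernel CSPRNG

print(f"zero fill: {zero_rate:8.1f} MiB/s")
print(f"urandom  : {rand_rate:8.1f} MiB/s")
```

If the urandom rate is close to the pool throughput that was measured, the test was CPU-bound rather than storage-bound.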
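
A back-of-the-envelope sketch of the capacity point in result 5. With a replicated pool of size 3 and a CRUSH rule that places each copy on a different host, usable capacity is bounded by the smallest host, so extra space on the two larger nodes sits idle. The node names and sizes below are made-up examples, not the poster's cluster.

```python
# Why excess capacity on two nodes goes unused with a 3-way replicated pool
# whose CRUSH rule requires each copy on a different host. Sizes are raw TB
# and purely illustrative; this ignores nearfull/full ratios and overhead.
hosts = {"node-a": 20.0, "node-b": 20.0, "node-c": 8.0}
size = 3   # replica count; with exactly 3 hosts, every host stores every object

naive_usable = sum(hosts.values()) / size            # what "raw / replicas" suggests
actual_usable = min(hosts.values())                  # the smallest host fills first
stranded_raw = sum(c - actual_usable for c in hosts.values())

print(f"raw total          : {sum(hosts.values()):5.1f} TB")
print(f"naive raw/replicas : {naive_usable:5.1f} TB")
print(f"actual usable      : {actual_usable:5.1f} TB")
print(f"stranded raw space : {stranded_raw:5.1f} TB")
```

Which is why the question to answer first is the REQUIRED usable capacity: it tells you whether to grow the small node, add another host, or live with the stranded space.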
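
A sketch of the backup step in result 17: archive the storage definition and guest configs so they can be written back onto a fresh install. The three paths come from the post; the use of Python's tarfile and the archive name are my own choices, not a Proxmox tool.

```python
# Archive the PVE config mentioned in the post (storage definitions plus
# LXC/QEMU guest configs) so it can be restored after a clean reinstall.
# Run as root on the old host, then copy the archive somewhere safe.
import tarfile
import time
from pathlib import Path

paths = [
    Path("/etc/pve/storage.cfg"),
    Path("/etc/pve/lxc"),
    Path("/etc/pve/qemu-server"),
]

archive = Path(f"/root/pve-config-{time.strftime('%Y%m%d-%H%M%S')}.tar.gz")

with tarfile.open(archive, "w:gz") as tar:
    for p in paths:
        if p.exists():
            tar.add(str(p))   # directories are added recursively
        else:
            print(f"skipping missing {p}")

print(f"wrote {archive}")
```

Copy the archive off the machine before wiping it; /etc/pve only exists while the Proxmox cluster filesystem (pmxcfs) is running, so the files have to be restored onto the new install after it is up.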