Search results

  1. [Tutorial] Setting Netflow/Sflow on Proxmox

    Thank you :) Since I cannot edit the first post: the OVS blog has now migrated to this page: https://www.netvizura.com/blog/netflow-analyzer/open-vswitch-netflow-configuration
  2. Consumer grade SSD's

    Or don't use ZFS; use something that is unsupported, like mdadm :D
  3. I'd like to sell VPS (VMs) to the public. Currently using Proxmox as a hypervisor but I have a few questions.

    Usually there is WHMCS, and of course there is now an open-source solution: https://github.com/The-Network-Crew/Proxmox-VE-for-WHMCS
  4. Is a cluster still beneficial if you have a very lopsided configuration?

    Even if you don't plan on using Ceph, I would recommend doing it, and learning migrations, failovers, etc.
  5. Ceph SSD disk performance Kingston vs Samsung

    I've rarely found Kingstons that are good; the PM893, on the other hand, is something of an industry standard for entry-level enterprise disks. Yes, they are read-oriented, but they work really well with everything Proxmox.
  6. Training for indy consultant

    My only recommendation (since I've held a few quick courses in Serbia) is to get familiar with the core components of PVE plus ZFS, Ceph, replication, backups, etc. This is usually enough for level 1 or 2 courses.
  7. Minimal Ceph Cluster (2x Compute Nodes and 1x Witness -- possible?)

    ZFS will also lose data in some cases, but getting SSDs with PLP matters more than anything else.
  8. [SOLVED] high latency clusters

    Corosync, i.e. cluster networking, not HA directly.
  9. high IO delay

    Ah, you are probably thinking about HA. That one requires shared storage, so Ceph, NFS, etc. ZFS and hardware RAID don't support HA in any way.
  10. high IO delay

    Who says a RAID controller is not supported in Proxmox? It has worked fine for maybe the last 10 years. It just doesn't have features similar to ZFS, but everything works. And the snapshots are better than ZFS's.
  11. Feature Request - Cinder

    Proxmox devs are wary of taking on outside projects; in my opinion, ever since the whole DRBD license problem, where they changed the license overnight.
  12. Feature Request - Cinder

    The main problem is here: you bought a VMware-endorsed SAN, which is maximally locked down, and then you ask the Proxmox guys (100% open source) to fix this problem. So yes, it is hard to please someone who's left with locked-in tech.
  13. Minimal Ceph Cluster (2x Compute Nodes and 1x Witness -- possible?)

    If you want to learn, I would always recommend going with 3-node Ceph and, let's say, a 2.5 Gbit card in every node just for Ceph communication. It will work okay-ish, and you will learn a lot from it. You can always add a ZFS disk per node, parallel to Ceph, if you want to learn ZFS sync/replication and...
  14. Minimal Ceph Cluster (2x Compute Nodes and 1x Witness -- possible?)

    You could bring up Ceph on PBS; just install the whole of PVE on it :)
  15. Virtualise mdadm raid with LVM

    If I'm not mistaken, Clonezilla doesn't support mdadm?
  16. Special Device on Existing PBS

    Move the data off the pool and back onto it, and you will rebuild the metadata onto the special device.
  17. Special Device on Existing PBS

    zpool list -v, and then check whether there is any data on the special device.
  18. Degraded pool as a result of several power losses

    Restore from backup; the pool is gone.
  19. [SOLVED] Ceph degraded and did not add back automatically.

    Did you wipe the disk before adding it back?
  20. MAKOP Ransomware attack on PVE 6.4.15 from within a VM guest because it had full access to my vdisks (qcow), allowed ransomware to encrypt my vdisks

    Don't allow SMB shares, at least unauthenticated ones. First rule of anti-ransomware. Second, do the backups and snapshots...
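
The special-device advice in results 16 and 17 can be sketched as follows. This is a sketch only, not a definitive procedure: the pool name "tank" and the dataset "tank/backup" are hypothetical, it assumes a special vdev has already been added to the pool, and it relies on the fact that a special device only receives metadata for data written after it was attached, so existing data must be rewritten for its metadata to migrate.

```shell
# Hypothetical names: pool "tank", dataset "tank/backup".
# 1. Check whether anything has been allocated on the special vdev yet
#    (its ALLOC column appears in the per-vdev listing).
zpool list -v tank

# 2. Rewrite the data by sending the dataset to a new name; the receive
#    writes fresh blocks, whose metadata lands on the special device.
zfs snapshot tank/backup@migrate
zfs send tank/backup@migrate | zfs receive tank/backup-new

# 3. After verifying the copy, replace the original dataset.
zfs destroy -r tank/backup
zfs rename tank/backup-new tank/backup
```

On a PBS datastore you would want the datastore idle (no running backup or garbage-collection jobs) while the data is rewritten, and enough free space to hold the second copy during step 2.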