Search results

  1. pvps1

    Epyc 7402P, 256GB RAM, NVMe SSDs, ZFS = terrible performance

    We use EVO SSDs in large numbers and there is no specific speed problem with them. You just cannot use them in hosts with high I/O. I've seen EVOs going to 99% wearout within one month under certain workloads (mainly databases); others run for years without problems. And of course there are fast units around.
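
    Checking wearout yourself is straightforward with smartmontools; a minimal sketch, where the device names are placeholders for your own drives:

    ```
    # NVMe drives report a "Percentage Used" wear value
    smartctl -a /dev/nvme0 | grep -i 'percentage used'

    # SATA SSDs such as the EVO usually expose Wear_Leveling_Count instead
    smartctl -A /dev/sda | grep -i wear
    ```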
  2. pvps1

    Replace PVE boot drive with larger drive

    One method: boot your host with Clonezilla, clone to an external drive, then restore to the new NVMe. IIRC Clonezilla asks if you want to resize partitions to the maximum (if not, this can be done later).
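
    If you skip the resize in Clonezilla, growing things afterwards is only a couple of commands; a rough sketch, assuming an ext4 root on partition 3 of the new drive (device names are assumptions):

    ```
    # grow partition 3 to fill the new NVMe (growpart comes with cloud-guest-utils)
    growpart /dev/nvme0n1 3

    # then grow the ext4 filesystem to match the enlarged partition
    resize2fs /dev/nvme0n1p3

    # with a ZFS root, expand the pool onto the grown partition instead
    zpool online -e rpool /dev/nvme0n1p3
    ```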
  3. pvps1

    Private server cluster configuration suggestion to use with Proxmox VE with SDS and NAS

    Even if it's true technically speaking, I feel the urge to say that md-raid and ext4/xfs etc. have served very well and reliably over the last 20 years. The point is that Ceph and ZFS are everything but slim. So, depending on the use case and the available power/budget, we use all of these techniques. And...
  4. pvps1

    Private server cluster configuration suggestion to use with Proxmox VE with SDS and NAS

    _If_ you are going with a NAS, I'd prefer a simple Linux host with NFS and md raid. This way you have total control (e.g. you can just take a disk and access the data anywhere in case of failure, no proprietary controllers) and get more performance per price. Another advantage is simplicity, which makes it...
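
    Roughly what such a NAS host looks like; a sketch, assuming two spare disks and an internal 10.0.0.0/24 network (disk names, paths, and subnet are assumptions):

    ```
    # create a RAID1 array from two whole disks
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc

    # plain ext4 on top -- readable on any Linux box if the NAS host dies
    mkfs.ext4 /dev/md0
    mkdir -p /srv/export
    mount /dev/md0 /srv/export

    # export via NFS (nfs-kernel-server must be installed)
    echo '/srv/export 10.0.0.0/24(rw,sync,no_subtree_check)' >> /etc/exports
    exportfs -ra
    ```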
  5. pvps1

    PMG vs. commercial anti-spam products

    Thanks, this actually helps me a lot...
  6. pvps1

    Comprehension question about clusters?

    That is not correct. You don't necessarily need network storage for a cluster. Some things like HA or live migration won't work then, but you still have a cluster.
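
    For illustration, a cluster is created without any shared storage at all; a minimal sketch, where the cluster name and node IP are placeholders:

    ```
    # on the first node: create the cluster
    pvecm create mycluster

    # on every additional node: join via the first node's IP
    pvecm add 192.168.1.10

    # verify quorum and membership
    pvecm status
    ```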
  7. pvps1

    Replacing all OSDs

    We did it the same way to increase capacity.
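
    The usual per-OSD cycle, sketched with an assumed OSD id and disk; this is the generic Ceph/PVE procedure, not necessarily exactly what was done here:

    ```
    # take one OSD out and let Ceph rebalance (osd.3 and /dev/sdd are placeholders)
    ceph osd out osd.3
    ceph -s    # wait for HEALTH_OK before continuing

    # remove the old OSD, then create a new one on the larger disk
    systemctl stop ceph-osd@3
    pveceph osd destroy 3
    pveceph osd create /dev/sdd
    ```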
  8. pvps1

    PMG vs. commercial anti-spam products

    I'd like to extend the question to antivirus. After a first quick search, PMG only supports Avast, but their websites are not particularly confidence-inspiring regarding Linux support. The reason for asking is ClamAV's detection rate.
  9. pvps1

    Using multiple ethernet ports for VMs as opposed to one?

    One use case: bonding is the way to get redundancy. In production scenarios you would have your NICs connected to different switches (within a stack), so parts can fail or reboot (e.g. for firmware upgrades) without interruption of services.
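
    A common way to set this up on PVE is an active-backup bond under the VM bridge, which needs no special switch support; a sketch of /etc/network/interfaces where NIC names and addresses are assumptions:

    ```
    # /etc/network/interfaces (excerpt) -- eno1/eno2 and the IPs are placeholders
    auto bond0
    iface bond0 inet manual
        bond-slaves eno1 eno2
        bond-mode active-backup
        bond-miimon 100

    auto vmbr0
    iface vmbr0 inet static
        address 192.168.1.10/24
        gateway 192.168.1.1
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0
    ```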
  10. pvps1

    Deleting a VM by mistake - How to recover it

    For future forensics/recovery it is important that you stop write access to the disks involved, so stop the storage or the PVE host. Depending on your storage type and filesystems there are different tools to recover deleted files. If the data is important, you should consult professional...
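
    The first practical steps usually look like this; a sketch, assuming the affected disk is /dev/sdb and all recovery work happens on an image rather than the original (paths are assumptions):

    ```
    # stop further writes: remount the affected filesystem read-only
    mount -o remount,ro /mnt/data

    # image the disk to external storage and only ever work on the copy
    dd if=/dev/sdb of=/external/sdb.img bs=1M conv=noerror,sync status=progress
    ```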
  11. pvps1

    If you are unsure why 'proxmox-ve' would be removed.....

    Remove all Debian kernels. There are dependencies (firmware, IIRC) that trigger the removal of PVE.
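
    To see what is installed and remove the Debian stock kernels (exact package names vary per release, so treat these as examples):

    ```
    # PVE kernels are packaged as pve-kernel-*/proxmox-kernel-*; these are the Debian ones
    dpkg -l 'linux-image-*' | grep '^ii'

    # remove them -- check apt's summary carefully: proxmox-ve must NOT be listed for removal
    apt remove linux-image-amd64 'linux-image-6.1*'
    ```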
  12. pvps1

    Disable Root Account

    You cannot, IMO. What are you trying to achieve by deleting it?
  13. pvps1

    lvm

    You have to resize your VG, your LV, and finally the filesystem. There are a lot of howtos about LVM on the net; search for "resize lvm"...
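
    The usual order of operations, sketched for an ext4 filesystem on a PV whose partition has already been grown (device and LV names are assumptions):

    ```
    # 1. make the PV (and thus the VG) see the new space
    pvresize /dev/sda3

    # 2. grow the LV, here by all free extents in the VG
    lvextend -l +100%FREE /dev/pve/root

    # 3. finally grow the filesystem (resize2fs for ext4, xfs_growfs for XFS)
    resize2fs /dev/pve/root

    # (lvextend -r combines steps 2 and 3 in one command)
    ```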
  14. pvps1

    Proxmox Ceph Cluster - No Raid

    If you have a HW RAID, use it, even though it's not recommended, as stated above. The official way to go with Proxmox is ZFS. One other possibility is mdraid. I share the opinion of t. that HW RAIDs are a pain. With ZFS and Ceph you must not use them other than in JBOD/HBA mode. And of course you can mix the...
  15. pvps1

    Proxmox Ceph Cluster - No Raid

    I don't understand the question... You have a RAID system, so if a disk fails, you have to replace it, of course.
  16. pvps1

    Proxmox Ceph Cluster - No Raid

    https://wiki.debian.org/DebianInstaller/SoftwareRaidRoot
  17. pvps1

    Proxmox Ceph Cluster - No Raid

    md raid (software RAID) is a built-in capability of the Debian installer.
  18. pvps1

    Proxmox Ceph Cluster - No Raid

    You can install Debian with software RAID and Proxmox on top of it.
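
    After such a Debian install (see the wiki link above for the md-raid root part), Proxmox goes on top roughly like this; Debian 12 "bookworm" is assumed here, so adjust the release name to yours:

    ```
    # confirm the md arrays came up
    cat /proc/mdstat

    # add the Proxmox VE no-subscription repository and its key
    echo 'deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription' \
        > /etc/apt/sources.list.d/pve.list
    wget https://enterprise.proxmox.com/debian/proxmox-release-bookworm.gpg \
        -O /etc/apt/trusted.gpg.d/proxmox-release-bookworm.gpg

    apt update && apt full-upgrade
    apt install proxmox-ve
    ```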
  19. pvps1

    Moving a cluster with Ceph to new NTP servers

    I would have to look that up again, but from memory, systemd-timesyncd also, for good reasons, does not hard-set the time in large jumps. The allowed drift is surely configurable as well. No guarantee... I would check whether the behavior mentioned above also applies on a service restart...
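
    For reference, the knobs live in /etc/systemd/timesyncd.conf; a sketch with placeholder server names:

    ```
    # /etc/systemd/timesyncd.conf (excerpt) -- server names are placeholders
    [Time]
    NTP=ntp1.example.com ntp2.example.com
    RootDistanceMaxSec=5

    # apply and inspect:
    #   systemctl restart systemd-timesyncd
    #   timedatectl timesync-status
    ```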
  20. pvps1

    Linux system time temporally jumps

    NTP runs on the (redundant, non-virtualized) firewalls; they sync to the NTP pool. Internally, the firewall is the only allowed NTP source.
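
    A sketch of that layout with chrony on the firewall (upstream pool and internal subnet are assumptions):

    ```
    # /etc/chrony/chrony.conf (excerpt) on the firewall
    pool pool.ntp.org iburst      # sync upstream to the public pool
    allow 10.0.0.0/8              # serve time to internal clients only

    # internal hosts then point exclusively at the firewall:
    #   server 10.0.0.1 iburst
    ```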