Search results

  1. pvps1

    Restore VM, keep the data

    sorry, I overlooked Windows.... If you mount NFS as a datastore, it is included in the snapshot. I'm a bit out of my depth on the Windows internals of attaching/detaching disks, but of course you can detach a disk (whether "local" or "datastorage") before the snapshot...
  2. pvps1

    Restore VM, keep the data

    mount point via NFS, either from the DB server or with NFS as a separate machine
  3. pvps1

    Looking for help to wipe sas drives

    I don't think you have to zero out the whole disks (which takes a long time). I guess your problem is old filesystem signatures such as LVM or ZFS. You can wipe those very quickly by zeroing the first few bytes of the disk (please google for details), or probably with wipefs. The installer formats your partitions...
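A minimal sketch of the zero-the-first-bytes approach, demonstrated on a throwaway image file; on real hardware the target would be the disk device (a hypothetical /dev/sdX here), and wipefs is the cleaner option:

```shell
# Scratch image file standing in for the disk
# (on real hardware you would target the device node, e.g. /dev/sdX).
dd if=/dev/urandom of=disk.img bs=1M count=10 status=none

# Old LVM/partition signatures live near the start of the disk;
# zeroing the first MiB makes them unrecognizable.
dd if=/dev/zero of=disk.img bs=1M count=1 conv=notrunc status=none

# Note: ZFS also keeps labels at the END of the device, so on a real
# disk it is safer to let wipefs find and erase all known signatures:
#   wipefs -a /dev/sdX
```

After this, the installer should no longer detect stale LVM/ZFS metadata on the device.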
  4. pvps1

    ENG/GER Importing a physical Ubuntu server into Proxmox

    You can do this. Clonezilla is more efficient (it copies only the existing data) and makes misc changes for you; dd can take much longer than Clonezilla. The rsync method has the least downtime. You can even run several rsyncs in advance with services still running; only the last/final rsync must be done with...
  5. pvps1

    ENG/GER Importing a physical Ubuntu server into Proxmox

    In principle that's easy, if your Linux knowledge is solid. * create a new machine in PVE * boot this machine into a live system such as grml, then create and mount the filesystems (let's say at /target) * give the machine a temporary IP, set a password for root and start the SSH service * on the physical...
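The copy step in those instructions can be sketched with rsync. The real command would run from the physical host against the VM's temporary IP; the demo below uses local directories standing in for the source root and the VM's /target, so the IP, paths and file names are assumptions:

```shell
# Hypothetical real command (physical host -> VM at a temporary IP):
#   rsync -aHx --numeric-ids / root@192.0.2.10:/target/
# -a archive mode, -H preserve hard links, -x stay on one filesystem
# (skips /proc, /sys and other mounts).

# Demo of the same flags with local directories:
mkdir -p src/etc target
echo myhost > src/etc/hostname
rsync -aH --numeric-ids src/ target/

# A later "final" pass with --delete (run after stopping services)
# also removes files that vanished from the source since the first sync:
rm src/etc/hostname
rsync -aH --numeric-ids --delete src/ target/
```

Before rebooting into the new VM, fstab, the bootloader and the network config inside /target still need to be adjusted.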
  6. pvps1

    Partitioning on install

    Install Debian and PVE on top of it. The Debian installer lets you preserve Windows, but Debian doesn't support a ZFS root filesystem
  7. pvps1

    Epyc 7402P, 256GB RAM, NVMe SSDs, ZFS = terrible performance

    We use EVO SSDs in large numbers and there is no specific speed problem with them; you just cannot use them in hosts with high I/O. I've seen EVOs go to 99% wearout within one month under certain workloads (mainly DBs); others run for years without problems. And of course there are fast units around
  8. pvps1

    Replace PVE boot drive with larger drive

    One method: boot your host with Clonezilla, clone to an external drive, restore to the new NVMe. IIRC Clonezilla asks whether you want to resize the partitions to the maximum (if not, this can be done later)
  9. pvps1

    Private server cluster configuration suggestion to use with Proxmox VE with SDS and NAS

    Even if that's technically true, I feel the urge to say that md-raid and ext4/XFS etc. have served very well and reliably over the last 20 years. The point is that Ceph and ZFS are anything but slim. So, depending on the use case and the available power/budget, we use all of these techniques. And...
  10. pvps1

    Private server cluster configuration suggestion to use with Proxmox VE with SDS and NAS

    _if_ you are going with a NAS, I'd prefer a simple Linux host with NFS and md-raid. This way you have total control (e.g. you can just take a disk and access the data anywhere in case of failure, no proprietary controllers) and get a better performance:price ratio. Another advantage is simplicity, which makes it...
  11. pvps1

    PMG vs. commercial anti-spam products

    Thanks, that actually helps me a lot...
  12. pvps1

    Comprehension question about clusters?

    That's not correct. You don't strictly need network storage for a cluster. Some things like HA or live migration won't work then, but you still have a cluster.
  13. pvps1

    Replacing all OSDs

    We did it the same way to increase capacity.
  14. pvps1

    PMG vs. commercial anti-spam products

    I'd like to extend the question to antivirus. After a first quick search, PMG supports only Avast, and their websites are not particularly confidence-inspiring regarding Linux support. The reason for the question is ClamAV's detection rate
  15. pvps1

    Using multiple ethernet ports for VMs as opposed to one?

    One use case: bonding is the way to get redundancy. In production scenarios you would have your NICs connected to different switches (within a stack), so parts can fail or reboot (e.g. for firmware upgrades) without interruption of services
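As an illustration, a sketch of such a bond in Debian/PVE /etc/network/interfaces style; the interface names, addresses and the bridge are assumptions, not from the post:

```
auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2
    bond-mode active-backup
    bond-miimon 100
    bond-primary eno1

auto vmbr0
iface vmbr0 inet static
    address 192.0.2.2/24
    gateway 192.0.2.1
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
```

With eno1 and eno2 cabled to two switches in a stack, either switch can reboot without taking the bridge (and the VMs on it) offline; LACP (bond-mode 802.3ad) would be the alternative when the switches support it.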
  16. pvps1

    Deleting a VM by mistake - How to recover it

    For future forensics/recovery it is important that you stop write access to the disks involved, so stop the storage or the PVE host. Depending on your storage type and filesystems, there are different tools to recover deleted files. If the data is important you should consult a professional...
  17. pvps1

    If you are unsure why 'proxmox-ve' would be removed.....

    Remove all Debian kernels; there are dependencies (firmware, IIRC) that trigger the removal of PVE
  18. pvps1

    Disable Root Account

    You cannot, IMO. What are you trying to achieve by deleting it?
  19. pvps1

    lvm

    You have to resize your VG, your LV and finally the filesystem. There are a lot of howtos about LVM on the net; search for resize lvm...
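The three resize steps translate into commands roughly like these; the VG/LV names (vg0/data), devices and ext4 are assumptions for illustration, not a drop-in recipe:

```
# 1. Grow the volume group, either by enlarging an existing PV
#    after its partition was grown, or by adding a new disk:
pvresize /dev/sdb1
vgextend vg0 /dev/sdc1

# 2. Grow the logical volume, here by all newly free space:
lvextend -l +100%FREE /dev/vg0/data

# 3. Grow the filesystem to match (ext4; for XFS use xfs_growfs):
resize2fs /dev/vg0/data
```

Steps 2 and 3 can also be combined with `lvextend -r`, which resizes the filesystem along with the LV.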
  20. pvps1

    Proxmox Ceph Cluster - No Raid

    If you have a HW RAID, use it. It's not recommended, as stated above; the official way to go with Proxmox is ZFS. One other possibility is mdraid. I share t.'s opinion that HW RAIDs are a pain: with ZFS and Ceph you must not use them, only JBOD/HBA mode. And of course you can mix the...