Search results

  1. pvps1

    annoying forum spam

i am sure the team is aware of the problem, but the mass of spam in the forum is annoying. i use the rss feed and i see it all :)
  2. pvps1

    [SOLVED] IMAP Port?

pmg is not an imap server. if you are searching for a full-featured 'mailserver', take a look at mailcow as an example. pmg has another purpose.
  3. pvps1

Migration from VMware to Proxmox and cluster with existing server

i would provide the customer with a NAS (nfs) for the migration and then migrate the VMs to ceph
  4. pvps1

    FreeSwitch Latency on Proxmox vs VMWare

as I said before, we are running quite large freeswitch installations on "default" settings. the guest os is debian. we don't have latency problems.
  5. pvps1

    FreeSwitch Latency on Proxmox vs VMWare

we are running several freeswitch installations without any special configuration, normally with the "host" cpu type
  6. pvps1

    Request for Consideration: New Support Tier

I think the forum does not reflect the position of proxmox within the hypervisor market very well. you could think that many users are homelab or very small. many questions, about networking e.g., are very, very low profile. so i think that pve is a player in a quite professional and wealthy...
  7. pvps1

    Recommend a partner in the GMT / GMT +1 / GMT +2 timezone

probably. please send more details: ioeekjlk@duck.com (one-time email)
  8. pvps1

    Firewall crap !

we don't use it at all; we deploy nftables with ansible
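A minimal sketch of the kind of ruleset such a deployment might push out (hypothetical file path, ports, and policy — the actual rules depend entirely on your environment):

```
#!/usr/sbin/nft -f
# hypothetical /etc/nftables.conf rolled out by ansible; adapt before use
flush ruleset
table inet filter {
    chain input {
        type filter hook input priority 0; policy drop;
        ct state established,related accept
        iif lo accept
        tcp dport { 22, 8006 } accept   # ssh and the pve web ui
        icmp type echo-request accept   # allow ping
    }
}
```

Loading it by hand with `nft -f /etc/nftables.conf` before enabling the systemd unit is a reasonable way to verify the syntax first.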
  9. pvps1

    Does anyone know how to loop scripts

why prevent it? there is probably a reason for them to start up. take a look at, e.g., why your CPU has high load from time to time. i guess this relates to the fans spinning up
  10. pvps1

    Package update notification

that's the tool used by pve to send you update notifications, afair. maybe it's not installed on proxmox backup by default
  11. pvps1

    Package update notification

    apt install apt-listchanges
  12. pvps1

    Hardware selection and compatibility problems

take a look at thomaskrenn.com (if you're from europe); you can choose "proxmox compatible" there when configuring servers. we have used supermicro for decades and would never go back to dell, hpe or ibm (been there, done that)
  13. pvps1

    AMD EPYC and Intel Xeon CPU's in Same Cluster - Migration?

in our experience, no. we switched from intel to amd some years ago, and nearly 100% of live migrations between the different archs ended in segfaults.
  14. pvps1

    Multi-region Ceph + mixed host HW?

not an expert, but my thoughts: surviving 5 of 6 down is never possible; you need a valid quorum, which means 50%+1 up (with 6 monitors, at least 4 must be up). if you heavily need storage highly available across 3 dc areas, you'd better invest a little bit more (then you rarely get the 5/6 situation). if I imagine the bandwidth costs between east-west (is...
  15. pvps1

    Stupid Mistake! Installation on main Node

the install script of fusionPBX installs a bunch of software, e.g. nginx, php etc., and compiles freeswitch from source. take a look at the install script and remove these programs (apt remove or apt purge). remove freeswitch and its systemd units, then install pve again. i guess nothing is lost...
  16. pvps1

    Portable VM IBN

probably by cloning it (with clonezilla e.g.) and restoring it with a desktop virtualization tool (virtualbox e.g.)
  17. pvps1

    How to migrate 500 VMs from VMWare to Proxmox, with as little downtime as possible

is it all Linux? if yes, you can migrate with nearly zero downtime just by using rsync and some scripting. we "once" migrated from xen to kvm that way. it takes time, but no downtime
  18. pvps1

    Optimal Network Setup for a 13-Node Proxmox Cluster with NFS and 10Gbps/1Gbps NICs

you can separate the management and migration networks (and should)
  19. pvps1

    Optimal Network Setup for a 13-Node Proxmox Cluster with NFS and 10Gbps/1Gbps NICs

i'd prefer redundancy. migrations are done rarely, so 95% of the time the bond would be used for storage only. it depends on your situation, but we don't run any network without a redundant link
  20. pvps1

    Optimal Network Setup for a 13-Node Proxmox Cluster with NFS and 10Gbps/1Gbps NICs

i'd definitely go for bonding the 2 x 10gbs interfaces and running vm migration etc. and storage in vlans within it; 2 x 1gbs for corosync, and a 2 x 1gbs bond for non-speed-relevant vlans, uplinks, etc.
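One way that layout could look in ifupdown terms (a sketch only — NIC names, VLAN ids, and addresses are made-up examples, and an LACP bond also needs matching switch-side configuration):

```
# hypothetical /etc/network/interfaces fragment
auto bond0
iface bond0 inet manual
    bond-slaves enp1s0f0 enp1s0f1    # the 2 x 10gbs ports
    bond-mode 802.3ad                # LACP
    bond-miimon 100
    bond-xmit-hash-policy layer3+4

auto bond0.20
iface bond0.20 inet static           # storage vlan on the fast bond
    address 10.0.20.11/24

auto bond0.30
iface bond0.30 inet static           # migration vlan on the fast bond
    address 10.0.30.11/24

auto eno1
iface eno1 inet static               # dedicated 1gbs link for corosync
    address 10.0.40.11/24
```

Keeping corosync off the loaded bond matters because cluster membership is latency-sensitive, not bandwidth-sensitive.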