Search results

  1. PM VE on Minisforum HX90 issues

    Here's some basic ref data for my installation: /etc/network/interfaces on the host:

        source /etc/network/interfaces.d/*

        auto lo
        iface lo inet loopback

        auto eno1
        iface eno1 inet manual

        iface enx00249b68ea1d inet dhcp

        auto vmbr0
        iface vmbr0 inet static
            address 192.168.175.38/24...
  2. PM VE on Minisforum HX90 issues

    So I received a new Minisforum HX90 box with 4TB disk, 64GB memory, Ryzen, etc. and attempted to install PM 7.1 on it. Fail. For some reason, it would not detect the inbuilt 2.5Gb/s NIC, so I tried with a USB connected Ethernet adapter and it kinda worked. But then I discovered that I could...
  3. Epyc 7402P, 256GB RAM, NVMe SSDs, ZFS = terrible performance

    I got two of my PVE boxes fixed, but I left ZFS and went back to hardware RAID cards (Dell H700s), and all problems went away immediately. I also noticed a massive improvement based on the types of HDDs being used - SAS 15K drives with ZFS were bearable, although not optimal. The same drive...
  4. SMTP Whitelisted Inbound Domains still slow to pass emails to mail server

    That's exactly what it was. I thought it was the greylisting in PMG, even though I had SMTP whitelisted the domains. I thought that the whitelisting wasn't working. Of course it was working. The problem was that the receiving mail server wanted to do redundant things like SPF checking...
  5. SMTP Whitelisted Inbound Domains still slow to pass emails to mail server

    Actually I think I have resolved the issue. It was not a problem on the PMG side - after I managed to find the live logs and ran them alongside the live logs of our mail server (tail -f /var/log/mail.log), I could see the flow in real time, and that showed me that PMG is super fast at processing...
  6. SMTP Whitelisted Inbound Domains still slow to pass emails to mail server

    I'm not exactly sure what my expectations should be here, but I have set up a PMG in front of our mail server, and all is working fine for bi-directional integration. The one thing that caught me off guard was the time delay that was occurring between receiving a legitimate email from an external...
  7. Epyc 7402P, 256GB RAM, NVMe SSDs, ZFS = terrible performance

    Interesting... I have almost identical configuration, and pretty much the same problems. One of my VMs monitors CCTV IP cameras, so it's writing constantly and the culprit definitely appears to be ZFS. I'm going to install hardware RAID to the SSDs and re-install PM 6.2 to the server, and...
  8. Replace/Clone cluster node SATA SSD to NVMe?

    We are considering something similar to this, specifically to support high I/O nodes such as database servers. What was your overall experience with performance after you made the jump to NVMe?
  9. Help with creating LXC for Centos 5

    Hi there, I'm provisioning a PVE 5.2 system to be deployed to a remote colocation facility. The system has to support some older, legacy software that will only run on CentOS 5 (32 or 64 bit). I have been able to install CentOS 6 successfully on this PVE version using templates, but I don't...
  10. Best practice to secure single hypervisor colocated

    Thank you. That was the direction I needed. I see I can secure at the node level, so I think this will work. Much appreciated.
  11. Best practice to secure single hypervisor colocated

    I don't understand. What rules are you referring to? How can I restrict the NIC without those restrictions also applying to the virtual hosts on that box? I need the virtual hosts to be unencumbered, but only the management network to be restricted. And it all has to be done on one NIC. Your answer gives...
  12. Best practice to secure single hypervisor colocated

    Yes, but wouldn't that be overwritten with a PM upgrade later? And I also need the virtual bindings to the network ports to be unaffected by iptables. Only access to the management of the server should be affected. Not sure how to achieve that if iptables is restricting...
  13. Best practice to secure single hypervisor colocated

    I have to install a single server at a colocated facility, and it will be running PM 5.2. I need to be able to restrict the IP addresses that can access the hypervisor, but I cannot put the server behind an external firewall as the provider is only giving me 1U and 5 network IPs. Is there a...
  14. iscsi LVM fails after boot of the host

    I have the identical problem with PM 3.1 on a cluster with 3 nodes. Using FreeNAS iSCSI as my server, any storage that has PM as the initiator fails on restart of the hypervisor. However it is definitely a timing issue - like you, if I manually umount and mount -a the devices as listed in...
  15. PM 3.3 and Dahdi/VoIP

    Before I dive in and try this, and at the risk of destroying a HN in the process, I wanted to know if anyone had tried the generally published Dahdi modifications for Asterisk on PM 3.3? This appears to have been a standard thing with PM 1.x in the past, but a lot has changed and I don't want...
  16. Clustering between data centers

    I have had this working before with PM 1.8 servers, and we are just completing updating to PM 3.1. We have two data centers, each with about 3-4 PM servers in there. I want to be able to have all servers as part of the one cluster. Right now we can do this within the data centers with ease...
  17. Migration failed, Cleaned up but left with zombie VMs

    Thank you! This worked perfectly. Now everything appears to be cleaned up. Thank you for the advice. Myles
  18. Migration failed, Cleaned up but left with zombie VMs

    Thanks for the tip, but it didn't work. It gives a stat (...) : No such file or directory and similar errors on realpath, and can't umount ... afterwards. Myles
  19. Migration failed, Cleaned up but left with zombie VMs

    OK, that makes sense. However there are about 5 node members to this cluster. Does this change have to be done to each node member, or only the original one that the VMs were on? Myles
  20. Migration failed, Cleaned up but left with zombie VMs

    I was migrating a number of OpenVZ VMs between two servers and there was an unexpected server crash. This resulted in the VMs not migrating properly, leaving them still showing on the source server. I was able to eventually get them to the target server by way of a restore from backup, but...
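For the "Best practice to secure single hypervisor colocated" threads above, a minimal sketch of one common approach: filter only traffic addressed to the host itself (the INPUT chain), so bridged guest traffic, which traverses the FORWARD chain, stays unencumbered even on a single NIC. This assumes the stock PVE web UI port 8006 and SSH on 22; the admin address 203.0.113.10 is a placeholder.

```shell
# Allow management access (SSH + PVE web UI) only from the admin IP.
# INPUT matches traffic destined for the host itself; VM traffic
# bridged through vmbr0 goes through FORWARD and is not touched.
iptables -A INPUT -p tcp -s 203.0.113.10 --dport 22   -j ACCEPT
iptables -A INPUT -p tcp -s 203.0.113.10 --dport 8006 -j ACCEPT

# Drop management ports for everyone else.
iptables -A INPUT -p tcp --dport 22   -j DROP
iptables -A INPUT -p tcp --dport 8006 -j DROP
```

Note that rules added this way do not survive a reboot on their own; they need to be persisted (e.g. via a netfilter persistence package), and newer PVE releases ship a built-in firewall that manages equivalent rules at the datacenter/node level.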