Search results

  1.

    Greylisting problem with wide range ip senders

    I think that is for the normal use of SPF, not for whitelisting the IP addresses retrievable from SPF against greylisting. All external servers, even with a correct SPF record, get greylisted; that is normal. My proposal is to add an "IP Address from Domain SPF (Sender)" here: Adding a domain using...
  2.

    Greylisting problem with wide range ip senders

    As more and more companies move to Office 365, G Suite, or other "big farms", greylisting becomes much more problematic. May I suggest a function to retrieve whitelist IPs from a sender's domain SPF record?
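The suggestion above boils down to parsing a domain's SPF TXT record for its IP mechanisms. A minimal Python sketch (the record string and function name are illustrative; a real implementation would also resolve include:, a:, and mx: mechanisms recursively via DNS):

```python
import re

def spf_ip_networks(spf_record):
    """Return the literal ip4:/ip6: networks listed in an SPF record.

    Only literal IP mechanisms are handled here; include:, a:, and mx:
    would need recursive DNS lookups in a full resolver.
    """
    networks = []
    for term in spf_record.split():
        # Optional qualifier (+ - ~ ?) followed by an ip4: or ip6: mechanism.
        m = re.match(r"[+~?-]?(ip4|ip6):(.+)", term)
        if m:
            networks.append(m.group(2))
    return networks

# Made-up record for illustration:
record = "v=spf1 ip4:192.0.2.0/24 include:_spf.example.com ip6:2001:db8::/32 -all"
print(spf_ip_networks(record))  # → ['192.0.2.0/24', '2001:db8::/32']
```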
  3.

    Default Web-Interface language

    Since ticket-based Spam Reports have no login screen, the user has no way to select the Quarantine interface language.
  4.

    IDE vs SCSI

    Hi Alwin, thanks for your answer. I'll push for the update. Without cache, performance is at its worst: IDE without cache, SCSI without cache
  5.

    IDE vs SCSI

    I have a client with an old installation: PVE 4.4 on an OLD HP server, with HW RAID and a working BBU. While waiting to replace it, I'm trying to find ways to speed up the VMs. I saw that most of the virtual disks were created with IDE controllers and writethrough enabled, so I thought it would be an easy win to...
  6.

    Separate Cluster Network

    OK, maybe it's better to update that old wiki article, adding a link to the new documentation. Thanks, Stefano
  7.

    Separate Cluster Network

    Is this wiki article still valid with Proxmox 6? https://pve.proxmox.com/wiki/Separate_Cluster_Network
  8.

    Ceph in Mesh network, fault tolerance

    Just an update: the bond with balance-rr works perfectly if I disconnect a cable. I had problems when taking an interface down with ifdown (I lost 50% of pings), but if I disconnect the cable, all traffic is correctly routed to the still-connected interface of the bond.
  9.

    Ceph in Mesh network, fault tolerance

    After creating a script in if-up.d and if-post-down.d, and banging my head against various exceptions, I settled on balance-rr bonding, even if it doesn't work very well when bringing down an interface. I'll see how it works when physically disconnecting a cable. This is my final configuration...
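The final configuration is truncated in the snippet above; a minimal balance-rr bond stanza for Debian's /etc/network/interfaces might look like the following (interface names, address, and miimon value are illustrative assumptions, not the poster's actual settings):

```
auto bond0
iface bond0 inet static
    address 10.10.2.10/24
    bond-slaves eno1 enp24s0f0
    bond-mode balance-rr
    bond-miimon 100
```

With miimon set, the bonding driver polls link state, which matches the observation that a pulled cable fails over cleanly while an administratively downed interface may not.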
  10.

    Ceph in Mesh network, fault tolerance

    I'm almost there. Once all links are up, I can issue these commands on NODEA and NODEB:
    root@NODEA:~# ip route add 10.10.2.11 nexthop dev enp24s0f0 weight 2 nexthop via 10.10.2.12
    root@NODEB:~# ip route add 10.10.2.10 nexthop dev enp24s0f0 weight 2 nexthop via 10.10.2.12
    And everything...
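To survive reboots, commands like these are typically attached to the interface as post-up hooks; a hypothetical sketch for NODEA in /etc/network/interfaces, mirroring the command shown above (addresses and interface name taken from the snippet, placement is an assumption):

```
iface enp24s0f0 inet static
    address 10.10.2.10/24
    post-up ip route add 10.10.2.11 nexthop dev enp24s0f0 weight 2 nexthop via 10.10.2.12
```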
  11.

    Ceph in Mesh network, fault tolerance

    It doesn't work:
    root@NODEA:/etc/network# ifup enp24s0f0
    root@NODEA:/etc/network# ifup eno1
    RTNETLINK answers: File exists
    ifup: failed to bring up eno1
    It seems I can't add a second nexthop from the same source IP (even with a different dev).
  12.

    Ceph in Mesh network, fault tolerance

    This is the routing table of NODEA:
    root@NODEA:/etc/network# ip r
    default via 10.10.1.254 dev vmbr0 onlink
    10.10.1.0/24 dev vmbr0 proto kernel scope link src 10.10.1.10
    10.10.2.0/24 dev eno1 proto kernel scope link src 10.10.2.10
    10.10.2.0/24 dev enp24s0f0 proto kernel scope link src 10.10.2.10...
  13.

    Ceph in Mesh network, fault tolerance

    Hi Alwin, thanks for your answer. I tried balance-rr, balance-alb, and active-backup. But I don't want to insist on this, as the routing method seems more elegant to me, and I use only two NIC ports per server. I had already enabled it on NODEC (the "middle" one), and I tried enabling it on NODEA (the...
  14.

    Ceph in Mesh network, fault tolerance

    I'm following the Full Mesh guide, method 2 (routing, not broadcast), and everything works. I want to add fault tolerance, to handle cable/NIC port failures. At first, I thought of using bonding: I have 3 nodes, with 4 10Gb ports each. I connected each node to each of the others with 2 bonded cables. It...
  15.

    Proxmox VE 6.0 released!

    I see that corosync v3 does not actually support multicast. AFAIK unicast, with corosync v2, was suggested only for a maximum of 4 nodes. Is that true with v3, too? What are the new limits?
  16.

    PROXMOX and Windows ROK license for DELL

    I filed two bugs in Bugzilla:
    allow spaces in smbios settings: https://bugzilla.proxmox.com/show_bug.cgi?id=2190
    allow all smbios types supported by qemu: https://bugzilla.proxmox.com/show_bug.cgi?id=2191
  17.

    PROXMOX and Windows ROK license for DELL

    Yes, sure, I will do that. Yes, that's what I did, but it gets removed if I open the SMBIOS web GUI afterwards. I'll post it in Bugzilla.
  18.

    PROXMOX and Windows ROK license for DELL

    Sorry for the wording: not "I need patches" but "I kindly ask the Proxmox developers to consider these patches" :)
  19.

    PROXMOX and Windows ROK license for DELL

    With a bit of luck, I managed to make it work, but I need a patch in PVE. I'll explain everything, hoping to be helpful to others. After mounting Dell's Win2016 ISO, and mounting sources/install.wim with wimtools, I opened the windows\system32\rok.vbe file using https://master.ayra.ch/vbs/vbs.aspx...