Search results

  1. HEAD /: 400 Bad Request

    Hi, I tried this during a period when at least 10 error messages were logged in syslog. Unfortunately, nothing appears in 8007.log. You'll notice that the error messages contain the client address and port ("client [::ffff:127.0.0.1]:53468"), never the same, and these ports never appear in 8007.log :( The strange...
  2. HEAD /: 400 Bad Request

    Hi, I changed my command to: while true; do lsof -i :8007 > /tmp/400.log; ss -antop | grep -v LISTEN | grep -v TIME-WAIT >> /tmp/400.log; tail -1 /var/log/syslog | grep HEAD; if [ $? -eq 0 ]; then break; fi; done, removing the "sleep 1" so as to be sure of having the very latest results. But I found nothing :/...
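The capture loop quoted above can be written more readably. This is a hedged rework, not the poster's exact command: for a self-contained illustration it polls a temporary sample file rather than /var/log/syslog, and the lsof/ss snapshot step is shown as a comment.

```shell
#!/bin/sh
# Hedged rework of the capture loop from the post. On the real host, LOG
# would be /var/log/syslog and the commented snapshot step would run on
# each pass, so /tmp/400.log holds the socket state from just before the
# offending request hit the log.
LOG=$(mktemp)
echo 'proxmox-backup-proxy[2536]: HEAD /: 400 Bad Request: request failed' >> "$LOG"

while true; do
    # On the PBS host, snapshot socket state here, e.g.:
    # { lsof -i :8007; ss -antop | grep -vE 'LISTEN|TIME-WAIT'; } > /tmp/400.log
    if tail -n 1 "$LOG" | grep -q 'HEAD'; then
        break    # the matching line just arrived; last snapshot is preserved
    fi
done
echo "caught: $(tail -n 1 "$LOG")"
```

Using `grep -q` directly in the `if` avoids the separate `$? -eq 0` test from the original one-liner.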
  3. HEAD /: 400 Bad Request

    Good tips to try, thanks. Process 2536 is proxmox-backup-proxy, listening on port 8007. There is also a proxmox-backup-api listening on port 82. So I tried this command: while true; do ss -antop | grep -v LISTEN > /tmp/ss.log; tail -1 /var/log/syslog | grep HEAD; if [ $? -eq 0 ]; then break; fi...
  4. HEAD /: 400 Bad Request

    Hi there, on a PBS server I noticed these errors in /var/log/syslog : proxmox-backup-proxy[2536]: HEAD /: 400 Bad Request: [client [::ffff:127.0.0.1]:53468] request failed This is caused by something local (client [::ffff:127.0.0.1]), but I don't know what. And it occurs every 5 minutes...
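One way to chase a message like this is to pull the ephemeral client port out of the syslog line and then look for that port in `ss -antop` output captured at the same moment. A minimal sketch of the extraction step, assuming the log line format quoted above:

```shell
#!/bin/sh
# Extract the ephemeral client port from the quoted syslog line; the
# same sed expression works on live lines piped from /var/log/syslog.
line='proxmox-backup-proxy[2536]: HEAD /: 400 Bad Request: [client [::ffff:127.0.0.1]:53468] request failed'
port=$(printf '%s\n' "$line" | sed -n 's/.*\]:\([0-9][0-9]*\)\].*/\1/p')
echo "$port"   # → 53468
# With the port in hand, `ss -antop | grep ":$port"` would show which
# local process owned that connection (if it is still open).
```

The greedy `.*` anchors the match on the last `]:<digits>]` in the line, i.e. the client socket, not the `[2536]` process id.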
  5. [SOLVED] New node without ZPOOL, just for quorum

    Oh, perfect! Thank you, I opened that screen but didn't see this option. I need glasses ;)
  6. [SOLVED] New node without ZPOOL, just for quorum

    Hi all, I have a 2-node cluster with ZFS replication, so a pool "ZDATA" is created on each host and handled by the cluster. Now I've added a third host, without a "ZDATA" pool, just for quorum, and it runs a PBS server on dedicated storage (not ZDATA; all disks in RAIDZ in "rpool"). The host...
  7. POLL: Current Firewall Design, what is your ...

    I see, it's another way to do it... but not the one we've chosen ;)
  8. POLL: Current Firewall Design, what is your ...

    Here is the best I could do to illustrate the problem. Which is, as a reminder: the VMs are protected, but how do we protect the WAN interface of the nodes?
  9. POLL: Current Firewall Design, what is your ...

    Sorry, I'm not sure I understand how a physically separate Management LAN can help me here? About PFSense, I confirm that it is already a cluster of 2 PFSense VMs, each on a separate node and synced together. My second question was: if I use these PFSense VMs as the firewall for all the other VMs, how to...
  10. POLL: Current Firewall Design, what is your ...

    Hi guys, while searching for ways to harden Proxmox nodes, I wonder if my need can be considered a 'model of firewall' here :) Unlike j.io, I don't use a firewall on top of the cluster, but inside it: it is a PFSense VM (2 in fact, in a cluster), connected for WAN to the public interface of the...
  11. Proxmox 3.4 migrate all vm-s (HA)

    Ah, I'm not sure I see the link between migration_unsecure and "migrate all", but I'll try. I'll reinstall with a fresh 4.2 and report whether it works. Thanks!
  12. kernel: ipmi_si ipmi_si.0: Could not set the global enables: 0xcc

    Hi Proxmox VE Staff, I've installed PVE4 on two Dell R620 and two R310 servers; everything works except for a small bug on the R620s: syslog is filled every second with the message "kernel: ipmi_si ipmi_si.0: Could not set the global enables: 0xcc". It seems to be a known bug with the ipmi_si module ...
  13. Proxmox 3.4 migrate all vm-s (HA)

    Hi Spirit, a little bump to ask whether you know how, and when, this can be resolved? Thanks in advance.
  14. Proxmox 3.4 migrate all vm-s (HA)

    Hi there, I've installed a v4.0 cluster for testing, and it seems that this bug still exists: "Migrate all" only migrates non-HA VMs (it just registers a "migrate all" task, and nothing happens). Removing a VM from HA resources is enough to see it migrate with this feature. Have you found any...
  15. Auto migrate VM when a node has network fails or down

    Well. It seems so incongruous to me that a system designed to make a decision about "High Availability" when a host is down does nothing because... the host is down O_o But OK, I probably don't have the same understanding of HA; no matter, now I'm trying to imagine what can be done in this situation...
  16. Auto migrate VM when a node has network fails or down

    Ah, I didn't mention that it was fence_ipmi, but it's the same thing. I really don't understand what fencing has to do with the non-migration when a node crashes. Sure, fencing needs the network; its goal is to detect through the network that a node is down, isn't it? To try to reboot it, or to participate in the...
  17. Auto migrate VM when a node has network fails or down

    Sorry? Thank you for responding quickly, Dietmar, but can you elaborate? I don't understand what you mean by "your fence device on works"?
  18. Auto migrate VM when a node has network fails or down

    Hi, sorry to dig up this post, but I'm actually having this problem, and I'm not sure I understand your answer, Dietmar. In my mind, the principle of HA is to offer an automatic failover solution if some machine in a cluster dies. So cutting the network, IMHO, is the same as unplugging the power cable...
