Search results

  1. [CAN BE DELETED - ERROR] After upgrade from 6.4 to 7.0, 3 node cluster is not ready (corosync)

    [SOLVED] After the update, the cluster network card was not working properly. Sorry. Hi! In a working cluster I updated to the latest version, and since the restart I have no quorum. Here I show the output of some commands: pvecm status shows this info. Cluster information...
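    A quick first check for this kind of quorum loss (a minimal sketch; run on any node) would be:

      pvecm status                # cluster name, quorum state, and member votes
      systemctl status corosync   # is corosync itself running?
      journalctl -u corosync -b   # link/interface errors since boot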
  2. Cluster Proxmox Install with two servers with one bridge with no CIDR configuration

    I have a cluster with 3 nodes. One of them runs perfectly, but on the other, the VMs have a lot of problems with their network cards over this bridge. I have found that on these servers I cannot configure the CIDR setting on the bridge. Can anybody help me?
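    For reference, a typical /etc/network/interfaces stanza that sets the CIDR directly on the bridge looks roughly like this (the address and NIC name are illustrative, not from the thread):

      auto vmbr0
      iface vmbr0 inet static
              address 192.0.2.10/24    # CIDR notation on the bridge itself
              gateway 192.0.2.1
              bridge-ports eno1        # physical NIC enslaved to the bridge
              bridge-stp off
              bridge-fd 0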
  3. Very basic configuration with two servers

    Do you mean to assemble a cluster with two nodes and replicate the partition holding the virtual machines with pvesr or pve-zsync?
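    As a rough illustration of the two tools mentioned (the VM ID, target node, and dataset below are placeholders):

      # pvesr: built-in storage replication between cluster nodes (ZFS-backed guests)
      pvesr create-local-job 100-0 nodeB --schedule "*/15"

      # pve-zsync: snapshot-based ZFS sync, also usable outside a cluster
      pve-zsync create --source 100 --dest 192.0.2.20:tank/backup --maxsnap 7 --name nightly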
  4. Very basic configuration with two servers

    Is there documentation for this type of configuration?
  5. Very basic configuration with two servers

    Redundancy: the hypervisor will have at most 5 virtual machines, with 28 GB of RAM consumed among all of them.
  6. Very basic configuration with two servers

    Hello! I am considering a very basic configuration with servers with 32 GB RAM and 2x2 TB hard disks. In addition, they have 4x1 Gbps NICs. What do you think is the best configuration for these servers? Until now I had only one hypervisor and I had the cards in a bond. On the first hard disk, the...
  7. Perl script can't open '/etc/pve/priv/authkey.key'

    I'm trying to write a script to shut down running virtual machines as a user other than root. When I launch this command I get "can't open '/etc/pve/priv/authkey.key' - Permission denied" (only root and www-data can read this file). What is the problem? I thought that having a connection as root...
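    One common workaround (a sketch under assumptions, not necessarily the fix chosen in the thread; the user name and password are placeholders) is to have the script authenticate against the API with a ticket instead of reading the key material directly, since /etc/pve/priv is deliberately root-only:

      # request an API ticket as a dedicated, non-root PVE user
      # (the user needs the VM.PowerMgmt privilege on the target VMs)
      curl -k -d "username=scriptuser@pve" --data-urlencode "password=SECRET" \
          https://localhost:8006/api2/json/access/ticket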
  8. Trying to shutdown VM through API with curl

    Thanks! It is not possible because the user that launches this command does not have permission to perform this.
  9. Trying to shutdown VM through API with curl

    Hi! I'm trying to shut down a virtual machine with curl, but it doesn't work. With the developer tools and the API doc reference I get this valuable info: * DEV Tools https://------:8006/api2/extjs/nodes/12345678HV1/qemu/2254/status/shutdown Request Method: POST Status Code: 200 OK Remote Address: ------:8006...
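    For reference, the full ticket-based flow with curl looks roughly like this (the node and VMID are taken from the snippet above; the host, password, ticket, and CSRF token are placeholders):

      # 1) obtain an authentication ticket and CSRF prevention token
      curl -k -d "username=root@pam" --data-urlencode "password=SECRET" \
          https://HOST:8006/api2/json/access/ticket

      # 2) POST the shutdown, presenting both values from step 1
      curl -k -X POST \
          -b "PVEAuthCookie=TICKET" \
          -H "CSRFPreventionToken: TOKEN" \
          https://HOST:8006/api2/json/nodes/12345678HV1/qemu/2254/status/shutdown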
  10. PVE HA Cluster and iAMT

    Then even if the motherboard is vPro/iAMT, must I continue using the setting that is available on the website LINK? NOTE: Nowadays I'm using the iTCO watchdog (module "iTCO_wdt"). This is a hardware watchdog, available in almost all Intel motherboards (ICH chipset) for the last 15 years. Best regards...
  11. PVE HA Cluster and iAMT

    Sorry... from the pve-kernel changelog: pve-kernel (4.4.16-62) unstable; urgency=medium * watchdog: mei_wdt: implement MEI iAMT watchdog driver. Is the module name iamt_wdt or mei_wdt? I'm trying to use it. Do I need to change the file /etc/default/pve-ha-manager, to...
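    For context, the HA stack picks its hardware watchdog from /etc/default/pve-ha-manager; a minimal sketch, assuming mei_wdt is indeed the right module name:

      # /etc/default/pve-ha-manager
      # load this hardware watchdog module instead of the softdog fallback
      WATCHDOG_MODULE=mei_wdt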
  12. PVE HA Cluster and iAMT

    Hello! Nowadays I have iAMT systems running on high-availability clusters without any problem. I've seen that new versions of the kernel (>= 4.4.16-62) implement a new watchdog module, "mei_wdt". Previously I had used iTCO_wdt and blacklisted the mei and mei_me modules. Does...
  13. CEPH problems. No boot

    failed: 'timeout 30 /usr/bin/ceph -c /etc/ceph/ceph.conf --name=osd.0 --keyring=/var/lib/ceph/osd/ceph-0/keyring osd crush create-or-move -- 0 1.82 host=46024953-HV1 root=default' ceph-disk: Error: ceph osd start failed: Command '['/usr/sbin/service', 'ceph', '--cluster', 'ceph', 'start'...
  14. CEPH problems. No boot

    We have already configured a Proxmox HA cluster with Ceph storage. The cluster was stopped without any problems, and now (after the summer) none of our Proxmox servers start the Ceph storage. It seems to be a problem with the OSDs, but I cannot find a solution. Thanks. SRV Ceph start messages === osd.0 ===...
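    Typical first diagnostics for OSDs that refuse to start (a sketch; osd.0 follows the error excerpt above, and the service invocation matches that era's sysvinit scripts):

      ceph -s                      # overall cluster health and monitor quorum
      ceph osd tree                # which OSDs are down or out
      mount | grep ceph            # are the OSD data partitions mounted?
      service ceph start osd.0     # retry a single OSD and watch the error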
  15. Proxmox 4.2 DRBD-Cluster. VMs HA managed doesn't start

    Hi, I'm trying to set up an HA cluster using DRBD for the storage. I configured everything on the 3 nodes (DRBD9 on Proxmox and a Proxmox 4 HA cluster) and installed a VM for testing. I can do offline migration between the hosts and start VMs, but if the VM is HA-managed I can't do that. I...
  16. Ceph cluster with ssd tiering.

    I'm using an HA cluster with Ceph for the storage. I configured everything on the 3 nodes (Ceph Server and Proxmox 4 HA Cluster) without any problems. My setup includes one SSD as a dedicated journal disk for each OSD (one SATA drive per hypervisor). Now I'm trying to test SSD...
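    Cache tiering in Ceph of that era was configured roughly like this (the pool names are illustrative; the feature was later deprecated upstream):

      ceph osd tier add rbd-pool ssd-cache            # attach the SSD pool as a tier
      ceph osd tier cache-mode ssd-cache writeback    # absorb writes, flush to the base pool
      ceph osd tier set-overlay rbd-pool ssd-cache    # route client I/O through the cache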
  17. Minimal setup for HA.

    I am trying to create a cluster with high availability at the lowest possible cost. My base hardware configuration for the server nodes is: ● Single i7-4790 ● 32 GB RAM ● Motherboard with iAMT (one integrated network card); eth3 used as vmbr0 ● 2 SATA3 hard disks of 2 TB (sda, sdb) ● 1 120 GB SSD...
  18. Proxmox 4.1-15 HA Cluster with CEPH Storage

    [SOLVED] I did a pveceph purge... and reconfigured the Ceph cluster and client... and now it is working correctly. Thanks
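    For readers landing here, the rebuild described above maps roughly onto these pveceph commands (the network is a placeholder, and exact subcommand names vary by PVE version):

      pveceph purge                          # remove the broken Ceph config from the node
      pveceph install                        # reinstall the Ceph packages
      pveceph init --network 10.10.10.0/24   # write a fresh ceph.conf
      pveceph createmon                      # recreate a monitor ("pveceph mon create" on newer PVE)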
