Search results

  1. [SOLVED] Watchdog fence for physical nodes

    I changed the restricted tag so my group looks like this:

        group: vm16
            nodes vm16
            nofailback 0
            restricted 0

    This *might* work, except I only have local storage. The VM16 node got fenced, HA tries to start the VM on the VM17 node but fails because that node does not have the disks. Then HA...
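
    As a sketch only: in Proxmox 4.x the HA group stanza quoted above lives in /etc/pve/ha/groups.cfg, and a restricted variant that pins the VM to its only node with local storage might look like this (the restricted value is an assumption for illustration, not taken from the post):

        group: vm16
            nodes vm16
            nofailback 0
            restricted 1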
  2. [SOLVED] Watchdog fence for physical nodes

    In Proxmox 3.x I set up fencing using APC PDUs. I did not have any HA VMs set up, but if one of the Proxmox nodes locked up or crashed, the node would be fenced. Is it possible to replicate this behavior in 4.x and 5.x? I'm fine with the watchdog as the method of fencing; I just don't see a way to make...
  3. Live migration with local storage failing

    Once the fixes hit pve-test so I can install them easily, I'll be happy to provide feedback.
  4. Live migration with local storage failing

        boot: dc
        bootdisk: ide0
        cores: 2
        ide0: local-zfs:vm-102-disk-1,format=raw,size=512M
        ide2: none,media=cdrom
        memory: 1600
        name: name.com
        net0: virtio=54:B7:B7:56:4C:6E,bridge=vmbr0,tag=30
        numa: 0
        onboot: 1
        ostype: l26
        smbios1: uuid=c4d8009b-7ca8-4126-9c76-e85354fab637
        sockets: 1
        virtio0...
  5. Live migration with local storage failing

    Is the online live migration with local storage supposed to work, or are there still some known bugs to work out? Command to migrate:

        qm migrate 102 vm1 -online -with-local-disks -migration_type insecure

    Results in an error at the end of migration:

        drive-virtio0: transferred: 34361704448 bytes...
  6. Proxmox 5.0beta1 Installation problem - Package install

    I connected the network cable, and then the installer worked without error.
  7. Proxmox 5.0beta1 Installation problem - Package install

    I installed using ext4 and got the exact same error.
  8. KVM crash during vzdump of virtio-scsi using CEPH

    @spirit When I set up the VM to use virtio-scsi without discard, the backup completed successfully:

        scsi0: ceph_rbd:vm-105-disk-2,cache=writeback,size=4T
        scsihw: virtio-scsi-single
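
    As a rough sketch (assuming the disk already exists on the ceph_rbd storage; not a command taken from the thread), a config like that can be applied with qm:

        # use the single-queue virtio-scsi controller
        qm set 105 --scsihw virtio-scsi-single
        # attach the Ceph volume without the discard option
        qm set 105 --scsi0 ceph_rbd:vm-105-disk-2,cache=writeback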
  9. Proxmox 4.2 DRBD: Node does not reconnect after reboot/connection loss

    Linbit now provides a DRBD repo for Proxmox; maybe switching to that will resolve your problem. https://www.drbd.org/en/doc/users-guide-90/s-proxmox-install
  10. drbdmanage license change

    When Linbit changed the license, Proxmox removed the DRBD stuff. Linbit then took over development of the Proxmox plugin and created a repository for it. You need to switch over to using the Linbit repo for the DRBD components. https://www.drbd.org/en/doc/users-guide-90/s-proxmox-install I've not seen...
  11. Bring back DRBD8 kernel module

    @dietmar Will Proxmox provide a DRBD8 kernel module, or will we be forced to compile our own on every kernel upgrade?
  12. KVM crash during vzdump of virtio-scsi using CEPH

    I can switch back to virtio-scsi without discard and let you know what happens.
  13. KVM crash during vzdump of virtio-scsi using CEPH

    After changing from virtio-scsi back to virtio, vzdump no longer causes a crash. So whatever the problem is, it seems limited to virtio-scsi.
  14. Bring back DRBD8 kernel module

    Exactly! Because some people are using the DRBD9 kernel module, I'm asking that you make both available by having one in the kernel and the other in a package. That way we can choose between stability and bleeding edge.
  15. Bring back DRBD8 kernel module

    The DRBD9 kernel module is precisely what I have an issue with; Linbit's documentation says "running in dual-primary is not recommended": http://www.drbd.org/en/doc/users-guide-90/s-dual-primary-mode I don't want to use drbdmanage and the feature-incomplete plugin. I want to run manually...
  16. Bring back DRBD8 kernel module

    @dietmar My only problem with DRBD in Proxmox 4.x is that it includes the non-production-ready DRBD9. I should not have to compile code myself to get enterprise stability and features out of the Proxmox enterprise repo. @udo is spot on: CEPH is too slow because of the single-threaded IO in KVM...
  17. Bring back DRBD8 kernel module

    I have 38 DRBD volumes totaling around 50TB of usable storage across sixteen production nodes. We have held off upgrading to Proxmox 4.x in hopes that DRBD9 and its storage plugin would become stable, and after a year I'm still waiting. I need to upgrade to 4.x, but the non-production-ready DRBD9 makes...
  18. backup limit per vm

    See the maxfiles setting in /etc/vzdump.conf: http://pve.proxmox.com/pve-docs/vzdump.1.html If you need something more complex, use hook scripts. Some of my VMs are so large that I need to delete the old backup before making a new one. My script uses the backup-start hook, checks the vmid, and if matched...
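
    A minimal sketch of such a hook script, assuming the standard vzdump hook interface (the script is wired up via the script: option in /etc/vzdump.conf and invoked as <phase> [<mode> <vmid>]); the VMID and dump path here are illustrative, not from the post:

        #!/bin/sh
        # Sketch of a vzdump hook script; vzdump passes the phase as $1
        # and, for backup phases, the mode and vmid as $2 and $3.
        phase="$1"
        vmid="$3"

        if [ "$phase" = "backup-start" ] && [ "$vmid" = "102" ]; then
            # illustrative: remove the previous dump for this VM before the new run
            rm -f /var/lib/vz/dump/vzdump-qemu-"$vmid"-*
        fi

        exit 0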
  19. Storage model enhancement

    Done: https://bugzilla.proxmox.com/show_bug.cgi?id=1265
  20. Installed 4.4 Proxmox but not able to move forward

    Did you try pressing Enter to see if the prompt comes back? Proxmox uses the Ubuntu kernel; it looks like adding "iommu=soft" to the kernel params will prevent those messages. http://askubuntu.com/questions/805008/errors-showing-while-booting-16-04-amd-vi-event-logged-io-page-fault
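
    For reference, a sketch of adding a kernel parameter via the standard GRUB mechanism on a Debian-based system like Proxmox (generic steps, not taken from the thread):

        # /etc/default/grub
        GRUB_CMDLINE_LINUX_DEFAULT="quiet iommu=soft"

        # then regenerate the GRUB config and reboot
        update-grub
        reboot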