Search results

  1. FOSS ZFS over iSCSI options for use with PM (2018)

    I'm looking for something FOSS that would work as a storage appliance with a web-based GUI that can be used for SMB. OMV appears to have ZFS iSCSI target support. Barring any unforeseen issues, this seems like it would make a good choice. The only downside is that it is not intended...
  2. Starting VMs with attached GPUs w/ qm causes code 43 and crashes. Manually invoking kvm doesn't.

    Additional info: I looked at the output of ps ax | grep kvm and I see that the invocation of kvm is identical to what I'm running manually, yet it somehow causes the code 43 on the GPU and occasional crashes. (Probably more than occasional if I were to keep using the VM with no HW-accelerated...
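    A minimal sketch of how the two invocations could be compared, assuming a hypothetical VMID of 100; ps -o args= is swapped in for the poster's ps ax | grep kvm to capture only the command line:

      # Command line Proxmox would use to start the VM:
      qm showcmd 100 > /tmp/qm-cmd.txt
      # Command line of the kvm process actually running (started via the GUI):
      ps -o args= -C kvm > /tmp/running-cmd.txt
      # Diff the two token by token to spot any divergence:
      diff <(tr ' ' '\n' < /tmp/qm-cmd.txt) <(tr ' ' '\n' < /tmp/running-cmd.txt)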
  3. Any way to leverage ZFS clustering for HA

    Thanks! Appreciate the suggestion. I'm looking into it now.
  4. Any way to leverage ZFS clustering for HA

    Thanks for the response. I should have phrased my question more clearly. ZFS has a "clustering" feature whereby volumes can be replicated and failed over to other nodes, creating an active-active "cluster". https://docs.oracle.com/cd/E37831_01/html/E52872/godgc.html (I'm not sure if this can...
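    For context, on plain OpenZFS the replication building block behind this kind of failover is snapshot send/receive; a minimal sketch with placeholder dataset and host names:

      # Initial full replication of a volume to a standby node:
      zfs snapshot rpool/data/vm-105-disk-2@sync1
      zfs send rpool/data/vm-105-disk-2@sync1 | ssh standby-node zfs recv -F rpool/data/vm-105-disk-2
      # Later syncs only ship the delta between snapshots:
      zfs snapshot rpool/data/vm-105-disk-2@sync2
      zfs send -i @sync1 rpool/data/vm-105-disk-2@sync2 | ssh standby-node zfs recv -F rpool/data/vm-105-disk-2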
  5. Any way to leverage ZFS clustering for HA

    I realize ZFS is not a true clustering filesystem, in that it doesn't support locks, but that's also true of the LVM-based HA as implemented by Proxmox. As ZFS does support active-active failover/takeover, I'm wondering if integration is in the works or currently possible in a way I'm not seeing. Thanks...
  6. Starting VMs with attached GPUs w/ qm causes code 43 and crashes. Manually invoking kvm doesn't.

    I don't mean manually as in 'qm start (vmid)'. Instead I mean: 'qm showcmd (vmid) > foo; source foo'. To put it another way, I can take the output of qm showcmd (vmid), paste it into my CLI, and successfully start the VM. However, if I start it from the GUI or using qm start, the GPU driver...
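    A minimal sketch of the two start paths being contrasted, assuming a hypothetical VMID of 101:

      # Managed start (the path that triggers code 43 for the poster):
      qm start 101
      # Manual start: dump the generated kvm command, then execute it directly:
      qm showcmd 101 > /tmp/vm101-cmd
      source /tmp/vm101-cmd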
  7. Deleted

    This is a double post.
  8. Troubleshooting pve-zsync errors: no emails received via ssmtp

    Thanks for the reply and sorry for the delay. My server has no rDNS, so I set up ssmtp instead of postfix. The sendmail command works, and Google search results suggest this should not be a problem. However, the syncs still fail and no email reaches me. Any help is appreciated.
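    A minimal sketch of an ssmtp relay setup of the kind described, with placeholder addresses and credentials; the keys are standard /etc/ssmtp/ssmtp.conf options:

      # /etc/ssmtp/ssmtp.conf -- placeholder values for an external relay
      root=admin@example.com
      mailhub=smtp.example.com:587
      AuthUser=admin@example.com
      AuthPass=secret
      UseSTARTTLS=YES
      FromLineOverride=YES

      # Quick delivery test via the sendmail shim:
      printf 'Subject: test\n\ntest body\n' | sendmail -v admin@example.com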
  9. Installing over Debian 8 on GCE node fails

    Fabian, sorry for the delay, I didn't see that you had responded. In the end I decided to simply not install Proxmox on that VM. I don't plan on starting instances on that server anyway, so since pve-zsync works without Proxmox on the remote side, I'm happy enough.
  10. Troubleshooting pve-zsync errors: no emails received via ssmtp

    Bump. I'd like to use this in production eventually. Any help, pointers, or suggestions are appreciated.
  11. Troubleshooting pve-zsync errors: no emails received via ssmtp

    I'm trying to sync a few disks with a remote server, and while the initial sync seems to go fine, the job status always reports an error:
    # pve-zsync list
    SOURCE NAME STATE LAST SYNC TYPE CON
    rpool/data/vm-105-disk-2 rgir-daily-backup...
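    A minimal sketch of how such a job is typically created and re-run by hand, with placeholder host and dataset names; the subcommands and options are standard pve-zsync usage:

      # Create a daily sync job for one disk:
      pve-zsync create --source rpool/data/vm-105-disk-2 --dest 192.0.2.10:backup --name rgir-daily-backup --maxsnap 7
      # Run it once manually with verbose output, then check the recorded state:
      pve-zsync sync --source rpool/data/vm-105-disk-2 --dest 192.0.2.10:backup --name rgir-daily-backup --verbose
      pve-zsync list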
  12. Installing over Debian 8 on GCE node fails

    I don't intend to launch VMs on this server, so I only allocated a node with 0.6 GB of RAM. Could that be causing the install to fail?
  13. Installing over Debian 8 on GCE node fails

    I was wondering about that. At least that makes the worst-case scenario less bad. I did install ZFS before triggering the failure. I could start from scratch and not attempt to install PM, but it would be nice to have Proxmox installed with a working package manager.
  14. Console breaks after installing systemd in container

    Sorry for the delayed response. The CT is Debian 8. I don't have access to it right now, but I'll verify the rest later. I do recall, however, that the process was: 1) apt-get update, 2) apt-get dist-upgrade, 3) reboot, 4) verify it still works, 5) install systemd and systemd-sysv, 6)...
  15. Installing over Debian 8 on GCE node fails

    I'd like to set up Proxmox on a GCE node, for use with pve-zsync, in order to create offsite backups from the synced datasets. I'm following this guide. When I attempt to install PM, apt-get fails as follows:
    update failed - see /var/log/pveam.log for details
    Job for pveproxy.service failed...
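    The specific guide isn't linked in the snippet, but a standard "Proxmox VE on Debian Jessie" install roughly follows this shape; a minimal sketch, with the usual repository steps:

      # Add the PVE no-subscription repository and its signing key:
      echo "deb http://download.proxmox.com/debian jessie pve-no-subscription" > /etc/apt/sources.list.d/pve-install-repo.list
      wget -O- "http://download.proxmox.com/debian/key.asc" | apt-key add -
      # Update and pull in the Proxmox VE packages:
      apt-get update && apt-get dist-upgrade
      apt-get install proxmox-ve ssh postfix ksm-control-daemon open-iscsi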
  16. Console breaks after installing systemd in container

    When I install systemd in my container, the console becomes inaccessible. Steps to reproduce (see the consolidated shell version below):
    - Create new container
    - Inside container:
      - apt-get update && apt-get dist-upgrade && reboot (console works after reboot)
      - apt-get install systemd-sysv && reboot (console no longer works)
    In the...
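    A consolidated shell version of the repro steps, run from the container's own shell:

      apt-get update && apt-get dist-upgrade -y
      reboot    # console still works after this reboot
      apt-get install -y systemd systemd-sysv
      reboot    # console no longer attaches after this one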
  17. Unable to boot from CDROM with OVMF BIOS since 4.3

    I was not able to get the Windows 8.1 VM working with OVMF. I'm not sure what I did differently before. Maybe more importantly, I realize now the problem is not isolated to CD-ROM disks. VMs in this state are also unable to boot virtual hard disks. Switching to SeaBIOS solves the problem...
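    A minimal sketch of toggling a VM's firmware between the two, assuming a hypothetical VMID of 102; bios is a standard qm option:

      qm set 102 -bios ovmf        # switch the VM to UEFI (OVMF) firmware
      qm config 102 | grep bios    # confirm the setting took effect
      qm set 102 -bios seabios     # fall back to legacy SeaBIOS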
  18. Unable to boot from CDROM with OVMF BIOS since 4.3

    I can't imagine what I might be doing wrong. I just installed using SeaBIOS, switched to EFI, and was surprised to see the CD boot. However, the disk then would not. I reinstalled the OS, and everything works again. I'm going to try to repeat the process and update this thread. I didn't...
  19. Unable to boot from CDROM with OVMF BIOS since 4.3

    pveversion -v
    proxmox-ve: 4.3-71 (running kernel: 4.4.21-1-pve)
    pve-manager: 4.3-10 (running version: 4.3-10/7230e60f)
    pve-kernel-4.4.6-1-pve: 4.4.6-48
    pve-kernel-4.4.21-1-pve: 4.4.21-71
    lvm2: 2.02.116-pve3
    corosync-pve: 2.4.0-1
    libqb0: 1.0-1
    pve-cluster: 4.0-47
    qemu-server: 4.0-94
    pve-firmware...