Recent content by ayufan

  1. ARM Support

    I updated the docker images for arm64 and amd64 to 2.1.2 if anyone is interested. 2.1.2 requires a `tmpfs` mounted on `/run`. The client libs for arm32 still stay at 1.1.9 for the time being since they work just fine. The base functionality works fine, but there's no shell, zfs (no packages...
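A minimal docker-compose sketch of the `/run` tmpfs requirement mentioned above. The image name follows the linked GitHub repository, but the tag, ports, and volume paths here are illustrative assumptions, not copied from it:

```yaml
# Sketch only: image tag, port, and datastore path are assumptions.
services:
  pbs:
    image: ayufan/proxmox-backup-server:v2.1.2
    ports:
      - "8007:8007"          # PBS web UI / API
    tmpfs:
      - /run                 # required since 2.1.2
    volumes:
      - ./datastore:/backups # persistent backup storage
```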
  2. ARM Support

    I pushed support for running server on 2.0.4.
  3. ARM Support

    It was not easy, but I was able to compile client for arm32 and run it on: Raspberry PI 2 / 4 (both using armv7l) and Turris Omnia. https://github.com/ayufan/pve-backup-server-dockerfiles/releases/download/v1.1.9/proxmox-backup-client-v1.1.9-arm32v7.tgz Patchset here...
  4. ARM Support

    Well, it is not hard to compile, and all the details on how to do it are public. I primarily use it as a server, and actually have a pretty long history of upgrading packages ;)
  5. ARM Support

    Great. For Debian I just copy a `.deb` from a container image :)
  6. ARM Support

    I did not try it on RPI 3. I run it all the time on `RockPro64`. It will not work on 3b+, as it requires ARM64 (it is only compiled for this). So, Rock64, RockPro64, RPI4, RockPi4, etc. About logs, they can be redirected to `tmpfs` (via docker compose tmpfs volume) to avoid write wear.
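The log redirection mentioned above can be sketched as a compose `tmpfs` entry; the log path `/var/log/proxmox-backup` is an assumption about where PBS writes its task logs, so verify it against your image:

```yaml
# Sketch only: the log directory is an assumed path, check your setup.
services:
  pbs:
    image: ayufan/proxmox-backup-server:latest
    tmpfs:
      - /run                       # required by newer versions
      - /var/log/proxmox-backup    # keep logs in RAM to reduce flash wear
```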
  7. ARM Support

    I did. I pushed 1.1.5 yesterday.
  8. ARM Support

    I maintain (infrequently) PBS running on ARM64: https://github.com/ayufan/pve-backup-server-dockerfiles. I use it for my personal PVEs.
  9. ARM Support

    You can try this. I use it on my arm64 NAS. https://github.com/ayufan/pve-backup-server-dockerfiles
  10. Data-loss when using `lvm-thin`

    I was migrating data between hosts. One of the lvm-thin pools was simply too small, and I only noticed it post factum when trying to start the machine on the external node: 2020-04-01 11:56:24 starting migration of VM 300 to node 'home-PC' (192.168.88.176) 2020-04-01 11:56:24 found local disk...
  11. How would we share one hard disk using shared storage and 2 proxmox hosts

    Yes, you can. But you have to use a recent mainline kernel (e.g. 3.10), not the pve-kernel. Next, enable nested VT-x (enable nesting in the `kvm_intel` or `kvm_amd` kernel module) and set the CPU model of the VM to `host`. That way you should have HW virtualization inside the VM. I tried it, and it works.
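The nesting setup described above can be sketched as follows. This assumes an Intel CPU (`kvm_intel`); on AMD, substitute `kvm_amd` and its `nested` parameter. The modprobe config filename is arbitrary:

```shell
# Enable nested virtualization persistently (use kvm_amd on AMD hosts).
echo "options kvm_intel nested=1" | sudo tee /etc/modprobe.d/kvm-nested.conf

# Reload the module so the option takes effect (stop all VMs first).
sudo modprobe -r kvm_intel
sudo modprobe kvm_intel

# Verify nesting is enabled: prints Y (or 1 on older kernels).
cat /sys/module/kvm_intel/parameters/nested
```

With nesting enabled, set the guest's CPU type to `host` in the VM config so the VT-x flag is passed through.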
