Search results

  1. A

    MIGRATE VMWARE ESXI VIRTUAL MACHINE TO PROXMOX VE

    I believe they are referring to creating software TPMs; there are docs on Proxmox describing the process (a minimal sketch follows after these results). Also, welcome to Proxmox!! :)
  2. A

    [SOLVED] Broken apt servers? ftp.se.debian.org

    I did think about getting my own mirror as I am up to ehh 9 total Proxmoxes now, thank you for the tip. And hejsan to you too gentle stranger :)
  3. A

    [SOLVED] Broken apt servers? ftp.se.debian.org

    Had these in my /etc/apt/sources.list:
    deb http://ftp.se.debian.org/debian bookworm main contrib
    deb http://ftp.se.debian.org/debian bookworm-updates main contrib
    Got:
    root@pve22-31-cephzfs:~# apt-get install -f net-tools
    Reading package lists... Done
    Building dependency tree... Done
    Reading...
  4. A

    Proxmox 7 + Ubuntu LXC + Docker - Error ALWAYS

    Yeah I got these problems as well :(
  5. A

    Docker doesn't work after upgrade to Debian Bullseye - cgroup problem

    Have the same problem; switching apt repos didn't work :(
  6. A

    [Solution] CEPH on ZFS

    Updated the guide a bit: Ceph on ZFS
    Rationale: Wanting to get the most out of my Samsung PM983 enterprise NVMe drives, and more speed out of Ceph, I wanted to test Ceph on top of a non-raidz ZFS pool to make use of the ARC, SLOG and L2ARC (one possible approach is sketched after these results).
    Prerequisites: Proxmox (or Debian), a working Ceph installation (MON, MGR)...
  7. A

    [Solution] CEPH on ZFS

    I am adding this to my system right now to try out Ceph performance with ZFS as the underlying system and an enterprise SSD for the special/SLOG and cache devices, plus the ZFS ARC, to see if it improves Ceph performance, and I was wondering if you can see an improvement there.
  8. A

    [Solution] CEPH on ZFS

    This is interesting indeed! Do you by any chance have any performance data?
  9. A

    3 hosts bricked due to apt upgrade

    Might be a long shot, but I have had this message pop up on me; it turned out to be a broken RAM/CPU channel, so I had to replace the motherboard. Saying that because the only thing that the last download touches is RAM.
  10. A

    [SOLVED] Proxmox 7.0-11 Headless boot with Gigabyte X570S UD

    This should be made into a tutorial. Very useful!
  11. A

    3 hosts bricked due to apt upgrade

    Is your ifupdown2 working? The upgrade failed while writing that package for me and others... (a quick check is sketched after these results)
  12. A

    Move VM with ZFS disk to another Proxmox Server

    "Managed" this to apply Gluster on it to give me the HA option, but please be aware that speeds will go down quite a bit.
  13. A

    Ceph performance with simple hardware. Slow writing.

    Hi Adriano, I have invested in 10gb networking, 3 hosts (not even your 7), Samsung NVMe PLPs, bigger HDDs, NVMe DB/WAL, cache pools, etc., and wound up disappointed with the numbers: 100mb/sec read and 30mb/sec write on cold/warm data, with my expensive 10gb networking barely breaking a...
  14. A

    ZFS + Gluster - HA Working when hosts are up, stops when one host is down

    Hi! In my constant chase to get the most out of my PLP NVMes and 10gb network, I came to realise that Ceph wasn't cutting it when it came to performance, while the reliability and having a Proxmox GUI absolutely rock. (That's not even mentioning being able to SEE the files in Gluster even with a broken MDS/MON...
  15. A

    Ceph performance with simple hardware. Slow writing.

    That looks pretty even; are the disks evenly spread over the hosts you have?
  16. A

    Ceph performance with simple hardware. Slow writing.

    Check ceph osd df and look at the standard deviation; it should be as close to zero as possible. This indicates whether your disks are evenly distributed. Also check how many PGs you have (commands sketched after these results). And check this out: https://www.youtube.com/watch?v=LlLLJxNcVOY
  17. A

    PVE upgrade 6.4 > 7 network completely down

    I had a similar problem on 2 of my nodes, which failed to download and update packages: https://forum.proxmox.com/threads/lost-both-1gb-and-10gb-network-after-7-0-upgrade.95114/ I'd say this would indicate an issue with that apt update?
  18. A

    Ceph and Monitors and Managers

    The maximum I have ever heard mentioned is 7 monitors; the reason for not going higher was that update speed might slow down or, in rare cases, the small monitor DB might get corrupted...
  19. A

    How to recover from loosing all but 1 ceph monitor

    I would first make sure the network MTUs haven't changed or anything like that. Then you should be able to move the monitor DB aside on the other, broken nodes, regain quorum, and wait for them to sync with the primary working one (rough sketch after these results).
  20. A

    [TUTORIAL] How to run PVE 7 on a Raspberry Pi

    This is absolutely awesome, thank you sir! Curious if you are able to get the RPi to mount Ceph mounts/RBDs? Thinking of setting up LXCs/Docker for the RPi in a Proxmox cluster.
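
Notes and sketches

For result 1: a minimal sketch of adding a software TPM (vTPM) to an existing VM from the Proxmox shell; VM ID 100 and the storage name local-zfs are placeholders, and the official Proxmox docs cover the details:

    # allocate a v2.0 TPM state volume for VM 100 on the storage "local-zfs"
    qm set 100 --tpmstate0 local-zfs:1,version=v2.0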
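
For result 6: one possible way to back an OSD with ZFS (not necessarily the guide's exact steps) is to carve a zvol out of the pool and hand it to ceph-volume; the pool name "tank" and the size are placeholders:

    # create a zvol on the ZFS pool and use it as the OSD data device
    zfs create -V 800G tank/ceph-osd0
    ceph-volume lvm create --data /dev/zvol/tank/ceph-osd0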
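
For result 11: a quick check of whether ifupdown2 survived the upgrade, assuming a standard PVE 7 install:

    dpkg -l ifupdown2                        # state should be "ii" if the package installed cleanly
    apt-get install --reinstall ifupdown2    # retry the failed package if it did not
    ifreload -a                              # ifupdown2's command to re-apply /etc/network/interfaces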
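
For result 16: the checks mentioned there boil down to roughly this:

    ceph osd df tree          # per-OSD usage; the STDDEV at the bottom should be close to zero
    ceph osd pool ls detail   # pg_num per pool
    ceph -s                   # overall health, including PG status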
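
For result 19: a rough sketch of those two steps; node names, IPs and frame sizes are illustrative, and the monitor stores should be backed up before touching anything:

    # verify the MTU still fits end-to-end (9000-byte jumbo frames; 8972 = 9000 minus IP/ICMP headers)
    ping -M do -s 8972 <other-node-ip>

    # on a broken node: stop the monitor and move its stale DB aside so it can be recreated and resync
    systemctl stop ceph-mon@$(hostname)
    mv /var/lib/ceph/mon/ceph-$(hostname) /root/ceph-mon-backup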
