Search results

  1. Q

    No Networking After Upgrade to 8.2

    Yes, I would use Vim, but that's up to you. :)
  2. Q

    No Networking After Upgrade to 8.2

    We have just had the same problem. The interface names were renamed after the update. Compare your /etc/network/interfaces with the output of ip a.
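    A quick way to spot and fix the mismatch; the names enp3s0/enp4s0 below are only placeholders, and ifreload assumes ifupdown2 is installed (the default on current PVE):

        ip a                                    # list the names the kernel actually assigned
        grep enp /etc/network/interfaces        # find the stale name in the config
        sed -i 's/enp3s0/enp4s0/g' /etc/network/interfaces   # swap in the new name (placeholder names!)
        ifreload -a                             # apply the config without a reboot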
  3. Q

    Progress of an LXC snapshot rollback?

    Hi, I accidentally deleted ~4TB in a container and am now restoring a snapshot. It has already been running for 1d 11h. Is there a way to find out how far along it is? I am using ZFS as the filesystem. Thanks for any help.
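    ZFS does not report rollback progress directly, but a rough sketch for watching it from the outside, assuming the pool is named rpool (adjust to your pool name):

        zpool get freeing rpool     # space still queued to be reclaimed; shrinks as the rollback proceeds
        zpool iostat -v rpool 5     # live per-device I/O every 5 seconds while the rollback runs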
  4. Q

    Ceph recovery of HDD cluster slow

    ceph tell 'osd.*' injectargs '--osd-max-backfills 16'
    ceph tell 'osd.*' injectargs '--osd-recovery-max-active 4'

    I waited several hours, at least a whole night, for something to happen. There is a replica 3 pool and an EC pool on the OSDs. Ceph distributes across hosts. 10G network. Running 3 days...
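    To check whether the injected values actually took effect and whether recovery is moving, the standard Ceph commands are enough (osd.0 below is just one example OSD):

        ceph config show osd.0 osd_max_backfills   # confirm the runtime value on one OSD
        ceph -s                                    # recovery/backfill rate shows in the io: section
        ceph osd pool stats                        # per-pool recovery rates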
  5. Q

    Ceph recovery of HDD cluster slow

    root@pve1:~# ceph --version
    ceph version 17.2.7 (e303afc2e967a4705b40a7e5f76067c10eea0484) quincy (stable)

    Samsung 990 PRO 1TB as db/wal disks.
  6. Q

    Ceph recovery of HDD cluster slow

    Hi, I have a PVE cluster with 7 hosts, each of which has two 16TB HDDs. The HDDs all use NVMe drives as DB disks. There are no running VMs on the HDDs; they are only used as cold storage. A few days ago I had to swap 2 of these HDDs on PVE1. And since I already had the server open, I added two...
  7. Q

    Proxmox cluster reboots on network loss?

    Is there a way to turn this off? It f*ed up our entire cluster yesterday when we were making updates to our network infrastructure.
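    Those reboots are HA fencing: a node running HA resources self-fences via the watchdog when it loses quorum, which is expected behavior during network maintenance. A hedged sketch for checking and, if you can live without HA, disabling it per resource (vm:100 is a hypothetical service ID):

        ha-manager status        # list HA-managed resources and their state
        ha-manager remove vm:100 # hypothetical: stop HA-managing this VM so the node no longer fences for it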
  8. Q

    CEPH: uneven storage allocation on OSDs?

    Looks like I found the problem here (German). The cluster is now rebalancing and the PG number is much higher. Thanks for the push in the right direction.
  9. Q

    CEPH: uneven storage allocation on OSDs?

    Thanks, it looks like I need to enable the pg_autoscaler module first. How can I do that? The pool was created with "PG Autoscale Mode on". Edit: or is the module automatically activated when I set a target ratio for a pool? Edit2: Module is enabled: root@pve1:~# ceph mgr module ls | grep...
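    For reference, enabling the module and checking what the autoscaler intends to do are one-liners (mypool is a hypothetical pool name):

        ceph mgr module enable pg_autoscaler           # enable the manager module
        ceph osd pool set mypool pg_autoscale_mode on  # per-pool autoscale mode (hypothetical pool name)
        ceph osd pool autoscale-status                 # target vs. actual PG counts per pool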
  10. Q

    CEPH: uneven storage allocation on OSDs?

    Hello everyone, I'm just experimenting with Ceph and wondering why the OSDs are so unevenly allocated. There are 7 PVE servers, each with a 2TB and a 4TB NVMe. I have an EC 4+3 pool with the hosts configured as failure domain. Does anyone have any idea if this is normal or if I should try to distribute...
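    A quick way to quantify the imbalance before changing anything (standard Ceph commands, no assumptions beyond a working cluster):

        ceph osd df tree        # per-OSD utilization, weight, and PG count, grouped by host
        ceph balancer status    # whether the balancer is active and which mode it uses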
  11. Q

    Ceph HDDs slow

    I installed some Optane P4801X drives I had lying around and now use them as DB/WAL disks for the spinner OSDs. Now I have write speeds that are much, much better. Thanks for the little push in the right direction!
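    For anyone recreating this setup: on PVE an OSD can be created with a separate DB/WAL device in one step (the device paths below are placeholders):

        pveceph osd create /dev/sdb --db_dev /dev/nvme0n1 --db_dev_size 60   # 60 GiB DB volume on the NVMe; paths are placeholders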
  12. Q

    Ceph HDDs slow

    Hi, I am currently experimenting with Ceph on a PVE cluster with 7 hosts. Each of the hosts has two OSDs on 16TB SATA hard drives. Writing directly to the HDDs with dd, I get speeds up to 270MB/s. The storage and client networks are both connected at 10GBit/s, which I have also verified with iperf3. I...
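    For comparison, a raw-disk write test and a link test along these lines (device and address are placeholders; the dd command destroys data on the target disk):

        dd if=/dev/zero of=/dev/sdX bs=1M count=4096 oflag=direct   # sequential write, bypassing the page cache; /dev/sdX is a placeholder
        iperf3 -s                    # on the receiving node
        iperf3 -c 10.0.0.1           # on the sending node; placeholder address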
  13. Q

    [SOLVED] SSH doesn't work as expected in LXC

    Uff, you are right! My Ansible playbook only checked whether the service was enabled. Thank you!
  14. Q

    [SOLVED] SSH doesn't work as expected in LXC

    Sorry, I edited my last post with additional info. Usually I restart the container via Proxmox with "Reboot". SSH is enabled, yes.

    root@foundry:~# systemctl status sshd
    * ssh.service - OpenBSD Secure Shell server
      Loaded: loaded (/lib/systemd/system/ssh.service; enabled; vendor preset...
  15. Q

    [SOLVED] SSH doesn't work as expected in LXC

    Well, that's right. But when I reboot the container, shouldn't the SSH settings I configured be applied? They are not until I restart the SSH server after the container restart. How to reproduce:
    1. Start the latest LXC container with Debian 11
    2. Connect to the container with SSH...
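    The same loop as shell commands, as I understand the report (container ID 108 and hostname foundry are taken from a later post in the thread):

        pct reboot 108          # reboot the container from the PVE host
        ssh root@foundry        # connects, but still with the old sshd settings
        systemctl restart ssh   # inside the container; only now is the new config read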
  16. Q

    [SOLVED] SSH doesn't work as expected in LXC

    Same here. The problem for me is that all changes in the container's /etc/ssh/sshd_config are completely ignored unless I restart the SSH server by hand.

    ~# pct config 108
    arch: amd64
    cores: 4
    features: nesting=1
    hostname: foundry
    memory: 2048
    nameserver: 2620:fe::fe
    net0...
  17. Q

    [SOLVED] SSH doesn't work as expected in LXC

    I also have the same problem. Tested on Debian 11 and Ubuntu 20.04 LXC containers. I'm running PVE 7.1-8.
