Search results

  1. proxmox metric only see what has been actually allocated, Ram ?

    "mem" returns used and "maxmem" returns allocated memory. SELECT mean("maxmem") FROM "system" WHERE ("host" = 'vm_name')
  2. [SOLVED] Docker in LXC no longer runs

    Since docker is only complaining about forwarding, try enabling it on the host: sysctl -w net.ipv4.ip_forward=1 If the container starts after that, add the entry to /etc/sysctl.conf: cat << 'EOF' >> /etc/sysctl.conf net.ipv4.ip_forward=1 EOF I only run docker in a Debian LXC...
  3. Unfortunately, PBS performance again

    SSD mirror with ZFS over a 1 Gb NIC to ZFS HDDs on the PBS. 105: 2021-02-04 01:03:19 INFO: Starting Backup of VM 105 (qemu) 105: 2021-02-04 01:03:19 INFO: status = running 105: 2021-02-04 01:03:19 INFO: VM Name: xxxxx 105: 2021-02-04 01:03:19 INFO: include disk 'scsi0'...
  4. Debian LXC Container with Docker Fails to restore

    I just tried a restore of an unprivileged LXC container running docker. It works without any issues for me.
  5. Increase security by using sudo?

    The main point is that working as root is a risk. Root is only needed for modifications: installing new packages, changing configurations, etc. Everything else works with normal user rights.
  6. Increase security by using sudo?

    An exploit was published just a week ago, CVE-2021-3156, which allows any user to switch to root without having sudo rights. The vulnerability goes back 10 years... Disabling root is not really possible on PVE; it is used for the cluster, the GUI shell, etc. ...
  7. Unfortunately, PBS performance again

    How much storage is actually used on the PBS? Performance depends on IOPS, since all chunks have to be referenced. Given that you have 300TB in use, you probably also back up a lot. You can count the chunks with "find .chunks -type f | wc -l" during the backup...
  8. Shut down PBS after backup

    With a vzdump hook script you can shut it down directly via SSH after the backups.
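A minimal sketch of such a hook script (the path, the hostname "pbs", and key-based root SSH access are my assumptions, not from the thread; vzdump passes the phase as the first argument and the script is enabled via the `script:` option in /etc/vzdump.conf):

```shell
#!/bin/bash
# vzdump hook: power off the PBS host once the whole backup job has finished.
# Enable with "script: /usr/local/bin/pbs-shutdown.sh" in /etc/vzdump.conf (path is an assumption).
phase="$1"

if [ "$phase" = "job-end" ]; then
    # Assumes passwordless (key-based) root SSH to the PBS host "pbs".
    ssh root@pbs 'shutdown -h now'
fi
```

The "job-end" phase fires once after all guests in the job are backed up, so the PBS is not shut down between individual VMs.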
  9. Unfortunately, PBS performance again

    htop alone doesn't tell you anything; I meant atop with the disk utilization under "busy" and the IOPS under "avio", on the source server as well as on the target PBS. Since only one core is at 100%, I assume something here is running single-threaded; you can have as many cores as you like, it won't help...
  10. Error: no space left on device when I try to update.

    Okay, it's not backups then. Go to /var/log/ and delete (or move) some file >= 5MB. Then run "apt update && apt install -y ncdu". In case this fails, you have some process constantly writing to your disk and filling it in no time. Run "ncdu -x /" and it will show you where the files are.
  11. Unfortunately, PBS performance again

    Interesting setup. Out of personal interest, what exactly is running on it? Have you let atop run alongside? Run "atop 1" and see what the utilization looks like. My backups are still performing well.
  12. Error: no space left on device when I try to update.

    Removing the backups should free the space. Make sure there are no backups left under /var/lib/vz/dump. Please post the output of "du -hs /var/lib/vz/*"
  13. Error: no space left on device when I try to update.

    This usually happens if users create backups on the local storage. See if there are any backups on local storage. Move them to a proper storage.
  14. Cluster nodes uses too much RAM

    It depends on the pool size and workload how low you can go. The minimum would be 1GB per 1TB of pool size, so 6GB to cache only metadata. To cache frequently read files and metadata I would recommend 16-24GB. More ARC is always better for performance. cat << 'EOF' > /etc/modprobe.d/zfs.conf #...
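The heredoc above is cut off; presumably it writes the ARC module parameters. A sketch of such a fragment (the byte values are illustrative assumptions, not the poster's recommendation) could look like:

```
# /etc/modprobe.d/zfs.conf -- cap the ZFS ARC (values are illustrative)
options zfs zfs_arc_max=17179869184
options zfs zfs_arc_min=8589934592
```

That is a 16 GiB ceiling and an 8 GiB floor, expressed in bytes. On systems booting from ZFS, the change typically only takes effect after `update-initramfs -u` and a reboot.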
  15. Cluster nodes uses too much RAM

    You have to configure ZFS; by default the ARC uses 50% of total RAM. You can add as much RAM as you like, it will still use 50% if not limited. drop_caches clears the ARC, so it temporarily goes down. This has nothing to do with Proxmox.
  16. Container creation takes ages

    The dedup table takes 20GB of RAM per 1TB of pool size, and this includes unused storage. If the table does not fit into RAM, it is constantly read from and written to disk.
  17. Container creation takes ages

    It's most likely due to deduplication. But it could also be a bad disk in your 3-way mirror, given that they have to sync. You only enabled dedup on the vdev, so you can simply recreate the datasets without dedup using send/receive and rename. In general, don't enable dedup; in most cases the overhead...
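A hedged sketch of the send/receive-and-rename approach mentioned above (the pool and dataset names are placeholders, not from the thread):

```shell
# Recreate a dataset without dedup via send/receive (names are placeholders).
zfs set dedup=off tank/ct-data              # new writes are no longer deduplicated
zfs snapshot tank/ct-data@migrate           # point-in-time copy to send
zfs send tank/ct-data@migrate | zfs receive tank/ct-data-new
zfs rename tank/ct-data tank/ct-data-old    # keep the original until verified
zfs rename tank/ct-data-new tank/ct-data
# Once everything checks out: zfs destroy -r tank/ct-data-old
```

Because `zfs receive` rewrites every block, the copy lands without dedup table entries; the rename step keeps the original dataset available as a fallback until the copy is verified.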
  18. High I/O wait with SSDs

    Nah, seems fine given that raidz splits data across drives. Your S3710 200GB does 300MB/s sequential write × 4 = 1200MB/s (the S3700 excluded for parity). Keep numjobs=1.
  19. High I/O wait with SSDs

    You are running the test for less than 3 seconds, so you are only testing the cache. Technically your drives can't exceed 400MB/s in a 3-way mirror. I would like you to test with the following and report back: fio --name=seqwrite --filename=fio_seqwrite.fio --refill_buffers --rw=write --direct=1...
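The fio command in the snippet is cut off. A plausible long-running sequential-write test in the same spirit (every flag after --direct=1 is my own assumption, not necessarily what the poster suggested) would be something like:

```shell
# Sequential write test that runs long enough to get past the cache
# (flag values after --direct=1 are illustrative assumptions).
fio --name=seqwrite --filename=fio_seqwrite.fio --refill_buffers \
    --rw=write --direct=1 --ioengine=libaio --bs=1M --size=10G \
    --runtime=60 --time_based --numjobs=1 --end_fsync=1
```

`--time_based --runtime=60` keeps the test going for a full minute, and `--end_fsync=1` forces the data to disk, so a few seconds of cached writes can no longer dominate the result.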
  20. Is it possible to generate "one-file" images from the deduplicated storage ?

    You can use the PBS backup client. Using the FUSE option you can mount the raw block device. What you do with it is up to you; you can read single files or stream the whole disk. https://pbs.proxmox.com/docs/backup-client.html
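A hedged sketch of that workflow (the repository, snapshot path, and archive name are placeholders; check the linked backup-client docs for the exact syntax):

```shell
# Expose a raw disk image from a PBS snapshot locally (names are placeholders).
export PBS_REPOSITORY='backup@pbs@pbs.example.com:datastore1'
# Map the image archive of a snapshot as a local loop block device:
proxmox-backup-client map "vm/105/2021-02-04T01:03:19Z" drive-scsi0.img
# The image appears as /dev/loopN: mount a partition to read single files,
# or stream the whole device, e.g. with dd, to get a "one-file" image.
# When done, release the mapping again:
proxmox-backup-client unmap drive-scsi0.img
```

`map`/`unmap` cover raw disk images; file-level (.pxar) archives can instead be mounted directly with the client's `mount` subcommand.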
