Search results

  1.

    proxmox graphs stopped working

    After restarting those processes, it worked.
  2.

    proxmox graphs stopped working

    Hi, the graphs don't work anymore. I restarted pvestatd: service pvestatd status pvestatd.service - PVE Status Daemon Loaded: loaded (/lib/systemd/system/pvestatd.service; enabled) Active: active (running) since Thu 2017-05-04 23:44:56 CEST; 1s ago Process: 19281 ExecStop=/usr/bin/pvestatd...
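A fix reported in these threads is restarting the daemons that feed the graphs. A minimal sketch, assuming a standard PVE install (service names may differ between versions):

```shell
# Restart the services involved in collecting and rendering node/VM graphs.
systemctl restart pvestatd    # gathers the status data behind the graphs
systemctl restart rrdcached   # caches the RRD writes pvestatd produces
systemctl restart pveproxy    # serves the web UI that renders them
```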
  3.

    replacing disk from mirrored root ZFS-Pool

    Hi, I would like to replace one SSD of the mirrored root ZFS pool. How do I do it? I only found a guide for a RaidZ1 in the wiki: https://pve.proxmox.com/wiki/ZFS:_Tips_and_Tricks#Replacing_a_failed_disk_in_the_root_pool Does it work the same way?
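For a mirror, the flow is close to the RaidZ1 guide linked above; a hedged sketch, where `rpool`, `/dev/sda` (healthy member), `/dev/sdb` (new disk), and the ZFS partition number are placeholders that depend on the PVE version and disk layout:

```shell
zpool status rpool                  # identify the failed/offlined device
sgdisk /dev/sda -R /dev/sdb         # copy the partition table from the healthy disk
sgdisk -G /dev/sdb                  # randomize GUIDs on the copied table
zpool replace rpool <old-device> /dev/sdb2   # resilver onto the matching partition
zpool status rpool                  # wait until the resilver completes
grub-install /dev/sdb               # make the new disk bootable (legacy GRUB setup)
```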
  4.

    CT restores backup to wrong disks?

    Do I have to set the size every time? It works when I move the files to local-zfs, but I can't do backups.
  5.

    local-lvm (pve): use the whole SSD

    And don't forget to grow the filesystem inside the VM afterwards.
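The grow-the-filesystem step mentioned above can be sketched for a Linux guest with an ext4 root on `/dev/sda1` (placeholders; `growpart` comes from the cloud-guest-utils package):

```shell
growpart /dev/sda 1    # extend partition 1 into the newly added space
resize2fs /dev/sda1    # grow the ext4 filesystem online to fill the partition
df -h /                # verify the new size
```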
  6.

    LXC SSD Volume ? LXC Container-File

    Do I understand correctly that moving containers via the GUI will be possible soon? A restore put the root disk (previously on local-zfs) and mp0 (previously on ZFS-RaidZ1) together onto the ZFS-RaidZ1. I would like to separate them again, i.e. root disk on local-zfs...
  7.

    CT restores backup to wrong disks?

    Hi, I have an Ubuntu 16.10 container running with its root disk stored on the SSD ZFS-Raid1 (local-zfs) pool of Proxmox and one mp0 stored on the ZFS-RaidZ1 pool. I make backups to a single HDD. After restoring a backup, both mount points are on the single disk. How can I move them back to both...
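One hedged way to approach this, assuming VMID 101, a placeholder backup path, and the storage names from the post: restore with an explicit target storage, then move individual mount points back (note that `pct move-volume` only exists on newer PVE versions):

```shell
pct restore 101 /mnt/backup/vzdump-lxc-101.tar.lzo --storage local-zfs
pct move-volume 101 mp0 ZFS-RaidZ1   # newer PVE: move mp0 back to the RaidZ1 pool
```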
  8.

    First home server: passthrough, ZFS, SSD cache, partitioning - looking for help

    I have moved on to migrating all VMs to LXC containers. The HDD throughput passed through to the VMs is just dismal in comparison. VM: 5-30 MB/s (regardless of cache settings), LXC: 250-300 MB/s. I "only" have a ZFS RaidZ1 running with 3 HDDs. No SSD cache.
  9.

    LXC Container doesn't work anymore

    Hi, I installed an LXC container with Ubuntu and installed Nextcloud in it. It worked like a charm until I rebooted it. It seems that NO service is loaded (no apache2, no sshd, nothing; it consumes only 7 MB of RAM!). And it tells me "The TERM environment variable is unset". Even if I set it...
  10.

    Homeserver in LXC

    I don't want to use it as a NAS - I have another, dedicated file server. I just find using the plugins so nice and simple :)
  11.

    Homeserver in LXC

    Hi, I would like to run NAS software such as Openmediavault in an LXC container. What is the best way to do this? Regards
  12.

    First home server: passthrough, ZFS, SSD cache, partitioning - looking for help

    I also have a question about point 4: What is the best way to arrange 3 equally sized HDDs for high availability and speed? Currently I have them in ZFS-RAIDZ1. The data rate on the host is great, but throughput to the guests is extremely slow.
  13.

    Poor vDisk performance

    Why does it work for other people with 3*3TB and 8GB RAM? What alternatives do I have? RAM is too expensive at the moment, and if I spend more money the WAF will decrease very rapidly... Should I switch to Raid 5? Or Raid1 and a single HDD? Or ZFS Raid1? I got Nextcloud running in an Ubuntu 16...
  14.

    Poor vDisk performance

    I don't think the cache is bigger than or as big as the 1.8GB ISO file. This Opteron has a turbo of 2.8GHz and should have enough power for this, and three disks with 128MB cache each and a transfer rate of about 180MB/s should be fast enough, too. And more disks can't be added; the server is...
  15.

    Poor vDisk performance

    Sorry, but I don't believe that. According to other tests, the performance of a 3-drive Raidz1 should still be OK and at least as fast as a single drive. If I enable writethrough, the read speed is as high as it should be. If I copy an ISO from the SSD pool to the HDD pool I...
  16.

    Poor vDisk performance

    Hi, first my system: Opteron 3280, 16GB ECC, 2*250GB SSD in ZFS-Raid1 (for host and guest OS), 3*1TB HDD in ZFS-RaidZ1 (for data vDisks). I have some problems with the performance of the disks inside my VMs (both Windows and Linux). With dd or CrystalDiskMark ("CDM") I get very good values (around...
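Since dd and CrystalDiskMark numbers can be skewed by host and guest caching, a fio run with direct I/O inside the guest gives a more honest figure; a sketch (the test file path and size are arbitrary):

```shell
fio --name=seqread --filename=/tmp/fio.test --size=1G \
    --rw=read --bs=1M --direct=1 --ioengine=libaio
rm /tmp/fio.test    # clean up the benchmark file
```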
  17.

    vDisks are increasing until HDD is full

    Yes, but I managed to use TRIM now. Now it is better. Thanks!
  18.

    vDisks are increasing until HDD is full

    Hi, I have some problems with my Proxmox setup. I have two SSDs in a ZFS Raid1 for the VMs and the host, and some HDDs where I create vDisks and attach them to guests. The problem is: if I delete stuff from a vDisk in the VM, the guest reports it as empty, but the host says that the vDisk is...
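This matches the TRIM resolution in the thread above: freed blocks only return to the host if the guest can issue discards. A sketch, assuming VM 100 with its disk on scsi0 (VMID, storage, and volume names are placeholders):

```shell
# On the PVE host: use the virtio-scsi controller and enable discard on the disk.
qm set 100 --scsihw virtio-scsi-pci
qm set 100 --scsi0 local-zfs:vm-100-disk-1,discard=on
# Inside the Linux guest afterwards:
fstrim -av    # trim every mounted filesystem that supports discard
```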
  19.

    NFS: storage is not online (500)

    Hi, I set up an NFS server on my Openmediavault3 server, but I can't get a connection from my Proxmox server. /etc/exports of the OMV3 server is: /export/PoolBackup 192.168.0.0/24(fsid=1,rw,subtree_check,secure,no_root_squash) /export/Test...
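Before digging into the PVE storage config, it helps to confirm the export is actually reachable; a sketch where 192.168.0.10 stands in for the OMV box's address:

```shell
exportfs -ra                # on the OMV server: re-read /etc/exports
showmount -e 192.168.0.10   # on the PVE host: the exports should be listed here
rpcinfo -p 192.168.0.10     # check that the mountd/nfs services are registered
pvesm status                # PVE's own view of the configured storages
```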
  20.

    High Cpu Load on idle KVM host

    I have the same problem, but I can't install linux-tools 4.4.