Search results

  1. API access for Spice

    Now on 8.1.4 and the problem remains.
  2. API access for Spice

    I'm not getting anywhere here ... does nobody have an idea?
  3. API access for Spice

    Hello, I am trying to start a Spice session via the API as described here: https://forum.proxmox.com/threads/spice-timeout-cut-paste.144123/. I am not succeeding, though, presumably because I am doing something wrong with the permissions. At the moment I am still on PVE 7.4-17...
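The threads above are about starting a SPICE session through the Proxmox VE API, which typically means requesting connection data from the `spiceproxy` endpoint of a VM and feeding it to `remote-viewer`. As a minimal sketch, the helper below renders such a response into the `[virt-viewer]` .vv format that remote-viewer consumes; the field names in `sample` are assumptions based on typical spiceproxy responses, not taken from these posts, so check them against your own API output:

```python
# Sketch: render a (hypothetical) spiceproxy-style API response into a
# virt-viewer .vv file. Field names in `sample` are assumptions.

def build_vv(resp: dict) -> str:
    """Render the [virt-viewer] INI section that remote-viewer expects."""
    lines = ["[virt-viewer]"]
    for key, value in resp.items():
        lines.append(f"{key}={value}")
    return "\n".join(lines) + "\n"

# Example response shape (hypothetical values):
sample = {
    "type": "spice",
    "proxy": "http://pve.example.com:3128",
    "host": "pve.example.com",
    "tls-port": 61000,
    "password": "secret",
}

print(build_vv(sample))  # pipe this into a file and open it with remote-viewer
```

Permission-wise, the API token or user making the request needs console access to the VM (e.g. the `VM.Console` privilege), which is likely what the poster's setup was missing.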
  4. Spice Timeout / cut&paste

    Where do you get that from? I only know "virt-viewer". The only thing I have found that points in that direction is: https://gitlab.com/pawlakm/pve-spice-connect. But it is fairly complicated to get there. Not for me with "virt-viewer" as the client, which is why I am asking :)
  5. Spice Timeout / cut&paste

    Hello, I have already read quite a bit about this but understood very little :-) I am experimenting with Spice for a VDI approach. In itself it works very well. But I have a "problem" and a question. Problem: timeout. When a session is inactive for a longer period, the...
  6. PVE8: mounting /var/lib/vz

    Thanks, I don't remember ever deleting the volume. Never mind, I'll try it that way then. Is there actually a good reason why the thin pool is created by default? Can this be influenced during setup? Edit: better, it works now.
  7. PVE8: mounting /var/lib/vz

    pveversion -v proxmox-ve: 8.0.1 (running kernel: 6.2.16-3-pve) pve-manager: 8.0.3 (running version: 8.0.3/bbf3993334bfa916) pve-kernel-6.2: 8.0.2 pve-kernel-6.2.16-3-pve: 6.2.16-3 ceph-fuse: 17.2.6-pve1+3 corosync: 3.1.7-pve3 criu: 3.17.1-2 glusterfs-client: 10.3-5 ifupdown2: 3.2.0-1+pmx2...
  8. PVE8: mounting /var/lib/vz

    Hello, I have installed my first PVE8 system and am failing at something that always worked with all previous versions. The following fstab entry causes the system not to boot: /dev/pve/data /var/lib/vz xfs defaults 0 0 I then get: Found volume group "pve" using...
  9. [SOLVED] fstab - mount local volume: timeout

    https://forum.proxmox.com/threads/timed-out-waiting-for-device-dev-mapper-pve-data.58510/ Removed /dev/pve/data, recreated everything, works.
  10. [SOLVED] fstab - mount local volume: timeout

    Hi, for good reasons I want file system storage. As usual I removed the LVM thin storage and then issued: mkfs.xfs /dev/pve/data and then, as usual (did this more than once with PVE), added the following line to /etc/fstab: /dev/pve/data /var/lib/vz xfs defaults 0 0 then tested this with: mount...
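The boot hang in this thread is typically systemd timing out while waiting for the `/dev/pve/data` device at mount time. A hedged fstab sketch that avoids blocking boot in that case (the option names come from the standard systemd.mount/fstab documentation, not from these posts):

```
# /etc/fstab - sketch: don't hang the boot if the LV is missing or slow
# nofail: continue booting even if this mount fails
# x-systemd.device-timeout: wait at most 10s for /dev/pve/data to appear
/dev/pve/data /var/lib/vz xfs defaults,nofail,x-systemd.device-timeout=10s 0 0
```

This only masks the symptom; the actual fix in the follow-up post was recreating the volume.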
  11. poor CEPH performance

    Any suggestions how to tune my Ceph setup?
  12. poor CEPH performance

    No, I've just followed the instructions in the PVE wiki for the upgrade.
  13. poor CEPH performance

    Sadly not. And yes, the 50 MB/s are from the Win10 install. I had a look at my Nagios graphs and they prove me wrong: perhaps it's just me, but compared to single nodes with RAID5 my Ceph cluster is slow.
  14. poor CEPH performance

    Different brands and models of 500 GB SATA disks. None, just the usage. Of course not. It runs in JBOD mode. Again: the problem popped up after the upgrade from PVE4 to 5 and got even worse by switching to BlueStore.
  15. poor CEPH performance

    Poor means a W10 setup takes about 30 minutes instead of less than 10 minutes, due to slow disks. VMs are slow. With my old PVE4 setup with Ceph and without BlueStore on the same hardware the problem did not exist. The old system was slower than single nodes with a RAID controller too, but not...
  16. poor CEPH performance

    Hi, I have a Ceph setup which I upgraded to the latest version and moved all disks to BlueStore. Now performance is pretty bad. I get an IO delay of about 10 in the worst case. I use 10GbE mesh networking for Ceph. The DBs are on SSDs and the OSDs are spinning disks. Situation while doing a W10...
  17. Ceph OSD stopped and out

    Hmm, the scrubbing errors are now gone without me doing anything. Now I get: ~# ceph health detail HEALTH_WARN 1 osds down; 44423/801015 objects misplaced (5.546%) OSD_DOWN 1 osds down osd.14 (root=default,host=pve03) is down OBJECT_MISPLACED 44423/801015 objects misplaced (5.546%) # systemctl status...
  18. Ceph OSD stopped and out

    Hi, I have a problem with one OSD in my Ceph cluster: # ceph health detail HEALTH_ERR 1 scrub errors; Possible data damage: 1 pg inconsistent OSD_SCRUB_ERRORS 1 scrub errors PG_DAMAGED Possible data damage: 1 pg inconsistent pg 7.2fa is active+clean+inconsistent, acting [13,6,16] #...
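For an active+clean+inconsistent PG like the one in this post, the standard Ceph remedy is to inspect what the scrub found and then ask the primary OSD to repair the PG. A sketch of the usual command sequence (the PG id 7.2fa is taken from the snippet above; run these against a live cluster only after reviewing the inconsistency report):

```
ceph health detail                                      # confirm which PG is inconsistent (here: 7.2fa)
rados list-inconsistent-obj 7.2fa --format=json-pretty  # inspect the damaged objects
ceph pg repair 7.2fa                                    # trigger a repair on the acting set
```

As the follow-up post shows, a deep scrub can also clear the error on its own if the inconsistency was transient.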
  19. [SOLVED] Update: openrc / sysv-rc

    Hi, lately I've done (as usual) updates on my nodes and got: ********************************************************************** *** WARNING: if you are replacing sysv-rc by OpenRC, then you must *** *** reboot immediately using the following command: *** for file in...
