Search results

  1. poor CEPH performance

    Poor means a W10 setup takes about 30 minutes instead of less than 10 minutes due to slow disks. VMs are slow. With my old PVE4 setup with Ceph and without BlueStore on the same hardware the problem did not exist. The old system was slower than single nodes with a RAID controller too, but not...
  2. poor CEPH performance

    Hi, I have a Ceph setup which I upgraded to the latest version, moving all disks to BlueStore. Now performance is pretty bad. I get IO delay of about 10 in the worst case. I use 10GE mesh networking for Ceph. The DBs are on SSDs and the OSDs are spinning disks. Situation while doing a W10...
  3. Ceph OSD stopped and out

    Hmm, now scrubbing errors are gone by doing nothing. Now I get: ~# ceph health detail HEALTH_WARN 1 osds down; 44423/801015 objects misplaced (5.546%) OSD_DOWN 1 osds down osd.14 (root=default,host=pve03) is down OBJECT_MISPLACED 44423/801015 objects misplaced (5.546%) # systemctl status...
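    A hedged sketch of the commands typically used to investigate and restart a down OSD like osd.14 on its node (pve03 here); the systemd unit name ceph-osd@<id> is the packaged-Ceph convention, and the exact steps depend on why the OSD went down:

    ```shell
    # Show the CRUSH tree and confirm which OSD is down and on which host
    ceph osd tree

    # On the affected node, try restarting the OSD daemon
    systemctl restart ceph-osd@14

    # Watch cluster status while the misplaced objects are recovered
    ceph -w
    ```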
  4. Ceph OSD stopped and out

    Hi, I've a problem with one OSD in my Ceph cluster: # ceph health detail HEALTH_ERR 1 scrub errors; Possible data damage: 1 pg inconsistent OSD_SCRUB_ERRORS 1 scrub errors PG_DAMAGED Possible data damage: 1 pg inconsistent pg 7.2fa is active+clean+inconsistent, acting [13,6,16] #...
  5. [SOLVED] Update: openrc / sysv-rc

    Hi, lately I've done (as usual) updates on my nodes and got: ********************************************************************** *** WARNING: if you are replacing sysv-rc by OpenRC, then you must *** *** reboot immediately using the following command: *** for file in...
  6. 3-node Proxmox/Ceph cluster - how to automatically distribute VMs among nodes?

    Exactly. It is ... I use Qnap or Synology devices for this.
  7. ata2 kernel message in KVM VM

    My fault ... the messages are sent by rsyslog from another device. My Linux VMs use virtio and XFS, not SATA and EXT4 ...
  8. 3-node Proxmox/Ceph cluster - how to automatically distribute VMs among nodes?

    No. I've been waiting for this feature for about 2 years now :-) I use shared NFS storage for templates and ISOs
  9. ata2 kernel message in KVM VM

    One of many Debian 9 KVM VMs logs once a day: Mar 13 22:34:37 - kernel ata2: hard resetting link Mar 13 22:34:37 - kernel ata2: SATA link up 3.0 Gbps (SStatus 123 SControl 300) Mar 13 22:34:37 - kernel ata2.00: configured for UDMA/133 Mar 13 22:34:37 - kernel sd 1:0:0:0: [sda] tag#9 FAILED...
  10. CEPHS NFS-Ganesha

    Neither is Nutanix, and yet you can still use the HA storage there for other purposes. I would like it if Ganesha were included in the Ceph packages again, for those who want to use it. At the moment "apt-get install nfs-ganesha-ceph" does not work. Even better would be support directly in the GUI...
  11. Proxmox VE Ceph Benchmark 2018/02

    My setup: initially set up with PVE4, Ceph Hammer and a 10GE mesh network. Upgraded to 5.3. OSDs are 500GB spinning disks. Data: rados bench -p rbd 60 write -b 4M -t 16 --no-cleanup Total time run: 60.752370 Total writes made: 1659 Write size: 4194304 Object...
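    The write benchmark quoted above is normally rounded out with a matching read pass and a cleanup; a sketch, assuming the same rbd pool as in the snippet:

    ```shell
    # 60-second sequential write test with 4M objects and 16 threads;
    # --no-cleanup keeps the objects so they can be read back
    rados bench -p rbd 60 write -b 4M -t 16 --no-cleanup

    # Sequential read test against the objects written above
    rados bench -p rbd 60 seq -t 16

    # Remove the benchmark objects afterwards
    rados -p rbd cleanup
    ```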
  12. Cluster traffic

    Thanks for pointing me to the documentation I was unable to find.
  13. Cluster traffic

    Hi, in my 3-node cluster I have a 10GE mesh network for Ceph traffic. I've read about "moving cluster traffic" to the Ceph network. How do I do this? TIA
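    Moving corosync (cluster) traffic onto another network is usually done by editing /etc/pve/corosync.conf; a minimal sketch, assuming the Ceph mesh is 10.10.10.0/24 and hypothetical per-node addresses — the ring addresses and config_version must be adjusted to the actual cluster, and config_version must be incremented on every change:

    ```
    totem {
      version: 2
      config_version: 4          # increment on every edit (assumption: previous value was 3)
      interface {
        ringnumber: 0
        bindnetaddr: 10.10.10.0  # network of the 10GE mesh (assumption)
      }
    }

    nodelist {
      node {
        name: pve01
        ring0_addr: 10.10.10.1   # hypothetical mesh address of pve01
      }
      # ... pve02 and pve03 accordingly
    }
    ```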
  14. Ceph upgrade

    Hi, there are a few things which IMHO should be added to the Ceph upgrade pages in the wiki. Can I do this, and how do I get an account? Or should I write my additions down here? TIA
  15. W2016 shuts down with EventID 109 - Kernel-Power

    Hi, I run several Windows 2016 VMs on several hosts and clusters. All work as expected except one single W2016 server VM. About every week it shuts down with EventID 109, source: Kernel-Power. When I google this I can find many posts about faulty power supplies but nothing else. IMO it's pretty...
  16. NoVNC Interface

    Hi, is it possible to configure NoVNC to get back the old interface with the status line and buttons at top? TIA Matthias
  17. Manually remove snapshot

    When you look at my last post - it IS "Thin-LVM", and removing the snapshot is no problem, right?
  18. Manually remove snapshot

    Thanks ... it looks like this: --- Logical volume --- LV Path /dev/pve/vm-301-disk-1 LV Name vm-301-disk-1 VG Name pve LV UUID 2AL3Iz-zjdx-Hsfl-3Yz3-zTyO-wYGT-d3woTM LV Write Access read/write LV Creation host...
  19. Manually remove snapshot

    Hi, on an LVM setup I did something stupid. I manually (by editing the conf file ...) added the LVM disks of VM A to VM B. Now VM B is in production and I've noticed that there is a snapshot on VM A. My question is: can I safely use "lvremove" to remove the LVM snapshots? I think the answer is: yes ...
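    A hedged sketch of how such a snapshot is usually checked and removed; the snapshot LV name below is hypothetical — the real name has to be taken from the lvs output, and only the snapshot is removed, never its origin volume:

    ```shell
    # List LVs in VG pve; snapshots show an 's' in the first lv_attr column
    # and name the volume they snapshot in the Origin column
    lvs -o lv_name,vg_name,lv_attr,origin pve

    # Remove only the snapshot LV (hypothetical name), not the origin
    lvremove /dev/pve/snap_vm-301-disk-1
    ```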