Recent content by Max P

  1. Max P

    Ceph OSD creation fails

    Would it then be better to first upgrade to the newest ceph version (I think v15 is also supported on proxmox right now) and then recreate all OSDs one by one? I am not sure which changes are in v15, but if there are also default disk layout changes, then this way we wouldn't have to do it...
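
    One way to do the per-OSD recreation, as a sketch assuming the PVE 6 pveceph CLI (the OSD id 12 and /dev/sdi are placeholders):

        ceph osd out 12                    # let data drain off the OSD
        # wait for the rebalance to finish before continuing
        systemctl stop ceph-osd@12
        pveceph osd destroy 12 --cleanup   # remove the OSD and wipe its disk
        pveceph osd create /dev/sdi        # recreate it with the current defaults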
  2. Max P

    Ceph OSD creation fails

    I initially also thought that there were partition table inconsistencies, which is why I rebooted the node, thinking that would fix it since the kernel has to reread the tables. The cluster was initially set up with proxmox 5 (including ceph) and we upgraded it last year (including ceph)...
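
    For reference, a reboot shouldn't be strictly necessary just to reread the tables; a sketch of the usual alternative (the device name is a placeholder):

        partprobe /dev/sdi                      # ask the kernel to reread the partition table
        ceph-volume lvm zap /dev/sdi --destroy  # wipe leftover LVM/partition metadata (destroys data!)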
  3. Max P

    Ceph OSD creation fails

    Hi, I am trying to create an OSD on one of our nodes in our 4 node cluster and I am getting this error:
    command 'ceph-volume lvm create --cluster-fsid e9f42f14-bed0-4839-894b-0ca3e598320e --block.db '' --data /dev/sdi' failed: exit code 1
    System state before trying to create the OSD (via the...
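
    The empty --block.db '' argument suggests the DB device or size resolved to an empty string. A possible workaround sketch, assuming the pveceph CLI (/dev/nvme0n1 is a placeholder):

        pveceph osd create /dev/sdi                        # colocate DB/WAL on the data disk
        pveceph osd create /dev/sdi --db_dev /dev/nvme0n1  # or name the DB device explicitly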
  4. Max P

    CEPH WAL/DB monitoring/measurements

    Hi, we already have a 4 node proxmox cluster running ceph and are thinking about expanding it. We are trying to reevaluate our hardware choices by observing the performance of our current cluster and are now trying to find out how much the WAL and DB are used on our system. Each node has one...
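
    One way to read this off a running OSD is via its BlueFS perf counters (osd.0 is a placeholder; run on the node hosting that OSD):

        ceph daemon osd.0 perf dump bluefs   # db_used_bytes / wal_used_bytes show the actual usage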
  5. Max P

    Error adding (existing) CephFS

    Yesterday I didn't see that the syslog entries were from pvedaemon, not pvestatd. So I restarted both after patching the perl script and now it works. CephFS is mounted and can be used via the webinterface. @Alwin, I assume once the fix is released on your repos my quick and dirty fix (I only...
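
    For reference, restarting both daemons so a patched module is reloaded is a one-liner:

        systemctl restart pvedaemon pvestatd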
  6. Max P

    Error adding (existing) CephFS

    I got rid of the mount error by patching the perl script on all nodes (but I had limited the storage to only pve1 on my first try). The mount succeeded and I can see the content of the cephfs in /mnt/pve/cephfs, but the webinterface and syslog errors are the same as before. The error in the...
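
    Restricting a storage to a single node while testing can be done via pvesm (the storage id cephfs is an assumption here):

        pvesm set cephfs --nodes pve1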
  7. Max P

    Error adding (existing) CephFS

    root@pve1:~# pveversion -v
    proxmox-ve: 5.3-1 (running kernel: 4.15.18-9-pve)
    pve-manager: 5.3-5 (running version: 5.3-5/97ae681d)
    pve-kernel-4.15: 5.2-12
    pve-kernel-4.15.18-9-pve: 4.15.18-30
    pve-kernel-4.15.18-5-pve: 4.15.18-24
    pve-kernel-4.15.18-1-pve: 4.15.18-19
    pve-kernel-4.15.17-3-pve...
  8. Max P

    Error adding (existing) CephFS

    @Alwin, I found a bug in the PVE/Storage/CephTools.pm perl script. It still doesn't work, but at least I get a different error now. This is the fix:
    --- /usr/share/perl5/PVE/Storage/CephTools.pm 2019-01-07 16:31:05.170790597 +0100
    +++ /usr/share/perl5/PVE/Storage/CephTools.pm 2019-01-07...
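
    A sketch of applying such a diff safely (the file name cephtools.diff is hypothetical):

        cp /usr/share/perl5/PVE/Storage/CephTools.pm /root/CephTools.pm.bak  # keep a backup
        patch /usr/share/perl5/PVE/Storage/CephTools.pm < cephtools.diff     # hypothetical diff file
        systemctl restart pvedaemon pvestatd                                 # reload the patched module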
  9. Max P

    Error adding (existing) CephFS

    Yes:
    root@pve1:~# cat /etc/pve/ceph.conf
    [global]
    auth client required = cephx
    auth cluster required = cephx
    auth service required = cephx
    bluestore_block_db_size = 21474836480
    cluster network = 10.10.1.0/24
    fsid =...
  10. Max P

    Error adding (existing) CephFS

    root@pve1:~# cat /etc/pve/storage.cfg
    dir: local
        path /var/lib/vz
        content vztmpl,iso,images,backup,rootdir
        maxfiles 1
        shared 0
    lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images
    rbd: rbd_hdd_vm
        content images...
  11. Max P

    Error adding (existing) CephFS

    Yes, after adding cephfs via the webinterface the mount folder was created and contained the data that is on cephfs, so the mount was successful. But pvestatd still spammed the syslog with the mount errors and the webinterface showed the grey question mark.
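
    Two commands that can help separate the two symptoms (working mount vs. unhappy pvestatd):

        findmnt /mnt/pve/cephfs     # confirm the kernel mount really exists
        journalctl -u pvestatd -f   # watch the recurring mount errors live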
  12. Max P

    Debian 9.4 not starting on nested ESXi

    This is where I got the idea to set the machine type explicitly. So I ran this command:
    qm set 109 -machine pc-i440fx-2.11
    I have now removed the config entry specifying the machine type, and stopped and started the ESXi VM again, but the behaviour hasn't changed. Here is the complete...
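
    For reference, removing such a config entry can also be done via qm (vmid 109 as in the post):

        qm set 109 -delete machine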
  13. Max P

    Debian 9.4 not starting on nested ESXi

    Hi, We have a 4 node proxmox cluster (5.3) on which we run a nested ESXi (mostly for migrating old vmware VMs to ceph). On this nested ESXi a Windows 10 VM (uefi) starts and runs fine. A Debian 9.3 VM starts and runs fine too. But a Debian 9.4 VM does not start (when booting a 9.4 netinstall you...
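
    The usual prerequisites for nested hypervisors on Proxmox, as a sketch (Intel host assumed; the vmid is a placeholder):

        qm set <vmid> -cpu host   # pass the host CPU flags (VT-x) through to the ESXi guest
        echo "options kvm-intel nested=Y" > /etc/modprobe.d/kvm-intel.conf
        # reload kvm_intel (or reboot) for the nested flag to take effect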
  14. Max P

    Error adding (existing) CephFS

    Hi, We have a 4 node proxmox cluster that I just updated to proxmox 5.3 (from 5.2) without any problems. Now I want to test the new CephFS support in Proxmox 5.3, but after I add it via the storage menu in the webinterface the cephfs storage entry only has a grey question mark on it. The...
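
    A minimal CLI alternative to the storage menu, assuming the hyperconverged case where PVE manages Ceph itself (storage id and content types are examples):

        pvesm add cephfs cephfs --content backup,iso,vztmpl
        pvesm status   # the new entry should show up as active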
