Search results

  1. Only 3 nvme disks available

    root@pve1:~# lspci | grep ADA
    84:00.0 Non-Volatile memory controller: ADATA Technology Co., Ltd. XPG SX8200 Pro PCIe Gen3x4 M.2 2280 Solid State Drive (rev 03)
    85:00.0 Non-Volatile memory controller: ADATA Technology Co., Ltd. XPG SX8200 Pro PCIe Gen3x4 M.2 2280 Solid State Drive (rev 03)...
  2. Only 3 nvme disks available

    Hello, I've created a new Proxmox environment with 4 NVMe disks, but in the GUI and in fdisk I see only 3 disks. Syslog shows all 4 disks. I changed the position of the disks and the setup on the PCIe/NVMe card and can see all serial numbers, depending on the position of the NVMe. But the GUI...
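    The check from these threads can be reproduced by comparing the NVMe controllers lspci enumerates on the PCIe bus against the block devices the kernel actually created. A minimal sketch, using the lspci output quoted above as sample data (on the affected host you would pipe lspci directly):

    ```shell
    # Sample lspci output as quoted in the thread; on the host itself use:
    #   lspci | grep -c 'Non-Volatile memory controller'
    lspci_out='84:00.0 Non-Volatile memory controller: ADATA Technology Co., Ltd. XPG SX8200 Pro
    85:00.0 Non-Volatile memory controller: ADATA Technology Co., Ltd. XPG SX8200 Pro'

    # Controllers enumerated on the PCIe bus ...
    bus_count=$(printf '%s\n' "$lspci_out" | grep -c 'Non-Volatile memory controller')
    echo "controllers on PCIe bus: $bus_count"

    # ... versus namespaces the kernel actually created (may be empty off-host)
    dev_count=$(ls /dev/nvme?n1 2>/dev/null | wc -l)
    echo "nvme block devices:      $dev_count"
    ```

    If the bus count is 4 but only 3 block devices exist, `dmesg | grep -i nvme` usually shows which controller enumerates but fails to initialize.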
  3. Backup to multiple tapes without a changer

    Yes, that's the way. Thank you.
    - create Media Pool
    - format Tapes
    - Label more than one Tape and assign them to the Media Pool.
    - Run Backup Job
    2023-01-14T00:41:59+01:00: wrote 199 chunks (835.19 MB at 53.06 MB/s)
    2023-01-14T00:42:02+01:00: allocated new writable media 'xyz_2'...
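    The steps above map roughly onto the proxmox-tape CLI as sketched below. The command names and flags are assumptions based on the Proxmox Backup Server tape documentation, and 'TapeBackup' and 'store1' are hypothetical pool/datastore names; verify everything with `proxmox-tape help` first. DRY_RUN=1 (the default here) only prints the commands:

    ```shell
    # Sketch only: flags are assumptions from the PBS tape docs;
    # 'TapeBackup' and 'store1' are hypothetical names.
    DRY_RUN=${DRY_RUN:-1}
    run() { echo "+ $*"; [ "$DRY_RUN" = "0" ] && "$@"; return 0; }

    run proxmox-tape pool create TapeBackup             # create the media pool
    run proxmox-tape format                             # format the inserted tape
    run proxmox-tape label --label-text xyz_1 --pool TapeBackup
    # swap tapes, then label the second one into the same pool:
    run proxmox-tape label --label-text xyz_2 --pool TapeBackup
    run proxmox-tape backup store1 TapeBackup           # job requests the next tape as each fills
    ```

    With more than one labeled tape in the pool, the job can allocate the next writable medium instead of failing with "no usable media found".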
  4. Backup to multiple tapes without a changer

    Hello, I have a tape drive, not a changer, and yesterday I ran a backup that needed multiple tapes. After 15.5 hours of backing up chunks I got the error message "TASK ERROR: alloc writable media in pool 'TapeBackup' failed: no usable media found". The media was full ... I...
  5. Reinstall CEPH on Proxmox 6

    Thanks, that helped me one step further. Now 3 monitors are running and 2 managers are configured and running. I am not able to start the managers and not able to configure the 3rd manager, with the following error message: /var/lib/ceph/mgr/ceph-pve-node3/keyring.tmp.2235958 Now I have two weeks holiday. I...
  6. Reinstall CEPH on Proxmox 6

    Did the following steps:
    pveceph purge          # on all nodes
    rm -r /var/lib/ceph    # on all nodes
    rm /etc/pve/ceph.conf
    reboot of one node
    Why does the log file still have entries, and why does it look like Ceph will be started?
    cat /var/log/ceph/ceph-mon.pve-node3.log
    2019-09-06 23:24:06.667 7f69a14cc3c0...
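    Spelled out as a script, the wipe sequence from this post looks like the following. A dry-run sketch (DRY_RUN=1, the default, only prints the commands); the first two steps must be run on every node, and rm -r /var/lib/ceph is destructive:

    ```shell
    # Full Ceph wipe before reinstalling, as described in the post.
    # DRY_RUN=1 (default) only prints the commands; set DRY_RUN=0 to execute.
    DRY_RUN=${DRY_RUN:-1}
    run() { echo "+ $*"; [ "$DRY_RUN" = "0" ] && "$@"; return 0; }

    run pveceph purge            # on ALL nodes
    run rm -r /var/lib/ceph      # on ALL nodes; destroys monitor/OSD state
    run rm /etc/pve/ceph.conf    # /etc/pve is cluster-wide, so once is enough
    run systemctl reboot         # reboot of one node
    ```

    If ceph-mon still writes startup entries to the log afterwards, leftover systemd units may still be enabled; `systemctl list-units 'ceph*'` shows what is still active.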
  7. Reinstall CEPH on Proxmox 6

    I found the following log entry after a reboot of the node:
    cat /var/log/ceph/ceph-mon.pve-node3.log
    2019-09-06 20:55:17.263 7fd6ce7fa3c0 0 set uid:gid to 64045:64045 (ceph:ceph)
    2019-09-06 20:55:17.263 7fd6ce7fa3c0 0 ceph version 14.2.2 (a887fe9a5d3d97fe349065d3c1c9dbd7b8870855) nautilus (stable)...
  8. Reinstall CEPH on Proxmox 6

    Installed the latest updates; same result.
    pveceph init --network 10.1.1.0/24 was working, but afterwards I get the following error:
    pveceph createmon
    unable to get monitor info from DNS SRV with service name: ceph-mon
    Could not connect to ceph cluster despite configured monitors
    cat...
  9. Reinstall CEPH on Proxmox 6

    pveversion -v
    proxmox-ve: 6.0-2 (running kernel: 5.0.18-1-pve)
    pve-manager: 6.0-5 (running version: 6.0-5/f8a710d7)
    pve-kernel-5.0: 6.0-6
    pve-kernel-helper: 6.0-6
    pve-kernel-5.0.18-1-pve: 5.0.18-3
    pve-kernel-4.15.18-20-pve: 4.15.18-46
    pve-kernel-4.13.13-2-pve: 4.13.13-33
    ceph: 14.2.2-pve1...
  10. Reinstall CEPH on Proxmox 6

    Hello, after the upgrade to release 6 I tried to reinstall CEPH instead of upgrading it. I followed a page which said to delete several directories (rm -Rf /etc/ceph /etc/pve/ceph.conf /etc/pve/priv/ceph* /var/lib/ceph). pveceph init --network 10.1.1.0/24 was working, but afterwards I get...
  11. SMART error (CurrentPendingSector) detected on host

    For about 10 days now I have been getting a daily warning about a SMART error.
    The following warning/error was logged by the smartd daemon:
    Device: /dev/sdb [SAT], 1 Currently unreadable (pending) sectors
    Device info: WDC WD30EFRX-68N32N0, S/N:WD-WCC7K5NJY8V8, WWN:5-0014ee-26483534b, FW:82.00A82...
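    For a CurrentPendingSector warning like this, smartctl can confirm the count and exercise the disk. A dry-run sketch (DRY_RUN=1, the default, only prints the commands; run with DRY_RUN=0 as root on the affected host):

    ```shell
    DRY_RUN=${DRY_RUN:-1}
    run() { echo "+ $*"; [ "$DRY_RUN" = "0" ] && "$@"; return 0; }

    run smartctl -A /dev/sdb           # attribute table: check Current_Pending_Sector (ID 197)
    run smartctl -t long /dev/sdb      # extended self-test; takes hours on a 3 TB disk
    run smartctl -l selftest /dev/sdb  # read the result once the test has finished
    ```

    A pending sector is typically either remapped or cleared the next time it is written; if the count keeps growing across days, plan to replace the disk.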