Search results

  1.

    Remove and re-add ceph OSD

    ceph osd tree

    ID  CLASS  WEIGHT   TYPE NAME      STATUS  REWEIGHT  PRI-AFF
    -1         0.09357  root default
    -3         0.03119      host pve1
     0  os     0.03119          osd.0  down     1.00000  1.00000
    -5         0.03119      host pve2...
  2.

    Remove and re-add ceph OSD

    No.

    pveversion -v
    proxmox-ve: 8.1.0 (running kernel: 6.5.11-4-pve)
    pve-manager: 8.1.3 (running version: 8.1.3/b46aac3b42da5d15)
    proxmox-kernel-helper: 8.0.9
    proxmox-kernel-6.5.11-4-pve-signed: 6.5.11-4
    proxmox-kernel-6.5: 6.5.11-4
    ceph: 18.2.2-pve1
    ceph-fuse: 18.2.2-pve1
    corosync: 3.1.7-pve3...
  3.

    Remove and re-add ceph OSD

    I'm trying to familiarize myself with problematic Ceph situations, and I can't find the solution to a situation that seems simple enough. The problem is this: after adding various OSDs, I delete them via Stop -> Out -> Destroy. Then I try to add them again. The problem is that the re-added...
  4.

    TAPE connection

    Removed the comma, no improvement. cat /proc/cmdline: BOOT_IMAGE=/boot/vmlinuz-5.15.102-1-pve root=/dev/mapper/pbs-root ro quiet intremap=off
  5.

    TAPE connection

    If you intend to modify the file /etc/default/grub, like this:

    # If you change this file, run 'update-grub' afterwards to update
    # /boot/grub/grub.cfg.
    # For full documentation of the options in this file, see:
    #   info -f grub -n 'Simple configuration'
    GRUB_DEFAULT=0
    GRUB_TIMEOUT=5...
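    For context, the intremap=off flag visible in the /proc/cmdline output quoted earlier in this thread is normally set through GRUB_CMDLINE_LINUX_DEFAULT. A minimal sketch of the edit, assuming the default Debian/Proxmox layout of /etc/default/grub (the other values shown are the stock defaults, not something the thread confirms):

```shell
# /etc/default/grub (fragment) -- add intremap=off to the kernel command line
GRUB_DEFAULT=0
GRUB_TIMEOUT=5
GRUB_CMDLINE_LINUX_DEFAULT="quiet intremap=off"

# After saving, regenerate the config as root and reboot:
#   update-grub
```

    The new flag only takes effect after update-grub has rewritten /boot/grub/grub.cfg and the machine has been rebooted; checking /proc/cmdline afterwards confirms it was applied.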
  6.

    TAPE connection

    Here is the dmesg output.
  7.

    TAPE connection

    I'm afraid the server (a ProLiant DL360p Gen8) doesn't have this option. :(
  8.

    TAPE connection

    So the most likely explanation is that the SAS card is not compatible with the driver (in this particular case, the ULT3580-HH8).

    root@pbsPROVA:~# lspci | grep SAS
    07:00.0 Serial Attached SCSI controller: PMC-Sierra Inc. PM8018 Adaptec SAS Adaptor ASA-70165H PCIe Gen3 x8 6 Gbps 16-lane 4x SFF-8644 (rev 06)...
  9.

    TAPE connection

    I have a PBS server (2.4-1) that I'm trying to connect to a TAPE library, which is why I purchased a SAS card, specifically the SAS Adaptor ASA-70165H. The TAPE is a TS4300 and is already connected to another server running Veeam. The TAPE works normally with the Veeam system, the cables work, the...
  10.

    [SOLVED] CEPH without switch

    Totally missed that page. I used the "Routed Setup (Simple)" approach. "Note that multicast is not possible with this method" -- can you explain to me what the downsides of this are?
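    For readers landing here: a switchless routed mesh gives each pair of nodes a direct point-to-point link. One straightforward way to sketch it is a dedicated subnet per link in /etc/network/interfaces; the interface names, VMIDs of peers, and addresses below are entirely hypothetical, and the Proxmox wiki's "Routed Setup (Simple)" page describes its own addressing variant:

```shell
# /etc/network/interfaces (fragment) on node1 -- hypothetical names/addresses
# One /31 point-to-point subnet per direct link.

auto ens19
iface ens19 inet static
    address 10.15.12.0/31
    # direct cable to node2, which would use 10.15.12.1/31

auto ens20
iface ens20 inet static
    address 10.15.13.0/31
    # direct cable to node3, which would use 10.15.13.1/31
```

    As for the downside the quote alludes to: there is no shared broadcast domain, so multicast does not cross the mesh. For current Proxmox clusters this matters less than it once did, since corosync 3 uses unicast by default.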
  11.

    [SOLVED] CEPH without switch

    We are thinking of buying a high-density server with four nodes. I would like to create a Ceph cluster without having a switch; the idea is to connect all the nodes directly to each other, in this specific case with 25Gbit connections. However, with a test system I can't get it to work. The...
  12.

    [SOLVED] Gap in the graphs

    The server runs without problems, and no activity causes issues. Maybe I solved it: the problem seems to be that the InfluxDB server it sent the metrics to had crashed. Once that was reactivated, everything seems to have returned to normal.
  13.

    [SOLVED] Gap in the graphs

    In my PBS installation (2.4-1) I have this situation: I tried restarting, but it didn't work. How can I solve it? The solution in this case: https://forum.proxmox.com/threads/gap-in-the-graphs.135880/post-601516
  14.

    Proxmox VE 8.0 released!

    I'd point out that with Proxmox 8 I can't mount SMB/CIFS shares.
  15.

    BTRFS usage space

    root@Artico:~# btrfs filesystem df /BTRFS
    Data, RAID1: total=5.65TiB, used=5.47TiB
    System, RAID1: total=8.00MiB, used=848.00KiB
    Metadata, RAID1: total=15.00GiB, used=9.93GiB
    GlobalReserve, single: total=512.00MiB, used=0.00B
  16.

    Enable AVX

    Usually the solution is to set the CPU type to "host", or to create a custom virtual CPU with the avx and avx2 flags enabled (though this last option never worked for me). However, the processor you have doesn't support AVX instructions, so I don't think you can virtualize them.
  17.

    BTRFS usage space

    Snapshots are included in du --block-size=G /BTRFS.
  18.

    BTRFS usage space

    In the same spirit as this post: where can I check the files (in this case the disk images) present in the BTRFS filesystem? In this case the total space reported by Proxmox is 6.02TB, but if I run the command btrfs filesystem usage /BTRFS I get 5.65TiB = 6.21TB. Data,RAID1: Size:5.65TiB...
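    The TiB-to-TB conversion in that snippet is easy to verify: a TiB is 2^40 bytes while a TB is 10^12 bytes, so the same quantity looks about 10% larger in decimal units. A one-line check:

```shell
# Convert 5.65 TiB (2^40 bytes each) to decimal TB (10^12 bytes each)
awk 'BEGIN { printf "%.2f\n", 5.65 * (2^40) / 10^12 }'
# prints 6.21
```

    This explains most gaps between figures from different tools: Proxmox's 6.02TB and btrfs's 5.65TiB are simply measured in different units (plus metadata and reservations).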
  19.

    Handle broken disks with BTRFS

    I think I've found a better solution. At the initramfs prompt, enter: mount -o degraded /dev/sda3 /root -t btrfs (where /dev/sda3 is the healthy disk) and press Ctrl-D. For non-boot disks, in the Proxmox shell enter: mount -o degraded /dev/nvme0n1 /BTRFS, where /dev/nvme0n1 is the healthy...
  20.

    Handle broken disks with BTRFS

    I followed this guide to temporarily change GRUB. The change was to add rootflags=degraded to the end of the line starting with linux. This made it possible to get past the problem with boot disks. EDIT: There is something strange, or rather (in my opinion) wrong. The reported BTRFS volume capacity...
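    For readers unfamiliar with that one-time edit: it is done by pressing `e` on the boot menu entry and appending the flag to the linux line. A sketch with placeholder paths (the actual kernel and root device depend on the installation):

```shell
# GRUB boot entry, edited once via 'e' at the boot menu; not persistent.
# <version> and <rootdev> are placeholders for the real values shown on screen.
linux /boot/vmlinuz-<version> root=<rootdev> ro quiet rootflags=degraded
```

    A persistent version of the same change would instead go into GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub, followed by update-grub, though booting degraded permanently is generally not advisable.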