Recent content by digidax

  1.

    [SOLVED] After update from 6.4 to 7.2 the tape drive can't be accessed

    Everything is working well. Installing 7.2 from the ISO instead of upgrading from 6.4 to 7.2 solves the problem with accessing the tape drive.
  2.

    [SOLVED] After update from 6.4 to 7.2 the tape drive can't be accessed

    I have made a clean install of 7.2 from the ISO; now I can use the tape drive from the node: root@pve4:~# mt -f /dev/st0 status drive type = 114 drive status = 1543503872 sense key error = 0 residue count = 0 file number = 0 block number = 0 At the moment I am adding the node back to the cluster and...
  3.

    [SOLVED] After update from 6.4 to 7.2 the tape drive can't be accessed

    I will now do a clean install of 7.2 on this node, and if the tape cannot be used from the node, I will reinstall 6.4 to get my Bareos backup system working inside the CT.
  4.

    [SOLVED] After update from 6.4 to 7.2 the tape drive can't be accessed

    Hi there, I have done the upgrade from 6.4 to 7.2 on a node where a tape drive was used. After the upgrade, I can't access it (QUANTUM ULTRIUM-HH7) from the node (LXC CT not tested because the node must work first): root@pve4:~# lsscsi -g [1:2:0:0] disk LSI RAID 5/6 SAS 6G 2.13...
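
    A minimal sketch of the checks used in this thread to verify the drive on the node (device names are taken from the excerpts above; adjust them to your hardware):

        # Confirm the drive is visible on the SCSI layer and has a generic device
        lsscsi -g
        # Confirm the SCSI tape driver is loaded and the device nodes exist
        lsmod | grep '^st '
        ls -l /dev/st* /dev/nst*
        # Query the drive status (as in the posts above)
        mt -f /dev/st0 status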
  5.

    Storage attached via NFS: maintenance procedure

    Hi there, I'm using an NFS export as storage for VM/CT backups, and it is working well. Now, for maintenance, the server which provides the NFS export has to be rebooted. What would be the best way to tell the nodes that this export is temporarily unavailable? A backup is not planned during this time...
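
    One way to handle this, sketched here with a hypothetical storage ID nfs-backup, is to disable the storage definition for the maintenance window and re-enable it afterwards:

        # Disable the NFS storage cluster-wide before the NFS server reboots
        pvesm set nfs-backup --disable 1
        # ... reboot / maintain the NFS server ...
        # Re-enable it once the export is reachable again and verify
        pvesm set nfs-backup --disable 0
        pvesm status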
  6.

    [HOW-TO] Proxmox 7 cgroupv2 - Centos 7 upgrade systemd without systemd.unified_cgroup_hierarchy=0

    Will this procedure make it possible to run CentOS 7 in an LXC container under Proxmox 7?
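
    For context, a quick way to see which cgroup hierarchy the Proxmox 7 host exposes, plus the legacy boot parameter the thread title refers to (shown only as a comment, since the how-to is about avoiding it):

        # "cgroup2fs" means the host runs the unified cgroupv2 hierarchy
        stat -fc %T /sys/fs/cgroup/
        # Legacy fallback from the thread title (host kernel command line):
        #   systemd.unified_cgroup_hierarchy=0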
  7.

    Different files space in container mount point

    Bringing this question up again: I'm planning to upgrade to 7.0 but want to be certain that this problem will not destroy anything. Is additional information needed? Thanks.
  8.

    Different files space in container mount point

    Hello, I have added a mount point, based on a hardware RAID-0 (stripe), to a CT. Inside the LXC container, I get: # df -h Filesystem Size Used Avail Use% Mounted on rpool/data/subvol-211-disk-0 200G 133G 68G 67% / /dev/loop0 6,8T 1,3T...
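
    For reference, adding such a mount point and comparing the reported sizes can be sketched roughly like this (the host path and target path are hypothetical; the CT ID 211 is taken from the output above):

        # Bind-mount a host directory (e.g. the RAID-0 filesystem) into the CT
        pct set 211 -mp0 /mnt/raid0,mp=/mnt/data
        # Show the resulting CT configuration
        pct config 211
        # Compare what the container itself reports
        pct exec 211 -- df -h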
  9.

    ASUS B450 thermal sensors ITE IT8665E

    Ok, done. Test after update-grub: # cat /boot/grub/grub.cfg | grep "lax" linux /ROOT/pve-1@/boot/vmlinuz-5.4.106-1-pve root=ZFS=rpool/ROOT/pve-1 ro root=ZFS=rpool/ROOT/pve-1 boot=zfs quiet acpi_enforce_resources=lax linux /ROOT/pve-1@/boot/vmlinuz-5.4.106-1-pve...
  10.

    ASUS B450 thermal sensors ITE IT8665E

    Thanks Dominik, the a1wong git repository doesn't support the IT8665E, but https://github.com/frankcrawford/it87 includes it. Cloning the repository and running make install was successful, but I can't load the module. The new module is available: # ls -l...
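
    A rough sketch of the build-and-load sequence discussed in this thread (the repository URL and the acpi_enforce_resources=lax parameter come from the posts above; the package names are the usual Proxmox ones and may differ on your system):

        # Build prerequisites and the out-of-tree module
        apt install build-essential pve-headers-$(uname -r)
        git clone https://github.com/frankcrawford/it87
        cd it87 && make && make install
        # ACPI claims the Super I/O chip's resources, so add
        #   acpi_enforce_resources=lax
        # to GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub, then:
        update-grub && reboot
        # After the reboot, load the module and read the sensors
        modprobe it87
        sensors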
  11.

    ASUS B450 thermal sensors ITE IT8665E

    Hi there, I want to monitor the temperature and voltage of the mainboard. sensors-detect says: Driver `to-be-written': * ISA bus, address 0x290 Chip `ITE IT8665E Super IO Sensors' (confidence: 9) Note: there is no driver for ITE IT8665E Super IO Sensors yet. Check...
  12.

    [SOLVED] Replication error. Broken pipe on 2nd send

    Thanks, you're right (last line): zfs list NAME USED AVAIL REFER MOUNTPOINT rpool 39.3G 410G 104K /rpool rpool/ROOT 11.6G 410G 96K /rpool/ROOT rpool/ROOT/pve-1 11.6G 410G 11.6G /...
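
    For readers hitting the same error, the cleanup implied here can be sketched as follows (the dataset name is hypothetical; double-check on your own system before destroying anything):

        # On the target node: list datasets and replication snapshots for the CT
        zfs list -t all -r rpool/data | grep 211
        # Remove the stale leftover so the next replication run can do a full resync
        # (destructive -- verify the name first)
        zfs destroy -r rpool/data/subvol-211-disk-1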
  13.

    [SOLVED] Replication error. Broken pipe on 2nd send

    Thanks fabian, but there is no such volume on the target node (pve3): root@pve3:~# ls -l /rpool/data/ total 59 drwxr-xr-x 18 root root 23 Dec 22 07:54 subvol-172-disk-1 drwxr-xr-x 18 root root 23 Feb 1 17:47 subvol-183-disk-0 drwxr-xr-x 18 root root 23 Feb 1 17:47 subvol-184-disk-1...
  14.

    [SOLVED] Replication error. Broken pipe on 2nd send

    Replication of the container from pve4 to pve1 and pve2 works; to pve3 it does not. The log: 2021-04-15 09:50:00 211-2: start replication job 2021-04-15 09:50:00 211-2: guest => CT 211, running => 0 2021-04-15 09:50:00 211-2: volumes => pve_zfs:subvol-211-disk-0,pve_zfs:subvol-211-disk-1 2021-04-15...
  15.

    Can't stop LXC container

    pve-manager/6.3-2/22f57405 5.4.73-1-pve #1 SMP PVE 5.4.73-1 There is no HA membership; the node is only part of a cluster with one CT (211) on it. Inside the LXC container, Bareos (a Bacula fork) is running with a PostgreSQL DB. Command: command 'lxc-stop -n 211 --nokill --timeout 60' failed...
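
    A hedged sketch of the usual diagnostic steps for a container that refuses to stop (the CT ID is taken from the post above):

        # See the container state as LXC reports it
        lxc-info -n 211
        # Retry the stop through the PVE tooling
        pct stop 211
        # Last resort: kill the container's init instead of waiting for a clean shutdown
        lxc-stop -n 211 --kill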
