Search results

  1. CentOS 8 Container?

    Read repository setup docs.
  2. CentOS 8.1 LXC unsupported centos release

    But even without a community license I'm still useful to Proxmox when I report an issue like this (this one was just small) together with a fix. Think about it: some Proxmox beginner will discover this issue and report it here asking for help. So I saved the Proxmox staff some time = money ;)
  3. CentOS 8.1 LXC unsupported centos release

    Same as the comment above. I'm using Proxmox for home virtualization. Anyway, it would be no problem to support you with a community license, but I have two small systems and one laptop for quorum, so that's 3 licenses, around half the current price of my hardware, which is a bit too much for me. My...
  4. CentOS 8.1 LXC unsupported centos release

    Hi, please fix the version check for CentOS 8.1 in /usr/share/perl5/PVE/LXC/Setup/CentOS.pm. # pveversion pve-manager/6.1-5/9bf06119 (running kernel: 5.3.13-1-pve) lxc-start 320 20200116195154.278 INFO conf - conf.c:run_script_argv:372 - Executing script "/usr/share/lxc/hooks/lxc-pve-prestart-hook"...
  5. HPE Proliant doesn't boot after install

    Hi, I'm running a DL360 G8 and /boot is on an SD card because of the same problem: no boot from the card in HBA mode (P822). Try following this: https://www.reddit.com/r/homelab/comments/ap9usf/proxmoxzfs_installed_on_hp_dl360p/ I followed another link, but I cannot find it now. This one also looks good.
  6. ZFS drives FAULTED 'was /dev/sd*'

    Hey, I wasn't too patient, so I already destroyed the pool. This machine is not in use yet, so I just did a backup and I'm now restoring the VMs. I just wanted to try to fix it in case something similar happens once it's in use. Anyway, the pool is now configured using by-id, so this problem should not happen again (see the ZFS re-import sketch after this list)...
  7. ZFS drives FAULTED 'was /dev/sd*'

    And ZFS still knows those devices, they have the same id: # zpool replace -f dpool 14009361960905333539 /dev/disk/by-id/wwn-0x55cd2e4150b65e5b invalid vdev specification the following errors must be manually repaired: /dev/disk/by-id/wwn-0x55cd2e4150b65e5b-part1 is part of active pool 'dpool'
  8. ZFS drives FAULTED 'was /dev/sd*'

    This didn't help, it was re-imported with the same current layout, where sdf and sdg have been switched: # zpool status -x pool: dpool state: DEGRADED status: One or more devices are faulted in response to persistent errors. Sufficient replicas exist for the pool to continue functioning in a...
  9. ZFS drives FAULTED 'was /dev/sd*'

    Hi, I created the zpool in the PVE web GUI and didn't realize it could be a problem later, because it used /dev/sd* instead of /dev/disk/by-id/*. Now, after a reboot, my RAID-Z2 is DEGRADED. # zpool status -x pool: dpool state: DEGRADED status: One or more devices could not be used because the label is missing...
  10. CentOS 8 Container?

    Strange, I had no issue like this. I'm on PVE 6.0-7 and used the CentOS 8 cloud images from Oct 12 and 13.
  11. CentOS 8 Container?

    Yeah, I realized I need to use the cloud image; with that, everything works fine.
  12. CentOS 8 Container?

    An LXC image is available: https://uk.images.linuxcontainers.org/images/centos/8/ Anyway, I'm not getting an IP/route in the container; it seems it needs NetworkManager installed (see the sketch after this list)...
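
As noted in item 12, the plain CentOS 8 LXC image from linuxcontainers.org ships with nothing managing eth0, so the container comes up without an IP or route. Below is a minimal sketch of one way to recover, run inside the container (for example via pct enter on the host); the address and gateway are placeholders for your network, and the cloud images from items 10 and 11 avoid this step entirely:

    # Bring eth0 up by hand so the container has connectivity at all.
    ip link set eth0 up
    ip addr add 192.168.1.50/24 dev eth0     # placeholder address
    ip route add default via 192.168.1.1     # placeholder gateway

    # With the network reachable, install NetworkManager so the interface
    # is configured automatically on the next container start.
    dnf install -y NetworkManager
    systemctl enable --now NetworkManager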

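The common thread in items 6 through 9 is that the pool was built on the unstable /dev/sd* names recorded by the web GUI, so a reboot that reshuffles the disks (sdf and sdg in item 8) leaves vdevs FAULTED even though the disks are healthy. A minimal sketch of the usual remedy, assuming the pool name dpool from the posts and that nothing is using the pool during the export:

    # Stop VMs/CTs on the pool, then export it cleanly.
    zpool export dpool

    # Re-import it, scanning the stable by-id names instead of /dev/sd*.
    zpool import -d /dev/disk/by-id dpool

    # The vdevs should now be listed as wwn-*/ata-* paths and survive
    # future device renames.
    zpool status dpool

This also explains the error in item 7: the by-id path already belongs to the active pool, so there is nothing for zpool replace to do.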