> Error writing to file - write (28: No space left on device) [IP: 217.196.149.233 80]
what does `df -h /etc/apt` say?
> The repository is not updated and the previous index files will be used. GPG error: http://ftp.de.debian.org/debian stretch-updates
what does `cat /etc/apt/sources.list` say?
These are all filesystems IN rpool/ROOT/pve-1
zfs set canmount=off rpool/ROOT/pve-1/014579f53cfb31770b23418d4e9f6c62a44052ffa4d08386752d085ff5f58eec
zfs set canmount=off rpool/ROOT/pve-1/0bbb0c12579736d6363a87311f0d092805cd97bb4baedc8ffb5f64e8fb9fe83d
zfs set canmount=off...
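if there are many of these, a one-liner does the same thing; a sketch, assuming every child dataset under rpool/ROOT/pve-1 should stay unmounted:

zfs list -H -o name -r rpool/ROOT/pve-1 | tail -n +2 | xargs -n1 zfs set canmount=off   # skip the parent, flag each child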
Thanks for the follow-up! But I'm not sure we can blame the hybrid ISO installer when your computer's UEFI BIOS won't enumerate the embedded CD-ROM image.
two other tricks are using a USB 2.0 port for the stick (low chance of happiness) or Legacy boot (higher chance of happiness, but you might...
as stated above, verify the port in pve:/etc/pve/status.cfg and influxdb:/etc/influxdb/influxdb.conf match 8089
one gets used to typing port 8086, which looks similar but isn't the UDP default port.
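for reference, a sketch of the two files that have to agree (server address and database name are placeholders):

# /etc/pve/status.cfg
influxdb:
        server 192.168.1.10
        port 8089

# /etc/influxdb/influxdb.conf
[[udp]]
  enabled = true
  bind-address = ":8089"
  database = "proxmox"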
GRUB2 gets confused. While legacy boot (grub-install /dev/sda) seems simple, it isn't when it doesn't work. A good read is http://www.rodsbooks.com/efi-bootloaders/
When I had an issue with an NVMe ZFS boot device, I switched to UEFI boot and used...
> We can import/export/scrub a pool via proxmox installer without problem.
sorry my note above wasn't clear, but choose the emergency boot option in the PVE install ISO, or your USB stick, to get the box up (or zpool import ; exit to continue the stuck boot)
zpool remove rpool sdc1 # simplify pool design...
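to spell out the parenthetical above, a sketch of the stuck-boot path from the initramfs prompt (-N, import without mounting, is my addition):

zpool import -N rpool   # import the root pool by hand
exit                    # leave the shell and let the boot continue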
I bet
GRUB_CMDLINE_LINUX_DEFAULT="rootdelay=1"
in /etc/default/grub
then run update-grub (or add rootdelay=1 to the linux line of /boot/grub/grub.cfg )
would fix everything. The PVE 5.x kernels try to load vmlinuz-*-pve faster than 4.x did, and faster than the disks might be ready. I hit this with...
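a sketch of the whole change in one go (assumes a stock Debian GRUB layout and that you don't need to keep other default kernel arguments like quiet):

sed -i 's/^GRUB_CMDLINE_LINUX_DEFAULT=.*/GRUB_CMDLINE_LINUX_DEFAULT="rootdelay=1"/' /etc/default/grub
update-grub   # regenerates /boot/grub/grub.cfg with rootdelay=1 on every linux line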
There is no 'old-disk' in the current top-level vdev for the zpool replace.
if you want to keep using your current booting vdev containing sda2, you can zpool attach rpool sda2 new-disk
if you want the data on new-disk, boot from it and zpool replace rpool sda2
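rough sketches of both, with the zpool syntax spelled out (use the /dev/disk/by-id path of the new disk; NEW-DISK is a placeholder):

zpool status rpool                  # confirm the current vdev layout first
zpool attach rpool sda2 NEW-DISK    # mirror NEW-DISK onto the vdev holding sda2
zpool replace rpool sda2 NEW-DISK   # or: resilver the data from sda2 onto NEW-DISK

keep in mind neither command creates a boot partition or installs GRUB on the new disk; that part is still manual.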
Sounds like the new disk was part of an rpool in its past life. One of these ways:
1) figure out a way to plug in the new disk while pve is running.
2) erase the front of the disk
3) in single-user mode on the failed PVE boot, zpool import the pool by ID, and exit to continue the boot
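sketches for 2) and 3); /dev/sdX and the numeric pool ID are placeholders:

dd if=/dev/zero of=/dev/sdX bs=1M count=16   # 2) blow away the partition table and the leading ZFS labels
                                             #    (ZFS keeps label copies at the end too; wipefs -a is more thorough)
zpool import                                 # 3) from the emergency shell: lists importable pools with their numeric IDs
zpool import 1234567890                      #    import the correct rpool by its numeric ID, then
exit                                         #    continue the boot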
Thanks for the `ceph osd df tree` command. I went from the pg_num=512 assumed from the Ceph PG calculator:
ID CLASS WEIGHT REWEIGHT SIZE USE AVAIL %USE VAR PGS TYPE NAME
-1 5.45755 - 5588G 86552M 5503G 1.51 1.00 - root default
-5 1.81918 - 1862G 28875M 1834G...
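for anyone else comparing, a pool's pg_num can be checked and (on Luminous, only raised) adjusted like this; the pool name and target value are placeholders:

ceph osd pool get rbd pg_num       # what the pool currently has
ceph osd pool set rbd pg_num 512   # raise it if the calculator says so
ceph osd pool set rbd pgp_num 512  # pgp_num has to follow pg_num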
Thanks for your lxc.apparmor.profile; all I have is "mount fstype=cifs," and it worked once I added the mount option vers=1.0
for google:
No dialect specified on mount. Default has changed to a more secure dialect, SMB2.1 or later (e.g. SMB3), from CIFS (SMB1). To use the less secure SMB1...
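for completeness, a sketch of the kind of mount the vers= option fixes (server, share, and credentials are placeholders):

mount -t cifs //server/share /mnt/share -o vers=1.0,username=USER,password=PASS   # forces the old SMB1 dialect instead of the new SMB2.1+ default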
I have a container with an AppArmor profile containing mount fstype=cifs, and included the profile in
/etc/pve/lxc/<ID>.conf as
lxc.aa_profile: lxc-container-default-with-cifs
when I start the container in pve 5.1 I get:
lxc.aa_profile is deprecated and was renamed to lxc.apparmor.profile...
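the warning is only about the renamed key; the updated line in /etc/pve/lxc/<ID>.conf looks like this:

lxc.apparmor.profile: lxc-container-default-with-cifs   # replaces the deprecated lxc.aa_profile key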
for google:
dist-upgrade from 5.0b2 to 5.0:
Unpacking ceph-common (12.1.0-pve2) over (12.0.3-pve2) ...
dpkg: error processing archive /tmp/apt-dpkg-install-iWnrun/02-ceph-common_12.1.0-pve2_amd64.deb (--unpack):
trying to overwrite '/etc/bash_completion.d/ceph', which is also in package...
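one general dpkg escape hatch for this kind of file conflict, in case it helps someone searching (use with care; it simply lets the new package take over the clashing file):

apt-get -o Dpkg::Options::="--force-overwrite" dist-upgrade   # let ceph-common overwrite /etc/bash_completion.d/ceph
apt-get -f install                                            # then let apt finish anything left half-configured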
are they on ZFS? I did this last week:
pvebak# pve-zsync sync -source 101 -name 101pve2 -dest pve2:rpool/data -verbose
pve2# cp `ls /var/lib/pve-zsync/data/101* | head -1` /etc/pve/nodes/$HOSTNAME/lxc/101.conf
pve2# rcp pvebak:/var/lib/rrdcached/db/pve2-vm/101 pve1:/var/lib/rrdcached/db/pve2-vm...