I guess you switch to root via `su`? In that case this is a change in Debian Buster:
https://wiki.debian.org/NewInBuster
Hope this helps!
I am root, via su. This is a Buster bug: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=918754#15
- The su command in buster is provided by the util-linux source package, instead of the shadow source package, and no longer alters the PATH variable by default. This means that after doing su, your PATH may not contain directories like /sbin, and many system administration commands will fail. There are several workarounds (a quick check is sketched after this list):
  - Use su - instead; this launches a login shell, which forces PATH to be changed, but also changes everything else including the working directory.
  - Use sudo instead. sudo still runs commands with an altered PATH variable.
    - To get a regular root shell with the correct PATH, you may use sudo -s.
    - To get a login shell as root (equivalent to su -), you may use sudo -i.
  - Put ALWAYS_SET_PATH yes in /etc/login.defs to get an approximation of the old behavior.
  - Put the system administration directories (/sbin, /usr/sbin, /usr/local/sbin) in your regular account's PATH (see EnvironmentVariables for help with this).
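A quick way to see the difference on a Buster box is to compare the PATH a plain su gives you with the one from a login shell. This is a minimal sketch; the exact PATH values depend on your configuration:
Code:
su -c 'echo $PATH'       # PATH inherited from your user; /sbin and /usr/sbin are likely missing
su - -c 'echo $PATH'     # login shell; PATH is reset and includes the sbin directories
sudo sh -c 'echo $PATH'  # sudo applies its secure_path, so the sbin directories are present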
Hi guys! I need some help. I installed Proxmox 6 beta with root on ZFS RAID1 on two SATA SSDs (UEFI boot). Everything seems fine, but what is the disk replacement procedure now? Is it the same as in the previous version without UEFI boot? My second question is about TRIM: does it work automatically, or does autotrim need to be set up? Sorry for my English =)
Why would the disk replace commands change? Those are native ZFS commands; check how your disks are named in your pool and you will know which replace commands to run.
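For the plain ZFS side of a replacement, the native commands are unchanged. A minimal sketch; rpool and the <old-disk>/<new-disk> names are placeholders you must adapt to your pool:
Code:
zpool status rpool                          # see how the pool members are named
zpool replace rpool <old-disk> <new-disk>   # attach the new device and start the resilver
zpool status rpool                          # watch the resilver progress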
I meant the boot partitions when using ZFS root via UEFI (Proxmox 6 uses systemd-boot instead of GRUB when booting from ZFS via UEFI). How do I make the new disk bootable? If possible, please write a short how-to.
For TRIM, read https://github.com/zfsonlinux/zfs/releases and search for TRIM.
I figured it out: zpool set autotrim=on rpool.
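For reference, a minimal sketch of the relevant ZFS 0.8 pool commands; the pool name rpool is assumed from the installer default:
Code:
zpool get autotrim rpool      # check whether automatic TRIM is enabled
zpool set autotrim=on rpool   # enable continuous TRIM for the pool
zpool trim rpool              # alternatively, run a one-off manual TRIM
zpool status -t rpool         # show per-device TRIM status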
For some reason, after Hetzner inserted a USB stick directly, the installation also came up in UEFI mode. (It seems there are timeouts when using virtual media with UEFI.)
I can now either boot via UEFI, in which case efibootmgr works and /sys/firmware/efi is present, or boot without it, in which case it boots via GRUB. Is that intended?
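A quick, non-Proxmox-specific way to check which mode the running system booted in (a small sketch):
Code:
[ -d /sys/firmware/efi ] && echo "booted via UEFI" || echo "booted via legacy BIOS"
efibootmgr -v    # only works when booted via UEFI; lists the firmware boot entries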
Initial documentation patches regarding this have all already been sent and are expected to be applied in their final form soon. There will be a small tool which can assist in replacing a device and getting the boot setup working again for the new one; it is naturally also possible to do by hand.
If I understand correctly, this can be done via the Proxmox interface?
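For the by-hand route, the rough shape on the Proxmox VE 6 installer layout looks like the sketch below. The device names, partition numbers and the pve-efiboot-tool helper are assumptions on my side; verify everything against the official documentation before running it:
Code:
sgdisk /dev/sda -R /dev/sdb                    # copy the partition table from the healthy disk to the new one (assumed names)
sgdisk -G /dev/sdb                             # give the new disk unique partition GUIDs
zpool replace rpool <old-zfs-part> /dev/sdb3   # resilver onto the ZFS partition (partition 3 assumed)
pve-efiboot-tool format /dev/sdb2              # assumption: the helper shipped with PVE 6; format the new ESP (partition 2 assumed)
pve-efiboot-tool init /dev/sdb2                # register the ESP so kernels and the loader get synced to it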
Replacing a failed ZFS device? No, not yet; the tooling is CLI only, at least for the initial Proxmox VE 6.0 release.
Thanks for the information.
Can someone else confirm that /usr/sbin/ceph-disk exists? It shows up in 'apt-file list ceph-osd' but not in /usr/sbin.
I checked my other two nodes and there is no ceph-disk there either.
I also did an 'apt-get install --reinstall ceph-osd', but there is still no ceph-disk.
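To compare what the installed package actually ships with what the archive index claims, a small sketch:
Code:
dpkg -L ceph-osd | grep -i disk             # files the installed ceph-osd package provides
apt-file list ceph-osd | grep ceph-disk     # what the apt-file index claims (may be out of date)
apt-cache policy ceph-osd                   # confirm which version and repository the package came from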
'authorization on proxmox cluster failed with exception: invalid literal for float(): 6-0.1'
# pvesh get /nodes/dev5/version --output-format=json-pretty
{
"release" : "11",
"repoid" : "6df3d8d0",
"version" : "5.4"
}
# pvesh get /nodes/dev6/version --output-format=json-pretty
{
"release" : "6.0",
"repoid" : "c148050a",
"version" : "6.0-1"
}
Yes, the "GET /nodes/{nodename}/version" call changed it's return format a bit, from PVE 5:
to PVE 6:Code:# pvesh get /nodes/dev5/version --output-format=json-pretty { "release" : "11", "repoid" : "6df3d8d0", "version" : "5.4" }
Code:pvesh get /nodes/dev6/version --output-format=json-pretty { "release" : "6.0", "repoid" : "c148050a", "version" : "6.0-1" }
In short, "release" is now the main Proxmox VE release (6.0 or 6.1, for example), "repoid" stayed the same, and "version" is the full current manager version (i.e., what the concatenation of "version" and "release" gave you in the old 5.x call).
see https://git.proxmox.com/?p=pve-manager.git;a=commitdiff;h=b597d23d354665ddea247c3ad54ece1b84921768 for full details.
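If a client needs to cope with both formats, one way to normalize them (a sketch, assuming jq is installed and your node name matches the hostname; this is not how proxmoxer does it) is:
Code:
pvesh get /nodes/$(hostname)/version --output-format=json \
  | jq -r 'if (.version | test("-")) then .version else "\(.version)-\(.release)" end'
# PVE 5.x: version="5.4", release="11" -> prints "5.4-11"
# PVE 6.x: version="6.0-1"             -> prints "6.0-1"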
I'm guessing I have to file an issue with the proxmoxer maintainers?
Err:5 https://enterprise.proxmox.com/debian/pve buster InRelease
401 Unauthorized [IP: 66.70.154.81 443]
Reading package lists... Done
E: Failed to fetch https://enterprise.proxmox.com/debian/pve/dists/buster/InRelease 401 Unauthorized [IP: 66.70.154.81 443]
E: The repository 'https://enterprise.proxmox.com/debian/pve buster InRelease' is not signed.
N: Updating from such a repository can't be done securely, and is therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration details
apt-get update -o Debug::Acquire::https=true
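A 401 from enterprise.proxmox.com usually just means the enterprise repository is configured but no valid subscription key is active on that node. A quick check (a sketch, with paths as in a standard PVE 6 install):
Code:
pvesubscription get                                 # shows the subscription status/key on this node
cat /etc/apt/sources.list.d/pve-enterprise.list     # the enterprise repo entry that answers with 401
# Without a subscription, switch this entry to the pve-no-subscription repository
# (see the Proxmox VE package repository documentation for the exact line).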