Hi there!
I've read the good news about 6.* kernels becoming available for PVE. That's great, because I've got a fresh AMD AM5 Ryzen 5 7600X platform. Its integrated RDNA2 GPU only has driver support from kernel 5.16 onwards, so I hope to use Proxmox with the opt-in 6.1.* kernels.
But now the bad news... how to INSTALL...
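If the opt-in kernel is published the way earlier opt-in kernels were, installing it should be roughly this (the package name pve-kernel-6.1 is my assumption, following the usual pve-kernel-X.Y naming scheme):

# Opt in to the newer kernel series and reboot into it
apt update
apt install pve-kernel-6.1
reboot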
Why do you want such a virtual-OS mess in one hardware box?
Maybe it would be more useful to have two or three boxes of different platforms, with a fast LAN and a KVM switch between them?
Well, I've worked around the issue manually with systemctl restart proxmox-backup-proxy.service.
But if I had missed that PBS wasn't working, it would have become a real problem after a while.
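In the meantime, a crude watchdog sketch I could run from cron, assuming the PBS API answers on https://localhost:8007 (curl -k because of the self-signed certificate):

#!/bin/bash
# Restart proxmox-backup-proxy if the API port stops answering
if ! curl -ks --max-time 10 https://localhost:8007/ >/dev/null; then
    systemctl restart proxmox-backup-proxy.service
fi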
pveversion -v
proxmox-ve: 6.4-1 (running kernel: 5.4.124-1-pve)
pve-manager: 6.4-13 (running version: 6.4-13/9f411e79)...
PVE 6.4-13 with PBS co-installed, working well for many days...
As usual, I ran apt-get update (from the web interface); three proxmox-backup-* packages were pending... well, I pressed the button for apt-get dist-upgrade, and there were no strange messages.
But the PBS web interface on port 8007 isn't responding, PBS...
AFAIK, at Hetzner the iKVM/IPMI console is neither free nor instantly available, so I install all my nodes with Hetzner's Debian distro and then apt-get PVE from the repositories (as Hetzner's HOWTO recommends). For now I can't install PVE on a Hetzner dedicated server, because Debian 11 is not...
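For reference, the Debian-to-PVE route is roughly this (repo suite and key names as I remember them from the Proxmox wiki for 6.x on buster; double-check there before use):

# Add the Proxmox VE no-subscription repository and its key
echo "deb http://download.proxmox.com/debian/pve buster pve-no-subscription" > /etc/apt/sources.list.d/pve-install-repo.list
wget http://download.proxmox.com/debian/proxmox-ve-release-6.x.gpg -O /etc/apt/trusted.gpg.d/proxmox-ve-release-6.x.gpg

# Install Proxmox VE on top of the stock Debian system
apt update && apt full-upgrade
apt install proxmox-ve postfix open-iscsi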
Wiki: https://pve.proxmox.com/wiki/Upgrade_from_6.x_to_7.0
Well, let's test:
root@amd:~# pve6to7 --full
= CHECKING VERSION INFORMATION FOR PVE PACKAGES =
Checking for package updates..
PASS: all packages uptodate
Checking proxmox-ve package version..
PASS: proxmox-ve package has version >=...
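Assuming all checks pass, the upgrade itself should be roughly the wiki's sequence (a sketch only; the no-subscription repo matches my setup, adjust to yours):

# Switch the Debian repos from buster to bullseye
sed -i 's/buster/bullseye/g' /etc/apt/sources.list /etc/apt/sources.list.d/*.list

# Point the PVE repo at 7.x
echo "deb http://download.proxmox.com/debian/pve bullseye pve-no-subscription" > /etc/apt/sources.list.d/pve-install-repo.list

apt update && apt dist-upgrade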
Date and time is historically a very complex topic... sometimes, for uniqueness, I used the UNIX Epoch format (plain seconds since 1970-01-01 00:00:00), but IMHO it's better to make the names configurable, because of the rare case of moving a PBS to another timezone, and even when daylight saving time is in use...
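A minimal shell sketch of the two naming schemes I mean (the names themselves are just examples):

# Raw UNIX epoch: unique and timezone-proof, but unreadable
EPOCH_NAME="backup-$(date +%s)"
# UTC ISO 8601: readable and still safe across timezone moves and DST
ISO_NAME="backup-$(date -u +%Y-%m-%dT%H-%M-%SZ)"
echo "$EPOCH_NAME vs $ISO_NAME"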
What is the use of this form, full of perpetual zeros, other than showing the zpool structure?
Why not use the more informative output of the zpool iostat -v command here?
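For example, something like this, per-vdev and refreshed every 5 seconds (the pool name is assumed):

# Per-vdev read/write ops and bandwidth for rpool, every 5 seconds
zpool iostat -v rpool 5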
The same issue exists with NetData (https://www.netdata.cloud/) monitoring alarms.
IMHO it's better to switch that alarm off. RAM is supposed to work hard, and ZFS uses RAM for good things.
@Dunuin thanks for the advice, I'll try to monitor available memory.
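A quick sketch of what I plan to watch, assuming the usual arcstats location under /proc:

# How much RAM the ZFS ARC currently holds
awk '$1 == "size" {printf "ARC size: %.1f GiB\n", $3/2^30}' /proc/spl/kstat/zfs/arcstats
# What the kernel considers really available
free -h | awk '/^Mem:/ {print "Available:", $7}'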
My 2 cents:
It looks like the Proxmox web interface for File Restore has no support for national-language characters at all :(
On NTFS I can't see Cyrillic file or directory names, only the English-named items. Very sad and frustrating.
P.S. proxmox-backup-client map and guestmount show Cyrillic names fine.
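For anyone hitting the same thing, the CLI route that worked for me looks roughly like this (the repository, snapshot, and archive names below are made-up examples, and /dev/loop0 stands in for whatever device the map command reports):

# Map the snapshot's disk image to a local loop device
export PBS_REPOSITORY='backup@pbs@192.168.1.20:store1'
proxmox-backup-client map vm/100/2021-08-01T10:00:00Z drive-scsi0.img

# Mount it read-only via libguestfs; Cyrillic names display correctly here
mkdir -p /mnt/restore
guestmount -a /dev/loop0 -i --ro /mnt/restore
ls /mnt/restore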
IMHO the best way is to bind-mount a host-mounted NFS directory into the CTs' disks once, so you don't need an NFS mount inside every individual CT.
But I need a good solution like bind mounts, only for VMs (not CTs), especially Windows ones.
The key point of a bind mount is not to lose volume data...
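A minimal sketch of the bind-mount approach, assuming an NFS export already mounted on the host at /mnt/pve/nfs and a container with VMID 101 (both made up):

# Mount the NFS export once on the PVE host
mount -t nfs 192.168.1.10:/export/share /mnt/pve/nfs

# Bind-mount that host directory into container 101 as /mnt/share
pct set 101 -mp0 /mnt/pve/nfs,mp=/mnt/share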
pve-kernel-5.4 (6.4-1) pve pmg; urgency=medium
...
* proxmox-boot-tool: also handle legacy-boot ZFS installs
* proxmox-boot: add grub.cfg header snippet to log that the config is managed by this tool
...
-- Proxmox Support Team <support@proxmox.com> Fri, 23 Apr 2021 11:33:37 +0200
Are those...
You can't even export/mount an existing ZFS pool, because the current Proxmox ISO does not support the newer ZFS features.
And you also can't roll back the new ZFS features, as the action clause below shows:
root@server ~ # zpool status
pool: rpool
state: ONLINE
status: Some supported features are not...
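For reference, a read-only way to see which feature flags the pool actually has enabled or active (the pool name is assumed to be rpool):

# List ZFS feature flags on rpool and their state (disabled/enabled/active)
zpool get all rpool | grep 'feature@'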
I've finally found a workaround, but it's rather ugly.
Boot with a Proxmox installation ISO (one that is not fresh enough to import the upgraded ZFS volumes).
Insert some free drive to install onto (I personally used a USB flash drive -- toooo slooow; then an SSD removed from the ZFS log/cache).
Install fresh...