I installed PVE 6 with ZFS on two SSDs on an IBM x3650 M3 server with an H200 SAS controller cross-flashed to IT mode.
The installation completed successfully, but after the reboot the server does not boot, neither in UEFI mode nor in legacy mode.
If the USB drive is plugged into the server...
I have a simple question which I would like to share because I'm interested in your point of view.
On a Proxmox server I have fast storage (SSDs or NVMe) and slow storage (SAS or SATA).
My first choice could be to install Proxmox on the fast storage and also use it for storing virtual...
Hi, I'm hitting this problem on two different Proxmox hosts with two LXC containers:
[338962.945187] Memory cgroup out of memory: Kill process 33823 (celery) score 6 or sacrifice child
[338962.946422] Killed process 33823 (celery) total-vm:212304kB, anon-rss:51236kB, file-rss:28kB...
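A minimal sketch of checking whether the container is simply hitting its configured memory limit, assuming a placeholder container ID of 101 (not taken from the log above), a generic prompt and a cgroup v1 layout:

root@pve:~# pct config 101 | grep -Ei 'memory|swap'
root@pve:~# cat /sys/fs/cgroup/memory/lxc/101/memory.limit_in_bytes
root@pve:~# pct set 101 -memory 4096   # raise the limit (in MB) only if the workload genuinely needs it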
I have a container with very slow performance, and while trying to debug this I realised that it is swapping heavily even though it has a lot of RAM available:
These are the status graphs of the hypervisor:
As you can see, the host does not seem to be overloaded.
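For reference, a minimal sketch of the two knobs that usually drive this behaviour, with 101 as a placeholder container ID: the container's swap allocation and the host's vm.swappiness.

root@pve:~# pct config 101 | grep -Ei 'memory|swap'
root@pve:~# cat /proc/sys/vm/swappiness
root@pve:~# pct set 101 -swap 0        # optionally remove the container's swap allocation
root@pve:~# sysctl vm.swappiness=10    # or make the host less eager to swap anonymous pages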
I'm trying to install Proxmox 5.4 on RAIDZ-1 on 3 NVMe drives, but I'm receiving the error "unable to create zfs root pool".
The NVMe drives are correctly recognised and I can configure RAIDZ-1 on them:
But I'm receiving the following error:
Could you help me please?
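In case it is relevant, a minimal sketch of clearing leftover partition tables and old ZFS labels, which is a common cause of this installer error; /dev/nvme0n1 is a placeholder and the same steps would be repeated for the other two drives before restarting the installer:

root@host:~# wipefs -a /dev/nvme0n1          # remove filesystem/RAID signatures
root@host:~# sgdisk --zap-all /dev/nvme0n1   # wipe GPT and MBR structures
root@host:~# zpool labelclear -f /dev/nvme0n1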
I managed to solve the problem by running:
root@node08:/# systemctl start pve-cluster
root@node08:/# pvecm updatecerts
(re)generate node files
merge authorized SSH keys and known hosts
root@node08:/# apt-get -f install
Reading package lists... Done
Building dependency tree
Today I ran "apt-get dist-upgrade" on my Proxmox hosts, but the upgrade did not finish cleanly and afterwards the cluster was broken.
The problem was with the upgrade of pve-manager, which cannot be started:
root@node07:/home/mattia# apt-get install pve-manager
Reading package lists... Done
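A minimal sketch of the recovery order that would match the fix shown above, assuming pve-manager fails to configure because pve-cluster is not running:

root@node07:~# systemctl status pve-cluster
root@node07:~# systemctl start pve-cluster
root@node07:~# dpkg --configure -a
root@node07:~# apt-get -f install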
I have a three-node Proxmox cluster running Proxmox 5.2.
In particular, one node is failing without any apparent reason and appears red in the web console.
If I recreate the cluster from scratch, all machines are green for about 30 minutes, then some nodes randomly turn red:
Yes, the cluster is healthy:
root@node11:/# pvecm status
Date: Fri Oct 26 21:53:55 2018
Quorum provider: corosync_votequorum
Node ID: 0x00000002
Ring ID: 8/2380
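Since quorum looks healthy, a minimal sketch of the usual next checks: restarting the status daemons on the red node and verifying multicast between the nodes (the host names below are placeholders):

root@node11:~# systemctl restart pvestatd pveproxy
root@node11:~# omping -c 600 -i 1 -q node11 node12 node13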
On my Proxmox host I cannot run pct list anymore because it hangs indefinitely with no output:
root@node11:~# pct list
(no return to console...)
If I run it under strace I get an endless timeout, but I cannot work out which program is causing it:
root@node11:~# strace pct list...
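A minimal sketch of a more targeted trace, following child processes and writing timestamps to a file so the blocking call can be found afterwards (the output path is arbitrary):

root@node11:~# strace -f -tt -o /tmp/pct-list.trace pct list
root@node11:~# tail -n 50 /tmp/pct-list.trace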
My LXC containers are showing the same load average as the host.
I know this is due to some limitations of LXC, and I had the same problem with CPU and memory until a few months ago.
Now, after some upgrades, the containers show the correct information about CPU and memory limits. Why...