I installed PVE 6 with ZFS on two SSD drives on an IBM x3650 M3 server with a H200 SAS controller cross-flashed to IT mode.
The installation process completed successfully, but after the reboot the server does not boot, neither in UEFI mode nor in legacy mode.
If the USB drive is plugged into the server...
I have a simple question which I would like to share because I'm interested in your point of view.
On a Proxmox server I have fast storage (SSDs or NVMe) and slow storage (SAS or SATA).
My first choice would be to install Proxmox on the fast storage and use it also for storing virtual...
I have a container with very slow performance, and while trying to debug this I realised that it is swapping heavily even though it has plenty of RAM available:
These are the status graphs of the hypervisor:
As you can see, the host does not seem to be overloaded.
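A minimal sketch to narrow this down (run inside the container, or on the host): list which processes are actually holding swap, highest usage first. The `VmSwap` field is standard in /proc on Linux; the 20-line cut-off is just an example.

```shell
# Sketch: per-process swap usage in kB, highest first.
# Kernel threads have no VmSwap line and are skipped automatically.
for f in /proc/[0-9]*/status; do
  awk '/^Name:/ {n=$2} /^VmSwap:/ {print $2, n}' "$f" 2>/dev/null
done | sort -rn | head -20
```

If one process dominates here while free RAM is high, the container's swappiness or memory limits are the next thing to check.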
I'm trying to install Proxmox 5.4 on RAIDZ-1 on 3 NVMe drives, but I'm receiving the error "unable to create zfs root pool".
The NVMe drives are correctly recognised and I can configure RAIDZ-1 on them:
But I'm receiving the following error:
Could you help me please?
Today I ran "apt-get dist-upgrade" on my Proxmox hosts, but the upgrade did not finish cleanly and afterwards the cluster was broken.
The problem was with the upgrade of pve-manager, which could not be started:
root@node07:/home/mattia# apt-get install pve-manager
Reading package lists... Done
I have a three-node Proxmox cluster running Proxmox 5.2.
One node especially is failing without any apparent reason and appears red in the web console.
If I recreate the cluster from scratch, all machines are green for about 30 minutes, then some nodes randomly turn red:
On my Proxmox host I can no longer run pct list because it hangs forever without any output:
root@node11:~# pct list
(no return to console...)
If I try to run it with strace I see an endless timeout, but I cannot work out which program is causing it:
root@node11:~# strace pct list...
My LXC containers are showing the same load average as the host.
I know this is due to some limitations of LXC, and until a few months ago I had the same problem with CPU and memory too.
Now, after some upgrades, containers show the correct information about CPU and memory limits. Why...
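The CPU/memory fix described above came from lxcfs virtualising those /proc files; loadavg virtualisation was added later and, in some lxcfs versions, has to be enabled explicitly (the `-l`/`--enable-loadavg` flag). A quick check from inside the container, assuming a standard lxcfs setup:

```shell
# Sketch: is /proc/loadavg served by lxcfs (per-container values)
# or by the kernel (host-wide values)?
if grep lxcfs /proc/mounts | grep -q loadavg; then
  echo "per-container loadavg (lxcfs)"
else
  echo "host loadavg visible - lxcfs loadavg virtualisation not active"
fi
```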
I am using a pfSense 2.4 (FreeBSD-based) virtual machine on KVM and I see different RAM usage in Proxmox than in the VM itself.
Proxmox shows more than 90% RAM usage (~15 GB of 16 GB):
but both pfSense and FreeBSD show only 2% usage:
But the virtual machine is giving...
I have an entry-level server (Dell PowerEdge T30, 16 GB RAM, 2 x 1 TB SATA hard drives) on which I installed Proxmox 5.1 configured with a ZFS RAID1 pool.
I know this is a slow system, but after creating 3 virtual machines the server is terribly slow: continuous high I/O delay and CPU...
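One common contributor on small hosts like this: by default the ZFS ARC can grow to roughly half of RAM, which on a 16 GB machine competes with the VMs and pushes the system into heavy I/O. Capping the ARC via a module option is a standard mitigation; the 4 GiB value below is only an illustrative assumption, not a recommendation for every workload.

```shell
# /etc/modprobe.d/zfs.conf - cap the ZFS ARC (example value: 4 GiB).
# Pick a value that leaves enough RAM for your VMs, then run
# `update-initramfs -u` and reboot for it to take effect.
options zfs zfs_arc_max=4294967296
```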
I read the information on the page https://pve.proxmox.com/wiki/Cluster_Manager#_cluster_network, especially the chapter Network Requirements:
From this I understand that it's better to create a separate network for cluster communication, which is completely reasonable.
What I want...
Suddenly one node in my Proxmox 5.1 cluster becomes unavailable in the web interface and its icon turns grey with a question mark, as in the following screenshots:
This happened three times on three different nodes (on node01 and node02, I rebooted them to solve the problem).
I upgraded my PVE 4.4-20 system (pve-manager/4.4-20/2650b7b5, running kernel 4.4.95-1-pve) to the latest PVE 5.1 (pve-manager/5.1-41/0b958203, running kernel 4.13.13-2-pve).
The upgrade ended without any error but after the reboot the networking on the server is really slow!
I can connect...
I have a PVE cluster made with Proxmox VE 5.1 nodes.
Sometimes when I start tasks on VMs (start, stop, etc.) I get this error on the nodes' console:
unregistered_netdevice: waiting for lo to become free. Usage count = 1
When I get this error the task seems to hang and does not complete...
On my Proxmox 5.1 system I sometimes find KVM virtual machines hung.
Last week this happened on two different machines: one Windows 2016 and one FreeBSD 10.3 (pfSense 2.3.4-p1):
the Windows 2016 virtual machine was hung with the CTRL+ALT+DEL screen on the console completely...
I have a Proxmox 5.1 system with a root ZFS RAID1 pool and one Windows Server 2008 R2 virtual machine.
While the VM was running I added a new VIRTIO hard drive on the local-zfs storage and received an error.
So I stopped the VM and tried to restart it.
After this I am not able to start any virtual...
When I was using hardware RAID controllers on older servers, I used the controller's CLI tools (megacli for LSI, arcconf for Adaptec, and so on) to monitor array and drive status.
What should I use for ZFS?
Are smartctl, zfs-zed and zpool status enough to be alerted and to predict...
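For the alerting half of the question, a minimal cron-style sketch is below, assuming zfsutils and a working local `mail` command; the script name is hypothetical. In practice zed (configured via ZED_EMAIL_ADDR in /etc/zfs/zed.d/zed.rc) plus smartd cover most of what megacli-based alerting did.

```shell
#!/bin/sh
# zfs-health.sh (sketch) - mail root when any pool is not healthy.
# `zpool status -x` prints "all pools are healthy" in the good case
# and details only for degraded/faulted pools otherwise.
status=$(zpool status -x)
if [ "$status" != "all pools are healthy" ]; then
    echo "$status" | mail -s "ZFS alert on $(hostname)" root
fi
```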
I have some nodes with a local LVM Thin pool.
On one node the metadata pool is full and I cannot start any new containers:
root@node3:~# lvs -a -o+metadata_percent
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert Meta%
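When the thin pool's metadata LV fills up, the usual fix is to grow it, which needs free extents in the volume group. A sketch, assuming the default Proxmox pool name `pve/data` (adjust the VG/LV names and the +1G amount to your setup):

```shell
# Sketch: grow the thin pool's metadata LV by 1 GiB.
lvextend --poolmetadatasize +1G pve/data
# Verify the new metadata usage percentage:
lvs -a -o+metadata_percent pve/data
```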