Thank you, Victor, for the prompt response.
How do you recommend trying what you suggested?
Should I change things one by one, or make all the changes at once?
First things first.
BIOS: the first thing I did when the issue appeared was to update the BIOS.
The BIOS for this MoBo is quite old, as is the MoBo itself, but it is the latest available...
UPDATE
Definitely, the times when this happened are not the same.
The host worked for almost 20 hours without any issue, but also without any load (no VM running, no LXC running).
This morning I started one LXC container and one VM, both Linux/Debian guests, which use the HDD storage. After one and a half hours of...
UPDATE
I just figured out that when this message appears on the console of this host, the I/O delay goes high.
That happened about two hours after restarting the host.
Further inspection using
which tells me that the kernel disabled the IRQ that is in charge of the ATA controller and also of one NIC.
Can anyone help me how to...
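For context, the sort of inspection I mean can be sketched like this (a generic example; the exact IRQ numbers and device names depend on the host):

```shell
# Show how IRQ lines are shared between devices (ata* = SATA controller,
# eth*/enp* = NICs); a force-disabled line still appears here
grep -E 'CPU0|ata|eth|enp' /proc/interrupts || true

# When the kernel force-disables a misbehaving IRQ, it logs
# 'irq N: nobody cared' together with the affected handlers
dmesg 2>/dev/null | grep -i -B1 -A3 'nobody cared' \
  || echo "no disabled-IRQ message in the kernel log"
```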
Hello everyone,
We are experiencing strange behavior on one PVE host in our cluster. Our cluster has only three hosts. The hosts are not brand-name servers; they are custom-built desktops. They are not identical hardware, but they are very similar. All hosts run the same version of PVE, with the latest updates from...
Thank you for the quick and precise answer.
My first plan is to do an in-place upgrade, but if it does not succeed on a stand-alone server, then I will do a clean install on my cluster servers.
How should I import the ZFS data pool, from the GUI or from the CLI? Which is the better way?
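For reference, the CLI route would look roughly like this (a sketch; the pool name hdd-pool and storage ID hdd-tank are illustrative, substitute your own):

```shell
# List pools that are visible on the disks but not yet imported
zpool import

# Import the data pool by name (add -f only if it was not exported cleanly)
zpool import hdd-pool

# Register the pool as a Proxmox storage so it also appears in the GUI
pvesm add zfspool hdd-tank --pool hdd-pool --content images,rootdir
```

The GUI (Datacenter -> Storage -> Add -> ZFS) only covers the last step; the `zpool import` itself is done on the CLI either way.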
Hello people,
I plan to upgrade my PVE servers, and I am looking for the best solution.
As suggested in the documentation, it is better to do a clean install.
My setup is like this (PVE 6.4-15 with the latest updates):
2 SSD disks - ZFS mirror - rpool - system and a local pool for some VMs
2 HDD disks - ZFS...
UPDATE:
on node pm0, which had the incorrect HA manager_status, I did the following:
1 - took the node offline
2 - changed the expected number of nodes so that a single node has quorum: command pvecm expected 1
3 - then stopped the service: command systemctl stop pve-ha-crm.service
4 - then deleted manager_status: command rm...
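Spelled out as a shell sequence (a sketch of steps 2-4; the manager_status path shown is the standard pmxcfs location, double-check it on your own node before deleting anything):

```shell
# Step 2: with only this node up, lower the expected votes so it keeps quorum
pvecm expected 1

# Step 3: stop the HA cluster resource manager on this node
systemctl stop pve-ha-crm.service

# Step 4: remove the stale HA manager state file
# (standard location in the pmxcfs cluster filesystem - verify first)
rm /etc/pve/ha/manager_status
```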
Yes, that is correct, but as I described, we have three nodes, pm0, pm3, and pm4, in the cluster. The problem appeared because HA replication did not finish successfully, and now one node, pm0, causes the problem when it is started.
On the two nodes which work as they should, we stopped HA for ct201. But on pm0...
After a couple of hours of inspecting the problem, we found the following.
First case:
two nodes, pm4 and pm3, work and pm0 is off - the cluster is stable and everything works
Cluster pm4 and pm3 is up and running
Datacenter - HA: ct201 deleting - cannot delete or edit
pm4 is master, lrm state is idle
pm3 lrm...
Hello to everyone,
We have three Proxmox nodes in a cluster.
One is on site; the other two are at a remote location.
We have only one container, for which we set up HA.
All three nodes run the same version of Proxmox; the package list is below:
proxmox-ve: 6.4-1 (running kernel: 5.4.174-2-pve)
pve-manager...
Hello, and thanks.
hdd-tank is on the ZFS pool named hdd-pool.
When I list the files on the disk, I see the same as in the GUI.
Those files are not from an old VM that ran before, because no VM with the same ID 200 ever ran before.
This machine is WS2k19, which acts as DNS, DC, AD - just a...
Hello,
Something strange has happened, or I am missing some classes ;)
I have one VM on my Proxmox host which suddenly has more than one virtual disk.
The VM is Windows Server 2k19 and had only one disk, 200-disk0, which is the disk created when the VM was created.
I notice now that this VM, when...
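To cross-check which disks the VM actually references against what physically exists on the storage, something like this could help (a sketch; the storage ID hdd-tank and VMID 200 are taken from this thread):

```shell
# Disks referenced by the VM configuration
qm config 200 | grep -E 'scsi|sata|virtio|ide|unused'

# Disk images physically present for this VMID on the storage
pvesm list hdd-tank --vmid 200

# Attach any orphaned images to the config as "unused" disks,
# so they can be reviewed and removed from the GUI
qm rescan --vmid 200
```

Anything that shows up in `pvesm list` but not in `qm config` is a leftover image rather than a disk the VM is using.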
@ph0x the "maxroot" option is only available if I choose a single disk and the ext4 partition type. If I use a ZFS array, that option doesn't exist, as the attached pics show.
@ph0x Thank you; I would appreciate it if you could answer the following questions:
1. Do I have a way to choose the maxroot option, or must I use a partition manager to do it? What maxroot size is minimal/recommended/optimal/maximal?
2. It is clear what I have to do. What about the names of the ZFS pools...
Hello,
I plan to build three Proxmox VE hosts in a cluster. I have SSD and HDD disks in stock.
I am thinking of using them like this:
SSD - for the system itself and some guest machines
HDD - for storage, local backups, guest machines (basically everything else)
NFS - on NAS storage
PBS - for automatic...