I cannot follow your argument. All these operations are done in a background task, so there is no blocking of the UI. I don't remember exactly which operation took so long, but I think the longest-running ones are destroying snapshots and datasets. Destroy time depends on the used blocks that have to be freed...
Sorry, but I don't agree. What does a 'normal' setup mean? It is not unusual for such operations to take long on larger datasets/snapshots. Would you say this is not 'normal'?
Things like creating a dataset (while creating a new VM/CT) or creating, destroying, or rolling back a snapshot. Often it takes a little longer to complete than the timeout allows. ZFS gives no guarantee that such operations finish within a given time-box. So why is there such a VERY small timeout? Could you...
Most operations I do lead to a ZFS timeout error when there is notable I/O traffic on the node. The timeouts are way too small, or in my opinion useless. Why don't you wait for the zfs command to complete? Why do you produce these errors artificially? I never saw a zfs command not return, but...
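Just to illustrate what I mean, here is a minimal Python sketch (my own illustration, not the actual Proxmox code) of the difference between enforcing a fixed time-box and simply waiting for zfs to return:

import subprocess

def destroy_with_timeout(dataset, timeout=30):
    # Fixed time-box: raises subprocess.TimeoutExpired while zfs is still
    # busy freeing blocks, even though the command would eventually succeed.
    return subprocess.run(["zfs", "destroy", dataset], timeout=timeout, check=True)

def destroy_and_wait(dataset):
    # No artificial limit: block until zfs returns, however long the
    # pool needs to free the used blocks.
    return subprocess.run(["zfs", "destroy", dataset], check=True)

The second variant never produces an artificial error; it simply takes as long as the pool needs.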
Anytime the lxc-pve package is upgraded, all running containers get killed during the unpacking step.
Today this was:
Unpacking lxc-pve (1.1.5-7) over (1.1.5-5) ...
This is really f****ed up in a production env :-(
proxmox-ve: 4.1-37 (running kernel: 4.2.8-1-pve)
pve-manager: 4.1-13 (running...
Hi,
I found out that when a container is stopped, the backup does a 'stopped' mode backup even though it's configured for snapshot mode. I can then start the container while the backup is still running. After that there is R/W activity on the container file system, which makes the backup inconsistent. If you...
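What I would expect is some mutual exclusion between the backup job and the start command. A rough Python sketch of the idea (hypothetical; the lock path and start_container helper are made up, this is not how PVE implements it):

import fcntl

def start_guarded(vmid):
    # The backup job would hold an exclusive lock on the same file for the
    # whole 'stopped' mode backup, so a concurrent start is refused instead
    # of silently corrupting the archive.
    lock = open(f"/var/lock/vzdump-ct-{vmid}.lock", "w")
    try:
        fcntl.flock(lock, fcntl.LOCK_EX | fcntl.LOCK_NB)
    except BlockingIOError:
        raise RuntimeError(f"CT {vmid} is currently being backed up; start refused")
    try:
        start_container(vmid)  # hypothetical helper
    finally:
        fcntl.flock(lock, fcntl.LOCK_UN)
        lock.close()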
Hi,
I found the following issue: when I try to use pigz with vzdump/gzip, I get the following error:
root@ckc-b-p0005:~# vzdump 242 --compress gzip --storage FreeNAS --mode snapshot
INFO: starting new backup job: vzdump 242 --compress gzip --storage FreeNAS --mode snapshot --mailnotification...
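For reference, I enabled pigz via /etc/vzdump.conf (assuming I read the docs correctly that the pigz option replaces gzip and sets the thread count):

# /etc/vzdump.conf
# use pigz instead of gzip when --compress gzip is given; value = thread count
pigz: 4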
Yes, corosync is running and pvecm status is all good:
Quorum information
------------------
Date: Thu Nov 19 10:32:26 2015
Quorum provider: corosync_votequorum
Nodes: 3
Node ID: 0x00000001
Ring ID: 7940
Quorate: Yes
Votequorum information...
Hi,
I think the log is not useful at the moment. It is flooding the logs with hundreds of messages per minute. Now I have the problem that I cannot restart the cluster:
-- Logs begin at Tue 2015-11-17 06:39:38 CET, end at Tue 2015-11-17 11:47:39 CET. --
Nov 17 11:47:36 ckc-b-p0005 pmxcfs[15123]...
I was able to catch a full syslog from the beginning of the backup until the crash and reboot. There are many error messages before the crash. Does anybody know what this means?
Hi,
I found the problem. The cluster filesystem is blocking. Any process that tries to read or write from /etc/pve becomes blocked forever. I saw this a few times. If you try to `cd` into /etc/pve, your shell is also blocked. Restarting pve-cluster from another shell solves the problem. So this is...
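Something like this (a rough Python sketch of my own, not part of PVE) could at least detect the hang instead of blocking forever:

import subprocess

def pmxcfs_responsive(timeout=5):
    # Listing /etc/pve blocks forever when the cluster filesystem hangs,
    # so run it in a child process with a timeout.
    try:
        subprocess.run(["ls", "/etc/pve"], timeout=timeout,
                       stdout=subprocess.DEVNULL, check=True)
        return True
    except subprocess.TimeoutExpired:
        return False

if not pmxcfs_responsive():
    # restarting pve-cluster from another shell resolved it for me
    subprocess.run(["systemctl", "restart", "pve-cluster"], check=False)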
Hi Wolfgang,
did I understand correctly that this is not a critical issue?
I tried your advice but can't trigger the error at the moment. But I had to restart the container to get rid of the messages because they occur every second. So it's hard to tell whether it's triggered by my action.
Thx Wolfgang, that solves the problem. The LVM package comes from the proxmox repo, right? So you should consider adding this to the default config, as LVM on Linux guests is a very common pattern.
After running some containers for a while, the syslog is flooded with:
Nov 12 12:21:37 ckc-b-p0004 lxcfs[2775]: Internal error: truncated write to cache
every second.
Does anybody know what this is?