More information: the same issue appears on another CT, on a host that we upgraded.
Detailed pveversion:
proxmox-ve: 6.3-1 (running kernel: 5.4.106-1-pve)
pve-manager: 6.3-6 (running version: 6.3-6/2184247e)
pve-kernel-5.4: 6.3-8
pve-kernel-helper: 6.3-8
pve-kernel-5.0: 6.0-11
pve-kernel-5.4.106-1-pve...
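For reference, this listing comes from the verbose version command (the standard invocation, as far as I know):
root@pve01:~# pveversion -v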
We are still puzzled by this issue and haven't found the cause yet.
More information:
There have been no recent changes to that server that would explain that kind of memory usage increase.
Other things we have observed:
- Running kernel 5.4.106-1-pve and reverting packages lxc-pve and...
Eventually I removed the disk from the pool and then, following the remark from @avw, I could attach it as a mirror:
pool: rpool
state: ONLINE
scan: scrub repaired 0B in 07:22:19 with 0 errors on Sun Apr 11 07:46:20 2021
remove: Removal of vdev 1 copied 415G in 1h0m, completed on Mon Apr 19...
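For anyone hitting the same situation, a rough sketch of the attach step (the first device is a placeholder for the existing rpool disk; the second is the disk that the removal freed up):
root@pve01:~# zpool attach rpool <existing-rpool-device> wwn-0x5000c500b00df01a-part3
root@pve01:~# zpool status rpool   # the top-level vdev should now show up as mirror-0 and start resilvering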
The two containers are zabbix-proxy and salt-master. The image used for both is centos-7.
The zabbix-proxy one:
arch: amd64
cores: 2
hostname: zabbix-proxy.mysite
memory: 12288
nameserver: 172.1.11.254
net0: name=eth0,bridge=vmbr0,gw=172.1.11.254,hwaddr=xx:xx:xx:xx:xx,ip=172.1.11.100/24,tag=11,type=veth
onboot: 1...
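For reference, the config above is just the container's config file; if I remember correctly it can also be dumped with pct (the CT ID below is a placeholder):
root@pve01:~# pct config <ctid>
root@pve01:~# cat /etc/pve/lxc/<ctid>.conf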
Hi,
We have detected a strange (or not well understood) behavior in the memory usage of at least two containers, but we believe it's a generalized issue.
After a CT restart the memory usage keeps steadily growing until, after a couple of days, it reaches around 96-99% of the total assigned...
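In case it helps to narrow this down, a minimal sketch of the checks we run (the cgroup path assumes cgroup v1, which is what our 5.4 hosts use, and the CT ID is a placeholder): free inside the CT shows the usage, and the container's memory cgroup on the host shows how much of that is page cache versus RSS:
root@zabbix-proxy:~# free -m
root@pve01:~# grep -E '^total_(cache|rss) ' /sys/fs/cgroup/memory/lxc/<ctid>/memory.stat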
Yes, I also tried that and it didn't work:
root@pve01:~# zpool detach rpool wwn-0x5000c500b00df01a-part3
cannot detach wwn-0x5000c500b00df01a-part3: only applicable to mirror and replacing vdevs
Proxmox version and zfs versions:
root@pve01:~# zfs version
zfs-0.8.5-pve1
zfs-kmod-0.8.5-pve1...
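If I read that error right, detach only applies to disks that are members of a mirror; for a standalone top-level vdev the applicable command should be zpool remove, which zfs 0.8 supports for plain-disk and mirror top-level vdevs. A sketch, using the same device name as above:
root@pve01:~# zpool remove rpool wwn-0x5000c500b00df01a-part3
root@pve01:~# zpool status   # a 'remove:' line reports the evacuation progress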
Hi,
By mistake I added a disk to my pool and now I cannot remove it. Is there any way to do so?
root@pve01:~# zpool status
pool: rpool
state: ONLINE
scan: resilvered 0B in 0 days 03:48:05 with 0 errors on Wed Mar 24 23:54:29 2021
config:
NAME STATE...
This is a standard 6.2 installation; I didn't do anything special:
root@pve01:~# df -h /var/tmp
Filesystem Size Used Avail Use% Mounted on
rpool/ROOT/pve-1 2.5T 242G 2.2T 10% /
root@pve01:~# mount -v |grep pve-1
rpool/ROOT/pve-1 on / type zfs (rw,relatime,xattr,noacl)
After your previous message I checked my rbackup pool:
root@pve01:~# zfs get mountpoint rbackup
NAME PROPERTY VALUE SOURCE
rbackup mountpoint /rbackup local
But indeed df shows that /rbackup is not mounted; I don't know why, because the mountpoint was already set for the rbackup...
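A sketch of what I will check next (standard zfs commands, as far as I know): whether the dataset is actually mounted, and mounting it if it is not:
root@pve01:~# zfs get mounted,mountpoint rbackup
root@pve01:~# zfs mount rbackup   # refuses with 'directory is not empty' if something already wrote into /rbackup on the root filesystem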
Well, /rbackup is a completely different pool. Thanks for taking the time to reply, but I would appreciate it if you actually paid attention to the information that has already been shared.
The problem is that / is full and the space is not reflected in files or snapshots anywhere.
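For reference, a sketch of the checks behind that statement (standard zfs space accounting options):
root@pve01:~# zfs list -o space -r rpool
root@pve01:~# zfs list -t snapshot -r rpool/ROOT/pve-1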
The system is completely unusable right now; there are lots of messages in dmesg:
[ 363.312850] INFO: task pvesr:24346 blocked for more than 120 seconds.
[ 363.312893] Tainted: P OE 5.4.41-1-pve #1
[ 363.312921] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables...
There are constant writes to the filesystem; iotop -Pa reports:
Total DISK READ: 0.00 B/s | Total DISK WRITE: 931.18 B/s
Current DISK READ: 0.00 B/s | Current DISK WRITE: 2.38 M/s
PID PRIO USER DISK READ DISK WRITE SWAPIN IO> COMMAND...
As far as I know, the correct way of listing the ZFS datasets is with `zfs list`. Anyway, here's the df -h output:
root@pve01:~# df -h
Filesystem Size Used Avail Use% Mounted on
udev 7.8G 0 7.8G 0% /dev
tmpfs 1.6G...
Hi,
I have two ZFS pools, one for regular operation and another one for backups.
root@pve01:~# zpool status |grep -w pool
pool: rbackup
pool: rpool
I have two containers that take a lot of space in rpool, and the backup to the rbackup pool can't succeed. I don't know why...
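A sketch of how I look at the per-container space usage (assuming the default layout where container volumes live under rpool/data, and a placeholder CT ID):
root@pve01:~# zfs list -r rpool/data
root@pve01:~# pct df <ctid>   # per-mountpoint usage of a single container, if I remember the subcommand right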