My bad, the -r flag had to be in the previous command: zfs destroy -r rpool/data/vm-119-disk-0-old
I'll edit my previous post so people who find this will get it right.
Yeah, the snapshots were the same. Replication was not set up for this VM.
I think I got everything I needed.
Thank you very much...
Well, I used this command: zfs send -Rp rpool/data/vm-172-disk-1 | zfs recv rpool/data/vm-172-disk-1-new, but it didn't work, so I found out that I need to create a snapshot first and send the snapshot. Apparently there is no need for -o compression=on, because it's already present on the dataset...
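For anyone who finds this later, the working sequence was roughly this (the snapshot name @copy is just an example I picked):

zfs snapshot rpool/data/vm-172-disk-1@copy
zfs send -Rp rpool/data/vm-172-disk-1@copy | zfs recv rpool/data/vm-172-disk-1-new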
Well, I checked out how to use zfs send and zfs recv, but I am still not 100% sure and I really don't want to screw anything up.
Is this procedure with the -R option right?
zfs send -R rpool/data/vm-172-disk-1 | zfs recv rpool/data/vm-172-disk-1-new
zfs rename rpool/data/vm-172-disk-1...
Oh, good point.
So I will run zfs set compression=on rpool/data on the destination, and if I get it right, I need to delete the replication and the subvol-150-disk-0 dataset to start it fresh with compression enabled. Is there any possibility to compress the other VM/CT datasets so I can prevent problems...
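So, if I understand it right, something like this (just a sketch; compression only applies to newly written data, existing blocks stay as they are until rewritten):

zfs set compression=on rpool/data
zfs get -r -o name,value,source compression rpool/data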
I did not do anything; the pool has been showing that I can upgrade for some time, but I did not proceed.
The same thing happened on another Proxmox server with a different container, so I tried deleting the replication (it destroyed the CT's dataset too) and setting it up again, but it did not help and ended with...
Thank you for trying to help.
Looks like there is plenty of free space on the pool.
root@backup:~$ zfs list -o space rpool
NAME   AVAIL  USED   USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
rpool  4.23T  11.1T  0B        140K    0B             11.1T
root@backup:~$ zpool list rpool
We are having a problem with host replication.
The host has a 16GB disk, which is about 25% full. But when replication starts, after 2GB transferred it ends with the error: cannot receive incremental stream: destination rpool/data/subvol-150-disk-0 space quota exceeded
The error is about space...
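For completeness, this is how the quota and usage on the destination dataset can be checked (a sketch):

zfs get quota,refquota,used,available rpool/data/subvol-150-disk-0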
Some of them do have the same problems (the problems happen 100% of the time on servers whose logs contain the blocked-task messages from the previous post). And some even have the same hardware.
The disks are mostly in hot-swap front positions, but some of the servers do not have hot-swap positions, so the disks are connected internally...
Hi, here are the details you asked for.
On other servers we have similar issues with blocked tasks, with different Samsung disks:
Samsung_SSD_860_PRO_1TB and Samsung_SSD_860_EVO_1TB
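If it is useful, we can run a SMART check on them, something like this (the device path is a placeholder for the actual disk):

smartctl -a /dev/sdX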
About resources, nothing looks unusual... on a server with 80%+ RAM and 50-70% CPU wearout there are no errors, but...
We are having problems with our Proxmox servers. A server sometimes freezes and does not work for something like 2-10 minutes until it unfreezes and continues working... Our monitoring system reports these freezes because the server stops responding to ping.
In the log we found "INFO: task...
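The messages can be pulled from the kernel log like this (a sketch; the exact wording may differ between kernels):

dmesg | grep -i "blocked for more than"
journalctl -k | grep -i "INFO: task"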
Hello, and thank you for the response.
I compared the snapshots on the main host and the replicated host; both are the same. But a new replication has already run, so it's probably pointless to compare right now, while it works... I'll try it when it "fails".
What do you mean by "compare on both sides with list"?
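If you mean listing the snapshots on each side, I guess something like this on both hosts (the dataset name is a placeholder):

zfs list -t snapshot -o name,creation,guid rpool/data/subvol-XXX-disk-0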
Hello, we are facing problems with replication.
Roughly once a month, our container replication does a "full sync" instead of an "incremental sync".
The container has a few hundred GB, so it's really annoying.
Why is it doing a full sync when the initial full sync already happened?
Thanks for any advice.
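If it helps to diagnose, this is how we check the state of the replication jobs on the node (pvesr is the Proxmox replication tool):

pvesr status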
knet 1.16 slightly improved the stability of the cluster.
New info: we restarted 1 node, but another 3 nodes went down and restarted... On all 3 nodes I found some watchdog messages around the time of the restart (12:57).
Is it possible that the watchdog is causing some panic and restarting the...
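This is roughly how I searched for the messages (the times are just a window around the 12:57 restart):

journalctl --since 12:45 --until 13:10 | grep -i watchdog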
We are probably going to wait until the knet update is released in the Proxmox repositories and give it a try. Maybe it can help the situation.
For info: we have nodes in 3 different cities, connected via VPN.
Why can we not have nodes in multiple subnets? Could you please refer us to some relevant documentation?
We understand that corosync problems (the cluster falling apart) can be caused by multiple subnets, but is it possible that our present problem (the ethernet adapter failing) is caused by multiple...
Hello, we switched to SCTP and set the token to 10000 because the cluster was falling apart all the time... a few times a day. We found this fix on the forum, and after applying it to our cluster it fixed the problem; the cluster now holds together...
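For reference, the change boils down to something like this in the totem section of /etc/pve/corosync.conf (a sketch for a single-link setup; remember that config_version has to be bumped when editing the file):

totem {
  token: 10000
  interface {
    linknumber: 0
    knet_transport: sctp
  }
  ...
}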
We are facing pretty curious trouble in our Proxmox cluster.
When we restart a node or corosync on some host (systemctl restart corosync), the ethernet adapter on some other nodes crashes and does not recover by itself.
Yesterday we performed a test: we restarted the server "backup" and within a few...
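Until the cause is found, the adapter can at least be brought back manually (the interface name eno1 is a placeholder for the actual NIC):

ip link set eno1 down
ip link set eno1 up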
Hello, unfortunately, we are facing the problem again.
This morning the node started making a mess and eventually restarted; after that, it did not connect to the cluster, even after manually restarting corosync on all nodes.
The problem started at 07:28 and continued until the node restarted at...