Can't see replication stats :-(
Attached a webui screenshot and the CLI output:
root@pve:~# pvesr status
file '/var/lib/pve-manager/pve-replication-state.json' too long - aborting
Hi, we have a three-node cluster with ZFS storage.
We are using replication for our LXC servers. On node 1, we have about 67 LXC containers replicating to another server.
We scheduled another replication and got these errors in the webui.
file...
Have you tried deleting the network config file, modifying the fstab file, and then booting?
My script to convert to simfs is based on your script. Here it is, with small changes:
#!/bin/sh
# ./convert_ploop_to_simfs.sh CTID
# chmod +x ./convert_ploop_to_simfs.sh
# Check parameters
if [ $# -eq 0 ]...
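For anyone who wants to see the general shape of the conversion before running it, here is a minimal dry-run sketch of the usual ploop-to-simfs steps. The commands are only echoed, never executed, and the /vz layout, config path, and backup naming are assumptions based on a stock OpenVZ install; adapt them to your environment.

```shell
#!/bin/sh
# Dry-run sketch of a ploop -> simfs conversion; every command is
# printed instead of executed. Paths assume stock OpenVZ (/vz, /etc/vz).
CTID=${1:-101}   # example CTID if none is given on the command line
run() { echo "WOULD RUN: $*"; }

run vzctl stop "$CTID"                       # stop the container
run vzctl mount "$CTID"                      # mount its ploop image
run rsync -aHAX "/vz/root/$CTID/" "/vz/private/${CTID}.simfs/"  # copy files out
run vzctl umount "$CTID"
run mv "/vz/private/$CTID" "/vz/private/${CTID}.ploop"          # keep ploop as backup
run mv "/vz/private/${CTID}.simfs" "/vz/private/$CTID"
run sed -i 's/^VE_LAYOUT=.*/VE_LAYOUT="simfs"/' "/etc/vz/conf/${CTID}.conf"
run vzctl start "$CTID"
```

Once you are happy with what it prints, replacing the `run` wrapper with the real commands (and keeping the `.ploop` backup until the container boots) makes the conversion reversible.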
I can think of two things:
1) Restore your vzdump with my previous commands, change ostype to unmanaged, run the other commands (mount the filesystem, delete the network config, and change the fstab file), and try to boot.
2) Virtualize a SolusVM master on Proxmox, restore vzdump and convert...
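Option 1 can be sketched as a dry run (commands echoed only, not executed). The CTID, archive name, storage, and the `/var/lib/lxc/<CTID>/rootfs` mount path are assumptions; on a stock Proxmox node `pct mount` reports where it mounted the rootfs, so check its output before touching any files.

```shell
#!/bin/sh
# Dry-run sketch of option 1: restore, mark unmanaged, fix config, boot.
# Commands are printed, not executed; paths and names are examples.
CTID=${1:-101}   # example CTID
run() { echo "WOULD RUN: $*"; }

run pct restore "$CTID" "vzdump-$CTID.tar" --storage local-zfs
run pct set "$CTID" --ostype unmanaged       # stop Proxmox probing the OS
run pct mount "$CTID"                        # typically mounts under /var/lib/lxc/$CTID/rootfs
run rm "/var/lib/lxc/$CTID/rootfs/etc/network/interfaces"   # drop stale network config
run sed -i '/ploop/d' "/var/lib/lxc/$CTID/rootfs/etc/fstab" # drop old ploop mounts
run pct unmount "$CTID"
run pct start "$CTID"
```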
Hi, I've migrated about 80 containers from SolusVM to Proxmox.
First I converted them from ploop to simfs (this is why Proxmox cannot detect the OS) and made a dump with vzdump.
Then I imported them into Proxmox with a script like this:
pct restore $1 vzdump-$1.tar --storage local-zfs --arch amd64...
Also, when restoring the backup, I converted them from privileged to unprivileged.
The conversion gives me these errors:
tar: ./lib/udev/devices/ploop63531p1: Cannot mknod: Operation not permitted
tar: ./lib/udev/devices/ploop29757p1: Cannot mknod: Operation not permitted
tar: ./lib/udev/devices/ploop44060p1...
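Those `Cannot mknod` failures are tar trying to recreate device nodes, which an unprivileged restore is not allowed to do; `pct restore` has an `--ignore-unpack-errors` flag for exactly this case. A dry-run sketch of the restore (the pct command is only echoed; the CTID, archive name, and storage are assumptions):

```shell
#!/bin/sh
# Dry-run sketch: import one OpenVZ vzdump archive as an unprivileged CT.
# The command is printed, not executed; adapt storage and names.
CTID=${1:-101}   # example CTID
run() { echo "WOULD RUN: $*"; }

# --unprivileged 1 converts the container on restore;
# --ignore-unpack-errors 1 tolerates the device nodes tar cannot recreate
# (the "Cannot mknod" errors for /lib/udev/devices/ploop*).
run pct restore "$CTID" "vzdump-$CTID.tar" \
    --storage local-zfs \
    --unprivileged 1 --ignore-unpack-errors 1
```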
Hi, I have a cluster with five nodes. The nodes are running CTs.
If we enable replication between two nodes, replication is very slow. Weeks ago it was instant. Now when replication starts, it says:
2019-03-11 08:21:00 501-1: start replication job
2019-03-11 08:21:00 501-1: guest => CT 501, running...
Hi, we have two DELL servers running Proxmox.
- DELL R610 with four SAS 15k drives and PERC H700
- DELL PE1950 with two SAS 10k drives and PERC 6/i
Currently, as we can't pass disks directly to Proxmox, we have:
- 4 x RAID 0 drives and Proxmox with ZFS Raid 10
- 2 x RAID 0 drives and Proxmox...
What about a server with only 8 GB RAM and 2 x 1 TB SATA 7.2k drives? ZFS RAID 1 and limiting the RAM used by ZFS?
The purpose is to run only one VM and make backups with Proxmox to an NFS share, so we can move it to another cluster if we need to run maintenance tasks on the physical node.
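Capping the ZFS ARC on such a small box is just a module parameter; a sketch, assuming a 2 GiB cap is acceptable for your workload (the value is an example, tune it to your RAM budget):

```
# /etc/modprobe.d/zfs.conf -- cap the ZFS ARC at 2 GiB (2 * 1024^3 bytes)
options zfs zfs_arc_max=2147483648

# Apply with: update-initramfs -u, then reboot.
# Or change it at runtime without rebooting:
#   echo 2147483648 > /sys/module/zfs/parameters/zfs_arc_max
```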
For people in the same situation: we have a server with two different disks, 500 GB (/dev/sda) and 1 TB (/dev/sdb).
I've installed the Proxmox node on the smaller disk with ZFS RAID 0 (/dev/sda).
After installation (be careful with disk names in your environment) ;-)
zpool status -v
NAME...
Hi, we have four servers running SolusVM. We want to migrate to Proxmox. All four servers have four 2 TB 7k drives, and the motherboard is Intel soft-RAID ready.
In terms of performance and security, what's best?
- Four independent AHCI drives, install Proxmox and make a ZFS RAID 10 with the...