From the documentation:
--full <boolean>
Create a full copy of all disks. This is always done when you clone a normal CT. For CT templates, we try to create a linked clone by default.
Thank you fabian.
pct clone 102 121 --full 0 --snapname snap0
Linked clone feature for 'local-zpool0:subvol-102-disk-0' is not available
pct listsnapshot 102
Wide character in printf at /usr/share/perl5/PVE/GuestHelpers.pm line 176.
`-> snap0 2024-11-25 07:48:14 Przed aktualizacją...
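As the quoted documentation says, a linked clone is only attempted for CT templates, which is why it fails here on a normal container. One possible path (just a sketch, assuming container 102 can be turned into a template, which makes it read-only):

pct template 102
pct clone 102 121

Without --full, pct should then try a linked clone on ZFS; with --full 1 you always get a complete copy.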
In your /etc/network/interfaces file this line is wrong:
address 192.168.32.50
Change it to:
address 192.168.32.50/24
Does your router have 192.168.32.1 as its LAN address?
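For reference, a typical bridge stanza in /etc/network/interfaces looks roughly like this (the physical interface name and the gateway are assumptions, adjust them to your setup):

auto vmbr0
iface vmbr0 inet static
        address 192.168.32.50/24
        gateway 192.168.32.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0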
Now I understand. It's not a network problem, it's just that the host didn't start. How do you know that it's a problem with the configuration files and not a damaged disk, for example?
I see you have another thread on this topic. You should continue there. A new thread adds unnecessary noise...
When you connect a monitor to the host, do you get a black screen? You must have installed Proxmox on that host somehow.
Sorry for asking, but there has to be some way to access the console, because otherwise there is no way to administer the host.
Yes, you are right. I checked everything again and it turned out that the firewall works in such a way that it only opens port 8006 on the first interface found in /etc/hosts. You need to additionally open the port on the other interfaces.
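For example (just a sketch, assuming the Proxmox firewall is in use and the second interface has the address 192.168.1.100), a rule like this in the host firewall configuration should let the GUI be reached on that address too:

IN ACCEPT -p tcp -dport 8006 -dest 192.168.1.100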
Hi
My host is called pve0. It has two network interfaces.
The Proxmox GUI answers on the first address entered in /etc/hosts, i.e. 10.0.0.1 in this case. Could I set it up so that it answers on the address 192.168.1.100?
root@pve0:~# cat /etc/hosts
127.0.0.1 localhost.localdomain localhost...
This is a WiFi problem; a bridged connection is not really possible over a wireless interface. All virtualizers will have problems with such a configuration. You should connect with a cable.
It's not clear what you can't save. Are you running a Windows application in which you save a file?
Is it only saving that one file that doesn't work, while the rest of the system works fine?
Yes, you can do that. The pool will evacuate the data from the disks being removed. You have to plan it well and practice a bit, because these disks are bootable: you have to prepare the new disks to be bootable, create the partitions and refresh the system boot.
With this method, if the disks are healthy...
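On Proxmox the usual sequence for a bootable ZFS disk looks roughly like this (a sketch only; /dev/OLD, /dev/NEW, the partition numbers and the pool name rpool are placeholders, and whether you use replace or attach/detach depends on your layout):

sgdisk /dev/OLD -R /dev/NEW
sgdisk -G /dev/NEW
zpool replace rpool /dev/OLD-part3 /dev/NEW-part3
proxmox-boot-tool format /dev/NEW-part2
proxmox-boot-tool init /dev/NEW-part2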
Not necessarily. ZFS provides a lot of possibilities for such a situation. You can attach another pair of disks to this pool if you have the technical capabilities and then disconnect the smaller ones.
You can also use the zfs send | zfs receive technique to replace the disks in this pool.
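Roughly like this (a sketch; the pool and snapshot names are just examples):

zfs snapshot -r oldpool@migrate
zfs send -R oldpool@migrate | zfs receive -F newpool

-R sends the whole dataset hierarchy with its snapshots, and -F allows the target pool to be overwritten.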
Look...
If ZFS fills up or has little free space, it can cause high system load. The amount of free space you have now is not that much either.
You can also run this and check what specific operations load the pool.
zpool iostat -q
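To watch the queues over time and check how much space is really left (a small sketch; the pool name is an example):

zfs list -o space rpool
zpool iostat -q rpool 5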
It doesn't necessarily have to be a local device.
Start the host in single-user mode.
You can look at the entries in /etc/fstab
and the logs
journalctl -u sys-fs-fuse-connections.mount
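If that unit is not the one failing, the boot log and an fstab check will point to it (a sketch; adjust to what actually fails on your system):

journalctl -b -p err
findmnt --verify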
I don't have much experience with slog. On my NVMe disks, when testing with fio, the maximum slog usage was 8GB; I don't have this pool in production yet. Large slog sizes will never be used. If some of your disks are spinning disks, it would be better to reduce the slog and add a vdev...
In my opinion, this vdev is too big for a slog. With a 10Gbit network, 16GB should be enough.
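Rough arithmetic behind that (my back-of-the-envelope, not a hard rule): the slog only needs to hold the sync writes that arrive between transaction group commits, which flush every few seconds. 10Gbit/s is about 1.25GB/s, so even 5-10 seconds of sustained sync writes is on the order of 6-12GB, which is why roughly 16GB is already generous.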
Read the ZFS paper that TrueNAS prepared. There are two parts.
https://www.truenas.com/blog/zfs-pool-performance-1/