Hi.
You cannot access ZFS features from within the LXC CT. You can consider ZFS a host-only feature. The PVE host does all the ZFS operations, and you don't have to care about it. In CasaOS you can simply access the FS as a regular directory, without having to operate with ZFS specifically.
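For example, a dataset (or any directory) on the host can be passed into the container as a bind mount; a sketch, assuming CT ID 101 and the host path /tank/data (both placeholders):
pct set 101 -mp0 /tank/data,mp=/mnt/data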
You may...
It's possible to share data from your Ceph cluster with NFS https://docs.ceph.com/en/latest/cephfs/nfs/
If you share via NFS, you can mount the NFS share inside the VM and interact with it like regular network storage instead of trying to expose CephFS directly.
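For example, once the NFS export is set up, mounting it inside the VM could look like this (server address and export path are placeholders; nfs-common needs to be installed in the VM):
mkdir -p /mnt/cephfs
mount -t nfs 192.168.1.10:/cephfs /mnt/cephfs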
This is a common issue when using a VPN inside the VM: all outgoing traffic is sent via the VPN tunnel. You will have to configure a dedicated route or firewall rule to tell the kernel that packets from your local network should be sent back there.
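As a rough sketch, assuming the network you connect from is 192.168.0.0/24, the VM's LAN gateway is 192.168.1.1 and its NIC is eth0 (all placeholders), a dedicated route could look like this:
# send replies to the local network via the LAN gateway instead of the tunnel
ip route add 192.168.0.0/24 via 192.168.1.1 dev eth0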
As you already noticed, you should avoid min_size = 1. Quoting from [1]:
Do not set a min_size of 1. A replicated pool with min_size of 1 allows I/O on an object when it has only 1 replica, which could lead to data loss, incomplete PGs or unfound objects.
This would also happen if one node...
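As a side note, min_size of an existing pool can be raised with the following (pool name is a placeholder):
ceph osd pool set <pool> min_size 2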
The plugin is currently in development and already quite far along. However, no concrete date for a release can be given. Without making any promises, my guess would be the next minor release (8.2).
You might find relevant logs regarding network configuration in the system journal with journalctl.
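For example, to look at messages from the current boot related to the network configuration service (the exact unit name may differ on your system):
journalctl -b -u networking.service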
RSTP is an OVS-specific feature [1] and does not work with a regular Linux Bridge.
[1] https://pve.proxmox.com/wiki/Open_vSwitch
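For reference, RSTP can be enabled on an OVS bridge with ovs-vsctl, roughly like this (vmbr0 is a placeholder for your bridge name):
ovs-vsctl set Bridge vmbr0 rstp_enable=true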
If you are running the APT update via the GUI, please post the entire Task log for the failed update.
If you run a manual apt update and apt dist-upgrade on the shell, what is the output there?
Best regards,
Stefan
Hi
You can replace individual disks in your Tank pool one by one by using zpool replace <pool> <old> <new> [1]
After you have finished replacing the disks, you will still see the old size with zpool get size.
You can expand the pool to use the full size of the disks with:
zpool set autoexpand=on...
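Putting it together, the whole procedure could look roughly like this (pool name Tank and the device paths are placeholders):
# replace one disk at a time and wait for the resilver to finish
zpool replace Tank /dev/disk/by-id/old-disk /dev/disk/by-id/new-disk
# let the pool grow to the new disk size
zpool set autoexpand=on Tank
zpool online -e Tank /dev/disk/by-id/new-disk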
I see that you're attaching your disks as IDE. Try attaching them via SCSI. To do this, go in the GUI to Hardware, select the disk, press Detach, and then attach it again by pressing Edit on the unassigned disk.
You should also enable Discard as well as IO thread.
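If you prefer the shell, a rough sketch of the same steps (VMID 100 and the volume name are placeholders; adapt them to your setup):
# detach the IDE disk; it will show up as an unused disk afterwards
qm set 100 --delete ide0
# re-attach it as SCSI with Discard and IO thread enabled
qm set 100 --scsi0 local-lvm:vm-100-disk-0,discard=on,iothread=1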
You mentioned that the issue...
Please share more details about your network setup. What is connected to what, and how is your gateway (what IP) configured?
And post the content of /etc/network/interfaces
Please post the output of
ip addr
ip route
ip -6 route
There is no clear indication in the journals why this is happening.
Please post your Storage and VM config. Please replace <<vmid>> with a VMID of an affected VM.
head -n -0 /etc/pve/storage.cfg /etc/pve/qemu-server/<<vmid>>.conf
Can you please provide a journal log from the host for the time the VMs crashed?
journalctl --since "2023-11-23" --until "2023-11-24" >| $(hostname)-journal.txt
Please run the following command on the host and on a VM and upload the resulting files.
journalctl --since "2023-11-23" --until "2023-11-24" >| $(hostname)-journal.txt
To capture what happens while the issue persists for 1.5 hours, please adapt the dates accordingly.
It looks like the VM was created but does not show up in the UI?
What is the output of qm status 100?
If the VM is running, it would explain why you cannot delete the disk.
Please provide the status of your ZFS pool with zpool status.
As the UI is not listing VMs, let's verify if the API lists the...
As you're trying to use cloud-init to configure the VM network configuration, please verify that the configuration is actually applied to the system by checking the output of ip addr.
If not, try to configure the network directly in the VM.
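As a starting point, a minimal static configuration in /etc/network/interfaces of a Debian-based VM could look like this (interface name and addresses are placeholders):
auto eth0
iface eth0 inet static
    address 192.168.1.50/24
    gateway 192.168.1.1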
I am not sure exactly what you're trying to achieve. Please elaborate on your setup.
What I understood: you have a PVE server and a separate pfSense machine? Or is pfSense running as a VM?
The IP 135.x.x.x is assigned to PVE and the other two addresses from Hetzner are assigned to pfSense?
I assume you want the...
If you start a nc listener on port 3000, do you have the same situation? That sounds to me like the forwarding from nginx is not working correctly.
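For example, a quick way to test (the listener syntax depends on your netcat variant; the LXC IP is a placeholder):
# on the LXC, listen on port 3000
nc -l 3000       # or: nc -l -p 3000
# from another host, try to connect to it
nc <LXC-IP> 3000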
Do you see any issues in the journal on the host and inside the LXC?
Are you trying to run Docker inside an (unprivileged) LXC?
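If so, the container usually needs the nesting (and, for unprivileged containers, often keyctl) feature enabled; a sketch, assuming CT ID 101 (placeholder):
pct set 101 --features nesting=1,keyctl=1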
You can upload the journal...