I am currently running these steps, fingers crossed...
No, I have never done this before. And I don't know if this is lucky or unlucky, but in all my time working with Proxmox I have never run into a situation that requires "replacing" a node. Power down happened...
I followed this blog and fixed it; it is working now: https://blog.slogra.com/post-804.html
First:
cd /etc/pve/lxc/
nano XXX.conf
Copy in the lines below, then reboot; it is working now:
lxc.apparmor.profile: unconfined
lxc.cgroup.devices.allow: a
lxc.cap.drop...
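For a concrete run-through with a hypothetical container ID of 101 (substitute your own CT ID), the whole procedure is just:

nano /etc/pve/lxc/101.conf    # append the lines above
pct stop 101
pct start 101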
I was thinking the same thing - being able to rearrange panels, adding support for additional panels like datastore status, focusing on one host, etc.
Also having the dashboard refresh periodically would be beneficial.
I personally use an LXC container for Transmission, with every other service like Plex in containers as well.
All the data is stored on a ZFS pool called tank, and the way I share data is through mountpoints.
Here are two examples of my config files...
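As an illustration of the mountpoint approach (dataset names and container paths below are made up), the relevant part of a container's /etc/pve/lxc/<vmid>.conf would look something like:

mp0: /tank/media,mp=/mnt/media
mp1: /tank/downloads,mp=/var/lib/transmission/downloads

Each mpX line bind-mounts a host path from the tank pool into the container, so Plex and Transmission see the same data without any network share in between.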
> These are Western Digital Red SATA III, so as you suspected, they are lightweight shit. ashift is set to 12 for all pools. Dedup is also switched on, but it is actually no longer necessary
The article you listed is an excellent resource.
Yep...
WD Red HDDs are slow hard disks with only 5400 rpm. I used them (and still use them on a few servers), but their usage is very limited.
They can be OK for backup storage (if it doesn't matter how long it may take) or a fileserver, but for a virtual host not too bad...
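If dedup really is no longer needed, it can be switched off per pool or dataset; note this only affects newly written data, existing deduplicated blocks stay as they are (the pool name below is just an example):

zfs set dedup=off tank
zfs get dedup tank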
Are the Proxmox VE nodes on the latest packages? There is currently also work underway on the Proxmox VE side to make the export/import more consistent across the storage plugins. Therefore, if it doesn't work just yet, it should do so in the near...
> Depending on the data activity on the ZFS datasets, there are always quite high IO delay values.
You don't give ANY details on the disks themselves. What is the make/model/capacity? For all we know, you could be using SMR 5400rpm drives. Or...
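To collect exactly those details on the PVE host, something along these lines is enough (the device name is a placeholder):

lsblk -o NAME,MODEL,SIZE,ROTA    # ROTA=1 means a rotational (spinning) disk
smartctl -i /dev/sda             # exact model and capacity; the model number hints at SMR vs CMR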
(Only) if I understand this sentence correctly, you would create a RAID5 array. Then you add ZFS on top of that - effectively getting a pool with a single vdev.
Do NOT do that!
One of the remarkable features of ZFS is "self-healing", and also bit-rot...
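Instead of layering ZFS on a hardware RAID5, hand ZFS the raw disks and let it provide the redundancy itself, e.g. as a single raidz1 vdev (device names below are placeholders; in practice use /dev/disk/by-id paths):

zpool create -o ashift=12 tank raidz1 /dev/sdb /dev/sdc /dev/sdd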
This issue was caused by my deleting a couple of subnets from our network configuration system [ONA OpenNetAdmin IP Address Management (IPAM) system]. After that, the generated kea-dhcp4 configuration had some changed subnet IDs... so...
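For context, every subnet4 entry in the generated kea-dhcp4 configuration carries such an id, and it is those ids that shifted after the subnets were removed. Roughly like this (addresses made up):

"Dhcp4": {
  "subnet4": [
    { "id": 12, "subnet": "10.0.12.0/24",
      "pools": [ { "pool": "10.0.12.100 - 10.0.12.200" } ] }
  ]
}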
If I try to migrate a VM (online or offline) from one cluster to another, it will not work.
Both clusters use Ceph.
But if I put the OS disk on local-zfs and then migrate it to the other cluster, it works.
Error from Ceph to Ceph:
2024-12-23...
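For reference, the cross-cluster path goes through qm remote-migrate (still marked experimental); a rough sketch with placeholder token, host and storage names:

qm remote-migrate 100 100 'apitoken=PVEAPIToken=root@pam!migrate=<secret>,host=cluster2.example.com' --target-bridge vmbr0 --target-storage <target-storage> --online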
I deleted all the storage points, and mounted the shares manually in /etc/fstab. In Proxmox I mapped the directory and I then had to delete the original disk from the VM's hardware section, and map the new directory as a "Hard disk". Since I...
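As a sketch of that setup - assuming an NFS export and made-up hostnames/paths - the manual mount plus directory storage looks roughly like:

# /etc/fstab on the PVE host
nas.example.com:/export/vmdata  /mnt/vmdata  nfs  defaults,_netdev  0  0

# then register the mounted path as a directory storage
pvesm add dir vmdata-dir --path /mnt/vmdata --content images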