Hi,
You can override that URL in the remote settings' "Web UI URL" field. For example, if you want to use the same URL, but without the default 8006 port, you would enter the plain HTTPS URL (as the browser defaults to port 443 anyway), e.g...
That's the curse of such physically small devices. Keep in mind that "not ideal" may mean different things; one non-obvious aspect on one of my systems is this:
rpool ONLINE 0 0...
Hi @DJohneys , welcome to the forum.
This is not PVE specific but rather standard Linux administration. There are many ways to do what you want:
echo 'export http_proxy="http://proxy.example.com:8080"' >> ~/.bashrc
echo 'export...
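If you want it system-wide rather than per-shell, a minimal sketch (reusing the proxy.example.com:8080 address from above as a placeholder) could look like this; note that apt needs its own setting, since it does not read shell profiles:

cat >> /etc/environment <<'EOF'
http_proxy=http://proxy.example.com:8080
https_proxy=http://proxy.example.com:8080
EOF
# apt takes its proxy from its own config, not from the shell environment
echo 'Acquire::http::Proxy "http://proxy.example.com:8080";' > /etc/apt/apt.conf.d/95proxy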
I operate a small 5-node HA test cluster on PVE 9 with Ceph. During fault-injection tests (power loss, forced resets, cable disconnects) I observed that when Ceph OSDs were located on SSDs without power-loss protection (PLP), virtual machines...
You basically saved your data once encrypted and once unencrypted, so in the end everything was stored twice. As soon as every unencrypted backup has been pruned or manually removed, you should notice that the storage space occupied...
You have basically three options together with the integrated high-availability solution (https://pve.proxmox.com/wiki/High_Availability ):
Using Ceph
Using some shared storage (like a NAS with NFS, or a storage array, attached via iSCSI or...
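As a rough illustration of the shared-storage option, adding an NFS export as a storage usable by all nodes might look like this (storage ID, server address and export path are placeholders):

pvesm add nfs shared-nfs --server 192.168.1.50 --export /export/pve --content images,rootdir
pvesm status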
If data consistency is important to you, then ZFS is the right choice, and you can only be truly safe with ECC RAM. Because if errors occur in RAM, ZFS cannot help either.
It's not only the IOPS bottleneck, but also:
- Parity must be recalculated and rewritten for every small change
- Sync-write-heavy applications (e.g., databases) suffer massively under RAIDZ
RAIDZ should only be used for "cold" storage. For...
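If you want to see this effect on your own pool, a simple sync-write test with fio could look like the following sketch (the test directory is a placeholder; adjust size and runtime as needed):

fio --name=syncwrite --directory=/tank/fio-test --rw=randwrite --bs=4k --size=1G --fsync=1 --runtime=60 --time_based

Running the same test against a RAIDZ pool and a mirror pool should make the difference clearly visible.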
I would try again with "Enterprise Class SSDs" with PLP / "Power-Loss-Protection".
...and with mirrors, not a RAIDZ2 - as that gives you only the IOPS of a single device.
Of course SATA is massively slower than PCIe --> if possible, switch...
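For reference, a striped-mirror layout (where IOPS scale with the number of mirror vdevs) is created roughly like this; the device names are placeholders, and /dev/disk/by-id paths are generally preferable:

zpool create tank mirror /dev/sda /dev/sdb mirror /dev/sdc /dev/sdd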
Unfortunately I am not aware of built-in dropbear support in PVE. How did you install that? Maybe it helps to re-read that documentation...?
(I mean it - you are outside of the usual PVE context and we just cannot know what you did!)
In that case, the only difference is in load calculation. Since Zen4 and Rocket Lake have similar IPC performance, it's a simple matter of (roughly base clock × core count):
Relative Performance, Xeon E-2388G (3.2GHz base, 8 core) = 25.6
Relative Performance, AMD EPYC 4244P...
At the top right, in your user menu, check "My Settings - Webinterface Settings - Dashboard Storages". Possibly the same storage is summed up more than once.
Just guessing..., I am not using Ceph currently.
Just thought I would update now that Proxmox 9 is out: the `ceph-exporter` package is available in the Ceph Squid repos, even the non-subscription one, but as noted earlier in this thread it has some issues with starting up, namely that the...
@news - Thanks for the tip about Clonezilla. Since I'm unfortunately not much of a "command-line cowboy", I looked for the GUI-based alternative Rescuezilla and have now, without much effort, got my system SSD with the password-fixed...
Your root file system doesn't really matter for the purposes of this discussion; only the VM storage does. Assuming you intend to use the same filesystem for your OS and payload, you can't use ZFS replication - but that doesn't mean you can't...
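For reference, once the guest disks sit on ZFS-backed storage, setting up a replication job is roughly this simple (VMID 100 and target node pve2 are placeholders):

pvesr create-local-job 100-0 pve2 --schedule "*/15"
pvesr status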
No, since it needs a filesystem-specific feature which ext4 and XFS simply don't have. In theory it would be possible with btrfs, but at the moment Proxmox VE storage replication only works with ZFS.
I would backup all vms/lxcs on your node (...
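A full-node backup before reinstalling could be as simple as the following sketch (the storage name is a placeholder for your backup target, e.g. a PBS or NFS storage):

vzdump --all --storage backup-store --mode snapshot --compress zstd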
In hindsight, you should have skipped the HBA and SAS and gone with NVMe.
When choosing between ZFS and Ceph, keep in mind that ZFS is a local filesystem, while Ceph is a form of distributed/shared storage. Each comes with its own set of pros...
Enterprise Grade SAS drives -> SSDs?
If you decide to go the Ceph route you should probably upgrade your network to 25G or faster if you have fast SSDs, otherwise the network could be a performance bottleneck.
Here are some official Ceph...
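Before blaming the disks, it may be worth measuring the raw throughput between two nodes with iperf3, e.g. (the IP is a placeholder for the first node's Ceph-network address):

iperf3 -s                     # on the first node
iperf3 -c 10.10.10.11 -P 4    # on the second node, 4 parallel streams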
So you’ll have 5 servers? You can use Ceph with that. It would be ideal to use one NIC for Ceph public and one for Ceph private. Corosync should use both.
Will you be using a server for PBS?
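For the public/private separation mentioned above, a sketch of how the Ceph networks might be defined (the subnets are placeholders; pveceph init writes them into /etc/pve/ceph.conf):

pveceph init --network 10.10.10.0/24 --cluster-network 10.10.20.0/24
# resulting entries in /etc/pve/ceph.conf:
#   public_network  = 10.10.10.0/24
#   cluster_network = 10.10.20.0/24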