Yes, that's the idea. However, it's recommended to install PBS bare-metal on a physical server, so you don't need a running hypervisor for recovery.
See here: https://forum.proxmox.com/threads/migrating-pbs-to-new-server-re-adding-datastore.157159/...
If this is the only storage "for everything, but with an unknown use case": go for striped mirrors (aka RAID10) and add a fast "Special Device" consisting of two enterprise-class SSDs - which may be small (below 1 % of the pool size; ~30 TB --> ~300 GB).
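A minimal sketch of adding such a special device to an existing pool - the pool name "tank" and the device paths are placeholders for illustration only:
[CODE]
# Mirror the special device - if it is lost, the whole pool is lost
zpool add tank special mirror /dev/disk/by-id/ssd1 /dev/disk/by-id/ssd2
# Optionally store small blocks on the SSDs as well, not just metadata
zfs set special_small_blocks=4K tank
[/CODE]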
Just my...
I don't know who recommended that you use consumer SSDs with ZFS or Ceph, but performance will be horrible (because of the lack of power-loss protection, the fsync for the ZFS/Ceph journal can't be cached). PM1643 (or any other enterprise SSD with...
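If you want to see the difference yourself, a quick sync-write test with fio is one way to check a drive. The file path below is just an example - never point this at a disk holding data you care about:
[CODE]
fio --name=fsynctest --filename=/mnt/testdisk/fio.tmp --size=1G \
    --rw=write --bs=4k --iodepth=1 --fsync=1 --direct=1 \
    --runtime=30 --time_based
[/CODE]
Drives with power-loss protection typically sustain thousands of fsync'd IOPS here; consumer SSDs often drop to a few hundred or less.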
Why do you want to use ZFS RAIDZ? With your small number of disks, a mirror setup (RAID10-like striped mirrors) is faster.
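For example, four disks as striped mirrors would look roughly like this (pool name and device paths are placeholders):
[CODE]
zpool create tank \
    mirror /dev/disk/by-id/disk1 /dev/disk/by-id/disk2 \
    mirror /dev/disk/by-id/disk3 /dev/disk/by-id/disk4
[/CODE]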
IMHO, if you do not need to migrate your VMs between the nodes, ZFS is fine. IO delay will be lower since the storage is local.
If you value...
With Ceph, you need fast networking for your storage; 10 Gbit should be the absolute minimum, better 25 or 40 Gbit.
Your data will be on all 3 nodes for redundancy: if one node fails you can still work; if 2 nodes fail, your Ceph is no longer...
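That redundancy behaviour comes from the pool's replication settings. A minimal sketch with the usual 3/2 values - the pool name is just an example:
[CODE]
# 3 replicas; I/O continues as long as at least 2 copies are available
pveceph pool create vm-storage --size 3 --min_size 2
[/CODE]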
Wow, just wow. I watched that video (rest in peace, Don), but that approach is just nutty to me.
First, if you are backing up to something (whether it is a VM, LXC, or Docker image in an LXC) that resides on the same Proxmox host you want to back...
Use a VM; Docker inside LXC is not recommended:
In fact, Docker inside LXC tends to break from time to time (especially after major upgrades).
Another option is to set up Samba inside an LXC, e.g. with the TurnKey Linux fileserver template or the...
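If you go the plain-Samba route instead of a template, a minimal share definition in /etc/samba/smb.conf could look like this (share name, path, and user are placeholders):
[CODE]
[backups]
   path = /srv/backups
   read only = no
   valid users = backupuser
[/CODE]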
Even a dedicated RT kernel won't provide real-time guarantees unless the processes that want such guarantees use the FIFO or RR schedulers, which isn't the default - which means code changes are needed to make it work. There's also this (from the...
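As a quick experiment without touching the code, you can also force a real-time policy from the outside with chrt (the priority, binary, and PID below are just examples; this usually needs root):
[CODE]
# Start a process under SCHED_FIFO with priority 80
chrt -f 80 ./my-realtime-app
# Or switch an already running process by PID
chrt -f -p 80 1234
[/CODE]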
Okay. On the second screenshot you set the link "up" via "ip link", and "ip addr show" did confirm success. Nevertheless, the next screenshot lists "Link detected: no".
I am out. I have no idea what is wrong... :-(
The other two screenshots are storage...
Yes.
What you are looking for is "replication". It requires ZFS on both ends. It does the "trick" of only transporting the data that has actually changed.
https://pve.proxmox.com/pve-docs/pve-admin-guide.html#chapter_pvesr
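On the command line this is handled by pvesr; a sketch of a job replicating VM 100 to a second node every 15 minutes (VMID, node name, and schedule are examples):
[CODE]
pvesr create-local-job 100-0 pve-node2 --schedule "*/15"
[/CODE]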
At some point you installed something yourself that requires DKMS. That is gone after the upgrade and must be reinstalled manually if you need it for your software.
Fine, "qm config" is okay. But that screenshot shows the problem: ens18 is DOWN - it should be "UP" of course. Everything inside the VM:
Examine the state more low level:
ethtool ens18 and post it
Reload the configuration from "interfaces"...
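Assuming ifupdown2 and the interface name from the screenshots, the sequence inside the VM could look like this (with classic ifupdown, "ifup ens18" would be the equivalent of the reload):
[CODE]
ip link set ens18 up
ethtool ens18          # check the "Link detected:" line
ifreload -a            # re-apply /etc/network/interfaces
[/CODE]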
Since it is an NVMe SSD, the name of the NIC has presumably changed and must be adjusted accordingly (since both hang off the PCIe bus).
Find the current name via "ip addr" (enX, ethX, ...) and adjust it accordingly in /etc/network/interfaces...
For debugging ssh, use "ssh -v user@host". Up to three "-v" may be useful. See "man ssh".
This is without modifying the server part, which has its own debug mode: "man sshd" shows "-D -d", for example.
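For example (host and port are placeholders):
[CODE]
# Client side: maximum verbosity
ssh -vvv user@host
# Server side: one-off debug instance in the foreground on a spare port
/usr/sbin/sshd -D -d -p 2222
[/CODE]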
Thanks, and good to know. Yes, I'm using static IPs - the other setup was more for the initial process, with the plan to finalise with statics, but it's probably just best to start with a static!
Great! Now use ...-tags around the pasted text for the next posts. See the "</>" symbol to open the "Code" editor.
Everything's looking fine so far.
When I wrote "qm config <your-vmid>", you should have converted it to "qm config 103", as 103...
I do NOT use Backblaze, so take my idea with a grain of salt:
When you write a new backup, the data is transferred to Backblaze. I am pretty sure they have fast (and secure) storage to put that incoming data in before it is transferred to the...
Okay. I've never used noVNC to connect to PBS...
PBS is a virtual machine? I wasn't aware of this :-(
That information is still lacking. There is a (small, near-zero) chance that /etc/network/interfaces is not being used.
Because that...