That is a possibility. It's also possible that people ask because they have a task they wish to perform. Not all tasks are going to have tools provided by a particular mechanism; it's up to the person asking to make the determination of what to do with this information. They can either lift their...
While having a PVE-integrated tool such as pve-sync is nice, there is nothing stopping you from using the other tools at hand to do it with any other snapshot-capable source (namely tar and ssh).
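As an illustration, a minimal sketch of that approach assuming a ZFS source; the dataset, snapshot, and host names are placeholders, not anything from this thread:

# take a snapshot, then stream its contents to the target over ssh
zfs snapshot rpool/data/subvol-100-disk-0@migrate
cd /rpool/data/subvol-100-disk-0/.zfs/snapshot/migrate
tar -cf - . | ssh root@target-node 'tar -xf - -C /rpool/data/subvol-100-disk-0'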
It's possible (maybe probable) that you have stuff hiding in /mnt under the mount.
The way to deal with that is EITHER to unmount the filesystem sitting there and recheck, OR to mount /dev/mapper/pve-root a second time at a temporary location (e.g. /temproot) and look underneath, as sketched below.
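A quick sketch of the second option (the mountpoint name is arbitrary):

# mount pve-root at a second location so files hidden under active
# mountpoints become visible, then compare sizes
mkdir /temproot
mount /dev/mapper/pve-root /temproot
du -shx /temproot/mnt
umount /temproot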
I can't speak to doing it with FRR (never tried),
but it's relatively simple to do with plain Linux networking.
Here is a sample interfaces file (based on @admartinator's interfaces; IP ranges are arbitrary):
# Node 1
# Corosync-n2 connection
auto ens19
iface ens19 inet static
address...
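To flesh that sample out, here is a minimal sketch of what both ends might look like, keeping the interface name from the sample and an arbitrary 10.10.10.0/24 range:

# Node 1
auto ens19
iface ens19 inet static
        address 10.10.10.1/24

# Node 2
auto ens19
iface ens19 inet static
        address 10.10.10.2/24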
You have two options here. Unless you have a NEED for openfabric, just use plain Linux networking and the instructions provided by the docs (https://pve.proxmox.com/wiki/Full_Mesh_Network_for_Ceph_Server). IF you do want to use openfabric, consider posting on their forum/support resources.
But the...
It's worth noting that to get full useful performance from this storage you really want TWO LUNs, since a LUN can only have one active controller at a time. This obviously assumes you have two controllers.
So you're good to go? The rest of the instructions are pretty much the same as for any...
It's a three-step process:
1. Create and map LUNs on your controller. If you need help with this step, it's in your MSA user manual.
2. This part depends on whether you have dual connections to the controller: install multipath-tools and multipath-tools-boot, then configure according to the...
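As a rough sketch of the multipath half of step 2, following the general Debian/PVE approach (the device name and wwid are placeholders you'd substitute from your own system):

apt install multipath-tools multipath-tools-boot
# find the LUN's wwid...
/lib/udev/scsi_id -g -u -d /dev/sdb
# ...whitelist it, then verify both paths appear
multipath -a <wwid>
systemctl restart multipathd
multipath -ll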
I hear that. A lot.
This is somewhat of a challenge with Proxmox, especially if you're not on European time. Unless you have some in-house talent (read: a competent Linux sysadmin), you may want to consider some third-party help.
So it comes back to the beginning. Size and scope will determine the...
Again, apples and oranges. The featureset available to the hypervisor is a function of the underlying storage, the exception being that VMware can do more with iSCSI storage than PVE. When designing your solution, consider the disparate goals that you have in terms of functionality and...
The number/"oddness" of nodes isn't relevant in and of itself. What you want is:
3x monitors
(edit: a minimum of) r+1 OSD nodes, where r = the number of shards in a placement group, which with a replicated CRUSH rule is 3 in most cases (so at least 4 OSD nodes). You can check r on a live cluster as shown below.
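If you want to verify what r actually is, the pool's size setting tells you (the pool name is a placeholder):

# replica count (r) for a given pool; "size: 3" means you want >= 4 OSD nodes
ceph osd pool get <poolname> size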
That's not a valid comparison; Ceph is analogous to vSAN, not...
There is no utility that q35 hardware emulation provides to a Windows XP guest. If your software works on Windows XP, it will work regardless of the virtual hardware presented. That's also the answer to your first question.
That's not accurate. While it's true that under NORMAL circumstances a separate private network pipe is not really needed as long as the public interface has sufficient bandwidth, this changes with a rebalance storm, and that DOES happen. The Ceph documentation suggests that a separate...
Set the CPU max count to the total thread count of a SINGLE socket. If you set it higher you may end up with performance issues. See https://pve.proxmox.com/pve-docs/chapter-pct.html#pct_cpu for more information.
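For reference, a sketch of how you'd set that from the CLI (the VMID and counts are examples, not taken from this thread):

# VM: one socket, cores equal to the thread count of one physical socket
qm set 100 --sockets 1 --cores 16
# container equivalent
pct set 100 --cores 16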
This is where I think you may want to rethink your entire approach. For better or...
Your system is complaining about a mount for /mnt/sdc; the first order of business is to exclude it from /etc/fstab. You can do that from the recovery shell.
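Something along these lines should do it from the recovery shell (the sed pattern just comments out the offending line; adjust it to match your fstab):

# remount root read-write, then comment out the /mnt/sdc entry
mount -o remount,rw /
sed -i '\|/mnt/sdc|s|^|#|' /etc/fstab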