@vesalius Thank you, and @mira. I think I have reached success based on your notes here and reading the documentation on how to implement the changes. Am I correct that this procedure will push these changes to all nodes?
**Edit:** I can actually see the incremented version appear in the logs.
@vesalius Thank you for the response. After I edit the corosync configuration on each node, do I need to restart Corosync? And does this need to be done on each node, one at a time?
Mira, so when I add "knet_link_priority", is this allowing me to choose the priority of the links, e.g.:
ring0_addr: 8.8.8.8
ring1_addr: 10.10.11.6
So I can tell these nodes to sync via the private network vs. the public one?
Or is this for setting the priority of the node itself? I am trying to tell it to do this over...
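For reference, my reading of the docs is that knet_link_priority is a per-link setting, not a per-node one: with the default passive link mode, the link with the highest priority carries the traffic. A sketch of how that might look in the totem section of corosync.conf (the priority values here are just illustrative):

```
totem {
  # ... existing totem options unchanged ...
  interface {
    linknumber: 0
    knet_link_priority: 0    # public link (ring0), lower priority
  }
  interface {
    linknumber: 1
    knet_link_priority: 10   # private link (ring1); highest priority wins in passive mode
  }
}
```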
Here is my corosync config file; I have changed ring0 so my public internet IPs are not so obvious. Can I swap ring0 and ring1, so that instead of the primary being the public network it is the private network? I'm hoping to stop these errors. All nodes are "Virtual Environment 7.3-3".
If so, is this the...
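If swapping is allowed, my understanding is it would just mean exchanging the two addresses in every node stanza (and bumping config_version in the totem block). Something like this, where the node name and ID are placeholders and the addresses are the sanitized ones from above:

```
node {
  name: pve1               # placeholder node name
  nodeid: 1
  ring0_addr: 10.10.11.6   # private network becomes the primary ring
  ring1_addr: 8.8.8.8      # sanitized public address moves to ring1
}
```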
Exactly what I was thinking; I really plan on doing this on all my NFS servers. However, I was wondering if there were any downsides or dramatic effects. I was hoping this might not inadvertently take me down or something.
From NFS to NFS (SATA to SATA, two different NFS servers). Size is 450 GB qcow2, cache=writeback, network is 1 GbE.
create full clone of drive scsi0 (NFS-NAS-l-DATA2:175/vm-175-disk-0.qcow2)
Formatting '/mnt/pve/NFS-NAS-M-SATA1/images/175/vm-175-disk-0.qcow2', fmt=qcow2 cluster_size=65536...
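As I understand it, whether that freshly formatted target file ends up sparse depends on the preallocation setting. The difference between a sparse file and a fully written one is easy to see with plain coreutils (the file names here are just for illustration):

```shell
# A "sparse" file (like preallocation=off): apparent size 100 MiB, near-zero disk usage
truncate -s 100M sparse.img

# A fully written file (like preallocation=full): actually occupies 100 MiB on disk
dd if=/dev/zero of=full.img bs=1M count=100 status=none

ls -l sparse.img full.img   # both report the same apparent size
du -k sparse.img full.img   # but the blocks actually used differ dramatically
```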
Sorry, not to take over the thread, but what if we disable preallocation after the fact, now that the NFS storage has been built and is currently in use? Will that affect current VMs in any way? Could this be disastrous? What is preallocation for?
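For what it's worth, on file-based storages (dir/NFS/CIFS) Proxmox exposes this as a per-storage preallocation option in /etc/pve/storage.cfg, and my understanding is that it only affects images created after the change, not existing disks. A sketch, reusing the storage name from the log above (the server and export values are placeholders):

```
# /etc/pve/storage.cfg (excerpt; server/export values are placeholders)
nfs: NFS-NAS-M-SATA1
        path /mnt/pve/NFS-NAS-M-SATA1
        server 192.0.2.10
        export /export/sata1
        content images
        preallocation off
```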
Let's say I want to migrate a VM disk because the underlying storage is getting low. All of my servers are RAID 10 multi-disk SSD or SATA arrays. However, it seems that if a VM grows above 500 GB I can't simply "Move" the storage to another NFS or NAS storage device; it always times out.
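Part of this may just be raw transfer time: assuming roughly 112 MB/s of usable throughput on 1 GbE, a full copy of a 500 GB disk takes over an hour, so any fixed timeout can easily trip. A quick back-of-the-envelope check:

```shell
size_gb=500        # size of the disk being moved
rate_mb_s=112      # assumed usable 1 GbE throughput
minutes=$(( size_gb * 1024 / rate_mb_s / 60 ))
echo "~${minutes} minutes for a full copy"   # prints ~76 minutes
```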
Hello, we have two Dell R720XD (12-bay) servers: 2x drives for the OS and 10x drives for the datastore, in hardware RAID 1 and hardware RAID 10 respectively. In syslog on both servers I see this. These numbers seem pretty static, like they don't change often; however, I do have replacement drives for both of these...
Just to update what happened: we ended up moving the drive to another bay to test whether the issue was the backplane or the HBA controller. Proxmox handled the export and re-import fine. We are going to monitor over the next few days.
We understood this. The plan here was to order packs of 10x 16TB drives starting this week as new additional VDEVs. We have other flash storage for more performance, and about 600 bare-metal servers, sir. Thank you though; I am old school, ZFS is new to us coming from hardware RAID, and this is our...
Thank you, this is a top-load box; the company we bought it from specializes in these rigs. It's brand new, about 30 days old. The drives come in top-loaded. https://www.45drives.com/products/storinator-xl60-configurations.php We have a little bit beefier than the Turbo, with 256GB of RAM. We only...
This is a server from 45drives.com, the XL60. Darn! I have ordered replacement drives; I have not really put much on it yet. We will see what they say tomorrow.
Hello, we recently purchased a new 45Drives XL60 server with 10x Seagate Exos Enterprise 16TB disks. We configured it as a RAIDZ2. Recently I have noticed performance being kind of dismal, and checked it in the Proxmox GUI. For a system that is about 30 days old, wow! Is this something I can just reset...
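For context on expectations: as I understand it, a single raidz2 vdev gives roughly the capacity of N-2 disks but only about the random IOPS of a single disk, which can look dismal next to a hardware RAID 10. Quick arithmetic for this layout (10x 16TB, raidz2), ignoring ZFS overhead:

```shell
disks=10; size_tb=16; parity=2
echo "$(( (disks - parity) * size_tb )) TB usable before ZFS overhead"
# prints: 128 TB usable before ZFS overhead
```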