Instead of a VLAN, I should use an independent network for corosync. If I understand correctly, corosync communicates over the ring0_addr addresses.
I will consider your input for the future. I am not sure yet how I will solve it, but at least all three nodes are online again now, and I know I have to take...
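For reference, a dedicated corosync network is set via each node's ring0_addr in /etc/pve/corosync.conf. The node names and the 10.10.10.0/24 addresses below are placeholders, not values from this cluster:

```
nodelist {
  node {
    name: node1
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 10.10.10.1   # address on the dedicated corosync network
  }
  node {
    name: node2
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 10.10.10.2
  }
}
```

Changes to this file should be made via the documented edit procedure (bump config_version, copy the file into place), since all nodes read it.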
Good morning Stoiko
I have done everything you told me. For the results, see below. During these steps I realized that the journal for pve-cluster on node3 only had entries from 23.12.2018. So I checked the service, which was running, but I then restarted it anyway:
root@drax:/etc# service...
Hi Stoiko
I have no storage replication. I only have some NFS storages for backing up the VMs. All the VMs are running on local storage on the nodes.
storage.cfg:
dir: local
path /var/lib/vz
content iso,backup,vztmpl
lvmthin: local-lvm
thinpool data
vgname pve...
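For comparison, an NFS backup storage in storage.cfg looks like this. The storage name, server address and export path here are made-up placeholders, not taken from my config:

```
nfs: backup-nfs
        server 192.168.0.50
        export /export/backups
        path /mnt/pve/backup-nfs
        content backup
```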
Hi all
I have a three-node cluster. In the web GUI of node1 and node2, node3 is shown as offline, while the web GUI of node3 shows node1 and node2 as offline.
daemon.log of node3:
Jan 8 11:49:00 drax systemd[1]: Starting Proxmox VE replication runner...
Jan 8 11:49:02 drax pvesr[12250]: trying...
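To narrow down whether this is a corosync problem or a pve-cluster problem, commands like these can be run on each node (only a sketch; they need a live Proxmox host, and the ping target is a placeholder):

```shell
# Quorum and membership as corosync sees it
pvecm status

# Are the cluster services actually running?
systemctl status corosync pve-cluster

# Recent log entries for the cluster filesystem and corosync
journalctl -u pve-cluster -u corosync --since "today"

# Can the nodes reach each other on the cluster network?
ping -c 3 <ring0-address-of-other-node>
```

If corosync reports quorum but the GUI still shows nodes offline, restarting pve-cluster (as done above on node3) is the usual next step.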
Hi all
At the moment I have a two-node Proxmox 5.2-10 cluster. In a few weeks I get a new server, and then this will become a three-node cluster.
Now I have a problem with the status query of NFS shares. Some work and others don't. In the Proxmox GUI I get a question mark for two NFS shares...
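To see why a share shows a question mark, checking the storage status and the NFS server directly usually helps (a sketch; run on the affected node, with `<nfs-server>` as a placeholder for the server's address):

```shell
# Status of all configured storages, as the GUI sees them
pvesm status

# Is the NFS server answering RPC requests, and what does it export?
rpcinfo -p <nfs-server>
showmount -e <nfs-server>
```

A share that hangs in `pvesm status` but answers `showmount` often points at a mount-option or NFS-version mismatch rather than a network problem.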
The switches support LACP and will be configured accordingly.
We have no VLANs, so I think I will stay with bonding type 4 (LACP).
As hash policy I use layer2+3, as recommended in this book: https://www.packtpub.com/big-data-and-business-intelligence/mastering-proxmox-third-edition
Thanks for your tips. I...
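To illustrate what layer2+3 means in practice: the kernel picks the outgoing slave from an XOR of the MAC addresses and IP addresses, so all traffic between one pair of hosts stays on one link. This is a simplified Python sketch of the formula from the kernel bonding documentation, not the actual kernel code:

```python
import ipaddress

def layer23_hash(src_mac: bytes, dst_mac: bytes,
                 src_ip: int, dst_ip: int, n_slaves: int) -> int:
    """Simplified sketch of the bonding driver's layer2+3 xmit hash policy.

    Combines the last bytes of the MACs (layer2 part) with an XOR of the
    IP addresses (layer3 part), folds the upper bits down, and takes the
    result modulo the number of slaves.
    """
    h = src_mac[-1] ^ dst_mac[-1]   # layer2 component
    h ^= src_ip ^ dst_ip            # layer3 component
    h ^= h >> 16                    # fold upper bits into the low byte
    h ^= h >> 8
    return h % n_slaves

# Example: two hosts on a 2-slave bond (placeholder MACs/IPs)
a = int(ipaddress.ip_address("192.168.1.10"))
b = int(ipaddress.ip_address("192.168.1.20"))
mac1 = bytes.fromhex("aabbccddee01")
mac2 = bytes.fromhex("aabbccddee02")
slave = layer23_hash(mac1, mac2, a, b, 2)
```

The consequence for a two-server setup like ours: a single VM-to-VM flow never exceeds one NIC's bandwidth; the bond only balances across multiple host pairs.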
Hi all
I have a question about creating a cluster.
We use two servers, each with four 1 Gbit NICs. We work with local storage. The most important VMs are Active Directory and the file server. These VMs are redundant: AD1 is on hypervisor1 and AD2 is on hypervisor2.
Now I just wanted to make a...
Hi all,
I converted my hard disk to LVM as described here https://pve.proxmox.com/wiki/Storage_Model#LVM_Groups_with_Local_Backing and my VMs feel faster now. At the moment I don't have time to run performance tests between directory storage and LVM, but it feels faster, and I will do it when I have...
Hi all
First of all, thanks for all your inputs.
@udo
I was mistaken. I do not use RAID 1, so nothing is configured there.
I ran pveperf on sdb9; it is the second code section in my second post.
What kind of SATA disk do you use? In my case it is just for a test lab. And is this local...
EDIT: I found out which HDD is built in: a Seagate ST2000VN000. When I bought it, I clearly didn't look at performance, and until now I never had a bad experience with it. What kind of HDD should be used in virtualization environments? My mainboard is an ASRock H97 Pro4, which supports SATA 3.
Hello Udo
I just checked my test lab computer and saw that there is no RAID configured for the HDDs. I currently use only one of the two HDDs.
pveperf
CPU BOGOMIPS: 63850.88
REGEX/SECOND: 3134022
HD SIZE: 27.31 GB (/dev/mapper/pve-root)
BUFFERED READS: 405.56...
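A note on usage: without arguments pveperf benchmarks the root filesystem, but it also accepts a path, so the storage the VMs actually live on can be measured directly (sketch; the path is an example):

```shell
# Benchmark the root filesystem (default)
pveperf

# Benchmark the VM storage instead
pveperf /var/lib/vz
```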
Hi all,
At the moment I am testing Active Directory in a test lab, which is one physical server running Proxmox.
Hardware:
Intel Core i7-4790K, 4 GHz, 4 cores / 8 threads
32 GB of RAM
Proxmox is installed on a SSD
There are two Seagate 2000 GB HDDs in RAID 1; the VMs are stored there.
Proxmox...