Should I upgrade first and then get the license, or get the license first and then upgrade?
How is upgrading to version 5 going to affect the real-time operation of the VMs running on the Ceph cluster?
Do I shut everything off and upgrade all three nodes?
Or can I upgrade one node, migrate the VMs to the new...
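Roughly what I am picturing per node, if a rolling upgrade is even supported (node and VM numbers are just placeholders, the repo file name may differ, and I have not confirmed the exact 4.x-to-5.x steps):
# move the running guests off the node being upgraded
qm migrate 100 pteranode2 --online
# point apt at stretch instead of jessie, pull in PVE 5, then reboot the node
sed -i 's/jessie/stretch/g' /etc/apt/sources.list /etc/apt/sources.list.d/pve-enterprise.list
apt-get update && apt-get dist-upgrade
reboot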
While running a VM on a cluster of three systems using Ceph on Virtual Environment 4.4-1/eb2d6f1e, the VM was upgraded to Linux kernel 2.6.32-754.3.5.el6.x86_64 and then would not boot.
Kernel panic - not syncing: VFS: Unable to mount fs on unknown-block(0,0)
In order to get the VM to boot I had to use...
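For anyone hitting the same panic: what seems to work in general (not necessarily what I ended up using) is booting the previous kernel from the GRUB menu and then rebuilding the initramfs for the new kernel, assuming the initramfs is what is broken:
# inside the CentOS 6 guest, after booting the older kernel from the GRUB menu
dracut -f /boot/initramfs-2.6.32-754.3.5.el6.x86_64.img 2.6.32-754.3.5.el6.x86_64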
If your network is disrupted, only the quorate network partition can write to the cluster filesystem.
This node cannot reach the other two via corosync.
Unless you fix the network, you cannot write to /etc/pve.
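If I just need to write to /etc/pve on the cut-off node before the network is sorted out, is the temporary workaround something like this? I read that lowering the expected votes on a single node forces quorum locally and should be undone once the cluster is healthy again:
pvecm expected 1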
OK, the issue was that two of them were plugged into a Cisco gigabit switch and the third one was...
Aug 23 11:31:45 pteranode3 systemd[1]: Starting The Proxmox VE cluster filesystem...
Aug 23 11:31:45 pteranode3 pmxcfs[43643]: [status] notice: update cluster info (cluster name pteracluster, version = 3)
Aug 23 11:31:45 pteranode3 pmxcfs[43643]: [dcdb] notice: members: 2/43643
Aug 23 11:31:45...
root@pteracluster:~# pvecm status
Quorum information
------------------
Date: Thu Aug 23 11:11:01 2018
Quorum provider: corosync_votequorum
Nodes: 2
Node ID: 0x00000001
Ring ID: 1/308
Quorate: Yes
Votequorum information
----------------------...
root@pteranode3:/etc/pve/local# pvecm status
Quorum information
------------------
Date: Thu Aug 23 11:04:53 2018
Quorum provider: corosync_votequorum
Nodes: 1
Node ID: 0x00000002
Ring ID: 2/1704
Quorate: No
Votequorum information...
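Next things I was going to look at on pteranode3, unless there is something better (not sure these are the right tools for corosync on 4.4):
# ring status as corosync sees it on this node
corosync-cfgtool -s
# corosync log messages since the last boot
journalctl -u corosync -b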
Yes, I thought about that - everything is pingable.
The 10.10.10.0 subnet is on 10-gig modules plugged into a Dell 10-gig switch.
The 69.28.32.0 subnet is on a Cisco gigabit switch.
root@pteranode3:/etc/pve/local# ping 10.10.10.1
PING 10.10.10.1 (10.10.10.1) 56(84) bytes of data.
64 bytes from 10.10.10.1...
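Since plain ping works, I gather the next thing to test is multicast, which corosync depends on here. Something like this, run on all three nodes at the same time (assuming omping is installed; the hostnames are my guesses and would need to resolve to the 10.10.10.0 addresses):
omping -c 10000 -i 0.001 -F -q pteranode1 pteranode2 pteranode3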
So I thought I would change the permissions so I could manually put the commands in, but the system will not let me chmod either:
root@pteranode3:/etc/pve/local# chmod 600 host.fw
chmod: changing permissions of ‘host.fw’: Function not implemented
This system is being used for NTP DDOS attacks...
unable to open file '/etc/pve/local/host.fw.tmp.2630' - Permission denied (500)
The firewall has started on the other two nodes of the cluster.
But when I try to add a rule I get the error message above.
I also tried rebooting the node.
The datacenter says the node is offline and yet I can log into...
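In case it helps anyone else debugging the same thing, this is what I was checking on the node that shows offline (assuming these are the right status commands for 4.4):
pve-firewall status
systemctl status pve-firewall pvestatd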
Virtual Environment 4.4-1/eb2d6f1e
Three node cluster.
Ceph Health_OK
Datacenter reports node 3 Offline under HA status.
lrm pteranode3 (old timestamp - dead?, Fri Mar 23 03:11:35 2018)
On pteranode3
pve-ha-lrm status
running
pve-ha-crm status
running
date
Thu Jul 5 10:25:52 PDT 2018
I do not...
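What I was going to try next, unless that is a bad idea while HA is configured, is simply restarting the HA daemons and the status daemon on pteranode3:
systemctl restart pve-ha-lrm pve-ha-crm pvestatd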
That looks like the same thing as http://metadata.ftp-master.debian.org/changelogs/main/n/ntp/ntp_4.2.6.p5+dfsg-7+deb8u2_changelog
So I am sorry I have no clue what you are telling me to do :-(
The version running on my 3-node Proxmox 4.4 cluster is 4.2.6p5, and it was recently used for an NTP amplification attack. I tried apt-get update and then apt-get upgrade ntp, but it still comes back as 4.2.6p5. The vulnerability was fixed in 4.2.7, so how do I get my ntp up to date so it cannot be...
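From the changelog link above I gather that Debian backports the fix without changing the 4.2.6p5 version string, so the package version alone does not tell me much. In the meantime, is hardening /etc/ntp.conf along these lines enough to stop the amplification traffic (monlist is what I understand the attack uses)?
# /etc/ntp.conf - refuse monlist and other mode 6/7 queries from the outside
disable monitor
restrict default kod nomodify notrap nopeer noquery
restrict -6 default kod nomodify notrap nopeer noquery
restrict 127.0.0.1
restrict -6 ::1
And then restart the daemon with systemctl restart ntp.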
I have a three-node cluster with vm100 stored on disk image ID rd_drive, type RBD, on Proxmox 4.4.
I looked at all three nodes trying to find the file directories in an attempt to renumber the VM, but could not find any.
All three nodes show no folders under /var/lib/vz/images and...
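If I understand RBD correctly there are no image files on the nodes at all, so is renumbering just a matter of renaming the RBD image and the VM config, something like this with the VM shut down? The pool and disk names are guesses; whatever rbd ls actually shows would go here:
# list the images in the pool behind the rd_drive storage
rbd -p rd_drive ls
# rename the disk image and the VM config, then fix the disk name inside the config
rbd rename rd_drive/vm-100-disk-1 rd_drive/vm-101-disk-1
mv /etc/pve/qemu-server/100.conf /etc/pve/qemu-server/101.conf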
OK, well, the funny thing is I already configured three R720 systems with 300 GB drives, but the boss wanted outside input before we spend the money on bigger drives. So if I follow you, I should take one of the three systems down to the other location and check whether they still sync over the 10-gig fiber link?
So, asking those who have done this before: we want to have redundant servers running our CRM, and would like to have them in two physically different locations so that if one facility blew up we would still have access to our CRM software. Using Proxmox 4, what would be the best way to design this...
I originally set up my Ceph cluster machines with two disks in RAID 1 for the Proxmox install and the other six drives as a RAID array, which I used for the Ceph OSD. I later read that Ceph works better if you do not use RAID, so I removed the RAID config for the six drives. But when I rebooted the node...
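Assuming the node comes back up, what I think the non-RAID setup needs is one OSD per raw disk, created with something like this (device names are just examples from my layout):
pveceph createosd /dev/sdb
pveceph createosd /dev/sdc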