After configuring and activating the firewall on a Proxmox 7 cluster, the web interface stops showing node status on 2 of the 4 nodes after a few minutes. If I disable the Proxmox cluster firewall again, these 2 nodes show their status again. If I keep the firewall on and stop and start pvestatd, the nodes come back online for...
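(For reference, these are the commands I mean; restarting pvestatd is the step described above, while the pve-firewall calls are only the checks I would use to inspect the generated rules, not a confirmed fix:)

systemctl restart pvestatd    # restart the status daemon so the node reports again
pve-firewall status           # check whether the cluster firewall is actually running
pve-firewall compile          # print the generated ruleset, to look for anything blocking port 8006 or corosync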
Some of my VMs have 2 disks: a small one for the OS and a big one for data (file shares, databases, etc.).
9 out of 10 problems are on the OS and don't need a restore of the big data disk... but the web interface doesn't allow that. Is it possible to restore only one VM disk from the command line?
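(A sketch of the command-line route I have in mind, assuming an .lzo-compressed vzdump .vma archive and that qm importdisk is available; the paths, VM ID, storage name and extracted file name are examples only:)

lzop -d -c /mnt/backup/vzdump-qemu-100.vma.lzo | vma extract - /tmp/restore
# /tmp/restore should now hold the VM config plus one image per disk (e.g. disk-drive-scsi0.raw; names may differ)
qm importdisk 100 /tmp/restore/disk-drive-scsi0.raw local-lvm
# then attach the imported disk in place of the broken OS disk and leave the data disk untouched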
If you have a template and clone it to an LVM storage, the cloud-init drive is cloned onto the same LVM... After stopping and starting this virtual machine, Proxmox tries to create a new LV for cloud-init and fails... so the VM doesn't start. It seems the same thing happened with Ceph and was fixed... but LVM still fails.
Please help.
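(In case it helps narrow this down, the workaround I would try is recreating the cloud-init drive after the clone; just a sketch, the VM ID, ide2 slot and local-lvm storage are examples:)

qm set 105 --delete ide2                # drop the cloned cloud-init drive from the config
qm set 105 --ide2 local-lvm:cloudinit   # let Proxmox create a fresh cloud-init volume
qm start 105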
About the AES-NI issue on Proxmox: the Westmere CPU type on Proxmox supports it... I just tested it on 5.2 and it seems to work.
Does anyone have any info about performance issues or other problems when using the Westmere guest CPU type (or any other)?
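(For what it's worth, this is roughly how I tested it; the VM ID is an example and the flag check runs inside the guest:)

qm set 110 --cpu Westmere     # set the guest CPU type to Westmere
grep -c aes /proc/cpuinfo     # inside the guest: a non-zero count means AES-NI is exposed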
I run a lot of Proxmox servers and clusters, and almost all of them have pfSense as the network firewall. Everything was fine until I upgraded some of these Proxmox hosts to 5.2 and upgraded pfSense on them to 2.4 and 2.4.4...
From time to time pfSense network traffic stops on the LAN (most of the time) or the WAN... not all network cards die...
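(One thing that could be tested if this turns out to be the virtio checksum-offload problem, as a sketch only; tap105i0 is just an example name for the pfSense VM's LAN tap device on the host:)

ethtool -K tap105i0 tx off gso off tso off   # disable offloads on the host side of the VM NIC
# the rough equivalent inside pfSense is disabling hardware checksum offloading under System > Advanced > Networking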
Almost everything is clear (disks to the OS, crush map and so on...). But I'm not clear about "flash the Controller from IR to IT Mode": are you saying I need to change the controller firmware, or is it just a setting on the controller?
You are correct about the H330 (it's the RAID controller of the R230).
OK, let's try to remove the RAID layer... The next question is: how do I do that?
- All HDDs are connected to the H330; to make them visible to the OS a VD has to be created (I suppose only one disk per VD, using RAID 0; see the sketch after this list)
- After that I will end up with 4 VDs: 2 VDs...
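(A sketch of what I mean by one VD per disk, using perccli; the controller number and enclosure:slot IDs are examples, and the exact syntax should be checked against the H330 documentation:)

perccli /c0/eall/sall show                  # list physical disks with their enclosure:slot IDs
perccli /c0 add vd type=raid0 drives=32:0   # one RAID-0 VD containing only disk 32:0
perccli /c0 add vd type=raid0 drives=32:1   # repeat for each remaining disk
lsblk                                       # the new VDs should then show up as /dev/sdX devices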
For sure the RAID and the 2 nodes are a problem. But why do the OSDs disconnect and reconnect all the time? The RAID issue should affect read/write; the OSD daemon just stops responding for a few moments and then comes back again...
My Ceph cluster on 5.2 shows very frequent errors in the logs.
I upgraded to 5.3 and the same thing happens.
The logs show things like this:
2018-12-22 06:55:01.142920 osd.1 osd.1 192.168.0.200:6804/2869 427 : cluster [ERR] 1.52 shard 1: soid 1:4a2819a2:::rbd_data.55156b8b4567.0000000000015ac0:head candidate had a read...
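(A sketch of the checks and repair that seem relevant for this kind of scrub read error, assuming the standard Ceph and smartmontools CLIs; 1.52 and osd.1 come from the log line above, /dev/sdX is a placeholder for the disk behind that OSD:)

ceph health detail     # list the inconsistent PGs and which OSDs are affected
ceph pg repair 1.52    # ask Ceph to repair that PG from the healthy replicas
smartctl -a /dev/sdX   # read errors on an OSD often mean failing media, so check the disk too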
A few weeks ago I installed a cluster with 3 nodes. The Ceph network is done with 10 Gb cards without a switch; I used bond0 in broadcast mode.
Each node has 2 or 3 SSDs, around 3 TB in total.
Some days ago I noticed latency on all 3 SSDs, but only on node 3 (something like 30/60 apply/commit). The OSDs on the other two nodes...
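(A sketch of how the nodes can be compared, assuming the standard Ceph CLI plus sysstat and smartmontools on the node; the device name is an example:)

ceph osd perf          # apply/commit latency per OSD, to compare node 3 against the other nodes
iostat -x 5            # on node 3: look for high %util or await on the affected SSDs
smartctl -a /dev/sdb   # check wear level and media errors on the slow SSDs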
I started using a cloud datacenter solution... The cloud provider uses Xen and gives me a Debian 7 64-bit installed OS. I converted this Debian to a Proxmox server using these directions:
https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_Wheezy
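(Roughly the core of those directions as I followed them; the repository line, key URL and package list should be double-checked against the wiki page:)

echo "deb http://download.proxmox.com/debian wheezy pve-no-subscription" > /etc/apt/sources.list.d/pve.list
wget -O- http://download.proxmox.com/debian/key.asc | apt-key add -
apt-get update && apt-get dist-upgrade
apt-get install proxmox-ve-2.6.32 ntp ssh lvm2 postfix ksm-control-daemon
# then reboot into the PVE kernel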
Worked like a charm... but this move Proxmox until...
I have a cluster running Proxmox 3.4. The VMs are on DRBD. The guest OS is 2003 and 2008. The backup log shows:
INFO: Starting Backup of VM 103 (qemu)
INFO: status = running
INFO: update VM 103: -lock backup
INFO: backup mode: snapshot
INFO: ionice priority: 7
INFO: creating archive...
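(To take the scheduler out of the picture, the same backup can be started by hand and watched; a sketch, the storage name and compression are examples:)

vzdump 103 --mode snapshot --compress lzo --storage local   # run the backup of VM 103 manually and see where it hangs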