I have a FreePBX box with 5 trunks and 40 extensions. Everything on my network is very fast and I can ping the VoIP provider at around 15 ms, yet I get random delays when making or receiving calls, even over the internal network. I read that virtualizing Asterisk is not a good idea, and this is true, because on a...
As I already said, MikroTik may introduce other NAT issues, but at least I could not get SIP registration or a stable SIP link through Proxmox on a NATted network without MikroTik in the middle, which was my first idea at the beginning. MikroTik rules, my friend; Cisco's kingdom is ending.
I have this big problem. After weeks of investigating a solution on the routing and FreePBX side, I found that on a physical host the problem doesn't happen, so maybe it is related to Proxmox networking.
I have a FreePBX KVM guest with around 50 extensions and 4 SIP trunks. Connectivity is...
The correct procedure is:
nano /etc/ceph/ceph.conf
add in [global]
mon_max_pg_per_osd = 300 (this key applies from Ceph 12.2.2 onwards; in Ceph 12.2.1 use mon_pg_warn_max_per_osd = 300 instead)
Then restart the first node (I tried restarting only the mons, but that doesn't apply the config).
Be aware that 300 is still a reasonable...
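For reference, a minimal sketch of what the edited [global] section of /etc/ceph/ceph.conf could look like after the steps above (only the mon_max_pg_per_osd line comes from this procedure; the placeholder comment stands in for whatever settings your cluster already has):

```ini
[global]
    # ... your existing cluster settings stay as they are ...

    # raise the hard per-OSD PG limit (Ceph 12.2.2 and later);
    # on Ceph 12.2.1 the key is mon_pg_warn_max_per_osd instead
    mon_max_pg_per_osd = 300
```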
Can someone please tell me which command increases
mon_max_pg_per_osd from 200 to 300? I will add more OSDs in the future to fix this properly, but for now I have good processors and plenty of memory in my servers, and PGs are around 215 per OSD, so I simply want to get rid of this warning.
thanks
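If you want to try changing the value at runtime rather than only in the config file, something like the following may work on Luminous (a sketch, not verified on this cluster; an injected value does not survive a mon restart, so you would still want the ceph.conf entry as well):

```
# push the new limit into all running monitors (Ceph 12.2.x);
# not persistent: also add mon_max_pg_per_osd = 300 to ceph.conf
ceph tell mon.\* injectargs '--mon_max_pg_per_osd=300'
```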
yes I have already searched and seems that solution is to backup everything and destroy the pool. but this is a problem cause this is a production environment and will require a lot of downtime. is the solution that I reported in my first post possible?
I don't know exactly when it started, but I suspect it was in the last two weeks' updates. In Ceph I have this warning message:
too many PGs per OSD (256 > max 200)
So I searched and found that it's a problem related to the latest Ceph updates and a wrong PG count, like in this post...
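For context, the number in that warning is just total PG replicas divided by the OSD count. A hypothetical worked example (the pg_num, replica size and OSD count below are assumptions chosen to reproduce the 256 figure, not values taken from this cluster):

```shell
# PGs per OSD = pg_num * replica size / number of OSDs (all values assumed)
pg_num=512   # hypothetical total PGs across the pools
size=3       # hypothetical replica count
osds=6       # hypothetical number of OSDs
pgs_per_osd=$(( pg_num * size / osds ))
echo "$pgs_per_osd"   # prints 256, which is above the default limit of 200
```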
Sorry, very noob question; I didn't know about container=rootdir. Found it now in the help.
OK, thank you so much. I hope you add the option to set node restrictions in the next releases.
thank you again!!!
OK my friend, now everything is stable and looks very good, and I've understood some very useful things. You were right about the availability of ZFS from the web UI! Only one last question:
my original last node's storage.cfg was:
dir: local
        path /var/lib/vz
        content iso,vztmpl,backup

zfspool...
You were very clear, but the command gives this output:
root@nodo1:~# pvesm set ceph_ct nodes nodo1,nodo2,nodo3
400 too many arguments
pvesm set <storage> [OPTIONS]
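For what it's worth, that "400 too many arguments" usually means the option name is missing its leading dashes: `pvesm set` expects options, not bare arguments. A sketch of the intended command (the storage name comes from your output above; please double-check the option name against `man pvesm` on your version before running it):

```
# restrict the ceph_ct storage to these three nodes (PVE 5.x syntax)
pvesm set ceph_ct --nodes nodo1,nodo2,nodo3
```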
I can't see this option (see attachment), but it does appear if I create a new shared storage. Apart from this, I can't understand how to specify local storage settings on this last node: storage.cfg is replicated on all nodes as if they were identical. The last one, for example, has a ZFS pool...
I have a 5.1 three-node cluster with Ceph. Everything works great. Now I want to add another node to the cluster for management only, without Ceph. After adding it, the node appears in the list in the web UI but is unresponsive, and the Ceph disks appear in its list (I didn't add them). The Ceph network is on...
Let's say I have a cluster with 3 identical Ceph HA nodes running a lot of VMs in a production environment, and another single server (with very different hardware compared to the other nodes) on a separate network, running Proxmox and some utility VMs, like fax management...
and I get this error too:
dmesg | grep -e DMAR -e IOMMU
[ 1633.899057] vfio-pci 0000:04:00.0: Device is ineligible for IOMMU domain attach due to platform RMRR requirement. Contact your platform vendor.
Hi my friends, I'm trying to pass a Dialogic PCIe card through to a VM. I followed this tutorial:
https://pve.proxmox.com/wiki/Pci_passthrough
with no success.
Some output:
the device of interest is 04:00.0 Network controller: Dialogic Corporation BRI (rev 01)
root@nodo3:~# lspci
00:00.0 Host bridge: Intel...
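As a first check when passthrough fails, it can help to see which IOMMU group the card landed in (a generic sketch, not specific to this host; note that the RMRR message in the dmesg output above is a platform firmware restriction, which grouping alone will not fix):

```
# list every device per IOMMU group; 0000:04:00.0 should appear
# in its own group (or at least not share one with critical devices)
find /sys/kernel/iommu_groups/ -type l
```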
Good morning my friends,
my sources.list is:
deb http://ftp.it.debian.org/debian stretch main contrib
# security updates
deb http://security.debian.org stretch/updates main contrib
##Proxmox VE pvetest
deb http://download.proxmox.com/debian/pve stretch pvetest
#Proxmox Ceph test
deb...