Hi, I usually use 2 x 2x10Gb network cards like this:
vmbr0 : LACP bond0 with 2x10Gb ports
ceph (pub/priv) : LACP bond1 with 2x10Gb ports
So you could say there are two dedicated interfaces: one for the VMs, the other for Ceph.
But it requires 4 ports on my switch.
I'm wondering if it could be...
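For reference, the current layout roughly corresponds to this /etc/network/interfaces sketch; the NIC names and addresses below are just placeholders, not my real ones:

# bond0: LACP over the first 2x10Gb card, used by the VM bridge
auto bond0
iface bond0 inet manual
    bond-slaves enp1s0f0 enp1s0f1
    bond-miimon 100
    bond-mode 802.3ad
    bond-xmit-hash-policy layer3+4

auto vmbr0
iface vmbr0 inet static
    address 192.0.2.11/24
    gateway 192.0.2.1
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0

# bond1: LACP over the second 2x10Gb card, dedicated to Ceph public/private
auto bond1
iface bond1 inet static
    address 10.10.10.11/24
    bond-slaves enp1s0f2 enp1s0f3
    bond-miimon 100
    bond-mode 802.3ad
    bond-xmit-hash-policy layer3+4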
Hi,
I'm testing the great Proxmox Mail Gateway. :)
For outgoing mail, we want to push it to Mailjet.
So at the end of main.cf I added:
relayhost = in-v3.mailjet.com
smtp_sasl_auth_enable = yes
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
after installing libsasl2-modules and...
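For completeness, a minimal sketch of the credentials map those settings point to; the key/secret values are placeholders, not real Mailjet credentials:

# /etc/postfix/sasl_passwd (placeholder API key and secret)
in-v3.mailjet.com    YOUR_MAILJET_API_KEY:YOUR_MAILJET_SECRET_KEY

# build the hash map, protect the files and reload Postfix
postmap /etc/postfix/sasl_passwd
chmod 600 /etc/postfix/sasl_passwd /etc/postfix/sasl_passwd.db
systemctl reload postfix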
Thanks!
I recreated osd.9 and it seems to be OK.
I still have these messages in the syslog file:
Running command: /usr/sbin/ceph-volume lvm trigger 9-7d980d55-34bf-456b-9f68-839585aba395
Running command: /usr/sbin/ceph-volume lvm trigger 9-9501cd08-984b-4ee9-aafe-c58ea0402dc4
These volumes...
I also still have a tmpfs mount for the deleted OSD:
root@dc-prox-11:~# df -h | grep tmpfs
tmpfs 13G 11M 13G 1% /run
tmpfs 63G 63M 63G 1% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock...
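If those leftovers really belong to the removed OSD, a hedged sketch of how I'd check for and clean up the stale pieces (assuming the old OSD id is 9 and its mount point is /var/lib/ceph/osd/ceph-9):

# list leftover ceph-volume units still referencing OSD id 9
systemctl list-units --all | grep 'ceph-volume@lvm-9'

# disable a stale unit so the "lvm trigger" is no longer run at boot
systemctl disable ceph-volume@lvm-9-7d980d55-34bf-456b-9f68-839585aba395.service

# unmount the leftover tmpfs of the deleted OSD if it is still mounted
umount /var/lib/ceph/osd/ceph-9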
Hi,
I'm upgrading a 5-node cluster from PVE 6.2.6 to 6.2.11.
Everything was OK until I rebooted the third one.
Two OSDs (osd.8 and osd.10) didn't start, with this message:
unable to obtain rotating service keys; retrying
After rebooting again, osd.8 and osd.10 are OK, but osd.11 and osd.9 are KO with the same messages...
Hi,
You're right, this is Zabbix. :)
I get the wearout values via SNMP through the Dell iDRAC.
I created an item under the disk discovery section.
It works for iDRAC 8 or newer.
The related OID is 1.3.6.1.4.1.674.10892.5.5.1.20.130.4.1.49.{#SNMPINDEX}
For HDDs, the returned value is 255...
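To sanity-check that OID outside Zabbix, a quick snmpget against the iDRAC works; the address and community string below are placeholders, and {#SNMPINDEX} is replaced by the discovered disk index (here 1):

# query the per-disk wearout value directly for disk index 1
snmpget -v2c -c public 192.0.2.50 1.3.6.1.4.1.674.10892.5.5.1.20.130.4.1.49.1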
:p
Thanks for these URLs.
So with still more than 80% remaining, I guess I have 2 or 3 years before replacing them.
It decreases by 1% every 2 months.
It doesn't matter ;)
I know what to check.
But before buying lots of SSDs to replace them, I want to know the wearout value at which a drive must be replaced.
Hi,
Unfortunately, for me the right values are provided by another attribute. Cf. the values provided by the Dell iDRAC => 90% remaining.
Here is the result of 'smartctl -a /dev/sda':
smartctl -a /dev/sda
smartctl 6.6 2016-05-31 r4324 [x86_64-linux-4.15.18-9-pve] (local build)
Copyright (C) 2002-16...
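To pull just the wear-related attributes out of that output, something like this works (attribute names vary by vendor; on these Samsung drives it is usually Wear_Leveling_Count):

# show only the wear/life related SMART attributes
smartctl -A /dev/sda | grep -iE 'wear|life'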
Hi,
My 'oldest' Proxmox Ceph cluster is based on Samsung SM863a drives.
After 3 years, the wearout for some drives is below "88% remaining".
I don't know if these values are safe enough.
Below what wearout value is it recommended to replace an SSD drive?
Thanks!
Hi,
I just provisioned some new VMs based on a Debian 10 template with cloud-init. It works as expected.
So, after the first boot, do I need to keep the cloud-init drive attached, or can I then delete it? Also, should I remove the cloud-init package?
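If I end up dropping the drive after provisioning, I suppose something like this would detach it; the vmid and slot are just examples, assuming the cloud-init disk sits on ide2 of VM 100:

# remove the cloud-init drive from the VM configuration (vmid 100, slot ide2 assumed)
qm set 100 --delete ide2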
Thanks!