Testing Windows Server 2012, every time a big incremental size:
using fast incremental mode (dirty-bitmap), 266.1 GiB dirty of 328.0 GiB total
------
backup was done incrementally, reused 301.62 GiB (91%)
transferred 266.07 GiB in 1679 seconds (162.3 MiB/s)
Daily optimize on the C: drive is on. But I...
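A likely cause, I think: the daily "Optimize Drives" (defrag) run rewrites large parts of the volume, which dirties the bitmap even though little data actually changed. A sketch of how I would check and disable the scheduled task inside the guest (the task path below is the standard one on Server 2012, worth verifying on your system):
schtasks /Query /TN "\Microsoft\Windows\Defrag\ScheduledDefrag"
schtasks /Change /TN "\Microsoft\Windows\Defrag\ScheduledDefrag" /Disable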
Well, this turned off telemetry and silenced the GUI error: ceph mgr module disable telemetry
I still don't know why it fails; I will not activate telemetry again until I understand why it failed.
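For the record, the module state can be checked and the module re-enabled later with the standard mgr commands; a minimal sketch:
# ceph mgr module ls
# ceph mgr module enable telemetry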
Yes, that is their docs, thanks. Been there, it doesn't help me.
ceph telemetry off does not work.
The Proxmox GUI shows a big red warning: HEALTH_ERR
Hi.
I got HEALTH_ERR in the GUI for telemetry.ceph.com:
# ceph telemetry off
Error EIO: Module 'telemetry' has experienced an error and cannot handle commands: HTTPSConnectionPool(host='telemetry.ceph.com', port=443): Max retries exceeded with url: /report (Caused by...
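Note for anyone hitting this later: once a mgr module has crashed, every command routed through it fails like the above. Disabling the module and/or restarting the active mgr clears the errored state; a sketch, assuming the mgr id is the short hostname as is usual on Proxmox:
# ceph mgr module disable telemetry
# systemctl restart ceph-mgr@$(hostname -s).service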
A reinstall fixed it and made it possible to add the node to the cluster.
The GUI "bug" is probably still there, but I will not try that again on this server, now in production.
The UUID was also looked for at boot (but that failed since the disk was reinstalled), so maybe it's a systemd thing?
Hi
I have the same issue.
A newly created OSD for /dev/sdc is also LVM, yet it is named "Ceph osd.xx (Bluestore)".
But only /dev/nvme0n1 is named "LVM".
So it's clearly possible to fix this bug?
Thanks for the answer.
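To see what LVM metadata the GUI has to go on, ceph-volume can dump the tags per device; a sketch using the devices from my setup:
# ceph-volume lvm list /dev/sdc
# ceph-volume lvm list /dev/nvme0n1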
Well, in that case it's a bug in the GUI, since it's not in /etc/pve/storage.cfg; I did create the directory in the GUI. Not hardcoded.
But now, when adding this server to the production cluster, it failed for an unknown cause (corosync I guess, no log; in syslog: pveproxy[11952]...
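What I plan to check next on the failed join, as a sketch with the standard tooling:
# pvecm status
# journalctl -b -u corosync -u pve-cluster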
Hi.
After testing some new disks with ext4 and XFS, I ended up with an entry under Server/Disks/Directory: "/mnt/pve/test-disk /dev/disk/by-uuid/xxxxxx ext4 defaults".
This disk is now gone, formatted and replaced with XFS manually.
But I can't find out how to remove this line from the GUI. (It's not in...
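My current suspicion is a leftover systemd mount unit: the GUI creates one per directory, and it survives the disk being wiped. A sketch of finding and removing it; the escaped unit name is my guess for /mnt/pve/test-disk (systemd escapes '-' as \x2d):
# systemctl list-units --all 'mnt-pve-*.mount'
# systemctl disable 'mnt-pve-test\x2ddisk.mount'
# rm '/etc/systemd/system/mnt-pve-test\x2ddisk.mount'
# systemctl daemon-reload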
Yep that was it:
--> ceph-volume lvm create successful for: /dev/nvme0n1
Thanks for the GREAT support; you should add this info about ceph.conf to the wiki.
I installed Ceph on Proxmox 5 and upgraded to 6, always following the wiki and forum; I don't know where it is supposed to be?
Anyway, it must be on x.x.0.0.
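For anyone searching later: the fix was adding the network definitions to /etc/pve/ceph.conf. An illustrative fragment, with 10.10.0.0/24 as a placeholder subnet (not my real one):
[global]
        public_network = 10.10.0.0/24
        cluster_network = 10.10.0.0/24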
Show me your ceph.conf
Yes, Ceph is working; it's the same net at the moment. I'm trying to split them in the future, which is why I created the new net.
# ip address
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet...
Smartmontools is not reporting wearout for the Samsung enterprise SAS SSD PM1643.
I wonder if smartmontools 7.1, which is in Bullseye, works for these? Is it possible for Proxmox to backport it?
I only get this from smartctl -a:
=== START OF READ SMART DATA SECTION ===
SMART Health Status: OK...
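For SAS/SCSI SSDs the wearout is not an ATA attribute but the "Percentage used endurance indicator" log page; a sketch of what I would try once a new enough smartctl is available (the device path is an example):
# smartctl -l ssd /dev/sda
# smartctl -a /dev/sda | grep -i endurance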
Cannot create an OSD for Ceph.
Same error in GUI and terminal:
# pveceph osd create /dev/nvme0n1
Error: any valid prefix is expected rather than "".
command '/sbin/ip address show to '' up' failed: exit code 1
The only thing I can think of that changed since the last time it worked is that I now have two...
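The empty string in that ip command suggests pveceph found no network entry in ceph.conf to match a local address against; a sketch of the checks I would run (the subnet is a placeholder):
# grep -E 'public_network|cluster_network' /etc/pve/ceph.conf
# ip address show to 10.10.0.0/24 up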
I have no Ethernet problems in my logs, so I don't think that is the main problem, but you should fix that anyway.
I also mix corosync with other traffic, but trying to break the bonds now and set up the new network gave me other problems; that is for another thread in the future.
Today I installed a...
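A sketch of how I check the corosync side while untangling the network (standard corosync tooling):
# corosync-cfgtool -s
# cat /etc/pve/corosync.conf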