Hi
I'm very happy with the new CRS feature; even at this early stage it's fantastic news! With this new actor in mind, I have one question about the proper settings/actions in my setup. I've got a 4-host cluster with an HA group involving all guests (no failback check, no restricted check). HA with...
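For reference, a rough sketch of how I created the group and added a guest (group name, node names and VMID below are just placeholders for my real ones):
# ha-manager groupadd all-nodes --nodes "node1,node2,node3,node4" --nofailback 0 --restricted 0
# ha-manager add vm:100 --group all-nodes --state started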
Hi Miki
My whole cluster is on Ceph 15.2.8 and running OK. I posted the same error in this thread and it seems to be a Ceph bug reported by Fabian.
I'll wait for the patch and hope it gets fixed soon. I can't downgrade Ceph and reinstall all the nodes.
Greetings.
With my current version I can't add an OSD to Ceph. I reported it in this thread: [SOLVED] Problem after upgrade to ceph octopus
but I realised that other commands also raise the error. The following command shows the same error:
# ceph-volume lvm zap /dev/sdc
--> AttributeError: module 'ceph_volume.api.lvm' has...
I had to reinstall a host from scratch and I get the same error when adding a single OSD. The node is up and running with the Ceph services OK, but when adding an OSD the GUI reports this error:
create OSD on /dev/sdc (bluestore)
wipe disk/partition: /dev/sdc
200+0 records in
200+0 records out...
Indeed, I have SD cards in RAID 1 for the OS. I want to use the whole disk bays for Ceph OSDs.
I know log files are important, and Proxmox does quite a lot of writes; Proxmox itself doesn't suffer a penalty from these writes, but Ceph is very sensitive here. My question is whether any last tuning option is available to...
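One idea on my side (not an official recommendation, just something I'm considering) is keeping the systemd journal in RAM so it never touches the SD cards, e.g. in /etc/systemd/journald.conf:
Storage=volatile
RuntimeMaxUse=64M
and then:
# systemctl restart systemd-journald
The obvious trade-off is that the journal is lost on every reboot.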
I have an issue related to slow mons reported by Ceph. Aside from the kind of Bluestore disks, I have regular warnings about slow monitors on all hosts except one. I suspect that all these warnings are raised by the logs and data Ceph writes to the OS disk ("/var/lib", "/var/log"). Since the OS disks are SD cards...
Hi,
With the newly upgraded version of Proxmox VE 6.3 I can't start the Ceph manager dashboard. Prior to the upgrade the dashboard was up and running without issues.
The manager version 14.2.15 log shows these records:
Nov 27 12:31:06 sion ceph-mgr[61338]: 2020-11-27 12:31:06.083 7f6662c84700 -1...
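What I plan to try next, assuming the module only needs to be toggled and the mgr restarted (the node name "sion" is taken from the log above):
# ceph mgr module ls
# ceph mgr module disable dashboard
# ceph mgr module enable dashboard
# systemctl restart ceph-mgr@sion.service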
I have the same problem with a W2016 guest. It keeps booting forever with 1 vCore at 100% and consuming only 100 MB of RAM. After 1 hour running, only the Windows splash screen is shown. I have a cluster of 3 nodes without subscription and Virtual Environment 5.3-12. This guest was installed without any issues...
Hi
I'm trying to mount HA Ceph storage. I have 3 nodes with Proxmox 5.2 and Ceph Luminous. Nodes 1 & 2 have 5 local disks each used as OSDs; node 3 has no disks. I created a pool with size=2, max=3 and pg=256, and everything runs smoothly when all nodes are online.
When I reboot a node for maintenance...
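In case it's relevant, this is how I check the pool settings; my understanding is that with size=2 and min_size=2 the pool blocks I/O as soon as one replica is offline ("mypool" is just a placeholder for my pool name):
# ceph osd pool get mypool size
# ceph osd pool get mypool min_size
# ceph osd pool set mypool min_size 1
# ceph -s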
Hi,
Is it possible to install Proxmox 4.2 on a ZFS RAID 1 across two pendrives to get some redundancy? Obviously this is only the boot disk; the VMs reside on shared SAN storage.
If it is possible, what is the procedure when a disk fails?
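What I have in mind for a failed member is the usual ZFS mirror replacement, assuming the pool is called rpool and the replacement pendrive shows up as /dev/sdb (device names are only examples):
# zpool status rpool
# sgdisk /dev/sda -R /dev/sdb   (copy the partition table from the healthy pendrive)
# sgdisk -G /dev/sdb   (randomize the GUIDs on the new device)
# zpool replace rpool <old-device> <new-zfs-partition>
# grub-install /dev/sdb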
Greetings.
Hi Udo,
Thanks for the help. I do the cluster migration by stopping the VM and backing it up to the remote cluster. I know it's very dangerous to share VMIDs between both clusters. The best improvement for Proxmox would be to manage several clusters within a single web portal, but nowadays this feature is not implemented yet...
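For the record, my current offline procedure is basically this (VMID, storages and paths below are placeholders):
# vzdump 100 --mode stop --storage backup-nfs --compress lzo
(copy the resulting archive to the target cluster, e.g. with scp)
# qmrestore /path/to/<archive>.vma.lzo 100 --storage local-lvm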
Hi,
Is it possible to do a live migration from one cluster to another cluster? Stopping the machine and backing it up is the only option I know of nowadays.
Greetings.
Julian.
Thanks for the reply. I knew about the idea of cloning a VM from a restore and doing some manual operations there; I'm just not aware of another, more direct approach.
Regards.
Julian.
Hi
Is it possible to restore a single disk image from a backup? For example, I have two virtual disks: an OS-only disk and a data disk. If the guest can't boot because of some sort of corruption, how can I restore only the OS disk and try to boot?
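The only approach I can think of (not sure it's the intended one) is to restore the whole backup to a free, temporary VMID and then copy just the restored OS volume over the corrupted one, roughly:
# qmrestore /mnt/backup/<archive>.vma.lzo 999 --storage local-lvm
(999 is just a free temporary VMID; copy the restored OS volume over the broken one with qemu-img convert or dd, then remove the helper VM)
# qm destroy 999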
Thanks in advance
Julian.
Thanks Wolfgang
I saw this doc, but I'm missing something, because multipath (MP) is meant to offer several paths to the same disk on a node. In my case, each node only has one path to the same virtual disk (one NIC attached). In this situation, is it imperative to set up multipath on each node?
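Just to spell it out, this is all I have to check on a node; there is only a single iSCSI session per node:
# iscsiadm -m session
# multipath -ll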
Regards,
Julian
Hi
I have a two-node cluster up and running. My SAN storage (Dell MD3220i) has a 4-port controller, and I attached each server directly to one port (without a switch). Each storage port has a different portal, and I can't reach the iSCSI target through the same portal. In the Proxmox environment, I...
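To make it concrete, on each node I discover and log in against that node's own portal only (portal IP and target IQN below are placeholders):
# iscsiadm -m discovery -t sendtargets -p <portal-ip>:3260
# iscsiadm -m node -T <target-iqn> -p <portal-ip>:3260 --login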