Recent content by Julian Lliteras

  1. Proper way of host maintenance in HA + CRS

    Hi, I'm very happy with the new CRS feature; even at this early stage it's fantastic news! With this new feature in mind, I have a question about the proper settings/actions in my setup. I have a 4-host cluster with an HA group involving all guests (no failback check, no restricted check). HA with...
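    As a minimal sketch of the maintenance flow I have in mind, assuming PVE 7.3+ (where CRS and the node-maintenance CRM command are available) and placeholder group/node names:

    # hypothetical HA group spanning all four nodes (failback/restricted left at their defaults)
    ha-manager groupadd all-nodes --nodes node1,node2,node3,node4
    # put a host into maintenance so its HA guests get migrated away
    ha-manager crm-command node-maintenance enable node1
    # ...do the maintenance work, then bring the node back
    ha-manager crm-command node-maintenance disable node1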
  2. Error: module 'ceph_volume.api.lvm' has no attribute 'is_lv'

    I'm stuck too. I upgraded all hosts and have the same issue.
  3. [SOLVED] Problem after upgrade to ceph octopus

    Hi Miki, my whole cluster is on Ceph 15.2.8 and running OK. I posted the same error in this thread and it seems to be a Ceph bug reported by Fabian. I'll wait for the patch and hope it gets fixed soon; I can't downgrade Ceph and reinstall all nodes. Greetings.
  4. Error: module 'ceph_volume.api.lvm' has no attribute 'is_lv'

    Nope. I ran the ceph osd require-osd-release nautilus command and rebooted the node, without luck.
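    For reference, a small sketch of that step plus a check that the flag actually stuck (the grep is just an illustration):

    # set the minimum required OSD release before moving on to Octopus
    ceph osd require-osd-release nautilus
    # confirm the flag is recorded in the OSD map
    ceph osd dump | grep require_osd_release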
  5. Error: module 'ceph_volume.api.lvm' has no attribute 'is_lv'

    With my current version I can't add an OSD to Ceph. I reported it in this thread: [SOLVED] Problem after upgrade to ceph octopus, but I realised that other commands also raise the error. The following command shows the same error: # ceph-volume lvm zap /dev/sdc --> AttributeError: module 'ceph_volume.api.lvm' has...
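    In case it helps others hitting this, a sketch of the checks one can run to confirm which Ceph packages the node is actually on (output will differ per setup):

    # release of the locally installed ceph binaries
    ceph --version
    # ceph packages installed on this node
    dpkg -l | grep -i ceph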
  6. [SOLVED] Problem after upgrade to ceph octopus

    I had to reinstall a host from scratch and I get the same error when adding a single OSD. The node is up and running with Ceph services OK, but when adding an OSD the GUI reported this error: create OSD on /dev/sdc (bluestore) wipe disk/partition: /dev/sdc 200+0 records in 200+0 records out...
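    For what it's worth, a sketch of the CLI equivalent I would use to reproduce the GUI action on the same disk (destructive; /dev/sdc as above):

    # wipe any leftover LVM/partition data on the disk
    ceph-volume lvm zap /dev/sdc --destroy
    # create the BlueStore OSD from the command line
    pveceph osd create /dev/sdc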
  7. Ceph slow mons & SD OS disks

    Indeed, I have SD cards in RAID 1 for the OS; I want to use the whole disk bays for Ceph OSDs. I know log files are important, and Proxmox does quite a lot of writes. Proxmox doesn't suffer a penalty from these writes, but Ceph is very sensitive here. My question is whether any further tuning option is available to...
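    As an illustration of the kind of tuning I mean (an untested sketch; the option names are standard Ceph config keys, whether they are appropriate here is exactly the question):

    # send daemon/cluster logs to syslog instead of files under /var/log/ceph
    ceph config set global log_to_syslog true
    ceph config set global log_to_file false
    ceph config set global mon_cluster_log_to_file false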
  8. Ceph slow mons & SD OS disks

    I have an issue with slow mons reported by Ceph. Aside from the kind of BlueStore disks, I get regular warnings about slow monitors on every host except one. I suspect that all these warnings are raised by the logs and data Ceph writes to the OS disk ("/var/lib", "/var/log"). Since the OS disks are SD disks...
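    To make that suspicion checkable, a minimal sketch of the measurements one could take on the OS disk (standard tools, nothing Ceph-specific):

    # per-device utilisation and write throughput, refreshed every 5 seconds (sysstat package)
    iostat -x 5
    # size of the monitor store that lives on the OS disk
    du -sh /var/lib/ceph/mon/*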
  9. Upgraded to VE 6.3 ceph manager not starting

    Confirmed. In my case, I upgraded successfully to Octopus and I'm back to normal again. No upgrade issues found and the new dashboard is online.
  10. Upgraded to VE 6.3 ceph manager not starting

    root@sion:~# pveversion -v
    proxmox-ve: 6.3-1 (running kernel: 5.4.73-1-pve)
    pve-manager: 6.3-2 (running version: 6.3-2/22f57405)
    pve-kernel-5.4: 6.3-1
    pve-kernel-helper: 6.3-1
    pve-kernel-5.3: 6.1-6
    pve-kernel-5.0: 6.0-11
    pve-kernel-5.4.73-1-pve: 5.4.73-1
    pve-kernel-5.4.60-1-pve: 5.4.60-2...
  11. Upgraded to VE 6.3 ceph manager not starting

    Hi, with the newly upgraded Proxmox VE 6.3 I can't start the Ceph manager dashboard. Prior to the upgrade the dashboard was up and running without issues. The manager (version 14.2.15) log shows these records: Nov 27 12:31:06 sion ceph-mgr[61338]: 2020-11-27 12:31:06.083 7f6662c84700 -1...
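    For context, a sketch of where that log excerpt comes from, assuming the mgr instance is named after the host "sion":

    # state of the manager daemon on this node
    systemctl status ceph-mgr@sion
    # recent manager log lines
    journalctl -u ceph-mgr@sion -n 50
    # check whether the dashboard module is enabled or reported as failed
    ceph mgr module ls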
  12. Issues with Windows 10 VM

    I have the same problem with a W2016 guest. It keeps booting forever with one vcore at 100% and consuming only 100 MB of RAM. After one hour running, only the Windows splash screen is shown. I have a cluster of 3 nodes without subscription, on Virtual Environment 5.3-12. This guest was installed without issues...
  13. ceph with 3 nodes total / 2 with local disks

    Hi, I'm trying to set up HA Ceph storage. I have 3 nodes with Proxmox 5.2 and Ceph Luminous. Nodes 1 & 2 have 5 local disks used as OSDs; node 3 has no disks. I created a pool with size=2, max=3 and pg=256, and everything runs smoothly when all nodes are online. When I reboot a node for maintenance...
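    For reference, a sketch of roughly how such a pool is created from the CLI on Luminous (the pool name "vmpool" is a placeholder, and the min_size value is an assumption on my side):

    # replicated pool with 256 placement groups
    ceph osd pool create vmpool 256 256
    # two replicas; allow I/O with a single replica left (assumed min_size)
    ceph osd pool set vmpool size 2
    ceph osd pool set vmpool min_size 1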
  14. Installation Proxmox boot disk on raid 1 pendrive

    Hi, is it possible to install Proxmox 4.2 on a ZFS RAID 1 across two pendrives to get some redundancy? Obviously it is only the boot disk; the VMs reside on shared SAN storage. If possible, what is the procedure when a disk fails? Greetings.
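    In case it helps, a hedged sketch of the usual ZFS-side procedure when one half of such a mirror dies, assuming the default root pool name "rpool" and placeholder device names (the bootloader also has to be reinstalled on the new disk, which is not shown here):

    # check which device of the mirror is FAULTED
    zpool status rpool
    # swap in the replacement pendrive and let ZFS resilver
    zpool replace rpool /dev/disk/by-id/usb-old-stick /dev/disk/by-id/usb-new-stick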
  15. Live migration between clusters

    Hi Udo, thanks for the help. I do the cluster migration by stopping the guest and backing it up to the remote cluster. I know it's very dangerous to share VMIDs between both clusters. The best thing for Proxmox would be to manage several clusters within a single web portal, but nowadays this feature is not implemented yet...
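    For completeness, a sketch of the stop-backup-restore flow described above, with a placeholder VMID 100 on the source cluster, a placeholder VMID 200 on the target (to avoid the duplicate-VMID danger), and placeholder storage names:

    # on the source cluster: stop the guest and dump it to a storage both clusters can reach
    qm stop 100
    vzdump 100 --storage shared-backups --mode stop
    # on the target cluster: restore the archive under a free VMID (exact path depends on the storage layout)
    qmrestore /mnt/pve/shared-backups/dump/vzdump-qemu-100-<timestamp>.vma.lzo 200 --storage local-lvm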
