Search results

  1.

    HEALTH_ERR with BlueStore

    After running the repair command, what do the health details show? ceph health detail
  2.

    Detect Container with High CPU Load

    LXC doesn't have per-container load values (like OpenVZ had); every container sees the same load value as the host.
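Since all containers share the host's load value, one way to find the busy container is to compare per-cgroup CPU counters instead. A rough sketch, assuming a cgroup v1 cpuacct layout with one directory per container (the /sys/fs/cgroup/cpuacct/lxc path is an assumption; adjust to your host):

```shell
# Compare cumulative CPU time per container cgroup.
# CG_ROOT is an assumed path; adjust it to your host's cgroup layout.
CG_ROOT=${CG_ROOT:-/sys/fs/cgroup/cpuacct/lxc}
for d in "$CG_ROOT"/*/; do
    [ -f "$d/cpuacct.usage" ] || continue
    printf '%s %s\n' "$(basename "$d")" "$(cat "$d/cpuacct.usage")"
done | sort -k2 -rn | head
```

The counters are cumulative since boot, so sampling twice and diffing gives current consumption rather than the lifetime total.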
  3.

    Can local (host) storage be resized after setup?

    If you are using a locally mounted partition, or LVM, then yes, it works the same way. Resize it in the normal way and Proxmox will pick up the new size.
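For the LVM case, the usual sequence is to grow the logical volume and then the filesystem on it. A hedged sketch with placeholder names (the /dev/pve/data volume and ext4 filesystem are assumptions; substitute your own VG/LV and filesystem, and run as root):

```shell
# Grow an LVM-backed local storage by 10 GiB, then expand the filesystem.
# /dev/pve/data and ext4 are placeholders; adapt them to your setup.
lvextend -L +10G /dev/pve/data
resize2fs /dev/pve/data      # for ext4 (works online); use xfs_growfs for XFS
```

After that, the storage view in Proxmox should reflect the new size without further changes.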
  4.

    4.15 based test kernel for PVE 5.x available

    You can download and install it manually if you don't otherwise want to use the pvetest repo:

    wget http://download.proxmox.com/debian/pve/dists/stretch/pvetest/binary-amd64/pve-kernel-4.15.17-1-pve_4.15.17-8_amd64.deb
    dpkg -i pve-kernel-4.15.17-1-pve_4.15.17-8_amd64.deb
  5.

    RBD: error connecting: Operation not supported

    I rebooted the node, and after that the web interface was also working. I guess the Perl file was cached somewhere.
  6.

    RBD: error connecting: Operation not supported

    However, I guess there is some other place where this is wrong, because I can start the VM with the qm command, but I cannot from the web interface; it exits with the same issue, related to the colon.
  7.

    RBD: error connecting: Operation not supported

    https://git.proxmox.com/?p=pve-storage.git;a=commitdiff;h=41aacc6cdeea9b0c8007cbfb280acf827932c3d6
  8.

    RBD: error connecting: Operation not supported

    So, I found the issue. In the /usr/share/perl5/PVE/Storage/RBDPlugin.pm file, the older version handled the semicolon-separated mon host list correctly, but the new version does not. Around line 63: $cmd_option->{mon_host} = $hostlist->($scfg->{monhost}, ',') if (defined($scfg->{monhost})); I modified...
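Assuming the $hostlist helper in the quoted line splits the configured value and rejoins it with the given separator (an assumption based on its arguments), the intended transformation is just semicolons to commas, which can be illustrated in shell:

```shell
# Turn the semicolon-separated monhost value from storage.cfg into the
# comma-separated form the quoted Perl line produces for mon_host.
monhost='172.24.2.31;172.24.2.32;172.24.2.33'
echo "$monhost" | tr ';' ','
# -> 172.24.2.31,172.24.2.32,172.24.2.33
```

Ceph itself accepts the comma-separated form for mon_host, so the bug is purely in how the plugin builds that string from the config value.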
  9.

    RBD: error connecting: Operation not supported

    /etc/pve/storage.cfg part:

    rbd: rbd
            content images
            krbd 0
            monhost 172.24.2.31;172.24.2.32;172.24.2.33
            pool proxmox
            username proxmox
  10.

    RBD: error connecting: Operation not supported

    I updated one of my nodes to the latest version, but on this node, KVM guests with Ceph RBD (we have an external Ceph cluster) are not starting.

    pveversion -v
    proxmox-ve: 5.1-43 (running kernel: 4.13.16-2-pve)
    pve-manager: 5.1-52 (running version: 5.1-52/ba597a64)
    pve-kernel-4.13: 5.1-44...
  11.

    4.15 based test kernel for PVE 5.x available

    Me too, HP BL460c G7 with P410i.
  12.

    Vm shutdown when Qemu-guest-agent enabled

    Hello, if the Qemu Agent option is enabled on a VM, then when the qm shutdown VMID command is initiated (or likewise when I click the Shutdown button in the web UI), it tries to send a shutdown event to the agent. But when the agent does not respond (because it's not running, crashed, or whatever), it does not try to send ACPI...
  13.

    HA and IP migration

    You cannot live-migrate LXC containers.
  14.

    XenServer to Proxmox

    Hi, we were in the same situation a few months ago. After some testing, we decided to do the migration at the application level instead of migrating VMs. So, instead of migrating the whole VMs, we installed new CentOS 7 VMs (the old infrastructure on Xen was CentOS 6) and redeployed everything on them.
  15.

    is it possible to use sas for rbd share storage to get ceph service

    Hi, is this a G6, G7, etc.? What kind of CPU do you have? Do you have a P4xx RAID controller? For best performance you need the same type of HDD in all servers (optionally with separate boot disks used only for the OS, e.g. 2x 146 GB SAS).
  16.

    XenServer to Proxmox

    No, it doesn't break anything. If a module is not needed, the kernel doesn't load it. You need to regenerate the initramfs with these modules before the migration; without that, Linux basically won't have drivers for the virtio devices.
  17.

    XenServer to Proxmox

    It depends on your distro. E.g. on Debian(-based) distros, you need to put the module names in the /etc/initramfs-tools/modules file:

    virtio_pci
    virtio_blk
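Spelled out as commands, the Debian steps above look roughly like this. The sketch operates on a temp copy for illustration; on the real system the file is /etc/initramfs-tools/modules and update-initramfs must run as root:

```shell
# Sketch: ensure the virtio modules are listed in the initramfs modules file.
# Demonstrated on a temp file; on the real system use /etc/initramfs-tools/modules.
MODULES_FILE=$(mktemp)
for m in virtio_pci virtio_blk; do
    grep -qx "$m" "$MODULES_FILE" || echo "$m" >> "$MODULES_FILE"
done
cat "$MODULES_FILE"
# afterwards, as root: update-initramfs -u
```

The grep guard just avoids duplicate entries if the modules are already listed.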
  18.

    Zfs over RAID10 HW... wrong idea?

    With LSI cards it's the same: you can simply remove a disk from the machine, put it anywhere else, and read all of the disk's content.
  19.

    Access LXC IP programmatically

    In this case you can get the IP from your DHCP server: you know the MAC from Proxmox, and based on that you can look up the IP in your DHCP server's leases.
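To script that, the MAC can be read out of the container's network config. A sketch assuming a pct config net0 line of the shape shown below (the sample line is illustrative, not taken from the post):

```shell
# Extract the MAC from a container's net0 line.
# The sample line is illustrative; on a real host you would use:
#   line=$(pct config 101 | grep '^net0:')
line='net0: name=eth0,bridge=vmbr0,hwaddr=AA:BB:CC:DD:EE:FF,ip=dhcp'
mac=$(printf '%s\n' "$line" | grep -o 'hwaddr=[^,]*' | cut -d= -f2)
echo "$mac"
```

With $mac in hand, the matching lease entry on the DHCP server gives you the container's IP.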
  20.

    Storage replication question.

    For storage replication you need to configure a cluster; however, this is not possible over the Internet, because your servers need to be in the same broadcast domain: https://pve.proxmox.com/wiki/Cluster_Manager#_requirements
