Search results

  1. J

    API: Resources index 400

    Yeah, I guess I need to modify this limit on all containers. Is there a bulk way to do that? But it's a simple patch, I'll apply it manually.
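    A bulk change like that could be sketched as a shell loop over `pct list`. This is a hypothetical sketch: `-memory 2048` is only a placeholder, since the actual option and value depend on which limit is being changed.

    ```shell
    # Hypothetical sketch: apply the same setting to every container on this node.
    # "-memory 2048" is a placeholder; substitute the option actually being changed.
    if command -v pct >/dev/null 2>&1; then
        for vmid in $(pct list | awk 'NR>1 {print $1}'); do
            pct set "$vmid" -memory 2048
        done
        status="applied"
    else
        status="pct not found: run this on a PVE node"
        echo "$status"
    fi
    ```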
  2. J

    API: Resources index 400

    Hi, today I upgraded one of my nodes to the latest version: proxmox-ve: 5.2-2 (running kernel: 4.15.18-1-pve) pve-manager: 5.2-6 (running version: 5.2-6/bcd5f008) pve-kernel-4.15: 5.2-4 pve-kernel-4.15.18-1-pve: 4.15.18-17 pve-kernel-4.15.17-3-pve: 4.15.17-14 corosync: 2.4.2-pve5 criu...
  3. J

    Ceph HEALTH_WARN: Degraded data redundancy: 512 pgs undersized

    Fix the clock skew first, because it can prevent replication.
  4. J

    Ceph HEALTH_WARN: Degraded data redundancy: 512 pgs undersized

    What is the data usage on your OSDs? ceph osd df Size=3 means every PG needs to be replicated 3 times across 3 nodes. But your node1 has much less HDD capacity than the others. And first, fix the clock skew: check that all nodes use the same NTP server and their time is synchronized.
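    The capacity imbalance matters because with size=3 across 3 nodes, every node must hold a full replica of each PG, so the smallest node caps the usable pool. A rough sketch with made-up per-node capacities:

    ```shell
    # Hypothetical per-node raw OSD capacity in GB (size=3, one replica per node):
    node1=2000
    node2=8000
    node3=8000
    # Each node stores a full copy, so usable capacity is capped by the smallest node.
    usable=$node1
    for n in $node2 $node3; do
        [ "$n" -lt "$usable" ] && usable=$n
    done
    echo "usable pool capacity: ${usable} GB"
    ```

    Once node1 fills up, new PGs cannot place their third replica and the pool degrades, regardless of the free space left on the bigger nodes.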
  5. J

    Drivers - iSCSI Boot

    I don't know, but it really doesn't matter, because the kernel with initramfs is around 50 MB. But for iSCSI this kind of workaround isn't necessary, because all modern server NICs can boot directly from an iSCSI LUN.
  6. J

    Drivers - iSCSI Boot

    Our setup is similar to yours, but we use Ceph RBD to store disk images instead of iSCSI. Our servers have a small SD card that stores /boot (with GRUB and the kernel, though for iSCSI this is not necessary), and they attach an RBD image as the root device. Everything is transparent to Proxmox.
  7. J

    Drivers - iSCSI Boot

    Why is this not correct? This is a well-known and documented installation method; we always install our nodes this way: https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_Stretch
  8. J

    Drivers - iSCSI Boot

    Or install standard Debian and install Proxmox on top of it.
  9. J

    proxmox HA on shared SAN storage

    I guess you're not losing thin provisioning with that storage, because the storage itself can do it. Snapshots, yes, those can be missing, but it depends on the use case.
  10. J

    proxmox HA on shared SAN storage

    LVM works, yes, with live migration, etc.
  11. J

    VM with 1TB memory +

    What are the error messages when you try to start it?
  12. J

    Best RAID configuration for PVE

    RAID10 or RAID6. RAID10 has the best performance, but it is only guaranteed to tolerate 1 disk failure; RAID6 has worse write performance but can tolerate more disk failures. There is a calculator where you can compare different solutions: http://wintelguy.com/raidcalc.pl http://wintelguy.com/raidperf.pl But...
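    The trade-off can be sketched numerically. The disk count and size below are hypothetical (8 disks of 4000 GB each), not from the thread:

    ```shell
    n=8          # hypothetical disk count
    size=4000    # hypothetical disk size in GB
    raid10=$(( n / 2 * size ))     # mirrored pairs: half the raw capacity is usable
    raid6=$(( (n - 2) * size ))    # two disks' worth of capacity reserved for parity
    echo "RAID10: ${raid10} GB usable, guaranteed to survive 1 disk failure"
    echo "RAID6:  ${raid6} GB usable, survives any 2 disk failures"
    ```

    Note that RAID10 can survive more than one failure if the failed disks land in different mirror pairs, but only one failure is guaranteed safe; RAID6 survives any two.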
  13. J

    Guest with Fedora or Centos will not shutdown

    Sorry, I don't have any idea; I'm not using SELinux. Maybe you can grant the required permission to this app using the SELinux tools.
  14. J

    1 LXC Makes Server Load 400!

    Hi, high load by itself doesn't mean the system has to be slow or unresponsive. You didn't mention this information earlier.
  15. J

    Guest with Fedora or Centos will not shutdown

    What kind of CentOS is this? What is the version?
  16. J

    1 LXC Makes Server Load 400!

    Hi, when processes in an LXC container reach the CPU limit (in this case using 100% of it), they start waiting for CPU time, which drives up the load number. Basically, this is normal.
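    The distinction between load and CPU usage can be seen directly: the Linux load average counts tasks that are running or waiting, not a CPU percentage, so it is not bounded by 100% and a throttled container can push it very high while the host stays responsive. A minimal sketch reading the kernel's counters:

    ```shell
    # /proc/loadavg: tasks running or waiting for CPU (plus uninterruptible I/O),
    # averaged over 1, 5 and 15 minutes. 400 here means ~400 tasks queued, not 400% CPU.
    read one five fifteen _rest < /proc/loadavg
    echo "load averages: 1min=$one 5min=$five 15min=$fifteen"
    ```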
  17. J

    Directly mount the ceph pool and backup the whole VM there?

    Hello, there is no built-in feature for this in Proxmox, but there are several ways. However, if you want to store backups on the same cluster where your live data lives, simply create snapshots. There are 3rd-party scripts that can do this automatically.
  18. J

    Guest with Fedora or Centos will not shutdown

    Hi, yes, this is the expected result. https://pve.proxmox.com/wiki/Qemu-guest-agent#Testing_that_the_communication_with_the_guest_agent_is_working In this case, please check the syslog inside the VM; the qemu guest agent will log whether it received the shutdown command and what it did, or didn't do.
  19. J

    Guest with Fedora or Centos will not shutdown

    Hello, when the QEMU guest agent is enabled in PVE, Proxmox tries to shut down the VM via the agent (when not enabled, it sends an ACPI event). Please check that communication between your node and the VM via the agent is working with the qm agent <vmid> ping command.
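    The check above could be scripted as follows. The VMID is hypothetical, and since `qm` only exists on a PVE node, the sketch guards for that:

    ```shell
    VMID=100   # hypothetical VM ID; substitute your own
    if command -v qm >/dev/null 2>&1; then
        # qm agent <vmid> ping exits 0 only when the guest agent answers
        if qm agent "$VMID" ping; then
            result="guest agent is reachable"
        else
            result="guest agent did not respond"
        fi
    else
        result="qm not found: run this on a PVE node"
    fi
    echo "$result"
    ```

    If the ping fails, shutdown via the agent cannot work either, so fix agent connectivity first.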
