Search results

  1. migration between storages

    Hi Jonas, sorry, could you please explain in detail what you mean?
  2. High Availability Cluster in Proxmox 5 + LXC & Watchdogs

    Hi dietmar, thanks for the answers! So: 1. It works! 2. Why does it take so long? In the video guides I see 5-6 seconds for a VM, and an LXC container should be even faster, no? How can I reduce the downtime? Regards,
  3. migration between storages

    Thanks wolfgang! pct help migrate shows no -targetstorage option. Can you please try it? Regards,
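    For context, a rough sketch of the two commands being compared here, assuming PVE 5-era CLI syntax (VMIDs, node names and storage IDs are placeholders, and option availability may differ by version):

        # VM migration can pick a storage on the target node
        qm migrate 101 node1 --online --targetstorage ceph-pool
        # container migration shows no such option in 'pct help migrate'
        pct migrate 201 node1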
  4. High Availability Cluster in Proxmox 5 + LXC & Watchdogs

    Hi, I just set up an HA cluster with Proxmox 5.x and Ceph shared storage. I set up fencing via the IPMI watchdog: 1. I set /etc/modprobe.d/ipmi_watchdog.conf with options ipmi_watchdog action=power_cycle panic_wdt_timeout=10 2. I added nmi_watchdog=0 to /etc/default/grub and rebooted. Now, my...
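    A minimal sketch of the two changes described in this post (the values come from the post itself; placing nmi_watchdog=0 on GRUB_CMDLINE_LINUX_DEFAULT and running update-grub afterwards is an assumption about how it was applied):

        # /etc/modprobe.d/ipmi_watchdog.conf
        options ipmi_watchdog action=power_cycle panic_wdt_timeout=10

        # /etc/default/grub (then run update-grub and reboot)
        GRUB_CMDLINE_LINUX_DEFAULT="quiet nmi_watchdog=0"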
  5. migration between storages

    Hi wolfgang, an additional question: doesn't qm work only for VMs? I need to migrate an LXC container, is there any solution?
  6. Ceph Cluster

    * Also needed to add the "Ceph Network" interface to the additional servers... it works, thanks!
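    For illustration, a sketch of what such a dedicated Ceph network interface might look like in /etc/network/interfaces on each node (interface name and addresses are placeholders, not taken from the thread):

        auto eth1
        iface eth1 inet static
            address 10.10.10.3
            netmask 255.255.255.0
            # 10.10.10.0/24 being the Ceph network defined in ceph.conf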
  7. migration between storages

    Thanks wolfgang! Any plan to add this to the GUI? I mean, add a storage drop-down under the target destination that shows the available storage options for each target node, for example? Regards,
  8. Ceph Cluster

    Hi, I have a big Proxmox 5 cluster with multiple nodes. I want to assign a few nodes to shared storage with Ceph. Is that possible, or must all nodes in the cluster have Ceph enabled? I am asking since I did it and got this error: rados_conf_read_file failed - Invalid argument (500) from nodes without...
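    A sketch of the usual remedy for that error, under the assumption that the nodes throwing it simply lack the Ceph client packages and configuration (commands are PVE 5-era and may differ by version):

        # on each node that only consumes the Ceph storage (no monitors/OSDs):
        pveceph install
        # the cluster-wide config lives in /etc/pve/ceph.conf; the client keyring for an
        # RBD storage entry is expected under /etc/pve/priv/ceph/<storage-id>.keyring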
  9. migration between storages

    I have a cluster of 4 nodes on Proxmox 5: 3 nodes working with Ceph shared storage and one node with local ZFS. Now I want to migrate an LXC container from node #4 with local ZFS to node #1 with Ceph storage, and I got this error: 2017-10-11 16:38:56 ERROR: migration aborted (duration 00:00:00)...
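    One common workaround when a container cannot be migrated straight onto different storage is a backup/restore cycle; a sketch, assuming a short outage is acceptable (VMID, storage IDs and the archive name are placeholders):

        # on the source node
        pct shutdown 104
        vzdump 104 --storage backup-store --mode stop
        # on the target node, restore onto the Ceph-backed storage
        pct restore 104 /mnt/pve/backup-store/dump/vzdump-lxc-104-2017_10_11-16_00_00.tar.lzo --storage ceph-pool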
  10. lxc live migration.

    Hi RobFantini, technically LXC supports live migration via CRIU, but since it's not stable enough, the Proxmox team didn't implement it as a feature. In my opinion this feature should be implemented, and each admin can decide whether or not to use it.
  11. Proxmox v5 and Ceph + ZFS

    Thanks Arnaudd, I need redundancy, so if Ceph can provide it, I can remove the ZFS, no? I have only 2 hard drives on each node and I want to work with shared storage for HA and redundancy. Regards,
  12. Proxmox v5 and Ceph + ZFS

    Hi Alwin, 1. Thanks, so I can use Ceph only and still get data redundancy? 2. The Proxmox OS is installed on sdc. We have a total of 3 drives in each server (microblades): 2x PM863a 960GB SSD for data, 1x SATA DOM 128GB for the Proxmox OS
  13. Ceph + ZFS on Proxmox

    Hi, is there any reason to use ZFS in a Ceph environment, or can I trust Ceph to handle the "RAID" and redundancy for the disks? Can I set the redundancy for Ceph disks? Regards,
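    For reference, Ceph expresses redundancy per pool as a replica count rather than per-disk RAID; a sketch of how that is typically set (pool name and PG count are illustrative placeholders):

        # create a replicated pool
        ceph osd pool create vm-pool 128 128 replicated
        # keep 3 copies of each object, require at least 2 for writes
        ceph osd pool set vm-pool size 3
        ceph osd pool set vm-pool min_size 2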
  14. Proxmox 5 + LXC - Max Network interfaces

    Thanks Dietmar, it works, but it gives me a new concern: https://forum.proxmox.com/threads/ip-management-and-security.37395/
  15. IP Management and security

    Hi, from what I see, it's possible to manage/add IPs in both LXC and KVM guests without any control by the admin. I mean, I can assign 80.179.1.1 to an LXC container and that guest can add eth0:0 for 80.179.1.2, eth0:1 for 80.179.1.3, eth0:2 for 80.179.1.4, etc... even though I assigned only one interface and one IP...
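    To make the concern concrete, a sketch of what a root user inside the guest can do regardless of what the admin configured (addresses are taken from the post; the /24 prefix is an assumption):

        # inside the container or VM, plain iproute2 is enough to add extra aliases
        ip addr add 80.179.1.2/24 dev eth0 label eth0:0
        ip addr add 80.179.1.3/24 dev eth0 label eth0:1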
  16. Proxmox 5 + LXC - Max Network interfaces

    Hi, I need to add 25 IPs to an LXC container on Proxmox 5. I see that I am limited to a maximum of 10 network interfaces, and the "Add" button is now disabled. How can I add more than 10 IPs? Regards,
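    One possible way around the interface limit (not necessarily the answer given in the thread) is to keep several addresses on a single container interface; a sketch using classic ifupdown alias syntax inside the guest, with placeholder addresses:

        # /etc/network/interfaces inside the container
        auto eth0:0
        iface eth0:0 inet static
            address 80.179.1.2
            netmask 255.255.255.0
        auto eth0:1
        iface eth0:1 inet static
            address 80.179.1.3
            netmask 255.255.255.0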
  17. Proxmox v5 and Ceph + ZFS

    Hi, I have 3 nodes with the latest version of Proxmox 5.x. Each of these 3 nodes has 3 local hard drives: 1 drive used for boot and the Proxmox OS, and 2x Samsung SSD SM863 used for a ZFS pool with RAID1. Now I try to install Ceph and: 1. during the installation I got this error: root@server203:~#...
  18. Multiple IPs for bonds on same Interfaces

    Hi, I have blades with 2x Ethernet devices, each one connected to a switch module. I created a bond over those 2 interfaces for HA, and this bond got the "Public / Internet" IP. Now I need to add additional private IPs to the same bond. How can I do that? How can I set additional IPs for the same bond and same bond...
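    A sketch of one way to hang an extra private address off the same bond on the host, using a classic ifupdown alias (interface names and all addresses are placeholders; the assumption that the public IP sits on a bridge over the bond follows the usual Proxmox layout, not the thread):

        # /etc/network/interfaces on the node
        auto bond0
        iface bond0 inet manual
            bond-slaves eno1 eno2
            bond-miimon 100
            bond-mode active-backup

        auto vmbr0
        iface vmbr0 inet static
            address 203.0.113.10
            netmask 255.255.255.0
            gateway 203.0.113.1
            bridge-ports bond0
            bridge-stp off
            bridge-fd 0

        # additional private IP on the same bond/bridge
        auto vmbr0:0
        iface vmbr0:0 inet static
            address 10.0.0.10
            netmask 255.255.255.0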
  19. Server Separation from a Cluster

    Each node with local storage
