Search results

  1. Multiple IPs for bonds on same Interfaces

    Hi, I have blades with 2x Ethernet devices, each one connected to a switch module. I created a bond from those 2 interfaces for HA, and this bond got the "Public / Internet" IP. Now I need to add additional private IPs to the same bond. Can I do that? How can I set additional IPs for the same bond... (a config sketch follows after this list)
  2. Server Separation from a Cluster

    each node with local storage
  3. Server Separation from a Cluster

    Hi, this is what I did, but now the server that was removed from the cluster is not functioning well. I can't add pools, groups, storage, etc... it says the cluster does not exist
  4. Server Separation from a Cluster

    Hi, I have a Proxmox v5 cluster with 5 nodes, all running LXC containers. Now I need to separate one node from the cluster and let it continue to work independently, keeping the existing containers running on this separated node with minimal downtime. Suggestions? (a command sketch follows after this list) Thanks!
  5. Move Large OpenVZ Container to LXC

    Hi all, I have a server running Proxmox 3.4 with OpenVZ containers; the large containers can be 400GB and even 600GB each. Now I have a new server with Proxmox 5.0 and LXC support. I created backup files for the OpenVZ containers, but I can't move these files to the dump directory since it's limited to... (a restore sketch follows after this list)
  6. LXC and Live Migration

    I want minimum downtime during migration, so this is not the optimal solution. Is there no way to migrate -> shut down -> sync the delta data -> start on the target node?
  7. ZFS and RAID

    Hi, I have an IBM 3550 M3 server with an M5015 RAID controller and 8 SSD disks. This RAID controller does not support JBOD. Is it possible and stable to create 8 single-disk RAID0 spans and run ZFS on top of them (a zpool sketch follows after this list), or should I give up on ZFS and use RAID10 at the hardware level? Thanks!
  8. Proxmox btrfs support roadmap, as a fallback for possible licensing issues with ZFS on Linux

    Hi, thanks! I just didn't see it here: https://pve.proxmox.com/wiki/Roadmap#Roadmap
  9. Right RAID Settings with Proxmox 5

    Sure, that's what I want to have; the question is whether the "cost" of using software RAID, in performance / reliability, is high...
  10. htop shows incorrect data inside LXC Container

    Thanks for that update. Any suggestion on how to fix it for containers converted from OpenVZ / old containers?
  11. htop shows incorrect data inside LXC Container

    Please:

        root@server1:~# pct config 206
        arch: amd64
        cpulimit: 8
        cpuunits: 1024
        hostname: automatic1.domain.co.il
        memory: 8192
        net0: name=eth0,bridge=vmbr0,gw=10.0.0.1,hwaddr=0A:2B:65:14:E4:94,ip=10.0.0.51/32,rate=5,type=veth
        onboot: 1
        ostype: centos
        rootfs: local-lvm:vm-206-disk-1,size=100G
        swap...
  12. LXC and Live Migration

    sure:

        root@server1:~# pveversion -v
        proxmox-ve: 5.0-16 (running kernel: 4.10.17-1-pve)
        pve-manager: 5.0-23 (running version: 5.0-23/af4267bf)
        pve-kernel-4.10.11-1-pve: 4.10.11-9
        pve-kernel-4.10.17-1-pve: 4.10.17-16
        libpve-http-server-perl: 2.0-5
        lvm2: 2.02.168-pve2
        corosync: 2.4.2-pve3
        libqb0...
  13. htop shows incorrect data inside LXC Container

    Hi, using Proxmox v5 5.0-23/af4267bf:

        root@server1:~# pveversion
        pve-manager/5.0-23/af4267bf (running kernel: 4.10.17-1-pve)
        root@server1:~# apt-get update && apt-get dist-upgrade
        Get:1 http://security.debian.org stretch/updates InRelease [62.9 kB]
        Ign:2 http://ftp.debian.org/debian stretch...
  14. htop shows incorrect data inside LXC Container

    Hi, I have a node with 40 cores, and on this node I have an LXC container with 8 cores. When I run htop inside the container, it shows me 40 CPU cores instead of 8. * In Proxmox v3.x with OpenVZ it showed the right number of cores. Any suggestion on how to solve this in Proxmox v5? (a possible fix is sketched after this list) Regards,
  15. LXC and Live Migration

    The restart mode checkbox is not enabled for me; what should I change so that it will work? (see the migration sketch after this list)
  16. LXC and Live Migration

    Thanks wolfgang for that info. Any other idea / solution in the foreseeable future?
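
For item 1, additional private IPs on an existing bond are usually added as alias stanzas in /etc/network/interfaces on a Debian-based Proxmox node. This is a minimal sketch, not taken from the thread; all addresses and interface names are placeholders:

    auto bond0
    iface bond0 inet static
        address 203.0.113.10
        netmask 255.255.255.0
        gateway 203.0.113.1
        slaves eth0 eth1
        bond_miimon 100
        bond_mode active-backup

    # additional private IP on the same bond, as an alias interface
    auto bond0:1
    iface bond0:1 inet static
        address 10.0.0.10
        netmask 255.255.255.0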
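
For items 2-4, making a node standalone without reinstalling roughly follows the sequence in the Proxmox cluster manager documentation; treat the sketch below as an outline and check the docs for your exact version, since containers keep running but the node loses access to shared cluster resources:

    # on the node being separated: stop cluster services, drop the corosync config
    systemctl stop pve-cluster corosync
    pmxcfs -l                      # start pmxcfs in local mode
    rm /etc/pve/corosync.conf
    rm -r /etc/corosync/*
    killall pmxcfs
    systemctl start pve-cluster

    # on one of the remaining cluster nodes: remove the departed node
    pvecm delnode nodename         # "nodename" is a placeholder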
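
For item 5, pct restore can restore an OpenVZ vzdump archive directly as an LXC container, and it accepts an arbitrary archive path, so the backup does not have to sit in the size-limited dump directory. A sketch with placeholder container ID, paths, and archive name:

    # on the Proxmox 3.4 node: back up the OpenVZ container
    vzdump 101 --dumpdir /mnt/backup --compress lzo

    # copy the archive to the new node and restore it as an LXC container
    scp /mnt/backup/vzdump-openvz-101-*.tar.lzo root@new-node:/mnt/backup/
    pct restore 101 /mnt/backup/vzdump-openvz-101-2017_08_01-00_00_00.tar.lzo --storage local-lvm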
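
For item 7, the workaround usually discussed when a controller like the M5015 cannot do JBOD is one single-disk RAID0 volume per SSD, with ZFS redundancy on top; note that ZFS behind a RAID controller is generally discouraged because the controller hides the disks and their caches from ZFS. A sketch with placeholder device names:

    # eight single-disk RAID0 volumes exposed as sda..sdh, striped ZFS mirrors on top
    zpool create -o ashift=12 tank \
        mirror /dev/sda /dev/sdb \
        mirror /dev/sdc /dev/sdd \
        mirror /dev/sde /dev/sdf \
        mirror /dev/sdg /dev/sdh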
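
For items 10, 11, 13, and 14, the quoted config uses cpulimit: 8, which throttles CPU time but does not hide cores from lxcfs, so htop still sees all 40. On Proxmox 5 the cores option pins the container to a fixed set of cores, which tools like htop then report. A sketch, assuming container 206 from the thread:

    pct set 206 -cores 8          # assign 8 cores; lxcfs then exposes only these
    pct set 206 -delete cpulimit  # optionally drop the old cpulimit setting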
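
For items 6, 15, and 16, true live migration of LXC containers is not supported, but later Proxmox releases added a restart mode to pct migrate (shut down, migrate, start on the target), which keeps downtime short; if the checkbox is greyed out, the installed version may simply predate that feature. A sketch, assuming a version where restart mode exists and with placeholder names:

    # migrate container 206 to node2 in restart mode
    pct migrate 206 node2 --restart --timeout 180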