Search results

  1. High RAM usage in Proxmox 4.4

    df -h doesn't show anything unusual.

        root@pve:~# df -h
        Filesystem      Size  Used Avail Use% Mounted on
        udev             10M     0   10M   0% /dev
        tmpfs           6.3G  8.8M  6.3G   1% /run
        /dev/dm-0       583G   41G  518G   8% /
        tmpfs            16G   43M   16G   1% /dev/shm
        tmpfs...
  2. High RAM usage in Proxmox 4.4

    Dropping caches didn't help.

        root@pve:~# echo 1 > /proc/sys/vm/drop_caches
        root@pve:~# free -mh
                     total   used   free  shared  buffers  cached
        Mem:           31G    12G    19G     91M     828K    158M
        -/+ buffers/cache:    12G    19G
        Swap...
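    For reference, the drop_caches interface supports three levels, and the post above only uses level 1 (page cache). A minimal sketch of the fuller invocation, in case it helps reproduce this:

        # Flush dirty pages first so more memory becomes reclaimable
        sync
        # 1 = page cache, 2 = dentries and inodes, 3 = both
        echo 3 > /proc/sys/vm/drop_caches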
  3. High RAM usage in Proxmox 4.4

    No, I don't use ZFS. I've attached my ps output.
  4. Proxmox 4.4 node ignores SWAP setting.

    I have a CentOS 7 CT with a 2 GB swap limit, but when I boot into it, it shows me the host's 8 GB of swap as available. Weird?
  5. High RAM usage in Proxmox 4.4

    I have 32 GB of memory, which I split between two CTs: 1. 2 GB of RAM - CentOS 6 for HAProxy and nginx. 2. 29 GB of RAM - CentOS 7 for a Percona Cluster database. I noticed heavy swapping on the 29 GB VM and decided to stop it, and then I completely removed it from the system. When I SSH'ed into the...
  6. ZFS trim and over-provisioning support

    When installing, I can only select the ZFS RAID type and which disks to use. I can't select how much disk space should be used (I assume the installer uses the whole pool). Or are you talking about something else here?
  7. ZFS trim and over-provisioning support

    Yep, that's what I've read so far as well. Do you think a weekly cron job that issues a TRIM command directly to the disks backing the ZFS RAID would solve the problem?
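    For reference, ZFS on Linux only gained native TRIM support later (version 0.8+), which makes such a cron job unnecessary on current releases. A minimal sketch, assuming a hypothetical pool named rpool:

        # Continuous, automatic TRIM as blocks are freed (ZFS 0.8+)
        zpool set autotrim=on rpool

        # Or a periodic manual TRIM, e.g. run weekly from cron
        zpool trim rpool

        # Check TRIM progress per vdev
        zpool status -t rpool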
  8. ZFS trim and over-provisioning support

    Hi! I want to install Proxmox on a group of SSDs, and I have a few questions. Does ZFS RAID support TRIM in Proxmox? I want to pool my 4 SSD drives into a RAID 0 pool. Is it possible to do "over-provisioning" (partition less space than the actual size of the drives)? From what I saw, the...
  9. Why does proxmox say 5gb is not enough for hdsize while it uses 1.3 gb in all my other installations

    I am installing on an SSD and trying to save every GB I can. I tried 3 and 5 gigabytes, and it tells me that the space is not enough AFTER I fill out all the other information, so I have to restart the server and go through the installation again. It's so frustrating. First of all, why do you say...
  10. tun devices in ve 4 (lxc)

    Nope. Tried this on a new Proxmox machine. Died.

        root@pve2:~# cat > /usr/share/lxc/config/common.conf.d/02-openvpn.conf << EOL
        > lxc.cgroup.devices.allow = c 10:200 rwm
        > EOL
        root@pve2:~# pct start 103
        (long wait)

    And we're gone at this...
  11. bug in "pct restore"

    I am transferring an LXC between Proxmox nodes. When I run "pct restore ..." on the new node, the process fails at the end with "unable to open file '/etc/pve/firewall/103.fw.tmp.11828' - No such file or directory" and the restore is cancelled. The solution is simple - run "mkdir...
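    The snippet is truncated, but judging from the quoted error path, the missing directory is presumably /etc/pve/firewall. A minimal sketch of that assumption:

        # Assumption (inferred from the error path above, not quoted from the post):
        # recreate the firewall config directory on the new node, then retry the restore
        mkdir -p /etc/pve/firewall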
  12. pct restore to lvm (how to restore lxc backup to a logical volume instead of raw image file)?

    Got it. "system_lvms" was the name of my LVM storage as set in Proxmox.

        pct restore 103 vzdump-lxc-103-2016_03_15-06_09_03.tar.lzo --storage system_lvms
        Logical volume "vm-103-disk-1" created.

    Thank you.
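    If the storage name is unknown, the storages configured on a node can be listed first; a minimal sketch (the storage ID system_lvms comes from the post above):

        # List configured storages with their types and usage
        pvesm status

        # Restore the backup onto the chosen storage ID
        pct restore 103 vzdump-lxc-103-2016_03_15-06_09_03.tar.lzo --storage system_lvms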
  13. pct restore to lvm (how to restore lxc backup to a logical volume instead of raw image file)?

    Hello, I am moving an LXC from one Proxmox node to another. I keep my LXCs on LVM instead of raw images. The backup was fine, but when I ran "pct restore" on the new node, it tried to restore my backup to a raw image file in /var/lib/vz/images instead. How can I properly restore a .lzo LXC backup...
  14. tun devices in ve 4 (lxc)

    I found a solution for how to do a clean start of OpenVPN inside LXC. First, on the Proxmox host, alter /etc/pve/lxc/[ID].conf, where ID is the ID of your LXC:

        cat >> /etc/pve/lxc/[ID].conf << EOL
        lxc.cgroup.devices.allow = c 10:200 rwm
        EOL

    OR!!! If you want to set this option automatically for ALL LXC...
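    The cgroup rule only permits access to the device; inside the container, /dev/net/tun usually still has to exist. A minimal sketch of the customary companion step (an assumption, not quoted from the post):

        # Inside the container: create the tun device node matching c 10:200 above
        mkdir -p /dev/net
        mknod /dev/net/tun c 10 200
        chmod 666 /dev/net/tun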
  15. "pct start" killed my whole Proxmox server (again)

    I found a solution for how to start OpenVPN inside the LXC container without crashing Proxmox: https://forum.proxmox.com/threads/tun-devices-in-ve-4-lxc.23473/#post-132999
  16. "pct start" killed my whole Proxmox server (again)

        root@pve:~# pveversion -v
        proxmox-ve: 4.1-37 (running kernel: 4.2.8-1-pve)
        pve-manager: 4.1-13 (running version: 4.1-13/cfb599fb)
        pve-kernel-4.2.6-1-pve: 4.2.6-36
        pve-kernel-4.2.8-1-pve: 4.2.8-37
        lvm2: 2.02.116-pve2
        corosync-pve: 2.3.5-2
        libqb0: 1.0-1
        pve-cluster: 4.0-32
        qemu-server: 4.0-55...
  17. tun devices in ve 4 (lxc)

    Warning!!! Do not use this solution - it will cause your Proxmox to break - you will lose SSH access to your system. For some reason, using autodev causes a bug to appear: https://forum.proxmox.com/threads/pct-start-killed-my-whole-proxmox-server-again.26468/
  18. [SOLVED] OpenVPN Tun\Tap device - Cannot open TUN/TAP dev /dev/net/tun: No such file or directory (errno=2)

    Warning!!! Do not use this solution - it will cause your Proxmox to break - you will lose SSH access to your system. For some reason, using autodev causes a bug to appear: https://forum.proxmox.com/threads/pct-start-killed-my-whole-proxmox-server-again.26468/
  19. "pct start" killed my whole Proxmox server (again)

    Here is what I tried to do (and what actually caused the problem). Looks like there is a bug with autodev and LXC:

        cat > /usr/share/lxc/config/common.conf.d/02-openvpn-auto-tun.conf << EOL
        lxc.hook.autodev = /usr/share/lxc/hooks/openvpn-auto-tun
        EOL
        cat > /usr/share/lxc/hooks/openvpn-auto-tun...
  20. "pct start" killed my whole Proxmox server (again)

    I googled, and someone has already reported exactly the same bug: https://bugs.launchpad.net/ubuntu/+source/lxc/+bug/1425477