Search results

  1. manu

    [KNOWN ZFS PROBLEM] Freezing while high IO (only on host - not guest)

    1 0 6613 2 20 0 0 0 io_sch D ? 0:02 \_ [txg_sync] This txg_sync here means ZFS is not able to flush your IO to the disks fast enough for what you are asking it to write. So the system is probably just waiting for the disks to respond. I am surprised to see this...
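    To see whether the pool really is the bottleneck, the write throughput per device can be watched while the load runs; a quick sketch (the interval is just an example value):

```shell
# Show per-device read/write operations and bandwidth for all pools,
# refreshed every 2 seconds; saturated disks show up immediately.
zpool iostat -v 2
```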
  2. manu

    [KNOWN ZFS PROBLEM] Freezing while high IO (only on host - not guest)

    Usually system "hangs" boil down to slow IO. You can try the command ps faxl and look if you have processes in the "D" state (uninterruptible sleep). Also have a look in your kernel messages if you have the message "task blocked for more than 120s" and note...
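    In ps faxl output the process state (STAT) is the 10th column, so the D-state processes can be filtered with awk. On a live system you would pipe ps faxl straight into awk; the sketch below runs the same filter against two captured sample lines (illustrative data) so the mechanics are visible:

```shell
# On a live system: ps faxl | awk '$10 ~ /^D/'
# Demonstrated here on sample lines; only the D-state process is printed.
printf '%s\n' \
  '1 0 6613 2 20 0 0 0 io_sch D ? 0:02 [txg_sync]' \
  '0 1000 1234 1 20 0 9000 800 - S pts/0 0:00 bash' \
  | awk '$10 ~ /^D/'

# Then check the kernel log for blocked-task warnings:
# dmesg | grep -i 'blocked for more than'
```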
  3. manu

    How in proxmox 5.1 to throw a physical network adapter in KVM?

    What you probably want is called "PCI Passthrough" in the KVM world. See https://pve.proxmox.com/wiki/Pci_passthrough
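    A minimal CLI sketch, assuming IOMMU is already enabled as described on the wiki page; the VM ID (100) and the PCI address (01:00.0) are placeholders you must replace with your own values:

```shell
# Find the PCI address of the network adapter:
lspci | grep -i ethernet
# Attach that device to VM 100 as its first passthrough device:
qm set 100 -hostpci0 01:00.0
```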
  4. manu

    Decommission Cluster

    Yes, this is possible, see https://pve4.intern.lab:8006/pve-docs/chapter-pvecm.html#_remove_a_cluster_node You have to execute the delnode command on the node you want to keep, giving as parameter the nodes you want to delete
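    A minimal sketch with hypothetical node names; both commands run on the node that stays:

```shell
# Remove nodes "node2" and "node3" from the cluster, keeping the node
# this is executed on. Run one delnode per node to remove.
pvecm delnode node2
pvecm delnode node3
```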
  5. manu

    iSCSI Best Practice

    For connecting a SAN to Proxmox: * create a large LUN on the SAN side * add an iSCSI LUN using Datacenter -> Add storage * add a Volume Group on the LUN using Add VG -> Use existing Base Volume * this way you can use LVs on the large LUN for VMs, without having to create a LUN for each VM...
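    The steps above are done in the GUI; for the volume-group part, the equivalent from the shell is a single vgcreate, assuming the iSCSI LUN shows up as /dev/sdb (the device name and VG name are assumptions):

```shell
# Turn the large LUN into an LVM volume group; PVE can then carve
# out one logical volume per VM disk on it.
vgcreate vg_san /dev/sdb
```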
  6. manu

    OpenBSD client uses 100% CPU and freezes

    OpenBSD VMs are known to hang from time to time on Qemu/KVM. This was discussed on the OpenBSD mailing lists here: http://openbsd-archive.7691.n7.nabble.com/Openbsd-6-1-and-Current-Console-Freezes-and-lockup-Proxmox-PVE5-0-td322999.html People tried to mitigate the issue with a serial terminal (...
  7. manu

    KVM crash during vzdump of virtio-scsi using CEPH

    @lankaster Tried your setup here and backup works fine. This is the relevant line from my vm.conf: scsi0: pvepool_vm:vm-610-disk-2,cache=writeback,discard=on,size=9G Also using virtio-scsi, Ceph Luminous and PVE 5.1
  8. manu

    LXC/mdadm/SMB : Memory leakage during large file transfer by SMB protocol

    Most probably the host kernel killed the Samba process because it was using more memory than the whole container 103 was allowed to use: * [ 1044.690344] Memory cgroup out of memory: Kill process 17172 (smbd) score 23 or sacrifice child * Task in /lxc/103/ns/system.slice killed as a result of...
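    To check the container's current limit and raise it, a sketch (the container ID matches the log above; the new size of 4096 MiB is just an example value):

```shell
# Show the current memory limit of container 103:
pct config 103 | grep memory
# Raise the limit so large SMB transfers no longer trip the cgroup OOM killer:
pct set 103 -memory 4096
```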
  9. manu

    New cluster install with Ceph

    The critical factor for Ceph is that you need 10 Gbit networks between the nodes, preferably on a dedicated NIC. That's one of the most important things to consider.
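    Before relying on it, it is worth measuring the actual node-to-node bandwidth, for instance with iperf3 (hostname is a placeholder; iperf3 may need to be installed first):

```shell
# On the first node, start a throughput server:
iperf3 -s
# On a second node, measure throughput towards it:
iperf3 -c node1
```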
  10. manu

    Network Adapter showing as unclaimed

    First, do you see the link LED "on" on this card? This should work whether the driver for this card is loaded or not. If yes, try to see if the kernel has associated a driver with the device with the command: lspci -k | sed -n '/Ethernet/,/Kernel modules/p' This is an example output on my...
  11. manu

    how to get vm ip address from api

    If you use DHCP, then you can configure the DHCP server to send a fixed IP address corresponding to the MAC address of your container. You can get the IP address of a VM (not LXC) via qemu-guest-agent, but this only works with Linux VMs (if you build the agent yourself it will work for Windows...
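    Assuming the guest agent is installed in the VM and enabled in its options, the addresses can be queried from the host CLI or over the API; the VM ID and node name below are placeholders:

```shell
# From the host CLI:
qm agent 100 network-get-interfaces
# The equivalent API call:
pvesh get /nodes/mynode/qemu/100/agent/network-get-interfaces
```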
  12. manu

    how to get vm ip address from api

    For LXC containers, the IP address you set in the configuration is then applied inside the LXC container. So it is the same in the end (NB: in your shell script you also read the IP address from the configuration). As long as your containers are not using DHCP, this should work. If not, please...
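    Reading the address back from the configuration is a one-liner. On a real host you would pipe pct config into grep; the sketch below runs the same filter against a captured sample configuration (IDs and addresses are illustrative):

```shell
# On a live host: pct config 103 | grep '^net'
# Demonstrated on a sample configuration; only the net0 line is printed.
printf '%s\n' \
  'arch: amd64' \
  'memory: 2048' \
  'net0: name=eth0,bridge=vmbr0,gw=192.168.1.1,ip=192.168.1.103/24' \
  | grep '^net'
```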
  13. manu

    how to get vm ip address from api

    Hi Vinny For such a task I would use one of the API client libraries. You can either use proxmoxer (https://github.com/swayf/proxmoxer) if Python is your thing, or our own libpve-apiclient-perl, which you can install on any Debian-based system (no need for it to be a PVE system). When you use these...
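    Install sketches for both routes; package names are as published at the time of the post, so verify them against current repositories:

```shell
# Python route:
pip install proxmoxer requests
# Perl route, on any Debian-based system:
apt-get install libpve-apiclient-perl
```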
  14. manu

    high io delay with iscsi

    That can be really a lot of VMs for such a small pipe. Also, reading the LACP entry on Wikipedia: "This selects the same NIC slave for each destination MAC address", which means all your outgoing writes to the iSCSI target might go through a single Gbit link... I think your system is in danger...
  15. manu

    high io delay with iscsi

    The VMs remounting their volumes as read-only is an indication that you're trying to push too much IO through the pipe. I would advise you to monitor the network latency while doing a restore (with ping) and the IO latency with ioping. How many VMs do you have, and which link do you have to your...
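    A monitoring sketch; the portal address and storage path are placeholders, and ioping may need to be installed first:

```shell
# Network latency towards the iSCSI portal while the restore runs:
ping 192.168.1.50
# IO latency as seen by the initiator, against a directory on the
# iSCSI-backed storage:
ioping /mnt/pve/iscsi-storage
```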
  16. manu

    Program 'works' in VM but not CT

    Without a debug trace of your program, it will be difficult to go any further. Does the error that you see in the kernel log relate to your program? I am aware of programs which do not run properly in unprivileged containers when they try to create device nodes in /dev or execute mounts, but I...
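    If the program can be started by hand inside the container, such a trace can be captured with strace; the binary name is a placeholder, and mknod/mount are singled out because those are the operations that typically fail in unprivileged containers:

```shell
# Follow child processes (-f) and show only device- and mount-related
# syscalls, so a permission failure stands out.
strace -f -e trace=mknod,mknodat,mount ./myprogram
```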
  17. manu

    Access Proxmox over VPN

    PVE needs to have a configured interface in the LAN you're giving access to via OpenVPN.
  18. manu

    Program 'works' in VM but not CT

    How are those two programs (Ombi and Plex) supposed to interact with each other? Via TCP/IP? If yes, then I would start tcpdump on the listening port of Plex, since I suppose this is the server and Ombi the client, to see if there is any traffic coming: tcpdump -i my_container_interface port...
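    A complete invocation might look like the sketch below; the interface name is a placeholder for the container's veth device, and 32400 is Plex's default port (assuming the default was kept):

```shell
# Capture any traffic reaching the Plex listening port on the
# container's network interface:
tcpdump -i veth103i0 port 32400
```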
  19. manu

    Redirect IP port for a container via Proxmox firewall?

    The PVE firewall blocks or allows ports, but port redirection is outside its functionality. If each of your containers has its own IP, you could for instance add a reverse proxy on the container which would forward the traffic to the real service running on port 8000
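    As a minimal stand-in for a full reverse proxy, a plain TCP forwarder such as socat can do the redirection inside the container (both port numbers are placeholders):

```shell
# Accept connections on port 80 and forward each one to the real
# service listening on 8000; "fork" handles one process per client.
socat TCP-LISTEN:80,fork,reuseaddr TCP:127.0.0.1:8000
```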
  20. manu

    [SOLVED] Error converting .vmdk file to .qcow2 file

    Do you have the option to export the disk to an "older" vmdk format? It might be that your disk encodes VMDK features that qemu does not yet support. By the way, if you manage to get past the disk format problem, you can use the qm importovf command-line tool to import an OVF export. See qm help...
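    A conversion and import sketch; the file paths, VM ID, and storage name are placeholders:

```shell
# Convert the exported disk to qcow2 (works once the vmdk is in a
# sub-format qemu supports):
qemu-img convert -f vmdk -O qcow2 disk.vmdk disk.qcow2
# Import a whole OVF export as VM 200 onto storage "local-lvm":
qm importovf 200 export.ovf local-lvm
```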
