1 0 6613 2 20 0 0 0 io_sch D ? 0:02 \_ [txg_sync]
This txg_sync here means ZFS is not able to flush your IO to the disks as fast as you are asking it to write. So the system is probably just waiting for the disks to respond.
I am surprised to see this...
Usually system "hangs" boil down to slow IO.
You can try the command
ps faxl
and look whether you have processes in the "D" state (uninterruptible sleep).
Also have a look at your kernel messages to see if you have the message
"task blocked for more than 120s" and note...
Yes, this is possible, see https://pve4.intern.lab:8006/pve-docs/chapter-pvecm.html#_remove_a_cluster_node
You have to execute the delnode command on the node you want to keep, giving as parameter the nodes you want to delete.
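For example, if you keep node1 and want to drop node2 and node3 (the node names here are made up), you would run on node1:
pvecm delnode node2
pvecm delnode node3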
For connecting a SAN to Proxmox:
* create a large LUN on the SAN side
* add an iSCSI LUN using Datacenter -> Add storage
* add a Volume Group on the LUN using Add VG -> Use existing Base Volume
* this way you can use LVs on the large LUN for your VMs, without having to create a LUN for each VM...
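A rough CLI sketch of the same steps, with made-up storage names, portal, IQN and device (the GUI path above does the equivalent for you):
pvesm add iscsi san-iscsi --portal 192.168.0.10 --target iqn.2001-05.com.example:storage
# once the LUN is visible as a block device on the node (e.g. /dev/sdb):
pvcreate /dev/sdb
vgcreate vg_san /dev/sdb
pvesm add lvm san-lvm --vgname vg_san --content images --shared 1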
OpenBSD VMs are known to hang from time to time on Qemu/KVM.
This was discussed on the openbsd mailing lists here: http://openbsd-archive.7691.n7.nabble.com/Openbsd-6-1-and-Current-Console-Freezes-and-lockup-Proxmox-PVE5-0-td322999.html
People tried to mitigate the issue with a serial terminal (...
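If you want to try that, a minimal sketch (VM ID 100 is just an example): attach a serial port to the VM and connect to it from the host with
qm set 100 -serial0 socket
qm terminal 100
and point the OpenBSD guest's console at that serial port (com0) as well.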
@lankaster
Tried your setup here and backup works fine. This is my relevant vm.conf
scsi0: pvepool_vm:vm-610-disk-2,cache=writeback,discard=on,size=9G
Also using virtio-scsi, ceph luminous and PVE 5.1
Most probably the host kernel killed the Samba process because it was using more memory than the whole container 103 was allowed to use:
* [ 1044.690344] Memory cgroup out of memory: Kill process 17172 (smbd) score 23 or sacrifice child
* Task in /lxc/103/ns/system.slice killed as a result of...
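If the container really needs that much memory for Samba, raising its limit is a one-liner (2048 MB is just an example value):
pct set 103 -memory 2048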
The critical factor for Ceph is that you need a 10 Gbit network between the nodes, preferably on a dedicated NIC; that is one of the most important things to consider.
First, do you see the link LED "on" on this card? This should work whether the driver for this card is loaded or not.
If yes, try to see if the kernel associated a driver with the device, with the command:
lspci -k | sed -n '/Ethernet/,/Kernel modules/p'
This is an example output on my...
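(For illustration only, not the output from my machine: on a working system the filtered lines typically end with something like
Kernel driver in use: igb
Kernel modules: igb
i.e. a driver is bound to the device.)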
If you use DHCP, then you can configure the DHCP server to send a fixed IP address corresponding to the MAC address of your container.
You can get the IP address of a VM (not LXC) via the qemu-guest-agent, but this only works with Linux VMs
(if you build the agent yourself it will work for Windows...
For LXC containers, the IP address you set in the configuration is then applied inside the LXC container.
So it is the same in the end (NB: in your shell script you also read the IP address from the configuration).
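For reference, both can also be read from the CLI (the IDs are examples): for a VM via the guest agent, and for a container from its configuration:
qm agent 100 network-get-interfaces
pct config 103 | grep ^net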
As long as your containers are not using DHCP, this should work.
If not, please...
Hi Vinny
For such a task I would use one of the API client libraries.
You can either use proxmoxer (https://github.com/swayf/proxmoxer) if Python is your thing, or our own libpve-apiclient-perl, which you can install on any Debian-based system (no need for it to be a PVE system).
When you use these...
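For reference, these libraries just wrap the REST API, which you can also poke at directly with curl (host, user and password below are placeholders):
curl -k -d "username=root@pam" -d "password=secret" https://pve.example.com:8006/api2/json/access/ticket
# then pass the returned ticket as a cookie on the following calls, e.g.:
curl -k -b "PVEAuthCookie=<ticket from the call above>" https://pve.example.com:8006/api2/json/nodes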
That can really be a lot of VMs for such a small pipe.
Also, reading the LACP entry on Wikipedia:
"This selects the same NIC slave for each destination MAC address"
which means all your outgoing writes to the iSCSI target might go through a single 1 Gbit link ....
I think your system is in danger...
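You can check which mode and transmit hash policy your bond actually uses with (bond0 is an assumption):
grep -i -e mode -e hash /proc/net/bonding/bond0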
The VMs remounting their volumes as read-only is an indication that you're trying to push too much IO through the pipe.
I would advise you to monitor the network latency while doing a restore (with ping) and the IO latency with ioping.
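For example (the target IP and path are placeholders):
ping -i 0.2 192.168.0.20          # network latency towards the iSCSI target during the restore
ioping -c 20 /var/lib/vz          # IO latency on the storage the VMs live on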
How many VMs do you have, and which link do you have to your...
Without a debug trace of your program, it will be difficult to get any further.
Does the error that you see in the kernel log relate to your program?
I am aware of programs which do not run properly in unprivileged containers when they try to create device nodes in /dev or execute mounts, but I...
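To get such a trace you can run the program under strace inside the container (the binary name is a placeholder); failed mknod() or mount() calls in the log would point exactly at that limitation:
strace -f -o /tmp/trace.log ./your_program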
How are those two programs (ombi and plex) supposed to interact with each other? Via TCP/IP?
If yes, then I would start tcpdump on the listening port of plex (since I suppose this is the server and ombi the client) to see if there is any traffic coming in.
tcpdump -i my_container_interface port...
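(As a concrete illustration with assumed values: if Plex listens on its default port 32400 and the container's host-side interface is veth100i0, that would be
tcpdump -ni veth100i0 port 32400
but adapt both to your setup.)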
The PVE firewall blocks or allows ports, but port redirection is outside its functionality.
If each of your containers has its own IP, you could for instance add a reverse proxy on the container which would forward the traffic to the real service running on port 8000.
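If all you actually need is a plain port redirect inside the container (say from port 80 to the service on 8000, the port numbers being examples) and the container may manage its own netfilter rules, a single iptables rule is an alternative to a full reverse proxy:
iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports 8000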
Do you have the option to export the disk to an "older" VMDK format? It might be that your disk encodes VMDK features that QEMU does not yet support.
By the way, if you manage to get past the disk format problem, you can use the qm importovf command line tool to import an OVF export.
See qm help...
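The call itself looks like this (VM ID, path and storage name are placeholders):
qm importovf 200 /path/to/export.ovf local-lvm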