I have a Kubernetes VM and an unrelated VM on the same host in a cluster using an ovs-bridge. I'm able to ping between the two VMs, but when I try to ping from a Kubernetes container it fails. Using tcpdump on the ovs-bridge interfaces I can see the ping request and the reply on the unrelated VM's interface...
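In case it helps anyone debugging something similar, a minimal sketch of how I'd narrow down where the reply dies, assuming the OVS bridge is called ovsbr0 and the container traffic enters via a veth port (both names are placeholders for whatever ovs-vsctl shows on your host):

    # List every port attached to the bridge (ovsbr0 is a placeholder name)
    ovs-vsctl list-ports ovsbr0
    # Watch ICMP on the bridge, then on each port in turn, to see where the reply stops
    tcpdump -ni ovsbr0 icmp
    tcpdump -ni vnet0 icmp
    # Dump the flow table to check whether OVS is forwarding or dropping the traffic
    ovs-ofctl dump-flows ovsbr0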
Also, be sure to enable logging on the firewall for all rules on that VLAN. You'll be able to track anyone trying to hit your iLO (even if you have it blocked).
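If the firewall in question happens to be a Linux box rather than an appliance, a minimal sketch of a log-then-block rule pair for the iLO address (the interface name vlan100 and the address 192.0.2.10 are placeholders):

    # Log every forwarded packet aimed at the iLO, then drop it
    iptables -A FORWARD -i vlan100 -d 192.0.2.10 -j LOG --log-prefix "ILO-ATTEMPT: "
    iptables -A FORWARD -i vlan100 -d 192.0.2.10 -j DROP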
Definitely sounds like someone gained unauthorized access. As a general precaution, you should move your iLOs to a management VLAN that has no access to anything other than a few designated VLANs. I would suggest deleting the user and upgrading iLO. The most recent version for iLO 4 can be found...
The year break suggests it's not a rootkit but how you were re-infected depends on a lot of factors. What model server and version of iLO are you using? Did you change the default credentials?
Running into issues after following the instructions here: https://pve.proxmox.com/wiki/Tape_Drives#Installing_from_an_adapted_deb_packages_of_scst_3.1.x_for_Debian_Jessie_8.x_.28Preferred_method_and_recommended.29
I am working with scst 3.2.x rather than 3.1.x (though I should really be...
After further investigation it seems Proxmox just replicated parts of the directory structure to /mnt based on the storage.cfg from the other server. This is confusing behavior to say the least.
When joining a new host, the data stores on the other host(s) become available. I'm trying to figure out how this is set up. I assumed it was NFS, but the folders don't show up as mounts; they show up as local folders.
What method is used to share storage once a host is joined to a new cluster? I've been looking for some sort of config file but haven't been able to find any information about it. Does it use the Corosync network?
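For what it's worth, my understanding is that the storage definitions live in /etc/pve/storage.cfg, and /etc/pve itself is pmxcfs, a small clustered filesystem that is replicated between nodes over the Corosync network. The file is cluster-wide, so any storage entry is assumed to exist on every node unless it carries a nodes restriction, which would explain directories appearing under /mnt as plain local folders. A sketch of such an entry (the name and path are examples only):

    dir: backup-store
            path /mnt/backup
            content backup,iso
            nodes pve1

Without the nodes line, every cluster member treats /mnt/backup as a local directory and creates it if it's missing.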
I was able to get it up and running by manually recreating the container from the previous image.
The first transfer had corrupted the image, and the move succeeded on the second try.
I have an LXC container (Ubuntu 16.04) that refuses to start after moving it using the GUI.
Log from lxc-start:
lxc-start 103 20180820144828.681 ERROR lxc_conf - conf.c:run_buffer:347 - Script exited with status 255
lxc-start 103 20180820144828.681 ERROR lxc_start - start.c:lxc_init:815 -...
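For anyone else chasing a status-255 exit, a minimal sketch of pulling a more detailed log out of lxc-start (container 103 as in the log above; the log path is arbitrary):

    # Run the container in the foreground with debug-level logging to a file
    lxc-start -n 103 -F -l DEBUG -o /tmp/lxc-103.log

The debug log should show which hook script exited with 255.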
The NVMe device is acting as the cache (bcache) for a two-drive mdraid 10 using ext4.
EDIT: Bcache is configured for writeback; I need the write speed that it gives.
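For reference, a minimal sketch of inspecting and setting the mode through sysfs (bcache0 stands in for whatever device sits in front of the md array):

    # Show the available modes; the active one appears in brackets
    cat /sys/block/bcache0/bcache/cache_mode
    # Switch to writeback (writethrough is the safer but slower default)
    echo writeback > /sys/block/bcache0/bcache/cache_mode
    # Check how much dirty data is still waiting to be flushed to the backing device
    cat /sys/block/bcache0/bcache/dirty_data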