Hello,
I'll cast my vote to see virtual appliances for both Nexpose and OpenVAS. Willing to bet somebody could do a better job than mine; I've just spent the day downloading, converting and deploying the community versions of both.
Both are supplied as OVF templates for VMware with VMDK disks - no bother, a qemu-img convert got me a raw disk for each image.
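For anyone wanting to repeat this, the conversion was a single qemu-img call per disk (the filename below is a placeholder, use whatever the OVF bundle extracts to):

```shell
# Convert the appliance's VMDK disk to a raw image for use under KVM.
# "appliance-disk1.vmdk" is an illustrative name, not the actual file
# shipped with either product.
qemu-img convert -f vmdk -O raw appliance-disk1.vmdk appliance-disk1.raw
```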
My conversion of OpenVAS worked "nicer from the off" - from memory I had to do very little modification to get the machine stable. It did crash "out of puff", but as soon as I disabled memory ballooning that error went away; I believe it's been documented somewhere as an outstanding issue.
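If it helps anyone, disabling ballooning under libvirt is just a case of editing the guest definition (virsh edit) and setting the balloon device to none - this is how I did it, your setup may differ:

```xml
<!-- inside the <devices> section of the libvirt domain XML -->
<memballoon model='none'/>
```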
Nexpose on the other hand took quite a bit of work to strip out what are evidently hand-coded workarounds for VMware coexistence, and to remove some non-required modules, e.g. mpt RAID monitoring, which causes errors. It has a much larger memory footprint than OpenVAS, possibly due to postgres ticking away in the background. They've both been up for 4 hours now not doing anything, and at rest OpenVAS is consuming 895 MB of memory compared to Nexpose at 5.54 GB. I haven't logged memory usage under any load yet.
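The mpt module errors went away once I blacklisted the drivers - something along these lines in modprobe.d (module names taken from my boot errors, so double-check against yours):

```
# /etc/modprobe.d/blacklist-mpt.conf
blacklist mptbase
blacklist mptctl
```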
The kernels used by both appear to support virtio disk and networking devices; I didn't have to do anything there.
I get a few kernel timing warnings about "kernel.perf_event_max_sample_rate" being too low, but I think it's just a case of optimisation for the hardware I'm using; still working through that one.
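In case anyone hits the same warning, setting the sysctl explicitly shuts it up - the value below is only a starting point I'm experimenting with, not a recommendation:

```shell
# Cap the perf sampling rate so the kernel stops complaining about it;
# 25000 is an experimental value, tune for your own hardware.
sysctl -w kernel.perf_event_max_sample_rate=25000

# persist across reboots
echo 'kernel.perf_event_max_sample_rate = 25000' > /etc/sysctl.d/90-perf.conf
```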
As above, my vote is cast, happy to pitch in with testing if this is viable.
Monk