This is a bit of a generic question but I thought I'd ask in case anyone had ever seen issues from Qualys like this before.
We currently ingest our Qualys data three times a day (every 8 hours) using the Qualys TA, but I've found that sometimes certain scan results are simply missing from our data. For example, a scan runs from 1pm-3pm on a Friday and scans 2,000 hosts (as seen in Qualys), but Splunk only has data on 1,500 of those hosts over that time frame.
Has anyone seen anything like this before?
Any help/suggestions would be appreciated.
We have the same issue with the Qualys WAS module and are investigating it with Qualys support (very slowly, actually). All we have found so far is that the issue is most likely related to a timestamp parse error. To check whether you have the same root cause, you can run this search:
index=_internal 'your_qualys_scan_sourcetype' log_level=WARN
If you see a message like "Failed to parse timestamp. Defaulting to timestamp of previous event…", you probably have the same issue.
P.S. I did say that we have the same issue with WAS, but it's quite possible the issue exists in other modules as well. We use Cloud Agent instead of VM; Cloud Agent performs 'mini-scans' several times per day, so we may simply not be aware of a data integrity problem there.
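To quantify the gap on the VM side, a search along these lines can show how many distinct hosts landed in each 8-hour ingest window, which you can then compare against the host count Qualys reports for the scan (the index, sourcetype, and HOST_ID field are assumptions here; adjust them to match your environment):

index=qualys sourcetype=qualys:hostDetection earliest=-24h
| bin _time span=8h
| stats dc(HOST_ID) as hosts_ingested by _time

If hosts_ingested is consistently lower than the Qualys console count for the same window, the data is being dropped or truncated somewhere between the TA and the index.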
Can you try changing the limit setting in limits.conf?
limit = <integer>
* The maximum number of fields that an automatic key-value field extraction
  (auto kv) can generate at search time.
* If search-time field extractions are disabled (KVMODE=none in props.conf),
  then this setting determines the number of index-time fields that will be
  returned.
* The summary fields 'host', 'index', 'source', 'sourcetype', 'eventtype',
  'linecount', 'splunkserver', and 'splunkservergroup' do not count against
  this limit and will always be returned.
* Increase this setting if, for example, you have indexed data with a large
  number of columns and want to ensure that searches display all fields from
  that data.
* Default: 100
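For reference, the setting lives in the [kv] stanza, so raising it on the search head would look something like this (500 is just an illustrative value; pick one based on how many fields your Qualys events actually carry):

# $SPLUNK_HOME/etc/system/local/limits.conf
[kv]
limit = 500

Restart Splunk (or the search head) after changing it so the new limit takes effect.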
Hope this helps!!!