Splunk Search

How can I optimize and improve the performance of my search?

Explorer
index=bigfix sourcetype=software
| eval Hashes_allow_or_deny = if((sha256_allow_or_deny=="*deny*") OR (md5_allow_or_deny=="*deny*") OR (isnull(sha256_allow_or_deny) AND isnull(md5_allow_or_deny)), "Unauthorized", "Authorized")
| eval hashes = mvappend(md5, sha256)
| join bigfix_computer_id [| inputlookup asset_lookup]
| stats values(computer_name) as Computer_Names, values(Hashes_allow_or_deny) as "Authorized/Unauthorized", values(fileName) as FileName by hashes
| fields - hashes
| stats list("Authorized/Unauthorized") as "Authorized/Unauthorized", list(FileName) as FileName by Computer_Names
| where Computer_Names="$computer_name$"

Duration (s)  Component                                    Invocations  Input count  Output count
0.85          command.eval                                 42           354,676      354,676
0.00          command.fields                               22           208,283      208,283
4.22          command.join                                 25           177,338      162,932
6.37          command.search                               21           -            177,338
0.40          command.search.calcfields                    10           177,338      177,338
0.16          command.search.fieldalias                    10           177,338      177,338
0.10          command.search.filter                        10           -            -
0.06          command.search.index                         21           -            -
0.00          command.search.index.usec_1_8                866          -            -
0.00          command.search.index.usec_512_4096           5            -            -
1.94          command.search.rawdata                       10           -            -
1.47          command.search.typer                         10           177,338      177,338
1.06          command.search.kv                            10           -            -
0.89          command.search.lookups                       10           177,338      177,338
0.03          command.search.tags                          10           177,338      177,338
0.00          command.search.summary                       21           -            -
6.26          command.stats                                27           162,932      61
5.39          command.stats.executeinput                   25           162,932      -
0.16          command.stats.executeoutput                  1            -            -
0.00          command.table                                1            1            2
0.00          command.where                                1            61           1
0.00          dispatch.checkdiskusage                      2            -            -
0.00          dispatch.createdSearchResultInfrastructure   1            -            -
0.14          dispatch.evaluate                            1            -            -
0.08          dispatch.evaluate.search                     1            -            -
0.06          dispatch.evaluate.join                       1            -            -
0.00          dispatch.evaluate.eval                       2            -            -
0.00          dispatch.evaluate.fields                     1            -            -
0.00          dispatch.evaluate.stats                      2            -            -
0.00          dispatch.evaluate.table                      1            -            -
0.00          dispatch.evaluate.where                      1            -            -
2.20          dispatch.fetch                               25           -            -
12.14         dispatch.localSearch                         1            -            -
2.47          dispatch.preview                             3            -            -
1.98          dispatch.preview.command.stats               3            -            181
0.48          dispatch.preview.stats.executeoutput         3            -            -
0.00          dispatch.preview.writeresultstodisk          3            -            -
0.00          dispatch.preview.command.fields              3            92,022       92,022
0.00          dispatch.preview.command.table               3            3            6
0.00          dispatch.preview.command.where               3            181          3
0.40          dispatch.results_combiner                    25           -            -
7.21          dispatch.stream.local                        21           -            -
0.04          dispatch.writeStatus                         13           -            -
0.04          startup.configuration                        1            -            -


SplunkTrust

You have a lot of things in here that kill performance: an if() statement, a lookup table, wildcards, an eval, and a join. If the recommendations below don't speed things up, I would create a summary index and feed the data into a separate index a little at a time, which would dramatically improve search performance. The only downside is some lag between the data being indexed and it being searchable.

Two things I would recommend: first, ditch the leading wildcards in front of "deny" (trailing wildcards are no big deal). Second, look at that if() condition: put the most frequent condition first, then the second most frequent, then the least frequent. You don't want to evaluate two or three conditions before reaching the one that matches most often.
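Putting both recommendations together, a sketch of what a faster version might look like. It replaces the join with the lookup command (assuming asset_lookup is keyed on bigfix_computer_id and carries computer_name) and replaces the if() with a case() whose condition order you would tune to whichever outcome is most frequent in your data. Note also that inside eval, =="*deny*" is a literal string comparison, not a wildcard match; like() with % wildcards does the substring test. If the join was also filtering to known assets, add a where isnotnull(computer_name) after the lookup:

index=bigfix sourcetype=software
| lookup asset_lookup bigfix_computer_id OUTPUT computer_name
| eval Hashes_allow_or_deny = case(
    like(sha256_allow_or_deny, "%deny%") OR like(md5_allow_or_deny, "%deny%"), "Unauthorized",
    isnull(sha256_allow_or_deny) AND isnull(md5_allow_or_deny), "Unauthorized",
    true(), "Authorized")
| eval hashes = mvappend(md5, sha256)
| stats values(computer_name) as Computer_Names, values(Hashes_allow_or_deny) as "Authorized/Unauthorized", values(fileName) as FileName by hashes
| fields - hashes
| stats list("Authorized/Unauthorized") as "Authorized/Unauthorized", list(FileName) as FileName by Computer_Names
| where Computer_Names="$computer_name$"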



Explorer

I have a field cve{} — what does it mean? It has 100+ values. How do I access this field in a search to display all of its values?
index =xyz | table $cve{}$ ??


SplunkTrust
SplunkTrust

I suspect this has something to do with your environment. Are you running Splunk as your SIEM? Splunk creates interesting fields at search time by looking at key-value pairs in your log data, so it could be garbage or it could be important.

You should be able to do index=xyz "cve{}"="*" (field names containing braces need to be quoted).
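For example, to see every value the multivalue field takes across your events, you could expand it and count (field and index names here match the asker's; mvexpand splits each event into one event per value):

index=xyz "cve{}"="*"
| rename "cve{}" as cve
| mvexpand cve
| stats count by cve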


Explorer

So it's a multivalue field: index=nessusta "cve{}"="cve-1999-0502" works.

I would like to rename cve{} to Name so that I can compare it with my lookup

Below is the raw event JSON for the above value.

{
  "description": "the remote ftp server has one or more accounts with a blank password.",
  "plugin_type": "remote",
  "risk_factor": "critical",
  "synopsis": "the remote ftp server has one or more account with a blank password.",
  "solution": "apply complex passwords to all accounts.",
  "id": 11000,
  "plugin_modification_date": "2013/05/16",
  "fname": "ddi_mpeix_ftp_accounts.nasl",
  "xref": ["osvdb:822"],
  "vuln_publication_date": "2001/01/01",
  "cvss_base_score": "10.0",
  "exploit_available": "true",
  "osvdb": ["822"],
  "cve": ["cve-1999-0502"],
  "cvss_vector": "cvss2#av:n/ac:l/au:n/c:c/i:c/a:c",
  "exploitability_ease": "exploits are available",
  "metasploit_name": "ssh user code execution",
  "family_name": "FTP",
  "script_version": "$revision: 1.18 $",
  "plugin_publication_date": "2002/06/05",
  "exploit_framework_metasploit": "true",
  "plugin_name": "mpei/x default ftp accounts"
}
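Given an event like that, a sketch of the rename-and-compare approach: rename the extracted field, flatten it with mvexpand, then match against a Name column in the lookup. The lookup name my_cve_lookup and its output field severity are placeholders for your actual lookup:

index=nessusta
| rename "cve{}" as Name
| mvexpand Name
| lookup my_cve_lookup Name OUTPUT severity
| where isnotnull(severity)
| table Name, severity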