All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Dear Community, I integrated FireEye NX with Splunk, but the logs are not parsing as expected. I searched for relevant add-ons and apps for FireEye and found the ones below: - https://splunkbase.splunk.com/app/1904 (FireEye add-on) - https://splunkbase.splunk.com/app/1845 (FireEye app). While going through the documentation for this add-on and app, I found that they only support the Splunk Enterprise platform, not Cloud. Is there any other application or add-on with the same functionality for Splunk Cloud?
Sorry for the lack of information, @ITWhisperer. Here's the full XML for the dashboard panel:

<panel>
  <title>Logging Command History by User</title>
  <input type="text" token="drilldown_command" searchWhenChanged="true">
    <label>Find Command</label>
    <default>*</default>
  </input>
  <input type="text" token="exclude_command" searchWhenChanged="true">
    <label>Exclude Command</label>
    <default>NULL</default>
  </input>
  <table>
    <search>
      <query>index=unix_os sourcetype="bash_history" | dedup timestamp | fields _time process, dest, user_name | search user_name=$user_name$ dest=$host_name$ process="$user_command$" NOT process="$exclude_command$" | table _time user_name process dest | rename dest as hostname, process as user_command | sort -_time</query>
      <earliest>$time_global.earliest$</earliest>
      <latest>$time_global.latest$</latest>
      <sampleRatio>1</sampleRatio>
    </search>
    <option name="count">10</option>
    <option name="dataOverlayMode">none</option>
    <option name="drilldown">none</option>
    <option name="percentagesRow">false</option>
    <option name="refresh.display">progressbar</option>
    <option name="rowNumbers">false</option>
    <option name="totalsRow">false</option>
    <option name="wrap">false</option>
  </table>
</panel>
Compatibility

This is the compatibility for the latest version:
Splunk Enterprise, Splunk Cloud Platform
Version: 9.2, 9.1, 9.0, 8.2
CIM Version: 5.x

Since this is a Splunk-supported app, if this information is wrong, raise a case with Support.
Depending on how you have set up your exclude_command token (which you haven't shared with us yet), you could try something like this | search user_name=$user_name$ dest=$host_name$ process="$user_command$" NOT process IN $exclude_command$
Small remark - doing dedup on whole events before displaying only one field seems like a waste of resources. But to the point: <your search> | lookup domain.csv domain AS dest OUTPUT domain AS match will give you a field called "match" containing the pattern from your lookup that matched your event. Now you can do your append, inputlookup, and stats. EDIT: One caveat though - by default you will get the _first_ match, not all rows from the lookup that could potentially match your value.
| eval Name_A=json_array_to_mv(json_keys(field)) | mvexpand Name_A | eval Name_B=json_array_to_mv(json_keys(json_extract(field,Name_A)))
Hello there, I'm creating a #Splunk Dashboards table used to monitor user commands, and I want to make it flexible and dynamic so the table can be filtered by user input. For now I have already created this search string as a table that can apply filters via Find Command and Exclude Command, but it only accepts a single string as a filter.   index=os_linux sourcetype="bash_history" | dedup timestamp | fields _time process, dest, user_name | search user_name=$user_name$ dest=$host_name$ process="$user_command$" NOT process="$exclude_command$" | table _time user_name process dest | rename dest as hostname, process as user_command | sort -_time     Is it possible to make exclude_command accept multiple values with some separator? Or is there another recommended option?
Yes, HEC is often used when you can't use a UF, syslog, etc.    https://docs.splunk.com/Documentation/Splunk/9.2.2/Data/UsetheHTTPEventCollector
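As a rough illustration of what sending to HEC looks like from the client side - a minimal Python sketch using only the standard library. The URL and token here are placeholders (your real values come from your own HEC setup), and the payload shape follows HEC's documented event JSON:

```python
import json
import urllib.request

# Hypothetical HEC endpoint and token -- replace with your own values.
HEC_URL = "https://splunk.example.com:8088/services/collector/event"
HEC_TOKEN = "00000000-0000-0000-0000-000000000000"

def build_hec_payload(event, sourcetype="_json", index="main"):
    """Build the JSON body HEC expects for a single event."""
    return json.dumps({"event": event, "sourcetype": sourcetype, "index": index})

def send_event(event, sourcetype="_json", index="main"):
    """POST one event to the HTTP Event Collector; returns the HTTP status."""
    req = urllib.request.Request(
        HEC_URL,
        data=build_hec_payload(event, sourcetype, index).encode("utf-8"),
        headers={"Authorization": "Splunk " + HEC_TOKEN},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status  # 200 when the event is accepted
```

In production you would usually add TLS verification settings, batching, and retry handling, but the core of HEC is just this authenticated JSON POST.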
With a specific query, I can get the below value for one field: {     "key1": {         "field1": x     },     "key2": {         "field2": xx     },     "key3": {         "field3": xxx     } }   Every time, the strings key1, 2, 3 are different, and the strings field1, 2, 3 are also different; even the number of keys differs for each query - there may exist key4, key5...   Now I want to get the table below. Could someone help with this? Thanks.

Name A    Name B
key1      field1
key2      field2
key3      field3
...       ...
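To make the desired reshaping concrete, here is a small Python sketch (outside Splunk, using a literal version of the sample value above) of pulling each outer key and its inner field name into two columns:

```python
import json

# Sample value shaped like the field in the post (values filled in for illustration).
raw = '''{
    "key1": {"field1": "x"},
    "key2": {"field2": "xx"},
    "key3": {"field3": "xxx"}
}'''

def outer_inner_pairs(doc):
    """Return (outer_key, inner_key) rows, one per inner field."""
    rows = []
    for outer, inner_obj in json.loads(doc).items():
        for inner in inner_obj:          # iterate inner field names
            rows.append((outer, inner))
    return rows
```

This is the same expand-and-extract logic the SPL answer performs with json_keys and mvexpand, just written out imperatively.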
Did anyone ever succeed in embedding the dashboard on an external web site/app? I am still struggling with it. If anyone can, please help us, or give us a hint. Thanks,
Hello, I would like to obtain a list of all domains that did NOT match my lookup file, which is composed of wildcard domains. Here is an example:

Lookup file
domain
*adobe.com*
*perdu.com*

Events
index=proxy | table dest
dest
acrobat.adobe.com
geo2.adobe.com

Result wanted
*perdu.com*

My request looks like this: index=proxy | dedup dest | table dest | eval Observed=1 | append [| inputlookup domain.csv | rename domain as dest | eval Observed=0] | stats max(Observed) as Observed by dest | where Observed=0

Obtained results: *adobe.com* *perdu.com*, because the request didn't count the lines acrobat.adobe.com and geo2.adobe.com as duplicates of *adobe.com*. So what I need is a way to dedup the events based on the dest field matched from the lookup, and rename the deduped dest value to the wildcard domain field from the lookup. This way, mid-request, I would have these results:

dest             Observed
*adobe.com*      1    ==> from the events
*adobe.com*      0    ==> from the lookup
*perdu.com*      0    ==> from the lookup

Then stats max() and where to get only the wildcard domains that never matched. How could I achieve that?
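As a sanity check of the intended logic (not SPL - just a hedged Python sketch using the sample patterns and events from the post), the "which wildcard patterns never matched any observed dest" computation looks like:

```python
from fnmatch import fnmatchcase

# Stand-ins for the lookup rows and observed events in the post.
lookup_patterns = ["*adobe.com*", "*perdu.com*"]
events = ["acrobat.adobe.com", "geo2.adobe.com"]

def unmatched_patterns(patterns, dests):
    """Return the wildcard patterns that no observed dest value matched."""
    return [p for p in patterns
            if not any(fnmatchcase(d, p) for d in dests)]
```

This mirrors the lookup-with-wildcard approach from the reply above: each event is tested against the patterns, and only patterns with zero matches survive.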
Well, this is a third-party app after all. And this transform class is kinda ugly. Regardless of the regexes themselves, notice that it calls five separate transforms (I suppose each is relatively heavy). This should have been done completely differently - on ingest, the sourcetype should have been recast from a general cisco:ios to a more specific one, and at search time only the extractions specific to that sourcetype should be run. This is indeed very inefficient. As I dig a bit into this TA, it's... strange. It's trying to force the sourcetype on ingest from "syslog" to a general "cisco:ios" instead of splitting the events into specific sourcetypes. This is a bad idea. And yes, the regexes aren't very "performance-friendly". See this - https://regex101.com/r/S8NRvt/1 - over 29k steps only to decide "no match".
Thanks, I wanted to avoid that as I would need to update a lot of correlation searches. Any idea why this isn't possible, since the search looks like standard SPL?
I don't want to use a universal forwarder. I mean, what is the correct way to pull data from a hacked device, save it in a folder, and then analyze it in Splunk, when the hacked device does not have a universal forwarder and does not allow one to be installed? All I want is to know how to create data from the device, like the botsv2 data, and analyze it in Splunk.
Hello everyone, I commented everything out step by step and came to the conclusion that the following REPORT statement slows everything down massively... A 10-minute search window with this line needs 230 seconds (203 sec commandSearch / 192 KV); without it, just 25 (24 sec commandSearch / 13 sec KV) in Fast Mode. But it contains necessary fields in our environment... I will try to analyse the regexes inside. Regards
The Omnis Data Streamer can't have an agent installed on it. So the option, when Splunk is installed in the same environment as Omnis, is to use HEC, but I haven't tried this. The syslog output is also not detailed enough to display the data requested by the customer. The file format is JSON, but it's generated by Apache Kafka. Also, the add-on on Splunkbase for the Omnis Data Streamer doesn't have any configuration in it, so I guess the configuration is on Kafka's side, which generates the JSON file format from Omnis. So, should I use HEC, since we can't install an agent on it and syslog is not detailed enough? Please give me advice. Thanks
Unfortunately, this app is not supported in the cloud. 
@TheLawsOfChaos, the Linux is Red Hat. I have already created a user called splunk, so under this path - cd /opt/splunk/bin/ - I am running this command: sudo ./splunk enable boot-start. I am able to manually start the services using sudo ./splunk start.