All Posts

It's not that, because in the search log we can see that $technique_id$ is passed correctly (T1059.003):

02-09-2024 10:37:46.161 INFO SearchParser [10449 searchOrchestrator] - PARSING: | makeresults | eval technique_id="T1059.003" | where isnotnull(technique_id) | mitrepurplelab T1059.003

And even when I run this command on its own, I have the same issue:

| mitrepurplelab T1059.003

I think the issue is with commands.conf. When I put command.arg.1 = $technique_id$ in commands.conf, the script tries to run with $technique_id$ as an argument, but literally the string $technique_id$, not T1059.003, so it doesn't work.
1) Yes, this is the first approach I took; I posted in the community later. But why is it not showing the value count on the chart?
2) By the way, @ITWhisperer, if you have any ideas, please help me with this: https://community.splunk.com/t5/All-Apps-and-Add-ons/JSON-data-unexpected-value-count/m-p/677019#M80209
3) Is it possible to shorten the label shown below the colors, e.g. from "Mon Jan 15" to "Jan 15", via the UI, the XML source, or SPL?
As you can see here, there are no configuration options for this feature.
@ITWhisperer I am expecting the same as in the attached picture.
It is not possible to tell whether your subsearch should work or not since, despite being asked before, you have not shared your events (anonymised, of course). If you want further assistance, please share some sample events, preferably in a code block </>, to prevent loss of vital information.
Hi @ITWhisperer, @PickleRick, adding it again:

sourcetype="my_source"
    [search sourcetype="my_source" "failed request, request id="
    | rex "failed request, request id==(?<request_id>[\w-]+)"
    | top limit=100 request_id
    | fields request_id]

According to the subsearch documentation, my subsearch is supposed to extract the first 100 failed requests, and the main search should then search for those 100 requests. But this is not happening.
What changes are you expecting? This is the way pie charts work. You could consider appending the value to the title:

|rest /services/data/indexes
|rename title as index
|rex field=index "^foo_(?<appname>.+)"
|rex field=index "^foo_(?<appname>.+)_"
|table appname, index
|stats dc(appname) as count
|eval title = "currentapps: ".count
| append [| makeresults | eval count = 300 | eval title="total_apps: ".count]
| table title count
I have an Alert that, when triggered, sends an email with a .PDF attachment of the Column Chart. I am trying to remove the legend truncation. In the UI ('Format' with the paintbrush icon) there is no option for ellipsisNone, only end, middle, and start. I then tried advanced edit and get this error when trying to set the attribute display.visualizations.charting.legend.labelStyle.overflowMode to ellipsisNone:

Value of argument 'display.visualizations.charting.legend.labelStyle.overflowMode' must be either 'ellipsisEnd', 'ellipsisMiddle', or 'ellipsisStart'

I cannot find a way to edit an Alert's visualization .html code to do this manually.
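One possible workaround, offered as an untested suggestion: rebuild the same chart as a Simple XML dashboard panel, where the charting option does accept ellipsisNone, and schedule PDF delivery of that dashboard instead of the alert attachment. A minimal sketch (the query placeholder stands in for the alert's actual search):

<chart>
  <search>
    <query>... the alert's search ...</query>
  </search>
  <option name="charting.legend.labelStyle.overflowMode">ellipsisNone</option>
</chart>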
So where can I make these changes, in the SPL or the XML source?
Does the argument need to be in quotes, or passed as a field (so the SPL parser doesn't look for a field called T1059.003, fail to find it, and pass null)?

<query>| makeresults | eval technique_id="$technique_id$" | where isnotnull(technique_id) | mitrepurplelab "$technique_id$"</query>

<query>| makeresults | eval technique_id="$technique_id$" | where isnotnull(technique_id) | mitrepurplelab technique_id</query>
1. The general idea is sound, but
2. The "top" command returns rows containing a value, the count of events with this value, and a percentage of the whole sample. So your subsearch will get rendered as ((request_id="something" AND count="something" AND percent="something") OR (request_id="something" AND count="something" AND percent="something") OR [...]). As you most probably don't have matching values in your data, you won't find anything. If you want to return only the request_id values from your subsearch, you must further limit the list of fields returned from the subsearch by adding a "fields" or "table" command at its end.
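As an aside, a quick way to see exactly what a subsearch will render to is to run it on its own with the format command appended; a minimal sketch using the names from this thread (the rex pattern assumes the events really contain "request id=" with a single equals sign):

sourcetype="my_source" "failed request, request id="
| rex "request id=(?<request_id>[\w-]+)"
| top limit=100 request_id
| format

Without | fields request_id before format, the rendered string will contain the count and percent terms described above.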
Yeah, my mind-reading qualification lapsed during lock-down and I have not been able to find an authorised examiner in my area in order to re-sit the assessment. 
Hello, I created a dashboard with a text input; the token is then passed to a panel that executes this command:

<query>| makeresults | eval technique_id="$technique_id$" | where isnotnull(technique_id) | mitrepurplelab $technique_id$</query>

The purpose of this command is to trigger a custom command with this config:

[mitrepurplelab]
filename = mitrepurplelab.py
enableheader = true
outputheader = true
requires_srinfo = true
chunked = true
streaming = true

The mitrepurplelab.py script is then triggered; here is its code:

import sys
import requests
import logging

logging.basicConfig(filename='mitrepurplelab.log', level=logging.DEBUG, format='%(asctime)s - %(levelname)s - %(message)s')

def main():
    logging.debug(f"Arguments received: {sys.argv}")
    if len(sys.argv) != 2:
        logging.error("Incorrect usage: python script.py <technique_id>")
        print("Usage: python script.py <technique_id>")
        return
    technique_id = sys.argv[1]
    url = "http://192.168.142.146:5000/api/mitre_attack_execution"
    # Make sure your JWT token is complete and correctly formatted
    token = "..."  # value redacted in the post
    headers = {"Authorization": f"Bearer {token}"}
    params = {"technique_id": technique_id}
    response = requests.post(url, headers=headers, params=params)
    if response.status_code == 200:
        print("Request successful!")
        print("Server response:")
        print(response.json())
    else:
        logging.error(f"Error: {response.status_code}, Response body: {response.text}")
        print(f"Error: {response.status_code}, Response body: {response.text}")

if __name__ == "__main__":
    main()

The script works well when run by hand, for example: python3 bin/mitrepurplelab.py T1059.003
But when I execute it via the dashboard I get an error; in the panel's search.log I see this:

02-09-2024 10:37:46.075 INFO dispatchRunner [1626 MainThread] - Search process mode: preforked (reused process by new user) (build 1fff88043d5f).
02-09-2024 10:37:46.075 INFO dispatchRunner [1626 MainThread] - registering build time modules, count=1
02-09-2024 10:37:46.075 INFO dispatchRunner [1626 MainThread] - registering search time components of build time module name=vix
02-09-2024 10:37:46.076 INFO BundlesSetup [1626 MainThread] - Setup stats for /opt/splunk/etc: wallclock_elapsed_msec=7, cpu_time_used=0.00727909, shared_services_generation=2, shared_services_population=1
02-09-2024 10:37:46.080 INFO UserManagerPro [1626 MainThread] - Load authentication: forcing roles="admin, power, user"
02-09-2024 10:37:46.080 INFO UserManager [10446 RunDispatch] - Setting user context: splunk-system-user
02-09-2024 10:37:46.080 INFO UserManager [10446 RunDispatch] - Done setting user context: NULL -> splunk-system-user
02-09-2024 10:37:46.080 INFO UserManager [10446 RunDispatch] - Unwound user context: splunk-system-user -> NULL
02-09-2024 10:37:46.080 INFO UserManager [10446 RunDispatch] - Setting user context: admin
02-09-2024 10:37:46.080 INFO UserManager [10446 RunDispatch] - Done setting user context: NULL -> admin
02-09-2024 10:37:46.080 INFO dispatchRunner [10446 RunDispatch] - search context: user="admin", app="Ta-Purplelab", bs-pathname="/opt/splunk/etc"
02-09-2024 10:37:46.080 INFO SearchParser [10446 RunDispatch] - PARSING: | makeresults | eval technique_id="T1059.003" | where isnotnull(technique_id) | mitrepurplelab T1059.003
02-09-2024 10:37:46.081 INFO dispatchRunner [10446 RunDispatch] - Search running in non-clustered mode
02-09-2024 10:37:46.081 INFO dispatchRunner [10446 RunDispatch] - SearchHeadInitSearchMs=0
02-09-2024 10:37:46.081 INFO dispatchRunner [10446 RunDispatch] - Executing the Search orchestrator and iterator model (dfs=false).
02-09-2024 10:37:46.081 INFO SearchOrchestrator [10446 RunDispatch] - SearchOrchestrator is constructed. sid=admin__admin_VGEtUHVycGxlbGFi__search1_1707475066.37, eval_only=0
02-09-2024 10:37:46.081 INFO SearchOrchestrator [10446 RunDispatch] - Initialized the SRI
02-09-2024 10:37:46.081 INFO SearchFeatureFlags [10446 RunDispatch] - Initializing feature flags from config. feature_seed=2135385444
02-09-2024 10:37:46.081 INFO SearchFeatureFlags [10446 RunDispatch] - Setting feature_flag=parallelreduce:enablePreview:true
02-09-2024 10:37:46.081 INFO SearchFeatureFlags [10446 RunDispatch] - Setting feature_flag=search:search_retry:false
02-09-2024 10:37:46.081 INFO SearchFeatureFlags [10446 RunDispatch] - Setting feature_flag=search:search_retry_realtime:false
02-09-2024 10:37:46.081 INFO SearchFeatureFlags [10446 RunDispatch] - Setting feature_flag=parallelreduce:autoAppliedPercentage:false
02-09-2024 10:37:46.081 INFO SearchFeatureFlags [10446 RunDispatch] - Setting feature_flag=subsearch:enableConcurrentPipelineProcessing:false
02-09-2024 10:37:46.081 INFO SearchFeatureFlags [10446 RunDispatch] - Setting feature_flag=subsearch:concurrent_pipeline_adhoc:false
02-09-2024 10:37:46.081 INFO SearchFeatureFlags [10446 RunDispatch] - Setting feature_flag=append:support_multiple_data_sources:false
02-09-2024 10:37:46.081 INFO SearchFeatureFlags [10446 RunDispatch] - Setting feature_flag=join:support_multiple_data_sources:false
02-09-2024 10:37:46.081 INFO SearchFeatureFlags [10446 RunDispatch] - Setting feature_flag=search_optimization::set_required_fields:stats:false
02-09-2024 10:37:46.081 INFO SearchFeatureFlags [10446 RunDispatch] - Setting feature_flag=searchresults:srs2:false
02-09-2024 10:37:46.081 INFO SearchFeatureFlags [10446 RunDispatch] - Setting feature_flag=search:read_final_results_from_timeliner:true
02-09-2024 10:37:46.081 INFO SearchFeatureFlags [10446 RunDispatch] - Setting feature_flag=search:fetch_remote_search_telemetry:true
02-09-2024 10:37:46.081 INFO SearchFeatureFlags [10446 RunDispatch] - Setting feature_flag=testing:boolean_flag:false
02-09-2024 10:37:46.081 INFO SearchFeatureFlags [10446 RunDispatch] - Setting feature_flag=testing:percent_flag:true
02-09-2024 10:37:46.081 INFO SearchFeatureFlags [10446 RunDispatch] - Setting feature_flag=testing:legacy_flag:true
02-09-2024 10:37:46.081 INFO SearchOrchestrator [10446 RunDispatch] - Search feature_flags={"v":1,"enabledFeatures":["parallelreduce:enablePreview","search:read_final_results_from_timeliner","search:fetch_remote_search_telemetry","testing:percent_flag","testing:legacy_flag"],"disabledFeatures":["search:search_retry","search:search_retry_realtime","parallelreduce:autoAppliedPercentage","subsearch:enableConcurrentPipelineProcessing","subsearch:concurrent_pipeline_adhoc","append:support_multiple_data_sources","join:support_multiple_data_sources","search_optimization::set_required_fields:stats","searchresults:srs2","testing:boolean_flag"]}
02-09-2024 10:37:46.081 INFO ISplunkDispatch [10446 RunDispatch] - Not running in splunkd. Bundle replication not triggered.
02-09-2024 10:37:46.081 INFO SearchOrchestrator [10449 searchOrchestrator] - Initialzing the run time settings for the orchestrator.
02-09-2024 10:37:46.081 INFO UserManager [10449 searchOrchestrator] - Setting user context: admin
02-09-2024 10:37:46.081 INFO UserManager [10449 searchOrchestrator] - Done setting user context: NULL -> admin
02-09-2024 10:37:46.081 INFO AdaptiveSearchEngineSelector [10449 searchOrchestrator] - Search execution_plan=classic
02-09-2024 10:37:46.082 INFO SearchOrchestrator [10449 searchOrchestrator] - Creating the search DAG.
02-09-2024 10:37:46.082 INFO SearchParser [10449 searchOrchestrator] - PARSING: | makeresults | eval technique_id="T1059.003" | where isnotnull(technique_id) | mitrepurplelab T1059.003
02-09-2024 10:37:46.082 INFO DispatchStorageManagerInfo [10449 searchOrchestrator] - Successfully created new dispatch directory for search job. sid=dc5edf3eebc8ccb6_tmp dispatch_dir=/opt/splunk/var/run/splunk/dispatch/dc5edf3eebc8ccb6_tmp
02-09-2024 10:37:46.082 INFO SearchParser [10449 searchOrchestrator] - PARSING: premakeresults
02-09-2024 10:37:46.082 INFO DispatchThread [10449 searchOrchestrator] - BatchMode: allowBatchMode: 1, conf(1): 1, timeline/Status buckets(0):0, realtime(0):0, report pipe empty(0):0, reqTimeOrder(0):0, summarize(0):0, statefulStreaming(0):0
02-09-2024 10:37:46.082 INFO DispatchThread [10449 searchOrchestrator] - required fields list to add to remote search = *
02-09-2024 10:37:46.082 INFO DispatchCommandProcessor [10449 searchOrchestrator] - summaryHash=f2df6493ea859e37 summaryId=A6ADAC30-27EC-4F28-BEB9-3BD2C7EC3E53_Ta-Purplelab_admin_f2df6493ea859e37 remoteSearch=premakeresults
02-09-2024 10:37:46.082 INFO DispatchCommandProcessor [10449 searchOrchestrator] - summaryHash=NSf2df6493ea859e37 summaryId=A6ADAC30-27EC-4F28-BEB9-3BD2C7EC3E53_Ta-Purplelab_admin_NSf2df6493ea859e37 remoteSearch=premakeresults
02-09-2024 10:37:46.082 INFO DispatchThread [10449 searchOrchestrator] - Getting summary ID for summaryHash=NSf2df6493ea859e37
02-09-2024 10:37:46.084 INFO DispatchThread [10449 searchOrchestrator] - Did not find a usable summary_id, setting info._summary_mode=none, not modifying input summary_id=A6ADAC30-27EC-4F28-BEB9-3BD2C7EC3E53_Ta-Purplelab_admin_NSf2df6493ea859e37
02-09-2024 10:37:46.085 INFO SearchParser [10449 searchOrchestrator] - PARSING: | makeresults | eval technique_id="T1059.003" | where isnotnull(technique_id) | mitrepurplelab T1059.003
02-09-2024 10:37:46.085 INFO ChunkedExternProcessor [10449 searchOrchestrator] - Running process: /opt/splunk/bin/python3.7 /opt/splunk/etc/apps/Ta-Purplelab/bin/mitrepurplelab.py
02-09-2024 10:37:46.155 ERROR ChunkedExternProcessor [10449 searchOrchestrator] - Failed attempting to parse transport header: Usage: python script.py <technique_id>
02-09-2024 10:37:46.161 ERROR ChunkedExternProcessor [10449 searchOrchestrator] - Error in 'mitrepurplelab' command: External search command exited unexpectedly.
02-09-2024 10:37:46.161 INFO ScopedTimer [10449 searchOrchestrator] - search.optimize 0.076785640
02-09-2024 10:37:46.161 WARN SearchPhaseGenerator [10449 searchOrchestrator] - AST processing error, exception=31SearchProcessorMessageException, error=Error in 'mitrepurplelab' command: External search command exited unexpectedly.. Fall back to 2 phase.
02-09-2024 10:37:46.161 INFO SearchPhaseGenerator [10449 searchOrchestrator] - Executing two phase fallback for the search=| makeresults | eval technique_id="T1059.003" | where isnotnull(technique_id) | mitrepurplelab T1059.003
02-09-2024 10:37:46.161 INFO SearchParser [10449 searchOrchestrator] - PARSING: | makeresults | eval technique_id="T1059.003" | where isnotnull(technique_id) | mitrepurplelab T1059.003
02-09-2024 10:37:46.162 INFO ChunkedExternProcessor [10449 searchOrchestrator] - Running process: /opt/splunk/bin/python3.7 /opt/splunk/etc/apps/Ta-Purplelab/bin/mitrepurplelab.py
02-09-2024 10:37:46.232 ERROR ChunkedExternProcessor [10449 searchOrchestrator] - Failed attempting to parse transport header: Usage: python script.py <technique_id>
02-09-2024 10:37:46.239 ERROR ChunkedExternProcessor [10449 searchOrchestrator] - Error in 'mitrepurplelab' command: External search command exited unexpectedly.
02-09-2024 10:37:46.239 ERROR SearchPhaseGenerator [10449 searchOrchestrator] - Fallback to two phase failed with SearchProcessorException: Error in 'mitrepurplelab' command: External search command exited unexpectedly.
02-09-2024 10:37:46.239 WARN SearchPhaseGenerator [10449 searchOrchestrator] - Failed to create search phases: exception=31SearchProcessorMessageException, error=Error in 'mitrepurplelab' command: External search command exited unexpectedly.
02-09-2024 10:37:46.240 INFO SearchStatusEnforcer [10449 searchOrchestrator] - sid=admin__admin_VGEtUHVycGxlbGFi__search1_1707475066.37, newState=BAD_INPUT_CANCEL, message=Error in 'mitrepurplelab' command: External search command exited unexpectedly.
02-09-2024 10:37:46.240 ERROR SearchStatusEnforcer [10449 searchOrchestrator] - SearchMessage orig_component=ChunkedExternProcessor sid=admin__admin_VGEtUHVycGxlbGFi__search1_1707475066.37 message_key=CHUNKED:UNEXPECTED_EXIT message=Error in 'mitrepurplelab' command: External search command exited unexpectedly.
02-09-2024 10:37:46.240 INFO SearchStatusEnforcer [10449 searchOrchestrator] - State changed to BAD_INPUT_CANCEL: Error in 'mitrepurplelab' command: External search command exited unexpectedly.
02-09-2024 10:37:46.240 INFO SearchStatusEnforcer [10449 searchOrchestrator] - Enforcing disk quota = 10485760000
02-09-2024 10:37:46.242 INFO DispatchManager [10449 searchOrchestrator] - DispatchManager::dispatchHasFinished(id='admin__admin_VGEtUHVycGxlbGFi__search1_1707475066.37', username='admin')
02-09-2024 10:37:46.242 INFO UserManager [10449 searchOrchestrator] - Unwound user context: admin -> NULL
02-09-2024 10:37:46.242 INFO SearchOrchestrator [10446 RunDispatch] - SearchOrchestrator is destructed. sid=admin__admin_VGEtUHVycGxlbGFi__search1_1707475066.37, eval_only=0
02-09-2024 10:37:46.242 INFO SearchStatusEnforcer [10446 RunDispatch] - SearchStatusEnforcer is already terminated
02-09-2024 10:37:46.242 INFO UserManager [10446 RunDispatch] - Unwound user context: admin -> NULL
02-09-2024 10:37:46.242 INFO LookupDataProvider [10446 RunDispatch] - Clearing out lookup shared provider map
02-09-2024 10:37:46.242 INFO dispatchRunner [1626 MainThread] - RunDispatch is done: sid=admin__admin_VGEtUHVycGxlbGFi__search1_1707475066.37, exit=0

The error seems to come from the argument not being passed correctly:

02-09-2024 10:37:46.162 INFO ChunkedExternProcessor [10449 searchOrchestrator] - Running process: /opt/splunk/bin/python3.7 /opt/splunk/etc/apps/Ta-Purplelab/bin/mitrepurplelab.py
02-09-2024 10:37:46.232 ERROR ChunkedExternProcessor [10449 searchOrchestrator] - Failed attempting to parse transport header: Usage: python script.py <technique_id>
02-09-2024 10:37:46.239 ERROR ChunkedExternProcessor [10449 searchOrchestrator] - Error in 'mitrepurplelab' command: External search command exited unexpectedly.

I don't understand why, because you can see in the search log that the argument is passed correctly to the search, and I can't find out what the custom command actually passes as an argument to the Python script. If you have any ideas, it would be a great help!
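For context on the failure mode: with chunked = true the command uses the custom search command protocol v2. The log shows Splunk launching the script with no command-line arguments (Running process: ... mitrepurplelab.py) and then trying to read a protocol chunk, so the usage message printed to stdout is what breaks the transport header parsing; the T1059.003 on the search line is delivered inside the protocol payload, not in sys.argv. A minimal sketch of the same command written with the Splunk Python SDK (splunklib), which implements that protocol; the option name technique_id and the invocation | mitrepurplelab technique_id="$technique_id$" are assumptions, not tested against this app:

#!/usr/bin/env python
import sys
from splunklib.searchcommands import dispatch, StreamingCommand, Configuration, Option

@Configuration()
class MitrePurpleLabCommand(StreamingCommand):
    # Invoked as: | mitrepurplelab technique_id="T1059.003"
    technique_id = Option(require=True)

    def stream(self, records):
        # The option value arrives via the chunked protocol, not sys.argv.
        for record in records:
            record["technique_id"] = self.technique_id
            # The requests.post() call from the original script would go here.
            yield record

dispatch(MitrePurpleLabCommand, sys.argv, sys.stdin, sys.stdout, __name__)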
"My goal is to identify enabled correlation searches that have triggered notables within the past 30 days." This does not answer the questions: Could you explain what's wrong with the original search? What is expected and what are the actual results? (Illustrate with an anonymized example/mockup. Explain the difference between the expected results and the actual results from your search.) Importantly, what is the logic in your original search to meet your expectation? If you cannot illustrate your data input, expected results, and actual results, and clearly explain the logic connecting the illustrated data to the expected results (without SPL), this is just a waste of volunteers' time. No one can read your mind.
Please check your subsearch, as it looks like it isn't extracting anything. It is best to paste your search string into a code block </> so that it doesn't get reformatted and lose potentially vital information.
To make sure that you have defined your use case and tech path clearly, let me highlight several factors that are not super clear to me. (Trust me, explaining to someone with less intimate knowledge will help you find the right path. I was in your position.)

You clarified that a major performance inhibitor is the 10-mil/day index _network. Could you do some performance tests to see whether the join or eventstats contributes much to the slowdown? In other words, is your original main (outer) search, index=_network snat IN (10.10.10.10*,20.20.20.20*) | bucket span=1m _time | eval client_ip = mvindex(split(client, ":"), 0) | stats count by _time snat client_ip, much faster than your original join, or the solution that is working? If they are comparable, constantly constructing a lookup is just not worth the trouble.

Just as fundamentally, I realized that my previous search was mathematically different from your original join search. Your original search has a common field name count in both the outer search and the subsearch. I am not sure whether a mathematical definition exists in this condition, but Splunk will output count from the subsearch, i.e., from the unrestricted | stats count by _time snat client_ip. Given that the subsearch could cover millions of events a day, it could be many times bigger than the count from my previously proposed search. My search gives the count of matching events ONLY. Which count is needed? In the following, I will revert to your original math. Because you need to count _network events unrestricted, I have even more doubt whether using a lookup (or any other method) will really improve performance.

I deduce (mind-read:-) from your original join search that you expect to embed the search in a dashboard with IP (Source_Network_Address, matching snat) and Account_Name selectors. Is this correct? How narrow your selections are can have a profound impact on performance. (I will embed comments in places where you should insert the tokens.)

You say index _ad is small, and you want to turn it into a lookup table in order to speed up the search. So far only you know how this table is constructed, which limits other people's ability to help you. I will construct a reference implementation below so we are on the same page. (For clarity, I will call the table index_ad_lookup.)

On the subject of this lookup table, you say inclusion of time buckets is paramount. I want to remind you that using a lookup containing time will limit how close your search can run to the latest production of the lookup. Unlike an index search, you can only produce the table at fixed intervals. Let's say you want the time bucket to be 5 minutes, and you produce the lookup every 5 minutes. This would mean that the closest match you can get may be up to 5 minutes old. Is this acceptable?

There is also a question of cost. If you want a 1-minute time bucket, are you willing to refresh the lookup every minute? The search that produces this lookup will also need its search interval to match your final search interval. If you are looking at an interval of up to 24 hours, running a 24-hour search every minute can be taxing for even a small index. Aside from that, the search interval used to produce the lookup also limits the maximum interval over which you can run the main search, i.e., if the lookup is produced with a 24-hour search, the maximum your _network search can cover is 24 hours. Is this acceptable?
Reference implementation of lookup table

Now, assuming that you still want to pursue the lookup path, here is my reference implementation of the table before I propose what I see as an efficient matching search.

index=_ad (EventCode=4625 OR (EventCode=4771 Failure_Code=0x18))
| bucket span=1m _time
| eval Account_Name4625= case(EventCode=4625,mvindex(Account_Name,1))
| eval Account_Name4771= case(EventCode=4771,Account_Name)
| eval Account_Name = coalesce(Account_Name4771, Account_Name4625)
| eval snat = Source_Network_Address +":"+Source_Port
| eval DCName=mvindex(split(ComputerName, "."), 0)
| stats count by _time snat Account_Name EventCode DCName
| outputlookup index_ad_lookup

This is effectively your original outer search with the restrictions on snat and Account_Name removed. Note my reference table name is index_ad_lookup.

Using index_ad_lookup

If you can keep the lookup fresh enough to suit your needs, this is how to use it to match index _network and add Account_Name, etc.

index=_network ```snat IN ($snat_tok$)```
| bucket span=1m _time
| eval client_ip = mvindex(split(client, ":"), 0)
| stats count by _time snat client_ip
| lookup index_ad_lookup snat _time ``` add Account_Name, EventCode, DCName where a match exists ```

The comment is a speculation about how you may eliminate events with an input token. Because your original join does not require the events to have a match with index _ad, I seriously doubt this will have better performance. (In fact, I had already written a search that requires events to have a match in order to be counted before I realized what your original join was doing. That would have improved performance if the matching sets are small.)

Alternative search without lookup or join?

I was also making an alternative search based on possible event reduction by requiring a match between _network and _ad before I realized the mathematical difference. If your requirement is to count all _network events and just add Account_Name, etc. where a match exists, any alternative will probably perform similarly to the join command. Like this one:

index=_network snat IN (10.10.10.10*,20.20.20.20*)
| bucket span=1m _time
| eval client_ip = mvindex(split(client, ":"), 0)
| stats count by _time snat client_ip
| append
    [search index=_ad (EventCode=4625 OR (EventCode=4771 Failure_Code=0x18)) Account_Name=JohnDoe Source_Network_Address IN (10.10.10.10 20.20.20.20)
    | bucket span=1m _time
    | eval Source_Network_Address1 = case(EventCode==4771, trim(Client_Address, "::ffff:")) ``` this field is not used ```
    | eval Account_Name4625= case(EventCode=4625,mvindex(Account_Name,1))
    | eval Account_Name4771= case(EventCode=4771,Account_Name)
    | eval Account_Name = coalesce(Account_Name4771, Account_Name4625)
    | eval snat = Source_Network_Address+":"+Source_Port
    | eval DCName=mvindex(split(ComputerName, "."), 0)
    | stats count as count_ad by _time snat Account_Name EventCode DCName]
| stats values(Account_Name) as Account_Name values(EventCode) as EventCode values(DCName) as DCName by _time snat client_ip count

In short, you need to clarify whether you want to count all events from index _network, or only count events that find a match in index _ad, or maybe every event is a match, in which case there is no difference. Because the main performance inhibitor is the number of events in _network, there is little to be gained if the requirement is not to restrict events.
Hi @gcusello, @PickleRick, sorry for the late response. My main issue is that I want to use the output of one query as input for the next one, much like the "how subsearches work" example given here: https://docs.splunk.com/Documentation/Splunk/9.2.0/Search/Aboutsubsearches I want to extract the failed requests that happened in the past 24 hours, so I am trying to do something like this:

sourcetype="mysource"  [search sourcetype="mysource" "failed request:(?<request_id>[\w-]+=" | table request_id | top limit=100 request_id]

This is supposed to give me 100 failed requests (because I have them in the logs), but I am not able to extract them with the above query.
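For reference, a sketch of what that might look like with the regular expression moved out of the quoted string and into a rex command, and the extra top columns stripped before the subsearch returns (assuming the raw events contain the literal text "failed request, request id=<id>"):

sourcetype="mysource"
    [search sourcetype="mysource" "failed request, request id="
    | rex "failed request, request id=(?<request_id>[\w-]+)"
    | top limit=100 request_id
    | fields request_id]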
Numbers show up when you hover over each segment.
@ITWhisperer It works, but it's not showing the values (digits) on the pie chart.
Hi, you should create a Splunk support case for this. r. Ismo