Splunk Search

What is the effect of table immediately before the lookup command?

hasegawaarte
Explorer

Hi all,
I would like to know one thing.

Reproduction procedure
STEP 1: Execute the following command

=======================

| makeresults
| eval _raw="Date,time,title,code
06/10/2023,10:22,AAA,100
06/10/2023,11:33,BBB,200"
| multikv forceheader=1

| outputlookup sample_data.csv

=======================

STEP 2: Execute the following command

index=sandbox
| eval title = "AAA"
| lookup sample_data.csv title OUTPUT code

=======================

Executing STEP2 results in the following error.

[indexer01,indexer02,indexer03] Streamed search execute failed because: Error in 'lookup' command: Could not construct lookup 'sample_data.csv, title, OUTPUT, code'. See search.log for more details..

=======================
STEP 3: Execute the following command

index=sandbox
| eval title = "AAA"
| table *
| lookup sample_data.csv title OUTPUT code
=======================

The lookup executes normally.
Could you tell me what changes when the | table command is run?

1 Solution

isoutamo
SplunkTrust

I cannot see any real issues in your search.log. A couple of things:

  • You have some issues with the VMware lookups (those hydra transforms etc.), but those shouldn't affect this issue.
PARSING: litsearch index=_audit | eval  title="AAA"  | lookup  sample_data.csv title OUTPUT code | fields  keepcolorder=t "*" "_bkt" "_cd" "_si" "host" "index" "linecount" "source" "sourcetype" "splunk_server"  | remotetl  nb=300 et=1686639600.000000 lt=1686729237.000000 remove=true max_count=1000 max_prefetch=100

This shows that everything is sent to your peers.

But since the search works with the table * command but apparently not with the fields * command, I suppose you have some definitions in distsearch.conf that prevent the SH from sending lookup files to the peers. That is why it works when you use table (which runs on the SH) and fails without it (when the lookup runs on the peers).
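As an illustration (this is only a sketch; the denylist entry name and path pattern below are made up for the example), a stanza like this in distsearch.conf on the SH would keep CSV lookup files out of the knowledge bundle sent to the peers:

==========
[replicationBlacklist]
excludecsv = apps/*/lookups/*.csv
==========

If a pattern like that happens to match your sample_data.csv, the peers never receive the file and the streamed lookup fails on them.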

Actually, there is

Bundle replication not triggered

in your log file. I suppose this means that the search bundle does not contain your private lookup file.
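You can check on the SH that the file itself is fine there, for example (the rest search below assumes your role is allowed to query the REST endpoint):

==========
| inputlookup sample_data.csv
==========

and

==========
| rest /services/data/lookup-table-files splunk_server=local
| search title="sample_data.csv"
| fields title eai:acl.app eai:acl.sharing
==========

inputlookup always runs on the SH, so if it returns your two rows the file is fine there and the problem is only on the peers' side; the rest search shows which app the file lives in and how it is shared.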

You could test your query by adding local=true to it, like:

index=_audit
| eval title="AAA"
| lookup local=true sample_data.csv title OUTPUT code

After that it should work, because the lookup command now runs back on the SH instead of in parallel on the peers.
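A more permanent alternative to local=true would be to give the file a proper lookup definition and share it at app or global level so that it is included in the replicated bundle. For example, in transforms.conf (the stanza name sample_data is just an example):

==========
[sample_data]
filename = sample_data.csv
==========

Then you could write | lookup sample_data title OUTPUT code, and the definition's sharing and permissions control whether the peers receive it.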

r. Ismo

isoutamo
SplunkTrust

Hi

It works for me with the _audit index without issues.

What does the first line return (content, field names, number of events)?

What exactly does search.log say? You can see it via Job -> Inspect Job.

Please add those to your response using the </> code element.

r. Ismo


hasegawaarte
Explorer

Hi

I tried index=_audit and got the same error.
==========
index=_audit
| eval title = "AAA"
| lookup sample_data.csv title OUTPUT code
==========
 (Running with only index=_audit shows the normal audit log.)

==========
The following messages were returned by the search subsystem:

info : Assuming implicit lookup table with filename 'sample_data.csv'.
error : [indexer01,indexer02,indexer03] Streamed search execute failed because: Error in 'lookup' command: Could not construct lookup 'sample_data.csv, title, OUTPUT, code'. See search.log for more details..

06-14-2023 16:53:57.896 INFO  dispatchRunner [31365 RunDispatch] - search context: user="admin", app="search", bs-pathname="/opt/splunk/etc"
06-14-2023 16:53:57.896 INFO  SearchParser [31365 RunDispatch] - PARSING: search index=_audit\n\n| eval title = "AAA"\n| lookup sample_data.csv title OUTPUT code
06-14-2023 16:53:57.896 INFO  dispatchRunner [31365 RunDispatch] - Search running in non-clustered mode
06-14-2023 16:53:57.896 INFO  dispatchRunner [31365 RunDispatch] - SearchHeadInitSearchMs=0
06-14-2023 16:53:57.896 INFO  SearchParser [31365 RunDispatch] - PARSING: search index=_audit\n\n| eval title = "AAA"\n| lookup sample_data.csv title OUTPUT code
06-14-2023 16:53:57.896 INFO  SearchParser [31365 RunDispatch] - PARSING: search index=_audit\n\n| eval title = "AAA"\n| lookup sample_data.csv title OUTPUT code
06-14-2023 16:53:57.896 INFO  dispatchRunner [31365 RunDispatch] - Executing the Search orchestrator and iterator model (dfs=0).
06-14-2023 16:53:57.896 INFO  SearchOrchestrator [31365 RunDispatch] - SearchOrchestrator getting constructed
06-14-2023 16:53:57.896 INFO  SearchOrchestrator [31365 RunDispatch] -  Initialized the SRI
06-14-2023 16:53:57.897 INFO  SearchFeatureFlags [31365 RunDispatch] - Initializing feature flags from config. feature_seed=3077982966
06-14-2023 16:53:57.897 INFO  SearchFeatureFlags [31365 RunDispatch] - Setting feature_flag=parallelreduce:enablePreview:true
06-14-2023 16:53:57.897 INFO  SearchFeatureFlags [31365 RunDispatch] - Setting feature_flag=search:search_telemetry_file_include_parallel_reduce:false
06-14-2023 16:53:57.897 INFO  SearchFeatureFlags [31365 RunDispatch] - Setting feature_flag=search:search_retry:false
06-14-2023 16:53:57.897 INFO  SearchFeatureFlags [31365 RunDispatch] - Setting feature_flag=parallelreduce:autoAppliedPercentage:false
06-14-2023 16:53:57.897 INFO  SearchFeatureFlags [31365 RunDispatch] - Setting feature_flag=stats:allow_stats_v2:true
06-14-2023 16:53:57.897 INFO  SearchFeatureFlags [31365 RunDispatch] - Setting feature_flag=search_optimization::set_required_fields:stats:false
06-14-2023 16:53:57.897 INFO  SearchOrchestrator [31365 RunDispatch] - Search feature_flags={"v":1,"enabledFeatures":["parallelreduce:enablePreview","stats:allow_stats_v2"],"disabledFeatures":["search:search_telemetry_file_include_parallel_reduce","search:search_retry","parallelreduce:autoAppliedPercentage","search_optimization::set_required_fields:stats"]}
06-14-2023 16:53:57.897 INFO  ISplunkDispatch [31365 RunDispatch] - Not running in splunkd. Bundle replication not triggered.
06-14-2023 16:53:57.897 INFO  SearchOrchestrator [31368 searchOrchestrator] - Initialzing the run time settings for the orchestrator.
06-14-2023 16:53:57.897 INFO  UserManager [31368 searchOrchestrator] - Setting user context: admin
06-14-2023 16:53:57.897 INFO  UserManager [31368 searchOrchestrator] - Done setting user context: NULL -> admin
06-14-2023 16:53:57.897 INFO  SearchOrchestrator [31368 searchOrchestrator] - Creating the search DAG.
06-14-2023 16:53:57.897 INFO  SearchParser [31368 searchOrchestrator] - PARSING: search index=_audit\n\n| eval title = "AAA"\n| lookup sample_data.csv title OUTPUT code
06-14-2023 16:53:57.897 INFO  DispatchStorageManagerInfo [31368 searchOrchestrator] - Successfully created new dispatch directory for search job. sid=6e03aa52f7102493_tmp dispatch_dir=/opt/splunk/var/run/splunk/dispatch/6e03aa52f7102493_tmp
06-14-2023 16:53:57.913 INFO  SearchProcessor [31368 searchOrchestrator] - Building search filter
06-14-2023 16:53:57.926 WARN  SearchOperator:kv [31368 searchOrchestrator] - Invalid key-value parser, ignoring it, transform_name='hydra_access_log_fields'.
06-14-2023 16:53:57.926 WARN  SearchOperator:kv [31368 searchOrchestrator] - Invalid key-value parser, ignoring it, transform_name='hydra_gateway_log_fields'.
06-14-2023 16:53:57.926 WARN  SearchOperator:kv [31368 searchOrchestrator] - Invalid key-value parser, ignoring it, transform_name='hydra_scheduler_log_fields'.
06-14-2023 16:53:57.926 WARN  SearchOperator:kv [31368 searchOrchestrator] - Invalid key-value parser, ignoring it, transform_name='pool_name_field_extraction'.
06-14-2023 16:53:57.926 WARN  SearchOperator:kv [31368 searchOrchestrator] - Invalid key-value parser, ignoring it, transform_name='hydra_worker_log_fields'.
06-14-2023 16:53:57.930 WARN  SearchOperator:kv [31368 searchOrchestrator] - Invalid key-value parser, ignoring it, transform_name='pool_name_field_extraction'.
06-14-2023 16:53:57.932 WARN  SearchOperator:kv [31368 searchOrchestrator] - Invalid key-value parser, ignoring it, transform_name='hydra_logger_fields'.

06-14-2023 16:53:57.934 WARN  CsvDataProvider [31368 searchOrchestrator] - Unable to find filename property for lookup=sample_data.csv will attempt to use implicit filename.
06-14-2023 16:53:57.934 INFO  CsvDataProvider [31368 searchOrchestrator] - Assuming implicit lookup table with filename 'sample_data.csv'.
06-14-2023 16:53:57.934 INFO  CsvDataProvider [31368 searchOrchestrator] - Reading schema for lookup table='sample_data.csv', file size=225, modtime=1686718713
06-14-2023 16:53:57.934 INFO  DispatchThread [31368 searchOrchestrator] - BatchMode: allowBatchMode: 0, conf(1): 1, timeline/Status buckets(0):0, realtime(0):0, report pipe empty(0):1, reqTimeOrder(0):0, summarize(0):0, statefulStreaming(0):0
06-14-2023 16:53:57.934 INFO  DispatchThread [31368 searchOrchestrator] - Setup timeliner partialCommits=1
06-14-2023 16:53:57.934 INFO  DispatchThread [31368 searchOrchestrator] - required fields list to add to remote search = _bkt,_cd,_si,host,index,linecount,source,sourcetype,splunk_server
06-14-2023 16:53:57.934 INFO  SearchParser [31368 searchOrchestrator] - PARSING: fields keepcolorder=t "_bkt" "_cd" "_si" "host" "index" "linecount" "source" "sourcetype" "splunk_server"
06-14-2023 16:53:57.934 INFO  DispatchCommandProcessor [31368 searchOrchestrator] - summaryHash=985a2e1d474a3352 summaryId=113EFE7D-5214-4F12-9825-0BC662647973_search_admin_985a2e1d474a3352 remoteSearch=litsearch index=_audit | eval  title = "AAA"  | lookup  sample_data.csv title OUTPUT code | fields  keepcolorder=t "_bkt" "_cd" "_si" "host" "index" "linecount" "source" "sourcetype" "splunk_server"
06-14-2023 16:53:57.934 INFO  DispatchCommandProcessor [31368 searchOrchestrator] - summaryHash=NS6915b620af6b059b summaryId=113EFE7D-5214-4F12-9825-0BC662647973_search_admin_NS6915b620af6b059b remoteSearch=litsearch index=_audit | eval title = "AAA" | lookup sample_data.csv title OUTPUT code | fields keepcolorder=t "_bkt" "_cd" "_si" "host" "index" "linecount" "source" "sourcetype" "splunk_server"
06-14-2023 16:53:57.934 INFO  DispatchThread [31368 searchOrchestrator] - Getting summary ID for summaryHash=NS6915b620af6b059b
06-14-2023 16:53:57.937 INFO  SearchParser [31368 searchOrchestrator] - PARSING: search index=_audit\n\n| eval title = "AAA"\n| lookup sample_data.csv title OUTPUT code
06-14-2023 16:53:57.937 INFO  UnifiedSearch [31368 searchOrchestrator] - Processed search targeting arguments
06-14-2023 16:53:57.937 WARN  CsvDataProvider [31368 searchOrchestrator] - Unable to find filename property for lookup=sample_data.csv will attempt to use implicit filename.
06-14-2023 16:53:57.937 INFO  CsvDataProvider [31368 searchOrchestrator] - Assuming implicit lookup table with filename 'sample_data.csv'.
06-14-2023 16:53:57.937 INFO  CsvDataProvider [31368 searchOrchestrator] - Reading schema for lookup table='sample_data.csv', file size=225, modtime=1686718713
06-14-2023 16:53:57.937 INFO  AstOptimizer [31368 searchOrchestrator] - SrchOptMetrics optimize_toJson=0.000736677
06-14-2023 16:53:57.937 INFO  AstVisitorFactory [31368 searchOrchestrator] - Not building visitor : replace_datamodel_stats_cmds_with_tstats
06-14-2023 16:53:57.937 INFO  ReplaceTableWithFieldsVisitor [31368 searchOrchestrator] - search_optimization::replace_table_with_fields disabled due to VERBOSE Mode search
06-14-2023 16:53:57.937 INFO  AstVisitorFactory [31368 searchOrchestrator] - Not building visitor : replace_table_with_fields
06-14-2023 16:53:57.937 INFO  SearchParser [31368 searchOrchestrator] - PARSING: | lookup sample_data.csv title OUTPUT code
06-14-2023 16:53:57.937 INFO  SearchParser [31368 searchOrchestrator] - PARSING:  | eval title="AAA"
06-14-2023 16:53:57.938 INFO  SearchParser [31368 searchOrchestrator] - PARSING: | search index=_audit
06-14-2023 16:53:57.938 INFO  ReplaceTableWithFieldsVisitor [31368 searchOrchestrator] - search_optimization::replace_table_with_fields disabled due to VERBOSE Mode search
06-14-2023 16:53:57.938 INFO  AstVisitorFactory [31368 searchOrchestrator] - Not building visitor : replace_table_with_fields
06-14-2023 16:53:57.938 INFO  ProjElim [31368 searchOrchestrator] - Black listed processors=[addinfo]
06-14-2023 16:53:57.938 INFO  AstOptimizer [31368 searchOrchestrator] - Search optimizations have been disabled in limits.conf. Set enabled=true in [search_optimization::replace_stats_cmds_with_tstats]
06-14-2023 16:53:57.938 INFO  AstVisitorFactory [31368 searchOrchestrator] - Not building visitor : replace_stats_cmds_with_tstats
06-14-2023 16:53:57.938 INFO  AstOptimizer [31368 searchOrchestrator] - SrchOptMetrics optimization=0.000474566
06-14-2023 16:53:57.938 INFO  SearchPhaseGenerator [31368 searchOrchestrator] - Optimized Search =| search index=_audit | eval title="AAA" | lookup sample_data.csv title OUTPUT code
06-14-2023 16:53:57.938 INFO  ScopedTimer [31368 searchOrchestrator] - search.optimize 0.001708104
06-14-2023 16:53:57.939 INFO  PhaseToPipelineVisitor [31368 searchOrchestrator] - Phase Search = | search index=_audit | eval title="AAA" | lookup sample_data.csv title OUTPUT code
06-14-2023 16:53:57.939 INFO  SearchParser [31368 searchOrchestrator] - PARSING: | search index=_audit | eval title="AAA" | lookup sample_data.csv title OUTPUT code
06-14-2023 16:53:57.950 INFO  SearchProcessor [31368 searchOrchestrator] - Building search filter
06-14-2023 16:53:57.957 WARN  SearchOperator:kv [31368 searchOrchestrator] - Invalid key-value parser, ignoring it, transform_name='hydra_access_log_fields'.
06-14-2023 16:53:57.957 WARN  SearchOperator:kv [31368 searchOrchestrator] - Invalid key-value parser, ignoring it, transform_name='hydra_gateway_log_fields'.
06-14-2023 16:53:57.957 WARN  SearchOperator:kv [31368 searchOrchestrator] - Invalid key-value parser, ignoring it, transform_name='hydra_scheduler_log_fields'.
06-14-2023 16:53:57.957 WARN  SearchOperator:kv [31368 searchOrchestrator] - Invalid key-value parser, ignoring it, transform_name='pool_name_field_extraction'.
06-14-2023 16:53:57.957 WARN  SearchOperator:kv [31368 searchOrchestrator] - Invalid key-value parser, ignoring it, transform_name='hydra_worker_log_fields'.
06-14-2023 16:53:57.959 WARN  SearchOperator:kv [31368 searchOrchestrator] - Invalid key-value parser, ignoring it, transform_name='pool_name_field_extraction'.
06-14-2023 16:53:57.960 WARN  SearchOperator:kv [31368 searchOrchestrator] - Invalid key-value parser, ignoring it, transform_name='hydra_logger_fields'.

06-14-2023 16:53:57.961 INFO  UnifiedSearch [31368 searchOrchestrator] - Expanded index search = index=_audit
06-14-2023 16:53:57.961 INFO  UnifiedSearch [31368 searchOrchestrator] - base lispy: [ AND index::_audit ]
06-14-2023 16:53:57.962 INFO  UnifiedSearch [31368 searchOrchestrator] - Processed search targeting arguments
06-14-2023 16:53:57.962 INFO  PhaseToPipelineVisitor [31368 searchOrchestrator] - Phase Search = 
06-14-2023 16:53:57.962 INFO  SearchPipeline [31368 searchOrchestrator] - ReportSearch=0 AllowBatchMode=0
06-14-2023 16:53:57.962 INFO  SearchPhaseParserControl [31368 searchOrchestrator] - Adding SimpleResultsCombiner to merge remote input into correct time order.
06-14-2023 16:53:57.962 INFO  SearchParser [31368 searchOrchestrator] - PARSING: simpleresultcombiner max=0
06-14-2023 16:53:57.962 INFO  SearchPhaseGenerator [31368 searchOrchestrator] - Storing only 1000 events per timeline buckets due to limits.conf max_events_per_bucket setting.
06-14-2023 16:53:57.962 INFO  SearchPhaseGenerator [31368 searchOrchestrator] - Timeline information will be computed remotely
06-14-2023 16:53:57.962 INFO  SearchPhaseGenerator [31368 searchOrchestrator] - No need for RTWindowProcessor
06-14-2023 16:53:57.962 INFO  SearchPhaseGenerator [31368 searchOrchestrator] - Adding timeliner to final phase
06-14-2023 16:53:57.962 INFO  SearchParser [31368 searchOrchestrator] - PARSING: | timeliner remote=1 partial_commits=1 max_events_per_bucket=1000 fieldstats_update_maxperiod=60 bucket=300 extra_field=*
06-14-2023 16:53:57.962 INFO  TimelineCreator [31368 searchOrchestrator] - Creating timeline with remote=1 partialCommits=1 commitFreq=0 syncKSFreq=0 maxSyncKSPeriodTime=60000 bucket=300 latestTime=1686729237.000000 earliestTime=1686639600.000000
06-14-2023 16:53:57.962 INFO  SimpleResultsCombiner [31368 searchOrchestrator] - Base tmp dir removed.
06-14-2023 16:53:57.963 INFO  SearchPhaseGenerator [31368 searchOrchestrator] - required fields list to add to different pipelines = *,_bkt,_cd,_si,host,index,linecount,source,sourcetype,splunk_server
06-14-2023 16:53:57.963 INFO  SearchPhaseGenerator [31368 searchOrchestrator] - Remote Timeliner= | remotetl nb=300 et=1686639600.000000 lt=1686729237.000000 remove=true max_count=1000 max_prefetch=100
06-14-2023 16:53:57.963 INFO  SearchPhaseGenerator [31368 searchOrchestrator] - Fileds=fields keepcolorder=t "*" "_bkt" "_cd" "_si" "host" "index" "linecount" "source" "sourcetype" "splunk_server" | remotetl nb=300 et=1686639600.000000 lt=1686729237.000000 remove=true max_count=1000 max_prefetch=100
06-14-2023 16:53:57.963 INFO  SearchPhaseGenerator [31368 searchOrchestrator] - REMOTE TIMELINER ADDED
06-14-2023 16:53:57.963 INFO  SearchParser [31368 searchOrchestrator] - PARSING: fields keepcolorder=t "*" "_bkt" "_cd" "_si" "host" "index" "linecount" "source" "sourcetype" "splunk_server" | remotetl nb=300 et=1686639600.000000 lt=1686729237.000000 remove=true max_count=1000 max_prefetch=100
06-14-2023 16:53:57.963 INFO  SearchPhaseGenerator [31368 searchOrchestrator] - Search Phases created.
06-14-2023 16:53:57.991 INFO  UserManager [31368 searchOrchestrator] - Setting user context: admin
06-14-2023 16:53:57.991 INFO  UserManager [31368 searchOrchestrator] - Done setting user context: admin -> admin
06-14-2023 16:53:57.991 INFO  UserManager [31368 searchOrchestrator] - Unwound user context: admin -> admin
06-14-2023 16:53:57.992 INFO  DistributedSearchResultCollectionManager [31368 searchOrchestrator] - Stream search: litsearch index=_audit | eval  title="AAA"  | lookup  sample_data.csv title OUTPUT code | fields  keepcolorder=t "*" "_bkt" "_cd" "_si" "host" "index" "linecount" "source" "sourcetype" "splunk_server"  | remotetl  nb=300 et=1686639600.000000 lt=1686729237.000000 remove=true max_count=1000 max_prefetch=100
06-14-2023 16:53:57.992 INFO  ExternalResultProvider [31368 searchOrchestrator] - No external result providers are configured

06-14-2023 16:53:57.996 INFO  UserManager [31396 SearchResultExecutorThread] - Setting user context: admin
06-14-2023 16:53:57.996 INFO  UserManager [31396 SearchResultExecutorThread] - Done setting user context: NULL -> admin
06-14-2023 16:53:57.996 INFO  SearchParser [31368 searchOrchestrator] - PARSING: | streamnoop
06-14-2023 16:53:57.996 INFO  SearchParser [31368 searchOrchestrator] - PARSING: streamnoop  | timeliner remote=1 partial_commits=1 max_events_per_bucket=1000 fieldstats_update_maxperiod=60 bucket=300 extra_field=*
06-14-2023 16:53:57.996 INFO  UserManager [31398 localCollectorThread] - Setting user context: admin
06-14-2023 16:53:57.996 INFO  UserManager [31398 localCollectorThread] - Done setting user context: NULL -> admin
06-14-2023 16:53:57.996 INFO  SearchParser [31398 localCollectorThread] - PARSING: litsearch index=_audit | eval  title="AAA"  | lookup  sample_data.csv title OUTPUT code | fields  keepcolorder=t "*" "_bkt" "_cd" "_si" "host" "index" "linecount" "source" "sourcetype" "splunk_server"  | remotetl  nb=300 et=1686639600.000000 lt=1686729237.000000 remove=true max_count=1000 max_prefetch=100
06-14-2023 16:53:58.009 INFO  TimelineCreator [31368 searchOrchestrator] - Creating timeline with remote=1 partialCommits=1 commitFreq=0 syncKSFreq=0 maxSyncKSPeriodTime=60000 bucket=300 latestTime=1686729237.000000 earliestTime=1686639600.000000
06-14-2023 16:53:58.009 INFO  SearchOrchestrator [31368 searchOrchestrator] - Starting the status control thread.
06-14-2023 16:53:58.009 INFO  SearchOrchestrator [31368 searchOrchestrator] - Starting phase=1

06-14-2023 16:53:58.030 WARN  SearchOperator:kv [31398 localCollectorThread] - Could not find a transform named hydra_logger_fields
06-14-2023 16:53:58.030 WARN  SearchOperator:kv [31398 localCollectorThread] - Could not find a transform named pool_name_field_extraction
06-14-2023 16:53:58.030 WARN  SearchOperator:kv [31398 localCollectorThread] - Could not find a transform named hydra_worker_log_fields
06-14-2023 16:53:58.030 WARN  SearchOperator:kv [31398 localCollectorThread] - Could not find a transform named pool_name_field_extraction
06-14-2023 16:53:58.030 WARN  SearchOperator:kv [31398 localCollectorThread] - Could not find a transform named hydra_scheduler_log_fields
06-14-2023 16:53:58.030 WARN  SearchOperator:kv [31398 localCollectorThread] - Could not find a transform named hydra_gateway_log_fields
06-14-2023 16:53:58.030 WARN  SearchOperator:kv [31398 localCollectorThread] - Could not find a transform named hydra_access_log_fields

 



inventsekar
SplunkTrust

Hi @hasegawaarte ...

I am on Splunk 9.0.4, and STEP 1 executed fine.

STEP 2 executed and showed me no results.

STEP 3 executed and showed me no results.

Then, for STEP 2, I ran this instead; it ran fine and gave results as expected.

| makeresults
| eval title = "AAA"
| lookup sample_data.csv title OUTPUT code

 


hasegawaarte
Explorer

Hi, inventsekar

Set index=main or whatever index actually exists in your environment.

If you change index=xxx to | makeresults and generate the main search results that way, STEP 2 is also executable.

In STEP 2, the error occurs only when you use index=xxx.

The version is Splunk 8.2.7.
