Getting Data In

Does anyone have troubleshooting steps for index-time vs. search-time configuration via Splunk logs?

youngsuh
Contributor

Does anyone have steps on how to troubleshoot parse-time or index-time related issues? The use case is a sourcetype override, or sending events to nullQueue to filter them out.

The reason for asking is that I didn't see anything obvious in the internal logs or in my searches. Any tips would help... Thanks in advance.

1 Solution

isoutamo
SplunkTrust

Hi

As @chaker said, first you must understand the data path from the source to the indexer(s), and whether there are separate search head(s). After that you can start checking whether your configurations are in the correct places.

My own "best practices" are always do onboarding on my test instance (e.g. in workstation). Just take sample data and use own test app and index on that for easier remove events after tries and copy configurations to the correct production nodes. On your own instance use just

  1. Settings -> Add Data
  2. Upload
  3. Select your sample file & Next
  4. Set Event Breaks
  5. Set Timestamp
  6. Adjust Advanced values
  7. Loop steps 4-6 until events are shown correctly in "View Event Summary"
  8. Save As a new sourcetype
  9. Next 
  10. Test with that data and update search-time properties / field extractions etc.
    1. This means update those props.conf and transforms.conf on your test app
    2. Reload/restart
    3. Test and fix until it works as you want
  11. Copy configurations to the correct places (a props.conf & transforms.conf sketch for the nullQueue / sourcetype override use case follows this list)
    1. UF
      1. props.conf, if there is anything that needs to be added there
    2. HF(s), if you have them
      1. props.conf & transforms.conf
    3. Indexer(s)
      1. props.conf & transforms.conf
    4. SH(s)
      1. props.conf & transforms.conf, if you have field extractions etc.
      2. fields.conf, if you have new indexed fields
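
For the original use case (overriding a sourcetype and sending unwanted events to nullQueue), the index-time part would look roughly like the sketch below. This is a minimal, hypothetical example: the sourcetype names, regex patterns and transform names are placeholders, and the files belong on the first HF or indexer that parses the data, not on a UF or SH. Transforms in a TRANSFORMS- class run in the order they are listed.

# props.conf (on the parsing HF or indexer)
[my_original_sourcetype]
TRANSFORMS-filter_and_override = drop_debug_events, set_new_sourcetype

# transforms.conf (same instance)
[drop_debug_events]
REGEX = level=DEBUG
DEST_KEY = queue
FORMAT = nullQueue

[set_new_sourcetype]
REGEX = ^\S+\s+ERROR
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::my_new_sourcetype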

If you are getting data in with HEC, the process is a little bit different depending on whether you are sending JSON events to the event endpoint or plain text to the raw endpoint.
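
As a rough illustration of that difference (host, port and token below are placeholders, and depending on your HEC settings a channel header may also be required): the event endpoint takes pre-formed JSON events where metadata is set in the payload, while the raw endpoint pushes plain text through the normal parsing pipeline, so index-time props/transforms behave more like they do for file inputs.

# Event endpoint: JSON envelope, metadata per event
curl -k https://hec.example.com:8088/services/collector/event -H "Authorization: Splunk <token>" -d '{"event": "user=alice action=login", "sourcetype": "my_new_sourcetype", "index": "test"}'

# Raw endpoint: plain text, parsed like a normal input
curl -k "https://hec.example.com:8088/services/collector/raw?sourcetype=my_new_sourcetype&index=test" -H "Authorization: Splunk <token>" -d 'user=alice action=login'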

For troubleshooting, "splunk btool" is the tool to check which configuration is actually in use. If/when you are troubleshooting SH-side configurations, remember to add --app and --user when needed, as the precedence is different than in the index phase!
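
A couple of hedged examples (the sourcetype, app and user names are placeholders):

# Index-time view: what the parsing tier will actually use
/opt/splunk/bin/splunk btool props list my_new_sourcetype --debug

# Search-time view on a SH: add the app/user context, since user- and app-level precedence applies at search time
/opt/splunk/bin/splunk btool props list my_new_sourcetype --debug --app=search --user=admin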

Summary:

  1. Ensure that it works on your clean test environment 
  2. Check whether events are coming in, e.g. with a real-time search over all time (this is the only time you should use that ;-) when you are starting data ingestion on the source system. That shows whether events are arriving at all (e.g. with a wrong timestamp, index or sourcetype). As this is quite a heavy operation for your Splunk environment, don't use it unless it's the only way to check. Better is to just run a normal query with the needed keywords, as in the sketch below.
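
A minimal sketch of such a check, with placeholder index and sourcetype names:

index=test sourcetype=my_new_sourcetype earliest=-15m | head 100

Or, even cheaper, confirm only that events with the expected metadata exist:

| tstats count where index=test AND sourcetype=my_new_sourcetype by _time span=1m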

 

r. Ismo

chaker
Contributor

First understand your Splunk topology. Are there heavy forwarders (HF) involved? 

Index-time instructions will only apply on an HF or indexer, and events will only be parsed once. So if an event is parsed by an HF and then sent to an indexer, index-time instructions for that event on the indexer will be ignored.

Try to use btool to track down the entries in props.conf. That should give you a good place to start.

https://docs.splunk.com/Documentation/Splunk/9.0.1/Troubleshooting/Usebtooltotroubleshootconfigurati...

/opt/splunk/bin/splunk btool props list <sourcetype> --debug
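
If the props stanza points to transforms (e.g. a nullQueue filter or a sourcetype override), it is worth running the same check against transforms.conf on the instance that actually parses the data; with --debug, each line is prefixed with the file it came from, so you can see which copy wins. The stanza name below is a placeholder.

/opt/splunk/bin/splunk btool transforms list <transform_stanza> --debug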

 

gcusello
SplunkTrust

Hi @youngsuh,

the best way to troubleshoot parsing is to manually upload a file with the "Add Data" GUI feature and use it to work out the correct parsing rules.

In the Splunk internal logs there isn't any useful information, because an incorrect parsing rule isn't an error for Splunk, only for you when you read the events.

Ciao.

Giuseppe
