Splunk Enterprise

Unable to Fetch Data to Splunk OTX Index

tolgaakkapulu
Explorer

Hello,

After completing all the installation steps on the Splunk forwarder and the API key integration on the AlienVault OTX side, I see that the index=otx search returns no results. I could not find any errors. What could cause the OTX index to be empty? Can you help me with this?

0 Karma

tolgaakkapulu
Explorer

The problem was that the index had not been created on the master. Once the index definition was pushed from the master server to all indexers, the OTX data was pulled and written to the index. Thank you for your help and feedback.
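
For anyone checking the same fix, a quick sanity check is to list per-indexer event counts for the index with standard SPL:

| eventcount summarize=false index=otx

Each cluster peer that has the index reports a row, so a peer missing the index stands out immediately.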

PickleRick
SplunkTrust
SplunkTrust

The index is not created on the CM itself. It is defined in an app that is pushed to the indexers, and the index is created there.
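
For example, a minimal sketch of such an app on the cluster manager (the app name all_indexes is illustrative; older versions use master-apps instead of manager-apps):

# $SPLUNK_HOME/etc/manager-apps/all_indexes/local/indexes.conf
[otx]
homePath   = $SPLUNK_DB/otx/db
coldPath   = $SPLUNK_DB/otx/colddb
thawedPath = $SPLUNK_DB/otx/thaweddb

Then push the bundle to the peers:

$SPLUNK_HOME/bin/splunk apply cluster-bundle --answer-yes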

0 Karma

PickleRick
SplunkTrust
SplunkTrust

1. The add-on is not Splunk-supported, so you might need to contact the authors directly.

2. We don't know your architecture, nor where in your environment you installed the add-on.

3. We don't know _exactly_ what you did and on which component. "completing all the installation steps" is kinda vague.

0 Karma


tolgaakkapulu
Explorer

@kiran_panchavat Thank you for your feedback.

Yes, it was created. Examining the internal logs, the following entries come from the $SPLUNK_HOME/var/log/splunk/ta_otx_otx.log file. Does this mean it is unable to pull data, or could something else be going on?

2025-04-08 15:08:38,918 INFO pid=433448 tid=MainThread file=base_modinput.py:log_info:295 | Completed polling. Logged 0 pulses and 0 indicators.

0 Karma

livehybrid
Super Champion

Hi @tolgaakkapulu 

Are there any other events in the ta_otx_otx.log file?

Having checked the Python code for the add-on, it doesn't look like there is much in terms of logging, so I wouldn't expect there to be much.

Are you able to confirm that the X-OTX-API-KEY you entered is correct? Also, did you specify a backfill days value for the input? If not, then I think it will only report pulses since you set up the Splunk input.

Note - Changing the backfill *after* creating the input might not take effect, because the checkpoint generated by the input uses the input stanza name; you would need to create an input with a new name if you want to try this.
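
As a sketch of what I mean in inputs.conf (the otx:// scheme and parameter names are my assumptions based on a typical Add-on Builder input, so check your existing stanza for the exact names):

# existing input - its checkpoint is keyed on the stanza name
[otx://otx]
backfill_days = 30
interval = 300
index = otx

# same settings under a new name = fresh checkpoint, so the backfill is honoured
[otx://otx_backfill_retry]
backfill_days = 30
interval = 300
index = otx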

If you still have no joy then please try the following:

curl -X GET "https://otx.alienvault.com/api/v1/pulses/subscribed?modified_since=1743940560" -H "X-OTX-API-KEY: <api_key>" 

replacing <api_key> with your OTX API key.
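
The modified_since parameter is a Unix epoch, so if you want it to match a particular backfill window you can generate it on the command line, e.g. for 30 days ago:

date -d "30 days ago" +%s    # GNU date; on macOS/BSD: date -v-30d +%s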

🌟 Did this answer help you? If so, please consider:

  • Adding karma to show it was useful
  • Marking it as the solution if it resolved your issue
  • Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing

tolgaakkapulu
Explorer

@livehybrid  Additionally, log samples:

2025-04-08 13:56:19,927 INFO pid=426325 tid=MainThread file=base_modinput.py:log_info:295 | Retrieving subscribed pulses since: 2025-04-08 10:54:39.948582
2025-04-08 13:56:19,927 INFO pid=426325 tid=MainThread file=setup_util.py:log_info:117 | Proxy is not enabled!
2025-04-08 13:56:20,146 INFO pid=426325 tid=MainThread file=base_modinput.py:log_info:295 | Completed polling. Logged 0 pulses and 0 indicators.
2025-04-08 13:58:00,005 INFO pid=426392 tid=MainThread file=setup_util.py:log_info:117 | Log level is not set, use default INFO
2025-04-08 13:58:00,006 INFO pid=426392 tid=MainThread file=splunk_rest_client.py:_request_handler:99 | Use HTTP connection pooling
2025-04-08 13:58:00,038 INFO pid=426392 tid=MainThread file=base_modinput.py:log_info:295 | Retrieving subscribed pulses since: 2025-04-08 10:56:19.897881
2025-04-08 13:58:00,039 INFO pid=426392 tid=MainThread file=setup_util.py:log_info:117 | Proxy is not enabled!
2025-04-08 13:58:00,268 INFO pid=426392 tid=MainThread file=base_modinput.py:log_info:295 | Completed polling. Logged 0 pulses and 0 indicators.

0 Karma

tolgaakkapulu
Explorer

Hi @livehybrid 

The curl request returns results, but no data is being pulled into the OTX index.

Also, in your experience, which server in a Splunk cluster would it be best to install the OTX application on? (e.g. master, deployment server, forwarder, indexer, etc.)

0 Karma

kiran_panchavat
Influencer

@tolgaakkapulu 

We need to install the add-on on the heavy forwarder to configure the data inputs. This allows the heavy forwarder to collect and forward data to the indexers. The same add-on should be installed on the search heads to enable search-time parsing. Most add-ons should be installed on BOTH the HF/indexer tier and the search head, because they often have some properties that apply at index time and others that apply at search time.

Where to install Splunk add-ons - Splunk Documentation

NOTE: If your first Splunk Enterprise instance is a heavy forwarder (HF), install the add-on there for data parsing and for configuring data inputs. If there is no HF in your environment and you need to configure the add-on's data input, install the add-on on the search head and set up the data input there.
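
As a rough illustration of that index-time/search-time split (the sourcetype and settings below are generic examples, not taken from this add-on):

# props.conf
[example:sourcetype]
# index-time settings - applied on the HF/indexer tier
LINE_BREAKER = ([\r\n]+)
TIME_PREFIX = "modified":"
# search-time settings - applied on the search head
KV_MODE = json
EXTRACT-pulse_id = "id":"(?<pulse_id>[^"]+)"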

Did this help? If yes, please consider giving kudos, marking it as the solution, or commenting for clarification — your feedback keeps the community going!

0 Karma

livehybrid
Super Champion

Hi @tolgaakkapulu 

I would probably install this on a heavy forwarder.

Since the curl request worked, we can be fairly confident that your API key and connectivity are fine.

Were you able to see the date/time on one of the events returned from the curl request (it would be in the modified field)? Was this within the time period of your "Backfill days" setting for the input you configured?
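
If it helps, one way to pull just the modified timestamps out of the curl response, assuming jq is available:

curl -s "https://otx.alienvault.com/api/v1/pulses/subscribed?modified_since=1743940560" -H "X-OTX-API-KEY: <api_key>" | jq -r '.results[].modified'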

🌟 Did this answer help you? If so, please consider:

  • Adding karma to show it was useful
  • Marking it as the solution if it resolved your issue
  • Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing

 

0 Karma

tolgaakkapulu
Explorer

@livehybrid Thank you for the feedback. Yes, each result is dated today.

For example:

"modified":"2025-04-08T10:52:59.705000",
"created":"2025-04-08T10:51:32.650000",

And Backfill days is set to 30.

0 Karma

livehybrid
Super Champion

@tolgaakkapulu 

I'm afraid I'm a little stuck here too... as it sounds like it's configured correctly, and you've confirmed the data coming back, the backfill, the connection, etc.

If you're feeling adventurous you could modify the Python in the app to add more logging, to see if that helps!

In $SPLUNK_HOME/etc/apps/TA-otx/bin/input_module_otx.py, find the following section of code:

    response = helper.send_http_request(
        'https://otx.alienvault.com/api/v1/pulses/subscribed',
        'GET',
        parameters={'modified_since': since},
        headers={'X-OTX-API-KEY': api_key},
        verify=True,
        use_proxy=True
    )

    response.raise_for_status()

    pulses = response.json()['results']

Replace it with:

    response = helper.send_http_request(
        'https://otx.alienvault.com/api/v1/pulses/subscribed',
        'GET',
        parameters={'modified_since': since},
        headers={'X-OTX-API-KEY': api_key},
        verify=True,
        use_proxy=True
    )
    # log the checkpoint value actually used for this poll
    helper.log_info("modified_since: %s" % str(since))

    response.raise_for_status()
    # log the raw API response so we can see whether OTX returned any pulses
    respData = response.json()
    helper.log_info("Response from request")
    helper.log_info(respData)
    pulses = respData['results']

Then disable and re-enable the input and check the logs to see if it gives any more insight!
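
If you prefer the command line, disabling and re-enabling the input over REST and then tailing the log might look like this (the otx scheme and <input_name> are assumptions, so substitute your actual stanza name):

curl -k -u admin -X POST https://localhost:8089/servicesNS/nobody/TA-otx/data/inputs/otx/<input_name>/disable
curl -k -u admin -X POST https://localhost:8089/servicesNS/nobody/TA-otx/data/inputs/otx/<input_name>/enable
tail -f $SPLUNK_HOME/var/log/splunk/ta_otx_otx.log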

🌟 Did this answer help you? If so, please consider:

  • Adding karma to show it was useful
  • Marking it as the solution if it resolved your issue
  • Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing

kiran_panchavat
Influencer

@tolgaakkapulu 

Based on the log entry you've provided, it appears that the OTX technical add-on for Splunk successfully ran a polling operation but didn't find any new data to ingest.

Did this help? If yes, please consider giving kudos, marking it as the solution, or commenting for clarification — your feedback keeps the community going!

kiran_panchavat
Influencer

@tolgaakkapulu 

Please verify whether the otx index has been created on both the indexers and the heavy forwarder. If it hasn't been created, kindly proceed to create it. In some cases, data may be successfully fetched, but if the index doesn't exist, the events will be discarded.

  • Create the index on the Heavy Forwarder and also on the Indexer, if not already created.

  • If you're using a single standalone Splunk instance, create the index only on that instance.

  • To verify if the OTX Add-on is functioning correctly, check the internal logs by running the following search on the Search Head:

index=_internal *otx*

Did this help? If yes, please consider giving kudos, marking it as the solution, or commenting for clarification — your feedback keeps the community going!

PickleRick
SplunkTrust
SplunkTrust

Why would you create an index on the HF? This is plain wrong. A HF is supposed to receive/fetch the data and forward it to the next tier (usually indexer but can be an intermediate forwarder). It's not supposed to do local indexing.

0 Karma

kiran_panchavat
Influencer

@PickleRick The index needs to be created on the Heavy Forwarder (HF) so it can be selected in the data input configuration. There's no requirement to store data on the HF unless indexAndForward=true is explicitly set. This is mainly for naming consistency within the GUI.
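
For completeness, the relevant outputs.conf on the HF would typically look something like this minimal sketch (server names are placeholders), with indexAndForward left at false so nothing is stored locally:

# outputs.conf on the heavy forwarder
[tcpout]
defaultGroup = primary_indexers
indexAndForward = false

[tcpout:primary_indexers]
server = idx1.example.com:9997,idx2.example.com:9997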

Did this help? If yes, please consider giving kudos, marking it as the solution, or commenting for clarification — your feedback keeps the community going!
0 Karma

PickleRick
SplunkTrust
SplunkTrust

Yes, but only if you're using the GUI to configure the input. Even then it's not really necessary. You can define the input with any destination index and then manually edit inputs.conf (or use REST to update the input).
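
For example, something along these lines should repoint an existing input at any index (again, the otx scheme and <input_name> are assumptions):

curl -k -u admin -X POST https://localhost:8089/servicesNS/nobody/TA-otx/data/inputs/otx/<input_name> -d index=otx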

0 Karma

livehybrid
Super Champion

In case it helps @tolgaakkapulu, you do *not* need to create the index on the HF - this app allows you to type in a chosen index name rather than select one from a static list.

🌟 Did this answer help you? If so, please consider:

  • Adding karma to show it was useful
  • Marking it as the solution if it resolved your issue
  • Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing

0 Karma