Hi Everyone,
I am working on an add-on that collects the event results for an alert and sends them to an API endpoint. On success, the endpoint returns a success message in JSON format, and I want to store that response in a custom index and sourcetype.
I tried the code below, but the data is written to the main index instead of my custom index. Is there a way to write the event to a custom index from an alert action built via Splunk Add-on Builder?
helper.addevent("hello", sourcetype="customsource")
helper.addevent("world", sourcetype="customsource")
helper.writeevents(index="mycustomindex", host="localhost", source="localhost")
Regards,
Naresh
I used the alert_logevent alert action to ingest events from a custom alert action; you may try the code below:
import sys

from future.moves.urllib.parse import urlencode
from future.moves.urllib.request import urlopen, Request
from future.moves.urllib.error import HTTPError, URLError
from splunk.util import unicode


def log_event(helper, event, source, sourcetype, host, index):
    if event is None:
        helper.log_error("ERROR No event provided\n")
        return False
    query = [('source', source), ('sourcetype', sourcetype), ('index', index)]
    if host:
        query.append(('host', host))
    # POST the raw event to splunkd's simple receiver endpoint, authenticated
    # with the session key that Splunk passes to the alert action.
    url = '%s/services/receivers/simple?%s' % (helper.settings['server_uri'], urlencode(query))
    try:
        encoded_body = unicode(event).encode('utf-8')
        req = Request(url, encoded_body, {'Authorization': 'Splunk %s' % helper.settings['session_key']})
        res = urlopen(req)
        if 200 <= res.code < 300:
            helper.log_debug("receiver endpoint responded with HTTP status=%d\n" % res.code)
            return True
        else:
            helper.log_error("receiver endpoint responded with HTTP status=%d\n" % res.code)
            return False
    except HTTPError as e:
        helper.log_error("Error sending receiver request: %s\n" % e)
    except URLError as e:
        helper.log_error("Error sending receiver request: %s\n" % e)
    except Exception as e:
        helper.log_error("Error %s\n" % e)
    return False


def your_def_like_to_ingest_events():
    success = log_event(
        helper,
        event=data_url_final,
        source=source,
        sourcetype=sourcetype,
        host=host,
        index=index
    )
    if not success:
        sys.exit(2)
Note: only one event per call to log_event is ingested. If you want to ingest multiple events, call log_event once per event.
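Since each call ingests a single event, a small batching wrapper can do the looping for you. A minimal sketch (log_events and send_one are hypothetical names, not part of any Splunk SDK; send_one would wrap the log_event function above with your helper, source, sourcetype, host, and index already bound):

```python
def log_events(send_one, events):
    # Ingest a batch by invoking send_one(event) once per event.
    # send_one is expected to wrap log_event from the snippet above, e.g.
    #   lambda e: log_event(helper, e, source, sourcetype, host, index)
    # Every event is attempted even if an earlier one fails.
    results = [send_one(event) for event in events]
    return all(results)
```

Returning all(results) keeps the caller's existing "exit non-zero on failure" pattern working for batches.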
@rfaircloth_splu I tried to use the following code, but I got an error that new_event is not available. Could you please let me know how to load helper.new_event for an alert action?
Could someone help me with this? I am stuck on how to save the API call results back to a Splunk index from an alert action.
Regards,
Naresh
I don't have a specific solution for what you're asking, but it's common practice to send events and receive results asynchronously: however you collect events (file, syslog, HEC), the response would come back over that same method.
I honestly don't think you can. Alert actions were designed to send their results to _internal; they are not a collection tool.
@rfaircloth_splu I am not trying to write the event to the _internal index; I am trying to store the data in my custom index. So is there a way to write the data to my custom index from an alert action?
I don't know of a solution that works as you want it to. What I would normally do is what I described: have the remote side log its results to Splunk using something like HEC, syslog, or a UF.
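For reference, the HEC route from inside an alert action script is just an HTTPS POST to the collector's /services/collector/event endpoint. A minimal sketch, assuming you have already created an HEC token and the target index (the URL, token value, and function names here are placeholders, not part of any Splunk SDK):

```python
import json
from urllib.request import Request, urlopen


def build_hec_request(hec_url, token, event, index, sourcetype):
    # Splunk's HTTP Event Collector expects a JSON payload with the raw
    # event under "event" and routing metadata (index, sourcetype) beside it.
    payload = {
        "event": event,
        "index": index,
        "sourcetype": sourcetype,
    }
    body = json.dumps(payload).encode("utf-8")
    headers = {
        "Authorization": "Splunk %s" % token,
        "Content-Type": "application/json",
    }
    return Request(hec_url, body, headers)


def send_to_hec(hec_url, token, event, index, sourcetype):
    # Note: for a self-signed Splunk certificate you would also need to
    # pass an ssl.SSLContext to urlopen.
    req = build_hec_request(hec_url, token, event, index, sourcetype)
    res = urlopen(req)
    return 200 <= res.code < 300
```

Compared with the receivers/simple approach above, this routes through the HEC input instead of splunkd's REST API, which is why the traffic leaves and re-enters the instance.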
@rfaircloth_splu Yes, I have that in place now using HEC. I was just wondering because with the HEC method the traffic goes out and comes back in. If it could write the data without the extra hops, that would be great.
Anyway thank you for your reply @rfaircloth_splu 👍