Perhaps your API request is malformed. Has your Python program ever gotten the desired response, perhaps with another token? If not, you could post a sanitized version of the segment of your Python script that sends the API request, so we can see if there is something wrong.
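As an illustration of what a well-formed request looks like, here is a minimal sketch of building (not sending) a Splunk REST search request with a bearer token. The host, port, token, and query are placeholder assumptions, not values from this thread; a common cause of the 401 above is using the wrong auth scheme, since Splunk authentication tokens use `Bearer <token>` while session keys use `Splunk <sessionKey>`.

```python
# Hypothetical sketch: building a Splunk REST search request with a bearer
# token. Host, token, and query are placeholders, not real values.
from urllib import parse, request

def build_search_request(host: str, token: str, query: str) -> request.Request:
    """Build (but do not send) a POST to the Splunk search jobs endpoint."""
    url = f"https://{host}:8089/services/search/jobs"
    body = parse.urlencode({"search": f"search {query}", "output_mode": "json"})
    return request.Request(
        url,
        data=body.encode(),
        # "Bearer" for authentication tokens; session keys use "Splunk" instead
        headers={"Authorization": f"Bearer {token}"},
        method="POST",
    )

req = build_search_request("splunk.example.com", "MY_TOKEN",
                           "index=_internal | head 5")
print(req.get_header("Authorization"))  # → Bearer MY_TOKEN
```

Comparing the header this produces against what your failing script actually sends is a quick way to spot a malformed request.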
As a test, does the app still complain when you add a filler proxy user+password combination in the settings? There is also a different app that is often suggested for the use case of searching Elasticsearch data from Splunk. If it is not strictly necessary for you to migrate the data from Elasticsearch into Splunk, then this may be an option: https://github.com/brunotm/elasticsplunk
Usually when you scroll to the bottom of the Splunk docs pages, there is a feedback form where you can submit feedback about broken links, items needing better explanation, etc. Unfortunately, this is not present on the Splexicon pages. Perhaps you can submit feedback on another docs page pointing out the broken links, with the explanation that there was no equivalent feedback form on the Splexicon.
Thank you for your help @marnall. You are correct, I did enter my Elasticsearch information in the app, but it did not pull any data. When I go through the _internal logs, I see some error logs that contain users like "proxy" and "root", but I don't have any of these users in my configs or in my database credentials, and I didn't activate the proxy option in the Elasticsearch Data Integrator add-on. I should mention that I can connect to the Elasticsearch database via curl from the Splunk server, which means the connection is open.
Hello @ITWhisperer, I've entered:

INTERNAL_VALIDATION_FAILED
| spath
| rex field=statusMessage "\[(?<ds_message>[^\]]+)"
| spath input=ds_message
| stats count by errorDetail

And there is only "errorDetail\":" plus a count of events, without values.
@ITWhisperer I have removed many duplicate events. Because of that, the delta_time difference has decreased to 1.9 hours compared to yesterday. Could the duplicate events also be a potential cause?
Thanks @ITWhisperer, I have updated the original post with the event text.
How can I create my own page for the SSO Authentication Failure Redirect option? Actually, I'm new to Splunk. @PickleRick
Your existing props.conf settings are good for telling Splunk how to extract _time from the events. Don't try to put _time into human-readable format; that's done automatically at search time. Forcing it at ingest time will break how Splunk stores and retrieves events. If you need another field to contain a human-readable form of _time, then do it at search time using EVAL in props.conf:

[myprops]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TIME_PREFIX = "timestamp":
TIME_FORMAT = %s%3N
EVAL-timestamp = strftime(_time, "%Y-%m-%dT%H:%M:%S.%3N")

This applies to all apps, not just Enterprise Security.
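To illustrate what those settings do, here is a small sketch (in Python, since props.conf itself can't run here) of the two steps: `TIME_FORMAT = %s%3N` parses an epoch-milliseconds value into `_time`, and the `EVAL` renders it back in human-readable form. The sample timestamp is taken from the question's events; UTC is assumed for the rendered output.

```python
# Illustration of TIME_FORMAT = %s%3N followed by the strftime EVAL,
# using one of the sample timestamps from the question.
from datetime import datetime, timezone

raw = "1723933920339"            # epoch milliseconds, as in the sample event
epoch_s = int(raw) / 1000.0      # %s%3N: seconds plus 3-digit fractional part

t = datetime.fromtimestamp(epoch_s, tz=timezone.utc)
human = t.strftime("%Y-%m-%dT%H:%M:%S") + f".{int(raw) % 1000:03d}"
print(human)  # → 2024-08-17T22:32:00.339
```

The point of the answer stands: Splunk stores `_time` as the epoch value and does this rendering for you at search time.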
Hello, I have events with epoch time. How can I extract the epoch time in human-readable format using props.conf? My props.conf file is provided below:

[myprops]
SHUOLD_LINEMERGE=false
LINE_BREAK=([\r\n]+)
TIME_PREFIX="timestamp":
TIME_FORMAT=%s%3N

Sample events:

{"id":"A303", "timestamp":1723933920339","message":"average time to transfer file"}
{"id":"A307", "timestamp":1723933915610","message":"average time to hold process"}
{"id":"A309", "timestamp":1723933735652","message":"average time to transfer file"}

The extracted time should be: YYYY-mm-ddTHH:MM:SS.3N
This thread is more than 2 years old.  For better chances at having more people see it, please post a new question.
Ugh. That's a pretty example of ugly data. Technically, your data is a JSON structure with a field containing a string. That string describes another JSON structure, but from Splunk's point of view it's just a string. That makes it very inconvenient and possibly inefficient to manipulate. It would be much better if you could get this from your source in a saner format.
I don't think you can edit this page. But you can set your own page with the ssoAuthFailureRedirect option, so your users will be redirected to a webpage of your choice in case of SSO authentication failure.
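As a rough sketch of how that option might be set, assuming it lives in web.conf under the [settings] stanza (the file location and stanza are assumptions; check the web.conf spec for your Splunk version, and note the target URL is a placeholder):

```
# $SPLUNK_HOME/etc/system/local/web.conf  (location is an assumption)
[settings]
# Placeholder URL: host your own failure page and point Splunk at it
ssoAuthFailureRedirect = https://sso-help.example.com/auth-failure.html
```

A restart of Splunk Web would typically be needed for a web.conf change to take effect.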
Dear Splunkers, as I was checking about fishbuckets in the Splexicon (https://docs.splunk.com/Splexicon:Fishbucket), that page has a link ("See the detailed Splunk blog topic"), but the blog link is broken. (PS: on Splunk docs pages there is a comment input box at the bottom for giving feedback, but on Splexicon pages there is no feedback input box!) Many of us are aware that the wiki.splunk links are broken too. Shouldn't Splunk do something about these broken links? Shouldn't Splunk do some splunking on its own? Suggestions please. Have a great weekend, best regards, Sekar
There are at least two separate apps for "integration" with ES (I haven't used either, so I can't help much in terms of reviewing them). But the question (not necessarily for answering here, just food for thought) is what you really want to do, because in terms of a high-level overview you have two options:

1. Simply pull the data from ES, ingest it into Splunk, and work with it as with any other Splunk-indexed data. This has two drawbacks: you're getting data already pre-processed by ES, and it might be in a completely different format than Splunk-native add-ons for your source types would expect. And of course you're wasting resources (most notably storage).

2. Try to search data from your ES cluster and only do "post-processing" in Splunk. While this might work (I suppose those apps on Splunkbase aim at it), you're not using Splunk's abilities to the fullest; most importantly, you're not using Splunk's map-reduce processing, which splits the workload and parallelizes it where possible. So while it might be possible with one or both of those apps, just as you can query a SQL database using DB Connect, it is probably not something I'd do on big datasets.
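For concreteness, option 1 (pull from ES, ingest into Splunk) can be sketched as a small script that wraps Elasticsearch hits in the HTTP Event Collector (HEC) envelope. This is a hypothetical sketch, not either of the Splunkbase apps: the hostnames, index name, sourcetype, and token are placeholders, and error handling, paging, and checkpointing are omitted.

```python
# Hypothetical sketch of option 1: pull documents from Elasticsearch and
# forward them to a Splunk HTTP Event Collector. Hosts, index, and token
# are placeholders; real code would add paging, retries, and checkpoints.
import json
from urllib import request

ES_URL = "http://elastic.example.com:9200/my-index/_search"
HEC_URL = "https://splunk.example.com:8088/services/collector/event"
HEC_TOKEN = "00000000-0000-0000-0000-000000000000"

def to_hec_event(hit: dict) -> str:
    """Wrap one Elasticsearch hit in the HEC event envelope."""
    return json.dumps({
        "event": hit["_source"],        # the original document body
        "sourcetype": "elasticsearch",  # arbitrary placeholder choice
        "source": hit.get("_index", "unknown"),
    })

def forward(hits: list[dict]) -> None:
    """Send a batch of hits to HEC (newline-separated JSON payloads)."""
    body = "\n".join(to_hec_event(h) for h in hits).encode()
    req = request.Request(HEC_URL, data=body,
                          headers={"Authorization": f"Splunk {HEC_TOKEN}"})
    request.urlopen(req)  # would actually send in a real environment

# Example of the envelope produced for a single hit:
sample = {"_index": "my-index", "_source": {"message": "hello"}}
print(to_hec_event(sample))
```

Note the drawback from point 1 is visible here: what lands in Splunk is the `_source` document as ES stored it, not the original raw event, so Splunk-native add-ons may not recognize the format.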
Hello everyone, I hope you're doing well. I need assistance with integrating Splunk with Elasticsearch. My goal is to pull data from Elasticsearch and send it to Splunk for analysis. I have a few questions on how to achieve this effectively:

1. **Integration Methods:** Are there recommended methods for integrating Splunk with Elasticsearch?
2. **Tools and Add-ons:** What tools or add-ons can be used to facilitate this integration?
3. **Setup and Configuration:** Are there specific steps or guidelines to follow for setting up this integration correctly?
4. **Examples and Guidance:** Could you provide any examples or guidance on how to configure Splunk to pull data from Elasticsearch?

Any help or useful resources would be greatly appreciated. Thank you in advance for your time and assistance!
Hi! I am facing the same issue: I'm getting Windows logs and Sysmon logs, but not getting any Linux or Zeek logs. I'm using the inputs.conf file below, and all settings were followed per the documentation; the credential package installed successfully as well. I also installed the Zeek apps. Sorry, I forgot to mention that I do see the hosts when I search index=_internal over the last 24 hours. Any help please?

[default]
host = zeek-VirtualBox

[monitor:///var/log/messages]
disabled = 0
index = unix

[monitor:///var/log/syslog]
disabled = 0
index = unix

[monitor:///var/log/faillog]
disabled = 0
index = unix

[monitor:///var/log/auth.log]
disabled = 0
index = unix

[monitor:///opt/zeek/log/current]
disabled = 0
_TCP_ROUTING = *
index = zeek
sourcetype = bro:jason
whitelist = \.log$
From your raw event you could do this:

| spath
| rex field=statusMessage "\[(?<ds_message>[^\]]+)"
| spath input=ds_message
| stats count by errorDetail

If you have already extracted statusMessage when the event was ingested, you can skip the first spath command.
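To show what the rex and second spath are doing, here is the same extraction sketched in Python (since SPL can't run outside Splunk): grab the bracketed JSON out of statusMessage, parse it, and read errorDetail. The sample string is abbreviated from the raw event posted in this thread.

```python
# Python illustration of: rex field=statusMessage "\[(?<ds_message>[^\]]+)"
# followed by spath input=ds_message, reading errorDetail.
import json
import re

# Abbreviated statusMessage from the raw event in this thread
status_message = (
    'invalid message fields, wrong message from ds:'
    '[{"threeDSServerTransID":"123","messageType":"Erro",'
    '"errorDetail":"No issuer found","errorMessageType":"AReq"}]'
    '; type[Erro] code[101] component[SERVER]'
)

# Same pattern as the rex: capture everything after the first "[" up to
# the first "]" (Python spells the named group (?P<...> instead of (?<...>)
m = re.search(r"\[(?P<ds_message>[^\]]+)", status_message)
ds_message = m.group("ds_message")   # the inner JSON object as a string

error_detail = json.loads(ds_message)["errorDetail"]
print(error_detail)  # → No issuer found
```

This is why the pipeline works: the capture stops before the closing "]", leaving a single well-formed JSON object that spath (or json.loads here) can parse.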
Hello @ITWhisperer, thank you for your response. Here is the raw data:

{ "messageType": "Data", "status": "Error", "statusMessage": "invalid message fields, wrong message from ds:[{\"threeDSServerTransID\":\"123\",\"messageType\":\"Erro\",\"messageVersion\":\"2.2.0\",\"acsTransID\":\"123\",\"dsTransID\":\"123\",\"errorCode\":\"305\",\"errorComponent\":\"A\",\"errorDescription\":\"Transaction data not valid\",\"errorDetail\":\"No issuer found\",\"errorMessageType\":\"AReq\"}]; type[Erro] code[101] component[SERVER]" }
Response Code: 401
Response text:

<?xml version="1.0" encoding="UTF-8"?>
<response>
<messages>
<msg type="WARN">call not properly authenticated</msg>
</messages>
</response>

I am using a Splunk bearer token in my Python program via the REST API, but suddenly I got this error. I also have another, nearly identical program that uses a Splunk token, and it works fine without getting the error that I get from the other program. I already tested the token and it gets 200 responses. I don't know what is happening.