All Posts

Hello, here is updated, accurate data. Thank you. 3DS2 Server ARes Response: {"messageType":"ARes","status":"INTERNAL_VALIDATION_FAILED","statusMessage":"invalid message fields, wrong message from ds:[{\"threeDSServerTransID\":\"123\",\"messageType\":\"Erro\",\"messageVersion\":\"2.2.0\",\"acsTransID\":\"345\",\"dsTransID\":\"567\",\"errorCode\":\"305\",\"errorComponent\":\"A\",\"errorDescription\":\"Cardholder Account Number is not in a range belonging to Issuer\",\"errorDetail\":\"acctNumber\",\"errorMessageType\":\"AReq\"}]; type[Erro] code[101] component[SERVER]"}
You create your own page anywhere (not in the Splunk installation - on your own web infrastructure) and put the URL of that page into the setting I mentioned. Where this setting is (I have already literally specified what it is) and how to apply it I leave as an exercise to the reader, because authentication-related settings (even if it's just a failed-login page) are not something you should fiddle with freely if you're a newcomer.
Here is a run-anywhere example using your original event data showing the solution working. If it is not working with your real data, that means the sample you shared is not an accurate representation of your real data. Please share an updated, accurate representation of your data.
Users can send search jobs to the background, where they will run until completion. If Splunk restarts, these search jobs are interrupted and cannot finish. The "restart background jobs" option makes Splunk re-run the interrupted search jobs after the restart is complete. It is set on a per-user basis.
What you have posted is not the raw text, and is therefore not valid JSON! Having said that, try something like this:

| spath ModifiedProperties{} output=ModifiedProperties
| eval ModifiedProperties=mvindex(ModifiedProperties,1)
| spath input=ModifiedProperties
| eval {Name}.NewValue=NewValue
| eval {Name}.OldValue=OldValue
| fields - Name NewValue OldValue ModifiedProperties
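For readers who prefer to trace the logic outside SPL, here is a rough Python sketch of the same restructuring; the event shape and field values are hypothetical, assumed from the SPL above:

```python
import json

# Hypothetical event: a "ModifiedProperties" array of objects with
# Name/NewValue/OldValue keys, as implied by the SPL in this thread.
event = {
    "ModifiedProperties": [
        {"Name": "Role.DisplayName", "NewValue": "Admin", "OldValue": ""},
        {"Name": "Role.TemplateId", "NewValue": "abc-123", "OldValue": "xyz-789"},
    ]
}

def restructure(evt, index=1):
    """Mimic the SPL: pick one ModifiedProperties entry (mvindex is
    0-based, like Python) and expose it as <Name>.NewValue / <Name>.OldValue."""
    prop = evt["ModifiedProperties"][index]
    return {
        f"{prop['Name']}.NewValue": prop["NewValue"],
        f"{prop['Name']}.OldValue": prop["OldValue"],
    }

print(json.dumps(restructure(event), indent=2))
```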
As a test, could you add dummy credentials to the proxy and "additional_parameters" field?
How is it possible for me to tell? You haven't explained which duplicate events you removed, nor how you removed them. If you can show that the 10-hour delay you are seeing in your calculation is caused by duplicate events (which is possible if you collected events for those time periods more than 10 hours after their timestamps), then removing those duplicate events would affect your delay statistic.
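As a toy illustration of the point above, here is a Python sketch (hypothetical epoch-second timestamps) showing how a single late-collected duplicate can dominate an average-delay statistic:

```python
# Each tuple is (event_time, index_time); "delay" is index_time - event_time.
events = [
    (1000, 1060),   # indexed 60 s after the event
    (2000, 2030),   # indexed 30 s after the event
    (1000, 37000),  # duplicate of the first event, collected ~10 h later
]

def mean_delay(evts):
    """Average indexing delay over the events."""
    return sum(idx - ev for ev, idx in evts) / len(evts)

def dedup_keep_first(evts):
    """Keep only the first copy seen for each event_time."""
    seen = {}
    for ev, idx in evts:
        seen.setdefault(ev, (ev, idx))
    return list(seen.values())

print(mean_delay(events))                    # → 12030.0 (skewed by the duplicate)
print(mean_delay(dedup_keep_first(events)))  # → 45.0
```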
Perhaps your API request is malformed. Has your Python program ever received the desired response, perhaps with another token? If not, you could post a sanitized version of the segment of your Python script that sends the API request, so we can see if something is wrong.
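For example, a sanitized snippet might look like this; the host, endpoint, and payload here are placeholders, since only the shape of the request matters for review:

```python
import urllib.request

# Placeholder values: replace with your own, but keep secrets redacted
# when posting publicly.
TOKEN = "<REDACTED>"
req = urllib.request.Request(
    url="https://splunk.example.com:8089/services/search/jobs",
    data=b"search=search index%3D_internal | head 5",
    headers={"Authorization": f"Bearer {TOKEN}"},
    method="POST",
)

# Inspect the request before (or instead of) sending it:
print(req.get_method(), req.full_url)
print(req.get_header("Authorization"))
```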
As a test, does the app still complain when you add a filler proxy user+password combination in the settings? There is also a different app that is often suggested for the use case of searching Elasticsearch data from Splunk. If it is not strictly necessary for you to migrate the data from Elasticsearch into Splunk, then this may be an option: https://github.com/brunotm/elasticsplunk
Usually when you scroll to the bottom of a Splunk docs page, there is a feedback form where you can report broken links, items needing better explanation, etc. Unfortunately this is not present on the Splexicon pages. Perhaps you can submit feedback on another docs page pointing out the broken links, explaining that there is no equivalent feedback form on the Splexicon.
Thank you for your help @marnall. You are correct, I did enter my Elasticsearch information in the app, but it did not pull any data. When I go through the _internal logs, I see some error logs that contain users like proxy and root, but I don't have any of these users in my configs or in my database credentials, and I didn't activate the proxy option in the Elasticsearch Data Integrator add-on. I should mention that I can connect to the Elastic database via curl from the Splunk server, which means the connection is open.
Hello @ITWhisperer, I've entered:

INTERNAL_VALIDATION_FAILED
| spath
| rex field=statusMessage "\[(?<ds_message>[^\]]+)"
| spath input=ds_message
| stats count by errorDetail

And there is only "errorDetail\": plus a count of events without values.
@ITWhisperer I have removed many duplicate events. Because of this, the delta_time difference has decreased to 1.9 hours compared to yesterday. Could the duplicate events also be a potential cause?
Thanks @ITWhisperer  I have updated the original post with event text.
How can I create my own page for the SSO Authentication Failure Redirect option? Actually, I'm new to Splunk. @PickleRick
Your existing props.conf settings are good for telling Splunk how to extract _time from the events. Don't try to put _time into human-readable format; that's done automatically at search time. Forcing it at ingest time will break how Splunk stores and retrieves events. If you need another field to contain a human-readable form of _time, then do it at search time using EVAL in props.conf:

[myprops]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TIME_PREFIX = "timestamp":
TIME_FORMAT = %s%3N
EVAL-timestamp = strftime(_time, "%Y-%m-%dT%H:%M:%S.%3N")

This applies to all apps, not just Enterprise Security.
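As a side note, the conversion that TIME_FORMAT = %s%3N and the EVAL above perform can be sketched in Python, using the first sample timestamp from the question (rendered here in UTC, whereas Splunk will render it in the search-time timezone):

```python
from datetime import datetime, timezone

raw = "1723933920339"  # epoch milliseconds, from the sample events
epoch_seconds = int(raw) / 1000.0  # what %s%3N stores in _time

# Equivalent of strftime(_time, "%Y-%m-%dT%H:%M:%S.%3N"): render the
# seconds part, then append the millisecond remainder zero-padded.
human = datetime.fromtimestamp(epoch_seconds, tz=timezone.utc).strftime(
    "%Y-%m-%dT%H:%M:%S.") + f"{int(raw) % 1000:03d}"

print(human)  # → 2024-08-17T22:32:00.339
```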
Hello, I have events with epoch time. How can I extract the epoch time in human-readable format using props.conf? My props.conf file is provided below:

[myprops]
SHOULD_LINEMERGE=false
LINE_BREAKER=([\r\n]+)
TIME_PREFIX="timestamp":
TIME_FORMAT=%s%3N

Sample events:

{"id":"A303", "timestamp":1723933920339","message":"average time to transfer file"}
{"id":"A307", "timestamp":1723933915610","message":"average time to hold process"}
{"id":"A309", "timestamp":1723933735652","message":"average time to transfer file"}

Extracted time should be: YYYY-mm-ddTHH:MM:SS.3N
This thread is more than 2 years old.  For better chances at having more people see it, please post a new question.
Ugh. That's a pretty example of ugly data. Technically, your data is a JSON structure with a field containing a string. That string describes another JSON structure, but from Splunk's point of view it's just a string. That makes it very inconvenient and possibly inefficient to manipulate. It would be much better if you could get this from your source in a saner format.
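If the format cannot be changed at the source, the embedded structure can also be dug out of the string outside Splunk. A rough Python sketch, using a trimmed-down version of the event from this thread (same first-bracket idea as the rex used earlier in the thread):

```python
import json

# Trimmed-down version of the event: outer JSON whose statusMessage
# embeds another JSON array as escaped text.
raw = ('{"messageType":"ARes","status":"INTERNAL_VALIDATION_FAILED",'
       '"statusMessage":"invalid message fields, wrong message from ds:'
       '[{\\"threeDSServerTransID\\":\\"123\\",\\"errorDetail\\":\\"acctNumber\\"}];'
       ' type[Erro] code[101] component[SERVER]"}')

outer = json.loads(raw)
msg = outer["statusMessage"]

# Slice from the first "[" to the first "]" and parse that slice as
# JSON in its own right (works here because the embedded array itself
# contains no brackets).
inner = json.loads(msg[msg.index("["):msg.index("]") + 1])
print(inner[0]["errorDetail"])  # → acctNumber
```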
I don't think you can edit this page. But you can set up your own page with the ssoAuthFailureRedirect option, so your users will be redirected to a webpage of your choice in case of SSO authentication failure.
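For context, the setting mentioned lives in web.conf. A minimal sketch, with a placeholder URL you would replace with your own hosted page:

```ini
[settings]
ssoAuthFailureRedirect = https://intranet.example.com/splunk-sso-failed.html
```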