All Posts

Again, here is a runanywhere example with your sample data:

| makeresults
| eval _raw="3DS2 Server ARes Response: {\"messageType\":\"ARes\",\"status\":\"INTERNAL_VALIDATION_FAILED\",\"statusMessage\":\"invalid message fields, wrong message from ds:[{\\\"threeDSServerTransID\\\":\\\"123\\\",\\\"messageType\\\":\\\"Erro\\\",\\\"messageVersion\\\":\\\"2.2.0\\\",\\\"acsTransID\\\":\\\"345\\\",\\\"dsTransID\\\":\\\"567\\\",\\\"errorCode\\\":\\\"305\\\",\\\"errorComponent\\\":\\\"A\\\",\\\"errorDescription\\\":\\\"Cardholder Account Number is not in a range belonging to Issuer\\\",\\\"errorDetail\\\":\\\"acctNumber\\\",\\\"errorMessageType\\\":\\\"AReq\\\"}]; type[Erro] code[101] component[SERVER]\"}"
| rex "Response: (?<response>\{.+\})"
| spath input=response
| rex field=statusMessage "ds:\[(?<ds_message>[^\]]+)"
| spath input=ds_message
| stats count by errorDetail

If it is not working for some of your real data, then your sample is not an accurate representation of said (failing) data.
There are two types of scheduling modes for a saved search - real-time (not to be confused with real-time searches!) and continuous. Oversimplifying (but just a bit):

- A real-time scheduled search will try to execute a search covering the time from t0 till t1 at some point in time tA. It might not get executed at tA because the SH(C) is overloaded. In that case the scheduler will keep trying to execute it until tA+(schedule window). If it still cannot run the search because the SH(C) remains overloaded, it will finally give up. The next scheduled run of the same search, which might occur at some tB in the future, will cover the time from t2 to t3.

- A continuously scheduled search will try to run the search from t0 till t1 at tA. If it cannot find a free "search slot", it will retry the same search (still from t0 till t1) until it finally can.

An additional difference is that with a real-time scheduled search, if the schedule window is sufficiently big, or if there were sufficiently many skipped occurrences of the search, you might end up with significant periods of your data not covered by any run of the search. The point of continuously scheduled searches is to eventually get all your data covered by searches (hence the "continuous"), at the expense of "response time" - the more searches you have and the more "clogged" your search heads are, the bigger the "lag" will be, because the scheduler will increasingly have to look for opportunities to run queued searches over old data.

More information here (the scheduling mechanics work the same for reports and alerts - they are all just scheduled searches): https://docs.splunk.com/Documentation/Splunk/latest/Report/Configurethepriorityofscheduledreports
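For reference, the mode is picked per search in savedsearches.conf. A minimal sketch (the stanza name, search, and cron schedule below are made up for illustration; realtime_schedule and schedule_window are the documented attribute names - verify against your version's savedsearches.conf.spec):

[my_scheduled_search]
search = index=main sourcetype=my_sourcetype | stats count
cron_schedule = */15 * * * *
enable_sched = 1
# 1 (default) = real-time scheduling: a skipped run is gone, the next run covers a new window
# 0 = continuous scheduling: the scheduler keeps retrying until the original window is searched
realtime_schedule = 0
# only meaningful with realtime_schedule = 1: how many minutes a run may be deferred
# schedule_window = 10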
Magic. It works. But there is a small issue here: it shows \"errorDetail\": Hmmm
Try this

| rex "Response: (?<response>\{.+\})"
| spath input=response
| rex field=statusMessage "ds:\[(?<ds_message>[^\]]+)"
| spath input=ds_message
| stats count by errorDetail
This is again an example of data formatted in a way that is not friendly to Splunk (at least in your case). While you can use @ITWhisperer 's search to get your results, bear in mind that it's not field extraction that happens in this search, it's data manipulation. You don't _have_ the fields in your data; you have to create them manually by manipulating and correlating other data extracted from events.

It is one of the hard decisions that must be made when exporting data to JSON. You have two possible approaches. One is to dynamically name the fields, like

{ "Group1": { "NewValue": "a", "OldValue": "b" }, "Group2": { "NewValue": "c", "OldValue": "d" } }

The other is - as in your case - to have what would otherwise be a field name exported as a "label" under a constant name:

[ { "Name": "Group1", "NewValue": "a", "OldValue": "b" }, { "Name": "Group2", "NewValue": "c", "OldValue": "d" } ]

Each of those approaches has its pros and cons. The first form is not very friendly if you want to do aggregations and similar manipulations, because your fields are not statically named, so you might have to do some strange things with foreach and wildcarding in stats when manipulating them. But you can search with conditions like "Group1.NewValue=a". You can't, however, do "stats count by GroupName" or anything like that.

The second form doesn't "tie" values to the name - Splunk doesn't support structured data at search time; it flattens the JSONs and XMLs on parsing - so you have to bend over backwards to get the specific values for an interesting "field", and you can't simply use conditions like "Group1.NewValue=a" in your search. But you can do "stats count by Name".

So it's always something. One way or another your data format will end up being a burden.
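To make the trade-off concrete, here is a small runanywhere sketch (events, group names, and values are all made up for illustration):

First form - path conditions work directly, but the field names are dynamic:

| makeresults
| eval _raw="{\"Group1\":{\"NewValue\":\"a\",\"OldValue\":\"b\"},\"Group2\":{\"NewValue\":\"c\",\"OldValue\":\"d\"}}"
| spath
| where 'Group1.NewValue'="a"

Second form - the name is a plain field, so aggregation works:

| makeresults count=2
| streamstats count as n
| eval _raw=if(n=1,"{\"Name\":\"Group1\",\"NewValue\":\"a\",\"OldValue\":\"b\"}","{\"Name\":\"Group2\",\"NewValue\":\"c\",\"OldValue\":\"d\"}")
| spath
| stats count by Name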
Well, 401 does mean that the authentication was not performed correctly (which means no token provided or a wrong token). So I'd start by checking what requests are being sent to Splunk from both of your scripts (the one working properly and the one not working) and comparing the requests (especially the tokens, of course).
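If both scripts talk to splunkd's REST API on the management port, the rejected requests should also show up in Splunk's own access log. A rough sketch:

index=_internal sourcetype=splunkd_access status=401
| stats count by clientip, uri_path

Comparing the entries produced by each script (client IP, endpoint, timing) should narrow down which request is being rejected.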
You can try the #docs channel on Slack.
OK. It's starting to get a bit silly. A community is meant to be help by users for other users - help with learning the platform and what it can do, checking whether your train of thought is correct, and so on. It is _not_ meant as a free support service. And you're trying to use it as just that - to get your problem solved without trying to understand the underlying issue, and while providing almost no information about it.

You obviously have _some_ problem with your data ingestion process. What is it? We don't know. It's something that should be examined locally, on your site, by someone who can verify the data as it is ingested into Splunk, who can check the settings across your Splunk infrastructure, and who can talk with the administrators of your sources to verify the settings on their side and what data they produce and how. This is not something you can do by asking single questions on Answers without any significant effort on your side (true, sometimes Answers can be helpful in diagnostics, when the asking person does quite a lot of work on their own and only needs some gentle hints now and then). This is something a skilled Splunk engineer would probably diagnose in a relatively short time compared to ping-ponging scraps of information to Answers and back.

People on Answers are volunteers who use their spare time to help others. But that doesn't mean they are a free support service. If you want some effort from them, show some serious effort on your side as well. Make the problem interesting, not frustrating because you're asking about stuff they have no way of knowing, since it's your internal information.
Hello, here is updated, accurate data. Thank you.

3DS2 Server ARes Response: {"messageType":"ARes","status":"INTERNAL_VALIDATION_FAILED","statusMessage":"invalid message fields, wrong message from ds:[{\"threeDSServerTransID\":\"123\",\"messageType\":\"Erro\",\"messageVersion\":\"2.2.0\",\"acsTransID\":\"345\",\"dsTransID\":\"567\",\"errorCode\":\"305\",\"errorComponent\":\"A\",\"errorDescription\":\"Cardholder Account Number is not in a range belonging to Issuer\",\"errorDetail\":\"acctNumber\",\"errorMessageType\":\"AReq\"}]; type[Erro] code[101] component[SERVER]"}
You create your own page anywhere (not in the Splunk installation - on your own web infrastructure) and put the URL to that page into the setting I mentioned. I have already literally specified what the setting is; where it lives and how to apply it I leave as an exercise for the reader, because authentication-related stuff (even if it's just a failed-login page) is not something you should fiddle with freely if you're a newcomer.
Here is a runanywhere example using your original event data, showing the solution working. If it is not working with your real data, this means the sample you shared is not an accurate representation of your real data. Please share an updated, accurate representation of your data.
Users can send search jobs to the background, where they will run until completion. If Splunk restarts, these search jobs will be interrupted, so they cannot finish. The option "restart background jobs" makes Splunk re-run the interrupted search jobs after the restart is complete. It is set on a per-user basis.
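If I recall correctly, the preference ends up per user in user-prefs.conf. A hedged sketch (check your version's user-prefs.conf.spec for the exact attribute name and values):

# e.g. $SPLUNK_HOME/etc/users/<username>/user-prefs/local/user-prefs.conf
[general]
restart_background_jobs = 1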
What you have posted is not the raw text, and is therefore not valid JSON! Having said that, try something like this

| spath ModifiedProperties{} output=ModifiedProperties
| eval ModifiedProperties=mvindex(ModifiedProperties,1)
| spath input=ModifiedProperties
| eval {Name}.NewValue=NewValue
| eval {Name}.OldValue=OldValue
| fields - Name NewValue OldValue ModifiedProperties
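For anyone trying this out later, here is a runanywhere version with a made-up event (the JSON below is only a guess at the structure, since the raw text wasn't shared):

| makeresults
| eval _raw="{\"ModifiedProperties\":[{\"Name\":\"Group0\",\"NewValue\":\"x\",\"OldValue\":\"y\"},{\"Name\":\"Group1\",\"NewValue\":\"a\",\"OldValue\":\"b\"}]}"
| spath ModifiedProperties{} output=ModifiedProperties
| eval ModifiedProperties=mvindex(ModifiedProperties,1)
| spath input=ModifiedProperties
| eval {Name}.NewValue=NewValue
| eval {Name}.OldValue=OldValue
| fields - Name NewValue OldValue ModifiedProperties

With this sample it should produce fields like Group1.NewValue=a and Group1.OldValue=b.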
As a test, could you add dummy credentials to the proxy and "additional_parameters" field?
How is it possible for me to tell? You haven't explained which duplicate events you removed, nor how you removed them. If you can show that the 10-hour delay you are seeing in your calculation is caused by duplicate events (which is possible if you collected events for those time periods more than 10 hours after their timestamps), then removing these duplicate events would affect your delay statistic.
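For example, something along these lines would show whether late-collected duplicates are inflating the figure (a sketch - the index name is a placeholder, and it assumes "delay" means index time minus event timestamp):

index=your_index
| eval delay=_indextime-_time
| stats count avg(delay) as avg_delay max(delay) as max_delay

index=your_index
| dedup _raw
| eval delay=_indextime-_time
| stats count avg(delay) as avg_delay max(delay) as max_delay

If the two results differ substantially, the duplicates are indeed what is driving your delay statistic.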
Perhaps your API request is malformed. Has your Python program ever gotten the desired response, perhaps with another token? If not, you could post a sanitized version of the segment of your Python script that sends the API request, so we can see if there is something wrong.
As a test, does the app still complain when you add a filler proxy user+password combination in the settings? There is also a different app that is often suggested for the use case of searching Elasticsearch data from Splunk. If it is not strictly necessary for you to migrate the data from Elasticsearch into Splunk, then this may be an option: https://github.com/brunotm/elasticsplunk
Usually when you scroll to the bottom of the Splunk docs pages, there is a feedback form where you can submit feedback about broken links, items needing better explanation, etc. Unfortunately this is not present in the Splexicon pages. Perhaps you can submit feedback on another docs page pointing out the broken links, with the explanation that there was no equivalent feedback form on the Splexicon.
Thank you for your help @marnall 

You are correct, I did enter my Elasticsearch information in the app, but it did not pull any data. When I go through the _internal logs, I see some error logs that contain users like proxy and root, but I don't have any of these users in my configs nor in my database credentials, and I also didn't activate the proxy option in the Elasticsearch Data Integrator add-on.

I should mention that I can connect to the Elastic database via curl from the Splunk server, which means the connection is open.
Hello @ITWhisperer 

I've entered

INTERNAL_VALIDATION_FAILED
| spath
| rex field=statusMessage "\[(?<ds_message>[^\]]+)"
| spath input=ds_message
| stats count by errorDetail

And there is only "errorDetail\": plus a count of events, without values.