All Posts

Hi @att35, Assuming _raw is properly formatted (and both your original Splunk Web screenshot and your new formatted event imply it is), you can use a combination of the eval and spath commands to iterate over the array and create new fields:

| eval tmp="{".mvjoin(mvmap(json_array_to_mv(json_extract(json(_raw), "ModifiedProperties")), "\"".spath(_raw, "Name").".NewValue\":\"".spath(_raw, "NewValue")."\",\"".spath(_raw, "Name").".OldValue\":\"".spath(_raw, "OldValue")."\""), ",")."}"
| spath input=tmp
| fields - tmp

The eval command creates a tmp field with the following value:

{"Group.ObjectID.NewValue":"111111-2222222-333333-444444","Group.ObjectID.OldValue":"","Group.DisplayName.NewValue":"Group A","Group.DisplayName.OldValue":"","Group.WellKnownObjectName.NewValue":"","Group.WellKnownObjectName.OldValue":""}

The spath command extracts the *.NewValue and *.OldValue fields from the tmp field. Note that empty values will be empty strings and null values will have the string value 'null'. If you want null values to be null fields, you can use the foreach command and the nullif() eval function to override them:

| foreach Group.*.NewValue [ eval "<<FIELD>>"=nullif('<<FIELD>>', "null") ]
| foreach Group.*.OldValue [ eval "<<FIELD>>"=nullif('<<FIELD>>', "null") ]

Search memory usage may be higher when using temporary fields to store and manipulate JSON objects in this way, and you may need to run multiple searches over smaller time ranges, depending on your user's search limits and workload policy.
@PickleRick I know that my way of asking queries is wrong in bits and pieces and a master like you did not like it. I value the Splunk Answers platform and I am also familiar with the contributions you have been making to users on the Splunk Answers platform over the years. You could have simply told me you did not want to respond to my half-detailed post. I have posted at least 100+ queries on Splunk Answers throughout my Splunk career and I have never received a reply like today's. You are such a valuable member of SplunkTrust; your reply has shattered my confidence. By writing this kind of unpleasant reply you diverted the attention of other users and experts who wanted to reply to me, and as a result the essence of the query post is lost. I have seen this behaviour from you in my last two posts. I exit this thread while maintaining the decorum of the Splunk Answers platform. Thanks for all the help.
Hello - I realize this question has been asked several times before, and I've tried to implement every solution I've found, but nothing seems to be working. I simply want to color a single value visualization based on its text: if "Yes", then green, and if "No", red. I've tried older solutions involving rangemap and changing some of the charting options, but I'm not having any luck in v9.3.0.

| inputlookup mylookup.csv
| search $time_tok$ $field_tok$=Y
| stats max(Distance) AS GuideMiles
| appendcols [| inputlookup mylookup.csv | search $month_tok$ | stats max(TargetMiles)]
| rename max(TargetMiles) AS TargetMiles
| eval OnTarget=case(GuideMiles>=TargetMiles,"Yes", true(), "No")
| table OnTarget
I'm not saying you're wasting people's time deliberately. It's just that this is one of those cases where someone (in this case you) asks one thing without giving much background info, and then it leads to more and more problems and issues the poster is either unaware of or not willing to share, while the poster keeps insisting on a solution based on a very small piece of the information actually needed for such troubleshooting.

I didn't mean to be rude to you, but you're repeatedly asking "how to fix that" without actually digging into what we're suggesting. You do some random things (like "removing duplicates", whatever that is supposed to mean) instead of really investigating the issue, and then ask "is this the potential cause?". We're trying to help here but it quickly gets frustrating. I understand that people have different skill levels and knowledge, but you're doing completely different things than are suggested to you and end up asking "why is it so?". That's why I'm saying this is something you normally pay people for - they come to you, they do things _for you_, and everybody's happy.

I cannot speak for others, but I usually try to be helpful and friendly, and if you check other threads where I'm active, I take my time to explain my answers so that people not only know _what_ to do but also _why_ it works. But in this case... well, if we're telling you "check your f...ascinating sources", then please do check your sources. You can't fix reality - if the sources send you wrong data, you'll end up with wrong data. No amount of "removing duplicates" will fix that.

So don't take it personally, because I don't know you and I don't know who you are. All I know is that this thread as it is leads nowhere for now. That's why I wrote that it's frustrating and it's all getting silly. Of course we could point you to the docs and tell you "here's what should be configured; apparently something is not done properly" (most of the time the answer really _is_ in the docs or your config/data), but we're not doing that. In return, though, we'd (OK, I'd) expect some serious effort on your side. Not some random bits and pieces, jumping from one index to another and dropping screenshots which tell us absolutely nothing. Honestly, I find that more frustrating than if you simply asked "OK guys, I have no idea what you're talking about, can you explain that?".
Yes, it still generates proxy logs even when I fill in fake settings. The problem with those apps you mentioned is that they don't support authentication. My Elasticsearch database is protected by authentication.
@PickleRick Thanks for your help so far. I have received this kind of unpleasant response from you twice now. Let me tell you that I do not post queries to waste anyone's time. If you don't want to respond to my queries then please don't respond. But this kind of reply from you makes me feel even more embarrassed, as if I really am wasting people's time on the Splunk Answers platform. You are not working for me and I am not working for you. As I see it, this is a platform where I can ask my query, and whoever wants to respond to it should do so.
Again, here is a runanywhere example with your sample data:

| makeresults
| eval _raw="3DS2 Server ARes Response: {\"messageType\":\"ARes\",\"status\":\"INTERNAL_VALIDATION_FAILED\",\"statusMessage\":\"invalid message fields, wrong message from ds:[{\\\"threeDSServerTransID\\\":\\\"123\\\",\\\"messageType\\\":\\\"Erro\\\",\\\"messageVersion\\\":\\\"2.2.0\\\",\\\"acsTransID\\\":\\\"345\\\",\\\"dsTransID\\\":\\\"567\\\",\\\"errorCode\\\":\\\"305\\\",\\\"errorComponent\\\":\\\"A\\\",\\\"errorDescription\\\":\\\"Cardholder Account Number is not in a range belonging to Issuer\\\",\\\"errorDetail\\\":\\\"acctNumber\\\",\\\"errorMessageType\\\":\\\"AReq\\\"}]; type[Erro] code[101] component[SERVER]\"}"
| rex "Response: (?<response>\{.+\})"
| spath input=response
| rex field=statusMessage "ds:\[(?<ds_message>[^\]]+)"
| spath input=ds_message
| stats count by errorDetail

If it is not working for some of your real data, then your sample is not an accurate representation of said (failing) data.
There are two scheduling modes for a saved search - real-time (not to be confused with real-time searches!) and continuous. Oversimplifying (but just a bit):

- A real-time scheduled search will try to execute a search covering the time from t0 till t1 at some point in time tA. It might not get executed at tA because the SH(C) is overloaded. In that case the scheduler will keep trying to execute it until tA+(schedule window). If it still cannot run the search because the SH(C) remains overloaded, it will finally give up. The next scheduled run of the same search, which might occur at some tB in the future, will cover the time from t2 to t3.

- A continuously scheduled search will try to run the search from t0 till t1 at tA. If it cannot find a free "search slot", it will retry the same search (still from t0 till t1) until it finally can.

An additional difference is that with a real-time scheduled search, if the schedule window is sufficiently big, or if there were sufficiently many skipped occurrences of the search, you might end up with significant periods of your data not covered by any run. The point of continuously scheduled searches is to eventually get all your data covered (hence "continuous"), at the expense of "response time" (the more searches you have and the more "clogged" your search heads are, the bigger the lag, because the scheduler will wait longer and longer for an opportunity to run queued searches over old data).

More information here (the scheduling mechanics work the same for reports and alerts - they are all just scheduled searches): https://docs.splunk.com/Documentation/Splunk/latest/Report/Configurethepriorityofscheduledreports
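For reference, the mode is chosen per saved search in savedsearches.conf; here is a minimal sketch (the stanza name, search, and cron schedule below are made up for illustration):

[my continuous search]
search = index=main sourcetype=access_combined | stats count
enableSched = 1
cron_schedule = */15 * * * *
# realtime_schedule = 1 (the default) gives real-time scheduling;
# 0 gives continuous scheduling
realtime_schedule = 0

With realtime_schedule = 1 you would typically also set schedule_window (in minutes, or "auto") to give the scheduler room to delay a run rather than skip it; schedule_window has no effect on continuously scheduled searches.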
Magic. It works. But there is a small issue here: it shows \"errorDetail\". Hmmm
Try this

| rex "Response: (?<response>\{.+\})"
| spath input=response
| rex field=statusMessage "ds:\[(?<ds_message>[^\]]+)"
| spath input=ds_message
| stats count by errorDetail
This is again an example of data formatted in a way that is not friendly to Splunk (at least in your case). While you can use @ITWhisperer 's search to get your results, bear in mind that what happens in this search is not field extraction, it's data manipulation. You don't _have_ the fields in your data; you have to create them manually by manipulating and correlating other data extracted from events.

It is one of the hard decisions that must be made when exporting data to JSON. You have two possible approaches. One is to dynamically name the fields:

{ "Group1": { "NewValue": "a", "OldValue": "b" }, "Group2": { "NewValue": "c", "OldValue": "d" } }

The other - as in your case - is to have what would be the field name exported as a "label" under a constant name:

[ { "Name": "Group1", "NewValue": "a", "OldValue": "b" }, { "Name": "Group2", "NewValue": "c", "OldValue": "d" } ]

Each of those approaches has its pros and cons. The first form is not very friendly if you want to do aggregations and similar manipulations, because your fields are not statically named, so you might have to do some strange things with foreach and wildcarding in stats. But you can search with conditions like "Group1.NewValue=a". You can't, however, do "stats count by GroupName" or anything like that.

The second form doesn't "tie" values to the name - Splunk doesn't support structured data; it flattens the JSONs and XMLs on parsing - so you have to bend over backwards to get the values for the "field" you're interested in, and you can't simply use conditions like "Group1.NewValue=a" in your search. But you can do "stats count by Name".

So it's always something. One way or another, your data format will end up being a burden.
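As a rough sketch of that trade-off (the index and field names are hypothetical, following the two examples above), the first shape filters directly but needs wildcarding to aggregate, while the second shape aggregates naturally but needs extra work to filter:

Shape 1 - direct filtering, awkward aggregation:
index=foo Group1.NewValue=a
| foreach *.NewValue [ eval changed_groups=mvappend(changed_groups, "<<MATCHSTR>>") ]

Shape 2 - direct aggregation, awkward filtering:
index=foo
| spath ModifiedProperties{} output=prop
| mvexpand prop
| spath input=prop
| search Name="Group1" NewValue="a"
| stats count by Name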
Well, 401 does mean that authentication was not performed correctly (i.e., no token provided, or a wrong token). So I'd start by checking what requests are being sent to Splunk from both of your scripts (the one working properly and the one not working) and comparing the requests (especially the tokens, of course).
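One way to compare them is to replay each request by hand and watch for the 401; here is a hedged curl sketch (host, ports, and tokens are placeholders), assuming the scripts talk to the REST API and HEC respectively:

# REST API expects "Authorization: Bearer <token>" on the management port
curl -k -H "Authorization: Bearer <your-rest-token>" \
    "https://splunk.example.com:8089/services/search/jobs"

# HEC uses the different "Authorization: Splunk <token>" scheme
curl -k -H "Authorization: Splunk <your-hec-token>" \
    "https://splunk.example.com:8088/services/collector/event" \
    -d '{"event": "auth test"}'

If the hand-built request succeeds while the failing script still gets 401 against the same endpoint, that usually narrows it down to a missing header, the wrong authorization scheme, or an expired/disabled token.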
You can try the #docs channel on Slack.
OK, it's starting to get a bit silly. A community is meant to be help by users for other users - help in learning the platform and what it can do, checking whether your train of thought is correct, and so on. It is _not_ meant as a free support service. And that's just what you're trying to use it as - getting your problem solved without trying to understand the underlying issue and while providing almost no information about it.

You obviously have _some_ problem with your data ingestion process. What is it? We don't know. It's something that should be examined locally, on your site, by someone who can verify the data as it is ingested into Splunk, who can check the settings across your Splunk infrastructure, and who can talk with the administrators of your sources to verify the settings on their side and what and how they produce the data you're ingesting into Splunk. This is not something you can do by asking single questions on Answers without any significant effort on your side (true, sometimes Answers can be helpful in diagnostics when the asker does quite a lot of work on their own and only needs a gentle hint now and then). This is something a skilled Splunk engineer would probably diagnose in a relatively short time compared to ping-ponging scraps of information to Answers and back.

People on Answers are volunteers who use their spare time to help others. But that doesn't mean they are a free support service. If you want some effort from them, show some serious effort on your side as well. Make the problem interesting, not frustrating, because you're asking about stuff they have no way of knowing since it's your internal information.
Hello, here is updated, accurate data. Thank you.

3DS2 Server ARes Response: {"messageType":"ARes","status":"INTERNAL_VALIDATION_FAILED","statusMessage":"invalid message fields, wrong message from ds:[{\"threeDSServerTransID\":\"123\",\"messageType\":\"Erro\",\"messageVersion\":\"2.2.0\",\"acsTransID\":\"345\",\"dsTransID\":\"567\",\"errorCode\":\"305\",\"errorComponent\":\"A\",\"errorDescription\":\"Cardholder Account Number is not in a range belonging to Issuer\",\"errorDetail\":\"acctNumber\",\"errorMessageType\":\"AReq\"}]; type[Erro] code[101] component[SERVER]"}
You create your own page anywhere (not in the Splunk installation - on your own web infrastructure) and put the URL to that page into the setting I mentioned. Where this setting lives (I already spelled out what it is) and how to apply it I leave as an exercise for the reader, because authentication-related stuff (even if it's just a failed-login page) is not something you should fiddle with freely if you're just a newcomer.
Here is a runanywhere example using your original event data showing the solution working. If it is not working with your real data, this means that the sample you shared is not an accurate representation of your real data. Please share an updated accurate representation of your data.
Users can send search jobs to the background, where they will run until completion. If Splunk restarts, these search jobs are interrupted and cannot finish. The "restart background jobs" option makes Splunk re-run the interrupted search jobs after the restart is complete. It is set on a per-user basis.
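If I remember correctly, the toggle lives in each user's account settings in Splunk Web and is persisted per user in user-prefs.conf; a sketch follows (treat the setting name as my assumption and verify against your version's user-prefs.conf spec):

# $SPLUNK_HOME/etc/users/<username>/user-prefs/local/user-prefs.conf
[general]
# re-run backgrounded jobs that a restart interrupted
restart_background_jobs = 1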
What you have posted is not the raw text, and is therefore not valid JSON! Having said that, try something like this:

| spath ModifiedProperties{} output=ModifiedProperties
| eval ModifiedProperties=mvindex(ModifiedProperties,1)
| spath input=ModifiedProperties
| eval {Name}.NewValue=NewValue
| eval {Name}.OldValue=OldValue
| fields - Name NewValue OldValue ModifiedProperties
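In the spirit of the other runanywhere examples in this digest, here is a minimal sketch of that pipeline against a made-up event (the ModifiedProperties sample below is hypothetical, shaped like the event described earlier in the thread):

| makeresults
| eval _raw="{\"ModifiedProperties\":[{\"Name\":\"Group.ObjectID\",\"NewValue\":\"111111-2222222-333333-444444\",\"OldValue\":\"\"},{\"Name\":\"Group.DisplayName\",\"NewValue\":\"Group A\",\"OldValue\":\"\"}]}"
| spath ModifiedProperties{} output=ModifiedProperties
| eval ModifiedProperties=mvindex(ModifiedProperties,1)
| spath input=ModifiedProperties
| eval {Name}.NewValue=NewValue
| eval {Name}.OldValue=OldValue
| fields - Name NewValue OldValue ModifiedProperties

With this sample, mvindex(...,1) picks the second array element, so the result carries Group.DisplayName.NewValue and Group.DisplayName.OldValue.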
As a test, could you add dummy credentials to the proxy and the "additional_parameters" field?