All Posts

There can be a small problem: the error message, i.e. the part prefaced by "invalid message fields, wrong message from ds" in the raw message, is a JSON array. You want to handle that as a single entity.

| rex "^[^{]+(?<response>.+)"
| spath input=response
| rename messageType as topMessageType ``` handle namespace conflict ```
| rex field=statusMessage "^[^\[]+(?<message_from_ds>[^\]]+\])"
| spath input=message_from_ds path={} ``` pull the array elements out as a multivalue field ```
| mvexpand {} ``` one result per array element ```
| spath input={}
| dedup errorDetail
| table errorDetail

@mistydennis

### Steps to Use Single Value Visualization in Your Dashboard

1. **Run the Query**: Use the query you provided to generate the `OnTarget` value.
2. **Select Visualization**:
   - After running the query, go to the **Visualization** tab in the search results.
   - From the available visualizations, choose **Single Value**.
3. **Configure Conditional Coloring**:
   - Click on **Format** in the Visualization tab.
   - Under **Color**, enable **Color by value**.
   - Add your conditions:
     - **If the value is "Yes"**: set the color to green.
     - **If the value is "No"**: set the color to red.
4. **Save and Use**:
   - Apply the settings, and the value will be displayed in either green or red based on the result ("Yes" or "No").
   - You can then save this as part of your dashboard if needed.

An upvote is appreciated.

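If you prefer to configure this directly in Simple XML, here is a minimal sketch. It assumes you first map the "Yes"/"No" string to a number, since the built-in range coloring works on numeric values; the search fragment, stanza layout, and color codes below are illustrative, not from the original post.

<single>
  <search>
    <query>
      ... your existing search ...
      | eval OnTargetNum=if(OnTarget="Yes", 1, 0)
      | table OnTargetNum
    </query>
  </search>
  <!-- values above 0 ("Yes") render green; 0 and below ("No") render red -->
  <option name="colorBy">value</option>
  <option name="useColors">1</option>
  <option name="rangeValues">[0]</option>
  <option name="rangeColors">["0xdc4e41","0x53a051"]</option>
</single>

Note that this sketch displays 1/0 rather than the literal "Yes"/"No" text.
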
I am not sure specifically what you want to do, but if you have that _raw data in an event and you would like to extract the uuid into a field, then you can write a regex with a named capture group in the rex command to extract it at search time. If you would like this to be permanent, you can copy the regex into a Field Extraction.

<yoursearch>
| rex field=_raw "com.companyname.package: (stringstart\s)?(?<uuid>\S+) (stringend )?for"

I made the assumptions that there are no space characters in the uuid string, and that it is surrounded by "com.companyname.package: " (optionally followed by "stringstart ") and "for".

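As a quick check, here is a runanywhere sketch that tests the regex against a shortened copy of the sample event from the question; the makeresults wrapper is only for testing and is not part of the answer itself:

| makeresults
| eval _raw="com.companyname.package: stringstart e-38049e11-72b7-4968-b575-ecaa86f54e02 stringend for some.datahere with status FAILED"
| rex field=_raw "com.companyname.package: (stringstart\s)?(?<uuid>\S+) (stringend )?for"
| table uuid

This should return uuid=e-38049e11-72b7-4968-b575-ecaa86f54e02.
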
Here is the raw text:

com.companyname.package: stringstart e-38049e11-72b7-4968-b575-ecaa86f54e02 stringend for some.datahere with status FAILED, Yarn appId application_687987, Yarn state FINISHED, and Yarn finalStatus FAILED with root cause: samppleDatahere: com.packagenamehere: Forbidden (Service: Amazon S3; Status Code: 403; Error Code: 403 Forbidden; Request ID: sjhdjksdn;

I need to list the uuid, which is between "stringstart" and "stringend".

While you have clearly shown your search (which, by the way, seems perfectly fine), what you haven't shown or described is what you have tried in your dashboard. Can you please provide further information?

OK. If you found a way to feel offended, that was not my intention. I just wanted to point out that what you were doing in this thread was counterproductive, and that it was simply impossible to help you this way.

Want to help us help you? Fine, do so: check your sources and verify what was already suggested in this thread. Want to just take offense? Well, I'm truly sorry to hear that, because we're really trying to create an overall friendly atmosphere here.

And again, it was not my intention to make you personally feel bad. The intention was to point out that by doing random things and just "splashing" random bits of information, you will not get a reasonable answer, because it's simply impossible. That's all. Hope you still have fun on Answers.

Hi @att35,

Assuming _raw is properly formatted--and both your original Splunk Web screenshot and your new formatted event imply it is--you can use a combination of eval and spath commands to iterate over the array and create new fields:

| eval tmp="{".mvjoin(mvmap(json_array_to_mv(json_extract(json(_raw), "ModifiedProperties")), "\"".spath(_raw, "Name").".NewValue\":\"".spath(_raw, "NewValue")."\",\"".spath(_raw, "Name").".OldValue\":\"".spath(_raw, "OldValue")."\""), ",")."}"
| spath input=tmp
| fields - tmp

The eval command creates a tmp field with the following value:

{"Group.ObjectID.NewValue":"111111-2222222-333333-444444","Group.ObjectID.OldValue":"","Group.DisplayName.NewValue":"Group A","Group.DisplayName.OldValue":"","Group.WellKnownObjectName.NewValue":"","Group.WellKnownObjectName.OldValue":""}

The spath command extracts the *.NewValue and *.OldValue fields from the tmp field. Note that empty values will be empty strings and null values will have the string value 'null'. If you want null values to be null fields, you can use the foreach command and the nullif() eval function to override them:

| foreach Group.*.NewValue [ eval "<<FIELD>>"=nullif('<<FIELD>>', "null") ]
| foreach Group.*.OldValue [ eval "<<FIELD>>"=nullif('<<FIELD>>', "null") ]

Search memory usage may be higher when using temporary fields to store and manipulate JSON objects in this way, and you may need to run multiple searches over smaller time ranges, depending on your user's search limits and workload policy.

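As a small illustration of the nullif() override, here is a runanywhere sketch with made-up field values (not the poster's data):

| makeresults
| eval tmp="{\"Group.ObjectID.NewValue\":\"null\",\"Group.DisplayName.NewValue\":\"Group A\"}"
| spath input=tmp
``` turn the literal string "null" into a true null field ```
| foreach Group.*.NewValue [ eval "<<FIELD>>"=nullif('<<FIELD>>', "null") ]

After the foreach, Group.ObjectID.NewValue is null while Group.DisplayName.NewValue keeps its value.
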
@PickleRick I know that my way of asking questions is wrong in bits and pieces, and a master like you did not like it. I value the Splunk Answers platform, and I am also familiar with the contributions you have been making to users on the Splunk Answers platform over the years. You could have simply told me you didn't want to respond to my post with only half the details. I have posted at least 100+ questions on Splunk Answers throughout my Splunk career, and I have never received a reply like today's. You are such a valuable member of SplunkTrust; your reply has shattered my confidence. By writing this kind of unpleasant reply, you diverted the attention of other users and experts who wanted to reply to me, and as a result the essence of the post is lost. I have seen this behaviour from you in my last two posts.

I am exiting this thread while maintaining the decorum of the Splunk Answers platform. Thanks for all the help.

Hello - I realize this question has been asked several times before, and I've tried to implement every solution I've found, but nothing seems to be working. I simply want to update a single value visualization based on the text: if "Yes", then green, and if "No", red. I've tried using older solutions involving rangemap and changing some of the charting options, but I'm not having any luck in v9.3.0.

| inputlookup mylookup.csv
| search $time_tok$ $field_tok$=Y
| stats max(Distance) AS GuideMiles
| appendcols [| inputlookup mylookup.csv | search $month_tok$ | stats max(TargetMiles)]
| rename max(TargetMiles) AS TargetMiles
| eval OnTarget=case(GuideMiles>=TargetMiles,"Yes", true(), "No")
| table OnTarget

I'm not saying you're wasting people's time deliberately. It's just that this is one of those cases where someone (in this case you) asks one thing without giving much background info, and then it leads to more and more problems and issues the poster is either unaware of or is not willing to share, while the poster only keeps insisting on being handed a solution based on a very small piece of the actual information needed for such troubleshooting.

I didn't mean to be rude to you, but you're repeatedly asking "how to fix that" without actually digging into what we're suggesting. You do some random things (like "removing duplicates", whatever that should mean) instead of really investigating the issue, and then ask "is this the potential cause". We're trying to help here, but it quickly gets frustrating. I understand that people have different skill levels and knowledge, but you're doing completely different things than are suggested to you and end up asking "why is it so?". That's why I'm saying that this is something you normally pay people for - they come to you, they do things _for you_, and everybody's happy.

I cannot speak for others, but I'm usually trying to be helpful and friendly, and if you check other threads where I'm active, I take my time to explain my answers so that people not only know _what_ to do but also _why_ it works. But in this case... well, if we're telling you "check your f...ascinating sources", then please do check your sources. You can't fix reality - if the sources do send you wrong data, you'll end up with wrong data. No amount of "removing duplicates" will fix that.

So don't take it personally, because I don't know you and I don't know who you are. All I know is that this thread as it is leads nowhere for now. That's why I wrote that it's frustrating and it's all getting silly. Of course we could point you to the docs and tell you "here's what should be configured; apparently something is not done properly" (most of the time the answer really _is_ in the docs or your config/data), but we're not doing that. In return, though, we'd (OK, I'd) expect some serious effort on your side. Not some random bits and pieces, jumping from one index to another and dropping some screenshots which tell us absolutely nothing. Honestly, I find it more frustrating than if you simply asked "OK, guys, I have no idea what you're talking about, can you explain that?".

Yes, it still generates proxy logs even when filled with fake settings.

The problem with those apps you mentioned is that they don't support authentication. My Elasticsearch database is protected by authentication.

@PickleRick Thanks for your help so far. I have received this kind of unpleasant response from you twice now. Let me tell you that I do not post questions to waste anyone's time. If you don't want to respond to my questions, then please don't. But this kind of reply from you makes me feel even more embarrassed, as if I really am wasting people's time on the Splunk Answers platform. You are not working for me and I am not working for you. As I see it, this is a platform where I can ask my question, and whoever wants to respond to it should do so.

Again, here is a runanywhere example with your sample data:

| makeresults
| eval _raw="3DS2 Server ARes Response: {\"messageType\":\"ARes\",\"status\":\"INTERNAL_VALIDATION_FAILED\",\"statusMessage\":\"invalid message fields, wrong message from ds:[{\\\"threeDSServerTransID\\\":\\\"123\\\",\\\"messageType\\\":\\\"Erro\\\",\\\"messageVersion\\\":\\\"2.2.0\\\",\\\"acsTransID\\\":\\\"345\\\",\\\"dsTransID\\\":\\\"567\\\",\\\"errorCode\\\":\\\"305\\\",\\\"errorComponent\\\":\\\"A\\\",\\\"errorDescription\\\":\\\"Cardholder Account Number is not in a range belonging to Issuer\\\",\\\"errorDetail\\\":\\\"acctNumber\\\",\\\"errorMessageType\\\":\\\"AReq\\\"}]; type[Erro] code[101] component[SERVER]\"}"
| rex "Response: (?<response>\{.+\})"
| spath input=response
| rex field=statusMessage "ds:\[(?<ds_message>[^\]]+)"
| spath input=ds_message
| stats count by errorDetail

If it is not working for some of your real data, then your sample is not an accurate representation of said (failing) data.

There are two scheduling modes for a saved search - real-time (not to be confused with real-time searches!) and continuous. Oversimplifying (but just a bit):

- A real-time scheduled search will try to execute a search covering time from t0 till t1 at some point in time tA. It might not get executed at tA because the SH(C) is overloaded. In that case, the scheduler will keep trying to execute it until tA+(schedule window). If it still cannot run the search because the SH(C) remains overloaded, it will finally give up. The next scheduled run of the same search, which might occur at some tB in the future, will cover time from t2 to t3.

- A continuously scheduled search will try to run the search from t0 till t1 at tA. If it cannot find a free "search slot", it will retry the same search (still from t0 till t1) until it finally can.

An additional difference is that for a real-time scheduled search, if the schedule window is sufficiently big, or if there were sufficiently many skipped occurrences of the search, you might have significant periods of your data not covered by the searches that actually ran. The point of continuously scheduled searches is to eventually get all your data covered by searches (hence the "continuous"), at the expense of "response time": the more searches you have and the more "clogged" your search heads are, the bigger the "lag", because the scheduler will wait longer and longer for the opportunity to run queued searches over old data.

More information here (the scheduling mechanics work the same for reports and alerts - they are all just scheduled searches): https://docs.splunk.com/Documentation/Splunk/latest/Report/Configurethepriorityofscheduledreports

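For reference, the mode is set per saved search in savedsearches.conf. Here is a minimal sketch (the stanza name, cron schedule, and window value are made up for illustration):

[my scheduled search]
cron_schedule = */5 * * * *
# 1 = real-time scheduling (runs may be skipped), 0 = continuous scheduling (catches up)
realtime_schedule = 0
# schedule_window only applies to real-time scheduling
schedule_window = 10
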
Magic. It works. But there is a small issue here: it shows \"errorDetail\". Hmmm.

Try this:

| rex "Response: (?<response>\{.+\})"
| spath input=response
| rex field=statusMessage "ds:\[(?<ds_message>[^\]]+)"
| spath input=ds_message
| stats count by errorDetail

This is again an example of data formatted in a way that is not friendly to Splunk (at least in your case). While you can use @ITWhisperer's search to get your results, bear in mind that it's not field extraction that happens in this search, it's data manipulation. You don't _have_ the fields in your data; you have to create them manually by manipulating and correlating other data extracted from events.

It is one of the hard decisions that must be made when exporting data to JSON. You have two possible approaches. One is to dynamically name the fields, like:

{ "Group1": { "NewValue": "a", "OldValue": "b" }, "Group2": { "NewValue": "c", "OldValue": "d" } }

The other is - as in your case - to export what would otherwise be the field name as a "label" under a constant name:

[ { "Name": "Group1", "NewValue": "a", "OldValue": "b" }, { "Name": "Group2", "NewValue": "c", "OldValue": "d" } ]

Each of those approaches has its pros and cons. The first form is not very friendly if you want to do aggregations and similar manipulations, because your fields are not statically named, so you might have to do some strange things with foreach and wildcarding in stats. But you can search with conditions like Group1.NewValue=a. You can't, however, do something like "stats count by GroupName".

The second form doesn't "tie" values to the name - Splunk doesn't support structured data; it flattens the JSONs and XMLs on parsing - so you have to bend over backwards to get the specific values for an interesting "field", and you can't simply use conditions like Group1.NewValue=a in your search. But you can do "stats count by Name".

So it's always something. One way or another, your data format will end up being a burden.

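To make the second form concrete, here is a runanywhere sketch (sample values invented for illustration) that expands the array and aggregates by the constant "Name" label:

| makeresults
| eval _raw="[{\"Name\":\"Group1\",\"NewValue\":\"a\",\"OldValue\":\"b\"},{\"Name\":\"Group2\",\"NewValue\":\"c\",\"OldValue\":\"d\"}]"
| spath path={} ``` array elements as a multivalue field ```
| mvexpand {}
| spath input={}
| stats count by Name
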
Well, 401 does mean that the authentication was not performed correctly (which means no token provided or a wrong token). So I'd start by checking what requests are being sent to Splunk from both of your scripts (the one working properly and the one not working) and comparing the requests (especially the tokens, of course).

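As a sketch of how to compare them manually (hostnames and tokens below are placeholders; point the request at whichever endpoint your scripts actually call):

# REST API: bearer token against the management port (8089 by default)
curl -k -H "Authorization: Bearer <token>" "https://splunk.example.com:8089/services/search/jobs"

# HEC uses a different scheme ("Splunk <token>") on its own port (8088 by default)
curl -k -H "Authorization: Splunk <hec_token>" "https://splunk.example.com:8088/services/collector" -d '{"event": "auth test"}'
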
You can try the #docs channel on Slack.

OK. It's starting to get a bit silly. A community is meant to be help by users for other users - help in learning the platform and what it can do, checking whether your train of thought is correct, and so on. It is _not_ meant as a free support service. And you're trying to use it as just that: to get your problem solved without trying to understand the underlying issue and while providing almost no information about it.

You obviously have _some_ problem with your data ingestion process. What is it? We don't know. It's something that should be examined locally on your side by someone who can verify the data as it is ingested into Splunk, who can check the settings across your Splunk infrastructure, and who can talk with the administrators of your sources to verify the settings on their side and what data they produce and how. This is not something you can solve by asking single questions on Answers without any significant effort on your side (true, sometimes Answers can be helpful in diagnostics when the asking person does quite a lot of work on their own and only needs some gentle hints now and then). This is something a skilled Splunk engineer would probably diagnose in a relatively short time compared to ping-ponging scraps of information to Answers and back.

People on Answers are volunteers who use their spare time to help others. But that doesn't mean they are a free support service. If you want some effort from them, show some serious effort on your side as well. Make the problem interesting, not frustrating, because you're asking about stuff they have no way of knowing, since it's your internal information.