Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.
You can't use $value$, and your <condition> elements are wrong. I assume you're trying to make a conditional expression, but you effectively have only a single condition. This is the technique to remove all and add all when using a multiselect:

```xml
<change>
  <condition match="$form.app_fm_entity_id$=&quot;*&quot;">
    <eval token="form.app_fm_entity_id">*</eval>
  </condition>
  <condition>
    <eval token="form.app_fm_entity_id">case(mvcount($form.app_fm_entity_id$)="2" AND mvindex($form.app_fm_entity_id$,0)="*", mvindex($form.app_fm_entity_id$,1), mvfind($form.app_fm_entity_id$,"^\\*$$")=mvcount($form.app_fm_entity_id$)-1, "*", true(), $form.app_fm_entity_id$)</eval>
  </condition>
</change>
```

It will set the token to * rather than _all, because * is the value defined in your default 'All' option.
Might be a silly question but does anyone possibly know where I can locate lines with pointing arrows at the end? I wanted to use them to point to each panel I had to show a flow diagram of some sort.
I am using the multiselect input definition below. The issue is that it is not setting the token named "app_net_fm_entity_id" properly. The desired behavior is: if the user selects the "All" label (value=*), then the condition should detect the "*" value and set the "app_net_fm_entity_id" token to "_all". If the user selects anything other than just the "All" label, then the "app_net_fm_entity_id" token should be set to the selected values. I am using Splunk Enterprise 9.2.1. This is a Simple XML dashboard, aka a classic dashboard. I am one month into Splunk and learning feverishly, but I surely need some help on this. I've tried using JS to get the desired behavior for this multiselect, but couldn't get that to work either.

```xml
<input id="app_nodes_multiselect" type="multiselect" depends="$app_fm_app_id$" token="app_fm_entity_id" searchWhenChanged="true">
  <label>Nodes</label>
  <delimiter> </delimiter>
  <fieldForLabel>entity_name</fieldForLabel>
  <fieldForValue>internal_entity_id</fieldForValue>
  <search>
    <query>
      | inputlookup aix_kv_apm_comps WHERE entity_type!=$app_fm_group_nodes$
      | search [| makeresults | eval search="internal_parent_id=(".mvjoin($app_fm_app_id$, " OR internal_parent_id=").")" | return $search]
      | table entity_name, internal_entity_id
      | sort entity_name
    </query>
  </search>
  <choice value="*">All</choice>
  <default>*</default>
  <change>
    <condition>
      <eval>len($value$) == 1</eval>
      <set token="app_net_fm_entity_id">_all</set>
    </condition>
    <condition>
      <eval>len($value$) > 1</eval>
      <set token="app_net_fm_entity_id">$value$</set>
    </condition>
  </change>
</input>
```
Well, you simply need to find something between your "anchors", which in its simplest form might just be:

```
stringstart\s(?<uuid>.*)\sstringend
```

If you know that the uuid has some particular form, you can be a bit more specific (for example, to avoid capturing a wrongly formed uuid):

```
stringstart\s(?<uuid>[0-9a-f]-[0-9a-f]{8}-(?:[0-9a-f]{4}-){3}[0-9a-f]{12})\sstringend
```

You can even add more anchoring text in front or at the end if you have more constant parts. Once you have a regex matching and extracting this part, you can - depending on your use case - either use it with the rex command as @marnall showed, or use it to define a search-time extraction. For example:

```
EXTRACT-uuid = stringstart\s(?<uuid>[0-9a-f]-[0-9a-f]{8}-(?:[0-9a-f]{4}-){3}[0-9a-f]{12})\sstringend
```
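As a quick sanity check outside Splunk, the same pattern can be exercised in Python against the sample event posted in this thread (a sketch; the anchors and UUID shape are taken from the raw text above):

```python
import re

# Sample event from this thread; the UUID sits between the literal
# anchors "stringstart" and "stringend".
raw = ("com.companyname.package: stringstart "
       "e-38049e11-72b7-4968-b575-ecaa86f54e02 stringend for some.datahere "
       "with status FAILED")

# Same pattern as the search-time extraction above: a one-character
# hex prefix, then the usual 8-4-4-4-12 hex groups.
pattern = re.compile(
    r"stringstart\s(?P<uuid>[0-9a-f]-[0-9a-f]{8}-(?:[0-9a-f]{4}-){3}[0-9a-f]{12})\sstringend"
)

match = pattern.search(raw)
print(match.group("uuid"))  # e-38049e11-72b7-4968-b575-ecaa86f54e02
```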
Hi @tuts,

Use Elasticsearch Data Integrator - Module Input if your requirements match the following:

- Simple index list or pattern
- Single date field
- Less than or equal to 10,000 documents per search

The add-on uses the Python Elasticsearch client search() method, which wraps the Elasticsearch Search API. The add-on will search for all documents in the configured index list with configured date field values greater than or equal to now minus the configured offset and less than or equal to now. E.g., given logs-*,metrics-*, @timestamp, and -24h, respectively, the add-on will retrieve documents in pages of 1,000:

```
GET /logs-*,metrics-*/_search?from=0&size=1000
{
  "query": {
    "bool": {
      "must": [
        {
          "range": {
            "@timestamp": {
              "gte": "now-24h",
              "lte": "now"
            }
          }
        }
      ]
    }
  }
}
```

Elasticsearch limits scrolling using the from and size parameters to 10,000 results (10 pages of 1,000 documents). If you need to retrieve more documents per interval, or need more control over how search results are presented prior to entering the Splunk ingest pipeline, you should evaluate REST API Module Input or similar solutions. You might also consider writing your own modular input or scripted input. A custom solution would allow you to control the query language (Query DSL, ES|QL, SQL, etc.), scrolling, checkpointing, etc. If you have more specific questions, members of the community like me with experience in both Splunk and Elasticsearch can assist.
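For illustration, here is a minimal Python sketch of how such a request could be assembled, assuming the page size of 1,000 and the 10,000-result from/size cap described above (build_query and page_offsets are hypothetical helper names; no actual Elasticsearch connection is made):

```python
# Page size and result cap match Elasticsearch's from/size pagination
# limits as described above.
PAGE_SIZE = 1000
MAX_RESULTS = 10000

def build_query(date_field, offset):
    """Range query for documents between now-offset and now."""
    return {
        "query": {
            "bool": {
                "must": [
                    {"range": {date_field: {"gte": f"now{offset}", "lte": "now"}}}
                ]
            }
        }
    }

def page_offsets(total_hits):
    """'from' values for each page, capped at the 10,000-result limit."""
    capped = min(total_hits, MAX_RESULTS)
    return list(range(0, capped, PAGE_SIZE))

query = build_query("@timestamp", "-24h")
print(query["query"]["bool"]["must"][0]["range"]["@timestamp"])  # {'gte': 'now-24h', 'lte': 'now'}
print(len(page_offsets(25000)))  # 10 pages at most, even for 25,000 hits
```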
Hi @sjlaplac, To change the width of inputs move them from "Above canvas" to "In canvas" and arrange them as needed. See https://docs.splunk.com/Documentation/Splunk/9.2.1/DashStudio/inputConfig#Inputs_in_the_canvas for more information.
There can be a small problem: the error message, or "invalid message fields, wrong message from ds" as prefaced in the raw message, is a JSON array. You want to handle that as a single entity.

````
| rex "^[^{]+(?<response>.+)"
| spath input=response
| rename messageType as topMessageType ``` handle namespace conflict ```
| rex field=statusMessage "^[^\[]+(?<message_from_ds>[^\]]+\])"
| spath input=message_from_ds path={}
| mvexpand {}
| spath input={}
| dedup errorDetail
| table errorDetail
````
@mistydennis

### Steps to Use Single Value Visualization in your dashboard

1. **Run the Query**: Use the query you provided to generate the `OnTarget` value.
2. **Select Visualization**:
   - After running the query, go to the **Visualization** tab in the search results.
   - From the available visualizations, choose **Single Value**.
3. **Configure Conditional Coloring**:
   - Click on **Format** in the Visualization tab.
   - Under **Color**, enable **Color by value**.
   - Add your conditions:
     - **If value is "Yes"**: Set the color to green.
     - **If value is "No"**: Set the color to red.
4. **Save and Use**:
   - Apply the settings, and you will see the value displayed in either green or red based on the result ("Yes" or "No").
   - You can then save this as part of your dashboard if needed.

An upvote is appreciated.
I am not sure specifically what you want to do, but if you have that _raw data in an event and you would like to extract the uuid into a field, then you can write a regex with a named capture group in the rex command to extract it at search time. If you would like this to be permanent, you can copy the regex into a Field Extraction.

```
<yoursearch>
| rex field=_raw "com.companyname.package: (stringstart\s)?(?<uuid>\S+) (stringend )?for"
```

I made the assumptions that there are no space characters in the uuid string, and that it is surrounded by "com.companyname.package: " and "for".
Here is the raw text:

```
com.companyname.package: stringstart e-38049e11-72b7-4968-b575-ecaa86f54e02 stringend for some.datahere with status FAILED, Yarn appId application_687987, Yarn state FINISHED, and Yarn finalStatus FAILED with root cause: samppleDatahere: com.packagenamehere: Forbidden (Service: Amazon S3; Status Code: 403; Error Code: 403 Forbidden; Request ID: sjhdjksdn;
```

I need to list the uuid, which is between stringstart and stringend.
While you have clearly shown your search (which by the way seems perfectly fine), what you haven't shown or described is what you have tried in your dashboard. Please can you provide further information?
OK. If you found a way to feel offended, well, that was not my intention. I just wanted to point out that what you were doing in this thread was counterproductive and it was indeed simply impossible to help you this way. Want to help us help you? Fine, do so - check your sources and verify what was already suggested in this thread. Want to just take offense? Well, I'm truly sorry to hear that, because we're really trying to create an overall friendly atmosphere here. And again - it was not my intention to make you personally feel bad. The intention was to point out that by doing random things and just "splashing" random bits of information, you will not get a reasonable answer, because it's simply impossible. That's all. Hope you still have fun on Answers.
Hi @att35,

Assuming _raw is properly formatted - and both your original Splunk Web screenshot and your new formatted event imply it is - you can use a combination of eval and spath commands to iterate over the array and create new fields:

```
| eval tmp="{".mvjoin(mvmap(json_array_to_mv(json_extract(json(_raw), "ModifiedProperties")), "\"".spath(_raw, "Name").".NewValue\":\"".spath(_raw, "NewValue")."\",\"".spath(_raw, "Name").".OldValue\":\"".spath(_raw, "OldValue")."\""), ",")."}"
| spath input=tmp
| fields - tmp
```

The eval command creates a tmp field with the following value:

```
{"Group.ObjectID.NewValue":"111111-2222222-333333-444444","Group.ObjectID.OldValue":"","Group.DisplayName.NewValue":"Group A","Group.DisplayName.OldValue":"","Group.WellKnownObjectName.NewValue":"","Group.WellKnownObjectName.OldValue":""}
```

The spath command extracts the *.NewValue and *.OldValue fields from the tmp field. Note that empty values will be empty strings, and null values will have the string value 'null'. If you want null values to be null fields, you can use the foreach command and the nullif() eval function to override them:

```
| foreach Group.*.NewValue [ eval "<<FIELD>>"=nullif('<<FIELD>>', "null") ]
| foreach Group.*.OldValue [ eval "<<FIELD>>"=nullif('<<FIELD>>', "null") ]
```

Search memory usage may be higher when using temporary fields to store and manipulate JSON objects in this way, and you may need to run multiple searches over smaller time ranges, depending on your user's search limits and workload policy.
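For comparison, the same flattening logic can be sketched in plain Python, assuming an event shaped like the ModifiedProperties array above (field names are taken from the example value; the flatten helper is hypothetical):

```python
import json

# Hypothetical event shaped like the ModifiedProperties array discussed
# above; names and values mirror the example tmp field.
raw = json.dumps({
    "ModifiedProperties": [
        {"Name": "Group.ObjectID", "NewValue": "111111-2222222-333333-444444", "OldValue": ""},
        {"Name": "Group.DisplayName", "NewValue": "Group A", "OldValue": ""},
    ]
})

def flatten(event_json):
    """Mimic the eval/spath trick: one flat field per Name/direction pair."""
    out = {}
    for prop in json.loads(event_json)["ModifiedProperties"]:
        out[f"{prop['Name']}.NewValue"] = prop["NewValue"]
        out[f"{prop['Name']}.OldValue"] = prop["OldValue"]
    return out

fields = flatten(raw)
print(fields["Group.DisplayName.NewValue"])  # Group A
```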
@PickleRick I know that my way of asking queries is wrong in bits and pieces, and a master like you did not like it. I value the Splunk Answers platform, and I am also familiar with the contributions you have been making to users on the Splunk Answers platform over the years. You could have simply told me you didn't want to respond to my post. I have posted at least 100+ queries on Splunk Answers throughout my Splunk career, and I have never received a reply like today's. You are such a valuable member of SplunkTrust; your reply has shattered my confidence. By writing this type of unpleasant reply, you diverted the attention of other users and experts who wanted to reply to me, and as a result the essence of the query post is lost. I have seen your behaviour in my last two posts. I exit this thread while maintaining the decorum of the Splunk Answers platform. Thanks for all the help.
Hello - I realize this question has been asked several times before, and I've tried to implement every solution I've found, but nothing seems to be working. I simply want to update a single value visualization based on the text: if "Yes", then green, and if "No", red. I've tried using older solutions involving rangemap and changing some of the charting options, but I'm not having any luck in v9.3.0.

```
| inputlookup mylookup.csv
| search $time_tok$ $field_tok$=Y
| stats max(Distance) AS GuideMiles
| appendcols [| inputlookup mylookup.csv | search $month_tok$ | stats max(TargetMiles)]
| rename max(TargetMiles) AS TargetMiles
| eval OnTarget=case(GuideMiles>=TargetMiles,"Yes", true(), "No")
| table OnTarget
```
I'm not saying you're wasting people's time deliberately. It's just that this is one of those cases where someone (in this case you) asks one thing without giving much background info, then it leads to more and more problems and issues the poster is either unaware of or is not willing to share, and the poster only keeps insisting on being given a solution based on a very small piece of the actual information needed for such troubleshooting. I didn't mean to be rude to you, but you're repeatedly asking "how to fix that" without actually digging into what we're suggesting. You do some random things (like "removing duplicates", whatever that should mean) instead of really investigating the issue, and then ask "is this the potential cause?". We're trying to help here, but it quickly gets frustrating. I understand that people have different skill levels and knowledge, but you're doing completely different things than are suggested to you and end up asking "why is it so?". That's why I'm saying that this is something you normally pay people for - they come to you, they do things _for you_, and everybody's happy. I cannot speak for others, but I'm usually trying to be helpful and friendly, and if you check other threads where I'm active, I take my time to explain my answers so that people not only know _what_ to do but also _why_ it works. But in this case... well, if we're telling you "check your f...ascinating sources", then please do check your sources. You can't fix reality - if the sources do send you wrong data, you'll end up with wrong data. No amount of "removing duplicates" will fix that. So don't take it personally, because I don't know you and I don't know who you are. All I know is that this thread as it is leads nowhere for now. That's why I wrote that it's frustrating and it's all getting silly.
Of course we could point you to the docs and tell you - here's what should be configured, apparently something is not done properly (most of the time the answer really _is_ in the docs or your config/data) but we're not doing that. But in return we'd (ok I'd) expect some serious effort on your side. Not some random bits and pieces, jumping from one index to another and dropping some screenshots which tell us absolutely nothing. Honestly, I find it more frustrating than if you simply asked "ok, guys, I have no idea what you're talking about, can you explain that?".
Yes, it still generates proxy logs even when I fill in fake settings. The problem with those apps you mentioned is that they don't support authentication. My Elasticsearch database is protected by authentication.
@PickleRick Thanks for your help so far. I have received this unpleasant response from you two times now. Let me tell you that I do not post queries to waste anyone's time. If you don't want to respond to my queries, then please don't respond. But this kind of reply from you makes me feel more embarrassed, as if I really am wasting people's time on the Splunk Answers platform. You are not working for me, and I am not working for you. To me, this is a platform where I can ask my query, and whoever wants to respond to it should do so.
Again, here is a runanywhere example with your sample data:

```
| makeresults
| eval _raw="3DS2 Server ARes Response: {\"messageType\":\"ARes\",\"status\":\"INTERNAL_VALIDATION_FAILED\",\"statusMessage\":\"invalid message fields, wrong message from ds:[{\\\"threeDSServerTransID\\\":\\\"123\\\",\\\"messageType\\\":\\\"Erro\\\",\\\"messageVersion\\\":\\\"2.2.0\\\",\\\"acsTransID\\\":\\\"345\\\",\\\"dsTransID\\\":\\\"567\\\",\\\"errorCode\\\":\\\"305\\\",\\\"errorComponent\\\":\\\"A\\\",\\\"errorDescription\\\":\\\"Cardholder Account Number is not in a range belonging to Issuer\\\",\\\"errorDetail\\\":\\\"acctNumber\\\",\\\"errorMessageType\\\":\\\"AReq\\\"}]; type[Erro] code[101] component[SERVER]\"}"
| rex "Response: (?<response>\{.+\})"
| spath input=response
| rex field=statusMessage "ds:\[(?<ds_message>[^\]]+)"
| spath input=ds_message
| stats count by errorDetail
```

If it is not working for some of your real data, then your sample is not an accurate representation of said (failing) data.
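The extraction step can be illustrated outside Splunk as well. A Python sketch, assuming a simplified statusMessage shaped like the sample above:

```python
import json
import re

# Simplified version of the statusMessage from the runanywhere example;
# the "wrong message from ds" payload is a JSON array embedded in a string.
status_message = (
    'invalid message fields, wrong message from ds:'
    '[{"threeDSServerTransID":"123","errorDetail":"acctNumber"}]; '
    'type[Erro] code[101] component[SERVER]'
)

# Same idea as | rex field=statusMessage "ds:\[(?<ds_message>[^\]]+)":
# grab everything between "ds:[" and the closing bracket, then parse it.
m = re.search(r"ds:\[(?P<ds_message>[^\]]+)", status_message)
ds_messages = json.loads("[" + m.group("ds_message") + "]")
print(ds_messages[0]["errorDetail"])  # acctNumber
```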
There are two types of scheduling modes for a saved search - real-time (not to be confused with real-time searches!) and continuous. Oversimplifying (but just a bit):

- A real-time scheduled search will try to execute a search covering time from t0 till t1 at some point in time tA. It might not get executed at tA because the SH(C) is overloaded. In that case the scheduler will keep trying to execute it until tA+(schedule window). If it still cannot run the search because the SH(C) remains overloaded, it will finally give up. The next scheduled run of the same search, which might occur at some tB in the future, will cover time from t2 to t3.
- A continuously scheduled search will try to run the search from t0 till t1 at tA. If it cannot find a free "search slot", it will retry the same search (still from t0 till t1) until it finally can.

An additional difference here is that for a real-time scheduled search, if the schedule window is sufficiently big, or if there were sufficiently many skipped occurrences of the search, you might have significant periods of your data not covered by any run of the search. The point of continuously scheduled searches is to eventually get all your data covered by searches (hence the "continuous"), at the expense of "response time" (the more searches you have and the more "clogged" your search heads are, the bigger the "lag", because the scheduler will search more and more for an opportunity to run queued searches over old data). More information here (the scheduling mechanics work the same for reports and alerts - they are all just scheduled searches): https://docs.splunk.com/Documentation/Splunk/latest/Report/Configurethepriorityofscheduledreports
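The mode is chosen per saved search in savedsearches.conf via the realtime_schedule setting. A minimal sketch, assuming an hourly search named my_hourly_alert (the stanza name and cron schedule are illustrative):

```
[my_hourly_alert]
cron_schedule = 0 * * * *
# 1 = real-time scheduling (default): skipped runs are not re-run,
#     so gaps in data coverage are possible.
# 0 = continuous scheduling: the scheduler keeps retrying the same
#     t0..t1 window until it runs, trading latency for coverage.
realtime_schedule = 0
# Only meaningful when realtime_schedule = 1: how long past the nominal
# run time the scheduler may defer this search.
schedule_window = auto
```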