All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi @SN1, if this is a field (your_field), the easiest way is to use the eval functions, not a regex:

| eval date=strftime(strptime(your_field,"%m/%d/%Y"),"%m/%Y")

Ciao. Giuseppe
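For anyone who wants to sanity-check the conversion outside Splunk, here is the same strptime/strftime round trip sketched in stdlib Python (the function name is just illustrative):

```python
from datetime import datetime

def month_year(date_str):
    # Same round trip as the SPL strptime/strftime pair above:
    # parse %m/%d/%Y, then re-emit only month and year.
    return datetime.strptime(date_str, "%m/%d/%Y").strftime("%m/%Y")

print(month_year("12/11/2024"))  # -> 12/2024
```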
Hi, I want to extract the month and year from a date: for 12/11/2024 the result should be 12/2024.
Hi, I have a dashboard in which I am using JavaScript. Whenever I make changes to the script and restart Splunk, I am not able to see those changes; the only way I can see them is by clearing my browser's cache.
While I wholeheartedly agree with the "don't fiddle with structured data using regexes" point, it's worth noting that spath is not feasible for search-time extractions on which you'd want to base your searches, because spath has to parse the whole event (or a whole given field) as JSON and has no notion of fields before that, so you can't write a condition like "spath(whatever)=some_value". In other words, while for "first-order" JSON you can do the normal initial search filtering based on field=value conditions, it won't work with more deeply embedded JSON structures (regardless of whether they are included as strings within an "outer" JSON or are simply part of a syslog-headered event). Splunk still has to process all events from the preceding pipeline and push them through spath; only then can you filter the data further. One possible way around this is to limit your processed data in the initial search by searching for the literal value as a term. It will not help much with fields of low cardinality or with terms common across many fields (as in this case - true/false is not a very selective search term), but in other cases, when you're searching for a fairly unique term, it can mean loads of speedup.
So you want to match any "string" in any event with any other event and count the number of matches? Apart from this being extremely vague, what is it that you are attempting to determine? What are the boundary conditions for determining which strings to try to match? And if an event has more than one "string" which matches strings in other events, do you double-count the events?
Hi Splunkers, per this documentation - https://docs.splunk.com/Documentation/Splunk/latest/DashStudio/tokens - setting a default value is done by navigating to the Interactions section of the Configuration panel. This is simple with the given example, with the token set as $method$:

"tokens": { "default": { "method": { "value": "GET" } } }

Would anyone be able to advise how I can set default tokens for a dashboard (created using Dashboard Studio) if the value of the panel is pointing to a data source whose query depends on another data source's results?

Panel A: Data Source: 'Alpha status'
'Alpha status' query: | eval status=$Beta status:result._statusNumber$

e.g. I need to set a default token value for $Beta status:result._statusNumber$. Thanks in advance for the response.
@ITWhisperer Cool, your proposal does exactly what I was looking for. Thank you. 
Hi @anooshac, good for you, see you next time! Ciao and happy splunking. Giuseppe P.S.: Karma Points are appreciated
Another important point: your raw data is in JSON. Do not treat structured data as plain strings. In other words, instead of using regex, use the proper JSON tools Splunk has. When showing structured data, it is important to post a compliant structure. Let me reconstruct a compliant JSON out of the illustrated fragment before giving you a shortcut.

{"isthiscorrect": {"somekey": {"Name":null,"Id":null,"WaypointId":null}},"Body":{"APIServiceCall":{"ResponseStatusCode":"200","ResponsePayload":"{\"eligibilityIndicator\":[{\"service\":\"Mobile\",\"eligible\":true,\"successReasonCodes\":[],\"failureReasonCodes\":[]}]}"}}}

If your raw events resemble the above in structure, Splunk would have given you a field named Body.APIServiceCall.ResponsePayload. Your illustrated fragment contains this value for that field:

{"eligibilityIndicator":[{"service":"Mobile","eligible":true,"successReasonCodes":[],"failureReasonCodes":[]}]}

All you need to do is use an appropriate tool to extract from this. But before you do, note that eligibilityIndicator is an array. You most likely want to split the array elements into their own events. Putting this chain together:

| spath input=Body.APIServiceCall.ResponsePayload path=eligibilityIndicator{}
| mvexpand eligibilityIndicator{}
| spath input=eligibilityIndicator{}

The field you are trying to extract is now called eligible. Here is an emulation with your fragment as reconstructed above.

| makeresults
| eval _raw = "{\"isthiscorrect\": {\"somekey\": {\"Name\":null,\"Id\":null,\"WaypointId\":null}},\"Body\":{\"APIServiceCall\":{\"ResponseStatusCode\":\"200\",\"ResponsePayload\":\"{\\\"eligibilityIndicator\\\":[{\\\"service\\\":\\\"Mobile\\\",\\\"eligible\\\":true,\\\"successReasonCodes\\\":[],\\\"failureReasonCodes\\\":[]}]}\"}}}"
| spath ``` data emulation above ```

These are the three fields extracted from eligibilityIndicator{}: eligible (true), service (Mobile), and successReasonCodes{} (empty).
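For comparison, the same double-parse can be sketched in plain Python (stdlib json only). The event below mirrors the reconstructed fragment above, trimmed for brevity; the key point is that ResponsePayload holds JSON as a string, which is why a second parse (spath input=...) is needed:

```python
import json

# Inner payload, built programmatically to avoid escaping mistakes.
inner = {"eligibilityIndicator": [
    {"service": "Mobile", "eligible": True,
     "successReasonCodes": [], "failureReasonCodes": []}]}

# ResponsePayload is the inner JSON serialized *as a string*.
raw = json.dumps({"Body": {"APIServiceCall": {
    "ResponseStatusCode": "200",
    "ResponsePayload": json.dumps(inner)}}})

event = json.loads(raw)  # first parse: the outer event
payload = json.loads(event["Body"]["APIServiceCall"]["ResponsePayload"])  # second parse

# Split the array into individual records, mirroring mvexpand.
for record in payload["eligibilityIndicator"]:
    print(record["service"], record["eligible"])  # -> Mobile True
```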
Did you add the CSS and also the table ID? Also yes, the mid/min/max is what you set in this statement | eval v=mvappend(v, tostring(case(perc<=10, 1, perc<=20, 2, perc<=100, 3))) So you set your ranges with that setting, so add a 1 for min, 2 for mid and 3 for max and then in the format, decide the colours you want.
Check permissions, both in Splunk (the meta entries) and on the filesystem. Check if your script runs when you call it with:

splunk cmd <app_path>/bin/lenlookup.py
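While you're debugging, it may help to know that a Splunk external lookup script is essentially a CSV filter: it reads rows on stdin and writes the same rows back on stdout with the output fields filled in. A minimal sketch (the data and length field names are assumptions based on the test_lenlookup example in this thread, not the actual script):

```python
import csv
import io
import sys

def fill_lengths(infile, outfile):
    # Splunk sends a CSV with one column per lookup field; we echo
    # each row back with the "length" output field populated.
    reader = csv.DictReader(infile)
    writer = csv.DictWriter(outfile, fieldnames=reader.fieldnames)
    writer.writeheader()
    for row in reader:
        row["length"] = str(len(row.get("data") or ""))
        writer.writerow(row)

if __name__ == "__main__":
    fill_lengths(sys.stdin, sys.stdout)
```

If this runs cleanly when fed a two-column CSV by hand, the remaining problem is almost certainly path or permission related, not the script itself.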
You are really just repeating the same question all these days without showing your effort.  I have a fairly elaborate response in your other question How to filter events using text box values including sample dashboards.  Please delete repeating posts and work on the post where volunteers have provided you with the most information.
Thanks @PickleRick for your response. I am looking to get the count of the commonly used browsers, like Chrome, Firefox, Safari, Edge and Opera.
@bowesmana, thanks for the reply. It is working perfectly fine. I missed the html part for hiding the multivalue. Thank you so much !
One way you can do it is using the hidden multivalue technique, where you store a number (or even the colour) as a second value alongside the displayed value, which is hidden through CSS. Then use a format expression to colour the cell. See this example:

<panel>
  <title>Colour formatting for ranges of percentages</title>
  <html depends="$hidden$">
    <style>
      #coloured_cell4 table tbody td div.multivalue-subcell[data-mv-index="1"]{
        display: none;
      }
    </style>
  </html>
  <table id="coloured_cell4">
    <search>
      <query>| makeresults count=10
| fields - _time
| eval v=random() % 100
| eventstats sum(v) as totv
| eval perc=round(v/totv*100)
| eval v=v." (".perc."%)"
| eval v=mvappend(v, tostring(case(perc&lt;=10, 1, perc&lt;=20, 2, perc&lt;=100, 3)))</query>
      <earliest>$earliest$</earliest>
      <latest>$latest$</latest>
    </search>
    <option name="refresh.display">progressbar</option>
    <format type="color" field="v">
      <colorPalette type="expression">case(mvindex(value, 1) == "1", "#00FF00", mvindex(value, 1) == "2", "#FFBB00", mvindex(value, 1) == "3", "#FF0000")</colorPalette>
    </format>
  </table>
</panel>
Hi All, I have a few columns which are in the format "21 (31%)" - these are the value and the percentage of the value. I want to use MinMidMax for the coloring based on the percentage, but I am not able to use it directly since it is a customized value. Does anyone know a solution for coloring such columns?
Thanks for your detailed reply. I created the python script, added the transforms.conf file, added the lookup definition, updated the meta file, restarted Splunk and then ran:

| makeresults
| eval data="இடும்பைக்கு"
| lookup test_lenlookup data

Still the error:

Error in 'lookup' command: Could not construct lookup 'test_lenlookup, data'. See search.log for more details. Could not find 'lenlookup.py'. It is required for lookup 'test_lenlookup'. The search job has failed due to an error. You may be able view the job in the - job inspector.

It looks like the issue is 90% solved, but there is still some basic problem here, and I'm not sure what. OK, let me revisit the whole issue tomorrow. Thanks a lot for your help.
Assuming this is the output of a search, then make the search do this with that data (this assumes raw is a field containing that data):

| eval json=json_array_to_mv(raw)
| fields - raw _time
| mvexpand json
| spath input=json
| fields - json
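The intent of that chain can be sketched in Python: the field holds a JSON array as a string, each element of which becomes its own record with its own fields. (The array contents here are hypothetical, since the original data isn't shown.)

```python
import json

# A field holding a JSON array as a string (hypothetical sample).
raw = '[{"name": "a", "value": 1}, {"name": "b", "value": 2}]'

# json.loads splits the array (json_array_to_mv + mvexpand),
# and each element already carries its own fields (spath input=json).
records = json.loads(raw)
for rec in records:
    print(rec["name"], rec["value"])
```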
You can do this in a number of ways.

1. Use a lookup definition based on a lookup file with the messages you want to match.

Create a CSV lookup with the matches you are interested in and prefix/suffix them with *, e.g.

| makeresults format=csv data="match
*error events found for key*
*Invalid requestTimestamp*
*Exception while calling some API ...java.util.concurrent.TimeoutException*"
| outputlookup matches.csv

Set up a lookup definition based on that CSV and, in Advanced options, define the match type as WILDCARD(match). Then in your search do:

your search...
| lookup matches match as message OUTPUT match
| where isnotnull(match)
| stats count by match

So, you can see this actually working like this:

| makeresults format=csv data="message
error events found for key a1
Invalid requestTimestamp abc
error event found for key a2
Invalid requestTimestamp def
correlationID - 1234 Exception while calling some API ...java.util.concurrent.TimeoutException
correlationID - 2345 Exception while calling some API ...java.util.concurrent.TimeoutException"
| lookup matches match as message OUTPUT match
| where isnotnull(match)
| stats count by match

Note that this actually produces only a SINGLE match for the "error events found" pattern, because your second example was "event", not "events". There are other ways to do the same, but it depends on what you're trying to do. Note that your lookup can contain additional fields you could output, e.g. a description, which you could OUTPUT instead to report on. Also note that the wildcard is *, so put the wildcard where you want it and it will match anything in between.
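If you want to sanity-check your wildcard patterns before loading them into the lookup, the WILDCARD match behaviour can be approximated in Python with fnmatch (patterns and messages trimmed from the example above; first matching pattern wins, mimicking a single-match lookup):

```python
from fnmatch import fnmatchcase

patterns = [
    "*error events found for key*",
    "*Invalid requestTimestamp*",
    "*Exception while calling some API*",
]

messages = [
    "error events found for key a1",
    "Invalid requestTimestamp abc",
    "error event found for key a2",   # "event", not "events": no match
]

# Count messages per matching pattern, like | stats count by match.
counts = {}
for msg in messages:
    for pat in patterns:
        if fnmatchcase(msg, pat):
            counts[pat] = counts.get(pat, 0) + 1
            break

print(counts)
```

As in the lookup example, the singular "error event found" line matches nothing, which is exactly the subtlety called out above.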
It would help to know what you've tried already so we don't waste time on that. Consider these props settings:

[mysourcetype]
DATETIME_CONFIG = current
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)\{
TRANSFORMS-parse_mysourcetype = parse_mysourcetype

with these transforms:

[parse_mysourcetype]
REGEX = "([^"]+)":"([^"]+)
FORMAT = $1::$2
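That transforms REGEX can be exercised outside Splunk with Python's re module, which is a quick way to check what key::value pairs it would pull out (the sample event below is hypothetical):

```python
import re

# Same pattern as the transforms.conf REGEX: capture "key":"value" pairs.
pattern = r'"([^"]+)":"([^"]+)'

raw = '{"user":"alice","action":"login","status":"ok"}'  # hypothetical event
fields = dict(re.findall(pattern, raw))
print(fields)  # {'user': 'alice', 'action': 'login', 'status': 'ok'}
```

Note this pattern only captures string values; numeric or boolean JSON values won't match, which is worth keeping in mind if the real events contain them.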