All Posts

For application name it's working. For interface name, how do I map it to the application name?

Application Name: Test 1, Test 2
Test 1 has 3 interface names: aa, bb, cc
Test 2 has 5 interface names: ww, dd, ff, gg, hh

I am already getting the values from an inputlookup. How can I map each application name to its interface names?
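A minimal sketch of one way to do this, assuming a lookup file (here called app_interfaces.csv) with one row per application/interface pair, i.e. columns application_name and interface_name (both names are placeholders for whatever your lookup actually uses):

```
| inputlookup app_interfaces.csv
| stats values(interface_name) as interface_name by application_name
```

This collapses the pairs into one multivalue row per application (Test 1 with aa, bb, cc; Test 2 with ww, dd, ff, gg, hh); adding `| mvexpand interface_name` would turn it back into one row per interface.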
This other chart seems to be related to a different search, particularly as it appears to have a date on the x-axis which does not appear as a column in your search.
Thanks for your response. Your solution works fine, and I created the query below for the search:

```
index=**** host=***
| spath
| eval message="{\"message\":".message."}"
| spath input=message message{} output=collection
| mvexpand collection
| spath input=collection
| stats sum(TOTAL) as Total, sum(PROCESSED) as Processed, sum(SKIPPED) as Skipped by TARGETSYSTEM
```

The chart below was created using this query. Now I want to display the inventory with the date in the chart, like this:
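One common way to bring the date into the chart (a sketch based on the query above; the 1-day span is an assumption, adjust to taste) is to bucket _time and add it to the stats by-clause:

```
index=**** host=***
| spath
| eval message="{\"message\":".message."}"
| spath input=message message{} output=collection
| mvexpand collection
| spath input=collection
| bin _time span=1d
| stats sum(TOTAL) as Total, sum(PROCESSED) as Processed, sum(SKIPPED) as Skipped by _time TARGETSYSTEM
| eval Date=strftime(_time, "%Y-%m-%d")
```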
@NickNguyen Refer to the documents below:

Resource Usage: CPU Usage - Splunk Documentation
Solved: Example of how to measure server CPU usage? - Splunk Community

*** If the above solution helps, an upvote is appreciated. ***
Hello @NickNguyen, On the Enterprise instance itself you can find the Monitoring Console, which ships OOTB with the Splunk Enterprise package. Navigate to Settings > Monitoring Console > Resource Usage > CPU Usage: Instance, and that dashboard will help you identify the CPU usage of the instance. From the panel you can also open the underlying search by clicking the magnifying-lens icon that appears when you hover over the panel, and set an alert with your required threshold.

Thanks, Tejas.
--- If the above solution helps, an upvote is appreciated.
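If you would rather write the alert search yourself, the Monitoring Console panels are driven by the _introspection index. A hedged sketch (the field names follow the Hostwide resource-usage introspection data, but verify them against your own instance):

```
index=_introspection sourcetype=splunk_resource_usage component=Hostwide
| eval cpu_pct = 'data.cpu_system_pct' + 'data.cpu_user_pct'
| stats avg(cpu_pct) as avg_cpu by host
| where avg_cpu > 90
```

Saved as an alert that triggers when results are returned, this would fire whenever the average CPU over the search window exceeds 90%.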
Hey @ShamGowda, What is the concern here? Do you already have the data in the respective index? Also, have you explored Splunkbase? There are quite a lot of apps that help visualize memory and CPU usage.

Thanks, Tejas.
Hi everyone, I have an Enterprise instance installed on a Windows machine. I am trying to monitor the CPU performance of the machine the instance runs on, so that I can generate an alert whenever CPU usage exceeds 90%. Any help will be greatly appreciated!
As you are probably aware, the list of overlay fields is a comma-separated list of field names, so that's what you need in your token. You could try something like this:

```
| stats values(machine) as avg_processing_time_per_block
| eval avg_processing_time_per_block=mvjoin(avg_processing_time_per_block,",")
```

You would then set your token in the done block of the search, using this field from the (first) results row, and use it in your display panel settings:

```
<option name="charting.chart.overlayFields">$avg_processing_time_per_block$</option>
```
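Putting it together, the hidden search plus its done handler could look roughly like this in Simple XML (the base search and the token name are illustrative):

```
<search>
  <query>index=... | stats values(machine) as avg_processing_time_per_block
         | eval avg_processing_time_per_block=mvjoin(avg_processing_time_per_block,",")</query>
  <done>
    <set token="avg_processing_time_per_block">$result.avg_processing_time_per_block$</set>
  </done>
</search>
```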
OK. Back up a little. What does your environment look like? I think we have some discrepancy in our assumptions about your server. I think @gcusello thinks you have a search head cluster but want to delete an app from a single instance (presumably initially installed on this instance only), whereas I assumed we're dealing with a completely stand-alone search head server. One of us has to be wrong here. So do you have a search head cluster, or are we talking about a stand-alone search head? If it is a stand-alone search head, is it managed by a Deployment Server?
They are already ordered - they are sorted lexicographically (alphabetically) - perhaps not in the order you wanted? Try adding this to the end:

```
| fields guid start end duration status
```
Hi @ITWhisperer, Thanks for the reply. I understand, but correct me if I'm wrong:
1. If I have a separate hidden panel that produces my token value (avg_processing_time_per_block),
2. then how can I assign the token $avg_processing_time_per_block$ value to the overlay fields, like this?

```
<option name="charting.chart.overlayFields">$avg_processing_time_per_block$</option>
```

or

```
<option name="charting.chart.overlayFields">avg_processing_time_per_block</option>
```

If I give it a token, the line chart shows a single line named avg_processing_time_per_block, but the requirement is that avg_processing_time_per_block has a dynamic value. My question is how to assign the avg_processing_time_per_block value as a token in charting.chart.overlayFields. Thanks,
Thanks, it works great. Is there a way to order the values of the column property?
If you are 100% sure that you only have one result, and always will have just one result, you could try to brute-force it and:
1) spath the whole result field
2) extract the result's ID with a regex
3) cut the ID part from the result field
4) spath the remaining array
Might work, might not (especially since, if the rest of the data is similarly (dis)organized, you're going to have more dynamically named fields deeper down), and it definitely won't be pretty, as manipulating structured data with regexes is prone to errors. BTW, I don't understand how you got so many results from my example.
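The four steps above might look roughly like this in SPL. Very much a sketch: result_json and the regex are assumptions about the event shape, and step 3's string surgery is exactly the fragile part warned about.

```
| spath result output=result_json
| rex field=result_json "\"(?<result_id>\d+)\":"
| eval inner=replace(result_json, "^\{\"" . result_id . "\":", "")
| eval inner=replace(inner, "\}$", "")
| spath input=inner
```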
This is not valid JSON; it is a formatted version of the JSON. Perhaps I wasn't specific enough. Please share some anonymised, representative sample raw events so we can see what you are dealing with, preferably in a code block </> to prevent formatting information being lost. For example, we would want to be able to use the events in a runanywhere / makeresults style search, much like @PickleRick demonstrated.
You could have a separate (hidden) panel which generates the value of the token that you use to set the overlay fields for this panel.
It is working now. Thanks!
This is the output of the query. Also, an example of my event (the [+]/[-] markers are collapsed/expanded nodes in the event viewer):

```
browsers: { [+] }
coverageResult: { [+] }
libraryPath: libs/funnels
result: { [-]
  82348856: [ [+] ]
}
summary: { [+] }
```
It is working now. Thanks
No, you can't (easily and efficiently) make such a "dynamic" extraction. Splunk is very good at dealing with key-value fields, but it doesn't have any notion of "structure" in data. It can parse JSON or XML into flat key-value pairs in several ways (auto KV, spath/xpath, indexed extractions), but all those methods have drawbacks, as the structure of the data is lost and only partially retained in the field naming. So if you handle JSON/XML data, it's often best (if you have the possibility, of course) to influence the event-emitting side so that the events are easily parseable and can be processed in Splunk without much overhead.

Because your data (which you haven't posted a sample of - shame on you) most probably contains something like:

```
{
  [... some other part of json ...],
  "result": {
    "some_event_id": { [... event data ...] },
    "another_event_id": { [... event data ...] }
  }
}
```

While it would be much better to have it as:

```
{
  [...]
  "result": [
    {
      "id": "first_id",
      [... result details ...]
    },
    {
      "id": "another_id",
      [... result details ...]
    }
  ]
}
```

It would be much better because then you'd have a static, easily accessible field called id. Of course, from Splunk's point of view, if you managed to flatten the events even more (possibly splitting them into several separate ones), that would be better still.

With the format you have, since it's not getting parsed as a multivalued field (you don't have an array in your JSON, but separate fields), it's going to be tough. You might try some clever foreach magic, but I can't guarantee success here. An example of such an approach is in this run-anywhere example:

```
| makeresults
| eval json="{\"result\":{\"1\":[{\"a\":\"n\"},{\"b\":\"m\"}],\"2\":[{\"a\":\"n\"},{\"b\":\"m\"}]}}"
| spath input=json
| foreach result.*{}.a
    [ | eval results=mvappend(results,"<<MATCHSTR>>" . ":" . '<<FIELD>>') ]
| mvexpand results
| eval resultsexpanded=split(results,":")
| eval resultid=mvindex(resultsexpanded,0), resultvalue=mvindex(resultsexpanded,1)
| table resultid, resultvalue
```

But as you can see, it's nowhere near pretty.
Hi @aasserhifni, did you try pushing the apps from the Deployer? Apps not present in the Deployer's $SPLUNK_HOME/etc/shcluster/apps should be removed from the Search Head Cluster. Ciao. Giuseppe
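For reference, the push itself is done from the deployer with `splunk apply shcluster-bundle`; the target URI and credentials below are placeholders:

```
# Run on the deployer; -target can be any one member of the SHC
$SPLUNK_HOME/bin/splunk apply shcluster-bundle -target https://sh1.example.com:8089 -auth admin:changeme
```

After the push, deployer-managed apps that are absent from $SPLUNK_HOME/etc/shcluster/apps on the deployer are removed from the members.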