All Posts

This probably means that they are defined as automatic lookups, which will always be executed when the matching conditions for that lookup definition are true, e.g. the event has the correct sourcetype. The failure could be because you don't have permission to see some part of the lookup, or the lookup is not present and the definition refers to a non-existent lookup, or the automatic lookup definition itself is wrong. For example, you can cause this problem (and get this exact message) by referencing a field in the automatic lookup definition that does not exist in the lookup file. Do you have a Splunk sysadmin? They should look at this to find out what is wrong with the automatic lookup.
@bowesmana I'm using the stats statement to help with debugging the actions I'm performing on the UI. I've tried adding an HTML panel to better understand the various actions from the drilldown, but I'm not seeing what I expect to see. I currently have a stacked column chart. I would like to hover over or click on any data point in the stacked chart and get the x, y, z data (i.e. x = build, y = duration time, and z = name of the task/column segment). Do you happen to know how I can capture this information when I click on a point in the stacked column?
You CAN actually do a conditional lookup, as long as your lookup is a CSV: https://docs.splunk.com/Documentation/Splunk/9.1.1/SearchReference/ConditionalFunctions#lookup.28.26lt.3Blookup_table.26gt.3B.2C.26lt.3Bjson_object.26gt.3B.2C.26lt.3Bjson_array.26gt.3B.29 I don't think it's very commonly used, but it works well:

search...
| eval output=if(field1!=field2, lookup("mylookup.csv", json_object("department", field2), json_array("output_field1","output_field2")), "{}")

You will get back a field named output with a JSON representation of the output fields listed in the JSON array.
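To make the per-event semantics concrete, here is a rough Python sketch of what that eval expression does. MYLOOKUP, its departments, and its field values are made-up stand-ins for mylookup.csv, not anything from the original post:

```python
import json

# Hypothetical in-memory stand-in for mylookup.csv: maps the match field
# ("department") to the requested output fields.
MYLOOKUP = {
    "sales": {"output_field1": "east", "output_field2": "A12"},
    "eng":   {"output_field1": "west", "output_field2": "B34"},
}

def conditional_lookup(field1, field2):
    """Mimic: output=if(field1!=field2, lookup(...), "{}")."""
    if field1 != field2:
        # The lookup runs only when the condition holds; an unmatched
        # key yields an empty JSON object, like an empty lookup result.
        row = MYLOOKUP.get(field2, {})
        return json.dumps(row)
    return "{}"

print(conditional_lookup("a", "sales"))  # {"output_field1": "east", "output_field2": "A12"}
print(conditional_lookup("eng", "eng"))  # {} (fields equal, lookup skipped)
```

The point of the pattern is that the lookup cost is only paid on rows where the condition is true; everywhere else you get the literal "{}".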
Hi Splunkers... Assumptions: the HF we want to deploy should be inside a DMZ network, the license master is outside the DMZ, and all necessary ports will be opened as required. Now the question: can License Master to HF communication be one-way only (info flows only from LM to HF, never from HF to LM), or does LM to HF require two-way communication by default? Please suggest, thanks.
Hi @cjharmening... (replying for you and for all other new Splunkers.) Splunk's documentation is very neatly organized (like a library's bookshelves). 1) Please go to https://docs.splunk.com/ 2) Then select the appropriate app you are looking for. 3) Under that app's tab, you will find the documentation you are looking for: https://docs.splunk.com/Documentation/MC/Current/SplunkPlaybookAPI Hope it's helpful... karma points are appreciated by everyone. If this solves your question, please accept it as the answer. Thanks.
Can anyone provide a link to Splunk Mission Control API documentation?   Thank you 
How to save CPU depends on the actual flow of your search. For example, if field1!=<fixed string pattern> is exceedingly rare in a large dataset, you can include the lookup in an append subsearch, like

<your main search>
| append [search <somesearch> field1 != <fixed string pattern>
    | lookup accounts department as field2 OUTPUT]

then manipulate the combined stream to utilize the lookup output. Because field1 != field2 is inapplicable in the search command, this technique will not save you index search time. However, if you have a situation where index search is cheap but the lookup is exceedingly expensive (it can happen), you can still do it, like

<your main search>
| append [search <same main search>
    | where field1 != field2
    | lookup accounts department as field2 OUTPUT]

Alternatively, if you are ONLY interested in annotating field2 for rows where field1 != field2, you can use appendpipe (which is very efficient):

<your main search>
| appendpipe [stats count by field1 field2
    | where field1 != field2
    | lookup accounts department as field2 OUTPUT
    | fields - count]

In all cases, you will need to massage these annotations back into your final results. Hope this helps.
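As a rough illustration of why the appendpipe variant is cheap, here is a Python analogue of the idea: deduplicate to the distinct (field1, field2) pairs first, then look up only the mismatched ones instead of every row. ACCOUNTS and the sample rows are hypothetical stand-ins, not data from the thread:

```python
from collections import Counter

# Hypothetical stand-in for the "accounts" lookup.
ACCOUNTS = {"hr": "Human Resources", "it": "Information Technology"}

rows = [
    {"field1": "hr", "field2": "hr"},
    {"field1": "hr", "field2": "it"},
    {"field1": "it", "field2": "hr"},
    {"field1": "hr", "field2": "it"},  # duplicate pair: looked up once, not twice
]

# ~ stats count by field1 field2  (collapse rows to distinct pairs)
pairs = Counter((r["field1"], r["field2"]) for r in rows)

# ~ where field1 != field2 | lookup accounts department as field2 OUTPUT
annotations = [
    {"field1": f1, "field2": f2, "department": ACCOUNTS.get(f2)}
    for (f1, f2) in pairs
    if f1 != f2
]
print(annotations)
```

Four rows collapse to two annotated pairs, so the expensive lookup runs twice instead of four times; the annotations would then be joined back onto the main results.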
<drilldown>
  <condition field="Name">
    <link target="_blank">| inputlookup myfile.csv</link>
  </condition>
  <condition field="Organization">
    <link target="_blank">www.Splunk.com</link>
  </condition>
</drilldown>

If I select Google in the Name field, it gets redirected to www.splunk.com only.
Here are some SPL queries from a knowledge object definition term search dashboard I have in my environment. I've been thinking about putting it and some other admin-centric dashboards I've created into an app and adding it to Splunkbase. Maybe I should get on that. Replace the {your_term} parts with your lookup.

Saved Searches

| rest splunk_server=* /servicesNS/-/-/saved/searches add_orphan_field=yes
| rename eai:acl.app as app, eai:acl.owner as owner, eai:acl.sharing as sharing, dispatch.* as *
| eval has_term=if(match(search,"{your_term}") OR match(title,"{your_term}") OR match(owner,"{your_term}"), 1, 0)
| search has_term=1
| fields splunk_server, app, owner, sharing, disabled, is_scheduled, cron_schedule, earliest_time, latest_time, title, search
| sort splunk_server, title

Views

| rest splunk_server=* /servicesNS/-/-/data/ui/views
| rename eai:acl.app as app, eai:data as data, eai:acl.owner as owner, eai:acl.sharing as sharing
| eval has_term=if(match(data,"{your_term}") OR match(title,"{your_term}") OR match(label,"{your_term}") OR match(owner,"{your_term}"), 1, 0)
| search has_term=1
| fields splunk_server, app, owner, sharing, title, label, data
| sort splunk_server, title

Data Models

| rest splunk_server=* /servicesNS/-/-/data/models
| rename eai:acl.app as app, eai:data as data, eai:acl.owner as owner, eai:acl.sharing as sharing
| eval has_term=if(match(data,"{your_term}") OR match(title,"{your_term}") OR match(owner,"{your_term}"), 1, 0)
| search has_term=1
| fields splunk_server, app, owner, sharing, title, data
| sort splunk_server, title

Fields

| rest splunk_server=* /services/data/props/extractions
| rename eai:acl.app as app, eai:acl.owner as owner, eai:acl.sharing as sharing
| eval has_term=if(match(title,"{your_term}") OR match(attribute,"{your_term}") OR match(value,"{your_term}") OR match(owner,"{your_term}"), 1, 0)
| eval type="props"
| search has_term=1
| append [
    | rest splunk_server=* /services/data/transforms/extractions
    | rename eai:acl.app as app, eai:acl.owner as owner, eai:acl.sharing as sharing
    | eval has_term=if(match(title,"{your_term}") OR match(REGEX,"{your_term}") OR match(SOURCE_KEY,"{your_term}") OR match(owner,"{your_term}"), 1, 0)
    | search has_term=1
    | eval type="transforms"
    | fields splunk_server, app, owner, sharing, title, REGEX, SOURCE_KEY ]
| append [
    | rest splunk_server=* /services/data/props/calcfields
    | rename eai:acl.app as app, eai:acl.owner as owner, field.name as field_name, eai:acl.sharing as sharing
    | eval has_term=if(match(title,"{your_term}") OR match(attribute,"{your_term}") OR match(value,"{your_term}") OR match(field_name,"{your_term}") OR match(owner,"{your_term}"), 1, 0)
    | search has_term=1
    | eval type="calcfields"
    | fields splunk_server, app, owner, sharing, title, type, attribute, value, field_name ]
| append [
    | rest splunk_server=* /services/data/props/fieldaliases
    | rename eai:acl.app as app, eai:acl.owner as owner, eai:acl.sharing as sharing
    | eval has_term=if(match(title,"{your_term}") OR match(attribute,"{your_term}") OR match(value,"{your_term}") OR match(owner,"{your_term}"), 1, 0)
    | search has_term=1
    | eval type="fieldalias"
    | fields splunk_server, app, owner, sharing, title, type, attribute, value ]
| rename REGEX as regex, SOURCE_KEY as source_key
| fields splunk_server, app, owner, sharing, title, type, attribute, value, regex, source_key, field_name

Macros

| rest splunk_server=* /servicesNS/-/-/admin/macros
| rename eai:acl.app as app, eai:acl.owner as owner, eai:acl.sharing as sharing
| eval has_term=if(match(definition,"{your_term}") OR match(title,"{your_term}") OR match(owner,"{your_term}"), 1, 0)
| search has_term=1
| fields splunk_server, app, owner, sharing, title, definition
| sort splunk_server, title

Event Types

| rest splunk_server=* /servicesNS/-/-/saved/eventtypes
| rename eai:acl.app as app, eai:acl.owner as owner, eai:acl.sharing as sharing
| eval has_term=if(match(search,"{your_term}") OR match(title,"{your_term}") OR match(owner,"{your_term}"), 1, 0)
| search has_term=1
| fields splunk_server, app, owner, sharing, title, search
| sort splunk_server, title

Tags

| rest splunk_server=* /servicesNS/-/-/admin/tags
| rename eai:acl.app as app, eai:acl.owner as owner, eai:acl.sharing as sharing
| eval has_term=if(match(field_name_value,"{your_term}") OR match(title,"{your_term}") OR match(tag_name,"{your_term}") OR match(owner,"{your_term}"), 1, 0)
| search has_term=1
| fields splunk_server, app, owner, sharing, tag_name, field_name_value
| sort splunk_server, tag_name

Lookups

| rest splunk_server=* /services/data/transforms/lookups
| rename eai:acl.app as app, eai:acl.owner as owner, eai:acl.sharing as sharing
| append [
    | rest splunk_server=* /servicesNS/-/-/data/lookup-table-files
    | rename eai:acl.app as app, eai:acl.owner as owner, eai:acl.sharing as sharing
    | eval filename=title
    | eval type="file" ]
| eval filename=if(isnull(filename), title, filename)
| stats values(title) as title, values(fields_array) as fields_array by splunk_server, app, owner, sharing, filename, type
| eval filename=if(type!="file" AND type!="geo", "", filename)
| eval has_term=if(match(filename,"{your_term}") OR match(title,"{your_term}") OR match(fields_array,"{your_term}") OR match(owner,"{your_term}"), 1, 0)
| search has_term=1
| fields splunk_server, app, owner, sharing, filename, title, fields_array, type
| sort splunk_server, filename
You need to accurately describe your raw data (anonymize as needed) and any relevant characteristics. (As a general rule, always describe data when asking data analytics questions.) Which field name gives you "account"? Based on your description, "account" is NOT the top-level path in the JSON data; additionally, this path to "account" is inside an array according to your partial reveal. Is it second level? Third level?

Suppose your top-level path is "events", i.e., the raw data looks like

{"events" : [{xxxx},{"GUID":"5561859B8D624FFC8FF0B87219060DC5"}]}
{"events" : [{xxxx},{"GUID":"5561859B8D624FFC8FF0B87219060DC6","account":"verified"}]}
{"events" : [{xxxx},{"GUID":"5561859B8D624FFC8FF0B87219060DC7","account":"unverified"}]}
{"events" : [{xxxx},{"GUID":"5561859B8D624FFC8FF0B87219060DC8","account":"verified"}]}
{"events" : [{xxxx},{"GUID":"5561859B8D624FFC8FF0B87219060DC9"}]}

Splunk would have given you flattened field names like events{}.GUID, events{}.account, etc. If you know that every array events{} contains only a single events{}.account, you can just substitute "account" in the solutions with events{}.account. But as an array, events{}.account could be multivalued. In that case, you need to make the values single-valued first, i.e.,

| spath path=events{} ``` events{} should be the actual path of that array ```
| mvexpand events{}
| spath input=events{}
| eval account=if(account=="verified","verified","unverified")
| stats count by account

Alternatively, use fillnull:

| spath path=events{} ``` events{} should be the actual path of that array ```
| mvexpand events{}
| spath input=events{}
| fillnull value="unverified" account
| stats count by account

If "account" is not second level, or it is not really inside an array as your original description implied, you need to give an accurate description of your data.
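For intuition, here is a rough Python analogue of that spath + mvexpand + fillnull pipeline applied to the sample data above. This is a sketch of the logic, not Splunk's implementation; the {xxxx} placeholder is replaced by an arbitrary sibling object, and the GUIDs are shortened:

```python
import json
from collections import Counter

# Five raw events shaped like the sample data above.
raw = [
    '{"events": [{"x": 1}, {"GUID": "DC5"}]}',
    '{"events": [{"x": 1}, {"GUID": "DC6", "account": "verified"}]}',
    '{"events": [{"x": 1}, {"GUID": "DC7", "account": "unverified"}]}',
    '{"events": [{"x": 1}, {"GUID": "DC8", "account": "verified"}]}',
    '{"events": [{"x": 1}, {"GUID": "DC9"}]}',
]

counts = Counter()
for line in raw:
    # ~ spath path=events{} | mvexpand events{} : one row per array element
    for item in json.loads(line)["events"]:
        if "GUID" not in item:
            continue  # skip the placeholder sibling objects
        # ~ fillnull value="unverified" account : missing -> "unverified"
        counts[item.get("account", "unverified")] += 1

print(dict(counts))  # verified: 2, unverified: 3
```

The two events with no "account" key land in the "unverified" bucket, which is exactly what fillnull buys you before the final stats count by account.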
Hello, I was wondering if it is possible to locate or search in Splunk whether a specific lookup table is being used in a dashboard, alert, saved search, report, etc. Thank you for your help!
I have a dashboard with three dropdown inputs. The first is Date Range and it has a default value of last 24 hours. The dashboard does the initial search fine, but when I change the date range via the presets in the dropdown, nothing updates. Code for the dropdown:

{
  "type": "input.timerange",
  "options": {
    "token": "dateRange",
    "defaultValue": "-24h@h,now"
  },
  "title": "Date Range"
}
Hi, I wanted to avoid doing a lookup if certain conditions are in place. If that's not possible, I will just have to do the lookup, which returns the data if it finds any. I was just trying to save some CPU and time.
Also, if you want to do this in an ad hoc search, you can use | addinfo to add the info_max_time and info_min_time fields to your data, which give you the ad hoc search time range from the time picker. Editing the above answer would look like this:

index=... Host=HostName "User ID"=*
| addinfo
| stats count by "User ID"
| stats avg(eval(count*86400/(info_max_time - info_min_time)))
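A minimal Python sketch of the arithmetic inside that stats line: the 86400 factor scales a count observed over the search window (epoch seconds between info_min_time and info_max_time) to a per-day average. The sample numbers are made up:

```python
# Scale an event count over [info_min_time, info_max_time] to a per-day rate.
def per_day_average(count, info_min_time, info_max_time):
    return count * 86400 / (info_max_time - info_min_time)

# e.g. 350 events over a 7-day window -> 50 per day
print(per_day_average(350, 0, 7 * 86400))  # 50.0
```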
Add another paren at the end of the stats line. |stats avg(eval(count*86400/($time_tok.latest$ - $time_tok.earliest$)))  
Hello. In monitoring our application's VCT and EURT, we noticed that for all of Q3 the VCT was taking longer than the EURT. Then all of a sudden it switched, and now VCT is less than EURT. It seems to me that VCT should almost always be shorter than EURT. Is this true? Does this sound like a configuration issue that was corrected? If so, should I consider the EURT as the VCT for Q3?
In the drilldown, you have access to $click.value2$, which is the cell value. You then program <condition/> with the exact value.
From my understanding, Indexer Discovery is used on forwarders to send data to Splunk, not on search heads. We don't have it enabled there. The indexers in question are not currently present in the Search Peers list on the search heads under Settings -> Distributed Search -> Search Peers. We were under the impression that the cluster manager manages that list and should take care of all of the items there when servers are decommissioned. We'll definitely try removing them from the list beforehand to see if that makes a difference.
Thank you @ITWhisperer, that worked... BUT!!! It has a defect: when I click Splunk/Google/Facebook, it goes to "www.splunk.com" ONLY, because of <link target="_blank">www.splunk.com</link>. I also have a URL in my lookup; what I'm looking for is that clicking Splunk goes to its respective URL, and the same for the other organizations.

Name     Organization    URL                  Count
Bob      splunk          www.splunk.com       2
Matt     google          www.google.com       15
smith    facebook        www.facebook.com     9
If you're using Indexer Discovery then nothing else should need to be done.  Otherwise, go to each SH and remove the indexer from the Search Peers list (Settings->Distributed search) prior to shutting down the indexer.