All Posts

Hi @cjharmening ... (replying for you and for all other new Splunkers.) Splunk's documentation is very neatly organized (like a library's bookshelves).
1) Please go to https://docs.splunk.com/
2) Then select the appropriate app you are looking for.
3) Under that app's tab, you will find the documentation you are looking for: https://docs.splunk.com/Documentation/MC/Current/SplunkPlaybookAPI
Hope it's helpful. Karma points are appreciated by everyone. If this solves your question, please accept it as the answer. Thanks.
Can anyone provide a link to Splunk Mission Control API documentation?   Thank you 
How much CPU you save depends on the actual flow of your search. For example, if field1!=<fixed string pattern> is exceedingly rare in a large dataset, you can include the lookup in an append subsearch, like

<your main search>
| append [search <somesearch> field1 != <fixed string pattern>
  | lookup accounts department as field2 OUTPUT]

Then manipulate the combined stream to utilize the lookup output. Because field1 != field2 cannot be expressed in the search command, this technique will not save you index search time. However, if you have a situation where the index search is cheap but the lookup is exceedingly expensive (it can happen), you can still do it, like

<your main search>
| append [search <same main search>
  | where field1 != field2
  | lookup accounts department as field2 OUTPUT]

Alternatively, if you are ONLY interested in annotating field2 where field1 != field2, you can use appendpipe (which is very efficient):

<your main search>
| appendpipe [stats count by field1 field2
  | where field1 != field2
  | lookup accounts department as field2 OUTPUT
  | fields - count]

In all cases, you will need to massage these annotations back into your final results. Hope this helps.
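The appendpipe variant is efficient because the expensive lookup runs once per distinct mismatched (field1, field2) pair rather than once per event. As a rough illustration of that saving outside Splunk, here is a minimal Python sketch; the lookup_department function, its table, and the field names are all hypothetical stand-ins:

```python
# Hypothetical expensive lookup: maps an account name to a department.
# In the SPL above this role is played by the "accounts" lookup.
def lookup_department(account):
    table = {"alice": "finance", "bob": "engineering"}
    return table.get(account, "unknown")

def annotate_mismatches(events):
    """Run the expensive lookup once per distinct mismatched
    (field1, field2) pair, not once per event -- the same saving
    that `appendpipe [stats count by field1 field2 | ...]` gives."""
    pairs = {(e["field1"], e["field2"]) for e in events
             if e["field1"] != e["field2"]}           # only mismatched pairs
    cache = {p: lookup_department(p[1]) for p in pairs}  # one lookup per pair
    for e in events:
        key = (e["field1"], e["field2"])
        if key in cache:
            e["department"] = cache[key]
    return events

events = [
    {"field1": "alice", "field2": "alice"},   # match: no lookup needed
    {"field1": "alice", "field2": "bob"},     # mismatch: annotated
    {"field1": "carol", "field2": "bob"},     # mismatch: second distinct pair
]
result = annotate_mismatches(events)
```

The dedupe-then-annotate shape is the point: however many events share a pair, the lookup cost is bounded by the number of distinct mismatched pairs.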
<drilldown>
  <condition field="Name">
    <link target="_blank">| inputlookup myfile.csv</link>
  </condition>
  <condition field="Organization">
    <link target="_blank">www.Splunk.com</link>
  </condition>
</drilldown>

If I select Google in the Name field, it gets redirected to www.splunk.com only.
Here are some SPL queries from a knowledge object definition term search dashboard I have in my environment. I've been thinking about putting it and some other admin-centric dashboards I've created into an app and adding it to Splunkbase. Maybe I should get on that. Replace the {your_term} parts with your lookup.

Saved Search

| rest splunk_server=* /servicesNS/-/-/saved/searches add_orphan_field=yes
| rename eai:acl.app as app, eai:acl.owner as owner, eai:acl.sharing as sharing, dispatch.* as *
| eval has_term=if(match(search,"{your_term}") OR match(title,"{your_term}") OR match(owner,"{your_term}"), 1, 0)
| where has_term=1
| fields splunk_server, app, owner, sharing, disabled, is_scheduled, cron_schedule, earliest_time, latest_time, title, search
| sort splunk_server, title

Views

| rest splunk_server=* /servicesNS/-/-/data/ui/views
| rename eai:acl.app as app, eai:data as data, eai:acl.owner as owner, eai:acl.sharing as sharing
| eval has_term=if(match(data,"{your_term}") OR match(title,"{your_term}") OR match(label,"{your_term}") OR match(owner,"{your_term}"), 1, 0)
| search has_term=1
| fields splunk_server, app, owner, sharing, title, label, data
| sort splunk_server, title

Data Models

| rest splunk_server=* /servicesNS/-/-/data/models
| rename eai:acl.app as app, eai:data as data, eai:acl.owner as owner, eai:acl.sharing as sharing
| eval has_term=if(match(data,"{your_term}") OR match(title,"{your_term}") OR match(owner,"{your_term}"), 1, 0)
| search has_term=1
| fields splunk_server, app, owner, sharing, title, data
| sort splunk_server, title

Fields

| rest splunk_server=* /services/data/props/extractions
| rename eai:acl.app as app, eai:acl.owner as owner, eai:acl.sharing as sharing
| eval has_term=if(match(title,"{your_term}") OR match(attribute,"{your_term}") OR match(value,"{your_term}") OR match(owner,"{your_term}"), 1, 0)
| eval type="props"
| search has_term=1
| append [
  | rest splunk_server=* /services/data/transforms/extractions
  | rename eai:acl.app as app, eai:acl.owner as owner, eai:acl.sharing as sharing
  | eval has_term=if(match(title,"{your_term}") OR match(REGEX,"{your_term}") OR match(SOURCE_KEY,"{your_term}") OR match(owner,"{your_term}"), 1, 0)
  | search has_term=1
  | eval type="transforms"
  | fields splunk_server, app, owner, sharing, title, REGEX, SOURCE_KEY ]
| append [
  | rest splunk_server=* /services/data/props/calcfields
  | rename eai:acl.app as app, eai:acl.owner as owner, field.name as field_name, eai:acl.sharing as sharing
  | eval has_term=if(match(title,"{your_term}") OR match(attribute,"{your_term}") OR match(value,"{your_term}") OR match(field_name,"{your_term}") OR match(owner,"{your_term}"), 1, 0)
  | search has_term=1
  | eval type="calcfields"
  | fields splunk_server, app, owner, sharing, title, type, attribute, value, field_name ]
| append [
  | rest splunk_server=* /services/data/props/fieldaliases
  | rename eai:acl.app as app, eai:acl.owner as owner, eai:acl.sharing as sharing
  | eval has_term=if(match(title,"{your_term}") OR match(attribute,"{your_term}") OR match(value,"{your_term}") OR match(owner,"{your_term}"), 1, 0)
  | search has_term=1
  | eval type="fieldalias"
  | fields splunk_server, app, owner, sharing, title, type, attribute, value ]
| rename REGEX as regex, SOURCE_KEY as source_key
| fields splunk_server, app, owner, sharing, title, type, attribute, value, regex, source_key, field_name

Macros

| rest splunk_server=* /servicesNS/-/-/admin/macros
| rename eai:acl.app as app, eai:acl.owner as owner, eai:acl.sharing as sharing
| eval has_term=if(match(definition,"{your_term}") OR match(title,"{your_term}") OR match(owner,"{your_term}"), 1, 0)
| search has_term=1
| fields splunk_server, app, owner, sharing, title, definition
| sort splunk_server, title

Event Types

| rest splunk_server=* /servicesNS/-/-/saved/eventtypes
| rename eai:acl.app as app, eai:acl.owner as owner, eai:acl.sharing as sharing
| eval has_term=if(match(search,"{your_term}") OR match(title,"{your_term}") OR match(owner,"{your_term}"), 1, 0)
| search has_term=1
| fields splunk_server, app, owner, sharing, title, search
| sort splunk_server, title

Tags

| rest splunk_server=* /servicesNS/-/-/admin/tags
| rename eai:acl.app as app, eai:acl.owner as owner, eai:acl.sharing as sharing
| eval has_term=if(match(field_name_value,"{your_term}") OR match(title,"{your_term}") OR match(tag_name,"{your_term}") OR match(owner,"{your_term}"), 1, 0)
| search has_term=1
| fields splunk_server, app, owner, sharing, tag_name, field_name_value
| sort splunk_server, tag_name

Lookups

| rest splunk_server=* /services/data/transforms/lookups
| rename eai:acl.app as app, eai:acl.owner as owner, eai:acl.sharing as sharing
| append [
  | rest splunk_server=* /servicesNS/-/-/data/lookup-table-files
  | rename eai:acl.app as app, eai:acl.owner as owner, eai:acl.sharing as sharing
  | eval filename=title
  | eval type="file" ]
| eval filename=if(isnull(filename), title, filename)
| stats values(title) as title, values(fields_array) as fields_array by splunk_server, app, owner, sharing, filename, type
| eval filename=if(type!="file" AND type!="geo", "", filename)
| eval has_term=if(match(filename,"{your_term}") OR match(title,"{your_term}") OR match(fields_array,"{your_term}") OR match(owner,"{your_term}"), 1, 0)
| search has_term=1
| fields splunk_server, app, owner, sharing, filename, title, fields_array, type
| sort splunk_server, filename
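All of these queries share the same has_term pattern: regex-match the search term against several fields of each knowledge object and keep rows where any field matched. As a rough sketch of that filter outside SPL (the row dictionaries and field names here are illustrative, not real REST output):

```python
import re

def has_term(row, fields, term):
    """Return True if `term` (a regex) matches any of the named fields,
    mirroring the eval has_term=if(match(...) OR match(...), 1, 0) pattern."""
    pattern = re.compile(term)
    return any(pattern.search(str(row.get(f, ""))) for f in fields)

rows = [
    {"title": "my_lookup_report", "search": "index=web | stats count"},
    {"title": "other_report", "search": "| inputlookup my_lookup.csv"},
    {"title": "unrelated", "search": "index=main"},
]
matches = [r for r in rows if has_term(r, ["title", "search"], "my_lookup")]
```

Because match() is a regex match, remember to escape the term if your lookup name contains regex metacharacters.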
You need to accurately describe your raw data (anonymize as needed) and any relevant characteristics. (As a general rule, always describe data when asking data analytics questions.) Which field name gives you "account"? Based on your description, "account" is NOT the top-level path in the JSON data; additionally, this path to "account" is inside an array according to your partial reveal. Is it second level? Third level?

Suppose your top-level path is "events", i.e., the raw data looks like

{"events" : [{xxxx},{"GUID":"5561859B8D624FFC8FF0B87219060DC5"}]}
{"events" : [{xxxx},{"GUID":"5561859B8D624FFC8FF0B87219060DC6","account":"verified"}]}
{"events" : [{xxxx},{"GUID":"5561859B8D624FFC8FF0B87219060DC7","account":"unverified"}]}
{"events" : [{xxxx},{"GUID":"5561859B8D624FFC8FF0B87219060DC8","account":"verified"}]}
{"events" : [{xxxx},{"GUID":"5561859B8D624FFC8FF0B87219060DC9"}]}

Splunk would have given you flattened field names like events{}.GUID, events{}.account, etc. If you know that every array events{} contains only a single events{}.account, you can just substitute "account" in solutions with events{}.account. But as an array, events{}.account could be multivalued. In that case, you need to make them single-valued first, i.e.,

| spath path=events{} ``` events{} should be the actual path of that array ```
| mvexpand events{}
| spath input=events{}
| eval account=if(account=="verified","verified","unverified")
| stats count by account

Alternatively, use fillnull:

| spath path=events{} ``` events{} should be the actual path of that array ```
| mvexpand events{}
| spath input=events{}
| fillnull value="unverified" account
| stats count by account

If "account" is not second level, or it is not really inside an array as your original description implied, you need to give an accurate description of your data.
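The spath/mvexpand/fillnull pipeline above expands each events{} array into one row per element and defaults a missing "account" to "unverified" before counting. The same logic in plain Python over the sample records, as a rough sketch (the {"x": 1} placeholder stands in for the opaque {xxxx} element):

```python
import json
from collections import Counter

raw = [
    '{"events": [{"x": 1}, {"GUID": "DC5"}]}',
    '{"events": [{"x": 1}, {"GUID": "DC6", "account": "verified"}]}',
    '{"events": [{"x": 1}, {"GUID": "DC7", "account": "unverified"}]}',
    '{"events": [{"x": 1}, {"GUID": "DC8", "account": "verified"}]}',
    '{"events": [{"x": 1}, {"GUID": "DC9"}]}',
]

counts = Counter()
for line in raw:
    for event in json.loads(line)["events"]:  # like mvexpand: one row per element
        if "GUID" in event:                   # only the GUID-bearing elements
            counts[event.get("account", "unverified")] += 1  # fillnull default
```

On the five sample records this yields 2 verified and 3 unverified, which is what the fillnull variant of the SPL produces.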
Hello, I was wondering if it is possible to locate or search in Splunk if a specific lookup table is being used in a dashboard, alert, saved search, report etc. Thank you for your help!
I have a dashboard with three dropdown inputs. The first is Date Range, and it has a default value of last 24 hours. The dashboard does the initial search fine, but when I change the date range via the presets in the dropdown, nothing updates. Code for the dropdown:

{
  "type": "input.timerange",
  "options": {
    "token": "dateRange",
    "defaultValue": "-24h@h,now"
  },
  "title": "Date Range"
}
Hi, I wanted to avoid doing a lookup if certain conditions are in place. If that's not possible, I will just have to do the lookup, which returns the data if it finds any. I was trying to save some CPU and time.
Also, if you want to do this in an ad hoc search, you can use | addinfo to add the info_max_time and info_min_time fields to your data, which give you the ad hoc search time range from the time picker. Note that stats drops fields it doesn't aggregate, so carry them through. Editing the above answer would look like this:

index=... Host=HostName "User ID"=*
| addinfo
| stats count, max(info_max_time) as info_max_time, min(info_min_time) as info_min_time by "User ID"
| stats avg(eval(count*86400/(info_max_time - info_min_time)))
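The count*86400/(info_max_time - info_min_time) expression converts a per-user event count into an average events-per-day rate over the picked time range (86400 seconds per day). A quick Python sketch of the arithmetic, with made-up counts and epoch times:

```python
SECONDS_PER_DAY = 86400

def events_per_day(count, min_time, max_time):
    """Scale a raw event count to a daily rate over the search window."""
    return count * SECONDS_PER_DAY / (max_time - min_time)

# Hypothetical example: a 7-day window (epoch seconds) containing 140 events.
min_time = 1_700_000_000
max_time = min_time + 7 * SECONDS_PER_DAY
rate = events_per_day(140, min_time, max_time)   # 140 events / 7 days = 20.0
```

The same scaling works for any window the time picker selects, since addinfo always reports the window bounds in epoch seconds.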
Add another paren at the end of the stats line. |stats avg(eval(count*86400/($time_tok.latest$ - $time_tok.earliest$)))  
Hello. In monitoring our application's VCT and EURT, we noticed that for all of Q3 the VCT was taking longer than the EURT. Then all of a sudden it switched, and now VCT is less than EURT. It seems to me that VCT should almost always be shorter than EURT. Is this true? Does this sound like a configuration issue that was corrected? If so, should I consider the EURT as the VCT for Q3?
In the drilldown, you have access to $click.value2$, which is the cell value. You then program <condition/> with the exact value.
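Conditioning on the clicked cell value amounts to a simple dispatch from the clicked Name (or Organization) to that row's URL. As an illustration of the logic outside Simple XML, here is a small Python sketch; the rows and the drilldown_url helper are hypothetical:

```python
# Each dashboard table row carries its own URL; the drilldown should resolve
# the clicked row's URL rather than hardcoding one target.  Hypothetical rows:
rows = [
    {"Name": "Bob",   "Organization": "splunk",   "URL": "www.splunk.com"},
    {"Name": "Matt",  "Organization": "google",   "URL": "www.google.com"},
    {"Name": "smith", "Organization": "facebook", "URL": "www.facebook.com"},
]

def drilldown_url(click_value):
    """Resolve a clicked Name or Organization cell to that row's URL."""
    for row in rows:
        if click_value in (row["Name"], row["Organization"]):
            return row["URL"]
    return None   # no condition matched
```

In the dashboard itself, the equivalent is one <condition/> per expected value (or a row-level URL token), rather than a single fixed <link>.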
From my understanding, Indexer Discovery is used on Forwarders to send data to Splunk, not on Search Heads, and we don't have it enabled there. The indexers in question are not currently present in the Search Peers list on the Search Heads under Settings -> Distributed Search -> Search Peers. We were under the impression that the cluster manager manages that list and should take care of all of the items there when servers are decommissioned. We'll definitely try removing them from the list beforehand to see if that makes a difference.
Thank you @ITWhisperer, that worked... BUT!!! It has a defect: when I click Splunk/Google/Facebook, it goes to "www.splunk.com" ONLY.

<link target="_blank">www.splunk.com</link>

I have a URL in my lookup as well. What I'm looking for is: if I click splunk, it should go to the respective URL, and the same for the other organizations.

Name     Organization    URL                 Count
Bob      splunk          www.splunk.com      2
Matt     google          www.google.com      15
smith    facebook        www.facebook.com    9
If you're using Indexer Discovery then nothing else should need to be done.  Otherwise, go to each SH and remove the indexer from the Search Peers list (Settings->Distributed search) prior to shutting down the indexer.
An example is attached.  The first line is what I want, but I get a whole bunch of _time lines and I only want the summed up line (previous month total) shown above.  I need to get these numbers for the previous month for our pricing application.  I actually got most of this from the Charge Back application, but have been fiddling with it to get what I need out of it.
We are in the process of a full hardware upgrade of all our indexers in our distributed environment. We have three standalone search heads connected to a cluster of many indexers. In the process, we are proceeding one at a time:

1. Loading up a new indexer
2. Integrating it into the cluster
3. Taking an old indexer offline, enforcing counts

When the decommissioning process finishes and the old indexers are gracefully shut down, we get an alert on our search heads in the Splunk Health Report: "The search head lost connection to the following peers: <decommissioned peer>. If there are unstable peers, confirm that the timeout (connectionTimeout and authTokenConnectionTimeout) settings in distsearch.conf are at appropriate values." I cannot figure out why we are seeing this alert. My conclusion is that we must be missing a step somewhere. To decommission a server, we do the following:

1. On the indexer: splunk offline enforce-counts
2. On the cluster master: splunk remove cluster-peers <GUID>
3. On the indexer: Completely uninstall Splunk.
4. On the cluster master: Rebalance indexes.

We have also tried reloading the health.conf configuration by running '| rest /services/configs/conf-health.conf/_reload' on the search heads, to no effect. We cannot figure out where the health report is retaining this old data from, and the _internal logs clearly show that the moment of the GracefulShutdown transition on the Cluster Master is when the PeriodicHealthReporter component on the Search Heads begins to alert. The indexers in question are no longer listed as search peers on the search heads, and they're not listed as search peers on the cluster master either. The monitoring console looks fine. What could we be missing?
Hello all, we want to enrich events as they become notables in ES, before they are sent on to Mission Control. The thought being: enrich the event via some sort of search (all the data will be in Splunk already) to add DNS, DHCP, threat intel, and some endpoint data. Is it possible to have a search run on the notable index to gather information from other indexes and add it to the notable event? If so, I would love to discuss.
You cannot do a conditional lookup, but you could do the lookup across all the data and then only conditionally display the data that was looked up.