All Posts

To add some clarity, since the accepted answer is still quite vague or confusing: the easiest way is to relate these field names to _time and _indextime.
recentTime = _indextime = the last time this host was actually heard from by the index(es) named in the metadata command; in other words, the last time it wrote logs to an index.
lastTime = _time = the timestamp of the events from that host in the index(es) named in the metadata command; in other words, the latest event timestamp in the set of events covered by the search.
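To make the two fields concrete, here is a minimal sketch of the metadata search they come from (the index name is a placeholder; adjust to your environment):

```spl
| metadata type=hosts index=main
| eval lastTime=strftime(lastTime, "%F %T"), recentTime=strftime(recentTime, "%F %T")
| table host firstTime lastTime recentTime totalCount
```

If recentTime is much newer than lastTime, the host is still sending data but its events carry older timestamps (e.g. delayed forwarding or timestamp extraction issues).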
<form version="1.1" theme="light">
  <label>Answers - Classic - Viz toggles</label>
  <fieldset submitButton="false">
    <input type="radio" token="tok_data_labels">
      <label>Data Labels</label>
      <choice value="none">Off</choice>
      <choice value="all">On</choice>
      <choice value="minmax">Min/Max</choice>
      <default>none</default>
    </input>
  </fieldset>
  <row>
    <panel>
      <chart>
        <title>Viz Radio Toggle</title>
        <search>
          <query>index=_internal | timechart span=5m limit=5 useother=t count by sourcetype</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
        <option name="charting.chart">line</option>
        <option name="charting.chart.showDataLabels">$tok_data_labels$</option>
        <option name="charting.drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </chart>
    </panel>
  </row>
</form>
Thank you very much for your answer!
Do you mean you want to monitor your Splunk infrastructure usage, or do you ingest some data regarding "external" hosts? For the former, as @dural_yyz mentioned, check the Monitoring Console. You can also gather metrics from the _metrics index. For the latter, it depends on your environment. Splunk "just" happily takes the data you throw at it and can manipulate and search it, but it's up to your architects and admins to tell you where they set up the data and what it consists of.
It's not clear what is being asked. What we know about how the coldToFrozenScript is processed is documented in indexes.conf.spec. If there's something more specific you want to know, then please revise the question.
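As a rough illustration of what such a script typically does (a sketch only: the archive path and the decision to copy just the rawdata journal are assumptions, not a prescribed implementation). Splunk simply executes the configured script with the bucket path as its single argument when a cold bucket rolls to frozen, and deletes the bucket after the script exits successfully:

```python
import shutil
import sys
from pathlib import Path

# Hypothetical archive location -- adjust for your environment.
ARCHIVE_DIR = Path("/opt/splunk_frozen")

def archive_bucket(bucket_path: str, archive_dir: Path = ARCHIVE_DIR) -> Path:
    """Copy a bucket's rawdata journal to an archive before Splunk deletes it.

    Splunk invokes the script as:  coldToFrozen.py <path_to_bucket>
    """
    bucket = Path(bucket_path)
    rawdata = bucket / "rawdata"
    dest = archive_dir / bucket.name
    dest.mkdir(parents=True, exist_ok=True)
    # Keep only the compressed journal; indexes can be rebuilt from it later.
    for journal in rawdata.glob("journal.*"):
        shutil.copy2(journal, dest / journal.name)
    return dest

if __name__ == "__main__":
    if len(sys.argv) != 2:
        sys.exit("usage: coldToFrozen.py <bucket_path>")
    archive_bucket(sys.argv[1])
```

A non-zero exit status tells Splunk the archiving failed, and it will retry the bucket later instead of deleting it.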
What do you mean by "doesn't work"? It doesn't filter out the values? That's because there is a mismatch between field names. A subsearch (unless its results consist solely of fields named "search" or "query", or you used the format command explicitly) is rendered as a set of conditions based on the names of the resulting fields. So your subsearch in example 2 is rendered as ((unique_id="some_value") OR (unique_id="another+value") OR ... ), whereas your subsearch in example 3 is rendered similarly but with the field called "ignore". You're not creating a field called "ignore" anywhere earlier in the search, so you have nothing to filter on. BTW, are you aware that this is a relatively inefficient way to search? (Inclusion is better than exclusion!)
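Two quick ways to deal with this (lookup name is hypothetical): rename the subsearch field so it matches a field that actually exists in the outer search, or inspect what the subsearch expands to using the format command:

```spl
| search NOT [| inputlookup my_blacklist.csv | rename ignore AS unique_id | table unique_id]
```

```spl
| makeresults
| append [| inputlookup my_blacklist.csv | table ignore | format]
```

The second search shows the literal condition string the subsearch would produce, which makes a field-name mismatch immediately visible.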
According to the documentation, you may upgrade 9.1 directly to 9.3.
https://docs.splunk.com/Documentation/Splunk/9.3.2/Installation/HowtoupgradeSplunk
The field (not "column") returned by the subsearch ("unique_id" in Version 1 and "ignore" in Version 2) must exist in the main search.
How does Splunk call the coldToFrozen.py script automatically once the script is placed in /opt/splunk/bin and indexes.conf is configured with the needed arguments? Once the cold db is full, how does this script get invoked by Splunk?
https://docs.splunk.com/Documentation/Splunk/Latest/Admin/Propsconf#Field_extraction_configuration

SEDCMD-<class> = <sed script>
* Only used at index time.
* Commonly used to anonymize incoming data at index time, such as credit card or social security numbers. For more information, search the online documentation for "anonymize data."
* Used to specify a sed script which Splunk software applies to the _raw field.
* A sed script is a space-separated list of sed commands. Currently the following subset of sed commands is supported:
  * replace (s) and character substitution (y).
* Syntax:
  * replace - s/regex/replacement/flags
    * regex is a perl regular expression (optionally containing capturing groups).
    * replacement is a string to replace the regex match. Use \n for back references, where "n" is a single digit.
    * flags can be either: g to replace all matches, or a number to replace a specified match.
  * substitute - y/string1/string2/
    * substitutes the string1[i] with string2[i]
* No default.
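As a hedged example of the replace syntax above (the sourcetype name and the exact pattern are hypothetical, not from your environment), this props.conf stanza masks all but the last four digits of a 16-digit card-like number at index time:

```
[my_sourcetype]
SEDCMD-mask_cc = s/\d{12}(\d{4})/XXXXXXXXXXXX\1/g
```

Note that SEDCMD modifies _raw before indexing, so the original digits are never written to disk; it must be deployed on the indexing tier (or heavy forwarder), not the search head.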
Dear experts,
Basic idea of what I am trying to do: the results of a search should be filtered so that only data points are displayed which are not part of a "blacklist" maintained as a lookup table. The challenging thing is that 3 columns have to be taken into account for filtering at the same time. After a lot of trials, I ended up creating a key from the 3 columns (which is unique) and then filtering on the key. It is working, I just don't understand why :-(.
Question: Does anybody have an idea why the Version 1 filter works, and why the Version 2 filter fails?
Question: What needs to be changed to get Version 2 to work as well?

index="pm-azlm_internal_prod_events" sourcetype="azlmj"
| strcat ocp "_" fr "_" el unique_id
| table _time ocp fr el unique_id d_1
| search d_1="DEF ges AZ*"
``` VERSION 1: the working one ```
``` As long as the subsearch returns a table with the column unique_id, ```
``` which is exactly the name of the column I want to filter on, all works great. ```
| search NOT [| inputlookup pm-azlm-aufschneidmelder-j | strcat ocp "_" fr "_" sec unique_id | table unique_id]
``` VERSION 2: NOT working ```
``` As soon as I change the name of the column in the subsearch, the filter won't work anymore. ```
| search NOT [| inputlookup pm-azlm-aufschneidmelder-j | strcat ocp "_" fr "_" sec ignore | table ignore]
| timechart span=1d limit=0 count by unique_id

And the final question: is there a way to do such filtering without going through the key creation?
Thank you in advance.
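On the final question: one common way to filter against a multi-column blacklist without building a key is the lookup command, which can match on several fields at once. A sketch using the field and lookup names from the question (the output field "blacklist_match" is hypothetical, and the lookup's third column "sec" is matched against the event field "el"):

```spl
index="pm-azlm_internal_prod_events" sourcetype="azlmj"
| lookup pm-azlm-aufschneidmelder-j ocp, fr, sec AS el OUTPUT sec AS blacklist_match
| where isnull(blacklist_match)
| timechart span=1d limit=0 count by unique_id
```

Events that match all three blacklist columns get blacklist_match populated and are dropped by the where clause; everything else passes through.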
Hello, I would like to confirm if it is possible to upgrade Splunk directly from version 9.1.1 to 9.3 on Linux, without going through version 9.2. Could you please clarify if this is supported and if there are any specific considerations for this process? Best regards,
Start with the DMC (Distributed Monitoring Console) to review the license usage broken down by index. This will show you the daily ingest records for the last 30 days broken down by index. This is only a starting point: depending on how your environment was set up, you may have very specific indexes, or things may have been aggregated into only a few indexes. From there you can start deciding what questions come next.
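The same numbers the console shows can also be pulled directly from the license manager's logs with a search like this (a sketch; it assumes the _internal index is searchable from where you run it):

```spl
index=_internal source=*license_usage.log* type=Usage earliest=-30d@d
| eval GB=b/1024/1024/1024
| timechart span=1d sum(GB) AS daily_ingest_GB by idx
```

Here b is the measured bytes per usage record and idx is the target index, so the result is daily ingest in GB per index over the last 30 days.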
How to integrate Microsoft Intune with Splunk using the connector downloaded from Splunkbase?
Hi @gcusello,
In Splunk we monitor devices or hosts and we get logs from them. What I need to know is how much data volume (in GB) has been used by those hosts or log sources. Where does Splunk store such data in the case of an on-premises instance?
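A sketch of a per-host ingest breakdown from the license usage logs (h is the reporting host and b is bytes in license_usage.log; note that with many distinct hosts Splunk may squash the per-host breakdown, in which case the h field is absent from some records):

```spl
index=_internal source=*license_usage.log* type=Usage earliest=-30d@d
| stats sum(eval(b/1024/1024/1024)) AS GB by h
| sort - GB
```

This answers "how many GB did each host send" rather than how much disk the indexed data occupies, which is a different (post-compression) number visible in the Monitoring Console's indexing views.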
Hi @arjun , what's your requirement: to know the volume for each customer? or what else? Could you better describe your environment and your situation? E.g.: have you a multi-tenant environment or not? Ciao. Giuseppe
How can we locate usage-related data in Splunk? I have an on-premises Splunk instance and am looking for usage and billing related data grouped by day. I am not able to locate the data in any index.
How does Splunk calculate the health score of a service based on KPIs? Does it use an AI model or a weighted formula for the health score?
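In ITSI the base health score is not an AI model: it is derived from each KPI's current severity, weighted by the importance value (0-11) assigned to that KPI in the service definition. As a rough illustration of the weighted-average idea only (the severity and weight values below are made up, and ITSI's actual scoring adds further rules, e.g. an importance-11 KPI caps the score):

```spl
| makeresults
| eval kpi1_sev=20, kpi1_weight=10, kpi2_sev=80, kpi2_weight=5
| eval health_score=round((kpi1_sev*kpi1_weight + kpi2_sev*kpi2_weight) / (kpi1_weight + kpi2_weight), 1)
| table health_score
```

Predictive health scores are a separate, optional ITSI feature that does use ML models; the standard service health score is rule/weight based.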
Is this a props.conf/transforms.conf setting or a search command? The goal is to remove/alter the field before it enters Splunk.