All Posts


You could use the addinfo command, then use the info_min_time field, which contains the epoch time of the earliest time boundary of your time picker: <your search> | addinfo | eval _time = info_min_time
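As a quick illustration of what addinfo provides (the info_* field names come from the addinfo command; makeresults is just a stand-in for a real search), you can inspect the search-time boundaries directly:

```
| makeresults
| addinfo
| eval earliest_readable=strftime(info_min_time, "%F %T"),
       latest_readable=strftime(info_max_time, "%F %T")
| table info_min_time info_max_time earliest_readable latest_readable
```

info_min_time and info_max_time reflect whatever range is selected in the time picker when the search runs.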
Hi @siva_kumar0147, The simplest solution is to use the Timeline visualization. You'll need to calculate durations in milliseconds between transitions: | makeresults format=csv data="_time,direction,polarization 1732782870,TX,L 1732782870,RX,R 1732781700,TX,R 1732781700,RX,L" | sort 0 - _time + direction | eval polarization=case(polarization=="L", "LHCP", polarization=="R", "RHCP") | streamstats global=f window=2 first(_time) as end_time by direction | addinfo | eval duration=if(end_time==_time, 1000*(info_max_time-_time), 1000*(end_time-_time)) | table _time direction polarization duration
I have a dataset which has a field INSERT_DATE. Now I want to perform a search based on the date matching the Global Time Picker. What I want is: index = ******* host=transaction source=prd | spath | mvexpand message | rename message as _raw | fields - {}.* ``` optional ``` | spath path={} | mvexpand {} | fields - _* ``` optional ``` | spath input={} | search TARGET_SYSTEM="EAS" | eval _time=strptime(INSERT_DATE, "%m/%d/%Y") | chart sum(TRANSACTION_COUNT) as TRANSACTION_COUNT by INSERT_DATE | where INSERT_DATE =strftime($global_time.latest$, "%m/%d/%Y")
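One possible sketch (not a confirmed solution for this thread) is to compare the parsed INSERT_DATE epoch against the time-picker boundaries that addinfo exposes, rather than comparing formatted strings against a dashboard token. Assuming INSERT_DATE really parses with "%m/%d/%Y", the tail of the search could look like:

```
| eval event_epoch=strptime(INSERT_DATE, "%m/%d/%Y")
| addinfo
| where event_epoch >= info_min_time AND event_epoch <= info_max_time
| chart sum(TRANSACTION_COUNT) as TRANSACTION_COUNT by INSERT_DATE
```

This sidesteps string-format mismatches because both sides of the comparison are epoch numbers.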
Try something like this index="pm-azlm_internal_prod_events" sourcetype="azlmj" NOT [| inputlookup pm-azlm-aufschneidmelder-j | table ocp fr sec | format] | table _time ocp fr el d_1 | search d_1="DEF ges AZ*"
Hi @gcusello, When I do a fresh install of Splunk Enterprise v9.2.1 on Windows Server 2019 via CLI, I can see all the other directories except /bin, but if I install it using the UI it works. How can I proceed? Any insights? In the MSI logs we see a failCA error; what could be the reason? There are no hardware issues either.
To add some clarity, as that accepted answer is still quite vague or confusing... The easiest way is to relate these field names to _time and _indextime:

recentTime = _indextime = the last actual time this host was heard from by the index(es) defined in the metadata command; in more specific terminology, the last time it wrote logs to an index.

lastTime = _time = the timestamp of the events from that host by the index(es) defined in the metadata command; in other words, the latest timestamp in the set of events defined by the search.
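A minimal example of how those fields surface (the index name is a placeholder; firstTime/lastTime/recentTime are the fields the metadata command returns):

```
| metadata type=hosts index=<your_index>
| eval lastSeen=strftime(recentTime, "%F %T"),
       latestEvent=strftime(lastTime, "%F %T")
| table host lastSeen latestEvent totalCount
```

Comparing lastSeen and latestEvent side by side makes the distinction above concrete: a host with a stale latestEvent but a recent lastSeen is still sending data, just with old timestamps.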
<form version="1.1" theme="light">
  <label>Answers - Classic - Viz toggles</label>
  <fieldset submitButton="false">
    <input type="radio" token="tok_data_labels">
      <label>Data Labels</label>
      <choice value="none">Off</choice>
      <choice value="all">On</choice>
      <choice value="minmax">Min/Max</choice>
      <default>none</default>
    </input>
  </fieldset>
  <row>
    <panel>
      <chart>
        <title>Viz Radio Toggle</title>
        <search>
          <query>index=_internal | timechart span=5m limit=5 useother=t count by sourcetype</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
        <option name="charting.chart">line</option>
        <option name="charting.chart.showDataLabels">$tok_data_labels$</option>
        <option name="charting.drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </chart>
    </panel>
  </row>
</form>
Thank you very much for your answer!
Do you mean you want to monitor your Splunk infrastructure usage or do you ingest some data regarding "external" hosts? For the former as @dural_yyz mentioned, check Monitoring Console. You can also gather metrics from the _metrics index. For the latter - it depends on your environment. Splunk "just" happily gets the data you throw at it and can manipulate and search it. But it's up to your architects and admins to tell you where they set up the data and what it's made of.
It's not clear what is being asked.  What we know about how the coldToFrozenScript is processed is documented in indexes.conf.spec. If there's something more specific you want to know then please revise the question.
What do you mean by "doesn't work"? It doesn't filter out the values? That's because there is a mismatch between field names. A subsearch (unless its results consist solely of fields named "search" or "query", or you used the format command explicitly) is rendered as a set of conditions based on the names of the resulting fields. So your subsearch in Version 1 is rendered as ((unique_id="some_value") OR (unique_id="another_value") OR ... ), whereas your subsearch in Version 2 is rendered similarly but the field is called "ignore". You're not creating a field called "ignore" anywhere earlier in the search, so you have nothing to filter on. BTW, are you aware that this is a relatively inefficient way to search? (Inclusion is better than exclusion!)
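To see exactly what a subsearch expands to, you can run it on its own with the format command appended (the lookup name is the one from this thread; the exact values depend on its contents):

```
| inputlookup pm-azlm-aufschneidmelder-j
| strcat ocp "_" fr "_" sec unique_id
| table unique_id
| format
```

The result is a single "search" field containing something of the shape ( ( unique_id="..." ) OR ( unique_id="..." ) ), which is the literal text spliced into the outer search, and makes the field-name dependency obvious.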
According to the documentation, you may upgrade 9.1 directly to 9.3.
https://docs.splunk.com/Documentation/Splunk/9.3.2/Installation/HowtoupgradeSplunk
The field (not "column") returned by the subsearch ("unique_id" in Version 1 and "ignore" in Version 2) must exist in the main search.
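One way to make Version 2 work, then, is to rename the subsearch field so it matches the outer field before the subsearch returns (a sketch based on the searches in this thread):

```
| search NOT
    [| inputlookup pm-azlm-aufschneidmelder-j
     | strcat ocp "_" fr "_" sec ignore
     | table ignore
     | rename ignore as unique_id]
```

With the rename in place the subsearch again expands to unique_id="..." conditions, which is a field the main search actually has.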
How does Splunk call the coldToFrozen.py script automatically once the script is set up in /opt/splunk/bin and indexes.conf is configured with the needed arguments? Once cold_db is full, how does Splunk invoke this script?
https://docs.splunk.com/Documentation/Splunk/Latest/Admin/Propsconf#Field_extraction_configuration

SEDCMD-<class> = <sed script>
* Only used at index time.
* Commonly used to anonymize incoming data at index time, such as credit card or social security numbers. For more information, search the online documentation for "anonymize data."
* Used to specify a sed script which Splunk software applies to the _raw field.
* A sed script is a space-separated list of sed commands. Currently the following subset of sed commands is supported: replace (s) and character substitution (y).
* Syntax:
  * replace - s/regex/replacement/flags
    * regex is a perl regular expression (optionally containing capturing groups).
    * replacement is a string to replace the regex match. Use \n for back references, where "n" is a single digit.
    * flags can be either: g to replace all matches, or a number to replace a specified match.
  * substitute - y/string1/string2/
    * substitutes the string1[i] with string2[i]
* No default.
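As a small illustrative props.conf stanza using the replace (s) form described above (the sourcetype name and pattern are made up for the example):

```
[my_custom_sourcetype]
# Mask anything shaped like a US SSN in _raw at index time
SEDCMD-mask_ssn = s/\d{3}-\d{2}-\d{4}/XXX-XX-XXXX/g
```

The g flag replaces every match in the event, per the flags description in the spec excerpt.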
Dear experts, The basic idea of what I am trying to do: the results of a search should be filtered so that only data points are displayed which are not part of a "blacklist" maintained as a lookup table. The challenging thing is that 3 columns have to be taken into account for filtering at the same time. After a lot of trials, I ended up creating a key from the 3 columns (which is unique) and then filtering on the key. It is working, I just don't understand why :-(. Question: Does anybody have an idea why the Version 1 filter works, and why the Version 2 filter fails? Question: What needs to be changed to get Version 2 to also work? index="pm-azlm_internal_prod_events" sourcetype="azlmj" | strcat ocp "_" fr "_" el unique_id | table _time ocp fr el unique_id d_1 | search d_1="DEF ges AZ*" ``` VERSION 1: the working one ``` ``` As long as the subsearch returns a table with the column unique_id ``` ``` which is exactly the name of the column I want to filter on, all works great ``` | search NOT [| inputlookup pm-azlm-aufschneidmelder-j | strcat ocp "_" fr "_" sec unique_id | table unique_id] ``` VERSION 2: NOT working ``` ``` As soon as I change the name of the column in the subsearch, the filter won't work anymore ``` | search NOT [| inputlookup pm-azlm-aufschneidmelder-j | strcat ocp "_" fr "_" sec ignore | table ignore] | timechart span=1d limit=0 count by unique_id   And the final question: is there a way to do such filtering without going through the key creation? Thank you in advance.
Hello, I would like to confirm if it is possible to upgrade Splunk directly from version 9.1.1 to 9.3 on Linux, without going through version 9.2. Could you please clarify if this is supported and if there are any specific considerations for this process? Best regards,
Start with the DMC (Distributed Monitoring Console) to review license usage broken down by index. This will show you the daily ingest records for the last 30 days broken down by index. This is only a starting point: depending on how your environment was set up, you may have very specific indexes, or things may have been aggregated into only a few indexes. From there you can start deciding what questions come next.
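Alongside the console, a commonly used starting search against the license manager's own usage log (source path and the b/idx field names as documented for license_usage.log) looks like:

```
index=_internal source=*license_usage.log type="Usage"
| eval GB=round(b/1024/1024/1024, 3)
| timechart span=1d sum(GB) as GB_ingested by idx
```

This charts daily ingested gigabytes per index, which is a quick way to see where the volume actually goes before digging deeper.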
How to integrate Microsoft Intune with Splunk using the connector downloaded from Splunkbase?