All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Can you run btool on the machine as the splunk user to make sure that the server_pkcs1.pem certificate is indeed the one used by Splunk? /opt/splunk/bin/splunk btool server list sslConfig Look for the serverCert setting.
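If that certificate is in use, the btool output should include a stanza similar to the following (the path shown here is illustrative):

```
[sslConfig]
serverCert = /opt/splunk/etc/auth/server_pkcs1.pem
```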
Another thing to check is whether Splunk is freezing buckets because they are older than allowed by frozenTimePeriodInSecs. If the evtx data is older than your index retention policy, then Splunk will index the events and then freeze the buckets. Do you see any _internal logs indicating frozen buckets for the index that should contain the evtx data? (Replace <yourindex> with your index name below.) index=_internal source="/opt/splunk/var/log/splunk/splunkd.log" sourcetype=splunkd component=BucketMover bkt="'/opt/splunk/var/lib/splunk/<yourindex>*" freeze
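As a complementary check, you can compare the bucket time ranges in the index against your retention period with dbinspect (a sketch; <yourindex> is a placeholder):

```
| dbinspect index=<yourindex>
| eval age_days=round((now()-endEpoch)/86400, 1)
| table bucketId state startEpoch endEpoch age_days
| sort - age_days
```

Buckets whose age_days exceeds frozenTimePeriodInSecs (converted to days) are candidates for freezing.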
You can use the Python and pip installation in the Splunk bin directory to check the module version: /opt/splunk/bin/python -m pip freeze | grep -i cherrypy
It uses the formula described in this article: https://docs.splunk.com/Documentation/ITSI/4.19.1/SI/KPIImportance#How_service_health_scores_are_calculated The health score calculation is based on the current severity level of service KPIs (Critical, High, Medium, Low, and Normal) and the weighted average of the importance values of all KPIs in a service.
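As a rough illustration of an importance-weighted average (this is a sketch, not ITSI's exact internal formula; the KPI names and numbers below are made up):

```
| makeresults format=csv data="kpi,importance,severity_value
CPU,11,20
Memory,8,60
Latency,5,80"
| eventstats sum(importance) as total_importance
| eval weighted=severity_value*importance/total_importance
| stats sum(weighted) as illustrative_weighted_score
```

KPIs with a higher importance contribute proportionally more to the overall score.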
Hi Team, I can see events related to all hosts in the _internal index, but only a few hosts' data is available in the newly created index. Please help me troubleshoot the issue. Thanks in advance.
Can you share your command used to install it via CLI? Also I assume you are running with Administrator privileges when installing.
That connector is for Splunk SOAR, the SOAR product offered by Splunk. It will not work with Splunk Enterprise, the SIEM product offered by Splunk. Currently, the go-to way to get Intune logs into Splunk seems to be to send them to a Microsoft Event Hub and then use the Splunk Add-on for Microsoft Cloud Services to ingest them into Splunk. https://splunkbase.splunk.com/app/3110
You could use the addinfo command, then use the info_min_time field, which contains the epoch time of the earliest time boundary in your time picker: <your search> | addinfo | eval _time = info_min_time
Hi @siva_kumar0147, The simplest solution is to use the Timeline visualization. You'll need to calculate durations in milliseconds between transitions:

| makeresults format=csv data="_time,direction,polarization
1732782870,TX,L
1732782870,RX,R
1732781700,TX,R
1732781700,RX,L"
| sort 0 - _time + direction
| eval polarization=case(polarization=="L", "LHCP", polarization=="R", "RHCP")
| streamstats global=f window=2 first(_time) as end_time by direction
| addinfo
| eval duration=if(end_time==_time, 1000*(info_max_time-_time), 1000*(end_time-_time))
| table _time direction polarization duration
I have a dataset with a field INSERT_DATE, and I want to filter my search by that date so it matches the global time picker. What I have so far is: index = ******* host=transaction source=prd | spath | mvexpand message | rename message as _raw | fields - {}.* ``` optional ``` | spath path={} | mvexpand {} | fields - _* ``` optional ``` | spath input={} | search TARGET_SYSTEM="EAS" | eval _time=strptime(INSERT_DATE, "%m/%d/%Y") | chart sum(TRANSACTION_COUNT) as TRANSACTION_COUNT by INSERT_DATE | where INSERT_DATE =strftime($global_time.latest$, "%m/%d/%Y")
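One possible approach (a sketch, untested against your data) is to convert INSERT_DATE to epoch time and compare it against the time-picker boundaries provided by addinfo, rather than using a $global_time$ token:

```
<your base search>
| eval insert_epoch=strptime(INSERT_DATE, "%m/%d/%Y")
| addinfo
| where insert_epoch>=info_min_time AND insert_epoch<=info_max_time
| chart sum(TRANSACTION_COUNT) as TRANSACTION_COUNT by INSERT_DATE
```

addinfo exposes info_min_time and info_max_time, the epoch boundaries of the current search time range.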
Try something like this index="pm-azlm_internal_prod_events" sourcetype="azlmj" NOT [| inputlookup pm-azlm-aufschneidmelder-j | table ocp fr sec | format] | table _time ocp fr el d_1 | search d_1="DEF ges AZ*"
Hi @gcusello , When I freshly install Splunk Enterprise v9.2.1 on Windows Server 2019 via the CLI, I can see all the other directories except /bin, but if I download and install it via the UI, it works. How can I proceed further; any insights? In the MSI logs we see a failCA error. What could be the reason? There are no hardware issues either.
To add some clarity, as the accepted answer is still quite vague or confusing... The easiest way is to relate these field names to _time and _indextime:
recentTime = _indextime = the last time this host was actually heard from by the index(es) given to the metadata command; in other words, the last time it wrote logs to an index.
lastTime = _time = the timestamp of the events from that host in the index(es) given to the metadata command; in other words, the latest event timestamp in the set of events defined by the search.
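For example (the index name is a placeholder), you can render both timestamps side by side:

```
| metadata type=hosts index=<yourindex>
| eval last_event_time=strftime(lastTime, "%F %T"),
       last_indexed_time=strftime(recentTime, "%F %T")
| table host last_event_time last_indexed_time
```

A large gap between the two usually means the host is sending events with old timestamps.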
<form version="1.1" theme="light"> <label>Answers - Classic - Viz toggles</label> <fieldset submitButton="false"> <input type="radio" token="tok_data_labels"> <label>Data Labels</label> <choice value="none">Off</choice> <choice value="all">On</choice> <choice value="minmax">Min/Max</choice> <default>none</default> </input> </fieldset> <row> <panel> <chart> <title>Viz Radio Toggle</title> <search> <query>index=_internal | timechart span=5m limit=5 useother=t count by sourcetype</query> <earliest>-24h@h</earliest> <latest>now</latest> </search> <option name="charting.chart">line</option> <option name="charting.chart.showDataLabels">$tok_data_labels$</option> <option name="charting.drilldown">none</option> <option name="refresh.display">progressbar</option> </chart> </panel> </row> </form>
Thank you very much for your answer !
Do you mean you want to monitor your Splunk infrastructure usage or do you ingest some data regarding "external" hosts? For the former as @dural_yyz mentioned, check Monitoring Console. You can also gather metrics from the _metrics index. For the latter - it depends on your environment. Splunk "just" happily gets the data you throw at it and can manipulate and search it. But it's up to your architects and admins to tell you where they set up the data and what it's made of.
It's not clear what is being asked.  What we know about how the coldToFrozenScript is processed is documented in indexes.conf.spec. If there's something more specific you want to know then please revise the question.
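For reference, a coldToFrozenScript is wired up in indexes.conf roughly like this (the stanza name and script path are illustrative; see indexes.conf.spec for the exact semantics and arguments passed to the script):

```
[myindex]
coldToFrozenScript = "$SPLUNK_HOME/bin/python" "$SPLUNK_HOME/etc/apps/myapp/bin/cold_to_frozen.py"
```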
What do you mean by "doesn't work"? It doesn't filter out the values? That's because there is a mismatch between field names. A subsearch (unless its results consist solely of fields named "search" or "query", or you used the format command explicitly) is rendered as a set of conditions based on the names of the resulting fields. So your subsearch in example 2 is rendered as ((unique_id="some_value") OR (unique_id="another_value") OR ... ), whereas your subsearch in example 3 is rendered similarly but with the field called "ignore". You're not creating a field called "ignore" anywhere earlier in the search, so you have nothing to filter on. BTW, are you aware that this is a relatively inefficient way to search? (Inclusion is better than exclusion!)
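To make a subsearch like example 3 generate conditions on the right field, you could rename the field inside the subsearch before it is formatted (a sketch; the field names follow the ones mentioned above, and the index names are placeholders):

```
index=foo NOT
    [ search index=bar
    | stats count by ignore
    | rename ignore as unique_id
    | fields unique_id ]
```

After the rename, the subsearch renders as ((unique_id="...") OR ...), which matches the field present in the outer search.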
According to the documentation, you may upgrade 9.1 directly to 9.3.
https://docs.splunk.com/Documentation/Splunk/9.3.2/Installation/HowtoupgradeSplunk