All Posts

If you open an episode, you'll see which notable event aggregation policy (NEAP) created it. Were both episodes created by the same NEAP? If you look at the timeline in each of your episodes, you'll also see what alerted. Could it be that you have different episodes because you have a notable event for the KPI and another for the service health score? Maybe your KPI has a critical status while your service is at medium because you have a second KPI in the service and both are weighted with 5?
I wouldn't do it with the multi-KPI alert. If you install the Content Pack for Monitoring and Alerting in ITSI, you get some new correlation searches that monitor a sustained status for entities, KPIs, or services. These searches can be modified if needed; see the sketch below.
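For illustration only, here is a minimal hand-rolled sketch of a sustained-status check against the ITSI summary index. The index and field names (itsi_summary, alert_level, kpi, serviceid) and the numeric severity mapping (5 = high, 6 = critical) are assumptions based on a typical ITSI deployment; the correlation searches shipped with the content pack are the better starting point:
index=itsi_summary kpi=* earliest=-30m
| stats min(alert_level) as min_level by serviceid kpi
| where min_level>=5 ``` the KPI never dropped below high (5) during the whole 30-minute window ```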
It might be that they forgot it or didn't consider it important for the primary use case. This add-on is Splunk-supported, so if you have a support contract, you could reach out to Splunk Support.
Finally it's working. Thank you all for your help.
| mvexpand message
| rename message as _raw
| fields - {}.* ``` optional ```
| spath path={}
| mvexpand {}
| fields - _* ``` optional ```
| spath input={}
| search TARGET_SYSTEM="EAS"
| eval lookupfiledatestart=strftime(INSERT_DATE,"%m/%d/%Y")
| addinfo
| eval _time=strftime(info_min_time,"%m/%d/%Y")
| where _time=INSERT_DATE
| chart sum(TRANSACTION_COUNT) as TRANSACTION_COUNT by INSERT_DATE
Splunk doesn't by default come with a cert file called server_pkcs1.pem. It must be a piece of configuration done explicitly in your deployment. So you have to find: 1) where (if anywhere) its use is defined (@marnall's hint can help, but it doesn't have to contain all possible references to certs; some add-ons can use their own cert settings), and 2) where this cert comes from. As far as I remember, only the default cert can be automatically (re)created.
I think @marnall meant | where _time=info_max_time (or whatever other meta field) instead of eval.
I modified my search but I'm not getting any result:
index=****** host=transaction source=prd
| spath
| mvexpand message
| rename message as _raw
| fields - {}.* ``` optional ```
| spath path={}
| mvexpand {}
| fields - _* ``` optional ```
| spath input={}
| search TARGET_SYSTEM="EAS"
| chart sum(TRANSACTION_COUNT) as TRANSACTION_COUNT by INSERT_DATE
| addinfo
| eval _time=info_min_time
| where INSERT_DATE=_time
My raw data:
[{"ID":"115918","TARGET_SYSTEM":"EAS","REVIEW":"CPW_00011H","TOTAL_INVENTORY":0,"TOTAL_HITS":0,"TRANSACTION_TYPE":"MQ","TRANSACTION_NAME":"HO620I","TRANSACTION_COUNT":4,"PROCESS_DATE":"11/26/2024","INSERT_DATE":"11/27/2024"},{"ID":"115919","TARGET_SYSTEM":"EAS","REVIEW":"CPW_00011H","TOTAL_INVENTORY":0,"TOTAL_HITS":0,"TRANSACTION_TYPE":"MQ","TRANSACTION_NAME":"HO626I","TRANSACTION_COUNT":39,"PROCESS_DATE":"11/26/2024","INSERT_DATE":"11/27/2024"}]
When I am not using the where condition, it gives me data:
index=**** host=transaction source=prd
| spath
| mvexpand message
| rename message as _raw
| fields - {}.* ``` optional ```
| spath path={}
| mvexpand {}
| fields - _* ``` optional ```
| spath input={}
| search TARGET_SYSTEM="EAS"
| chart sum(TRANSACTION_COUNT) as TRANSACTION_COUNT by INSERT_DATE
| addinfo
| eval _time=info_min_time
The $notation$ is only used within dashboards and with the map command, and it's substituted with a value before a (sub)search is spawned. The normal search interface doesn't have this functionality. You need to use @marnall's way of adding the search metadata to the results.
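For illustration, a minimal sketch of how map substitutes $field$ tokens from each result row before spawning the inner search; index=web and clientip are hypothetical names:
| makeresults
| eval ip="10.0.0.1"
| map search="search index=web clientip=$ip$ | head 5"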
Can you run btool on the machine as the splunk user to make sure that the server_pkcs1.pem certificate is indeed the one used by Splunk?
/opt/splunk/bin/splunk btool server list sslConfig
Look for the serverCert setting.
Another thing to check would be whether Splunk is freezing buckets because they are older than allowed by frozenTimePeriodInSecs. If the evtx data is older than your index retention policy, then Splunk will index the events and then freeze them. Do you see any _internal logs indicating frozen buckets for the index that should contain the evtx data? (Replace <yourindex> with your index name below.)
index=_internal source="/opt/splunk/var/log/splunk/splunkd.log" sourcetype=splunkd component=BucketMover bkt="/opt/splunk/var/lib/splunk/<yourindex>*" freeze
You can use the Python and pip installation in the Splunk bin directory to check the module version:
/opt/splunk/bin/python -m pip freeze | grep -i cherrypy
It uses the formula described in this article: https://docs.splunk.com/Documentation/ITSI/4.19.1/SI/KPIImportance#How_service_health_scores_are_calculated
The health score calculation is based on the current severity level of the service's KPIs (Critical, High, Medium, Low, and Normal) and the weighted average of the importance values of all KPIs in the service.
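As a rough illustration of the weighted-average idea, here is a runnable sketch; the severity-to-score mapping (critical = 0, normal = 100) is an assumption for readability, not the exact values from the documentation:
| makeresults format=csv data="kpi,severity_score,importance
KPI_A,0,5
KPI_B,100,5"
``` KPI_A is critical (assumed score 0), KPI_B is normal (assumed score 100), both weighted 5 ```
| stats sum(eval(severity_score*importance)) as weighted sum(importance) as total_weight
| eval health_score=round(weighted/total_weight, 1)
``` yields 50.0: a mid-range service score even though one KPI is critical ```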
Hi Team, I can see events related to all hosts in the _internal index, but only a few hosts' data is available in the newly created index. Please help me troubleshoot the issue. Thanks in advance.
Can you share the command you used to install it via the CLI? Also, I assume you are running with Administrator privileges when installing.
That connector is for Splunk SOAR, the SOAR product offered by Splunk. It will not work with Splunk Enterprise, the SIEM product offered by Splunk. Currently it seems the go-to way to get Intune logs into Splunk is to send them to a Microsoft Event Hub and then use the Splunk Add-on for Microsoft Cloud Services to ingest them into Splunk. https://splunkbase.splunk.com/app/3110
You could use the addinfo command; the info_min_time field it adds contains the epoch time of the earliest boundary of your time picker:
<your search> | addinfo | eval _time = info_min_time
Hi @siva_kumar0147,
The simplest solution is to use the Timeline visualization. You'll need to calculate durations in milliseconds between transitions:
| makeresults format=csv data="_time,direction,polarization
1732782870,TX,L
1732782870,RX,R
1732781700,TX,R
1732781700,RX,L"
| sort 0 - _time + direction
| eval polarization=case(polarization=="L", "LHCP", polarization=="R", "RHCP")
| streamstats global=f window=2 first(_time) as end_time by direction
| addinfo
| eval duration=if(end_time==_time, 1000*(info_max_time-_time), 1000*(end_time-_time))
| table _time direction polarization duration
I have a dataset which has the field INSERT_DATE, and I want to perform a search based on the date that matches the global time picker. What I want is:
index=******* host=transaction source=prd
| spath
| mvexpand message
| rename message as _raw
| fields - {}.* ``` optional ```
| spath path={}
| mvexpand {}
| fields - _* ``` optional ```
| spath input={}
| search TARGET_SYSTEM="EAS"
| eval _time=strptime(INSERT_DATE, "%m/%d/%Y")
| chart sum(TRANSACTION_COUNT) as TRANSACTION_COUNT by INSERT_DATE
| where INSERT_DATE=strftime($global_time.latest$, "%m/%d/%Y")
Try something like this:
index="pm-azlm_internal_prod_events" sourcetype="azlmj" NOT [| inputlookup pm-azlm-aufschneidmelder-j | table ocp fr sec | format]
| table _time ocp fr el d_1
| search d_1="DEF ges AZ*"
Hi @gcusello,
When I do a fresh install of Splunk Enterprise v9.2.1 on Windows Server 2019 via the CLI, I can see all the other directories except /bin, but if I install it using the UI, it works. How can I proceed further? Any insights on it? In the MSI logs we see a failCA error; what could be the reason? There are no hardware issues either.