

All Posts

Hi @llopreiato

Unfortunately no, it isn't. The only supported way is via the status code - I can't really think of many other options either. You could put something like haproxy/nginx on the CM server to proxy the requests and modify the output, but that obviously wouldn't be a supported approach (and it's outside my area of expertise these days, sorry!)

Did this answer help you? If so, please consider:
- Adding kudos to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing.
Hi @Leonardo1998

Good find - I haven't used this app for a while so I'm unsure, but does the input allow you / ask you for a list of operators to apply to the metrics, or even dimensions? I know some of the AWS CloudWatch metrics ask for this, so I'm wondering if it's the same. If so, it could be that these aren't quite what it's expecting? It sounds like you're on the right track with the debugging - you might need to print out the actual response from the API into the logs so you can see what is being returned!

Did this answer help you? If so, please consider:
- Adding kudos to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing.
Hi @Sai-08

Have you been able to identify multiple events in the `notable` response for the same event_id? Can you confirm that you can see the different statuses (Closed/Resolved etc.)? This is needed in order to calculate the MTTM; however, I'm not sure the data is in the notable events you're referring to. (A quick way to check is sketched after this reply.)

Did this answer help you? If so, please consider:
- Adding kudos to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing.
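For example, a quick check along these lines (an untested sketch - it assumes event_id is the unique key and status_label holds the status, as described elsewhere in this thread) would show whether more than one status is recorded per notable:

`notable`
``` Count how many distinct statuses each notable has recorded ```
| stats values(status_label) AS statuses dc(status_label) AS status_count BY event_id
| where status_count > 1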
We have installed the Akamai add-on (https://splunkbase.splunk.com/app/4310) on our HF, installed Java, and configured the data input on the HF, creating an index on the HF just for dropdown purposes and creating the same index on the CM and pushing it to the indexers. But we are not receiving any data now. When we check splunkd.log we see the below:

04-02-2025 11:08:27.529 +0000 INFO ExecProcessor [8927 ExecProcessor] - message from "/opt/splunk/etc/apps/TA-Akamai_SIEM/linux_x86_64/bin/TA-Akamai_SIEM.sh" infoMsg = streamEvents, begin streamEvents
04-02-2025 11:08:27.646 +0000 INFO ExecProcessor [8927 ExecProcessor] - message from "/opt/splunk/etc/apps/TA-Akamai_SIEM/linux_x86_64/bin/TA-Akamai_SIEM.sh" infoMsg = streamEvents, inputName=TA-Akamai_SIEM://WAF_AKAMAI_SIEM_DEV
04-02-2025 11:08:27.646 +0000 INFO ExecProcessor [8927 ExecProcessor] - message from "/opt/splunk/etc/apps/TA-Akamai_SIEM/linux_x86_64/bin/TA-Akamai_SIEM.sh" infoMsg = streamEvents, inputName(String)=TA-Akamai_SIEM://WAF_AKAMAI_SIEM_DEV
04-02-2025 11:08:27.653 +0000 INFO ExecProcessor [8927 ExecProcessor] - message from "/opt/splunk/etc/apps/TA-Akamai_SIEM/linux_x86_64/bin/TA-Akamai_SIEM.sh" infoMsg streamEvents Service connect to Akamai_SIEM App...
04-02-2025 11:08:27.900 +0000 INFO ExecProcessor [8927 ExecProcessor] - message from "/opt/splunk/etc/apps/TA-Akamai_SIEM/linux_x86_64/bin/TA-Akamai_SIEM.sh" infoMsg=Processing Data...
04-02-2025 11:08:27.900 +0000 INFO ExecProcessor [8927 ExecProcessor] - message from "/opt/splunk/etc/apps/TA-Akamai_SIEM/linux_x86_64/bin/TA-Akamai_SIEM.sh" infoMsg=KV Service get...
04-02-2025 11:08:27.902 +0000 INFO ExecProcessor [8927 ExecProcessor] - message from "/opt/splunk/etc/apps/TA-Akamai_SIEM/linux_x86_64/bin/TA-Akamai_SIEM.sh" infoMsg=Parse KVstore data...
04-02-2025 11:08:27.946 +0000 INFO ExecProcessor [8927 ExecProcessor] - message from "/opt/splunk/etc/apps/TA-Akamai_SIEM/linux_x86_64/bin/TA-Akamai_SIEM.sh" infoMsg=Parse KVstore data...Complete
04-02-2025 11:08:27.946 +0000 INFO ExecProcessor [8927 ExecProcessor] - message from "/opt/splunk/etc/apps/TA-Akamai_SIEM/linux_x86_64/bin/TA-Akamai_SIEM.sh" urlToRequest=https://akab-hg3zdmaay4bq4n5w-ljwg5vtmjxs5ukg2.luna.akamaiapis.net/siem/v1/configs/108115;107918?offset=fd2ba;oj2ETReWQtqhoYX8yuFwqtycwtzWgKUIa_hXJeP06170pYL_XCOdDTR_8u7mXpcuzAfAbBrlVyYQpgwhoHKPYpRQL4dWnY7TENjBhJv0WlUKy1oaCxYa_dEz5w68Rf4RKLqk&limit=150000
04-02-2025 11:08:28.820 +0000 INFO ExecProcessor [8927 ExecProcessor] - message from "/opt/splunk/etc/apps/TA-Akamai_SIEM/linux_x86_64/bin/TA-Akamai_SIEM.sh" status code=200
04-02-2025 11:08:28.822 +0000 INFO ExecProcessor [8927 ExecProcessor] - message from "/opt/splunk/etc/apps/TA-Akamai_SIEM/linux_x86_64/bin/TA-Akamai_SIEM.sh" awaiting shutdown...
04-02-2025 11:08:28.850 +0000 INFO ExecProcessor [8927 ExecProcessor] - message from "/opt/splunk/etc/apps/TA-Akamai_SIEM/linux_x86_64/bin/TA-Akamai_SIEM.sh" found new offset: fd2ba;-kKV2wsV1oLesFFgkhv-dUAfVlC09trNuJWPKUOI8wCVnPWtwMjhld_MIgN84uv9OcFL6Fq5EwOs-wwKHLC1hUDvjBAhG7ZeROQ4kxLcdDwYSFhmF_iTYqmW8EE26VWd9cW1
04-02-2025 11:08:28.851 +0000 INFO ExecProcessor [8927 ExecProcessor] - message from "/opt/splunk/etc/apps/TA-Akamai_SIEM/linux_x86_64/bin/TA-Akamai_SIEM.sh" termination complete....
04-02-2025 11:08:28.851 +0000 INFO ExecProcessor [8927 ExecProcessor] - message from "/opt/splunk/etc/apps/TA-Akamai_SIEM/linux_x86_64/bin/TA-Akamai_SIEM.sh" Cores: 8
04-02-2025 11:08:28.851 +0000 INFO ExecProcessor [8927 ExecProcessor] - message from "/opt/splunk/etc/apps/TA-Akamai_SIEM/linux_x86_64/bin/TA-Akamai_SIEM.sh" Consumer CPU time: 0.03 s
04-02-2025 11:08:28.851 +0000 INFO ExecProcessor [8927 ExecProcessor] - message from "/opt/splunk/etc/apps/TA-Akamai_SIEM/linux_x86_64/bin/TA-Akamai_SIEM.sh" EdgeGrid time: 0.88 s
04-02-2025 11:08:28.852 +0000 INFO ExecProcessor [8927 ExecProcessor] - message from "/opt/splunk/etc/apps/TA-Akamai_SIEM/linux_x86_64/bin/TA-Akamai_SIEM.sh" Real time: 1.21 s
04-02-2025 11:08:28.852 +0000 INFO ExecProcessor [8927 ExecProcessor] - message from "/opt/splunk/etc/apps/TA-Akamai_SIEM/linux_x86_64/bin/TA-Akamai_SIEM.sh" Consumer CPU utilization: 14.15%
04-02-2025 11:08:28.852 +0000 INFO ExecProcessor [8927 ExecProcessor] - message from "/opt/splunk/etc/apps/TA-Akamai_SIEM/linux_x86_64/bin/TA-Akamai_SIEM.sh" Lines Processed: 1
04-02-2025 11:08:28.852 +0000 INFO ExecProcessor [8927 ExecProcessor] - message from "/opt/splunk/etc/apps/TA-Akamai_SIEM/linux_x86_64/bin/TA-Akamai_SIEM.sh" infoMsg=KV Service get...
04-02-2025 11:08:28.854 +0000 INFO ExecProcessor [8927 ExecProcessor] - message from "/opt/splunk/etc/apps/TA-Akamai_SIEM/linux_x86_64/bin/TA-Akamai_SIEM.sh" infoMsg=Parse KVstore data...
04-02-2025 11:08:28.855 +0000 INFO ExecProcessor [8927 ExecProcessor] - message from "/opt/splunk/etc/apps/TA-Akamai_SIEM/linux_x86_64/bin/TA-Akamai_SIEM.sh" infoMsg=Parse KVstore data...Complete
04-02-2025 11:08:28.870 +0000 INFO ExecProcessor [8927 ExecProcessor] - message from "/opt/splunk/etc/apps/TA-Akamai_SIEM/linux_x86_64/bin/TA-Akamai_SIEM.sh" infoMsg = streamEvents, end streamEvents
04-02-2025 11:08:28.870 +0000 ERROR ExecProcessor [8927 ExecProcessor] - message from "/opt/splunk/etc/apps/TA-Akamai_SIEM/linux_x86_64/bin/TA-Akamai_SIEM.sh" javax.xml.stream.XMLStreamException: No element was found to write: java.lang.ArrayIndexOutOfBoundsException: -1

Not sure what these errors are, but when we check with index=<created index> on the SH no data is showing. Please help me in this case. We have even installed this add-on on the deployer (removing inputs.conf) and pushed it to the SHs, as it has props and transforms to be applied at search time.
Hi @Treize  In the Match type box you would do CIDR(fieldName) where fieldName is the name of the field in your lookup which contains the CIDR values.  
When running my Bamboo plan I am unable to generate the Splunk log JSON file. This is the log:

build 02-Apr-2025 11:57:27 /home/bamboo-agent/xml-data/build-dir/CBPPOC-SLPIB-JOB1/dbscripts
build 02-Apr-2025 11:57:27 _bamboo_build.sh
build 02-Apr-2025 11:57:27 _build.sh
build 02-Apr-2025 11:57:27 licensecode.py
build 02-Apr-2025 11:57:27 _push2release.sh
build 02-Apr-2025 11:57:27 _push2snapshot.sh
build 02-Apr-2025 11:57:27 splunkQueries.txt
build 02-Apr-2025 11:57:28 [licensecode.py:43 - login() ] Logging in to Splunk API initiated
build 02-Apr-2025 11:57:28 [licensecode.py:62 - login() ] Logged in as: M022754
build 02-Apr-2025 11:57:28 [licensecode.py:257 - main() ] Command line param queryFile has value: splunkQueries.txt
build 02-Apr-2025 11:57:28 [licensecode.py:159 - processQueryFile() ] Query: search eventtype="cba-env-prod" NNO.API.LOG.PM.UPDATE latest=now earliest=-7d
build 02-Apr-2025 11:57:28 [licensecode.py:161 - processQueryFile() ] Number of queries in queue: 1
build 02-Apr-2025 11:57:28 [licensecode.py:193 - triggerSearch() ] Triggering search for query: search eventtype="cba-env-prod" NNO.API.LOG.PM.UPDATE latest=now earliest=-7d
build 02-Apr-2025 11:57:28 [licensecode.py:201 - triggerSearch() ] Search initiated with SID: 1743587848.2389422_84B919DD-8E60-47EE-AF06-F6EE20B95178
build 02-Apr-2025 11:57:38 [licensecode.py:265 - main() ] Waiting next 10 seconds for all queries to complete
build 02-Apr-2025 11:57:48 [licensecode.py:265 - main() ] Waiting next 10 seconds for all queries to complete
build 02-Apr-2025 11:57:58 [licensecode.py:265 - main() ] Waiting next 10 seconds for all queries to complete
build 02-Apr-2025 11:58:08 [licensecode.py:265 - main() ] Waiting next 10 seconds for all queries to complete
build 02-Apr-2025 11:58:18 [licensecode.py:265 - main() ] Waiting next 10 seconds for all queries to complete
build 02-Apr-2025 11:58:28 [licensecode.py:265 - main() ] Waiting next 10 seconds for all queries to complete
build 02-Apr-2025 11:58:38 [licensecode.py:265 - main() ] Waiting next 10 seconds for all queries to complete
build 02-Apr-2025 11:58:48 [licensecode.py:265 - main() ] Waiting next 10 seconds for all queries to complete
build 02-Apr-2025 11:58:58 [licensecode.py:265 - main() ] Waiting next 10 seconds for all queries to complete
build 02-Apr-2025 11:59:08 [licensecode.py:265 - main() ] Waiting next 10 seconds for all queries to complete
build 02-Apr-2025 11:59:18 [licensecode.py:265 - main() ] Waiting next 10 seconds for all queries to complete
build 02-Apr-2025 11:59:28 [licensecode.py:265 - main() ] Waiting next 10 seconds for all queries to complete
build 02-Apr-2025 11:59:38 [licensecode.py:265 - main() ] Waiting next 10 seconds for all queries to complete
build 02-Apr-2025 11:59:48 [licensecode.py:265 - main() ] Waiting next 10 seconds for all queries to complete
build 02-Apr-2025 11:59:58 [licensecode.py:265 - main() ] Waiting next 10 seconds for all queries to complete
build 02-Apr-2025 12:00:08 [licensecode.py:265 - main() ] Waiting next 10 seconds for all queries to complete
build 02-Apr-2025 12:00:18 [licensecode.py:265 - main() ] Waiting next 10 seconds for all queries to complete
build 02-Apr-2025 12:00:28 [licensecode.py:265 - main() ] Waiting next 10 seconds for all queries to complete
build 02-Apr-2025 12:00:38 [licensecode.py:265 - main() ] Waiting next 10 seconds for all queries to complete
build 02-Apr-2025 12:00:48 [licensecode.py:265 - main() ] Waiting next 10 seconds for all queries to complete
build 02-Apr-2025 12:00:58 [licensecode.py:265 - main() ] Waiting next 10 seconds for all queries to complete
build 02-Apr-2025 12:00:58 [licensecode.py:268 - main() ] Execution timeout of 200 seconds has passed, exiting
simple 02-Apr-2025 12:00:58 Failing task since return code of [/home/bamboo-agent/temp/CBPPOC-SLPIB-JOB1-268-ScriptBuildTask-11226880290426353947.sh] was 1 while expected 0
02-Apr-2025 12:00:58 Failing as no matching files has been found and empty artifacts are not allowed.

After the waiting time completes, the JSON log file is not generated. Please help me with how to resolve this issue.
I don't understand why a summary index would be better. We use 2 lookups:
- the 1st because it comes from a third party
- the 2nd because we need to increment it after treating this IP as an alert
Hey @livehybrid,

Thank you for your time. I tried the above query but it didn't show any results. The unique identifier is event_id and I changed it. Also I have replaced it with my base search, which was:

`notable`
| search owner_realname="analyst name"

Please bear in mind that I am looking for the avg time spent on the alerts in the past 30 days (I use the time range).
Hi @Treize,

it could run, but you should add another field to use for the check.

But, given the issue of so many rows, why don't you use a summary index, putting it in the main search so you don't hit limits? Something like this:

(<my search>) OR (index=new_summary_index)
| eval ip=coalesce(ip,IP)
| stats values(index) AS index dc(index) AS index_count BY ip
| where index_count=1 AND index=<your_index>
| fields ip
| outputlookup append=true override_if_empty=false 2.csv

Ciao.
Giuseppe
In the meantime, I've come up with a simple idea: a subsearch for the lookup with 1000 lines and a simple "| lookup" command for the lookup with 50,000 lines.
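For what it's worth, that combination might look roughly like this (a sketch only - the lookup names, the ip field, and the output field are placeholders, not taken from this thread):

index=<your_index> [| inputlookup small_lookup_1000.csv | fields ip ]
``` The subsearch turns the 1000-line lookup into a filter on the base search ```
| lookup big_lookup_50000.csv ip OUTPUT label
``` The 50,000-line lookup then enriches the remaining events ```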
Hi @Sai-08,

You can calculate the average time difference between the "In Progress" status and the "Closed" or "Resolved" status using the stats command. Here is an example query using makeresults for sample data. Replace the makeresults part with your base search.

| makeresults count=4
| streamstats count as alert_id
| eval _time = case(alert_id=1, now() - 3600, alert_id=2, now() - 7200, alert_id=3, now() - 10800, alert_id=4, now() - 14400)
| eval status_label="New"
| append [| makeresults count=4 | streamstats count as alert_id | eval _time = case(alert_id=1, now() - 3000, alert_id=2, now() - 6000, alert_id=3, now() - 9000, alert_id=4, now() - 12000) | eval status_label="In Progress"]
| append [| makeresults count=4 | streamstats count as alert_id | eval _time = case(alert_id=1, now() - 600, alert_id=2, now() - 1200, alert_id=3, now() - 1800, alert_id=4, now() - 2400) | eval status_label=if(alert_id%2=0, "Closed", "Resolved")]
| sort 0 _time
``` Replace the makeresults block above with your base search: index=<your_index> sourcetype=<your_sourcetype> status_label IN ("In Progress", "Closed", "Resolved") ```
``` Ensure you have a unique identifier for each alert (e.g., alert_id) ```
``` Filter for relevant status transitions ```
| where status_label IN ("In Progress", "Closed", "Resolved")
``` Capture the timestamp for "In Progress" and "Closed/Resolved" statuses ```
| eval in_progress_time = if(status_label="In Progress", _time, null())
| eval closed_resolved_time = if(status_label="Closed" OR status_label="Resolved", _time, null())
``` Group by alert_id and find the earliest "In Progress" time and latest "Closed/Resolved" time ```
| stats earliest(in_progress_time) as start_time latest(closed_resolved_time) as end_time by alert_id
``` Filter out alerts that didn't complete the transition or where times are illogical ```
| where isnotnull(start_time) AND isnotnull(end_time) AND end_time > start_time
``` Calculate the duration for each alert ```
| eval duration_seconds = end_time - start_time
``` Calculate the average duration (MTTM) across all alerts ```
| stats avg(duration_seconds) as mttm_seconds
``` Optional: Format the result for readability ```
| eval mttm_readable = tostring(mttm_seconds, "duration")
| fields mttm_seconds mttm_readable

How it works:
- The search first filters events for the relevant statuses ("In Progress", "Closed", "Resolved").
- You need a unique field (alert_id in the example) to identify each alert instance.
- It uses eval to create fields holding the timestamp (_time) only when the event matches the specific status ("In Progress" or "Closed"/"Resolved").
- stats groups the events by alert_id and finds the earliest time the alert was "In Progress" (start_time) and the latest time it was "Closed" or "Resolved" (end_time).
- It filters out any alerts that haven't reached a final state or have inconsistent timestamps.
- The duration_seconds is calculated for each alert.
- Finally, stats calculates the mean time across all valid alert durations.
- The result is optionally formatted into a human-readable duration string (e.g., HH:MM:SS).

Did this answer help you? If so, please consider:
- Adding kudos to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing.
@marnall, please
That solution isn't perfect but it's a good tip, thanks dude
The lookup has already been defined. The field is not actually called "ip", so in the definition should we put CIDR(ip) because the value is an IP, or CIDR(<fieldName>) with the name of the field it should take into account? In both cases, this solution doesn't work. It can't find the IPs in the lookup's CIDRs...
Hello everyone,

I need help with determining the time an analyst needs to investigate an alert and close it. For more clarity, I want to calculate the time spent from when the status_label field value is updated from In Progress to (Closed or Resolved). Note that the default value of this field is New. I am new at Splunk, so please write the full query and I will adjust it for my needs.
Hi Simon,
OS: Linux
Splunk version: 9.1.8
AME version: 3.2.3
I have tested the query and can confirm that some metrics are being processed, as I mentioned in my initial question. For example:

04-02-2025 08:17:01.787 +0000 INFO Metrics - group=per_source_thruput, ingest_pipe=1, series="/opt/splunk/var/log/splunk/splunk_ta_microsoft-cloudservices_azure_resource.log", kbps="1.382", eps="3.766", kb="41.450", ev=113, avg_age="2.867", max_age=4

However, I still don't see any logs related to Load Balancers. Regarding your question: yes, the index for this metric input is a metric index, not an event index.

That said, I have downloaded the source code of the add-on and I suspect the issue might be on the Azure side. Here's why - I debugged the execution flow:
- _index_resource_metrics: This function prepares the request. Here, I do see references to the Load Balancer.
- _index_metrics: This function calls _fetch_resource_metrics, which sends the request to Azure and processes the results in a loop. If the response is empty, there is no log message indicating that Azure returned no data.
- No relevant errors or exceptions were found in the logs.
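One thing that might help cross-check from the Splunk side (a hedged suggestion - the source path is taken from the metrics line above, and the keyword filter is only a guess at what the add-on logs) is to search the add-on's own log for any Load Balancer activity or errors:

index=_internal source="/opt/splunk/var/log/splunk/splunk_ta_microsoft-cloudservices_azure_resource.log" ("loadbalancer" OR "ERROR")
``` Search terms are case-insensitive, so "loadbalancer" also matches LoadBalancers in request URLs ```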
My guess would be no - it is likely that a return value of zero is used in multiple unrelated events. Perhaps you have over-sanitised the events, which has hidden some clues that might help us suggest other ways to group the events. Please share at least two sets of events, related to at least two different business events, e.g. downloading different external documents, with as little sanitisation as possible. Obviously, try not to give away any sensitive or proprietary information.

Given that you have three accessDates for the sets of events, and assuming DocumentId is unique for your business event / download, you could also try something like this:

| eval parameters=json_array_to_mv(json_extract_exact(_raw,"parameters"))
| mvexpand parameters
| spath input=parameters
| spath accessDate
| eval name="field_".trim(name,"@")
| eval {name}=value
| stats values(field_*) as * by accessDate
| stats values(*) as * by DocumentId
Thanks so much!!!
Hi @Treize

If your "ip" field in the lookup is a CIDR then configure it as a lookup definition (rather than referencing it as <lookup1>.csv), and then under Advanced Options set the Match type to CIDR(ip). (A usage sketch follows this reply.)

Did this answer help you? If so, please consider:
- Adding kudos to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing.
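For reference, the equivalent setting in transforms.conf is match_type = CIDR(ip). Once the definition is saved, usage is just the normal lookup command - a minimal sketch (the lookup name, src_ip, and label are placeholder names, not from this thread):

| makeresults
| eval src_ip="10.1.2.3"
``` Matches src_ip against the CIDR ranges stored in the lookup's ip column ```
| lookup my_cidr_lookup ip AS src_ip OUTPUT label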