All Posts

Hi @SplunkExplorer  When changes to apps are made on your SHC, the changes are applied to the ./local folder within the app on the SHC, whereas the content pushed from your deployer generally lands in the ./default directory. This means that if users have modified any of the knowledge objects since the app was pushed from the deployer, they won't be overwritten when a subsequent deployment is done. Check out the docs for more info: "Because of how configuration file precedence works, changes that users make to apps at runtime get maintained in the apps through subsequent upgrades."
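For illustration, one way to confirm which layer a given setting is actually coming from is btool. A minimal sketch, assuming an app named my_app (a placeholder) on an SHC member:

    $SPLUNK_HOME/bin/splunk btool savedsearches list --debug --app=my_app
    # --debug prefixes each effective setting with the file it was read from, e.g.
    #   .../apps/my_app/local/savedsearches.conf    (runtime changes made on the SHC)
    #   .../apps/my_app/default/savedsearches.conf  (content pushed by the deployer)

Because local takes precedence over default, user edits in local survive subsequent deployer pushes, as described above.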
Hi, please assist with how to build Splunk deployment server clustering with the minimum requirements.
@isoutamo or someone from the Splunk team - can you please help provide me a solution for this type of result?
Hi Splunkers, today I have the following issue: on our SHC, there is a small subset of apps that is managed, and therefore modified, by their users directly on the SHs. What does this mean for us? Of course, that we need to copy the updated versions from the SHs back to the Deployer before performing a new app bundle push; otherwise, the older version on the Deployer will override the updated one on the SHs. My question is: on Splunk 9.2.1, is there any way to avoid these apps being updated when the Deployer is used? The final purpose, just to give an example: if the Deployer has 100 apps in $SPLUNK_HOME/etc/shcluster/apps, we want a bundle push to update 95 of them (when the SH version is not equal to the Deployer one), while the remaining 5 should not be updated by the Deployer.
Hi @Hemanth35 , Based on the logs, your Python script (licensecode.py) successfully initiates a Splunk search but then gets stuck in a waiting loop (Waiting next 10 seconds for all queries to complete) until it hits its internal 200-second timeout. This usually means the script is either not correctly checking whether the Splunk search job has finished, or the search itself is taking longer than 200 seconds to complete.

Check Search Performance: Run the search search eventtype="cba-env-prod" NNO.API.LOG.PM.UPDATE latest=now earliest=-7d directly in the Splunk Search UI and note how long it takes to complete. If it takes longer than 200 seconds, you'll need to either optimize the search or increase the timeout in your Python script.

Review Python Script Logic: Examine the licensecode.py script, particularly the loop that waits for the search to complete (around line 265) and the logic that triggers the search and should check its status (around lines 193-201). Ensure the script correctly polls the Splunk search job's status using its SID (1743587848.2389422_84B919DD-8E60-47EE-AF06-F6EE20B95178). It should check whether the job's dispatchState is DONE, FAILED, or still running (see https://docs.splunk.com/Documentation/Splunk/9.4.1/RESTTUT/RESTsearches#:~:text=dispatchState-,dispatchState,-is%20one%20of). Verify that once the job is DONE, the script proceeds to retrieve the results using the appropriate Splunk SDK method or REST API endpoint. Add more detailed logging within the loop to print the actual status returned by Splunk for the job SID during each check; this will help diagnose whether the status-check logic is flawed.

Increase Script Timeout: If the search legitimately takes longer than 200 seconds, modify the timeout value within your script.
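To illustrate the polling logic, here is a minimal sketch using the Splunk Python SDK (splunk-sdk). The host, credentials, timeout value and the JSONResultsReader usage are assumptions for the example, not the actual contents of licensecode.py:

    import time
    import splunklib.client as client
    import splunklib.results as results

    # placeholder connection details - not taken from the original script
    service = client.connect(host="splunk.example.com", port=8089,
                             username="svc_user", password="changeme")

    # kick off the same search the log shows being triggered
    job = service.jobs.create('search eventtype="cba-env-prod" NNO.API.LOG.PM.UPDATE latest=now earliest=-7d')

    timeout = 600   # raise this if the search legitimately needs more than 200s
    waited = 0
    while not job.is_done():            # is_done() refreshes the job and checks isDone
        if waited >= timeout:
            raise TimeoutError(f"search {job.sid} still {job['dispatchState']} after {timeout}s")
        time.sleep(10)
        waited += 10
        print(f"sid={job.sid} dispatchState={job['dispatchState']} progress={job['doneProgress']}")

    # job is DONE - stream the results as JSON
    for row in results.JSONResultsReader(job.results(output_mode="json")):
        print(row)

If the script only writes its JSON output after this loop completes, nothing will be written when the 200-second timeout fires first, which would match the "no matching files" failure in the Bamboo log.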
Hi @llopreiato  Unfortunately no, it isn't. The only supported way is via the status code - I can't really think of many other options either; you could put something like haproxy/nginx on the CM server to proxy the requests and modify the output, but that obviously wouldn't be a supported approach (and outside my area of expertise these days, sorry!)
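To make the (unsupported) proxy idea concrete, a minimal nginx sketch; the listen port, certificate paths and the assumption that the CM's management port is the default 8089 are placeholders:

    # nginx reverse proxy in front of the cluster manager's REST port - illustrative only
    server {
        listen 8091 ssl;
        ssl_certificate     /etc/nginx/certs/cm.pem;
        ssl_certificate_key /etc/nginx/certs/cm.key;

        location / {
            proxy_pass https://127.0.0.1:8089;
            proxy_ssl_verify off;
        }
    }

Actually rewriting the response body (rather than just relaying it) would need something like nginx's sub_filter module or a small app in front, which is part of why this would not be a supported approach.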
Hi @Leonardo1998  Good find - I haven't used this app for a while so I'm unsure, but does the input allow you / ask you for a list of operators to apply to the metrics, or even dimensions? I know some of the AWS CloudWatch metrics ask for this, so I'm wondering if it's the same. If so, it could be that these aren't quite what it's expecting? It sounds like you're on the right track with the debugging - you might need to print the actual response from the API into the logs so you can see what is being returned!
Hi @Sai-08  Have you been able to identify multiple events in the `notable` response for the same event_id? Can you confirm that you can see the different statuses (Closed/Resolved etc.)? This is needed in order to calculate the MTTM; however, I'm not sure that data is in the notable events you're referring to.
We have installed the Akamai add-on (https://splunkbase.splunk.com/app/4310) on our HF, installed Java, and configured the data input on the HF, creating an index on the HF just for dropdown purposes and creating the same index on the CM and pushing it to the indexers. But we are not receiving any data now. When we check splunkd.log we see the following:

04-02-2025 11:08:27.529 +0000 INFO ExecProcessor [8927 ExecProcessor] - message from "/opt/splunk/etc/apps/TA-Akamai_SIEM/linux_x86_64/bin/TA-Akamai_SIEM.sh" infoMsg = streamEvents, begin streamEvents
04-02-2025 11:08:27.646 +0000 INFO ExecProcessor [8927 ExecProcessor] - message from "/opt/splunk/etc/apps/TA-Akamai_SIEM/linux_x86_64/bin/TA-Akamai_SIEM.sh" infoMsg = streamEvents, inputName=TA-Akamai_SIEM://WAF_AKAMAI_SIEM_DEV
04-02-2025 11:08:27.646 +0000 INFO ExecProcessor [8927 ExecProcessor] - message from "/opt/splunk/etc/apps/TA-Akamai_SIEM/linux_x86_64/bin/TA-Akamai_SIEM.sh" infoMsg = streamEvents, inputName(String)=TA-Akamai_SIEM://WAF_AKAMAI_SIEM_DEV
04-02-2025 11:08:27.653 +0000 INFO ExecProcessor [8927 ExecProcessor] - message from "/opt/splunk/etc/apps/TA-Akamai_SIEM/linux_x86_64/bin/TA-Akamai_SIEM.sh" infoMsg streamEvents Service connect to Akamai_SIEM App...
04-02-2025 11:08:27.900 +0000 INFO ExecProcessor [8927 ExecProcessor] - message from "/opt/splunk/etc/apps/TA-Akamai_SIEM/linux_x86_64/bin/TA-Akamai_SIEM.sh" infoMsg=Processing Data...
04-02-2025 11:08:27.900 +0000 INFO ExecProcessor [8927 ExecProcessor] - message from "/opt/splunk/etc/apps/TA-Akamai_SIEM/linux_x86_64/bin/TA-Akamai_SIEM.sh" infoMsg=KV Service get...
04-02-2025 11:08:27.902 +0000 INFO ExecProcessor [8927 ExecProcessor] - message from "/opt/splunk/etc/apps/TA-Akamai_SIEM/linux_x86_64/bin/TA-Akamai_SIEM.sh" infoMsg=Parse KVstore data...
04-02-2025 11:08:27.946 +0000 INFO ExecProcessor [8927 ExecProcessor] - message from "/opt/splunk/etc/apps/TA-Akamai_SIEM/linux_x86_64/bin/TA-Akamai_SIEM.sh" infoMsg=Parse KVstore data...Complete
04-02-2025 11:08:27.946 +0000 INFO ExecProcessor [8927 ExecProcessor] - message from "/opt/splunk/etc/apps/TA-Akamai_SIEM/linux_x86_64/bin/TA-Akamai_SIEM.sh" urlToRequest=https://akab-hg3zdmaay4bq4n5w-ljwg5vtmjxs5ukg2.luna.akamaiapis.net/siem/v1/configs/108115;107918?offset=fd2ba;oj2ETReWQtqhoYX8yuFwqtycwtzWgKUIa_hXJeP06170pYL_XCOdDTR_8u7mXpcuzAfAbBrlVyYQpgwhoHKPYpRQL4dWnY7TENjBhJv0WlUKy1oaCxYa_dEz5w68Rf4RKLqk&limit=150000
04-02-2025 11:08:28.820 +0000 INFO ExecProcessor [8927 ExecProcessor] - message from "/opt/splunk/etc/apps/TA-Akamai_SIEM/linux_x86_64/bin/TA-Akamai_SIEM.sh" status code=200
04-02-2025 11:08:28.822 +0000 INFO ExecProcessor [8927 ExecProcessor] - message from "/opt/splunk/etc/apps/TA-Akamai_SIEM/linux_x86_64/bin/TA-Akamai_SIEM.sh" awaiting shutdown...
04-02-2025 11:08:28.850 +0000 INFO ExecProcessor [8927 ExecProcessor] - message from "/opt/splunk/etc/apps/TA-Akamai_SIEM/linux_x86_64/bin/TA-Akamai_SIEM.sh" found new offset: fd2ba;-kKV2wsV1oLesFFgkhv-dUAfVlC09trNuJWPKUOI8wCVnPWtwMjhld_MIgN84uv9OcFL6Fq5EwOs-wwKHLC1hUDvjBAhG7ZeROQ4kxLcdDwYSFhmF_iTYqmW8EE26VWd9cW1
04-02-2025 11:08:28.851 +0000 INFO ExecProcessor [8927 ExecProcessor] - message from "/opt/splunk/etc/apps/TA-Akamai_SIEM/linux_x86_64/bin/TA-Akamai_SIEM.sh" termination complete....
04-02-2025 11:08:28.851 +0000 INFO ExecProcessor [8927 ExecProcessor] - message from "/opt/splunk/etc/apps/TA-Akamai_SIEM/linux_x86_64/bin/TA-Akamai_SIEM.sh" Cores: 8
04-02-2025 11:08:28.851 +0000 INFO ExecProcessor [8927 ExecProcessor] - message from "/opt/splunk/etc/apps/TA-Akamai_SIEM/linux_x86_64/bin/TA-Akamai_SIEM.sh" Consumer CPU time: 0.03 s
04-02-2025 11:08:28.851 +0000 INFO ExecProcessor [8927 ExecProcessor] - message from "/opt/splunk/etc/apps/TA-Akamai_SIEM/linux_x86_64/bin/TA-Akamai_SIEM.sh" EdgeGrid time: 0.88 s
04-02-2025 11:08:28.852 +0000 INFO ExecProcessor [8927 ExecProcessor] - message from "/opt/splunk/etc/apps/TA-Akamai_SIEM/linux_x86_64/bin/TA-Akamai_SIEM.sh" Real time: 1.21 s
04-02-2025 11:08:28.852 +0000 INFO ExecProcessor [8927 ExecProcessor] - message from "/opt/splunk/etc/apps/TA-Akamai_SIEM/linux_x86_64/bin/TA-Akamai_SIEM.sh" Consumer CPU utilization: 14.15%
04-02-2025 11:08:28.852 +0000 INFO ExecProcessor [8927 ExecProcessor] - message from "/opt/splunk/etc/apps/TA-Akamai_SIEM/linux_x86_64/bin/TA-Akamai_SIEM.sh" Lines Processed: 1
04-02-2025 11:08:28.852 +0000 INFO ExecProcessor [8927 ExecProcessor] - message from "/opt/splunk/etc/apps/TA-Akamai_SIEM/linux_x86_64/bin/TA-Akamai_SIEM.sh" infoMsg=KV Service get...
04-02-2025 11:08:28.854 +0000 INFO ExecProcessor [8927 ExecProcessor] - message from "/opt/splunk/etc/apps/TA-Akamai_SIEM/linux_x86_64/bin/TA-Akamai_SIEM.sh" infoMsg=Parse KVstore data...
04-02-2025 11:08:28.855 +0000 INFO ExecProcessor [8927 ExecProcessor] - message from "/opt/splunk/etc/apps/TA-Akamai_SIEM/linux_x86_64/bin/TA-Akamai_SIEM.sh" infoMsg=Parse KVstore data...Complete
04-02-2025 11:08:28.870 +0000 INFO ExecProcessor [8927 ExecProcessor] - message from "/opt/splunk/etc/apps/TA-Akamai_SIEM/linux_x86_64/bin/TA-Akamai_SIEM.sh" infoMsg = streamEvents, end streamEvents
04-02-2025 11:08:28.870 +0000 ERROR ExecProcessor [8927 ExecProcessor] - message from "/opt/splunk/etc/apps/TA-Akamai_SIEM/linux_x86_64/bin/TA-Akamai_SIEM.sh" javax.xml.stream.XMLStreamException: No element was found to write: java.lang.ArrayIndexOutOfBoundsException: -1

I'm not sure what these errors mean, but when we check with index=<created index> on the SH, no data is showing. Please help me with this. We even installed this add-on on the deployer (removing inputs.conf) and pushed it to the SHs, as it has props and transforms that need to be applied at search time.
Hi @Treize  In the Match type box you would do CIDR(fieldName) where fieldName is the name of the field in your lookup which contains the CIDR values.  
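For illustration, the equivalent on the .conf side and a matching lookup call might look like this - a minimal sketch where the lookup name, file name and field names are placeholders:

    # transforms.conf
    # cidr_range is the lookup column that holds CIDR values such as 10.0.0.0/8
    [my_cidr_lookup]
    filename = cidr_ranges.csv
    match_type = CIDR(cidr_range)

    # SPL usage, matching an event field src_ip against the CIDR column
    ... | lookup my_cidr_lookup cidr_range AS src_ip OUTPUT label

The key point is that CIDR() wraps the name of the lookup field that contains the CIDR ranges, while the event's IP field is mapped to it with AS in the lookup command.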
When running my Bamboo plan I am unable to generate the Splunk log JSON file. This is the log:

build 02-Apr-2025 11:57:27 /home/bamboo-agent/xml-data/build-dir/CBPPOC-SLPIB-JOB1/dbscripts
build 02-Apr-2025 11:57:27 _bamboo_build.sh
build 02-Apr-2025 11:57:27 _build.sh
build 02-Apr-2025 11:57:27 licensecode.py
build 02-Apr-2025 11:57:27 _push2release.sh
build 02-Apr-2025 11:57:27 _push2snapshot.sh
build 02-Apr-2025 11:57:27 splunkQueries.txt
build 02-Apr-2025 11:57:28 [licensecode.py:43 - login() ] Logging in to Splunk API initiated
build 02-Apr-2025 11:57:28 [licensecode.py:62 - login() ] Logged in as: M022754
build 02-Apr-2025 11:57:28 [licensecode.py:257 - main() ] Command line param queryFile has value: splunkQueries.txt
build 02-Apr-2025 11:57:28 [licensecode.py:159 - processQueryFile() ] Query: search eventtype="cba-env-prod" NNO.API.LOG.PM.UPDATE latest=now earliest=-7d
build 02-Apr-2025 11:57:28 [licensecode.py:161 - processQueryFile() ] Number of queries in queue: 1
build 02-Apr-2025 11:57:28 [licensecode.py:193 - triggerSearch() ] Triggering search for query: search eventtype="cba-env-prod" NNO.API.LOG.PM.UPDATE latest=now earliest=-7d
build 02-Apr-2025 11:57:28 [licensecode.py:201 - triggerSearch() ] Search initiated with SID: 1743587848.2389422_84B919DD-8E60-47EE-AF06-F6EE20B95178
build 02-Apr-2025 11:57:38 [licensecode.py:265 - main() ] Waiting next 10 seconds for all queries to complete
build 02-Apr-2025 11:57:48 [licensecode.py:265 - main() ] Waiting next 10 seconds for all queries to complete
build 02-Apr-2025 11:57:58 [licensecode.py:265 - main() ] Waiting next 10 seconds for all queries to complete
build 02-Apr-2025 11:58:08 [licensecode.py:265 - main() ] Waiting next 10 seconds for all queries to complete
build 02-Apr-2025 11:58:18 [licensecode.py:265 - main() ] Waiting next 10 seconds for all queries to complete
build 02-Apr-2025 11:58:28 [licensecode.py:265 - main() ] Waiting next 10 seconds for all queries to complete
build 02-Apr-2025 11:58:38 [licensecode.py:265 - main() ] Waiting next 10 seconds for all queries to complete
build 02-Apr-2025 11:58:48 [licensecode.py:265 - main() ] Waiting next 10 seconds for all queries to complete
build 02-Apr-2025 11:58:58 [licensecode.py:265 - main() ] Waiting next 10 seconds for all queries to complete
build 02-Apr-2025 11:59:08 [licensecode.py:265 - main() ] Waiting next 10 seconds for all queries to complete
build 02-Apr-2025 11:59:18 [licensecode.py:265 - main() ] Waiting next 10 seconds for all queries to complete
build 02-Apr-2025 11:59:28 [licensecode.py:265 - main() ] Waiting next 10 seconds for all queries to complete
build 02-Apr-2025 11:59:38 [licensecode.py:265 - main() ] Waiting next 10 seconds for all queries to complete
build 02-Apr-2025 11:59:48 [licensecode.py:265 - main() ] Waiting next 10 seconds for all queries to complete
build 02-Apr-2025 11:59:58 [licensecode.py:265 - main() ] Waiting next 10 seconds for all queries to complete
build 02-Apr-2025 12:00:08 [licensecode.py:265 - main() ] Waiting next 10 seconds for all queries to complete
build 02-Apr-2025 12:00:18 [licensecode.py:265 - main() ] Waiting next 10 seconds for all queries to complete
build 02-Apr-2025 12:00:28 [licensecode.py:265 - main() ] Waiting next 10 seconds for all queries to complete
build 02-Apr-2025 12:00:38 [licensecode.py:265 - main() ] Waiting next 10 seconds for all queries to complete
build 02-Apr-2025 12:00:48 [licensecode.py:265 - main() ] Waiting next 10 seconds for all queries to complete
build 02-Apr-2025 12:00:58 [licensecode.py:265 - main() ] Waiting next 10 seconds for all queries to complete
build 02-Apr-2025 12:00:58 [licensecode.py:268 - main() ] Execution timeout of 200 seconds has passed, exiting
simple 02-Apr-2025 12:00:58 Failing task since return code of [/home/bamboo-agent/temp/CBPPOC-SLPIB-JOB1-268-ScriptBuildTask-11226880290426353947.sh] was 1 while expected 0
02-Apr-2025 12:00:58 Failing as no matching files has been found and empty artifacts are not allowed.

After the waiting time completes, the JSON log file is not generated. Please help me understand how to resolve this issue.
I don't understand why a summary index would be better? We use 2 lookups:
- 1st because it comes from a third party
- 2nd because we need to increment it after treating this IP as an alert
Hey @livehybrid , Thank you for your time. I tried the above query but it didn't show any results. The unique identifier is event_id and I changed it. Also, I have replaced it with my base search, which was: `notable` | search owner_realname="analyst name". Please keep in mind that I am looking for the average time spent on the alerts in the past 30 days (I use the time range picker).
Hi @Treize , it could run, but you should add another field to use for the check. But, since you have the issue of so many rows, why don't you use a summary index, putting it in the main search so you don't hit limits? Something like this:

(<my search>) OR (index=new_summary_index)
| eval ip=coalesce(ip,IP)
| stats values(index) AS index dc(index) AS index_count BY ip
| where index_count=1 AND index=<your_index>
| fields ip
| outputlookup append=true override_if_empty=false 2.csv

Ciao. Giuseppe
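For illustration, if you go the summary index route, a scheduled search along these lines could keep it populated - a minimal sketch, where new_summary_index and the ip field follow the example above and are placeholders:

    <my search>
    | fields ip
    | collect index=new_summary_index

Note that collect only writes to an index that already exists, so new_summary_index would need to be created on the indexers first.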
In the meantime, I've come up with a simple idea: a subsearch for the lookup with 1000 lines and a simple "| lookup" command for the lookup with 50,000 lines.
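For illustration, that idea might look like this - a minimal sketch, where small_lookup (the 1,000-line file) and big_lookup (the 50,000-line file) are placeholder lookup definitions and ip is assumed to be the matching field name in both the events and the lookups:

    <base search> [ | inputlookup small_lookup | fields ip ]
    | lookup big_lookup ip OUTPUT matched_value

The subsearch expands into an (ip=... OR ip=...) filter on the base search, which stays under the default subsearch result limits for a 1,000-line file, while the larger lookup is applied with a plain lookup command that has no such limit.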
Hi @Sai-08 , You can calculate the average time difference between the "In Progress" status and the "Closed" or "Resolved" status using the stats command. Here is an example query using makeresults for sample data. Replace the makeresults part with your base search.

| makeresults count=4
| streamstats count as alert_id
| eval _time = case( alert_id=1, now() - 3600, alert_id=2, now() - 7200, alert_id=3, now() - 10800, alert_id=4, now() - 14400 )
| eval status_label="New"
| append [| makeresults count=4 | streamstats count as alert_id | eval _time = case(alert_id=1, now() - 3000, alert_id=2, now() - 6000, alert_id=3, now() - 9000, alert_id=4, now() - 12000) | eval status_label="In Progress"]
| append [| makeresults count=4 | streamstats count as alert_id | eval _time = case(alert_id=1, now() - 600, alert_id=2, now() - 1200, alert_id=3, now() - 1800, alert_id=4, now() - 2400) | eval status_label=if(alert_id%2=0, "Closed", "Resolved")]
| sort 0 _time
``` Replace above makeresults block with your base search: index= sourcetype= status_label IN ("In Progress", "Closed", "Resolved")```
``` Ensure you have a unique identifier for each alert (e.g., alert_id)```
``` Filter for relevant status transitions```
| where status_label IN ("In Progress", "Closed", "Resolved")
``` Capture the timestamp for "In Progress" and "Closed/Resolved" statuses```
| eval in_progress_time = if(status_label="In Progress", _time, null())
| eval closed_resolved_time = if(status_label="Closed" OR status_label="Resolved", _time, null())
``` Group by alert_id and find the earliest "In Progress" time and latest "Closed/Resolved" time```
| stats earliest(in_progress_time) as start_time latest(closed_resolved_time) as end_time by alert_id
``` Filter out alerts that didn't complete the transition or where times are illogical```
| where isnotnull(start_time) AND isnotnull(end_time) AND end_time > start_time
``` Calculate the duration for each alert```
| eval duration_seconds = end_time - start_time
``` Calculate the average duration (MTTM) across all alerts```
| stats avg(duration_seconds) as mttm_seconds
``` Optional: Format the result for readability```
| eval mttm_readable = tostring(mttm_seconds, "duration")
| fields mttm_seconds mttm_readable

How it works: The search first filters events for the relevant statuses ("In Progress", "Closed", "Resolved"). You need a unique field (alert_id in the example) to identify each alert instance. It uses eval to create fields holding the timestamp (_time) only when the event matches the specific status ("In Progress" or "Closed"/"Resolved"). stats groups the events by alert_id and finds the earliest time the alert was "In Progress" (start_time) and the latest time it was "Closed" or "Resolved" (end_time). It filters out any alerts that haven't reached a final state or have inconsistent timestamps. The duration_seconds is calculated for each alert. Finally, stats calculates the mean time across all valid alert durations. The result is optionally formatted into a human-readable duration string (e.g., HH:MM:SS).
@marnall, please
That solution isn't perfect but it's a good tip, thanks dude
The lookup has already been defined. The variable is not really "ip", so in the definition should we put CIDR(ip) because it's an IP, or CIDR(<the actual field name>) for the variable it should take into account? In both cases, this solution doesn't work. It can't find the IPs in the lookup's CIDRs...
Hello everyone, I need help with determining the time needed for an analyst to investigate an alert and close it. For more clarity, I want to calculate the time spent from when the status_label field value is updated from "In Progress" to ("Closed" or "Resolved"). Note that the default value of this field is "New". I am new to Splunk, so please write the full query and I will adjust it for my needs.