All Posts

you can replace "code_names" with "custom_function_names", it will pass linting and work, but like you said, it will force you out of the GUI editor. I've put in a support ticket.  they should just u... See more...
you can replace "code_names" with "custom_function_names", it will pass linting and work, but like you said, it will force you out of the GUI editor. I've put in a support ticket.  they should just update that GUI editor to use the documented parameter and we'll be back on track.
I am getting this today too. This code_name is not valid per the docs, but it's being placed there by the GUI editor behind the scenes. I cannot edit the join_ function without breaking the GUI restrictions and going straight to code. (classic playbook) Here is an example:

@phantom.playbook_block()
def code_7(action=None, success=None, container=None, results=None, handle=None, filtered_artifacts=None, filtered_results=None, custom_function=None, loop_state_json=None, **kwargs):
    phantom.debug("code_7() called")

    ################################################################################
    ## Custom Code Start
    ################################################################################

    # Write your custom code here...

    ################################################################################
    ## Custom Code End
    ################################################################################

    join_code_9(container=container)

    return


@phantom.playbook_block()
def code_8(action=None, success=None, container=None, results=None, handle=None, filtered_artifacts=None, filtered_results=None, custom_function=None, loop_state_json=None, **kwargs):
    phantom.debug("code_8() called")

    ################################################################################
    ## Custom Code Start
    ################################################################################

    # Write your custom code here...

    ################################################################################
    ## Custom Code End
    ################################################################################

    join_code_9(container=container)

    return


# **** This part is auto generated by the SOAR GUI editor *****
@phantom.playbook_block()
def join_code_9(action=None, success=None, container=None, results=None, handle=None, filtered_artifacts=None, filtered_results=None, custom_function=None, loop_state_json=None, **kwargs):
    phantom.debug("join_code_9() called")

    if phantom.completed(code_names=["code_7", "code_8"]):
        # call connected block "code_9"
        code_9(container=container, handle=handle)

    return
Hi, I've solved my case. Basically there are two steps:

1. Build the correct job output in the saved search; it is better to use a specific time format:

| eval timeframe = strftime(_time, "%Y-%m-%d %H:%M:%S %Z")

Output:

timeframe                hostname  availability
2025-03-08 00:00:00 +07  HostA     100
2025-03-09 00:00:00 +07  HostA     100

2. On the dashboard, add a time range panel and rename its token to $time_range$, then add handling to convert the time to epoch:

| loadjob savedsearch="admin:itsi:sla_availability" artifact_offset=0
| eval timeframe_epoch = strptime(timeframe, "%Y-%m-%d %H:%M:%S %z")
| eval earliest_epoch = strptime("$time_range.earliest$", "%Y-%m-%dT%H:%M:%S.%3NZ")
| eval latest_epoch = strptime("$time_range.latest$", "%Y-%m-%dT%H:%M:%S.%3NZ")
| where timeframe_epoch >= earliest_epoch AND timeframe_epoch <= latest_epoch
| table *

So the output data will cover the selected time only :). As of now only date range selection works; I will test dynamic selections like last 7 days, last 30 days, etc.
Hello @lrader, can you take a look at this Splunk documentation? https://docs.splunk.com/Documentation/Splunk/9.4.2/Security/ConfigureSSOPing Hope this helps!
Thanks for confirming there shouldn't be a limit. I agree the cluster master decided it was good enough, but I don't understand how it could have hit an "ideal distribution" and then, minutes later in another balancing run, recognize that another ~40k+ buckets per indexer needed to be moved to the same indexers again. It isn't too important because I just restarted the balancing runs until it was actually balanced, but it makes me wonder if this is the only bucket-based operation that has gremlins.
Are you using SOAR? If so, are you running the playbook from the container view?
The Hashicorp Vault App for Splunk is also unsupported, but it has a different license that (as I read it) does not permit modifications. It's also been almost a year since the last app update, so it may be abandoned. With any luck, however, someone from the vendor will see this question and respond.
Thank you for your quick response. I have to apologize, I was working with the wrong app. We need to upgrade the HashiCorp Vault App for Splunk (app #5093) and I was working with the Hashicorp Vault app (app #6027). I was able to upload the HashiCorp Vault App for Splunk in our Dev environment. However, app validation failed with a message that the app does not support search head cluster deployments. We have version 1.0.3 installed and we are trying to upgrade to version 1.0.5. Since we have a version running on our search head clusters in both non-prod and prod, has something changed in this app? Thank you very much for your help.
That app is unsupported so any changes will be at the discretion of the developer(s) and may not happen at all. However, the terms of the license allow you to modify the app so you could make the necessary changes and submit them as your own.
The end goal is to find what was happening during the time between those two event IDs, thank you for your help. I am very new to Splunk, so any thoughts on the best way to do that would be appreciated!
Hi @N3gativeSpace

Do you only want event_id=4732 and event_id=4733? If so I'd look at doing something like this:

index=example sourcetype=wineventlog computer_name="example" event_id IN (4732,4733)
| eval is{event_id}=1
| stats sum(is4732) as count4732, sum(is4733) as count4733, values(user.name), earliest(_time) as startTime, latest(_time) as endTime, values(event_id) by computer_name
| where count4732>=1 AND count4733>=1

Did this answer help you? If so, please consider:
Adding karma to show it was useful
Marking it as the solution if it resolved your issue
Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
"A security-enabled local group membership was enumerated." sounds like represented by a unique event_id.  Get rid of them in the search.  If there are specific key words/phrases that cannot be repre... See more...
"A security-enabled local group membership was enumerated." sounds like represented by a unique event_id.  Get rid of them in the search.  If there are specific key words/phrases that cannot be represented by event_id, the best is to use search term to eliminate.  Finally, if there are feature words that are too hard to construct using search terms, you can use regex to eliminate.  index=example sourcetype=wineventlog computer_name="example" NOT event_id IN (fluff_id1, fluff_id2, fluff_id3) NOT "fluff term1" NOT "fluff term2" NOT "fluff term3" | where NOT match(_raw, "fluff[r]egex1|fluf[f]regex2|fluf[fr]egex3") Not sure how transaction gets into the picture, however.
Here is my code:

index=example sourcetype=wineventlog computer_name="example"
| transaction computer_name startswith="event_id=4732" endswith="event_id=4733" maxspan=15m mvraw=true mvlist=true
| table _time, user.name, computer_name, event_id, _raw

I am trying to separate each event that occurs in order to get rid of fluff content such as "A security-enabled local group membership was enumerated." appearing hundreds of times. What would be the best way to do this? mvexpand has not worked for me so far.
Can you confirm that the app indeed has a default/app.conf file, and that it does not contain files or folders outside its app folder named "hashicorpvault", and that its file permissions are in the expected state?
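In case it helps, here is a rough sketch of how you could inspect the uploaded package locally before vetting it again. This is not an official check, and the package file name hashicorpvault.tgz is just an assumption; adjust it to your actual file.

# Hedged sketch: quick local inspection of an app package before upload.
import tarfile

APP_DIR = "hashicorpvault"          # expected app folder name
PACKAGE = "hashicorpvault.tgz"      # assumed package file name; adjust to yours

with tarfile.open(PACKAGE, "r:gz") as tgz:
    members = tgz.getmembers()

    # Everything in the package should live under the app folder
    outside = [m.name for m in members
               if m.name != APP_DIR and not m.name.startswith(APP_DIR + "/")]
    print("Entries outside the app folder:", outside or "none")

    # The app must ship a default/app.conf
    print("default/app.conf present:",
          any(m.name == APP_DIR + "/default/app.conf" for m in members))

    # Spot-check permissions (directories are typically 0755, files 0644)
    for m in members:
        print(oct(m.mode), "dir" if m.isdir() else "file", m.name)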
@livehybrid, drilling further, I saw this message from "/opt/splunk/bin/python3.9 /opt/splunk/etc/apps/cloud_administration/bin/aging_out.py":

Cannot extract stack id from host, reason=File at /opt/splunk/etc/system/local/data_archive.conf does not exist.
Do you know whether your application path always starts with /experience? If so, @livehybrid's method should work, just replace url with uri.

index="my_index" uri="*/experience/*"
| rex field=uri "(?<uniqueURI>/experience/.*)"
| stats count as hits by uniqueURI
| sort -hits
| head 20

If not, you can enumerate, or use some other method to determine the beginning of the application path.
We are trying to upgrade the Hashicorp Vault app to version 1.1.3. When we upload it through Manage Apps it fails vetting with the following failures: Can we please get these fixed? Thank you.  
Hi @gitau_gm

This ExecProcessor error indicates that the Splunk DB Connect modular input script (server.sh) failed during execution. The Java stack trace suggests an issue occurred while the DB Connect app was processing or writing events, likely after fetching data from the database.

To troubleshoot this:
Check DB Connect internal logs: Look for more detailed error messages within the DB Connect app's internal logs. Search index=_internal sourcetype="dbx*"
Verify the database connection: Ensure the database connection configured in DB Connect is still valid and accessible from the Splunk server. Check credentials, host, port, and network connectivity.
Review the input query: Examine the SQL query used by the failing input. Test the query directly against the database to ensure it runs without errors and returns data as expected. Large result sets or specific data types might sometimes cause issues.
Check Splunk resources: Monitor the Splunk server's resource usage (CPU, memory) when the input is scheduled to run. Resource exhaustion can sometimes lead to process failures.
Restart DB Connect: Try restarting the DB Connect app from the Splunk UI or by restarting Splunk.

The detailed error message in the DB Connect internal logs will provide more specific clues about the root cause, such as a database error, a data processing issue, or a configuration problem.

Are you getting _internal logs from the HF running the DB Connect app? If not, restarting it would be the first thing to try, then check the Splunk logs directly on the HF if it is still not sending data to Splunk.

For more info check out https://help.splunk.com/en/splunk-cloud-platform/connect-relational-databases/deploy-and-use-splunk-db-connect/3.18/troubleshooting/common-issues-for-splunk-db-connect

Did this answer help you? If so, please consider:
Adding karma to show it was useful
Marking it as the solution if it resolved your issue
Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
*That date corresponds to the last day the host was seen.
Good day team. Getting this error. That date corresponds to the last day the host was seen.

05-28-2025 11:51:03.469 +0000 ERROR ExecProcessor [9317 ExecProcessor] - message from "/opt/splunk/etc/apps/splunk_app_db_connect/bin/server.sh" com.splunk.modularinput.Event.writeTo(Event.java:65)\\com.splunk.modularinput.EventWriter.writeEvent(EventWriter.java:137)\\com.splunk.DefaultServerStart.streamEvents(DefaultServerStart.java:66)\\com.splunk.modularinput.Script.run(Script.java:66)\\com.splunk.modularinput.Script.run(Script.java:44)\\com.splunk.dbx.server.bootstrap.TaskServerStart.main(TaskServerStart.java:36)\\