All Posts



@livehybrid, drilling further, I saw this message: message from "/opt/splunk/bin/python3.9 /opt/splunk/etc/apps/cloud_administration/bin/aging_out.py" Cannot extract stack id from host, reason=File at /opt/splunk/etc/system/local/data_archive.conf does not exist.
Do you know whether your application path always starts with /experience? If so, @livehybrid 's method should work, just replace url with uri:

index="my_index" uri="*/experience/*"
| rex field=uri "(?<uniqueURI>/experience/.*)"
| stats count as hits by uniqueURI
| sort -hits
| head 20

If not, you can enumerate, or use some other method to determine the beginning of the application path.
We are trying to upgrade the Hashicorp Vault app to version 1.1.3. When we upload it through Manage Apps, it fails vetting with the following failures: Can we please get these fixed? Thank you.
Hi @gitau_gm

This ExecProcessor error indicates that the Splunk DB Connect modular input script (server.sh) failed during execution. The Java stack trace suggests an issue occurred while the DB Connect app was processing or writing events, likely after fetching data from the database. To troubleshoot this:

- Check DB Connect Internal Logs: look for more detailed error messages within the DB Connect app's internal logs. Search index=_internal sourcetype="dbx*" (a sketch of this search follows the post).
- Verify Database Connection: ensure the database connection configured in DB Connect is still valid and accessible from the Splunk server. Check credentials, host, port, and network connectivity.
- Review the Input Query: examine the SQL query used by the failing input. Test the query directly against the database to ensure it runs without errors and returns data as expected. Large result sets or specific data types might sometimes cause issues.
- Check Splunk Resources: monitor the Splunk server's resource usage (CPU, memory) when the input is scheduled to run. Resource exhaustion can sometimes lead to process failures.
- Restart DB Connect: try restarting the DB Connect app from the Splunk UI, or by restarting Splunk.

The detailed error message in the DB Connect internal logs will provide more specific clues about the root cause, such as a database error, a data processing issue, or a configuration problem.

Are you getting _internal logs from the HF running the DB Connect app? If not, restarting it would be the first thing to try; then check the Splunk logs directly on the HF if it is still not sending data to Splunk. For more info check out https://help.splunk.com/en/splunk-cloud-platform/connect-relational-databases/deploy-and-use-splunk-db-connect/3.18/troubleshooting/common-issues-for-splunk-db-connect

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
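As a starting point for the first step above, here is a minimal sketch of such a search. It relies only on default fields plus raw keyword matching, so treat the exact terms as assumptions and adjust them to your environment:

index=_internal sourcetype="dbx*" (ERROR OR FATAL)
| stats count AS errors, latest(_time) AS last_seen by sourcetype, source
| sort -errors

This groups the errors by DB Connect log file, which usually points at the failing input quickly.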
*That date corresponds to the last day the host was seen.
Good day team. Getting this error. That date corresponds to the last day the host was seen. 05-28-2025 11:51:03.469 +0000 ERROR ExecProcessor [9317 ExecProcessor] - message from "/opt/splunk/etc/apps/splunk_app_db_connect/bin/server.sh" com.splunk.modularinput.Event.writeTo(Event.java:65)\\com.splunk.modularinput.EventWriter.writeEvent(EventWriter.java:137)\\com.splunk.DefaultServerStart.streamEvents(DefaultServerStart.java:66)\\com.splunk.modularinput.Script.run(Script.java:66)\\com.splunk.modularinput.Script.run(Script.java:44)\\com.splunk.dbx.server.bootstrap.TaskServerStart.main(TaskServerStart.java:36)\\
You will need some compromise one way or another. Any specific reason why array_field{} is unacceptable? If anything, you can use a field alias to allow use of array_field. Alternatively, you can use a calculated field to alter a key-value entry ("classic"), e.g., comma_delimited_field="1,2", then use split to calculate array_field; a sketch of both follows.
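For illustration, a minimal props.conf sketch of both options, assuming a sourcetype named my_sourcetype (a hypothetical name; pick one option rather than both, since calculated fields run after aliases):

[my_sourcetype]
# Option 1: alias the auto-extracted array key so searches can use array_field
FIELDALIAS-array = "array_field{}" AS array_field
# Option 2: for the "classic" key-value format, derive a multivalue field
# from a comma-delimited value at search time
EVAL-array_field = split(comma_delimited_field, ",")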
Hello, I put this regex in an SHC inline extraction:

"<(?<pri>\d+)>1\s(?<timestamp>\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}(?:\.\d+)?[+-]\d{2}:\d{2})\s(?<hostname>[^\s]+)\s(?<appname>[^\s]+)\s(?<procid>[^\s]+)\s(?<msgid>[^\s]+)\s(?<structured_data>\S+)\s(?<json_msg>\{.*\})"

However, json_msg needs | spath input=json_msg. Is it possible to auto-extract the fields contained in json_msg, to avoid adding | spath input=json_msg at search time? Thanks.
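For context, the same extraction expressed as a search-time setting in props.conf on the search heads would look roughly like this; the sourcetype name is hypothetical and the regex is the one from the post:

[my_syslog_sourcetype]
# Search-time extraction of the RFC 5424-style header; json_msg captures the JSON payload
EXTRACT-syslog_header = <(?<pri>\d+)>1\s(?<timestamp>\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}(?:\.\d+)?[+-]\d{2}:\d{2})\s(?<hostname>[^\s]+)\s(?<appname>[^\s]+)\s(?<procid>[^\s]+)\s(?<msgid>[^\s]+)\s(?<structured_data>\S+)\s(?<json_msg>\{.*\})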
What constitutes those as "common"? The onboard-menu URL hits the same service. It's only accessed from different "markets", which are: /ae/english, /uk/english, /us/english, /ae/arabic, and /english. So we will have multiple markets starting with /country_code/ followed by english or arabic.
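Given that structure, one hedged option is to strip the market prefix before counting, so all markets roll up to the same application path. The uri field name comes from this thread; the bare /english market (with no country code) would need its own pattern:

index="my_index"
| rex field=uri "^/(?<country>[a-z]{2})/(?<language>english|arabic)(?<app_path>/.*)"
| stats count AS hits by app_path
| sort -hits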
Hi @kn450, open a case with Splunk Support! There isn't any other solution! I have had experience with a UBA installation, and it only ran when installed by Splunk PS! Ciao. Giuseppe
Hi Fellow Splunkers, how can I add a multi-value field (array) directly to the index through `/var/spool/splunk`? I tried multiple approaches:

1. Dict

==##~~##~~ 1E8N3D4E6V5E7N2T9 ~~##~~##==
{ "array_field":["1", "2"], "count": "2", ... }

2. Classic

==##~~##~~ 1E8N3D4E6V5E7N2T9 ~~##~~##==
... , array_field=["1", "2"], count="2", ...

I achieved the best results with the Dict approach. The added field correctly has multiple values; however, Splunk appends {} to the key ("array_field"), resulting in an incorrect key ("array_field{}"). Do you have any suggestions?
In this case values(SourceIP) might be more desirable than list(SourceIP). The former will show unique values, while the latter will list every value, however many times it appears.
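A self-contained illustration of the difference, using synthetic data from makeresults (the field and value names are made up for the example):

| makeresults count=4
| streamstats count AS n
| eval SourceIP=if(n<3, "10.0.0.1", "10.0.0.2")
| stats values(SourceIP) AS unique_sources, list(SourceIP) AS all_sources

Here unique_sources contains two values while all_sources contains all four, duplicates included.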
It might be a bit more complicated than that. The main premise, that for thawing data you're not ingesting anything, is of course true, but:

1) If you don't have a specific license, Splunk Enterprise installs with the default trial license. It has all (OK, most) of the features, but it is time-limited.
2) After the trial period ends, you end up with the Free license, which doesn't let you schedule searches or define roles/users.

You might try to run the zero-byte license normally meant for forwarders.
But what constitutes those as "common"? As long as you can answer this question, adjusting your results will be relatively easy.
Add to this the fact that searches can be created dynamically by means of subsearches and/or the map command, and there is no way to find all indexes (not) accessed by looking at searches. One could hypothesize that you could leverage some OS-level monitoring to find whether the actual index directories are accessed, but that might not yield reasonable results either, since Splunk's housekeeping threads must access the indexes to enforce retention policies and the data lifecycle. Having said that, you can search the _internal and _audit logs for executed searches, try to build a list of indexes which were used, and thus limit your investigation of whether anyone uses the ingested data to only the subset of indexes not mentioned in that list.
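As a rough sketch of that last approach, something like the following can mine _audit for index names. The extraction pattern is naive and will miss indexes referenced via macros, eventtypes, or subsearches, so treat it only as a starting point:

index=_audit action=search info=completed search=*
| rex field=search max_match=0 "index\s*=\s*\"?(?<searched_index>[-\w*]+)"
| stats count AS searches, latest(_time) AS last_searched by searched_index
| sort -searches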
I see. Now I know why Validate Python reported an error. However, as mentioned earlier, this block of code is automatically generated when I amend the playbook in the visual editor. Changing "code_names" to either "action_names" or "custom_function_names" will result in disabling the visual editor, which would create big trouble for my future development of this playbook.
Hi @shangxuan_shi

The phantom.completed method doesn't take a code_names param; the function accepts the following:

phantom.completed(action_names=None, playbook_names=None, custom_function_names=None, trace=False)

Check out https://docs.splunk.com/Documentation/Phantom/4.10.7/PlaybookAPI/PlaybookAPI#:~:text=action%20and%20callbacks.-,completed,-The%20completed%20API for more details on the phantom.completed method.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
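For example, a hand-written join block using the supported parameters might look like the sketch below. The block, function, and upstream names (join_run_query_1, code_1, code_2, run_query_1) are hypothetical, and the code assumes the standard playbook runtime import:

import phantom.rules as phantom

def join_run_query_1(action=None, success=None, container=None, results=None, handle=None):
    # phantom.completed() returns True once every named upstream block has finished.
    # The supported keyword arguments are action_names, playbook_names, and
    # custom_function_names; passing code_names is what raises unexpected-keyword-arg.
    if phantom.completed(custom_function_names=["code_1", "code_2"]):
        run_query_1(container=container)  # hypothetical downstream action block
    return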
I have not encountered this error previously. When I join two code blocks to an action block using the visual editor, a join_***_***_1 block is created. This auto-generated block uses the "code_name" parameter, which is triggering the unexpected-keyword-arg error. I believe deleting this auto-generated block would resolve the problem, but making changes to this auto-generated block will disable the visual editor, which is not the right situation. Is there any other alternative solution to resolve this problem?
Hi @Praz_123, could you share a sample of your logs in text format? Ciao. Giuseppe
This is actually similar to another question I responded to recently at https://community.splunk.com/t5/Dashboards-Visualizations/Dashboard-Studio-time-range-input/m-p/745721#M58657

This is the snippet which calculated the time string from the time picker:

| makeresults
| eval earliest=$global_time.earliest|s$, latest=$global_time.latest|s$
| eval earliest_epoch = IF(match(earliest,"[0-9]T[0-9]"),strptime(earliest, "%Y-%m-%dT%H:%M:%S.%3N%Z"),earliest), latest_epoch = IF(match(latest,"[0-9]T[0-9]"),strptime(latest, "%Y-%m-%dT%H:%M:%S.%3N%Z"),latest)

@livehybrid wrote:

Hi @abhishekP

This is an interesting one. When selecting a relative time window, the earliest/latest values are strings like "-1d@d", which are valid for the earliest/latest fields in a search. However, when you select specific dates, between dates, etc., the picker returns a full date string such as "2025-05-07T18:47:22.565Z". Such a value is not supported by the earliest/latest fields in a Splunk search. To get around this, I have put together a table off the side of the display with a search which converts dates into epoch where required. You can then use "$timetoken:result.earliest_epoch$" and "$timetoken:result.latest_epoch$" as tokens in your other searches.

Below is the full JSON of the dashboard so you can have a play around with it - hopefully this helps!

{
  "title": "testing",
  "description": "",
  "inputs": {
    "input_global_trp": {
      "options": {
        "defaultValue": "-24h@h,now",
        "token": "global_time"
      },
      "title": "Global Time Range",
      "type": "input.timerange"
    }
  },
  "defaults": {
    "dataSources": {
      "ds.search": {
        "options": {
          "queryParameters": {
            "earliest": "$global_time.earliest$",
            "latest": "$global_time.latest$"
          }
        }
      }
    }
  },
  "visualizations": {
    "viz_2FDRkepv": {
      "dataSources": { "primary": "ds_IPGx8Y5Y" },
      "options": {},
      "type": "splunk.events"
    },
    "viz_V1oldcrB": {
      "options": {
        "markdown": "earliest: $global_time.earliest$ \nlatest: $global_time.latest$ \nearliest_epoch: $timetoken:result.earliest_epoch$ \nlatest_epoch:$timetoken:result.latest_epoch$"
      },
      "type": "splunk.markdown"
    },
    "viz_bhZcZ5Cz": {
      "containerOptions": {},
      "context": {},
      "dataSources": { "primary": "ds_KXR2SF6V" },
      "options": {},
      "showLastUpdated": false,
      "showProgressBar": false,
      "type": "splunk.table"
    }
  },
  "dataSources": {
    "ds_IPGx8Y5Y": {
      "name": "timetoken",
      "options": {
        "enableSmartSources": true,
        "query": "| makeresults \n| eval earliest=$global_time.earliest|s$, latest=$global_time.latest|s$\n| eval earliest_epoch = IF(match(earliest,\"[0-9]T[0-9]\"),strptime(earliest, \"%Y-%m-%dT%H:%M:%S.%3N%Z\"),earliest), latest_epoch = IF(match(latest,\"[0-9]T[0-9]\"),strptime(latest, \"%Y-%m-%dT%H:%M:%S.%3N%Z\"),latest)"
      },
      "type": "ds.search"
    },
    "ds_KXR2SF6V": {
      "name": "Search_1",
      "options": {
        "query": "index=_internal earliest=$timetoken:result.earliest_epoch$ latest=$timetoken:result.latest_epoch$\n| stats count by host"
      },
      "type": "ds.search"
    }
  },
  "layout": {
    "globalInputs": [ "input_global_trp" ],
    "layoutDefinitions": {
      "layout_1": {
        "options": {
          "display": "auto",
          "height": 960,
          "width": 1440
        },
        "structure": [
          { "item": "viz_V1oldcrB", "position": { "h": 80, "w": 310, "x": 20, "y": 20 }, "type": "block" },
          { "item": "viz_2FDRkepv", "position": { "h": 260, "w": 460, "x": 1500, "y": 20 }, "type": "block" },
          { "item": "viz_bhZcZ5Cz", "position": { "h": 380, "w": 1420, "x": 10, "y": 140 }, "type": "block" }
        ],
        "type": "absolute"
      }
    },
    "tabs": {
      "items": [
        { "label": "New tab", "layoutId": "layout_1" }
      ]
    }
  }
}

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing