All Posts


Hello everyone, we’re currently working on integrating our network devices (such as routers, switches, and firewalls) into Splunk to enable centralized monitoring and log collection. As these are network appliances, we’re required to proceed in agentless mode, since installing agents or forwarders directly on the devices is not an option. We would really appreciate any guidance or suggestions on:
- The best approaches for agentless integration (e.g., Syslog, SNMP, NetFlow, APIs)
- Any recommended Splunk add-ons or apps to support this
- Best practices or examples from similar implementations
Thanks in advance for your help and insights!
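For context, the most common agentless pattern for this kind of gear is syslog: the devices send their logs to a collection tier (a syslog server or heavy forwarder) and Splunk listens there, so nothing is installed on the appliances themselves. A minimal sketch of such a network input, assuming a heavy forwarder and an already-created "network" index (the port and index name are illustrative, not prescriptive):

# inputs.conf on a heavy forwarder or syslog-collection tier (illustrative values)
[udp://514]
sourcetype = syslog
index = network
# Record the sending device's IP as the host field
connection_host = ip

In practice a dedicated syslog layer in front of Splunk is usually preferred over a bare UDP input, but the stanza above shows the basic agentless shape of the integration.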
Thanks @livehybrid  I have an embed link for a widget on an external website and want to display it on my dashboard. I have an <iframe></iframe> link. How do I add it to my dashboard?
Thanks. I am looking to add an embedded widget from an external site and I have an <iframe></iframe> link. How do I add it to my dashboard?
Hi @tgulgund  Using Dashboard Studio you can create a link by using the Markdown object. The following markdown is what was used in this example:

This is a link to [google](https://www.google.com)

If unfamiliar with markdown, essentially you put the text in square brackets followed by the URL in regular brackets. Here is a full dashboard example:

{
  "title": "testing",
  "description": "",
  "inputs": {},
  "defaults": {
    "dataSources": {
      "ds.search": {
        "options": {
          "queryParameters": {
            "earliest": "-24h@h",
            "latest": "now"
          }
        }
      }
    }
  },
  "visualizations": {
    "viz_hwZoBg6m": {
      "options": {
        "fontSize": "extraLarge",
        "markdown": "This is a link to [google](https://www.google.com)"
      },
      "type": "splunk.markdown"
    }
  },
  "dataSources": {
    "ds_UUxjD5lL": {
      "name": "Search_1",
      "options": {
        "query": "index=cultivar* clientip!=\"\\\"-\\\"\" | iplocation clientip | geostats latfield=lat longfield=lon count by method "
      },
      "type": "ds.search"
    },
    "search1": {
      "name": "search1",
      "options": {
        "query": "| makeresults \n| eval msg=\"Search 1\""
      },
      "type": "ds.search"
    },
    "search2": {
      "name": "search2",
      "options": {
        "query": "| makeresults \n| eval msg=\"Search2\""
      },
      "type": "ds.search"
    }
  },
  "layout": {
    "globalInputs": [],
    "layoutDefinitions": {
      "layout_1": {
        "options": {
          "display": "auto",
          "height": 960,
          "width": 1440
        },
        "structure": [
          {
            "item": "viz_hwZoBg6m",
            "position": {
              "h": 190,
              "w": 830,
              "x": 0,
              "y": 0
            },
            "type": "block"
          }
        ],
        "type": "absolute"
      }
    },
    "tabs": {
      "items": [
        {
          "label": "New tab",
          "layoutId": "layout_1"
        }
      ]
    }
  }
}

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
And what events are you expecting? As per the TA description, it provides custom search commands, not inputs.
Thank you for raising the support ticket. But I am quite new to Splunk. May I know how long Splunk will take to respond to the ticket and resolve the bug? Also, just to check with you, this is a new error that only just occurred, right? Because I did not encounter this error weeks ago.
I am trying to set the KV store captain in maintenance mode, but when I try to do so, it says that maintenance mode can only be set on the "static captain".

I can switch to a static captain successfully using the following command:

splunk edit shcluster-config -mode captain -captain_uri <URI>:<management_port> -election false

But I also have to set election to false on the non-captain members, for which I am trying the following curl command, which fails:

curl -k -u username:password -X POST -d '{"mode":"member", "captain_uri":"https://captain.example.net:port","election":"false","target_uri":"https://member.example.net:port"}' https://member.example.net:port/services/shcluster/config/

<?xml version="1.0" encoding="UTF-8"?>
<response>
  <messages>
    <msg type="ERROR">Cannot perform action "POST" without a target name to act on.</msg>
  </messages>
</response>

As per the documentation I can only see the GET call (https://docs.splunk.com/Documentation/Splunk/9.4.2/RESTREF/RESTcluster#shcluster.2Fconfig). Are there any other alternatives to set election=false for the non-captain members remotely?
I have a dashboard built using Dashboard Studio and I need to embed an external link, but I am unable to do so. How do I add an external embed link?
Is there a REST call to set election to false from the captain for each cluster member, instead of logging into each search head member and running the following command?

/opt/splunk/bin/splunk edit shcluster-config -mode member -captain_uri https://your-Captain-SH-address:8089 -election false
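One possible workaround, if SSH access to the members is available, is to script the CLI command quoted above from a central host rather than using the REST endpoint. A rough sketch, assuming placeholder host names and captain URI that you would substitute with your own:

#!/bin/bash
# Placeholder values -- replace with your actual search head members and captain URI.
CAPTAIN_URI="https://your-Captain-SH-address:8089"
MEMBERS="sh1.example.net sh2.example.net sh3.example.net"

for host in $MEMBERS; do
  # Run the same documented CLI command on each non-captain member over SSH.
  ssh "$host" "/opt/splunk/bin/splunk edit shcluster-config -mode member \
    -captain_uri $CAPTAIN_URI -election false"
done

This avoids logging in to each member interactively, though it still relies on the CLI rather than a pure REST call.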
Hey everyone, I am using the misp42splunk app but can't get the events, and I don't see any errors. What am I doing wrong?
Amazing! Thanks, brother @squared_away, I got your idea. I prefer using a saved search because it works with form.token, instead of putting the SPL query under value, which requires adding backslashes (\). It's very simple, we just create a simple form:

{
  "type": "input.dropdown",
  "title": "Testing Multi Query",
  "options": {
    "items": [
      {
        "label": "Query 1",
        "value": "| makeresults | eval key=\"value\" | table *"
      },
      {
        "label": "Query 2",
        "value": "| loadjob savedsearch=\"admin:itsi:sla_availability\" artifact_offset=0"
      },
      {
        "label": "Query 3",
        "value": "| savedsearch \"AWS - IPs available\""
      }
    ],
    "token": "tos",
    "defaultValue": ""
  }
}

And create the visualization with my example token $tos$ below:
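As a rough sketch of that last step, assuming Dashboard Studio source JSON and an illustrative data source name (not taken from the post above), the visualization's data source can simply pass the token through as its query:

"dataSources": {
  "ds_dynamic": {
    "name": "dynamic_query",
    "type": "ds.search",
    "options": {
      "query": "$tos$"
    }
  }
}

Any visualization bound to this data source would then rerun whichever query the dropdown selection places in $tos$.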
Hi @harpr86  Which app is the first request creating the search in? I would recommend trying to update both of the API calls to use the servicesNS endpoints instead: /servicesNS/<user>/<app>/saved/searches and /servicesNS/<user>/<app>/saved/searches/<alertName>/acl

e.g.

curl --location --request POST 'https://splunkHost:8089/servicesNS/automation/search/saved/searches' \
  --header 'Authorization: Basic Auth' \
  --header 'Content-Type: application/x-www-form-urlencoded' \
  --data-urlencode 'name=test_alert_harpreet07' \
  --data-urlencode 'cron_schedule=*/30 * * * *' \
  --data-urlencode 'description=This alert will be triggered if proxy has 4x,5x errors' \
  --data-urlencode 'dispatch.earliest_time=-30@m' \
  --data-urlencode 'dispatch.latest_time=now' \
  --data-urlencode 'search=search index="federated:some-index" statusCode">3*"' \
  --data-urlencode 'alert_type=number of events' \
  --data-urlencode 'alert.expires=730d' \
  --data-urlencode 'action.email.to=xyz.abc@def.com' \
  --data-urlencode 'action.email.maxresults=50' \
  --data-urlencode 'action.email.subject=some-Errors' \
  --data-urlencode 'dispatchAs=user' \
  --data-urlencode 'action.email.from=Splunk'

curl --location --request POST 'https://splunkHost:8089/servicesNS/automation/search/saved/searches/test_alert_harpreet07/acl' \
  --header 'Authorization: Basic Auth' \
  --header 'Content-Type: application/x-www-form-urlencoded' \
  --data-urlencode 'sharing=app' \
  --data-urlencode 'app=search' \
  --data-urlencode 'perms.read=user' \
  --data-urlencode 'perms.write=user' \
  --data-urlencode 'owner=automation'

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
Hi, I am trying to create an alert using the API, but the alert is not getting created in shared mode. I need to run the acl command separately to give r+w access to the user.

Command to create the alert:

curl --location --request POST 'https://splunkHost:8089/services/saved/searches' \
  --header 'Authorization: Basic Auth' \
  --header 'Content-Type: application/x-www-form-urlencoded' \
  --data-urlencode 'name=test_alert_harpreet07' \
  --data-urlencode 'cron_schedule=*/30 * * * *' \
  --data-urlencode 'description=This alert will be triggered if proxy has 4x,5x errors' \
  --data-urlencode 'dispatch.earliest_time=-30@m' \
  --data-urlencode 'dispatch.latest_time=now' \
  --data-urlencode 'search=search index="federated:some-index" statusCode">3*'' \
  --data-urlencode 'alert_type=number of events' \
  --data-urlencode 'alert.expires=730d' \
  --data-urlencode 'action.email.to=xyz.abc@def.com' \
  --data-urlencode 'action.email.maxresults=50' \
  --data-urlencode 'action.email.subject=some-Errors' \
  --data-urlencode 'dispatchAs=user' \
  --data-urlencode 'action.email.from=Splunk'

Command to give permission to the user:

curl --location --request POST 'https://splunkHOST:8089/services/saved/searches/<alertName>/acl' \
  --header 'Authorization: Basic Auth' \
  --header 'Content-Type: application/x-www-form-urlencoded' \
  --data-urlencode 'sharing=app' \
  --data-urlencode 'app=search' \
  --data-urlencode 'perms.read=user' \
  --data-urlencode 'perms.write=user' \
  --data-urlencode 'owner=automation'

#splunk #cloud

Is there a way that the alert can be created in shared mode with r+w access to the user?
You might simply cut the prefix from your URI. Something like this:

| rex mode=sed field=uri "s/^\\/\S+((arabic|english)\\/)?//"

@yuanliu 's point about the /experience/ part is also valid. But searching for */experience/* is not the best idea (search terms with wildcards at the beginning are usually best avoided).
You can replace "code_names" with "custom_function_names"; it will pass linting and work, but like you said, it will force you out of the GUI editor. I've put in a support ticket. They should just update that GUI editor to use the documented parameter and we'll be back on track.
I am getting this today too. This code_names argument is not valid per the docs, but it's being placed there by the GUI editor behind the scenes. I cannot edit the join_ function without breaking the GUI restrictions and going straight to code (classic playbook). Here is an example:

@phantom.playbook_block()
def code_7(action=None, success=None, container=None, results=None, handle=None, filtered_artifacts=None, filtered_results=None, custom_function=None, loop_state_json=None, **kwargs):
    phantom.debug("code_7() called")

    ################################################################################
    ## Custom Code Start
    ################################################################################

    # Write your custom code here...

    ################################################################################
    ## Custom Code End
    ################################################################################

    join_code_9(container=container)

    return


@phantom.playbook_block()
def code_8(action=None, success=None, container=None, results=None, handle=None, filtered_artifacts=None, filtered_results=None, custom_function=None, loop_state_json=None, **kwargs):
    phantom.debug("code_8() called")

    ################################################################################
    ## Custom Code Start
    ################################################################################

    # Write your custom code here...

    ################################################################################
    ## Custom Code End
    ################################################################################

    join_code_9(container=container)

    return


**** This part is auto generated by the SOAR GUI editor ****

@phantom.playbook_block()
def join_code_9(action=None, success=None, container=None, results=None, handle=None, filtered_artifacts=None, filtered_results=None, custom_function=None, loop_state_json=None, **kwargs):
    phantom.debug("join_code_9() called")

    if phantom.completed(code_names=["code_7", "code_8"]):
        # call connected block "code_9"
        code_9(container=container, handle=handle)

    return
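For reference, a sketch of the workaround described in the reply above, i.e. hand-editing the auto-generated join block so it uses the documented custom_function_names parameter instead of code_names (same block, one argument changed):

@phantom.playbook_block()
def join_code_9(action=None, success=None, container=None, results=None, handle=None, filtered_artifacts=None, filtered_results=None, custom_function=None, loop_state_json=None, **kwargs):
    phantom.debug("join_code_9() called")

    # Swap the GUI-generated "code_names" argument for the documented
    # "custom_function_names" parameter; as noted above, this edit takes the
    # playbook outside what the GUI editor will accept.
    if phantom.completed(custom_function_names=["code_7", "code_8"]):
        # call connected block "code_9"
        code_9(container=container, handle=handle)

    return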
Hi, I've solved my case. Basically there are 2 steps:

1. Build the correct job output in the saved search; it is better to use a specific time format:

| eval timeframe = strftime(_time, "%Y-%m-%d %H:%M:%S %Z")

Output:

timeframe                  hostname   availability
2025-03-08 00:00:00 +07    HostA      100
2025-03-09 00:00:00 +07    HostA      100

2. On the dashboard, add a time range input, rename its token to $time_range$, then add handling to convert the time to epoch:

| loadjob savedsearch="admin:itsi:sla_availability" artifact_offset=0
| eval timeframe_epoch = strptime(timeframe, "%Y-%m-%d %H:%M:%S %z")
| eval earliest_epoch = strptime("$time_range.earliest$", "%Y-%m-%dT%H:%M:%S.%3NZ")
| eval latest_epoch = strptime("$time_range.latest$", "%Y-%m-%dT%H:%M:%S.%3NZ")
| where timeframe_epoch >= earliest_epoch AND timeframe_epoch <= latest_epoch
| table *

So the output data will cover the selected time only :). As of now only date range selection works; I will test dynamic selections like last 7 days, last 30 days, etc.
Hello @lrader, Can you take a look at this Splunk documentation? https://docs.splunk.com/Documentation/Splunk/9.4.2/Security/ConfigureSSOPing Hope this helps!
Thanks for confirming there shouldn't be a limit. I agree the cluster master decided it was good enough, but I don't understand how it could have hit an "ideal distribution" and then, minutes later in another balancing run, recognize that another ~40k+ buckets per indexer needed to be moved to the same indexers again. It isn't too important because I just restarted the balancing runs until it was actually balanced, but it makes me wonder if this is the only bucket-based operation that has gremlins.
Are you using SOAR? If so, are you running the playbook from the container view?