All Topics


Hello everyone, we're currently working on integrating our network devices (such as routers, switches, and firewalls) into Splunk to enable centralized monitoring and log collection. As these are network appliances, we're required to proceed in agentless mode, since installing agents or forwarders directly on the devices is not an option. We would really appreciate any guidance or suggestions on:
- The best approaches for agentless integration (e.g., Syslog, SNMP, NetFlow, APIs)
- Any recommended Splunk add-ons or apps to support this
- Best practices or examples from similar implementations
Thanks in advance for your help and insights!
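For reference, the simplest starting point we've sketched so far is a plain syslog input on a heavy forwarder (in production a dedicated syslog server or Splunk Connect for Syslog in front of Splunk is usually recommended instead). A minimal sketch, where the index and sourcetype names are placeholders:

# inputs.conf on a heavy forwarder (sketch; index/sourcetype are placeholders)
[udp://514]
index = network
sourcetype = syslog
connection_host = ip

[tcp://514]
index = network
sourcetype = syslog
connection_host = ip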
I have a dashboard built using Dashboard Studio and I need to embed an external link, but I am unable to do so. How do I add an external embed link?
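In case a clickable link would be enough, one option I've seen in Dashboard Studio is a Markdown visualization; a minimal sketch of the visualization definition in the dashboard's JSON source (the viz name and URL are placeholders):

"viz_external_link": {
  "type": "splunk.markdown",
  "options": {
    "markdown": "[Open external tool](https://example.com/dashboard)"
  }
}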
Hey everyone, I am using the misp42splunk app but can't get the events, and I don't see any errors. What am I doing wrong?
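For context, the kind of search I'm trying is roughly like this (a sketch; the instance name and time window are from my configuration and may differ in yours):

| mispgetioc misp_instance=default_misp last=7d
| table misp_timestamp misp_event_id misp_value

It returns nothing, and checking index=_internal for the app's logs shows no errors either.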
Hi, I am trying to create an alert using the API, but the alert is not getting created in shared mode. I need to run the ACL command separately to give r+w access to the user.

Command to create the alert:

curl --location --request POST 'https://splunkHost:8089/services/saved/searches' \
--header 'Authorization: Basic Auth' \
--header 'Content-Type: application/x-www-form-urlencoded' \
--data-urlencode 'name=test_alert_harpreet07' \
--data-urlencode 'cron_schedule=*/30 * * * *' \
--data-urlencode 'description=This alert will be triggered if proxy has 4x,5x errors' \
--data-urlencode 'dispatch.earliest_time=-30@m' \
--data-urlencode 'dispatch.latest_time=now' \
--data-urlencode 'search=search index="federated:some-index" statusCode>"3*"' \
--data-urlencode 'alert_type=number of events' \
--data-urlencode 'alert.expires=730d' \
--data-urlencode 'action.email.to=xyz.abc@def.com' \
--data-urlencode 'action.email.maxresults=50' \
--data-urlencode 'action.email.subject=some-Errors' \
--data-urlencode 'dispatchAs=user' \
--data-urlencode 'action.email.from=Splunk'

To give permission to the user:

curl --location --request POST 'https://splunkHost:8089/services/saved/searches/<alertName>/acl' \
--header 'Authorization: Basic Auth' \
--header 'Content-Type: application/x-www-form-urlencoded' \
--data-urlencode 'sharing=app' \
--data-urlencode 'app=search' \
--data-urlencode 'perms.read=user' \
--data-urlencode 'perms.write=user' \
--data-urlencode 'owner=automation'

#splunk #cloud

Is there a way for the alert to be created in shared mode, with r+w access to the user, in one step?
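One thing I have been experimenting with: as far as I can tell the perms.* keys are only honored on the /acl endpoint, but the object can at least start out app-scoped by creating it in the app namespace with the nobody owner instead of /services. A sketch (untested on Cloud):

curl --location --request POST 'https://splunkHost:8089/servicesNS/nobody/search/saved/searches' \
--header 'Authorization: Basic Auth' \
--header 'Content-Type: application/x-www-form-urlencoded' \
--data-urlencode 'name=test_alert_harpreet07' \
--data-urlencode 'cron_schedule=*/30 * * * *' \
--data-urlencode 'search=search index="federated:some-index" statusCode>"3*"'

The ACL call would then only be needed to tighten the read/write roles.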
Here is my code:

index=example sourcetype=wineventlog computer_name="example"
| transaction computer_name startswith="event_id=4732" endswith="event_id=4733" maxspan=15m mvraw=true mvlist=true
| table _time, user.name, computer_name, event_id, _raw

I am trying to separate each event that occurs in order to get rid of fluff content such as "A security-enabled local group membership was enumerated." appearing hundreds of times. What would be the best way to do this? mvexpand has not worked for me so far.
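One variant I've been considering (a sketch, not verified against this data) is to filter the noisy events out before the transaction runs, so they never enter the grouped _raw at all:

index=example sourcetype=wineventlog computer_name="example" NOT "A security-enabled local group membership was enumerated."
| transaction computer_name startswith="event_id=4732" endswith="event_id=4733" maxspan=15m
| table _time, user.name, computer_name, event_id, _raw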
We are trying to upgrade the HashiCorp Vault app to version 1.1.3. When we upload it through Manage Apps, it fails vetting with the following failures. Can we please get these fixed? Thank you.
Good day, team. Getting this error. The date corresponds to the last day the host was seen.

05-28-2025 11:51:03.469 +0000 ERROR ExecProcessor [9317 ExecProcessor] - message from "/opt/splunk/etc/apps/splunk_app_db_connect/bin/server.sh"
com.splunk.modularinput.Event.writeTo(Event.java:65)
com.splunk.modularinput.EventWriter.writeEvent(EventWriter.java:137)
com.splunk.DefaultServerStart.streamEvents(DefaultServerStart.java:66)
com.splunk.modularinput.Script.run(Script.java:66)
com.splunk.modularinput.Script.run(Script.java:44)
com.splunk.dbx.server.bootstrap.TaskServerStart.main(TaskServerStart.java:36)
Hello, I put this regex in an inline extraction on the SHC:

"<(?<pri>\d+)>1\s(?<timestamp>\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}(?:\.\d+)?[+-]\d{2}:\d{2})\s(?<hostname>[^\s]+)\s(?<appname>[^\s]+)\s(?<procid>[^\s]+)\s(?<msgid>[^\s]+)\s(?<structured_data>\S+)\s(?<json_msg>\{.*\})"

However, the fields inside json_msg only appear after | spath input=json_msg. Is it possible to auto-extract the fields contained in json_msg, to avoid adding | spath input=json_msg at search time? Thanks.
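If the set of keys is known in advance, one option is search-time calculated fields that call the spath() eval function, so the pipe is no longer needed. A sketch, where the sourcetype and key names are placeholders:

# props.conf on the SHC (sketch)
[my:sourcetype]
EVAL-user = spath(json_msg, "user")
EVAL-action = spath(json_msg, "action")

This won't discover arbitrary keys automatically, but it covers the fields queried most often.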
Hi Fellow Splunkers,

How can I add a multi-value field (array) directly to the index through `/var/spool/splunk`? I tried multiple approaches:

1. Dict
==##~~##~~ 1E8N3D4E6V5E7N2T9 ~~##~~##==
{ "array_field":["1", "2"], "count": "2", ... }

2. Classic
==##~~##~~ 1E8N3D4E6V5E7N2T9 ~~##~~##==
... , array_field=["1", "2"], count="2", ...

I achieved the best results with the Dict approach. The added field correctly has multiple values; however, Splunk appends {} to the key ("array_field"), resulting in an incorrect key ("array_field{}"). Do you have any suggestions?
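For what it's worth, the {} suffix is how Splunk names JSON array elements, so the workaround I'm currently leaning toward is a search-time alias back to the intended name (a sketch; the sourcetype is a placeholder):

# props.conf (sketch)
[my:spool:sourcetype]
FIELDALIAS-array = "array_field{}" AS array_field

or ad hoc in SPL:

| rename array_field{} as array_field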
I have not encountered this error previously. When I join two code blocks to an action block using the visual editor, a join_***_***_1 block is created. This auto-generated block uses the "code_name" parameter, which is triggering the unexpected-keyword-arg error. I believe deleting this auto-generated block would resolve the problem, but making changes to it disables the visual editor, which is not the right situation. Is there any other alternative solution to resolve this problem?
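For context on why the linter flags it: in plain Python, passing a keyword that a function does not declare raises TypeError, and a **kwargs catch-all absorbs it. A generic illustration (not the actual SOAR-generated code):

# without a catch-all: join_block(code_name="x") raises TypeError
def join_block(action=None, success=None, container=None, results=None, handle=None):
    pass

# with **kwargs: extra keywords like code_name are silently absorbed
def join_block_tolerant(action=None, success=None, container=None, results=None, handle=None, **kwargs):
    pass

That said, hand-editing generated blocks disables the visual editor, as noted above, so this would only be a stopgap.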
Hi Team,

Can you please let me know why I am not able to fetch the base_date in the dashboard using the below logic? Please help me fix this issue.

Splunk query:

<input type="time" token="time_token">
  <label>TIME</label>
  <default>
    <earliest>-1d@d</earliest>
    <latest>@d</latest>
  </default>
</input>
</fieldset>
<row>
  <panel>
    <table>
      <search>
        <query>
| inputlookup V19_Job_data.csv
| eval base_date = strftime(strptime("$time_token.earliest$", "%Y-%m-%dT%H:%M:%S"), "%Y-%m-%d")
| eval expected_epoch = strptime(base_date . " " . expected_time, "%Y-%m-%d %H:%M")
| eval deadline_epoch = strptime(base_date . " " . deadline_time, "%Y-%m-%d %H:%M")
| join type=left job_name run_id
    [ search index = events_prod_cdp_penalty_esa source="SYSLOG" sourcetype=zOS-SYSLOG-Console system = EOCA host = ddebmfr.beprod01.eoc.net (( TERM(JobA) OR TERM(JobB) )) ("- ENDED" OR "- STARTED" OR "ENDED - ABEND")
    | eval Function = case(like(TEXT, "%ENDED - ABEND%"), "ABEND", like(TEXT, "%ENDED - TIME%"), "ENDED", like(TEXT, "%STARTED - TIME%"), "STARTED")
    | eval _time_epoch = _time
    | eval run_id = case( date_hour &lt; 14, "morning", date_hour &gt;= 14, "evening" )
    | eval job_name = if(searchmatch("JobA"), "JobA", "JobB")
    | stats latest(_time_epoch) as job_time by job_name, run_id ]
| eval buffer = 60
| eval status = case( isnull(job_time), "Not Run", job_time &gt; deadline_epoch, "Late", job_time &gt;= expected_epoch AND job_time &lt;= deadline_epoch, "On Time", job_time &lt; expected_epoch, "Early" )
| convert ctime(job_time)
| table job_name, run_id, expected_time, expected_epoch, base_date, deadline_time, job_time, status</query>
        <earliest>$time_token.earliest$</earliest>
        <latest>$time_token.latest$</latest>
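One thing worth checking: with the default shown, $time_token.earliest$ expands to the relative string "-1d@d" rather than a "%Y-%m-%dT%H:%M:%S" timestamp, so strptime() returns null and base_date is never set. A sketch of a more robust derivation that works for both relative and absolute tokens, using addinfo:

| inputlookup V19_Job_data.csv
| addinfo
| eval base_date = strftime(info_min_time, "%Y-%m-%d")
| fields - info_min_time info_max_time info_search_time info_sid

addinfo attaches the search's actual earliest boundary as the epoch field info_min_time, which strftime() can format directly.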
A few event logs are getting truncated while others come through perfectly. We are using the Akamai add-on to pull logs into Splunk: HF (Akamai input configured) ---> indexers. On the DS all apps (where all the props and transforms live) are kept; these are pushed to the CM, and from the CM to the individual indexers.

props.conf on the DS (DS --> CM --> IDX):

[sony_waf]
TIME_PREFIX = ^
MAX_TIMESTAMP_LOOKAHEAD = 25
TIME_FORMAT = %b %d %H:%M:%S
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
EVENT_BREAKER_ENABLE = true
SHOULD_LINEMERGE = False
TRUNCATE = 50000

Only a few logs are coming through perfectly. What should I do now? Please suggest.
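One angle that may help: since the data comes from a modular input on the HF, parsing (and therefore TRUNCATE) happens on the HF, so the stanza needs to be deployed there as well, not only on the indexers. A sketch of a search to confirm which host is doing the truncating (the exact message text may differ slightly across versions):

index=_internal sourcetype=splunkd component=LineBreakingProcessor "Truncating line because limit"
| stats count by host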
Hi, I need the time in the events and _time to be the same. While importing the data I am getting a time difference. What should I write in the TIME_PREFIX field?
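A generic sketch of the props.conf settings involved (the regex and format are placeholders, since they depend on what the raw events actually look like):

[your:sourcetype]
TIME_PREFIX = timestamp=
TIME_FORMAT = %Y-%m-%d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 25
TZ = UTC

TIME_PREFIX is a regex matching the text just before the timestamp; TIME_FORMAT is the strptime format of the timestamp itself. If the offset is a fixed number of hours, TZ is usually the culprit rather than TIME_PREFIX.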
Is there a way to detect unused indexes in Splunk via a query? Also, how can we control the growth of log sizes effectively?
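Not a complete answer, but a sketch of one approach: list all indexes via REST and compare against what searches actually touch. The audit-log rex below is a rough heuristic and will miss macro- or eventtype-driven index references:

| rest /services/data/indexes splunk_server=local
| fields title
| rename title as index

index=_audit action=search info=completed earliest=-30d search=*
| rex field=search max_match=10 "index\s*=\s*\"?(?<used_index>[\w\-\*]+)"
| mvexpand used_index
| stats count by used_index

For size growth, the usual controls are index-level retention (frozenTimePeriodInSecs) and maxTotalDataSizeMB in indexes.conf.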
Hello Everyone,

Below is my Splunk query:

index="my_index" uri="*/experience/*"
| stats count as hits by uri
| sort -hits
| head 20

which returns the output below:

/ae/english/experience/dining/onboard-menu/ 1
/ae/english/experience/woyf/ 2
/uk/english/experience/dining/onboard-menu/ 1
/us/english/experience/dining/onboard-menu/ 1
/ae/arabic/experience/dining/onboard-menu/ 1
/english/experience/dining/onboard-menu/ 1

I need to aggregate the URL count into a common URL. For example:

/experience/dining/onboard-menu/ 5
/experience/woyf/ 2

Appreciate your help on this. Thanks in advance.
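A sketch that strips the locale prefix by keeping everything from /experience/ onward (it assumes /experience/ appears at most once per uri):

index="my_index" uri="*/experience/*"
| rex field=uri "(?<common_uri>/experience/.*)"
| stats count as hits by common_uri
| sort -hits
| head 20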
I'm working with a CSV lookup that contains multiple fields which may include wildcard (*) values. The lookup is structured such that some rows are very specific and others are generic (e.g. *, *, *, HOST, *). I want to enrich events from my base search with the best-matching Offset (the name of the field) from the lookup.

Challenges:
- Using a lookup definition with match_type=WILDCARD(...) only works well if there's a unique match — but in my case, I need to evaluate multiple potential matches and choose the most specific one.
- Using | map works correctly, but it's too slow.
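One pattern I've been sketching (untested at scale; lookup and field names are placeholders, shown with just two key fields): raise max_matches on the wildcard lookup, output the lookup's own key columns alongside Offset, score each candidate by how many keys are non-wildcard, and keep the highest score per event:

... | streamstats count as row_id
| lookup my_wildcard_lookup host AS host, app AS app OUTPUT Offset, host AS lk_host, app AS lk_app
| eval candidate = mvzip(mvzip(Offset, lk_host, "|"), lk_app, "|")
| mvexpand candidate
| eval best_offset = mvindex(split(candidate, "|"), 0)
| eval spec = if(mvindex(split(candidate, "|"), 1)!="*", 1, 0) + if(mvindex(split(candidate, "|"), 2)!="*", 1, 0)
| sort 0 row_id -spec
| dedup row_id

The streamstats row_id lets each event regroup after mvexpand, and dedup keeps only the most specific candidate per event.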
After running out of disk space on a search head (part of a cluster), now fixed and all SHs rebooted, I get this error:

ConfReplicationException: Error pulling configurations from the search head cluster captain (SH2:8089); Error in fetchFrom, at=: Non-200 status_code=500: refuse request without valid baseline; snapshot exists at op_id=xxxx6e8e for repo=SH2:8089". Search head cluster member (SH3:8089) is having trouble pulling configs from the captain (SH2:8089). xxxxx Consider performing a destructive configuration resync on this search head cluster member.

I ran "splunk resync shcluster-replicated-config" and get this:

ConfReplicationException: Error downloading snapshot: Non-200 status_code=400: Error opening snapshot_file '/opt/splunk/var/run/snapshot/174xxxxxxxx82aca.bundle': No such file or directory.

In the snapshot folder there is nothing, sometimes a few files; they don't match the other search heads. 'splunk show bundle-replication-status' is all green and the same as the other 2 SHs.

Is there a force resync switch? I really can't remove this SH and run 'clean all'.

Thank you!
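For completeness, the commands I've been comparing across members before doing anything destructive (both are standard CLI, no extra flags assumed):

splunk show shcluster-status
splunk show bundle-replication-status

On the broken member, confirming which node it believes is captain sometimes explains a refused baseline.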
Hello folks,

We use Splunk Cloud Platform (managed by Splunk) for our logging system. We want to implement role-based search filtering to mask JWT tokens and emails in the logs for certain users. Ex.

Roles: User, RestrictedUser
Both roles have access to the same index: main
Users can query as normal, but if a RestrictedUser searches the logs then they should get the logs with the token and email data masked.

Documentation/community posts/Gemini recommended adding regex for filtering in transforms.conf and updating some other conf files like so:

# transforms.conf
[redact_jwt_searchtime]
REGEX = (token=([A-Za-z0-9-]+\.[A-Za-z0-9-]+\.[A-Za-z0-9-_]+))
FORMAT = token=xxx.xxx.xxx
SOURCE_KEY = _raw

[redact_email_searchtime]
REGEX = ([A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,})
FORMAT = xxx@xxx.xxx
SOURCE_KEY = _raw

# props.conf
[*]
TRANSFORMS-redact_for_search = redact_jwt_searchtime, redact_email_searchtime

# authorize.conf
[test_masked_data]
srchFilter = search_filters = redact_for_search

and then creating an app and uploading it to the cloud platform. Since the platform is managed by Splunk, I'm not sure if that would be sufficient or even work.

Anyone have suggestions on the best way to apply role-based search filters when on Splunk Cloud rather than on premise?
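One caveat worth flagging: TRANSFORMS- stanzas are applied at index time, not at search time, so the config above would rewrite data for everyone at ingest rather than masking per role. For comparison, a sketch of the index-time SEDCMD form scoped to a concrete sourcetype instead of [*] (this also masks for all roles, so it is not a per-role answer):

# props.conf (index time; sourcetype is a placeholder)
[my:app:logs]
SEDCMD-redact_email = s/[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}/xxx@xxx.xxx/g

For true per-role masking on Cloud, it may be worth asking Splunk support whether role-based field filtering is available on your stack before building around conf files.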
Hello all,

Is the Nutanix TA (version 2.5.0) compatible with Splunk 9.3.4+? It is listed as such on Splunkbase (https://splunkbase.splunk.com/app/3103), but when I attempted to upgrade I got:

Unable to initialize modular input "abc" defined in the app "TA-nutanix": Introspecting scheme=nutanix_health: script running failed (PID 2607550 exited with code 1)..
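One generic debugging step for errors like this is to run the modular input's --scheme introspection by hand, which usually prints the Python traceback that splunkd swallows (the script name here is an assumption based on the scheme name; check the app's bin directory for the actual file):

$SPLUNK_HOME/bin/splunk cmd python3 $SPLUNK_HOME/etc/apps/TA-nutanix/bin/nutanix_health.py --scheme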
Hi, I have created a playbook and am trying to run it from an event, but the playbook does not populate when I click on Run Playbook. What is it that I am doing wrong?