All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.
Hello Team, I am configuring Splunk, but the UF (Universal Forwarder) details are not reflecting in the Deployment Server's client list. I have added the following stanza in the UF's `deploymentclient.conf` file:

```
[deployment-client]
clientName = UF
phoneHomeIntervalInSecs = 60

[target-broker:deploy]
targetUri = 10.128.0.5:8089
```

(10.128.0.5 is the IP of the Deployment Server.)

And in the Deployment Server's `server.conf`, the following details are present:

```
[general]
serverName = deploy
pass4SymmKey = $7$k63bewtZlaVREpHJcD6fGt6hysZ/GvxJ0Tfq0BW5PhmF/qItBTzTA==

[sslConfig]
sslPassword = $7$boaNPEqR2Gmt9DQPKp9ZJ0iho9HdJFoRuxVZMwBu/q8g/v9ZKzsEvw==
enableSplunkdSSL = false

[lmpool:auto_generated_pool_download-trial]
description = auto_generated_pool_download-trial
peers = *
quota = MAX
stack_id = download-trial

[lmpool:auto_generated_pool_forwarder]
description = auto_generated_pool_forwarder
peers = *
quota = MAX
stack_id = forwarder

[lmpool:auto_generated_pool_free]
description = auto_generated_pool_free
peers = *
quota = MAX
stack_id = free

[deploymentServer]
disabled = false
```

And in `serverclass.conf`, I have added the following details:

```
[global]

[serverClass:uf_class]
whitelist.0 = uf

[serverClass:uf_class:app:forwarder_app]
```

Even after adding these details, the issue persists. Please suggest some solutions.
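One thing worth checking (an assumption on my part, not confirmed by the post): the target-broker stanza in `deploymentclient.conf` is conventionally named `deploymentServer`, and a custom name such as `deploy` may not be picked up by the client. A minimal sketch of the UF-side file under that assumption:

```
# deploymentclient.conf on the UF -- a sketch; the stanza name
# [target-broker:deploymentServer] is the conventional one, and the
# "deploy" name used above may be why phone-home never starts
[deployment-client]
clientName = UF
phoneHomeIntervalInSecs = 60

[target-broker:deploymentServer]
targetUri = 10.128.0.5:8089
```

After restarting the forwarder, searching index=_internal on the UF for components whose names start with DC should show whether phone-home attempts are being made.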
Hello everyone, I'm facing challenges with integrating Splunk and Jira using the Splunk Add-on for Jira Cloud. I've set up a new input using a token from an admin account and created an indexer to centralize the data. However, despite these configurations, I am unable to retrieve the events as expected. I have verified that the token is valid and have ensured the input configurations in Splunk are correct, but nothing seems to work properly. The Jira admin account has the necessary permissions to access the required events, yet no data is being collected in Splunk. I am seeking advice or suggestions on what might be causing this issue. If anyone has encountered similar problems or has ideas on steps to resolve this, I would greatly appreciate your insights. Thank you in advance for your help!
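A first diagnostic step is usually the add-on's own logging in the internal index. A minimal sketch (the source=*jira* filter is an assumption; the add-on's actual log file name may differ):

```
index=_internal (log_level=ERROR OR log_level=WARN*) source=*jira*
| table _time, source, _raw
| sort - _time
```

Authentication and permission failures from the modular input typically surface here even when the token itself validates elsewhere.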
Hello folks, I am getting multiple errors on the application, and the customer wants to disable them in the error logger via suppression. Can you please guide me on how to find the class and method name for error suppression? Thanks
I want a table that combines 2 or 3 log events based on a unique key present in all of them, returning one single row for all events that share that key. All my log events carry a common unique key on which I want to combine them into a single table row; if the value for any column is not present, that particular cell should be null.

```
Log event 1: Message="Taken the response",UniqueId="329wey98fywe",Status=Pending
Log event 2: Message="Process completed",UniqueId="329wey98fywe",Status=Finalized
Log event 3: Message=,UniqueId="329wey98fywe",Status=Pending
```
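A common pattern for this (a sketch; the index name is a placeholder, field names are taken from the sample events) is stats by the key, which naturally leaves missing columns null:

```
index=<your_index> UniqueId=*
| stats values(Message) as Message, latest(Status) as Status by UniqueId
```

values() collects the distinct messages into one multivalue cell per key, while latest() keeps the most recent status; swapping the functions changes which value wins per column.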
We found that the search job size becomes extremely large during searches. My Splunk instance is a newly installed testing lab with only the following limits.conf. Does anyone have any idea about this situation?

/opt/splunk/etc/system/local/limits.conf

```
[search]
read_final_results_from_timeliner = 1
```

Update: when I change `read_final_results_from_timeliner = 1` to `read_final_results_from_timeliner = true`, the job size is reduced, and I don't know why.
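For reference, a sketch of the stanza using the boolean literal, which the observation above suggests this setting parses more strictly than the usual 0/1 shorthand (an assumption; I have not confirmed the parser's behaviour):

```
# /opt/splunk/etc/system/local/limits.conf -- sketch
[search]
# boolean form rather than "1"; per the report above, the numeric
# form appeared to produce much larger search job artifacts
read_final_results_from_timeliner = true
```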
We're looking to block outgoing traffic from a specific client or group, using the Microsoft Defender for Endpoint app. If we were to implement this ourselves using the MS API, it would be something like:

```
POST https://api.securitycenter.microsoft.com/api/machines/{machineId}/restrict
Authorization: Bearer {your_access_token}
Content-Type: application/json

{
  "action": "Block",
  "destination": "IP_ADDRESS_OR_DOMAIN",
  "protocol": "TCP",
  "port": "443"
}
```

However, I haven't been able to find a corresponding call in the app source code. Am I missing something, or isn't this currently supported?
Good morning, we recently installed SAP architecture in our infrastructure and we would need to download the Splunk IT Service Intelligence app, but I saw that the download is restricted to authorized users. How do I get authorized?
Good morning, we recently installed SAP architecture in our infrastructure and we would need to download the PowerConnect for SAP solution app, but I saw that the download is restricted to authorized users. How do we get authorized?
So I have a dashboard with 4 panels and a checkbox with two options, Solved and Unsolved. For Unsolved, the colour of the panels should remain red when the count is greater than 0, which I am able to do with the built-in dashboard features. But for the Solved option, every panel should be green. How should I approach this?
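One way to approach this (a sketch in Simple XML; token names and hex colors are assumptions, and a radio input is used here because a two-option checkbox would also need the multi-select case handled) is to have the input set a color token that each panel's rangeColors option consumes:

```
<input type="radio" token="status_tok">
  <choice value="solved">Solved</choice>
  <choice value="unsolved">Unsolved</choice>
  <change>
    <!-- green/green when solved; green/red above 0 when unsolved -->
    <condition value="solved">
      <set token="panel_colors">["0x53a051","0x53a051"]</set>
    </condition>
    <condition value="unsolved">
      <set token="panel_colors">["0x53a051","0xdc4e41"]</set>
    </condition>
  </change>
</input>

<single>
  <search><query>index=main sourcetype=tickets | stats count</query></search>
  <option name="rangeType">number</option>
  <option name="rangeValues">[0]</option>
  <option name="rangeColors">$panel_colors$</option>
</single>
```

The same $panel_colors$ token can be reused across all four panels so they switch together.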
I have this kind of weird (and dangerous) custom app that changes the UF instance GUID. Basically, I created a .sh file, which uses the `sed` command on Linux to change the UUID value in the /opt/splunkforwarder/etc/instance.cfg file. Using a .sh script to make changes to the SPLUNK_HOME directory is quite a dangerous task and I advise against it; however, this task is quite simple, I tested it, so I decided to deploy an app called REGEN_GUID with a single inputs.conf containing the stanza to run the script:

```
[script://./bin/regenerate_guid.sh]
interval = 900
source = regenerate_guid
sourcetype = regenerate_guid
index = <REDACTED>
disabled = 0
```

In general it's quite simple, and it ran: I could change the instance UUID and nothing critical happened. However, after I saw that the UUID had been changed, I wanted to remove the client from the app. I used the deployment server UI, went into the app section, and removed the IP of the instance from the whitelist. Checking splunkd.log, I could see the line where it says it is removing the app. However, after that, I checked the log again and saw that it is still trying to find the script to run; the message appears every 15 minutes, which equals the script interval, so basically the UF is still carrying out the task of running the script. The log looks like this:

```
05-07-2025 11:00:07.938 +0700 ERROR ExecProcessor - message from "/opt/splunkforwarder/etc/apps/REGEN_GUID/bin/regenerate_guid.sh" /bin/sh: 1: /opt/splunkforwarder/etc/apps/REGEN_GUID/bin/regenerate_guid.sh: not found
```

Does anyone know the reason? I think it might be the way Splunk monitors script inputs, through some kind of cron file, and my app failed to update that when it was removed?
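One thing worth checking (an assumption, since the serverclass configuration isn't shown): whether the deployment server instructs the client to restart splunkd when the app is removed. Without a restart, the already-registered scripted input can keep firing from the in-memory configuration even after the app directory is deleted, which would produce exactly the "not found" errors above. A sketch:

```
# serverclass.conf on the deployment server -- a sketch; the class
# name is hypothetical, the app name mirrors the post
[serverClass:regen_guid_class:app:REGEN_GUID]
restartSplunkd = true
```

Manually restarting the forwarder once would confirm the theory: if the ExecProcessor errors stop after a restart, the stale input was simply never unloaded.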
I am running a tstats command with a span of 2 hours, by index and source. It returns data for every 2 hours, but I want to include results only if data is available for every 2-hour bucket in the last 24 hours of the search. Basically, anything that does not have continuous data should be ignored. How can I do this?
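One way to express this (a sketch; the index name is a placeholder) is to count the distinct 2-hour buckets per source and keep only sources that have all 12 buckets (12 × 2h = 24h; this assumes the search window aligns to the 2-hour bucket boundaries, otherwise partial edge buckets change the expected count):

```
| tstats count where index=<your_index> earliest=-24h by source, _time span=2h
| eventstats dc(_time) as bucket_count by source
| where bucket_count = 12
| fields - bucket_count
```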
Hello All, I am running one query from a dashboard panel, and exactly the same query run directly from search gives different counts.

```query for apigateway call```
index=aws_np earliest=1746540480 latest=1746544140 Method response body : sourcetype="aws:apigateway"
| rex field=_raw "Method response body : (?<json>[^$]+)"
| spath input=json path="header.messageID" output=messageID
| spath input=json path="payload.statusType.code" output=status
| spath input=json path="payload.statusType.text" output=text
| spath input=json path="header.action" output=action
| where status=200
| rename _time as request_time
```dedup is added to remove duplicates```
| dedup messageID
| append
    [ search index="aws_np" earliest=1746540480 latest=1746558480
    | rex field=_raw "messageID\": String\(\"(?<messageID>[^\"]+)"
    | rex field=_raw "source\": String\(\"(?<source>[^\"]+)"
    | rex field=_raw "type\": String\(\"(?<type>[^\"]+)"
    | rex field=_raw "detail-type\": String\(\"(?<detail_type>[^\"]+)"
    | where source="XXX" and type="XXXXX" and detail_type="XXXX"
    | stats distinct_count(messageID) as cnt_guid by messageID, _time
    ```by _time is added because we have duplicate records with the same time and guid```
    | stats count(cnt_guid) as published_count by messageID
    | dedup messageID
    | fields messageID, published_count ]
| stats values(action) as request_type sum(published_count) as published_count2 by messageID
| where isnotnull(request_type)
| eventstats sum(published_count2) by request_type
| dedup request_type
| search request_type="Create" OR request_type="Update"
| head 2
| fields sum(published_count2) request_type

So I ran the query from the dashboard panel and then used the Run Search option to run it directly, but I am getting different counts. The search gives the correct result; the dashboard gives fewer.
Hi All, help please. Can I get people to agree with me that the following is a bug/design flaw, as my Splunk case is getting nowhere? Please try this, it only takes a moment, promise...

1. In the Splunk GUI, go to Source types
2. Click New Source Type
3. Give it a name - maybe "test"
4. Click Advanced
5. Delete the LINE_BREAKER setting
6. Add New Setting: BREAK_ONLY_BEFORE and set the value to AAAA
7. Check/set SHOULD_LINEMERGE to true
8. Save
9. Run this search to confirm your settings look good:

```
| rest /servicesNS/-/-/configs/conf-props
| where title = "test"
| fields BREAK_ONLY_BEFORE LINE_BREAKER SHOULD_LINEMERGE eai:acl.removable eai:acl.sharing eai:appName title
```

10. In the search results, confirm the values have been saved as expected
11. Re-edit the source type in the GUI
12. Click Advanced
13. Notice that SHOULD_LINEMERGE has been changed to false, and LINE_BREAKER has returned and is set to AAAA

So the GUI is changing settings when a user re-edits the sourcetype. Perhaps the user just wanted to change the sourcetype description; if they saved, the sourcetype would no longer work. I reckon this is a bug or design flaw, but Splunk Support is trying to say it is expected behaviour. Please feel free to agree with Splunk Support if you think I am missing something. Thanks, Keith
Hi! I am creating a basic dashboard which shows the total number of firewall blocks for 3 sourcetypes, using the data model Network_Traffic. The query is:

```
| tstats `security_content_summariesonly` count from datamodel=Network_Traffic where sourcetype IN ("cp_log", "cisco:asa", "pan:traffic") All_Traffic.action="blocked"
```

Now I am trying to add one more panel which will show what is causing the block activity (the error message) for each sourcetype, with respect to count, but I am unable to figure out the appropriate field (or query) in the data model that relates to the error message. Can someone help me understand which field to group by to get the error message? P.S. I am new to Splunk.
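For what it's worth, the CIM Network Traffic data model has no dedicated error-message field; the closest thing is usually the firewall rule that triggered the block. A sketch grouping by rule (assuming All_Traffic.rule is populated by your add-ons):

```
| tstats `security_content_summariesonly` count from datamodel=Network_Traffic
    where sourcetype IN ("cp_log", "cisco:asa", "pan:traffic") All_Traffic.action="blocked"
    by sourcetype, All_Traffic.rule
| rename All_Traffic.rule as rule
| sort - count
```

If the vendor-specific deny reason is needed rather than the rule name, it generally has to come from the raw events, since it is not part of the CIM model.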
I'm trying to track the duration of user sessions to a server. I want to know WHICH users are connecting, and how long each session is. The problem is, with multiple users I'm getting nested transactions, where USER001 joins but USER004 leaves, and that creates an event. I want it to ONLY consider scenarios in which the same user that joins also leaves. I can't seem to get it to do this.

EventCode=44 is the event code for these particular events I want to track. UserXXID is a field extraction I've built to show each user ID, as it is not a standard username that Splunk automatically understood. The two primary types of logs I'm looking for are when a user has "joined" or "left" the event. Here is the command I'm using:

```
host="XXComputer04" EventCode=44
| transaction startswith="joined" endswith="left"
| eval Hours=duration/3600
| timechart count by UserXXID
```

Sample of the log entry I'm trying to parse:

```
LogName=Application
EventCode=120
EventType=44
ComputerName=XXcomputer004
SourceName=EventXService
Type=Information
RecordNumber=1234427
Keywords=Classic
TaskCategory=State Transition
OpCode=Info
Message= [0x0x]::ProcessLeave()  (xxUSER002xx) left event33001
---------
```

I have also tried simply `| transaction UserXXID` to keep unique user IDs together, and while that works, it then somehow ignores ALL "left event" messages and only shows "joined" for any given user. Any help would be appreciated!
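A pattern that often fixes the cross-user grouping is naming the field in the transaction itself, so only events sharing a UserXXID are stitched together. A sketch (it assumes the UserXXID extraction matches both the joined and the left message formats; if the extraction only matches "joined" lines, that would also explain why `| transaction UserXXID` appeared to drop all "left" events):

```
host="XXComputer04" EventCode=44
| transaction UserXXID startswith="joined" endswith="left"
| eval Hours=duration/3600
| table UserXXID, _time, Hours
```

Verifying the extraction first, e.g. with `| stats count by UserXXID` over only the "left" events, would show whether the field is present on both sides.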
Hi, I completed a course titled “Intro to Superman Mission Control” earlier, but it no longer appears in the free courses list. Could you please confirm if this course has been retired or renamed, and what its current version is (if any)?
Hello Splunk Community! Welcome to the first post of the Splunk Answers Content Calendar. This week, I'll be spotlighting three standout topics from the #Getting-Data-In board, sharing solutions from our experts and best practices to help you bring data into Splunk more effectively. Here are some of the most popular topics that caught the community's attention, each one solved with insights and expertise from our Splunk experts!

1. Ensuring Consistent _time Extraction During File Indexing

This topic is about event timestamp parsing during indexing in Splunk Enterprise, specifically incorrect _time assignments after new events are added to a file. The user @punkle64 wants to make sure that Splunk consistently extracts the correct timestamp from the event data instead of using the file's modification time. Our brilliant experts helped the user troubleshoot the problem. @PickleRick provided a solution: by removing the datetime config from props.conf, Splunk falls back to its default timestamp extraction behavior. Link to the original post

2. Index-Time Routing with Selective Cloning Based on Event Content

This question is about log routing and filtering using props.conf and transforms.conf in Splunk Enterprise, specifically: cloning logs to a distant heavy forwarder (HF), filtering out specific logs, and using index-time routing for selective forwarding and duplication of events. The user @Nicolas2203 is looking for guidance on whether this approach is reliable and correctly implemented. Our experts @livehybrid and @isoutamo provided solutions that helped the user: explicit cloning, conditional override, and a cleaner approach using input-level cloning, noting that all events are cloned to both destinations by default. Link to the original post

3. Splitting a JSON Array into Multiple Events

This question is about parsing and event breaking of JSON arrays during data ingestion in Splunk. The user @ws is trying to split a JSON array into multiple distinct events, but Splunk is indexing the entire array as a single event, despite attempts to configure props.conf. Our experts @kiran_panchavat, @PickleRick, and @livehybrid helped the user arrive at his own solution by suggesting a props.conf configuration for properly splitting a JSON array into multiple events at index time. This configuration breaks the JSON array into multiple distinct events based on a key field, improving parsing and field extraction. Link to the original post

Thanks to all our experts @PickleRick, @livehybrid, @kiran_panchavat, and @isoutamo for sharing your Splunk knowledge, guiding users with clarity, and consistently going above and beyond. Your contributions truly make our Splunk Community smarter and more supportive every day!

Beyond Splunk Answers, the Splunk Community offers a wealth of valuable resources to deepen your knowledge and connect with other professionals. Here are some great ways to get involved and expand your Splunk expertise:

- Role-Based Learning Paths: Tailored to help you master various aspects of the Splunk Data Platform and enhance your skills.
- Splunk Training & Certifications: A fantastic place to connect with like-minded individuals and access top-notch educational content.
- Community Blogs: Stay up-to-date with the latest news, insights, and updates from the Splunk community.
- User Groups: Join meetups and connect with other Splunk practitioners in your area.
- Splunk Community Programs: Get involved in exclusive programs like SplunkTrust, Super Users, and Answers Badges (coming soon!), where you can earn recognition and contribute to the community.

And don't forget, you can connect with Splunk users and experts in real-time by joining the Slack channel. Dive into these resources today and make the most of your Splunk journey!
I have a search where I do two inputlookups against two different lookups and append them, then search the combined results. Can I table the lookup name as a field to show where each result was found? Thanks.
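A common approach (a sketch; the lookup file names are placeholders) is to tag each inputlookup with an identifying field before the append, then carry that field into the table:

```
| inputlookup lookup_a.csv
| eval lookup_source="lookup_a"
| append
    [| inputlookup lookup_b.csv
    | eval lookup_source="lookup_b"]
| search <your filters>
| table lookup_source, *
```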
I'd like to create a table of results and convert each row into an unordered bullet list using HTML, such as:

```
| table result
```

(which has n number of results). Then in the output, I want to show each row in an unordered bullet list:

```
<html>
<ul>
  <li> row.result.1
  <li> row.result.2
  <li> row.result.3
  etc...
</ul>
</html>
```

Possible?
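One way to build the markup in SPL itself (a sketch; actually rendering the string as HTML would still need something like an HTML panel reading it from a token) is to collapse the rows into one multivalue field and join it with list tags:

```
... | table result
| stats list(result) as result
| eval html = "<ul><li>" . mvjoin(result, "</li><li>") . "</li></ul>"
| fields html
```

Note that stats list() keeps at most 100 values, so very large result sets would be truncated.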
```
2025-05-06T13:50:00.857Z error helper/transformer.go:118 Failed to process entry {"otelcol.component.id": "filelog", "otelcol.component.kind": "receiver", "otelcol.signal": "logs", "operator_id": "move", "operator_type": "move", "error": "move: field does not exist: attributes.uid", "action": "send", "entry.timestamp": "2025-05-06T13:49:09.153Z", "time": "2025-05-06T13:49:09.153467683+00:00", "log.file.path": "/var/log/containers/splunk-otel-collector-agent-46r6g_openshift-logging_otel-collector-1eb5729e9591a5a6b6b3142b8cbbd754b24f8239fad4d2df28c268cf8158e61e.log", "stream": "stderr", "logtag": "F", "log": "2025-05-06T13:49:09.153Z\terror\thelper/transformer.go:118\tFailed to process entry\t{\"otelcol.component.id\": \"filelog\", \"otelcol.component.kind\": \"receiver\", \"otelcol.signal\": \"logs\", \"operator_id\": \"add\", \"operator_type\": \"add\", \"error\": \"evaluate value_expr: invalid operation: string + <nil> (1:18)\\n | \\\"kube:container:\\\"+resource[\\\"k8s.container.name\\\"]\\n | .................^\", \"action\": \"send\", \"entry.timestamp\": \"2025-05-06T13:48:59.854Z\", \"log.file.path\": \"/var/log/containers/splunk-otel-collector-agent-46r6g_openshift-logging_otel-collector-1eb5729e9591a5a6b6b3142b8cbbd754b24f8239fad4d2df28c268cf8158e61e.log\", \"stream\": \"stderr\", \"logtag\": \"F\", \"log\": \"github.com/open-telemetry/opentelemetry-collector-contrib/pkg/stanza/operator/transformer/move.(*Transformer).Process\", \"time\": \"2025-05-06T13:48:59.854
```