All Posts

The following link describes the common format for CEF logs, assuming that's your format. https://splunk.github.io/splunk-connect-for-syslog/main/sources/base/cef/#splunk-metadata-with-cef-events
Hi @PickleRick, we're using the /event HEC endpoint; but even with that, some of the events are getting transformed (split, as shown in the screenshots shared earlier).
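For context, a minimal sketch of sending a payload to the /event endpoint (the host, port, token, and sourcetype below are placeholders); everything inside "event" is submitted as a single event rather than relying on line breaking:

curl -k "https://splunk.example.com:8088/services/collector/event" \
    -H "Authorization: Splunk <hec-token>" \
    -d '{"sourcetype": "my:sourcetype", "event": {"message": "entire payload submitted as one event"}}'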
Hi! I am creating a basic dashboard which shows the total number of firewall blocks for 3 sourcetypes using the data model "Network_Traffic". The query is:

| tstats `security_content_summariesonly` count from datamodel=Network_Traffic where sourcetype IN ("cp_log", "cisco:asa", "pan:traffic") All_Traffic.action="blocked"

Now I am trying to add one more panel which will show what is causing the block activity (the error message) for each sourcetype, along with a count, but I am unable to figure out the appropriate field (or query) from the data model that relates to the error message. Can someone help me understand which field to group by to get the error message? P.S. I am new to Splunk.
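For what it's worth, a minimal sketch of the kind of grouping being asked about, assuming the block reason ends up in a CIM field such as All_Traffic.rule (whether that field actually carries the firewall's reason/error message depends on each source's CIM mapping, so treat the field name as an assumption):

| tstats `security_content_summariesonly` count from datamodel=Network_Traffic where sourcetype IN ("cp_log", "cisco:asa", "pan:traffic") All_Traffic.action="blocked" by sourcetype, All_Traffic.rule
| rename All_Traffic.rule as block_reason
| sort - count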
I'm trying to track the duration of user sessions to a server. I want to know WHICH users are connecting, and how long each session is. The problem is, with multiple users, I'm getting nested transactions, where USER001 joins but USER004 leaves, and that creates an event. I want it to ONLY look at scenarios in which the same user that joins also leaves. I can't seem to get it to do this.

EventCode=44 is the event code for these particular events I want to track. UserXXID is a field extraction I've built to show each user ID, as it is not a standard username that Splunk automatically understood. The two primary types of logs I'm looking for are when a user has "joined" or "left" the event.

Here is the command I'm using:

host="XXComputer04" EventCode=44 | transaction startswith="joined" endswith="left" | eval Hours=duration/3600 | timechart count by UserXXID

Sample of the log entry I'm trying to parse:

LogName=Application EventCode=120 EventType=44 ComputerName=XXcomputer004 SourceName=EventXService Type=Information RecordNumber=1234427 Keywords=Classic TaskCategory=State Transition OpCode=Info Message= [0x0x]::ProcessLeave() (xxUSER002xx) left event33001

I have also tried simply | transaction USERXXID to keep unique user IDs together, and while that works, it then somehow ignores ALL "left event" messages and only shows "joined" for any given user. Any help would be appreciated!
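For reference, a minimal sketch of constraining the pairing per user, assuming the extracted field is consistently named UserXXID (SPL field names are case-sensitive): passing the field to transaction makes it pair "joined"/"left" only within the same user ID.

host="XXComputer04" EventCode=44
| transaction UserXXID startswith="joined" endswith="left"
| eval Hours=duration/3600
| timechart count by UserXXID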
This is the result, but it is still not what I am looking for. I have been trying some things on my end as well and I got a result that is close to what I want, but not quite:

index=os sourcetype=ps (tag=dcv-na-himem) NOT tag::USER="LNX_SYSTEM_USER" | timechart span=1m eval((sum(RSZ_KB)/1024/1024)) as Mem_Used_GB by USER useother=no WHERE max in top20 | sort USER desc | head 20

This displays the results in the way I am looking for, just not the right results. I am looking for the middle 20 instead of the top 20 or bottom 20. Is there a way or command to just display the middle 20 using the search query above?
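A minimal sketch of one way to get at "the middle 20", assuming users are ranked by their peak memory usage over the search window; the rank boundaries (21-40) are just an illustration, and the subsearch repeats the base search to pick which users to chart:

index=os sourcetype=ps (tag=dcv-na-himem) NOT tag::USER="LNX_SYSTEM_USER"
    [ search index=os sourcetype=ps (tag=dcv-na-himem) NOT tag::USER="LNX_SYSTEM_USER"
      | stats max(eval(RSZ_KB/1024/1024)) as peak_GB by USER
      | sort 0 - peak_GB
      | streamstats count as rank
      | where rank > 20 AND rank <= 40
      | fields USER ]
| timechart span=1m eval(sum(RSZ_KB)/1024/1024) as Mem_Used_GB by USER useother=no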
Thank you so much for the information.
@irfanarif It looks like the course "Intro to Superman Mission Control" has been discontinued as of April 10, 2025. There doesn't appear to be a direct replacement or renamed version listed at this time (see STEP | Splunk Training and Enablement Platform: Course and Class Details). "Intro to Mission Control: Superman Login Use Case" is listed among the sunsetting courses, which means it will no longer be available as a free eLearning option. https://www.splunk.com/en_us/pdfs/training/splunk-education-new-course-releases.pdf
Hi, I completed a course titled “Intro to Superman Mission Control” earlier, but it no longer appears in the free courses list. Could you please confirm if this course has been retired or renamed, and what its current version is (if any)?
Hello Splunk Community! Welcome to the first post of the Splunk Answers Content Calendar. This week, I'll be spotlighting three standout topics from the #Getting-Data-In board, sharing solutions from our experts and best practices to help you bring data into Splunk more effectively.

Here are some of the most popular topics that caught the community's attention, each one solved with insights and expertise from our Splunk experts!

1. Ensuring Consistent _time Extraction During File Indexing
This topic is about event timestamp parsing during indexing in Splunk Enterprise, specifically incorrect _time assignments after new events are added to a file. The user @punkle64 wants to make sure that Splunk consistently extracts the correct timestamp from the event data instead of using the file's modification time. Our brilliant experts helped the user troubleshoot the problem. @PickleRick provided the solution: by removing the datetime config from props.conf, Splunk falls back to its default timestamp extraction behavior.
Link to the original post

2. Index-Time Routing with Selective Cloning Based on Event Content
This question is about log routing and filtering using props.conf and transforms.conf in Splunk Enterprise, specifically: cloning logs to a distant heavy forwarder (HF), filtering out specific logs, and using index-time routing for selective forwarding and duplication of events. The user @Nicolas2203 is looking for guidance on whether this approach is reliable and correctly implemented. Our experts @livehybrid and @isoutamo provided several approaches that helped the user: explicit cloning, conditional override, and a cleaner approach using an input-level clone, noting that by default all events are cloned to both destinations.
Link to the original post

3. Splitting JSON Array into Multiple Events
This question is about parsing and event breaking of JSON arrays during data ingestion in Splunk. The user @ws is trying to split a JSON array into multiple distinct events, but Splunk is indexing the entire array as a single event, despite attempts to configure props.conf. Our experts @kiran_panchavat, @PickleRick, and @livehybrid helped the user arrive at his own solution by suggesting a props.conf configuration for properly splitting a JSON array into multiple events at index time. This configuration breaks the JSON array into multiple distinct events based on a key field, improving parsing and field extraction (a generic sketch of this kind of configuration appears after this post).
Link to the original post

Thanks to all our experts @PickleRick @livehybrid @kiran_panchavat and @isoutamo for sharing your Splunk knowledge, guiding users with clarity, and consistently going above and beyond. Your contributions truly make our Splunk Community smarter and more supportive every day!

Beyond Splunk Answers, the Splunk Community offers a wealth of valuable resources to deepen your knowledge and connect with other professionals. Here are some great ways to get involved and expand your Splunk expertise:
Role-Based Learning Paths: Tailored to help you master various aspects of the Splunk Data Platform and enhance your skills.
Splunk Training & Certifications: A fantastic place to connect with like-minded individuals and access top-notch educational content.
Community Blogs: Stay up-to-date with the latest news, insights, and updates from the Splunk community.
User Groups: Join meetups and connect with other Splunk practitioners in your area.
Splunk Community Programs: Get involved in exclusive programs like SplunkTrust, Super Users, and Answers Badges (coming soon!), where you can earn recognition and contribute to the community. And don’t forget, you can connect with Splunk users and experts in real-time by joining the Slack channel. Dive into these resources today and make the most of your Splunk journey!
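For reference, here is the generic props.conf sketch mentioned in topic 3 above for splitting a JSON array into separate events at index time. The sourcetype name is a placeholder and the regex assumes the data is a single JSON array of objects; it is not the exact configuration from the original thread:

[custom:json:array]
SHOULD_LINEMERGE = false
# Break the stream between array elements, so each {...} object becomes its own event
LINE_BREAKER = \}(\s*,\s*)\{
# Strip the array's opening and closing brackets from the first and last events
SEDCMD-strip_open_bracket = s/^\s*\[//
SEDCMD-strip_close_bracket = s/\]\s*$//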
Sorry - try with USER (instead of user)

index=os sourcetype=ps (tag=dcv-na-himem) NOT tag::USER="LNX_SYSTEM_USER"
``` Calculate memory used by each user each minute ```
| timechart span=1m eval((sum(RSZ_KB)/1024/1024)) as Mem_Used_GB by USER useother=false
``` Convert to a table ```
| untable _time USER Mem_Used_GB
``` Find memory usage in range ```
| where Mem_Used_GB >= 128 AND Mem_Used_GB <= 256
``` Find top 20 ```
| sort 20 Mem_Used_GB desc
``` Convert back to chart format ```
| xyseries _time USER Mem_Used_GB
From a nice bloke on reddit:

Example if you are using lookups normally:

| lookup my_lookup1.csv field1 OUTPUT outfield1
| lookup my_lookup2.csv field1 OUTPUT outfield2
| eval outfield = coalesce(outfield1,outfield2,"not found")
| eval tablesource = case(isnotnull(outfield1),"my_lookup1.csv", isnotnull(outfield2),"my_lookup2.csv", true(),"not found")

Example if you are using inputlookup:

| inputlookup my_lookup1.csv
| eval tablesource="my_lookup1.csv"
| inputlookup my_lookup2.csv append=true
| eval tablesource=coalesce(tablesource,"my_lookup2.csv")
I am not getting any results back with this search.  
I have a search where I am doing 2 inputlookups for 2 different lookups and appending them. Then I search them. Can I table the lookup name as a field for where the result was found? Thanks.
Hi @timgren

Something like this?

| eval row="<li>".result."</li>"
| stats values(row) as html_list
| eval html_list = "<ul>".mvjoin(html_list, "")."</ul>"
| table html_list

Full example:

| makeresults count=3
| streamstats count
| eval result = "result".count
| table result
| eval row="<li>".result."</li>"
| stats values(row) as html_list
| eval html_list = "<ul>".mvjoin(html_list, "")."</ul>"
| table html_list

Did this answer help you? If so, please consider:
Adding karma to show it was useful
Marking it as the solution if it resolved your issue
Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
Hi @ArunkumarKarmeg

To extract the complete user list with their associated roles and groups in AppDynamics, you can leverage the AppDynamics REST API. Since you've mentioned that you're already able to extract the user list using the API, you can then use the API to fetch the roles associated with each user. The AppDynamics API provides endpoints to retrieve user information and their associated roles. You can use the following API endpoints:

GET /controller/api/rbac/v1/users - to retrieve a list of users
GET /controller/api/rbac/v1/users/{userId}/roles - to retrieve the roles associated with a specific user

You can use a scripting language like Python to make API calls to these endpoints, first fetching the list of users and then iterating through the list to fetch the roles for each user. Here's a sample Python code snippet you could use as a starting point:

import requests

# AppDynamics controller details
controller_url = 'https://your-controller-url.com'
username = 'your-username'
password = 'your-password'

# Get the list of users
users_response = requests.get(f'{controller_url}/controller/api/rbac/v1/users', auth=(username, password))
users = users_response.json()

# Iterate through the users and fetch each user's details, including role membership
for user in users:
    user_id = user['id']
    roles_response = requests.get(f'{controller_url}/controller/api/rbac/v1/users/{user_id}', auth=(username, password))
    roles = roles_response.json()
    print(f"User: {user['name']}, Roles: {roles}")

Depending on your end goal you can update the Python accordingly, e.g. create a CSV or list out as required. For more information on the AppDynamics REST API, refer to the AppDynamics documentation.

Did this answer help you? If so, please consider:
Adding karma to show it was useful
Marking it as the solution if it resolved your issue
Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
I'd like to create a table of results and convert each row into an unordered bullet list using HTML. Such as:

| table result

(which has n number of results)

Then in the output, I want to show each row in an unordered bullet list:

<html>
<ul>
    <li> row.result.1
    <li> row.result.2
    <li> row.result.3
    etc...
</ul>
</html>

Possible?
Hi @fatsug

Since the previous message I've had an idea - I haven't been able to check it yet, but if you added a local/app.conf inside the PSC app with the following:

[shclustering]
deployer_push_mode = local_only

then I think it would push only local content from the PSC app on the deployer; I'm assuming this would exclude the large bin directory from the bundle? Might be worth a try!

Did this answer help you? If so, please consider:
Adding karma to show it was useful
Marking it as the solution if it resolved your issue
Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
Hi @tawfiq15

Are you able to share your helm chart so I can check over it and compare to the log, please? Below is the formatted version of the log to make it easier to read:

timestamp: 2025-05-06T13:50:00.857Z
level: error
file: helper/transformer.go:118
message: Failed to process entry
fields:
  otelcol.component.id: "filelog"
  otelcol.component.kind: "receiver"
  otelcol.signal: "logs"
  operator_id: "move"
  operator_type: "move"
  error: "move: field does not exist: attributes.uid"
  action: "send"
  entry.timestamp: "2025-05-06T13:49:09.153Z"
  time: "2025-05-06T13:49:09.153467683+00:00"
  log.file.path: "/var/log/containers/splunk-otel-collector-agent-46r6g_openshift-logging_otel-collector-1eb5729e9591a5a6b6b3142b8cbbd754b24f8239fad4d2df28c268cf8158e61e.log"
  stream: "stderr"
  logtag: "F"
  log: |
    2025-05-06T13:49:09.153Z error helper/transformer.go:118 Failed to process entry {
      "otelcol.component.id": "filelog",
      "otelcol.component.kind": "receiver",
      "otelcol.signal": "logs",
      "operator_id": "add",
      "operator_type": "add",
      "error": "evaluate value_expr: invalid operation: string + <nil> (1:18)\n | \"kube:container:\"+resource[\"k8s.container.name\"]\n | .................^",
      "action": "send",
      "entry.timestamp": "2025-05-06T13:48:59.854Z",
      "log.file.path": "/var/log/containers/splunk-otel-collector-agent-46r6g_openshift-logging_otel-collector-1eb5729e9591a5a6b6b3142b8cbbd754b24f8239fad4d2df28c268cf8158e61e.log",
      "stream": "stderr",
      "logtag": "F",
      "log": "github.com/open-telemetry/opentelemetry-collector-contrib/pkg/stanza/operator/transformer/move.(*Transformer).Process",
      "time": "2025-05-06T13:48:59.854
    }

Thanks
@NoSpaces

In general, there are four modes of deployer_push_mode:
- full
- merge_to_default
- local_only
- default_only

By default, the merge_to_default setting is enabled.

- If set to "full": Bundles all of the app's contents located in default/, local/, users/<app>/, and other app subdirs. It then pushes the bundle to the members. When applying the bundle on a member, the non-local and non-user configurations from the deployer's app folder are copied to the member's app folder, overwriting existing contents. Local and user configurations are merged with the corresponding folders on the member, such that member configuration takes precedence. This option should not be used for built-in apps, as overwriting the member's built-in apps can result in adverse behavior.

- If set to "merge_to_default": Merges the local and default folders into the default folder and pushes the merged app to the members. When applying the bundle on a member, the default configuration on the member is overwritten. User configurations are copied and merged with the user folder on the member, such that the existing configuration on the member takes precedence.

- If set to "local_only": This option bundles the app's local directory (and its metadata) and pushes it to the cluster. When applying the bundle to a member, the local configuration from the deployer is merged with the local configuration on the member, such that the member's existing configuration takes precedence. Use this option to push the local configuration of built-in apps, such as search. If used to push an app that relies on non-local content (such as default/ or bin/), these contents must already exist on the member.

- If set to "default_only": This option pushes only the app's default configuration; contents in the local folder on the deployer are not pushed. When applying the bundle to a member, the default configuration on the member is overwritten, while the member's existing local configuration is retained.

Based on your requirement you can change the deployer_push_mode. It is highly advisable to review the document below to gain a clear understanding of the behavior before implementing any changes.
https://docs.splunk.com/Documentation/Splunk/9.3.1/DistSearch/PropagateSHCconfigurationchanges#Choose_a_deployer_push_mode
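As a quick sketch of where the setting lives (app name, member URI, and credentials below are placeholders): deployer_push_mode is set per app in app.conf under the deployer's shcluster staging directory, and it takes effect on the next bundle push.

# On the deployer: $SPLUNK_HOME/etc/shcluster/apps/<app_name>/local/app.conf
[shclustering]
deployer_push_mode = merge_to_default

# Then push the bundle to the members:
# splunk apply shcluster-bundle -target https://<any_member>:8089 -auth admin:<password>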