All Posts


"Built by Splunk Works": Splunk Works is an internal initiative or team within Splunk focused on creating innovative, often experimental or community-driven apps and add-ons. Apps labeled "Built by S... See more...
"Built by Splunk Works": Splunk Works is an internal initiative or team within Splunk focused on creating innovative, often experimental or community-driven apps and add-ons. Apps labeled "Built by Splunk Works" are developed by Splunk employees but may not carry the same level of formal support or certification as mainstream Splunk apps (e.g., Splunk Enterprise Security or Splunk IT Service Intelligence). These apps are often exploratory, proof-of-concept, or niche solutions. So my above statement is not completely true.  Apps marked "Built by Splunk Works" are indeed created by Splunk employees, making them "official" in the sense that they originate from Splunk Inc. However, they may not always be Splunk Supported or Splunk Certified, which is what some users mean when they refer to "official" apps. Glad you mentioned this because that does make it slightly different than being built by Splunk, certified by Splunk.  
Both answers above are correct in their response. If you already knew this, I apologize beforehand, but one of the best ways to find out whether an app is a third-party app, an app built by Splunk, or an app built by the vendor of the product you are trying to ingest is to go to Splunkbase, look at the app, and check the "created by" field. If the answer is Splunk or Splunk Works, Splunk built the app. If it says Microsoft or something similar, you can assume Microsoft; if it says LAME Creations (just a hypothetical example), that means someone called LAME Creations built the app. Most of the apps built by Splunk were designed around a use case where Splunk worked with the actual vendor or something similar, which should give you some level of confidence this is the right app.

Unfortunately, the bigger issue is whether the app is STILL the current app recommended by Splunk. What I mean is that, over time, apps get recreated or rebranded into other apps, and that can still be a problem with Splunk-built apps, so I also like to look at the version history and see if the app is relatively current. If it is current, it means Splunk is still working on it, and that should also provide some level of confidence this is the right app. The last method is a mixture of using Google foo to search for what the community is using and asking on this forum - so you are already doing this. Hope this helps. The answer was generic, but it was just me sharing how I look at an app on Splunkbase to see if I should use it in my environment.
In Dashboard Studio, a single click interaction on the timechart can set both global_time.earliest (the start of the clicked bar) and global_time.latest (the end of the 2-hour bar) by using a token formula. Instead of relying on a second click, you'll compute global_time.latest as global_time.earliest + 2 hours. This ensures the exact 2-hour range is applied to other visualizations, mimicking the Search app's timeline filtering. This assumes you want 2-hour chunks. You can get crazy and tokenize the span=2h and then use that same token in the example I provide, but that is not the solution I am providing below.

Steps to Implement

1. Verify your timechart configuration:
   - Ensure your timechart uses a 2-hour span, as you mentioned (| timechart span=2h ...). This means each bar represents a 2-hour bucket (e.g., 10:00-12:00, 12:00-14:00).
   - In Dashboard Studio, confirm the visualization is set up as a Timechart (Area, Column, or Line) under the Visualization tab.

2. Set the global_time.earliest token:
   - You've already set a token interaction for global_time.earliest, but let's confirm it's correct. In Dashboard Studio's UI Editor:
     - Select your timechart visualization.
     - Go to the Interactions tab in the configuration panel.
     - Under On Click, add a Set Token action:
       Token Name: global_time.earliest
       Token Value: $result._time$ (this captures the start time of the clicked bar, e.g., 10:00 for a 10:00-12:00 bar).
   - This sets global_time.earliest to the timestamp of the clicked bar's start.

3. Calculate the global_time.latest token:
   - Instead of a second click, compute global_time.latest as global_time.earliest + 2 hours using a token formula. In the UI Editor:
     - Go to the same On Click interaction for the timechart.
     - Add a second Set Token action (below the global_time.earliest one):
       Token Name: global_time.latest
       Token Value: relative_time($global_time.earliest$, "+2h")
   - This uses Splunk's relative_time function to add 2 hours to the earliest timestamp (e.g., if earliest is 10:00, latest becomes 12:00).
   - Both tokens will now be set on a single click, defining the exact 2-hour range of the clicked bar.

4. Apply the tokens to other visualizations:
   - Ensure other visualizations in your dashboard use the global_time.earliest and global_time.latest tokens to filter their time ranges. For each visualization (e.g., table, chart):
     - Go to the Search tab in the configuration panel.
     - Set the Time Range to Custom and use:
       Earliest: $global_time.earliest$
       Latest: $global_time.latest$
   - Alternatively, modify the search query directly to include the token-based time range, e.g.:

     index=your_index earliest=$global_time.earliest$ latest=$global_time.latest$ | ...

5. Add a default time range (optional):
   - To prevent visualizations from breaking before a timechart click, set default values for the tokens. In the UI Editor:
     - Go to the Dashboard configuration (top-level settings).
     - Under Tokens, add:
       Token Name: global_time.earliest, Default Value: -24h@h (e.g., last 24 hours, snapped to hour).
       Token Name: global_time.latest, Default Value: now (current time).
   - This ensures other visualizations display data until the timechart is clicked.

6. Test the dashboard:
   - Save and preview the dashboard.
   - Click a timechart bar (e.g., representing 10:00-12:00).
   - Verify that:
     - global_time.earliest is set to the bar's start (e.g., 10:00).
     - global_time.latest is set to the bar's end (e.g., 12:00).
     - Other visualizations update to show data only for that 2-hour range.
Use the Inspect tool (click the three dots on a visualization > Inspect > Tokens) to debug token values if needed.

Why This Works
- Single Click: Using relative_time($global_time.earliest$, "+2h") avoids the need for a second click, as it calculates the end of the 2-hour bar based on the clicked time.
- Mimics the Search App: The Search app's timeline sets both earliest and latest times for a selected range. This solution replicates that by defining the full 2-hour bucket.
- Dashboard Studio Limitation: Dashboard Studio doesn't natively support range selection (like dragging over a timeline), so computing latest via a formula is the best approach.

Troubleshooting Tips
- Tokens Not Setting: If global_time.latest isn't updating, check the token syntax in the Source view (JSON). Ensure the relative_time function is correct: "value": "relative_time($global_time.earliest$, \"+2h\")".
- Time Format Issues: Ensure $result._time$ returns a timestamp in epoch format (seconds). If not, use strptime in the timechart search to format it, e.g., | eval _time=strptime(_time, "%Y-%m-%d %H:%M:%S").
- Visualization Not Updating: Confirm other visualizations reference $global_time.earliest$ and $global_time.latest$ correctly. Check their search queries in the Source view.
- Span Mismatch: If the timechart span changes (e.g., dynamically set), you may need to make the +2h offset dynamic. Let us know if your span varies for a custom solution.

Example JSON Snippet (Source View)
For reference, here's how the timechart's interaction might look in the dashboard's JSON (edit in Source view if needed):

    {
      "visualizations": {
        "viz_timechart": {
          "type": "splunk.timechart",
          "options": { ... },
          "dataSources": { "primary": "ds_timechart" },
          "eventHandlers": [
            {
              "type": "drilldown.setToken",
              "options": {
                "token": "global_time.earliest",
                "value": "$result._time$"
              }
            },
            {
              "type": "drilldown.setToken",
              "options": {
                "token": "global_time.latest",
                "value": "relative_time($global_time.earliest$, \"+2h\")"
              }
            }
          ]
        }
      }
    }
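For completeness, here is a sketch of how one of the downstream panels could pick up the two tokens directly in its data source definition in Source view. The data source name ds_detail_table and the query are placeholders I've made up for illustration, so adapt them to your dashboard:

    "dataSources": {
      "ds_detail_table": {
        "type": "ds.search",
        "options": {
          "query": "index=your_index | stats count by host",
          "queryParameters": {
            "earliest": "$global_time.earliest$",
            "latest": "$global_time.latest$"
          }
        }
      }
    }

With the default token values from step 5 in place, the panel renders on load and then narrows to the clicked 2-hour window.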
Is it possible to get the upstream service details (the calling service) for an inferred service through metrics? Through the inbuilt dimensions there is no option. Can someone suggest if it's possible?
You are going to have to contact Splunk Support for any older versions not on their website. I apologize for that inconvenience.

It is your environment and you need to do what you and your management team feel are the best things, but as a person employed in the Cyber Security arena, I feel that I should at least mention the following. None of this applies to your wanting to run 9.0.x; it was the Splunk 7 and Splunk 8 forwarders that raised my antennae.

Running 7.x (and to a lesser extent 8.x) UFs introduces significant risks, especially since Splunk 7.x reached End of Support (EOS) between October 2020 and October 2021, and 8.2.x is also at or past end of life. Here are the key implications:

Operational Risks:
- Limited Functionality: Splunk 7.x UFs lack support for newer features like data compression, advanced SSL configurations, or Splunk-to-Splunk (S2S) Protocol V4, which 9.x indexers use by default. This can cause performance issues or data ingestion failures if configurations mismatch. For example, 7.x UFs may not handle modern event-breaking or parsing rules in 9.0.x apps.
- Management Challenges: If you use a Deployment Server (DS), it must be 9.0.x or newer to manage 7.x/8.x UFs. Older DS versions may fail to deploy apps to newer UFs, complicating configuration management.
- Stability Issues: 7.x UFs may encounter bugs or crashes on modern OSes (e.g., newer Linux kernels), as they were designed for older environments. Splunk Support won't provide fixes for EOS versions, leaving you to work around issues manually.

Security Risks:
- Vulnerabilities: 7.x UFs miss critical security patches available in 8.x and 9.x, exposing your environment to known vulnerabilities (e.g., CVE fixes). Without patches, UFs could be exploited, especially if they're on internet-facing systems or handle sensitive data.
- SSL/TLS Weaknesses: 7.x UFs use outdated SSL/TLS protocols, which may conflict with 9.0.x's stricter security defaults (e.g., TLS 1.2/1.3). This can lead to connection failures or insecure data transmission. 8.x UFs are less problematic but still lack the latest TLS enhancements in 9.x.
- Compliance Issues: Running EOS software like 7.x may violate compliance requirements (e.g., PCI DSS, HIPAA), as auditors often flag unsupported software as non-compliant.

Recommendations for UFs:
- Upgrade UFs to 9.0.x: Plan to upgrade your 7.x and 8.x UFs to 9.0.x (or at least 8.2.x) to align with your indexer. UFs are lightweight, and upgrades are straightforward. Start with a few test UFs to validate compatibility with your 9.0.x indexer and DS (if used). Use the Deployment Server to automate UF upgrades, ensuring serverclass.conf matches the new version.
- Prioritize 7.x First: 7.x UFs are the most critical to upgrade due to their EOS status and severe security risks. 8.x UFs are less urgent but should be updated to avoid future EOS issues.
- Check Compatibility: Confirm UF OS compatibility with 9.0.x (e.g., AL2023 or supported Windows versions) using the Splunk System Requirements.
- Interim Step: If upgrading all UFs immediately isn't feasible, ensure your 9.0.x indexer's inputs.conf supports legacy S2S protocols (e.g., V3 for 7.x UFs) by setting connectionTimeout or readTimeout to accommodate older clients. However, this is a temporary workaround.

Why Upgrade UFs? Aligning UFs with 9.0.x ensures optimal performance, security, and supportability. Splunk 9.0.x introduces features like ingest actions and enhanced TLS validation, which 7.x UFs can't leverage.
Upgrading avoids the risk of data loss or ingestion delays due to protocol mismatches or unpatched bugs. Splunk Support can assist with 9.0.x issues, but not with 7.x, reducing your troubleshooting burden.
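If it helps, a quick way to inventory which forwarder versions are actually connecting to your indexer is to search the tcpin_connections metrics in _internal. This is only a sketch; the field names (hostname, sourceIp, version, os, fwdType) are what I see in my environments and may differ slightly between releases:

    index=_internal sourcetype=splunkd source=*metrics.log* group=tcpin_connections
    | stats latest(version) AS forwarder_version latest(os) AS os latest(fwdType) AS forwarder_type BY hostname sourceIp
    | sort forwarder_version

Anything reporting 7.x or 8.x under forwarder_version goes on the upgrade list above.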
And why would you go for 9.0 which is out of support? I'd strongly advise against that. Unless you have a very good reason for doing so (and a very very good support contract, other than us, mere mortals) it's unwise to keep your environment at an unsupported version (which applies to the current 8.2 as well).
The two previous posts are both good answers, but since you stated you are new to Splunk I decided to give you a thorough write-up that explains how to check each of the areas that have been called out, to see if they are the problem that is causing your error.

The warning you're seeing (The TCP output processor has paused the data flow) means your forwarder (at MRNOOXX) is unable to send data to the receiving Splunk instance (at 192.XXX.X.XX), likely because the receiver is not accepting data or the connection is blocked. This can stall data indexing, so let's troubleshoot it step-by-step. Here's a comprehensive checklist to resolve the issue:

1. Verify the receiver is running:
   - Ensure the Splunk instance at 192.XXX.X.XX (likely an indexer) is active.
   - On the receiver, run $SPLUNK_HOME/bin/splunk status to confirm splunkd is running. If it's stopped, restart it with $SPLUNK_HOME/bin/splunk restart.

2. Confirm the receiving port is open:
   - The default port for Splunk-to-Splunk forwarding is 9997.
   - On the receiver, check if port 9997 is listening: netstat -an | grep 9997 (Linux) or netstat -an | findstr 9997 (Windows).
   - Verify the receiver's inputs.conf has a [splunktcp://9997] stanza. Run $SPLUNK_HOME/bin/splunk cmd btool inputs list splunktcp --debug to check. Ensure disabled = 0.

3. Test network connectivity:
   - From the forwarder, test connectivity to the receiver's port 9997: nc -vz -w1 192.XXX.X.XX 9997 (Linux) or telnet 192.XXX.X.XX 9997 (Windows). If it fails, check for firewalls or network issues.
   - Confirm no firewalls are blocking port 9997 on the receiver or along the network path.

4. Check the forwarder configuration:
   - On the forwarder, verify outputs.conf points to the correct receiver IP and port. Check $SPLUNK_HOME/etc/system/local/outputs.conf or app-specific configs (e.g., $SPLUNK_HOME/etc/apps/<app>/local/outputs.conf). Example:

     [tcpout:default-autolb-group]
     server = 192.XXX.X.XX:9997
     disabled = 0

   - Ensure no conflicting outputs.conf files exist (run $SPLUNK_HOME/bin/splunk cmd btool outputs list --debug).

5. Inspect receiver health:
   - The error suggests the indexer may be overwhelmed, causing backpressure. Use the Splunk Monitoring Console (on a Search Head or standalone instance) to check:
     - Monitoring Console > Indexing > Queue Throughput, to see if queues (e.g., parsing, indexing) are full (100% fill ratio).
     - Resource Usage > Machine, for CPU, memory, and disk I/O (IOPS) on the indexer. High usage may indicate bottlenecks.
   - Run this search on the Search Head to check queue status: | rest /services/server/introspection/queues splunk_server=192.XXX.X.XX | table title, current_size, max_size, fill_percentage.
   - Ensure the indexer has sufficient disk space (df -h on Linux or dir on Windows) and isn't exceeding license limits (check Monitoring Console > Licensing).

6. Check for SSL mismatches:
   - If SSL is enabled (e.g., useSSL = true in outputs.conf on the forwarder), ensure the receiver's inputs.conf has ssl = true.
   - Verify certificates match in $SPLUNK_HOME/etc/auth/ on both systems.
   - Check splunkd.log on the receiver for SSL errors: grep -i ssl $SPLUNK_HOME/var/log/splunk/splunkd.log.

7. Review logs for clues:
   - On the forwarder, check $SPLUNK_HOME/var/log/splunk/splunkd.log for errors around the TCP warning (search for "TcpOutputProc" or "blocked"). Look for queue or connection errors.
   - On the receiver, search splunkd.log for errors about queue fullness, indexing delays, or connection refusals (e.g., grep -i "192.XXX.X.XX" $SPLUNK_HOME/var/log/splunk/splunkd.log).
   - Share any relevant errors to help narrow it down.
8. Proactive mitigation:
   - If the issue is intermittent (e.g., due to temporary indexer overload), consider buffering data on the forwarder during blockages - either by enabling persistent queues on network or scripted inputs (persistentQueueSize in inputs.conf) or by increasing the output queue size in outputs.conf:

     [tcpout]
     maxQueueSize = 100MB

   - Restart the forwarder after changes.

Architecture and version details - could you share:
- Your Splunk version (e.g., 9.3.1)? Run $SPLUNK_HOME/bin/splunk version.
- Your setup (e.g., Universal Forwarder to a single indexer, or Heavy Forwarder to an indexer cluster)?
- Is the receiver a standalone indexer, Splunk Cloud, or part of a cluster?
This will help tailor the solution, as queue behaviors vary by version and architecture.

Quick fixes to try:
- Restart both the forwarder and receiver to clear temporary issues: $SPLUNK_HOME/bin/splunk restart.
- Simplify outputs.conf on the forwarder to point to one indexer (e.g., server = 192.XXX.X.XX:9997) and test.
- Check indexer disk space and license usage immediately, as these are common culprits.

Next steps: Share the output of the network test (nc or telnet), any splunkd.log errors, and your architecture details. If you have access to the Monitoring Console, let us know the queue fill percentages or resource usage metrics.
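One more quick check from the forwarder side that sometimes saves time: the CLI can list which configured receivers the forwarder is actually connected to. A minimal sketch (it will prompt for the forwarder's admin credentials):

    $SPLUNK_HOME/bin/splunk list forward-server

Destinations listed as configured but inactive point back at steps 1-3 above (receiver down, port closed, or firewall/network issues).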
For some reason, Splunk has stopped receiving data. It could be because of any of several reasons. Check the logs on the indexer for possible explanations. Also, the Monitoring Console may offer clues - look for blocked indexer queues.
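If you'd rather check for blocked queues with a search than by clicking through the Monitoring Console, here is a rough sketch against the indexer's own metrics.log; replace the host filter placeholder with your receiver's hostname:

    index=_internal host=<your_indexer> source=*metrics.log* sourcetype=splunkd group=queue
    | eval fill_pct=round(current_size_kb/max_size_kb*100,1)
    | stats max(fill_pct) AS max_fill_pct BY name
    | sort - max_fill_pct

Queues sitting near 100% (typically indexqueue or parsingqueue) usually mean the indexer itself is the bottleneck rather than the network.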
Hi @yash_eng

This warning indicates your forwarder cannot send data to the receiving Splunk instance at 192.XXX.X.XX because the connection is blocked or the receiver is not accepting data. I'd recommend checking the following:

- Verify the receiver is running - ensure the Splunk instance at 192.XXX.X.XX is active and accessible.
- Confirm the receiving port is open - the default is 9997 for Splunk-to-Splunk forwarding - can you confirm this is listening on the receiving system?
- Check network connectivity - test if you can reach the destination IP from your forwarder machine - can you perform a netcat check (e.g. nc -vz -w1 192.x.x.x 9997) to prove you can connect from source to destination?
- Verify the receiver configuration - ensure the receiving Splunk instance has inputs configured to accept data on the expected port. You can use btool with "$SPLUNK_HOME/bin/splunk cmd btool inputs list splunktcp".

Can you give some more information on your architecture / deployment setup? This might help pinpoint the possible issue. Some common causes include: the receiver's Splunk service being down, a firewall blocking the connection, an incorrect receiving port configuration, network connectivity issues, the receiver's disk being full or other resource constraints, or SSL misconfiguration. If you're able to show us additional logs around the other errors, that might also help.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
Hi, I am having an issue trying to make a version of the Search app's filtering timeline work in my dashboard in Dashboard Studio with other visualizations. I have set a token interaction on click to update the global_time.earliest value to the time that is clicked on the chart. I, however, am running into an issue where I cannot set the global_time.latest value by clicking again on the timechart. If I set up a second token interaction to get the latest time, it just sets it to the same as the earliest, all on the first click. I'm trying to filter it down to each bar's representation on the timechart, which is 2 hours (| timechart span=2h ...). Like the Search app's version, this timechart is meant to be a filtering tool that will only filter down the search times of the other visualizations once it is set. Setting the earliest token works perfectly fine; it's all just about the latest. I just need to know how, or if, it is possible. Thank you!!
Contact Splunk Support for versions not available on the web site.
Hey mates, I'm new to Splunk, and while ingesting data from my local machine to Splunk this message shows up: "The TCP output processor has paused the data flow. Forwarding to host_dest=192.XXX.X.XX inside output group default-autolb-group from host_src=MRNOOXX has been blocked for blocked_seconds=10. This can stall the data flow towards indexing and other network outputs. Review the receiving system's health in the Splunk Monitoring Console. It is probably not accepting data." Kindly help me. Thank you
Hi There, We have a standalone Splunk instance v8.2.2.1 deployed on a RHEL server which is EOL; we wish to migrate to a newer OS, Amazon Linux (AL) 2023, rather than performing an in-place upgrade. Instead of using the most recent version of Splunk Enterprise, we still wish to adopt a more conservative approach and choose 9.0.x (we have UFs on older versions, 7.x and 8.x). Please let me know where I can download the 9.0.x version of Splunk Enterprise, as it's not here: https://www.splunk.com/en_us/download/previous-releases.html   Thanks!
Hi @tscroggins  I have appended the intermediate and root certs to cacert.pem. After this, the error is not observed.
As many mentioned on this post, even if I was able to get Splunk to read the log file, it would end up with duplicate logs, or I might lose events if the UF reads too slowly. The solution is to write a custom script that can handle the log's behaviour of overwriting the oldest events once it is "full". This custom script allows Splunk to ingest events and can help handle the duplicate logs. As for the loss of events through overwriting, I don't have a bulletproof solution beyond ensuring the events are ingested into Splunk faster than they are written. You should consider just using the script to tail the log and write a new log file to aid in this, if necessary (a sketch of wiring such a script in as a scripted input follows below). Many thanks for the insights on UF behaviour for this weird log.
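In case it is useful to anyone else, one way to wire such a custom script into the UF is as a scripted input, so Splunk runs it on a schedule and ingests whatever it prints to stdout. This is only a sketch with hypothetical names - read_circular_log.sh, the sourcetype, and the index are all placeholders (inputs.conf in an app on the forwarder):

    [script://./bin/read_circular_log.sh]
    interval = 60
    sourcetype = my:circular:log
    index = main
    disabled = 0

The script itself still has to remember where it left off in the circular file (for example, the timestamp of the last event it emitted) so that each run only prints new events.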
Hi _olivier_, Yes, of course: on your server go to the Monitoring Console, then under the Settings menu select "General Setup", and there you can set the server roles.    Kind regards. 
Hello @Satyams14, If you plan to stream WAF logs to Event Hubs and wish to use a Splunk Supported add-on, you can also consider using the Splunk Add-on for Microsoft Cloud Services (#3110 - https://splunkbase.splunk.com/app/3110). It is a supported add-on and can fetch logs directly from the Event Hub. Thanks, Tejas.   --- If the above solution helps, an upvote is appreciated..!! 
Hi @_olivier_ , don't attach a new question to an old one, even if it's on the same topic: open a new request, so you will be more likely to receive an answer. Ciao. Giuseppe
Hi @Satyams14

This app is created by Splunk (but not a Splunk-supported app) - not created by Microsoft. Having said that, I believe it IS the "go-to" app for Azure feeds/onboarding. For a good overview on getting-data-in (GDI) for Azure, check out https://docs.splunk.com/Documentation/SVA/current/Architectures/AzureGDI (which lists this app).

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
Hi @Satyams14 , as you can read at https://splunkbase.splunk.com/app/3757, this isn't an official app by Splunk or Microsoft:
- it was created by "Splunk Works",
- it isn't supported, even if it has 64,900 downloads,
- and you can find it on GitHub.
Ciao. Giuseppe