All Posts

Hi @LizAndy123

When configuring your alert, select it to run "For each result" under the Trigger setting, as per the screenshot below.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
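If you prefer to manage the alert as configuration rather than through the UI, the same behaviour maps to the alert's digest mode in savedsearches.conf. This is only a minimal sketch - the stanza name, search, and schedule below are placeholders:

    [Send results to Mattermost]
    search = index=main sourcetype=app_logs | table text
    # 0 = trigger the alert action once per result ("For each result")
    # 1 = trigger once for the whole result set ("Once", the default)
    alert.digest_mode = 0
    enable_sched = 1
    cron_schedule = */15 * * * *

With per-result triggering, each invocation of the alert action (e.g., the Mattermost webhook) gets its own $result.fieldname$ values.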
So I have successfully configured some reports and alerts that send the $result to Mattermost. My question is how to deal with a search which returns maybe 5 results?

Example - the current search may return: Text : Hello World. How do I pass each individual $result$? The search could return Hello World, followed by Hello World2, followed by Hello World3.

If I put $result.text$ it prints Hello World, but if I then want to show the second or third result... is it possible through this?
Ok, thank you. I thought there was something up with the $'s; they would be accepted as a static value instead of a predefined token when setting them up in the interactions menu, but the logic wouldn't work. And the same seems to be true for the second point: the eval statement just did not work at all as intended. I was wondering why this didn't work.
Thank you for the timely response. I tried what you recommended and ran into a few issues that I was not able to diagnose or fix with the troubleshooting tips provided.  I got everything looking exactly as you said. However, $result._time$ doesn't seem to evaluate to a time whatsoever; when I check the value, it is literally just "$result._time$".  The latest time value gets set to "relative_time(-7d@h", which appears incomplete as shown. I get an error on the visualization saying Invalid earliest_time, and both earliest and latest show invalid values. When I tried to put in the troubleshooting eval command you recommended, it did not fix the issue. The time should be coming in correctly.
@LAME-Creations @LOP22456

Please do not set both INDEXED_EXTRACTIONS and KV_MODE = json. See the props.conf docs for more info - https://docs.splunk.com/Documentation/Splunk/latest/Admin/Propsconf

"When 'INDEXED_EXTRACTIONS = JSON' for a particular source type, do not also set 'KV_MODE = json' for that source type. This causes the Splunk software to extract the JSON fields twice: once at index time, and again at search time."

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
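As a rough illustration of how that split usually looks in practice (the sourcetype name here just mirrors the one used elsewhere in this thread; adjust to your own):

    # props.conf on the HF / indexing tier - index-time JSON extraction
    [fortigate_log]
    INDEXED_EXTRACTIONS = json

    # props.conf on the search head(s) for the same sourcetype - turn off
    # search-time JSON auto-extraction so the fields are not extracted twice
    [fortigate_log]
    KV_MODE = none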
Just posting to confirm this, though I've never written in. Running into it now: generating a summary index is changing the value type to (AFAICT) a string, meaning what was previously a multivalue of 5136, 5136 - searchable via EventCode=5136 - is now broken in the summary index, where the value is something like "5136\n5136", which is not helpful at all.
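One search-time workaround for this kind of flattened multivalue - just a sketch, with placeholder index/source names and assuming the codes really are numeric - is to rebuild the multivalue before filtering:

    index=my_summary source="si_eventcode_summary"
    | makemv tokenizer="(\d+)" EventCode
    | search EventCode=5136

A longer-term fix would be to normalise the field when the summary is generated, but the above at least makes the already-summarised data searchable again.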
Hi, I’m probably not understanding the question completely, so feel free to provide a specific example if you want. One thing I will point out is that when thinking about what information you can get from an inferred service--it’s limited to what you can see from the trace spans that are generated when an instrumented service calls that uninstrumented inferred service. Here is a screen shot of a service-centric view of an inferred service and what you can see about it.  
Thanks for sharing the details of your FortiGate log parsing issue in Splunk Cloud! It sounds like your Logstash server is combining multiple FortiGate logs into a single file, which is then sent to your Heavy Forwarder (HF) and ingested into Splunk Cloud as multi-line events (sometimes 20+ logs per event). Your props.conf configuration isn't breaking these into individual events per host, likely due to an incorrect LINE_BREAKER regex or misconfigured parsing settings. The _grokparsefailure tag suggests additional parsing issues, possibly from Logstash or Splunk misinterpreting the syslog format. Below is a solution to parse these logs into individual events per FortiGate host, tailored for Splunk Cloud and your HF setup.

Why the current configuration isn't working:
- LINE_BREAKER issue: Your LINE_BREAKER = }(\s*)\{ aims to split events between JSON objects (e.g., } {), but it may not account for the syslog headers or whitespace correctly, causing Splunk to treat multiple JSON objects as one event. The regex might also be too restrictive or not capture all cases.
- SHOULD_LINEMERGE: Setting SHOULD_LINEMERGE = false is correct to disable line merging, but without a precise LINE_BREAKER, Splunk may still fail to split events.
- Logstash aggregation: Logstash is bundling multiple FortiGate logs into a single file, and the _grokparsefailure tag indicates Logstash's grok filter (or Splunk's parsing) isn't correctly processing the syslog format, leading to malformed events.
- Splunk Cloud constraints: In Splunk Cloud, props.conf changes must be applied on the HF, as you don't have direct access to the indexers. The current configuration may not be properly deployed or tested.

Solution: parse the multi-line FortiGate logs into individual events. To break the multi-line events into individual events per FortiGate host, you'll need to refine the props.conf configuration on the Heavy Forwarder and ensure proper event breaking at ingestion time. Since the logs are JSON with syslog headers, you can use Splunk's JSON parsing capabilities and a corrected LINE_BREAKER to split events.

Step 1: Update props.conf on the Heavy Forwarder. Modify the props.conf file on your HF to correctly break events and parse the JSON structure. Place this in $SPLUNK_HOME/etc/system/local/props.conf or an app's local directory (e.g., $SPLUNK_HOME/etc/apps/<your_app>/local/props.conf):

    [fortigate_log]
    SHOULD_LINEMERGE = false
    LINE_BREAKER = ([\r\n]+)(?=\{"log":\{"syslog":\{"priority":\d+\}\})
    INDEXED_EXTRACTIONS = json
    KV_MODE = json
    TIMESTAMP_FIELDS = timestamp
    TIME_FORMAT = %Y-%m-%dT%H:%M:%SZ
    BREAK_ONLY_BEFORE = ^\{"log":\{"syslog":\{"priority":\d+\}\}
    TRUNCATE = 10000
    category = Structured
    disabled = false
    pulldown_type = true

Explanation:
- LINE_BREAKER = ([\r\n]+)(?=\{"log":\{"syslog":\{"priority":\d+\}\}): Splits events on newlines (\r\n) followed by the start of a new JSON object (e.g., {"log":{"syslog":{"priority":189}}). The positive lookahead (?=) ensures the JSON start is not consumed, preserving the event.
- SHOULD_LINEMERGE = false: Prevents Splunk from merging lines, relying on LINE_BREAKER for event boundaries.
- INDEXED_EXTRACTIONS = json: Automatically extracts JSON fields (e.g., host.hostname, fgt.srcip) at index time on the HF, reducing search-time parsing issues.
- KV_MODE = json: Ensures search-time field extraction for JSON fields, complementing index-time parsing.
- TIMESTAMP_FIELDS = timestamp: Uses the timestamp field (e.g., 2025-06-23T15:20:45Z) for event timestamps.
- TIME_FORMAT = %Y-%m-%dT%H:%M:%SZ: Matches the timestamp format in the logs.
- BREAK_ONLY_BEFORE: Reinforces event breaking by matching the start of a JSON object, as a fallback if LINE_BREAKER struggles.
- TRUNCATE = 10000: Ensures large events (up to 10,000 characters) aren't truncated, accommodating multi-log events.
- category and pulldown_type: Improves Splunk Cloud's UI compatibility for source type selection.

Step 2: Deploy and restart the Heavy Forwarder.
- Deploy props.conf: Place the updated props.conf in $SPLUNK_HOME/etc/system/local/ or a custom app directory on the HF. If using a custom app, ensure it's deployed via a Deployment Server or manually copied to the HF.
- Restart the HF: On the Windows HF, open a Command Prompt as Administrator, navigate to $SPLUNK_HOME\bin (e.g., cd "C:\Program Files\Splunk\bin"), and run: splunk restart
- Note: In Splunk Cloud, you can't modify indexer configurations directly. The HF applies these parsing rules before forwarding to the Splunk Cloud indexers.

Step 3: Verify event breaking. Run a search in Splunk Cloud to confirm events are split correctly:

    index=<your_index> sourcetype=fortigate_log
    | stats count by host.hostname

- Check that each FortiGate host (from host.hostname) appears as a separate event with the correct count. Each event should correspond to one JSON log entry (e.g., one per host.hostname like "redact").
- If events are still merged, inspect $SPLUNK_HOME/var/log/splunk/splunkd.log on the HF for parsing errors (e.g., grep -i "fortigate_log" splunkd.log).

Step 4: Address the _grokparsefailure tag. The _grokparsefailure tag suggests Logstash's grok filter isn't correctly parsing the FortiGate syslog format, which may contribute to event merging. Since you can't modify the Logstash setup, you can mitigate this in Splunk by overriding the Logstash tags. In props.conf, add a transform to remove the _grokparsefailure tag and ensure clean parsing:

    [fortigate_log]
    ...
    TRANSFORMS-remove_grok_failure = remove_grokparsefailure

In $SPLUNK_HOME/etc/system/local/transforms.conf:

    [remove_grokparsefailure]
    REGEX = .
    FORMAT = tags::none
    DEST_KEY = _MetaData:tags

Restart the HF after adding the transform. This clears the _grokparsefailure tag, ensuring Splunk doesn't inherit Logstash's parsing issues.

Step 5: Optimize the FortiGate integration (optional).
- Install the Fortinet FortiGate Add-On: If not already installed, add the Fortinet FortiGate Add-On for Splunk on the HF and Search Head to improve field extraction and CIM compliance. Install it on the HF for index-time parsing (already handled by INDEXED_EXTRACTIONS = json) and on the Search Head for search-time field mappings and dashboards.
- Verify the syslog configuration: Ensure FortiGate devices send logs to Logstash via UDP 514 or TCP 601, as per Fortinet's syslog standards.
- Check the Logstash output: If possible, verify Logstash's output plugin (e.g., Splunk HTTP Event Collector or TCP output) is configured to send individual JSON objects without excessive buffering, which may contribute to event merging.

Troubleshooting tips:
- Test parsing: Ingest a small sample log file via the HF's Add Data wizard in Splunk Web to test the props.conf settings before processing live data.
- Check event boundaries: Run index=<your_index> sourcetype=fortigate_log | head 10 and verify each event contains only one JSON object with a unique host.hostname.
- Logstash buffering: If Logstash continues to bundle logs, consider asking your Logstash admin to adjust the output plugin (e.g., the splunk output with HEC) to flush events more frequently, though you noted this isn't changeable.
- Splunk Cloud Support: If parsing issues persist, contact Splunk Cloud Support to validate the HF configuration or request assistance with indexer-side parsing (though the HF should handle most parsing).
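One extra sanity check that can help here (a sketch only - the index name is a placeholder): if events are still being glued together, their linecount will be greater than 1, so the distribution below shows at a glance whether the event breaking is working:

    index=<your_index> sourcetype=fortigate_log earliest=-1h
    | stats count BY linecount
    | sort - linecount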
Hi @LAME-Creations

Please can you confirm if you were able to test, and have working, the example and process provided? Unfortunately this reads a lot like an AI hallucination, because it looks to mix Classic XML and Dashboard Studio approaches to tokens. For example, it is not possible to put $ into the value field for a token, and it should be row._time.value, not $result._time$.

If you have this as a working approach then please can you share a working version, as I wasn't aware it was possible to do evals in drilldown.setToken.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
You have a thread from 2020 that states they fixed their problem. I am pretty sure the reason the solution works is similar to what I am going to suggest here. I have found (no scientific evidence to support it) that sometimes the conf files just seem to be buggered, and if you reset them, it starts to work. I swear the settings are the same before the reset and after, but for some reason it works. Maybe it's voodoo or whatever, but it has worked for me in the past. Here is a breakdown of quickly resetting the configurations that you need.

The warning suggests the SH is trying to query a non-existent or misconfigured search peer, possibly due to stale or incorrect settings in outputs.conf or related configuration files. Resetting outputs.conf clears any corrupted or conflicting settings (e.g., incorrect server names, ports, or SSL configurations) that might be preventing the SH from recognizing the IDX as a valid peer. Restarting Splunk ensures a clean state, and re-adding the peer re-establishes the connection with fresh, verified settings.

Steps to reset and reconfigure:

- Back up configuration files: Before making changes, back up your Splunk configuration files to avoid losing custom settings. On the Search Head, copy the $SPLUNK_HOME\etc\system\local directory (e.g., C:\Program Files\Splunk\etc\system\local) to a safe location (e.g., C:\SplunkBackup).

- Delete or rename outputs.conf: Navigate to $SPLUNK_HOME\etc\system\local on the Search Head (e.g., C:\Program Files\Splunk\etc\system\local) and locate outputs.conf. If it exists, rename it to outputs.conf.bak (or delete it if you're sure no critical settings are needed). Note: if outputs.conf is in an app directory (e.g., $SPLUNK_HOME\etc\apps\<app_name>\local), check there too and rename/delete it. This ensures Splunk starts with default output settings, clearing any misconfigurations.

- Restart Splunk on the Search Head: Open a Command Prompt as Administrator on the Windows SH host, navigate to $SPLUNK_HOME\bin (e.g., cd "C:\Program Files\Splunk\bin"), and run: splunk restart. This restarts the Splunk service, applying the reset configuration.

- Verify the Indexer configuration: Ensure the Indexer is configured to receive data on the correct port (default: 9997). On the Indexer, check $SPLUNK_HOME\etc\system\local\inputs.conf for a [splunktcp://9997] stanza:

    [splunktcp://9997]
    disabled = 0

  If missing, add it and restart the Indexer (splunk restart). Confirm port 9997 is open: netstat -an | findstr 9997 (should show LISTENING).

- Reconfigure the search peer: On the Search Head, log into the Splunk Web UI as an admin and go to Settings > Distributed Search > Search Peers. Remove the existing Indexer peer (select the IDX and click Remove), then add the Indexer as a new peer: click Add New and enter the Indexer's details - Peer URI: https://<Indexer_IP>:8089 (e.g., https://192.168.1.100:8089); Authentication: use the SH admin credentials or a pass4SymmKey (if configured in distsearch.conf); Replication settings: ensure they match your setup (usually default). Save and wait for the status to show Healthy. Alternatively, use the CLI:

    splunk add search-server https://<Indexer_IP>:8089 -auth <admin>:<password> -remoteUsername <admin> -remotePassword <password>

- Test the search: Run your search again from the SH: index=_internal. Verify results are returned without the warning. Check the Monitoring Console (Settings > Monitoring Console > Search > Distributed Search Health) to confirm the peer is active and responding.

Additional tips:
- Check network connectivity: Ensure the SH can reach the IDX on port 8089 (management) and 9997 (data). Run telnet <Indexer_IP> 8089 and telnet <Indexer_IP> 9997 from the SH host. If blocked, check Windows Firewall or network settings.
- Verify SSL settings: If using SSL, ensure distsearch.conf on the SH and inputs.conf on the IDX align (e.g., ssl = true). Check $SPLUNK_HOME\var\log\splunk\splunkd.log on both hosts for SSL errors.
- Confirm Splunk versions: Your SH and IDX should be on compatible versions (e.g., SH 8.2.2.1 or newer, IDX the same or older). Run splunk version on both to confirm. If mismatched, upgrade the SH first.
- Debug logs: If the issue persists, check $SPLUNK_HOME\var\log\splunk\splunkd.log.
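If you also want to confirm the peer list from the command line on the SH afterwards, here is a sketch (substitute your own admin credentials); the btool call shows exactly which distsearch.conf settings are actually in effect:

    splunk list search-server -auth admin:<password>
    splunk btool distsearch list --debug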
Actually, compression has been around for quite a long time and 7.x forwarders should support it. Also, your protocol levels are way off. Not to mention the bogus requirement to use 9.0+ DS to manage 7/8 version UFs. Please refrain from posting AI-generated content.
Hello, We have multiple fortigate devices forwarding to a logstash server that is storing all the device's logs in 1 file (I can't change this unfortunately). This is then forwarding to our HF, and then to Splunk Cloud. This then enters splunk with sometimes 20+ logs in a single event, and I can't get them to parse out into individual events by host. Below are samples of 2 logs, but in a single event there could be 20+ logs - I cannot get this to parse correctly out into each event per host (redact). {"log":{"syslog":{"priority":189}},"host":{"hostname":"redact"},"fgt":{"proto":"1","tz":"+0200","vpntype":"ipsecvpn","rcvdbyte":"3072","policyname":"MW","type":"traffic","identifier":"43776","trandisp":"noop","logid":"0001000014","srcintfrole":"undefined","policyid":"36","rcvdpkt":"3","vd":"root","duration":"180","dstintfrole":"undefined","dstip":"10.53.6.1","level":"notice","eventtime":"1750692044675283970","policytype":"policy","subtype":"local","srcip":"10.53.4.119","dstintf":"root","srcintf":"HUB1-VPN1","sessionid":"5612390","action":"accept","service":"PING","app":"PING","sentbyte":"3072","sentpkt":"3","dstcountry":"Reserved","poluuid":"cb0c79de-2400-51f0-7067-d28729f733cf","srccountry":"Reserved"},"timestamp":"2025-06-23T15:20:45Z","data_stream":{"namespace":"default","dataset":"fortinet.fortigate","type":"logs"},"@timestamp":"2025-06-23T15:20:45.000Z","type":"fortigate","logstash":{"hostname":"no_logstash_hostname"},"tags":["_grokparsefailure"],"@version":"1","system":{"syslog":{"version":"1"}},"event":{"created":"2025-06-23T15:20:45.563831683Z","original":"<189>1 2025-06-23T15:20:45Z redact - - - - eventtime=1750692044675283970 tz=\"+0200\" logid=\"0001000014\" type=\"traffic\" subtype=\"local\" level=\"notice\" vd=\"root\" srcip=10.53.4.119 identifier=43776 srcintf=\"redact\" srcintfrole=\"undefined\" dstip=10.53.6.1 dstintf=\"root\" dstintfrole=\"undefined\" srccountry=\"Reserved\" dstcountry=\"Reserved\" sessionid=5612390 proto=1 action=\"accept\" policyid=36 policytype=\"policy\" poluuid=\"cb0c79de-2400-51f0-7067-d28729f733cf\" policyname=\"MW\" service=\"PING\" trandisp=\"noop\" app=\"PING\" duration=180 sentbyte=3072 rcvdbyte=3072 sentpkt=3 rcvdpkt=3 vpntype=\"ipsecvpn\""},"observer":{"ip":"10.53.12.113"}} {"log":{"syslog":{"priority":189}},"host":{"hostname":"redact"},"fgt":{"proto":"1","tz":"+0200","rcvdbyte":"3072","policyname":"redact (ICMP)","type":"traffic","identifier":"43776","trandisp":"noop","logid":"0001000014","srcintfrole":"wan","policyid":"40","rcvdpkt":"3","vd":"root","duration":"180","dstintfrole":"undefined","dstip":"10.52.25.145","level":"notice","eventtime":"1750692044620716079","policytype":"policy","subtype":"local","srcip":"10.53.4.119","dstintf":"root","srcintf":"wan1","sessionid":"8441941","action":"accept","service":"PING","app":"PING","sentbyte":"3072","sentpkt":"3","dstcountry":"Reserved","poluuid":"813c45e0-3ad6-51f0-db42-8ec755725c23","srccountry":"Reserved"},"timestamp":"2025-06-23T15:20:45Z","data_stream":{"namespace":"default","dataset":"fortinet.fortigate","type":"logs"},"@timestamp":"2025-06-23T15:20:45.000Z","type":"fortigate","logstash":{"hostname":"no_logstash_hostname"},"tags":["_grokparsefailure"],"@version":"1","system":{"syslog":{"version":"1"}},"event":{"created":"2025-06-23T15:20:45.639474828Z","original":"<189>1 2025-06-23T15:20:45Z redact - - - - eventtime=1750692044620716079 tz=\"+0200\" logid=\"0001000014\" type=\"traffic\" subtype=\"local\" level=\"notice\" vd=\"root\" srcip=10.53.4.119 identifier=43776 srcintf=\"wan1\" 
srcintfrole=\"wan\" dstip=10.52.25.145 dstintf=\"root\" dstintfrole=\"undefined\" srccountry=\"Reserved\" dstcountry=\"Reserved\" sessionid=8441941 proto=1 action=\"accept\" policyid=40 policytype=\"policy\" poluuid=\"813c45e0-3ad6-51f0-db42-8ec755725c23\" policyname=\"redact (ICMP)\" service=\"PING\" trandisp=\"noop\" app=\"PING\" duration=180 sentbyte=3072 rcvdbyte=3072 sentpkt=3 rcvdpkt=3"},"observer":{"ip":"10.52.31.14"}}

I have edited props.conf to contain the following stanza, but still no luck:

    [fortigate_log]
    SHOULD_LINEMERGE = false
    LINE_BREAKER = }(\s*)\{

Any direction on where to go from here?
"Built by Splunk Works": Splunk Works is an internal initiative or team within Splunk focused on creating innovative, often experimental or community-driven apps and add-ons. Apps labeled "Built by S... See more...
"Built by Splunk Works": Splunk Works is an internal initiative or team within Splunk focused on creating innovative, often experimental or community-driven apps and add-ons. Apps labeled "Built by Splunk Works" are developed by Splunk employees but may not carry the same level of formal support or certification as mainstream Splunk apps (e.g., Splunk Enterprise Security or Splunk IT Service Intelligence). These apps are often exploratory, proof-of-concept, or niche solutions. So my above statement is not completely true.  Apps marked "Built by Splunk Works" are indeed created by Splunk employees, making them "official" in the sense that they originate from Splunk Inc. However, they may not always be Splunk Supported or Splunk Certified, which is what some users mean when they refer to "official" apps. Glad you mentioned this because that does make it slightly different than being built by Splunk, certified by Splunk.  
Both answers above are correct. If you already knew this, I apologize beforehand, but one of the best ways to find out whether an app is a 3rd-party app, an app built by Splunk, or an app built by the vendor of the product you are trying to ingest is to go to Splunkbase, look at the app, and check the "created by" field. If the answer is Splunk or Splunk Works, that means Splunk built the app. If it says Microsoft or something similar, you can assume Microsoft; if it says LAME Creations (just a hypothetical example), that means someone called LAME Creations built the app. Most of the apps built by Splunk were designed around a use case where they worked with the actual vendor, which should give you some level of confidence in the app. Unfortunately, the bigger issue is whether the app is still the current app recommended by Splunk. What I mean by that is that, over time, apps get recreated or rebranded into other apps, and that can still be a problem with Splunk-built apps, so I also like to look at the version history and see if the app is relatively current. If it is current, it means Splunk is still working on it, and that should also provide some level of confidence. The last method is a mixture of using Google-fu to search for what the community is using and asking on this forum - so you are already doing that. Hope this helps. The answer was generic, but it was just me sharing how I look at an app on Splunkbase to decide whether I should use it in my environment.
In Dashboard Studio, a single click interaction on the timechart can set both global_time.earliest (the start of the clicked bar) and global_time.latest (the end of the 2-hour bar) by using a token formula. Instead of relying on a second click, you'll compute global_time.latest as global_time.earliest + 2 hours. This ensures the exact 2-hour range is applied to other visualizations, mimicking the Search app's timeline filtering. This is assuming that you want 2-hour chunks. You can get crazy and tokenize the span=2h and then use that same token in the example I provide, but that is not the solution I am providing below.

Steps to implement:

- Verify your timechart configuration: Ensure your timechart uses a 2-hour span, as you mentioned (| timechart span=2h ...). This means each bar represents a 2-hour bucket (e.g., 10:00–12:00, 12:00–14:00). In Dashboard Studio, confirm the visualization is set up as a Timechart (Area, Column, or Line) under the Visualization tab.

- Set the global_time.earliest token: You've already set a token interaction for global_time.earliest, but let's confirm it's correct. In Dashboard Studio's UI Editor, select your timechart visualization, go to the Interactions tab in the configuration panel, and under On Click add a Set Token action with Token Name: global_time.earliest and Token Value: $result._time$ (this captures the start time of the clicked bar, e.g., 10:00 for a 10:00–12:00 bar). This sets global_time.earliest to the timestamp of the clicked bar's start.

- Calculate the global_time.latest token: Instead of a second click, compute global_time.latest as global_time.earliest + 2 hours using a token formula. In the UI Editor, go to the same On Click interaction for the timechart and add a second Set Token action (below the global_time.earliest one) with Token Name: global_time.latest and Token Value: relative_time($global_time.earliest$, "+2h"). This uses Splunk's relative_time function to add 2 hours to the earliest timestamp (e.g., if earliest is 10:00, latest becomes 12:00). Both tokens will now be set on a single click, defining the exact 2-hour range of the clicked bar.

- Apply the tokens to other visualizations: Ensure other visualizations in your dashboard use the global_time.earliest and global_time.latest tokens to filter their time ranges. For each visualization (e.g., table, chart), go to the Search tab in the configuration panel, set the Time Range to Custom, and use Earliest: $global_time.earliest$ and Latest: $global_time.latest$. Alternatively, modify the search query directly to include the token-based time range, e.g.:

    index=your_index earliest=$global_time.earliest$ latest=$global_time.latest$ | ...

- Add a default time range (optional): To prevent visualizations from breaking before a timechart click, set default values for the tokens. In the UI Editor, go to the Dashboard configuration (top-level settings) and under Tokens add global_time.earliest with a default value of -24h@h (e.g., last 24 hours, snapped to the hour) and global_time.latest with a default value of now (current time). This ensures other visualizations display data until the timechart is clicked.

- Test the dashboard: Save and preview the dashboard, then click a timechart bar (e.g., representing 10:00–12:00). Verify that global_time.earliest is set to the bar's start (e.g., 10:00), global_time.latest is set to the bar's end (e.g., 12:00), and the other visualizations update to show data only for that 2-hour range. Use the Inspect tool (click the three dots on a visualization > Inspect > Tokens) to debug token values if needed.

Why this works:
- Single click: Using relative_time($global_time.earliest$, "+2h") avoids the need for a second click, as it calculates the end of the 2-hour bar based on the clicked time.
- Mimics the Search app: The Search app's timeline sets both earliest and latest times for a selected range. This solution replicates that by defining the full 2-hour bucket.
- Dashboard Studio limitation: Dashboard Studio doesn't natively support range selection (like dragging over a timeline), so computing latest via a formula is the best approach.

Troubleshooting tips:
- Tokens not setting: If global_time.latest isn't updating, check the token syntax in the Source view (JSON). Ensure the relative_time function is correct: "value": "relative_time($global_time.earliest$, \"+2h\")".
- Time format issues: Ensure $result._time$ returns a timestamp in epoch format (seconds). If not, use strptime in the timechart search to format it, e.g., | eval _time=strptime(_time, "%Y-%m-%d %H:%M:%S").
- Visualization not updating: Confirm the other visualizations reference $global_time.earliest$ and $global_time.latest$ correctly. Check their search queries in the Source view.
- Span mismatch: If the timechart span changes (e.g., dynamically set), you may need to make the +2h offset dynamic. Let us know if your span varies for a custom solution.

Example JSON snippet (Source view). For reference, here's how the timechart's interaction might look in the dashboard's JSON (edit in the Source view if needed):

    {
      "visualizations": {
        "viz_timechart": {
          "type": "splunk.timechart",
          "options": { ... },
          "dataSources": { "primary": "ds_timechart" },
          "eventHandlers": [
            {
              "type": "drilldown.setToken",
              "options": {
                "token": "global_time.earliest",
                "value": "$result._time$"
              }
            },
            {
              "type": "drilldown.setToken",
              "options": {
                "token": "global_time.latest",
                "value": "relative_time($global_time.earliest$, \"+2h\")"
              }
            }
          ]
        }
      }
    }
Is it possible to get the upstream service details (the calling service) for an inferred service through metrics? There is no option through the built-in dimensions. Can someone suggest if it's possible?
You are going to have to contact Splunk Support for any older versions not on their website. I apologize for that inconvenience.

It is your environment and you need to do what you and your management team feel are the best things, but as a person employed in the Cyber Security arena, I feel that I should at least mention the following. None of this applies to your wanting to run 9.0.x - it was the Splunk 7 and Splunk 8 that raised my antennae.

Running 7.x (and to a lesser extent 8.x) UFs introduces significant risks, especially since Splunk 7.x reached End of Support (EOS) between October 2020 and October 2021, and 8.2.x is also at or past end of life. Here are the key implications:

Operational risks:
- Limited functionality: Splunk 7.x UFs lack support for newer features like data compression, advanced SSL configurations, or Splunk-to-Splunk (S2S) Protocol V4, which 9.x indexers use by default. This can cause performance issues or data ingestion failures if configurations mismatch. For example, 7.x UFs may not handle modern event-breaking or parsing rules in 9.0.x apps.
- Management challenges: If you use a Deployment Server (DS), it must be 9.0.x or newer to manage 7.x/8.x UFs. Older DS versions may fail to deploy apps to newer UFs, complicating configuration management.
- Stability issues: 7.x UFs may encounter bugs or crashes on modern OSes (e.g., newer Linux kernels), as they were designed for older environments. Splunk Support won't provide fixes for EOS versions, leaving you to work around issues manually.

Security risks:
- Vulnerabilities: 7.x UFs miss critical security patches available in 8.x and 9.x, exposing your environment to known vulnerabilities (e.g., CVE fixes). Without patches, UFs could be exploited, especially if they're on internet-facing systems or handle sensitive data.
- SSL/TLS weaknesses: 7.x UFs use outdated SSL/TLS protocols, which may conflict with 9.0.x's stricter security defaults (e.g., TLS 1.2/1.3). This can lead to connection failures or insecure data transmission. 8.x UFs are less problematic but still lack the latest TLS enhancements in 9.x.
- Compliance issues: Running EOS software like 7.x may violate compliance requirements (e.g., PCI DSS, HIPAA), as auditors often flag unsupported software as non-compliant.

Recommendations for UFs:
- Upgrade UFs to 9.0.x: Plan to upgrade your 7.x and 8.x UFs to 9.0.x (or at least 8.2.x) to align with your indexer. UFs are lightweight, and upgrades are straightforward. Start with a few test UFs to validate compatibility with your 9.0.x indexer and DS (if used). Use the Deployment Server to automate UF upgrades, ensuring serverclass.conf matches the new version.
- Prioritize 7.x first: 7.x UFs are the most critical to upgrade due to EOS status and severe security risks. 8.x UFs are less urgent but should be updated to avoid future EOS issues.
- Check compatibility: Confirm UF OS compatibility with 9.0.x (e.g., AL2023 or supported Windows versions) using the Splunk System Requirements.
- Interim step: If upgrading all UFs immediately isn't feasible, ensure your 9.0.x indexer's inputs.conf supports legacy S2S protocols (e.g., V3 for 7.x UFs) by setting connectionTimeout or readTimeout to accommodate older clients. However, this is a temporary workaround.

Why upgrade UFs? Aligning UFs with 9.0.x ensures optimal performance, security, and supportability. Splunk 9.0.x introduces features like ingest actions and enhanced TLS validation, which 7.x UFs can't leverage.
Upgrading avoids the risk of data loss or ingestion delays due to protocol mismatches or unpatched bugs. Splunk Support can assist with 9.0.x issues, but not with 7.x, reducing your troubleshooting burden.
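If it helps with planning the UF upgrades, here is a rough sketch of an inventory search built on the forwarder connection metrics in _internal (field names as I recall them from metrics.log tcpin_connections; verify against your own data):

    index=_internal source=*metrics.log* group=tcpin_connections
    | stats latest(version) AS forwarder_version latest(os) AS os latest(arch) AS arch BY hostname
    | sort forwarder_version

That gives you a per-host list of the forwarder versions currently sending to your indexer, so you can target the 7.x boxes first.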
And why would you go for 9.0 which is out of support? I'd strongly advise against that. Unless you have a very good reason for doing so (and a very very good support contract, other than us, mere mortals) it's unwise to keep your environment at an unsupported version (which applies to the current 8.2 as well).
The two previous posts are both good answers, but since you stated you are new to Splunk, I decided to give you a thorough write-up that explains how to check each of the areas that have been called out, to see if they are the problem causing your error.

The warning you're seeing (The TCP output processor has paused the data flow) means your forwarder (at MRNOOXX) is unable to send data to the receiving Splunk instance (at 192.XXX.X.XX), likely because the receiver is not accepting data or the connection is blocked. This can stall data indexing, so let's troubleshoot it step-by-step. Here's a comprehensive checklist to resolve the issue:

Verify the receiver is running:
- Ensure the Splunk instance at 192.XXX.X.XX (likely an indexer) is active.
- On the receiver, run $SPLUNK_HOME/bin/splunk status to confirm splunkd is running. If it's stopped, restart it with $SPLUNK_HOME/bin/splunk restart.

Confirm the receiving port is open:
- The default port for Splunk-to-Splunk forwarding is 9997. On the receiver, check if port 9997 is listening: netstat -an | grep 9997 (Linux) or netstat -an | findstr 9997 (Windows).
- Verify the receiver's inputs.conf has a [splunktcp://9997] stanza. Run $SPLUNK_HOME/bin/splunk cmd btool inputs list splunktcp --debug to check. Ensure disabled = 0.

Test network connectivity:
- From the forwarder, test connectivity to the receiver's port 9997: nc -vz -w1 192.XXX.X.XX 9997 (Linux) or telnet 192.XXX.X.XX 9997 (Windows). If it fails, check for firewalls or network issues.
- Confirm no firewalls are blocking port 9997 on the receiver or network path.

Check the forwarder configuration:
- On the forwarder, verify outputs.conf points to the correct receiver IP and port. Check $SPLUNK_HOME/etc/system/local/outputs.conf or app-specific configs (e.g., $SPLUNK_HOME/etc/apps/<app>/local/outputs.conf). Example:

    [tcpout:default-autolb-group]
    server = 192.XXX.X.XX:9997
    disabled = 0

- Ensure no conflicting outputs.conf files exist (run $SPLUNK_HOME/bin/splunk cmd btool outputs list --debug).

Inspect receiver health:
- The error suggests the indexer may be overwhelmed, causing backpressure. Use the Splunk Monitoring Console (on a Search Head or standalone instance) to check: go to Monitoring Console > Indexing > Queue Throughput to see if queues (e.g., parsing, indexing) are full (100% fill ratio), and check Resource Usage > Machine for CPU, memory, and disk I/O (IOPS) on the indexer. High usage may indicate bottlenecks.
- Run this search on the Search Head to check queue status:

    | rest /services/server/introspection/queues splunk_server=192.XXX.X.XX
    | table title, current_size, max_size, fill_percentage

- Ensure the indexer has sufficient disk space (df -h on Linux or dir on Windows) and isn't exceeding license limits (check Monitoring Console > Licensing).

Check for SSL mismatches:
- If SSL is enabled (e.g., useSSL = true in outputs.conf on the forwarder), ensure the receiver's inputs.conf has ssl = true.
- Verify certificates match in $SPLUNK_HOME/etc/auth/ on both systems.
- Check splunkd.log on the receiver for SSL errors: grep -i ssl $SPLUNK_HOME/var/log/splunk/splunkd.log.

Review logs for clues:
- On the forwarder, check $SPLUNK_HOME/var/log/splunk/splunkd.log for errors around the TCP warning (search for "TcpOutputProc" or "blocked"). Look for queue or connection errors.
- On the receiver, search splunkd.log for errors about queue fullness, indexing delays, or connection refusals (e.g., grep -i "192.XXX.X.XX" $SPLUNK_HOME/var/log/splunk/splunkd.log).
- Share any relevant errors to help narrow it down.

Proactive mitigation:
- If the issue is intermittent (e.g., due to temporary indexer overload), consider enabling persistent queues on the forwarder to buffer data during blockages. In outputs.conf:

    [tcpout]
    maxQueueSize = 100MB
    usePersistentQueue = true

- Restart the forwarder after changes.

Architecture and version details - could you share:
- Your Splunk version (e.g., 9.3.1)? Run $SPLUNK_HOME/bin/splunk version.
- Your setup (e.g., Universal Forwarder to single indexer, or Heavy Forwarder to indexer cluster)?
- Is the receiver a standalone indexer, Splunk Cloud, or part of a cluster?
This will help tailor the solution, as queue behaviors vary by version and architecture.

Quick fixes to try:
- Restart both the forwarder and receiver to clear temporary issues: $SPLUNK_HOME/bin/splunk restart.
- Simplify outputs.conf on the forwarder to point to one indexer (e.g., server = 192.XXX.X.XX:9997) and test.
- Check indexer disk space and license usage immediately, as these are common culprits.

Next steps:
- Share the output of the network test (nc or telnet), any splunkd.log errors, and your architecture details.
- If you have access to the Monitoring Console, let us know the queue fill percentages or resource usage metrics.
For some reason, Splunk has stopped receiving data. It could be for any of several reasons. Check the logs on the indexer for possible explanations. Also, the Monitoring Console may offer clues - look for blocked indexer queues.
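For the blocked-queue check specifically, here is a sketch of the kind of search that surfaces them from the indexer's own metrics (the time range and rounding are just examples):

    index=_internal source=*metrics.log* group=queue
    | eval fill_pct=round(current_size_kb / max_size_kb * 100, 1)
    | stats max(fill_pct) AS max_fill_pct count(eval(blocked=="true")) AS blocked_events BY host, name
    | sort - max_fill_pct

Queues that sit near 100% or show blocked_events are usually where the backpressure starts.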