I have never seen this before, and I will be completely transparent that I put your question into an AI engine, so the response may not be anything close to what you are looking for. The AI seemed to think you might be having web browser caching issues (I have run into browser caching problems myself, just never on the pages you mentioned). The recommendation is to clear your browser cache; the method I use most often is incognito mode. Again, no idea if this will help, but I do know that when I changed the navigation menus on an app, they would not update in my browser, and I had to use incognito mode or open a different browser that hadn't cached my Splunk site to see the changes. Hope this helps.
I am logged in as the admin user, but whenever I try to access Tokens, Users, or other settings pages, I get a blank page. I’m not sure what to do next. #Splunk #Enterprise
OK. So this is not (or at least might not be) about the phonehomes as such, but about the info shown in the DS console. I'd go for:
1) Verifying on selected forwarders that the phonehomes are shown in splunkd.log.
2) Checking the logs on the DS itself to see whether it can see the phonehomes.
3) Checking whether you have the selective routing properly configured on the DS. https://help.splunk.com/en/splunk-enterprise/administer/manage-distributed-deployments/9.2/configure-the-deployment-system/upgrade-pre-9.2-deployment-servers (it's not only about upgraded instances; we hit this issue recently on a fresh installation of 9.3.something).
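For step 1, a quick way to spot-check without logging on to every box is to search the forwarders' internal logs for phonehome activity. A rough sketch, assuming the forwarders send their _internal logs to your indexers; it only keyword-matches "phonehome", so treat it as a starting point:

index=_internal sourcetype=splunkd phonehome host=<forwarder_name>
| stats latest(_time) AS last_phonehome count BY host
| convert ctime(last_phonehome)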
How did you determine this? - This is what the Forwarder Management web UI shows us; the client phone-home timestamp coincides with the restart.
What do you mean by "clients phoning home only when you restart the DS"? How did you determine this? The clients phone home on schedule - it's asynchronous versus whatever the DS is doing.
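For reference, the phone-home interval is set per client in deploymentclient.conf. A minimal sketch (the target URI is a placeholder; 60 seconds is the usual default):

[deployment-client]
# how often, in seconds, this client phones home to the deployment server
phoneHomeIntervalInSecs = 60

[target-broker:deploymentServer]
targetUri = <deployment_server_host>:8089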
Ok. Can you please stop posting random copy-pastes from LLMs? LLMs are a useful tool... if they supplement your knowledge and expertise. Otherwise you're only introducing confusing, wrong advice into the thread. Your advice about setting both indexed extractions and KV mode at the same time is simply wrong - it will lead to duplicate fields. Your line breaker is also needlessly complicated, and BREAK_ONLY_BEFORE has no effect with line merging disabled. Your advice about an add-on for Fortigate is completely off, because the TA for Fortigate available on Splunkbase handles the default Fortigate event format, not JSON. Adjusting the events to be parsed by that add-on will require more than just installing it. And there is no _MetaData:tags key! LLMs are known for making things up. Copy-pasting their delusions here isn't helping anyone! Just stop leading people astray.
@LOP22456 I assume that it's either multiple events per line in your input file, or your events are multiline, and therefore the usual approach of splitting the file on line breaks doesn't work. Unfortunately, there's no bulletproof solution for this, since handling structured data with regexes alone is bound to be wrong in border cases. You can assume that your input breaks where two "touching" braces appear without a comma between them (even better if they must be on separate lines - that would give you a "stronger" line breaker), but there could still be a border case where you have such a string inside your JSON. In most cases, though, something like LINE_BREAKER = }([\r\n\s]*){ should do. In most cases. In some border cases you might end up with broken events.
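Putting that together, a minimal props.conf sketch (the sourcetype name fortigate_log is taken from the original post; KV_MODE = json is one option for search-time extraction, and the border-case caveat above still applies):

[fortigate_log]
SHOULD_LINEMERGE = false
# the capture group (whitespace between a closing and an opening brace) marks the event boundary and is discarded
LINE_BREAKER = }([\r\n\s]*){
# search-time JSON extraction only - do not combine with INDEXED_EXTRACTIONS
KV_MODE = json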
Still no success after attempting all the steps below. I checked splunkd.log on a few forwarders as well as on the deployment server, and neither indicated connection errors. One question I have is regarding indexes: from the web UI I see _dsphonehome, _dsappevent, and _dsclient, but I don't see those indexes in the indexes.conf file on the deployment server. Another note: I found this and am wondering if it could help? Our Splunk instance is at version 9.3.1. https://community.splunk.com/t5/Splunk-Enterprise/After-upgrading-my-DS-to-Enterprise-9-2-2-clients-can-t-connect/m-p/695607
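As a quick sanity check on those internal deployment server indexes (they are typically defined in default configuration shipped with Splunk rather than in your local indexes.conf, so not seeing them there is not necessarily a problem), you can confirm whether they are receiving data at all; a hedged sketch:

| eventcount summarize=false index=_dsphonehome index=_dsappevent index=_dsclient
| stats sum(count) AS events BY index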
Okay @datachacha  Ive been having a good think about this and I dont think I have an elegant solution - but I think I do have *a* solution: This uses a hidden token/text box to the side and a search to determine the _time+2hours.  You can then use this in your other queries as earliest/latest as per the sample event on the dashboard using `$globalTimeSpl:results.earliest$` and `$globalTimeSpl:results.latest$` Here is the full JSON to have a play around with - does this do what you need? { "title": "testing", "description": "", "inputs": { "input_MPUmpGoR": { "options": { "defaultValue": "DEFAULT", "token": "calc_earliest" }, "title": "Earliest", "type": "input.text" }, "input_zIorjrMc": { "options": { "defaultValue": "-24h@h,now", "token": "tr_global" }, "title": "Main Time Selector", "type": "input.timerange" } }, "defaults": { "dataSources": { "ds.search": { "options": { "queryParameters": { "earliest": "-24h@h", "latest": "now" } } } } }, "visualizations": { "viz_BcDlqy4I": { "options": { "markdown": "Earliest = $globalTimeSpl:result.earliest$ \nLatest = $globalTimeSpl:result.latest$" }, "type": "splunk.markdown" }, "viz_NgmH6lHI": { "dataSources": { "primary": "ds_BlYVOfBA" }, "title": "This shows for time selected + 2hours", "type": "splunk.table" }, "viz_Nqdf4h2p": { "dataSources": { "primary": "ds_ccCiW2S8" }, "eventHandlers": [ { "options": { "tokens": [ { "key": "row._time.value", "token": "calc_earliest" } ] }, "type": "drilldown.setToken" } ], "type": "splunk.column" }, "viz_zUx2Zt29": { "dataSources": { "primary": "ds_ZKBDXZy2_ds_BlYVOfBA" }, "type": "splunk.table" } }, "dataSources": { "ds_BlYVOfBA": { "name": "global", "options": { "query": "index=_internal earliest=$globalTimeSpl:result.earliest$ latest=$globalTimeSpl:result.latest$ \n| addinfo \n| head 1\n| table info* _raw" }, "type": "ds.search" }, "ds_ZKBDXZy2_ds_BlYVOfBA": { "name": "globalTimeSpl", "options": { "enableSmartSources": true, "query": "| makeresults \n| addinfo\n| eval earliest=IF($calc_earliest|s$!=\"DEFAULT\",$calc_earliest|s$,info_min_time)\n| eval latest=IF($calc_earliest|s$!=\"DEFAULT\",$calc_earliest$+7200, info_max_time)", "queryParameters": { "earliest": "$tr_global.earliest$", "latest": "$tr_global.latest$" } }, "type": "ds.search" }, "ds_ccCiW2S8": { "name": "tstat", "options": { "query": "| tstats count where index=_internal by _time span=1h", "queryParameters": { "earliest": "$tr_global.earliest$", "latest": "$tr_global.latest$" } }, "type": "ds.search" }, "ds_rt307Czb": { "name": "timeSPL", "options": { "enableSmartSources": true, "query": "| makeresults \n| addinfo", "queryParameters": { "earliest": "-60m@m", "latest": "now" } }, "type": "ds.search" } }, "layout": { "globalInputs": [ "input_zIorjrMc" ], "layoutDefinitions": { "layout_1": { "options": { "display": "auto", "height": 960, "width": 1440 }, "structure": [ { "item": "viz_Nqdf4h2p", "position": { "h": 300, "w": 1390, "x": 10, "y": 210 }, "type": "block" }, { "item": "viz_NgmH6lHI", "position": { "h": 140, "w": 1390, "x": 10, "y": 60 }, "type": "block" }, { "item": "viz_BcDlqy4I", "position": { "h": 50, "w": 300, "x": 20, "y": 10 }, "type": "block" }, { "item": "input_MPUmpGoR", "position": { "h": 82, "w": 198, "x": 1470, "y": 50 }, "type": "input" }, { "item": "viz_zUx2Zt29", "position": { "h": 100, "w": 680, "x": 1470, "y": 130 }, "type": "block" } ], "type": "absolute" } }, "tabs": { "items": [ { "label": "New tab", "layoutId": "layout_1" } ] } } }  Did this answer help you? 
If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
Hi @LizAndy123  When configuring your alert, select "For each result" under the Trigger setting, as per the screenshot below:  Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
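If you prefer configuration over the UI, the equivalent setting lives in savedsearches.conf. A minimal sketch - the stanza name is a placeholder for your alert, and alert.digest_mode is the only relevant line:

[My Mattermost alert]
# 0 = run the alert actions once per result; 1 (default) = once for the whole result set
alert.digest_mode = 0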
So I have successfully configured some reports and alerts that send the $result to Mattermost. My question is how to deal with a search that returns maybe 5 results. Example - the current search may return - Example Text: Hello World. How do I pass each individual $result? So the search could return Hello World, followed by Hello World2, followed by Hello World3. If I put $result.text$ it prints Hello World, but if I then want to show the second or third result... is that possible through this?
Ok, thank you. I thought there was something up with the $'s; the interactions menu would accept it as a static value instead of a predefined token when setting them up, but the logic wouldn't work. And it seems to be the case for the second point as well: the eval statement just did not work at all as intended. I was wondering why this didn't work.
Thank you for the timely response. I tried what you recommended and ran into a few issues that I was not able to diagnose or fix with the troubleshooting tips provided.  I got everything looking exactly as you said. However, $result._time$ doesn't seem to evaluate to a time whatsoever; when I check the value, it is literally just "$result._time$".  The latest time value gets set to "relative_time(-7d@h", which appears incomplete as shown. I get an error on the visualization saying Invalid earliest_time, and both earliest and latest show invalid values. When I tried to put in the troubleshooting eval command you recommended, it did not fix the issue. The time should be coming in correctly.
@LAME-Creations @LOP22456  Please do not set both INDEXED_EXTRACTIONS and KV_MODE = json. See props.conf docs for more info - https://docs.splunk.com/Documentation/Splunk/latest/Admin/Propsconf
"When 'INDEXED_EXTRACTIONS = JSON' for a particular source type, do not also set 'KV_MODE = json' for that source type. This causes the Splunk software to extract the JSON fields twice: once at index time, and again at search time."
Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
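In other words, pick one extraction path per sourcetype. A minimal sketch (the sourcetype name is just an example; Option A goes on the component doing index-time parsing, Option B on the search head):

# Option A - index-time JSON extraction
[fortigate_log]
INDEXED_EXTRACTIONS = JSON
SHOULD_LINEMERGE = false

# Option B - search-time JSON extraction (use instead of, not together with, Option A)
[fortigate_log]
KV_MODE = json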
Just posting to confirm this, though I've never written in before. I'm running into it now: generating a summary index changes the value type to (AFAICT) a string, meaning the previous multivalue of 5136, 5136, which is searchable via EventCode=5136, is now broken in the summary index, where the value is something like "5136\n5136", which... is not helpful at all.
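One workaround sketch for the search side - the index, source, and field names here are illustrative, and the delimiter assumes the newline-joined value described above - is to rebuild the multivalue field when reading from the summary index:

index=my_summary source="my summary search"
| eval EventCode=split(replace(EventCode, "[\r\n]+", ","), ",")
| mvexpand EventCode
| search EventCode=5136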
Hi, I'm probably not understanding the question completely, so feel free to provide a specific example if you want. One thing I will point out: the information you can get from an inferred service is limited to what you can see from the trace spans that are generated when an instrumented service calls that uninstrumented, inferred service. Here is a screenshot of a service-centric view of an inferred service and what you can see about it.
Thanks for sharing the details of your FortiGate log parsing issue in Splunk Cloud! It sounds like your Logstash server is combining multiple FortiGate logs into a single file, which is then sent to your Heavy Forwarder (HF) and ingested into Splunk Cloud as multi-line events (sometimes 20+ logs per event). Your props.conf configuration isn't breaking these into individual events per host, likely due to an incorrect LINE_BREAKER regex or misconfigured parsing settings. The _grokparsefailure tag suggests additional parsing issues, possibly from Logstash or Splunk misinterpreting the syslog format. Below is a solution to parse these logs into individual events per FortiGate host, tailored for Splunk Cloud and your HF setup.

Why the Current Configuration Isn't Working
- LINE_BREAKER Issue: Your LINE_BREAKER = }(\s*)\{ aims to split events between JSON objects (e.g., } {), but it may not account for the syslog headers or whitespace correctly, causing Splunk to treat multiple JSON objects as one event. The regex might also be too restrictive or not capturing all cases.
- SHOULD_LINEMERGE: Setting SHOULD_LINEMERGE = false is correct to disable line merging, but without a precise LINE_BREAKER, Splunk may still fail to split events.
- Logstash Aggregation: Logstash is bundling multiple FortiGate logs into a single file, and the _grokparsefailure tag indicates Logstash's grok filter (or Splunk's parsing) isn't correctly processing the syslog format, leading to malformed events.
- Splunk Cloud Constraints: In Splunk Cloud, props.conf changes must be applied on the HF, as you don't have direct access to the indexers. The current configuration may not be properly deployed or tested.

Solution: Parse Multi-Line FortiGate Logs into Individual Events
To break the multi-line events into individual events per FortiGate host, you'll need to refine the props.conf configuration on the Heavy Forwarder and ensure proper event breaking at ingestion time. Since the logs are JSON with syslog headers, you can use Splunk's JSON parsing capabilities and a corrected LINE_BREAKER to split events.

Step 1: Update props.conf on the Heavy Forwarder
Modify the props.conf file on your HF to correctly break events and parse the JSON structure. Place this in $SPLUNK_HOME/etc/system/local/props.conf or an app's local directory (e.g., $SPLUNK_HOME/etc/apps/<your_app>/local/props.conf).

[fortigate_log]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)(?=\{"log":\{"syslog":\{"priority":\d+\}\})
INDEXED_EXTRACTIONS = json
KV_MODE = json
TIMESTAMP_FIELDS = timestamp
TIME_FORMAT = %Y-%m-%dT%H:%M:%SZ
BREAK_ONLY_BEFORE = ^\{"log":\{"syslog":\{"priority":\d+\}\}
TRUNCATE = 10000
category = Structured
disabled = false
pulldown_type = true

Explanation:
- LINE_BREAKER = ([\r\n]+)(?=\{"log":\{"syslog":\{"priority":\d+\}\}): Splits events on newlines (\r\n) followed by the start of a new JSON object (e.g., {"log":{"syslog":{"priority":189}}). The positive lookahead (?=) ensures the JSON start is not consumed, preserving the event.
- SHOULD_LINEMERGE = false: Prevents Splunk from merging lines, relying on LINE_BREAKER for event boundaries.
- INDEXED_EXTRACTIONS = json: Automatically extracts JSON fields (e.g., host.hostname, fgt.srcip) at index time on the HF, reducing search-time parsing issues.
- KV_MODE = json: Ensures search-time field extraction for JSON fields, complementing index-time parsing.
- TIMESTAMP_FIELDS = timestamp: Uses the timestamp field (e.g., 2025-06-23T15:20:45Z) for event timestamps.
- TIME_FORMAT = %Y-%m-%dT%H:%M:%SZ: Matches the timestamp format in the logs.
- BREAK_ONLY_BEFORE: Reinforces event breaking by matching the start of a JSON object, as a fallback if LINE_BREAKER struggles.
- TRUNCATE = 10000: Ensures large events (up to 10,000 characters) aren't truncated, accommodating multi-log events.
- category and pulldown_type: Improves Splunk Cloud's UI compatibility for source type selection.

Step 2: Deploy and Restart the Heavy Forwarder
- Deploy props.conf: Place the updated props.conf in $SPLUNK_HOME/etc/system/local/ or a custom app directory on the HF. If using a custom app, ensure it's deployed via a Deployment Server or manually copied to the HF.
- Restart the HF: On the Windows HF, open a Command Prompt as Administrator, navigate to $SPLUNK_HOME\bin (e.g., cd "C:\Program Files\Splunk\bin"), and run: splunk restart
- Note: In Splunk Cloud, you can't modify indexer configurations directly. The HF applies these parsing rules before forwarding to the Splunk Cloud indexers.

Step 3: Verify Event Breaking
Run a search in Splunk Cloud to confirm events are split correctly:
index=<your_index> sourcetype=fortigate_log | stats count by host.hostname
Check that each FortiGate host (from host.hostname) appears as a separate event with the correct count. Each event should correspond to one JSON log entry (e.g., one per host.hostname like "redact"). If events are still merged, inspect $SPLUNK_HOME/var/log/splunk/splunkd.log on the HF for parsing errors (e.g., grep -i "fortigate_log" splunkd.log).

Step 4: Address _grokparsefailure Tag
The _grokparsefailure tag suggests Logstash's grok filter isn't correctly parsing the FortiGate syslog format, which may contribute to event merging. Since you can't modify the Logstash setup, you can mitigate this in Splunk.
Override Logstash Tags: In props.conf, add a transform to remove the _grokparsefailure tag and ensure clean parsing:
[fortigate_log]
...
TRANSFORMS-remove_grok_failure = remove_grokparsefailure
In $SPLUNK_HOME/etc/system/local/transforms.conf:
[remove_grokparsefailure]
REGEX = .
FORMAT = tags::none
DEST_KEY = _MetaData:tags
Restart the HF after adding the transform. This clears the _grokparsefailure tag, ensuring Splunk doesn't inherit Logstash's parsing issues.

Step 5: Optimize FortiGate Integration (Optional)
- Install Fortinet FortiGate Add-On: If not already installed, add the Fortinet FortiGate Add-On for Splunk on the HF and Search Head to improve field extraction and CIM compliance. Install on the HF for index-time parsing (already handled by INDEXED_EXTRACTIONS = json). Install on the Search Head for search-time field mappings and dashboards.
- Verify Syslog Configuration: Ensure FortiGate devices send logs to Logstash via UDP 514 or TCP 601, as per Fortinet's syslog standards.
- Check Logstash Output: If possible, verify Logstash's output plugin (e.g., Splunk HTTP Event Collector or TCP output) is configured to send individual JSON objects without excessive buffering, which may contribute to event merging.

Troubleshooting Tips
- Test Parsing: Ingest a small sample log file via the HF's Add Data wizard in Splunk Web to test the props.conf settings before processing live data.
- Check Event Boundaries: Run index=<your_index> sourcetype=fortigate_log | head 10 and verify each event contains only one JSON object with a unique host.hostname.
- Logstash Buffering: If Logstash continues to bundle logs, consider asking your Logstash admin to adjust the output plugin (e.g., splunk output with HEC) to flush events more frequently, though you noted this isn't changeable.
- Splunk Cloud Support: If parsing issues persist, contact Splunk Cloud Support to validate the HF configuration or request assistance with indexer-side parsing (though the HF should handle most parsing).
Hi @LAME-Creations  Please can you confirm whether you were able to test the example and process provided, and have it working? Unfortunately this reads a lot like an AI hallucination, because it looks to mix Classic XML and Dashboard Studio approaches to tokens. For example, it is not possible to put $ into the value field for a token, and it should be row._time.value, not $result._time$. If you do have this as a working approach then please can you share a working version, as I wasn't aware it was possible to do evals in drilldown.setToken.  Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
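For reference, the working Dashboard Studio pattern from the JSON above sets the token from row._time.value inside a drilldown.setToken handler; a minimal sketch of just that piece (visualization, data source, and token names are illustrative):

"viz_column_example": {
  "type": "splunk.column",
  "dataSources": { "primary": "ds_example" },
  "eventHandlers": [
    {
      "type": "drilldown.setToken",
      "options": {
        "tokens": [
          { "token": "calc_earliest", "key": "row._time.value" }
        ]
      }
    }
  ]
}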
You have a thread from 2020 that states they fixed their problem. I am pretty sure the reason that solution works is similar to what I am going to suggest here. I have found (no scientific evidence to support it) that sometimes the conf files just seem to be buggered, and if you reset them, it starts to work. I swear the settings are the same before the reset and after, but for some reason it works. Maybe it's voodoo or whatever, but it has worked for me in the past. Here is a breakdown of quickly resetting the configurations that you need.
The warning suggests the SH is trying to query a non-existent or misconfigured search peer, possibly due to stale or incorrect settings in outputs.conf or related configuration files. Resetting outputs.conf clears any corrupted or conflicting settings (e.g., incorrect server names, ports, or SSL configurations) that might be preventing the SH from recognizing the IDX as a valid peer. Restarting Splunk ensures a clean state, and re-adding the peer re-establishes the connection with fresh, verified settings.

Steps to Reset and Reconfigure
1. Back Up Configuration Files: Before making changes, back up your Splunk configuration files to avoid losing custom settings. On the Search Head, copy the $SPLUNK_HOME\etc\system\local directory (e.g., C:\Program Files\Splunk\etc\system\local) to a safe location (e.g., C:\SplunkBackup).
2. Delete or Rename outputs.conf: Navigate to $SPLUNK_HOME\etc\system\local on the Search Head (e.g., C:\Program Files\Splunk\etc\system\local). Locate outputs.conf. If it exists, rename it to outputs.conf.bak (or delete it if you're sure no critical settings are needed). Note: If outputs.conf is in an app directory (e.g., $SPLUNK_HOME\etc\apps\<app_name>\local), check there too and rename/delete it. This ensures Splunk starts with default output settings, clearing any misconfigurations.
3. Restart Splunk on the Search Head: Open a Command Prompt as Administrator on the Windows SH host. Navigate to $SPLUNK_HOME\bin (e.g., cd "C:\Program Files\Splunk\bin"). Run: splunk restart. This restarts the Splunk service, applying the reset configuration.
4. Verify Indexer Configuration: Ensure the Indexer is configured to receive data on the correct port (default: 9997). On the Indexer, check $SPLUNK_HOME\etc\system\local\inputs.conf for a [splunktcp://9997] stanza:
[splunktcp://9997]
disabled = 0
If missing, add it and restart the Indexer (splunk restart). Confirm port 9997 is open: netstat -an | findstr 9997 (should show LISTENING).
5. Reconfigure the Search Peer: On the Search Head, log into the Splunk Web UI as an admin. Go to Settings > Distributed Search > Search Peers. Remove the existing Indexer peer (select the IDX and click Remove). Add the Indexer as a new peer: click Add New and enter the Indexer's details - Peer URI: https://<Indexer_IP>:8089 (e.g., https://192.168.1.100:8089); Authentication: use the SH admin credentials or a pass4SymmKey (if configured in distsearch.conf); Replication Settings: ensure settings match your setup (usually default). Save and wait for the status to show Healthy. Alternatively, use the CLI:
splunk add search-server https://<Indexer_IP>:8089 -auth <admin>:<password> -remoteUsername <admin> -remotePassword <password>
6. Test the Search: Run your search again from the SH: index=_internal. Verify results are returned without the warning. Check the Monitoring Console (Settings > Monitoring Console > Search > Distributed Search Health) to confirm the peer is active and responding.

Additional Tips
- Check Network Connectivity: Ensure the SH can reach the IDX on port 8089 (management) and 9997 (data). Run telnet <Indexer_IP> 8089 and telnet <Indexer_IP> 9997 from the SH host. If blocked, check Windows Firewall or network settings.
- Verify SSL Settings: If using SSL, ensure distsearch.conf on the SH and inputs.conf on the IDX align (e.g., ssl = true). Check $SPLUNK_HOME\var\log\splunk\splunkd.log on both hosts for SSL errors.
- Confirm Splunk Versions: Your SH and IDX should be on compatible versions (e.g., SH 8.2.2.1 or newer, IDX same or older). Run splunk version on both to confirm. If mismatched, upgrade the SH first.
- Debug Logs: If the issue persists, check $SPLUNK_HOME\var\log\splunk\splunkd.log
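To confirm the peer is visible after re-adding it, one quick check from the SH is the distributed search peers REST endpoint; a hedged sketch (field names such as status can vary slightly by version):

| rest /services/search/distributed/peers
| table title status version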
Actually, compression has been around for quite a long time and 7.x forwarders should support it. Also, your protocol levels are way off. Not to mention the bogus requirement to use 9.0+ DS to manage 7/8 version UFs. Please refrain from posting AI-generated content.
Hello, We have multiple fortigate devices forwarding to a logstash server that is storing all the device's logs in 1 file (I can't change this unfortunately). This is then forwarding to our HF, and then to Splunk Cloud. This then enters splunk with sometimes 20+ logs in a single event, and I can't get them to parse out into individual events by host. Below are samples of 2 logs, but in a single event there could be 20+ logs - I cannot get this to parse correctly out into each event per host (redact). {"log":{"syslog":{"priority":189}},"host":{"hostname":"redact"},"fgt":{"proto":"1","tz":"+0200","vpntype":"ipsecvpn","rcvdbyte":"3072","policyname":"MW","type":"traffic","identifier":"43776","trandisp":"noop","logid":"0001000014","srcintfrole":"undefined","policyid":"36","rcvdpkt":"3","vd":"root","duration":"180","dstintfrole":"undefined","dstip":"10.53.6.1","level":"notice","eventtime":"1750692044675283970","policytype":"policy","subtype":"local","srcip":"10.53.4.119","dstintf":"root","srcintf":"HUB1-VPN1","sessionid":"5612390","action":"accept","service":"PING","app":"PING","sentbyte":"3072","sentpkt":"3","dstcountry":"Reserved","poluuid":"cb0c79de-2400-51f0-7067-d28729f733cf","srccountry":"Reserved"},"timestamp":"2025-06-23T15:20:45Z","data_stream":{"namespace":"default","dataset":"fortinet.fortigate","type":"logs"},"@timestamp":"2025-06-23T15:20:45.000Z","type":"fortigate","logstash":{"hostname":"no_logstash_hostname"},"tags":["_grokparsefailure"],"@version":"1","system":{"syslog":{"version":"1"}},"event":{"created":"2025-06-23T15:20:45.563831683Z","original":"<189>1 2025-06-23T15:20:45Z redact - - - - eventtime=1750692044675283970 tz=\"+0200\" logid=\"0001000014\" type=\"traffic\" subtype=\"local\" level=\"notice\" vd=\"root\" srcip=10.53.4.119 identifier=43776 srcintf=\"redact\" srcintfrole=\"undefined\" dstip=10.53.6.1 dstintf=\"root\" dstintfrole=\"undefined\" srccountry=\"Reserved\" dstcountry=\"Reserved\" sessionid=5612390 proto=1 action=\"accept\" policyid=36 policytype=\"policy\" poluuid=\"cb0c79de-2400-51f0-7067-d28729f733cf\" policyname=\"MW\" service=\"PING\" trandisp=\"noop\" app=\"PING\" duration=180 sentbyte=3072 rcvdbyte=3072 sentpkt=3 rcvdpkt=3 vpntype=\"ipsecvpn\""},"observer":{"ip":"10.53.12.113"}} {"log":{"syslog":{"priority":189}},"host":{"hostname":"redact"},"fgt":{"proto":"1","tz":"+0200","rcvdbyte":"3072","policyname":"redact (ICMP)","type":"traffic","identifier":"43776","trandisp":"noop","logid":"0001000014","srcintfrole":"wan","policyid":"40","rcvdpkt":"3","vd":"root","duration":"180","dstintfrole":"undefined","dstip":"10.52.25.145","level":"notice","eventtime":"1750692044620716079","policytype":"policy","subtype":"local","srcip":"10.53.4.119","dstintf":"root","srcintf":"wan1","sessionid":"8441941","action":"accept","service":"PING","app":"PING","sentbyte":"3072","sentpkt":"3","dstcountry":"Reserved","poluuid":"813c45e0-3ad6-51f0-db42-8ec755725c23","srccountry":"Reserved"},"timestamp":"2025-06-23T15:20:45Z","data_stream":{"namespace":"default","dataset":"fortinet.fortigate","type":"logs"},"@timestamp":"2025-06-23T15:20:45.000Z","type":"fortigate","logstash":{"hostname":"no_logstash_hostname"},"tags":["_grokparsefailure"],"@version":"1","system":{"syslog":{"version":"1"}},"event":{"created":"2025-06-23T15:20:45.639474828Z","original":"<189>1 2025-06-23T15:20:45Z redact - - - - eventtime=1750692044620716079 tz=\"+0200\" logid=\"0001000014\" type=\"traffic\" subtype=\"local\" level=\"notice\" vd=\"root\" srcip=10.53.4.119 identifier=43776 srcintf=\"wan1\" 
srcintfrole=\"wan\" dstip=10.52.25.145 dstintf=\"root\" dstintfrole=\"undefined\" srccountry=\"Reserved\" dstcountry=\"Reserved\" sessionid=8441941 proto=1 action=\"accept\" policyid=40 policytype=\"policy\" poluuid=\"813c45e0-3ad6-51f0-db42-8ec755725c23\" policyname=\"redact (ICMP)\" service=\"PING\" trandisp=\"noop\" app=\"PING\" duration=180 sentbyte=3072 rcvdbyte=3072 sentpkt=3 rcvdpkt=3"},"observer":{"ip":"10.52.31.14"}}

I have edited props.conf to contain the following stanza, but still no luck:
[fortigate_log]
SHOULD_LINEMERGE = false
LINE_BREAKER = }(\s*)\{
Any direction on where to go from here?