All Posts

@MikeMakai  Hi Mike, I recently integrated an FTD appliance with Splunk. Previously, the customer was using a Cisco ASA, and last week they upgraded to FTD. We didn't make any changes to the Splunk setup and are still using the Cisco ASA add-on. Interestingly, the logs are being parsed correctly. Have you tried using the Cisco ASA add-on? Additionally, when you run a tcpdump on the destination side (Splunk), how are the logs appearing from the FTD device? Are they coming through as expected? It seems the cisco:ftd:syslog sourcetype isn't parsing them properly. I've attached a screenshot for your reference. I hope this helps. If any reply helps you, you could add your upvote/karma points to that reply.
01-09-2025 17:01:37.725 -0500 WARN  TcpOutputProc [4940 parsing] - The TCP output processor has paused the data flow. Forwarding to host_dest=sbdcrib.splunkcloud.com inside output group default-autolb-group from host_src=CRBCITDHCP-01 has been blocked for blocked_seconds=1800. This can stall the data flow towards indexing and other network outputs. Review the receiving system's health in the Splunk Monitoring Console. It is probably not accepting data.
01-09-2025 17:30:30.169 -0500 INFO  PeriodicHealthReporter - feature="TCPOutAutoLB-0" color=red indicator="s2s_connections" due_to_threshold_value=70 measured_value=100 reason="More than 70% of forwarding destinations have failed.  Ensure your hosts and ports in outputs.conf are correct.  Also ensure that the indexers are all running, and that any SSL certificates being used for forwarding are correct." node_type=indicator node_path=splunkd.data_forwarding.splunk-2-splunk_forwarding.tcpoutautolb-0.s2s_connections
If you want to get the duration as whole minutes, rounded up, use ceil, as @isoutamo shows, e.g. | eval WholeMinutes=ceil(diffTime/1000/60)
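A fuller sketch, assuming hypothetical startTime and endTime fields holding epoch seconds, so that diffTime is in milliseconds as the division above implies:

| eval diffTime=(endTime-startTime)*1000
| eval WholeMinutes=ceil(diffTime/1000/60)
```e.g. diffTime=125000 ms -> 125000/1000/60 = 2.08 -> ceil gives 3```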
What are some reasons why a Linux UF would get quarantined by the deployment manager (port 8089)?
transaction is not a safe command to use if you have large data volumes, as it will silently ignore data when it hits limits. You are using a long span of 5m, so it will potentially have to hold lots of data in memory. Secondly, if you do use transaction and want to group by host, you need to supply host as a field in the transaction command. With all search debugging tasks, first find a small dataset that contains the condition you are trying to catch and then just use the transaction command to see what transactions you get - if you can post some example events or anonymised data that demonstrate what you are having trouble with, that would help. Note that it is generally possible to use stats as a replacement for transaction (see the sketch below), but in this case it may not be applicable.
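For illustration only, a minimal sketch of the usual stats substitution, with placeholder index and field names - grouping with stats instead of stitching events together with transaction:

index=myindex
| stats earliest(_time) as start latest(_time) as end count values(message) as messages by host
| eval duration=end-start
```stats streams through events, so it avoids transaction's silent memory/event-count limits```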
Please keep in mind that the Splunk docs no longer specify support for Windows 10/11, only Windows Server versions specifically. Something may have impacted the install extraction process.
This simply means that your input is sending an SNMP get or walk request but is not getting a response. There might be a multitude of possible reasons - wrong host, wrong community, wrong protocol version, firewall rules...
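A quick way to test outside Splunk, assuming the Net-SNMP tools are installed and your input uses SNMP v2c; substitute the community string and host from your input config:

snmpwalk -v2c -c <community> <host> system

If this times out too, the problem is with the device or the network path, not the Splunk input.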
It looks as if the other end doesn't speak TLS.
I have two log messages "%ROUTING-LDP-5-NSR_SYNC_START" and "%ROUTING-LDP-5-NBR_CHANGE" which usually accompany each other whenever there is a peer flapping. So "%ROUTING-LDP-5-NBR_CHANGE" is followed by "%ROUTING-LDP-5-NSR_SYNC_START" almost every time. I am trying to find the output where a device only produces "%ROUTING-LDP-5-NSR_SYNC_START" without "%ROUTING-LDP-5-NBR_CHANGE", and I am using transaction but have not been able to figure it out.  index = test ("%ROUTING-LDP-5-NSR_SYNC_START" OR "%ROUTING-LDP-5-NBR_CHANGE") | transaction maxspan=5m startswith="%ROUTING-LDP-5-NSR_SYNC_START" endswith="%ROUTING-LDP-5-NBR_CHANGE" | search eventcount=1 startswith="%ROUTING-LDP-5-NSR_SYNC_START" | stats count by host
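A stats-based sketch of one way to spot START events with no accompanying CHANGE, assuming fixed 5-minute buckets are an acceptable approximation of transaction's sliding maxspan window:

index=test ("%ROUTING-LDP-5-NSR_SYNC_START" OR "%ROUTING-LDP-5-NBR_CHANGE")
| eval msg=if(match(_raw, "NBR_CHANGE"), "CHANGE", "START")
| bin _time span=5m
| stats values(msg) as msgs by host _time
| where mvcount(msgs)=1 AND msgs="START"
```keep only 5m buckets per host where START appeared with no CHANGE```
| stats count by host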
Quite often the OS openssl doesn't work correctly, as there can be version conflicts, missing libraries, etc. if your PATH and LD_LIBRARY_PATH are set incorrectly. For that reason I always use Splunk's openssl version. Basically this means that you can reach it, but for some reason it cannot get any real answer; just the read and response are OK (errno=0). You could also try curl -vk https://host:port to see if that gives more information. I think that you have some issues with the TLS settings in your configuration. Could you tell us exactly what you have tried to achieve and what you have done? Also add all those *.conf files inside </> blocks with passwords etc. masked as ****. Have you looked at these instructions: https://conf.splunk.com/files/2023/slides/SEC1936B.pdf? This presentation is an excellent bootcamp for using TLS with Splunk.
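For example, a sketch of testing the endpoint with Splunk's bundled openssl (replace host:port with your target); splunk cmd runs the binary with Splunk's own library paths, avoiding the PATH/LD_LIBRARY_PATH issues above:

$SPLUNK_HOME/bin/splunk cmd openssl s_client -connect host:port

The output shows the certificate chain and the negotiated protocol, or the handshake error if TLS fails.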
Hi @JohnEGones , Although it is an expensive solution, you can use a sandbox and a data diode for sending threat intel files into the air-gapped network. Download the files using an internet-connected system; after sandbox scanning, you can send the file to the protected network through a strictly configured one-way data diode. Lastly, you can update the ES threat list by serving this file with a simple web server.
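For the last step, a sketch of a minimal web server on the protected side, assuming Python 3 is available; the port and directory are just examples:

python3 -m http.server 8000 --directory /opt/threatintel

ES can then be pointed at http://<host>:8000/<file> as the threat list URL.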
Hey, so my company is working on creating a visual in SharePoint by integrating iframes from reports. This is working fine, but the question I had was: will the embedded links stop working if the user that created them leaves the org and the account is disabled or deleted? Thank you in advance for any help! #Iframes #reports #embed
I am trying to query AWS Config data in Splunk to identify the names of all S3 buckets in AWS. Is there a way to write an SPL search that will list out the S3 bucket names from this data?
I do have a ticket open, but am also leveraging the community to determine if this has been seen in the past.
Hi Kiran, I'm sending syslog directly from the FTD devices. Here is the config file.

[tcp://192.168.1.2:1470]
connection_host = dns
index = cisco_sfw_ftd_syslog
sourcetype = cisco:ftd:syslog

[sbg_sfw_syslog_input://FTD_Pier]
event_types = *,syslog_intrusion,syslog_connection,syslog_file,syslog_file_malware
index = cisco_sfw_ftd_syslog
interval = 600
port = 1470
restrictToHost = 192.168.1.2
sourcetype = cisco:ftd:syslog
type = tcp

[tcp://192.168.200.2:1470]
connection_host = dns
index = cisco_sfw_ftd_syslog
sourcetype = cisco:ftd:syslog

[sbg_sfw_syslog_input://FTD_Kona]
event_types = *,syslog_intrusion,syslog_connection,syslog_file,syslog_file_malware
index = cisco_sfw_ftd_syslog
interval = 600
port = 1470
restrictToHost = 192.168.200.2
sourcetype = cisco:ftd:syslog
type = tcp

Thanks, Mike
Hi @John.Gregg, Thanks for asking your question on the Community. It appears the community has not jumped in with a reply yet. Did you happen to find any new information or a solution you can share here? If not and you are still looking for help, you can contact AppDynamics Support: How to contact AppDynamics Support and manage existing cases with Cisco Support Case Manager (SCM) 
Hi @Tom.Davison, Thanks for asking your question on the community. The community has not chimed in yet. Did you happen to find any new information or a solution you can share? If you are still looking for help, you can contact AppDynamics Support: How to contact AppDynamics Support and manage existing cases with Cisco Support Case Manager (SCM)
What sort of optimisation are you trying to do? My guess is that you are trying to remove all the joins? It would help immensely if you could share some raw events from your various sources which demonstrate the sort of result you are trying to achieve with your search, and describe in non-SPL terms what it is that you are trying to achieve - for example, what your example result would look like and the relationship between the results and the various input events. Also, what have you already tried in terms of "optimisation"?
Hey guys, so I was wondering if anyone had any idea how to optimize this query to minimize the sub searches. My brain hurts just looking at it honestly, for all the SPL Pros please lend a hand if possible.

index=efg* *
| search EVENT_TYPE=FG_EVENTATTR AND ((NAME=ConsumerName AND VALUE=OneStream) OR NAME=ProducerFilename OR NAME=OneStreamSubmissionID OR NAME=ConsumerFileSize OR NAME=RouteID)
| where trim(VALUE)!=""
| eval keyValuePair=mvzip(NAME,VALUE,"=")
| eval efgTime=min(MODIFYTS)
```We need to convert EDT/EST timestamps to UTC time.```
| eval EST_time=strptime(efgTime,"%Y-%m-%d %H:%M:%S.%N")
```IMPORTANT STEP: During EDT you add 14400 to convert to UTC; during EST you add 18000. (We need to automate this step in the code.)```
| eval tempTime = EST_time
| eval UTC_time=strftime(tempTime, "%Y-%m-%d %H:%M:%S.%1N")
| stats values(*) as * by ARRIVEDFILE_KEY
| eval temptime3=min(UTC_time)
| eval keyValuePair=mvappend("EFG_Delivery_Time=".temptime3, keyValuePair)
| eval keyValuePair=mvsort(keyValuePair)
```Let's extract our values now.```
| eval tempStr_1 = mvfilter(LIKE(keyValuePair, "%ConsumerFileSize=%"))
| eval tempStr_2 = mvfilter(LIKE(keyValuePair, "%EFG_Delivery_Time=%"))
| eval tempStr_3 = mvfilter(LIKE(keyValuePair, "%OneStreamSubmissionID=%"))
| eval tempStr_4 = mvfilter(LIKE(keyValuePair, "%ProducerFilename=%"))
| eval tempStr_5 = mvfilter(LIKE(keyValuePair, "%RouteID=%"))
```Now, let's assign the values to the right field name.```
| eval "File Size"=ltrim(tempStr_1,"ConsumerFileSize=")
| eval "EFG Delivery Time"=ltrim(tempStr_2,"EFG_Delivery_Time=")
| eval "Submission ID"=substr(tempStr_3, -38)
| eval "Source File Name"=ltrim(tempStr_4,"ProducerFilename=")
| eval "Route ID"=ltrim(tempStr_5,"RouteID=")
```Bring it all together! (Join EFG data to the data in the OS lookup table.)```
| search keyValuePair="*OneStreamSubmissionID*"
| rename "Submission ID" as Submission_ID
| rename "Source File Name" as Source_File_Name
| join type=left max=0 Source_File_Name
    [ search index=asvsdp* source=Watcher_Delivery_Status sourcetype=c1_json event_code=SINK_DELIVERY_COMPLETION (sink_name=onelake-delta-table-sink OR sink_name=onelake-table-sink OR sink_name=onelake-direct-sink)
    | eval test0=session_id
    | eval test1=substr(test0, 6)
    | eval o=len(test1)
    | eval Quick_Check=substr(test1, o-33, o)
    | eval p=if(like(Quick_Check, "%-%"), 35, 33)
    | eval File_Name_From_Session_ID=substr(test1, 1, o-p)
    | rename File_Name_From_Session_ID as Source_File_Name
    ```| lookup DFS-EFG-SDP-lookup_table_03.csv local=true Source_File_Name AS Source_File_Name OUTPUT Submission_ID, OS_time, BAP, Status```
    | join type=left max=0 Source_File_Name
        [ search index=asvexternalfilegateway_summary *
        | table Source_File_Name, Submission_ID, Processed_time, OS_time, BAP, Status ]
    | table event_code, event_timestamp, session_id, sink_name, _time, Source_File_Name, Submission_ID, OS_time, BAP, Status
    | search "Source_File_Name" IN (*OS.AIS.COF.DataOne.PROD*, *fgmulti_985440_GHR.COF.PROD.USPS.CARD*, *COF-DFS*) ]
```| lookup DFS-EFG-SDP-lookup_table_03.csv Submission_ID AS Submission_ID OUTPUT Processed_time, OS_time, BAP, Status```
| join type=left max=0 Submission_ID
    [ search index=asvexternalfilegateway_summary *
    | table Submission_ID, Processed_time, OS_time, BAP, Status ]
| eval "Delivery Status"=if(event_code="SINK_DELIVERY_COMPLETION","DELIVERED","FAILED")
| eval BAP = upper(BAP)
```| rename Processed_time as "OL Delivery Time" | eval "OL Delivery Time"=if('Delivery Status'="FAILED","Failed at OneStream",'OL Delivery Time')```
| rename OS_time as "OS Delivery Time"
```Display consolidated data in tabular format.```
| eval "OL Delivery Time"=strftime(event_timestamp/1000, "%Y-%m-%d %H:%M:%S.%3N")
```Convert OS timestamp from UTC EST/EDT```
| eval OS_TC='OS Delivery Time'
| eval OS_UTC_time=strptime(OS_TC,"%Y-%m-%d %H:%M:%S.%3N")
```IMPORTANT STEP: During EDT you add 14400 to convert to UTC; during EST you add 18000. (We need to automate this step in the code.)```
| eval tempTime_2 = OS_UTC_time - 18000
```| eval tempTime = EST_time```
| eval "OS Delivery Time"=strftime(tempTime_2, "%Y-%m-%d %H:%M:%S.%3N")
```Convert OL timestamp from UTC EST/EDT```
| eval OL_UTC_time=strptime('OL Delivery Time',"%Y-%m-%d %H:%M:%S.%3N")
```IMPORTANT STEP: During EDT you add 14400 to convert to UTC; during EST you add 18000. (We need to automate this step in the code.)```
| eval tempTime_3 = OL_UTC_time - 18000
```| eval tempTime = EST_time```
| eval "OL Delivery Time"=strftime(tempTime_3, "%Y-%m-%d %H:%M:%S.%3N")
| rename Source_File_Name as "Source File Name"
| rename Submission_ID as "Submission ID"
| fields BAP "Route ID" "Source File Name" "File Size" "EFG Delivery Time" "OS Delivery Time" "OL Delivery Time" "Delivery Status" "Submission ID"
```| search Source_File_Name IN (*COF-DFS*)```
| append
    [ search index=efg* source=efg_prod_summary sourcetype=stash STATUS_MESSAGE=Failed ConsumerName=OneStream
    | eval BAP=upper("badiscoverdatasupport")
    | eval "Delivery Status"="FAILED", "Submission ID"="--"
    | rename RouteID as "Route ID", SourceFilename as "Source File Name", FILE_SIZE as "File Size", ArrivalTime as "EFG Delivery Time"
    | table BAP "Route ID" "Source File Name" "File Size" "EFG Delivery Time" "OS Delivery Time" "OL Delivery Time" "Delivery Status" "Submission ID"
    | search "Source File Name" IN (*OS.AIS.COF.DataOne.PROD*, *fgmulti_985440_GHR.COF.PROD.USPS.CARD*, *COF-DFS*) ]
| sort -"EFG Delivery Time"
| search "Source File Name" IN (*OS.AIS.COF.DataOne.PROD*, *fgmulti_985440_GHR.COF.PROD.USPS.CARD*, *COF-DFS*)
| dedup "Submission ID"
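On the "automate this step" comments: if the search runs as a user whose Splunk timezone preference is US/Eastern, strptime already interprets the naive timestamp string in that zone, and strftime's %z exposes the offset in effect at that instant, so the EDT/EST choice can be derived instead of hard-coding 14400 or 18000. A hedged sketch, reusing the efgTime field from above; the offset arithmetic assumes a whole-hour zone like Eastern:

| eval est_epoch=strptime(efgTime, "%Y-%m-%d %H:%M:%S.%N")
```%z yields e.g. -0400 during EDT and -0500 during EST for this instant```
| eval offset_secs=tonumber(substr(strftime(est_epoch, "%z"), 1, 3))*3600
```render the instant as UTC by cancelling the local offset```
| eval UTC_time=strftime(est_epoch - offset_secs, "%Y-%m-%d %H:%M:%S.%3N")

The same pattern would replace the hard-coded "- 18000" on tempTime_2 and tempTime_3.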