All Posts


I am trying to query AWS Config data in Splunk to identify the names of all S3 buckets in AWS. Is there a way to write an SPL search that will list out the S3 bucket names from the AWS Config data?
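One possible starting point, as a minimal sketch: assuming the AWS Config data is ingested via the Splunk Add-on for AWS with sourcetype aws:config, and that the index name below is a placeholder for your own, something like this should list the bucket names (for AWS::S3::Bucket resources, resourceId is the bucket name):

index=aws sourcetype=aws:config resourceType="AWS::S3::Bucket"
```keep the most recent sighting of each bucket```
| stats latest(_time) as last_seen by resourceId
| rename resourceId as bucket_name
| table bucket_name last_seen

Adjust the index, sourcetype, and field names to match what your AWS input actually produces.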
I do have a ticket open, but am also leveraging the community to determine if this has been seen in the past.
Hi Kiran, I'm sending syslog directly from the FTD devices. Here is the config file.

[tcp://192.168.1.2:1470]
connection_host = dns
index = cisco_sfw_ftd_syslog
sourcetype = cisco:ftd:syslog

[sbg_sfw_syslog_input://FTD_Pier]
event_types = *,syslog_intrusion,syslog_connection,syslog_file,syslog_file_malware
index = cisco_sfw_ftd_syslog
interval = 600
port = 1470
restrictToHost = 192.168.1.2
sourcetype = cisco:ftd:syslog
type = tcp

[tcp://192.168.200.2:1470]
connection_host = dns
index = cisco_sfw_ftd_syslog
sourcetype = cisco:ftd:syslog

[sbg_sfw_syslog_input://FTD_Kona]
event_types = *,syslog_intrusion,syslog_connection,syslog_file,syslog_file_malware
index = cisco_sfw_ftd_syslog
interval = 600
port = 1470
restrictToHost = 192.168.200.2
sourcetype = cisco:ftd:syslog
type = tcp

Thanks, Mike
Hi @John.Gregg, Thanks for asking your question on the Community. It appears the community has not jumped in with a reply yet. Did you happen to find any new information or a solution you can share here? If not, and you are still looking for help, you can contact AppDynamics Support: How to contact AppDynamics Support and manage existing cases with Cisco Support Case Manager (SCM)
Hi @Tom.Davison, Thanks for asking your question on the community. The community has not chimed in yet; did you happen to find any new information or a solution you can share? If you are still looking for help, you can contact AppDynamics Support: How to contact AppDynamics Support and manage existing cases with Cisco Support Case Manager (SCM)
What sort of optimisation are you trying to do? My guess is that you are trying to remove all the joins? It would help immensely if you could share some raw events from your various sources which demonstrate the sort of result you are trying to achieve with your search, and describe in non-SPL terms what it is that you are trying to achieve, for example, what your example result would look like and the relationship between the results and the various input events. Also, what have you already tried in terms of "optimisation"?
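For reference, the usual join-removal pattern is a single search across all the sources followed by stats on the shared key. A generic sketch, with placeholder index and field names rather than anything from the actual data:

(index=source_a) OR (index=source_b)
```normalise the key each source carries under a different name```
| eval join_key=coalesce(key_field_a, key_field_b)
| stats values(*) as * by join_key

This collapses what would otherwise be a join into one pass over the events.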
Hey guys, so I was wondering if anyone had any idea how to optimize this query to minimize the sub searches. My brain hurts just looking at it honestly; all you SPL pros, please lend a hand if possible.

index=efg* *
| search EVENT_TYPE=FG_EVENTATTR AND ((NAME=ConsumerName AND VALUE=OneStream) OR NAME=ProducerFilename OR NAME=OneStreamSubmissionID OR NAME=ConsumerFileSize OR NAME=RouteID)
| where trim(VALUE)!=""
| eval keyValuePair=mvzip(NAME,VALUE,"=")
| eval efgTime=min(MODIFYTS)
```We need to convert EDT/EST timestamps to UTC time.```
| eval EST_time=strptime(efgTime,"%Y-%m-%d %H:%M:%S.%N")
```IMPORTANT STEP: During EDT you add 14400 to convert to UTC; during EST you add 18000. (We need to automate this step in the code.)```
| eval tempTime = EST_time
| eval UTC_time=strftime(tempTime, "%Y-%m-%d %H:%M:%S.%1N")
| stats values(*) as * by ARRIVEDFILE_KEY
| eval temptime3=min(UTC_time)
| eval keyValuePair=mvappend("EFG_Delivery_Time=".temptime3, keyValuePair)
| eval keyValuePair=mvsort(keyValuePair)
```Let's extract our values now.```
| eval tempStr_1 = mvfilter(LIKE(keyValuePair, "%ConsumerFileSize=%"))
| eval tempStr_2 = mvfilter(LIKE(keyValuePair, "%EFG_Delivery_Time=%"))
| eval tempStr_3 = mvfilter(LIKE(keyValuePair, "%OneStreamSubmissionID=%"))
| eval tempStr_4 = mvfilter(LIKE(keyValuePair, "%ProducerFilename=%"))
| eval tempStr_5 = mvfilter(LIKE(keyValuePair, "%RouteID=%"))
```Now, let's assign the values to the right field name.```
| eval "File Size"=ltrim(tempStr_1,"ConsumerFileSize=")
| eval "EFG Delivery Time"=ltrim(tempStr_2,"EFG_Delivery_Time=")
| eval "Submission ID"=substr(tempStr_3, -38)
| eval "Source File Name"=ltrim(tempStr_4,"ProducerFilename=")
| eval "Route ID"=ltrim(tempStr_5,"RouteID=")
```Bring it all together! (Join EFG data to the data in the OS lookup table.)```
| search keyValuePair="*OneStreamSubmissionID*"
| rename "Submission ID" as Submission_ID
| rename "Source File Name" as Source_File_Name
| join type=left max=0 Source_File_Name
    [ search index=asvsdp* source=Watcher_Delivery_Status sourcetype=c1_json event_code=SINK_DELIVERY_COMPLETION (sink_name=onelake-delta-table-sink OR sink_name=onelake-table-sink OR sink_name=onelake-direct-sink)
    | eval test0=session_id
    | eval test1=substr(test0, 6)
    | eval o=len(test1)
    | eval Quick_Check=substr(test1, o-33, o)
    | eval p=if(like(Quick_Check, "%-%"), 35, 33)
    | eval File_Name_From_Session_ID=substr(test1, 1, o-p)
    | rename File_Name_From_Session_ID as Source_File_Name
    ```| lookup DFS-EFG-SDP-lookup_table_03.csv local=true Source_File_Name AS Source_File_Name OUTPUT Submission_ID, OS_time, BAP, Status```
    | join type=left max=0 Source_File_Name
        [ search index=asvexternalfilegateway_summary *
        | table Source_File_Name, Submission_ID, Processed_time, OS_time, BAP, Status ]
    | table event_code, event_timestamp, session_id, sink_name, _time, Source_File_Name, Submission_ID, OS_time, BAP, Status
    | search "Source_File_Name" IN (*OS.AIS.COF.DataOne.PROD*, *fgmulti_985440_GHR.COF.PROD.USPS.CARD*, *COF-DFS*) ]
```| lookup DFS-EFG-SDP-lookup_table_03.csv Submission_ID AS Submission_ID OUTPUT Processed_time, OS_time, BAP, Status```
| join type=left max=0 Submission_ID
    [ search index=asvexternalfilegateway_summary *
    | table Submission_ID, Processed_time, OS_time, BAP, Status ]
| eval "Delivery Status"=if(event_code="SINK_DELIVERY_COMPLETION","DELIVERED","FAILED")
| eval BAP = upper(BAP)
```| rename Processed_time as "OL Delivery Time" | eval "OL Delivery Time"=if('Delivery Status'="FAILED","Failed at OneStream",'OL Delivery Time')```
| rename OS_time as "OS Delivery Time"
```Display consolidated data in tabular format.```
| eval "OL Delivery Time"=strftime(event_timestamp/1000, "%Y-%m-%d %H:%M:%S.%3N")
```Convert OS timestamp from EST/EDT to UTC.```
| eval OS_TC='OS Delivery Time'
| eval OS_UTC_time=strptime(OS_TC,"%Y-%m-%d %H:%M:%S.%3N")
```IMPORTANT STEP: During EDT you add 14400 to convert to UTC; during EST you add 18000. (We need to automate this step in the code.)```
| eval tempTime_2 = OS_UTC_time - 18000
```| eval tempTime = EST_time```
| eval "OS Delivery Time"=strftime(tempTime_2, "%Y-%m-%d %H:%M:%S.%3N")
```Convert OL timestamp from EST/EDT to UTC.```
| eval OL_UTC_time=strptime('OL Delivery Time',"%Y-%m-%d %H:%M:%S.%3N")
```IMPORTANT STEP: During EDT you add 14400 to convert to UTC; during EST you add 18000. (We need to automate this step in the code.)```
| eval tempTime_3 = OL_UTC_time - 18000
```| eval tempTime = EST_time```
| eval "OL Delivery Time"=strftime(tempTime_3, "%Y-%m-%d %H:%M:%S.%3N")
| rename Source_File_Name as "Source File Name"
| rename Submission_ID as "Submission ID"
| fields BAP "Route ID" "Source File Name" "File Size" "EFG Delivery Time" "OS Delivery Time" "OL Delivery Time" "Delivery Status" "Submission ID"
```| search Source_File_Name IN (*COF-DFS*)```
| append
    [ search index=efg* source=efg_prod_summary sourcetype=stash STATUS_MESSAGE=Failed ConsumerName=OneStream
    | eval BAP=upper("badiscoverdatasupport")
    | eval "Delivery Status"="FAILED", "Submission ID"="--"
    | rename RouteID as "Route ID", SourceFilename as "Source File Name", FILE_SIZE as "File Size", ArrivalTime as "EFG Delivery Time"
    | table BAP "Route ID" "Source File Name" "File Size" "EFG Delivery Time" "OS Delivery Time" "OL Delivery Time" "Delivery Status" "Submission ID"
    | search "Source File Name" IN (*OS.AIS.COF.DataOne.PROD*, *fgmulti_985440_GHR.COF.PROD.USPS.CARD*, *COF-DFS*) ]
| sort -"EFG Delivery Time"
| search "Source File Name" IN (*OS.AIS.COF.DataOne.PROD*, *fgmulti_985440_GHR.COF.PROD.USPS.CARD*, *COF-DFS*)
| dedup "Submission ID"
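On the "IMPORTANT STEP" comments about automating the EDT/EST offset: a hedged sketch, assuming the search head itself runs in US Eastern time, is to derive the offset from strftime's %z instead of hard-coding 14400 or 18000:

```%z renders as -0400 during EDT and -0500 during EST on an Eastern-time search head```
| eval epoch=strptime(efgTime, "%Y-%m-%d %H:%M:%S.%N")
| eval tz_hours=tonumber(substr(strftime(epoch, "%z"), 1, 3))
| eval UTC_time=strftime(epoch - tz_hours*3600, "%Y-%m-%d %H:%M:%S.%3N")

Because tz_hours evaluates to -4 during EDT and -5 during EST, subtracting tz_hours*3600 adds 14400 or 18000 seconds as appropriate.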
Those two timeouts do not need to match. Sixty seconds should be more than enough time to establish a connection. More may be needed to send a bundle, however. The bigger question is *why* the delivery is timing out. Perhaps the bundle is too large. If so, you should work on making it smaller (look at large lookup files first).
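If large lookups do turn out to be the cause, one option (a sketch; the app and file names below are placeholders) is to exclude them from the bundle in distsearch.conf on the search head:

[replicationDenylist]
huge_lookup = apps[/\\]your_app[/\\]lookups[/\\]big_lookup\.csv

Older Splunk versions use the [replicationBlacklist] stanza name instead. Keep in mind that excluded files are no longer available to searches running on the peers, so only exclude lookups that searches do not need remotely.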
In Dashboard Studio there is an attribute that controls the width of a column; it is called width. This is my column format section; note the values for width. Dashboard Studio says it defaults to 90 and the smallest it can use is 90.

"options": {
    "columnFormat": {
        "FailRate": {
            "data": "> table | seriesByName(\"FailRate\") | formatByType(FailRateColumnFormatEditorConfig)",
            "rowColors": "> table | seriesByName('FailRate') | pick(FailRateRowColorsEditorConfig)",
            "rowBackgroundColors": "> table | seriesByName(\"FailRate\") | rangeValue(FailRateRowBackgroundColorsEditorConfig)"
        },
        "Integration Name": { "width": 245 },
        "Function": { "width": 300 },
        "#": { "width": 40 }
    }
}
Hello, for the first one I tested with the OS's OpenSSL, and with the command you mentioned I get the following response: read:errno=0.
Are you asking a question? If so, this is not clear. Did you follow the installation instructions? Did you verify that there are no host or network rules or filtering that block SNMP packets/queries?
Hi all, Do any of you run into issues where bundle replication keeps timing out and splunkd.log suggests increasing the sendRcvTimeout parameter? In a previous ticket with support, they supplied a Golden Configuration that says this value should be around 180. Based on https://docs.splunk.com/Documentation/Splunk/9.4.0/Admin/Distsearchconf, under the 'classic' REPLICATION-SPECIFIC SETTINGS:

connectionTimeout = <integer>
* The maximum amount of time to wait, in seconds, before a search head's initial connection to a peer times out.
* Default: 60

sendRcvTimeout = <integer>
* The maximum amount of time to wait, in seconds, when a search head is sending a full replication to a peer.
* Default: 60

Should these two values be adjusted and kept in sync? I am considering adding another 30 seconds to each. Or, if there is something else I should verify first, it would be helpful to get some direction here.
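For what it's worth, both settings live under the [replicationSettings] stanza in distsearch.conf on the search head, so matching the quoted Golden Configuration value of 180 would look like the sketch below (whether 180 is right for your environment is the open question):

[replicationSettings]
connectionTimeout = 180
sendRcvTimeout = 180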
When I try message.backendCalls{}.endPoint, it shows exactly where the 404 is coming from, but I want the result broken down by LOB. Any suggestions?
Hi @Miguel3393, good for you, see you next time! Let us know if we can help you more, or, please, accept one answer for the other people of the Community. Ciao and happy splunking Giuseppe P.S.: Karma Points are appreciated by all the contributors
Thanks for the response @gcusello. This is the result I get with what you mentioned. Regards.
Hi all, I was wondering if there is a way to manually grab the threat intelligence updates for Splunk ES (we are on 7.3.1). Specifically: the intelligence download of "mitre_attack" (a threatlist download). Our Splunk environment is on-prem and air-gapped, so there is not really any way to create an external connection to the internet. Any ideas or advice would be appreciated.
Have you tried it with Splunk's openssl or the OS's openssl? You could/should try it with: splunk cmd openssl s_client -showcerts -connect host:port
Please validate your data. Based on your screenshots, it seems that when error code 404 occurs, the field message.incomingRequest.lob does not exist in these events.
There is still no result for the 404 status code; it only comes up with the query below:

index="uhcportals-prod-logs" sourcetype=kubernetes container_name="myuhc-sso" logger="com.uhg.myuhc.log.SplunkLog" message.ssoType="Inbound"
| chart count by "message.backendCalls{}.responseCode", "message.incomingRequest.lob"
Add message.incomingRequest.lob=* to your base search to filter for events that contain the field message.incomingRequest.lob:

index="uhcportals-prod-logs" sourcetype=kubernetes container_name="myuhc-sso" logger="com.uhg.myuhc.log.SplunkLog" message.ssoType="Inbound" "message.incomingRequest.lob"=*
| chart count by "message.backendCalls{}.responseCode", "message.incomingRequest.lob"
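An alternative, if the goal is to keep the 404 rows even though they lack the lob field, is to fill in the missing field rather than filter those events out; a sketch, where the placeholder value "unknown" is arbitrary:

index="uhcportals-prod-logs" sourcetype=kubernetes container_name="myuhc-sso" logger="com.uhg.myuhc.log.SplunkLog" message.ssoType="Inbound"
```create the field with a default value on events that do not have it```
| fillnull value="unknown" "message.incomingRequest.lob"
| chart count by "message.backendCalls{}.responseCode", "message.incomingRequest.lob"

This way every 404 shows up under an "unknown" LOB column instead of being dropped.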