All Posts

Hi @splunklearner, let me understand: your F5 WAF is already sending its logs to your syslog server, and the syslog server writes these logs to files in a folder. I suppose that in the folder path there's the hostname or IP address of the sender. In this case, you have to install your UF on the syslog server and then install on this UF the Fortinet Fortigate Add-On for Splunk. In this add-on, you have to create a local folder and a new conf file called inputs.conf. If the path of the log files is /data/f5_waf/<ip_address>/<year>/<month>/<day>/ and the filename is waflogs_yyyymmdd.log, in this file you have to add the following stanza:

[monitor:///data/f5_waf/.../waflogs_*.log]
index = your_index
sourcetype = fgt_logs
host_segment = 3
disabled = 0

and then restart the UF. For more info, see https://docs.splunk.com/Documentation/SplunkCloud/9.2.2406/Data/Monitorfilesanddirectories Ciao. Giuseppe
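To make host_segment = 3 concrete, here is an annotated version of the same monitor stanza; the IP address and date in the example path are made up for illustration and are not from this thread:

# example file:    /data/f5_waf/10.1.2.3/2024/11/07/waflogs_20241107.log
# path segments:   1 = data   2 = f5_waf   3 = 10.1.2.3   4 = 2024 ...
# host_segment = 3 therefore sets host = 10.1.2.3 for events read from that file
[monitor:///data/f5_waf/.../waflogs_*.log]
host_segment = 3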
Hi all, we want to get F5 WAF logs into Splunk. The WAF team is sending logs to our syslog server. A UF is installed on our syslog server and it will forward the data to our indexer. Please help me with any detailed documentation or steps to ingest the data successfully, and any troubleshooting if needed. I don't really know what syslog is yet... I am very new to Splunk and learning. Apologies if it is a basic question, but I seriously want to learn.
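For orientation, here is a minimal sketch of the two UF config files this kind of setup usually involves; the paths, index name, sourcetype, and indexer address are placeholders, not values from this thread:

# inputs.conf on the UF (syslog server) -- monitor the files the syslog daemon writes
[monitor:///data/f5_waf]
index = your_index
sourcetype = f5:waf
disabled = 0

# outputs.conf on the UF -- forward the data to the indexer
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = your-indexer.example.com:9997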
Okay, so I don't know exactly where the search is. I have the datamodel.
Hi @Vnarunart, this is a request that I posted in Splunk Ideas (https://ideas.splunk.com/ideas/EID-I-1731) and it's in the "Under consideration" state; if you think it's useful, please vote for it! Anyway, you could add to your Heavy Forwarders a custom indexed field with the name of the HF: https://docs.splunk.com/Documentation/SplunkCloud/9.2.2403/Data/Configureindex-timefieldextraction

In props.conf:
[default]
TRANSFORMS-hf_name = my_hf_1

In transforms.conf:
[my_hf_1]
REGEX = .
FORMAT = my_hf_1::my_hf_1
WRITE_META = true

and then in fields.conf:
[my_hf_1]
INDEXED = true

one set for each HF. Ciao. Giuseppe
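Once the indexed field exists, a small sketch of how you could then see which HF an event came through; the index and field names follow the hypothetical stanzas above:

index=your_index
| stats count by my_hf_1

Because the field is indexed, tstats also works:

| tstats count where index=your_index by my_hf_1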
Yes.  Sorry, I erased the = when editing the text.
Compared with some of your previous questions on the same subject, this is much clearer.  In Re: Search an index for two fields with a join, I gave an example based on speculation that description was unimportant.  Now that you illustrate expected results, I no longer have to read your mind.  The illustrated results also imply that there can be a different format in description, and that the fields first and last are all lower-case, while the name in description uses the first-cap rule.  So, instead of using the second search as a subsearch to limit the first search, simply append output from the second search and do stats on events from both.

index=collect_identities sourcetype=ldap:query
    [ search index=db_mimecast splunkAccountCode=* mcType=auditLog
      | fields user
      | dedup user
      | eval email=user, extensionAttribute10=user, extensionAttribute11=user
      | fields email extensionAttribute10 extensionAttribute11
      | format "(" "(" "OR" ")" "OR" ")" ]
| dedup email
| append
    [ search index=db_service_now sourcetype="snow:incident" affect_dest="STL Leaver"
      | dedup description
      | rex field=description "Leaver Request for (?<first>\S+) (?<last>\S+) -"
      | rex field=description "(?<first>\S+) (?<last>\S+) Offboarding on -"
      | eval first = lower(first), last = lower(last) ]
| eval identity=replace(identity, "Adm0", "")
| eval identity=replace(identity, "Adm", "")
| eval identity=lower(identity)
| fields identity email extensionattribute10 extensionattribute11 first last _time affect_dest active description dv_state number
| stats values(*) as * min(_time) as _time BY first last

Hope this helps.
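To see what the two rex patterns in the appended search extract, here is a small emulation; the two description strings are made up for illustration and are not from the thread:

| makeresults format=csv data="description
Leaver Request for John Smith - laptop return
Jane Doe Offboarding on - 2024-11-01"
``` hypothetical sample descriptions above ```
| rex field=description "Leaver Request for (?<first>\S+) (?<last>\S+) -"
| rex field=description "(?<first>\S+) (?<last>\S+) Offboarding on -"
| eval first = lower(first), last = lower(last)
| table description first last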
@yuanliu  You mean like below? TIME_PREFIX = EVENTTS=\"
Going back to my four commandments of asking answerable questions:

1. Illustrate data input (in raw text, anonymize as needed), whether they are raw events or output from a search (SPL that volunteers here do not have to look at).
2. Illustrate the desired output from illustrated data.
3. Explain the logic between illustrated data and desired output without SPL.
4. If you also illustrate attempted SPL, illustrate actual output, compare with desired output, and explain why they look different to you if that is not painfully obvious.

Until you can illustrate your data, no one can help you.  On the surface, your case function should work given this set of data:

url
abc.fromregion1.com
def.toregion2wego.com
ghi.fromregion1toregion2.com

You can run a stats and get

region      count
Region 1    2
Region 2    1

Here is the emulation to prove the above.

| makeresults format=csv data="url
abc.fromregion1.com
def.toregion2wego.com
ghi.fromregion1toregion2.com"
``` data emulation above ```
| eval region=case(url like "%region1%","Region 1",url like "%region2%","Region 2")
| stats count by region
I see.  TIME_PREFIX is a regex.  So, you need to escape quotation marks.   TIME_PREFIX = EVENTTS=\"    
I am confused as to how using transpose changes anything.  The solution should be the same no matter how you obtain the table.  The following will pick up rows with equal values:

index=idx1 source="src1"
| table field1 field2 field3 field4 field5 field6 field7 field8 field9 field10
| transpose header_field=field1
| foreach sys* [eval _row_values = mvappend(_row_values, <<FIELD>>)]
| where mvcount(mvdedup(_row_values)) == 1

Using your sample data, the result is

column   sys1   sys2
field3   10     10
field4   a      a
field6   c      c
field7   20     20
field9   10     10

Here is an emulation for you to play with and compare with real data

| makeresults format=csv data="column, sys1, sys2
field2, a, b
field3, 10, 10
field4, a, a
field5, 10, 20
field6, c, c
field7, 20, 20
field8, a, d
field9, 10, 10
field10, 20, 10"
``` the above emulates
index=idx1 source="src1"
| table field1 field2 field3 field4 field5 field6 field7 field8 field9 field10
| transpose header_field=field1 ```
@yuanliu  EVENTTS="2024-11-07 18:29:43.175" I want _time to match the above timestamp.
Very astute observation, @johnhuang!  My default is Wrap Results = Yes. Switch to No, and the spacing changes. Not sure what the rationale is for this behavior.
The screenshot shows the timestamp as "2024-11-07 18:45:00.035" and the event time as "11/7/24 6:45:00.035 PM".  What exactly does not match?
Thanks for your reply @sainag_splunk  I have done some tests and checks.

For the load, I do not think it is too much data; I increased the number of heavy forwarders from 2 to 4 and it did not make any change.

For TLS/SSL, the instance with the UF supports SSLv3, TLSv1, TLSv1.2, and TLSv1.3. The load balancer (LB) (the HFs are behind the LB) supports TLS 1.2 and 1.3.

To eliminate the LB, I pointed the UF directly to the HF by changing outputs.conf as follows:

uri = http://<ip-of-hf>:8088

It did not work in the environment with UF v9.3.1 and HF v9.3.1, with the same error. Telnet from the UF to the HF on port 8088 worked. However, this (direct to HF) worked in the environment with UF v9.3.1 and HF v9.1.2.

Also, I noticed that the restart of the UF in the environment with the problem is very slow, it takes 4-5 minutes; in the environment with no issues it takes a couple of seconds.

Output and input configs look similar.
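For reference, a minimal sketch of the outputs.conf httpout stanza this setup appears to rely on; the token and address are placeholders, and https is shown only because the thread discusses TLS between the UF and the LB/HF (the post above uses http):

# outputs.conf on the UF -- forward over HTTP Event Collector instead of tcpout
[httpout]
httpEventCollectorToken = <hec-token-configured-on-the-hf>
uri = https://<ip-of-hf>:8088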
Hello Splunkers!! I want to extract _time and have it match the timestamp in the event's fields while ingesting to Splunk. However, even after applying the props.conf attribute settings, the results still do not match after ingestion. Please advise me on the proper settings and assist me in fixing this one.

Raw event:

2024-11-07 18:45:00.035, ID="51706", IDEVENT="313032807", EVENTTS="2024-11-07 18:29:43.175", INSERTTS="2024-11-07 18:42:05.819", SOURCE="Shuttle.DiagnosticErrorInfoLogList.28722.csv", LOCATIONOFFSET="0", LOGTIME="2024-11-07 18:29:43.175", BLOCK="2", SECTION="A9.18", SIDE="-", LOCATIONREF="10918", ALARMID="20201", RECOVERABLE="False", SHUTTLEID="Shuttle_069", ALARM="20201", LOCATIONDIR="Front"

Existing props settings:

CHARSET = UTF-8
DATETIME_CONFIG =
LINE_BREAKER = [0-9]\-[0-9]+\-[0-9]+\s[0-9]+:[0-9]+:[0-9]+.\d+
NO_BINARY_CHECK = true
category = Custom
TIME_PREFIX = EVENTTS="
TIME_FORMAT = %Y-%m-%d %H:%M:%S.%N
MAX_TIMESTAMP_LOOKAHEAD = 30
TZ = UTC

In the screenshot below you can still see that _time is not extracted to match the timestamp in the field EVENTTS.
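Following the suggestion given earlier in this feed (TIME_PREFIX is a regex, so escape the quotation mark), here is a hedged sketch of the adjusted timestamp settings; the sourcetype name is a placeholder and the rest of the posted stanza is left unchanged:

[your_sourcetype]
TIME_PREFIX = EVENTTS=\"
# %N as posted should parse the subseconds; %3N would state the three-digit millisecond width explicitly
TIME_FORMAT = %Y-%m-%d %H:%M:%S.%N
MAX_TIMESTAMP_LOOKAHEAD = 30
TZ = UTC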
Thank you for sharing the details. Your prompt response is greatly appreciated.

How many events are being processed: 124,878 events; duration: 184.767 seconds
How many indexers are searching this data: one index (asvservices)

Please help me improve the performance; the duration should be 15 to 20 seconds.

Query:

index=asvservices authenticateByRedirectFinish (*)
| join request_correlation_id
    [ search index=asvservices stepup_validate ("isMatchFound\\\":true")
      | spath "policy_metadata_policy_name"
      | search "policy_metadata_policy_name" = stepup_validate
      | fields "request_correlation_id" ]
| spath "metadata_endpoint_service_name"
| spath "protocol_response_detail"
| search "metadata_endpoint_service_name"=authenticateByRedirectFinish
| rename "protocol_response_detail" as response
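One common way to avoid the subsearch join in a case like this is to pull both event types in a single base search and correlate them with stats; the following is only a rough sketch under the assumption that request_correlation_id is present in both event sets (index, field, and string values are taken from the query above, the rest is illustrative, not a tested rewrite):

index=asvservices (authenticateByRedirectFinish OR (stepup_validate "isMatchFound\\\":true"))
| spath "policy_metadata_policy_name"
| spath "metadata_endpoint_service_name"
| spath "protocol_response_detail"
| eval is_finish=if('metadata_endpoint_service_name'=="authenticateByRedirectFinish", 1, 0)
| eval is_stepup=if('policy_metadata_policy_name'=="stepup_validate", 1, 0)
| stats max(is_finish) as has_finish max(is_stepup) as has_stepup values(protocol_response_detail) as response by request_correlation_id
| where has_finish==1 AND has_stepup==1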
Format -> Wrap Results 
You can use LIKE or MATCH

| eval region=CASE(LIKE(url, "%region1%"), "Region 1", LIKE(url, "%region2%"), "Region 2")
| eval region=CASE(MATCH(url, "region1"), "Region 1", MATCH(url, "region2"), "Region 2")
Thank you very much for your comprehensive response. I have a follow-up question. In a scenario where we have two HFs, is there a way to determine which HF the data originated from when searching in Splunk Cloud? Thank you for your advice and time.
Hi Bhumi, Yes, it is from HF->indexer