All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Going back to my four commandments of asking answerable questions:

1. Illustrate the data input (in raw text, anonymized as needed), whether it is raw events or output from a search (SPL that volunteers here do not have to look at).
2. Illustrate the desired output from the illustrated data.
3. Explain the logic between the illustrated data and the desired output without SPL.
4. If you also illustrate attempted SPL, illustrate the actual output and compare it with the desired output; explain why they look different to you if that is not painfully obvious.

Until you can illustrate your data, no one can help you.

On the surface, your case function should work given this set of data:

url
abc.fromregion1.com
def.toregion2wego.com
ghi.fromregion1toregion2.com

You can run a stats and get

region    count
Region 1  2
Region 2  1

Here is the emulation to prove the above.

| makeresults format=csv data="url
abc.fromregion1.com
def.toregion2wego.com
ghi.fromregion1toregion2.com"
``` data emulation above ```
| eval region=case(url like "%region1%","Region 1",url like "%region2%","Region 2")
| stats count by region
I see. TIME_PREFIX is a regex, so you need to escape quotation marks:

TIME_PREFIX = EVENTTS=\"
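Since TIME_PREFIX is a regex, you can sanity-check it against the raw event outside Splunk. A rough Python sketch (the sample event is abbreviated from the question; Splunk's actual timestamp extractor is more involved, this only mirrors the prefix match and the MAX_TIMESTAMP_LOOKAHEAD window):

```python
import re

# Quick check of the TIME_PREFIX regex against the raw event from the
# question: the timestamp Splunk should parse is whatever follows the prefix.
event = ('2024-11-07 18:45:00.035, ID="51706", '
         'EVENTTS="2024-11-07 18:29:43.175", INSERTTS="2024-11-07 18:42:05.819"')

prefix = re.compile(r'EVENTTS=\"')  # mirrors TIME_PREFIX = EVENTTS=\"
m = prefix.search(event)

# Splunk reads the timestamp starting right after the prefix match, within
# MAX_TIMESTAMP_LOOKAHEAD (30) characters; the closing quote ends the value.
timestamp = event[m.end():m.end() + 30].split('"')[0]
print(timestamp)  # 2024-11-07 18:29:43.175
```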
I am confused as to how using transpose changes anything. The solution should be the same no matter how you obtain the table. The following will pick up rows with equal values:

index=idx1 source="src1"
| table field1 field2 field3 field4 field5 field6 field7 field8 field9 field10
| transpose header_field=field1
| foreach sys* [eval _row_values = mvappend(_row_values, <<FIELD>>)]
| where mvcount(mvdedup(_row_values)) == 1

Using your sample data, the result is

column   sys1  sys2
field3   10    10
field4   a     a
field6   c     c
field7   20    20
field9   10    10

Here is an emulation for you to play with and compare with real data:

| makeresults format=csv data="column, sys1, sys2
field2, a, b
field3, 10, 10
field4, a, a
field5, 10, 20
field6, c, c
field7, 20, 20
field8, a, d
field9, 10, 10
field10, 20, 10"
``` the above emulates
index=idx1 source="src1"
| table field1 field2 field3 field4 field5 field6 field7 field8 field9 field10
| transpose header_field=field1 ```
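The mvcount(mvdedup(...)) == 1 test keeps a row only when every sys* column holds the same value. The same idea can be sketched outside SPL; a rough Python emulation using the sample data above:

```python
# Rough Python emulation of the SPL above: after transposing, each row is a
# field name plus its values across the sys* columns; keep only rows whose
# values are identical (mvdedup collapses duplicates, so a count of 1 means
# all values matched).
rows = {
    "field2": ["a", "b"],
    "field3": ["10", "10"],
    "field4": ["a", "a"],
    "field5": ["10", "20"],
    "field6": ["c", "c"],
    "field7": ["20", "20"],
    "field8": ["a", "d"],
    "field9": ["10", "10"],
    "field10": ["20", "10"],
}

equal_rows = {name: vals for name, vals in rows.items() if len(set(vals)) == 1}
print(sorted(equal_rows))  # ['field3', 'field4', 'field6', 'field7', 'field9']
```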
@yuanliu  EVENTTS="2024-11-07 18:29:43.175" I want _time to match the above timestamp.
Very astute observation, @johnhuang !  My default is Wrap Results Yes. Switch to No, and the spacing changes. Not sure what the rationale is for this behavior.
The screenshot shows the timestamp as "2024-11-07 18:45:00.035" and the event time as "11/7/24 6:45:00.035 PM". What exactly does not match?
Thanks for your reply @sainag_splunk  I have done some tests and checks.

For the load: I do not think it is too much data. I increased the number of heavy forwarders from 2 to 4 and it did not make any change.

For the TLS/SSL: the instance with the UF supports SSLv3, TLSv1, TLSv1.2, and TLSv1.3. The load balancer (LB) (the HFs are behind the LB) supports TLS 1.2 and 1.3.

To eliminate the LB, I pointed the UF directly to the HF by changing outputs.conf as follows:

uri = http://<ip-of-hf>:8088

It did not work in the environment with UF v9.3.1 and HF v9.3.1, with the same error. Telnet from the UF to the HF on port 8088 worked. However, this (direct to HF) worked in the environment with UF v9.3.1 and HF v9.1.2.

Also, I noticed that restarting the UF in the environment with the problem is very slow, taking 4-5 minutes; in the environment with no issues it takes a couple of seconds.

Output and input configs look similar.
Hello Splunkers!! I want to extract _time and match it to the events' timestamp field while ingesting to Splunk. However, even after applying the props.conf attribute settings, the results still do not match after ingestion. Please advise me on the proper settings and assist me in fixing this.

Raw events:

2024-11-07 18:45:00.035, ID="51706", IDEVENT="313032807", EVENTTS="2024-11-07 18:29:43.175", INSERTTS="2024-11-07 18:42:05.819", SOURCE="Shuttle.DiagnosticErrorInfoLogList.28722.csv", LOCATIONOFFSET="0", LOGTIME="2024-11-07 18:29:43.175", BLOCK="2", SECTION="A9.18", SIDE="-", LOCATIONREF="10918", ALARMID="20201", RECOVERABLE="False", SHUTTLEID="Shuttle_069", ALARM="20201", LOCATIONDIR="Front"

Existing props settings:

CHARSET = UTF-8
DATETIME_CONFIG =
LINE_BREAKER = [0-9]\-[0-9]+\-[0-9]+\s[0-9]+:[0-9]+:[0-9]+.\d+
NO_BINARY_CHECK = true
category = Custom
TIME_PREFIX = EVENTTS="
TIME_FORMAT = %Y-%m-%d %H:%M:%S.%N
MAX_TIMESTAMP_LOOKAHEAD = 30
TZ = UTC

In the screenshot below, we can still see that _time is not extracted to match the timestamp of the field named "EVENTTS".
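The TIME_FORMAT string itself can be checked against the EVENTTS value outside Splunk. A rough Python sketch (Splunk's %N subsecond directive corresponds to Python strptime's %f; this only verifies the format matches the value, not Splunk's full ingestion pipeline):

```python
from datetime import datetime, timezone

# Verify that the EVENTTS value parses with the intended format and TZ = UTC.
raw = 'EVENTTS="2024-11-07 18:29:43.175"'

# Slice off the TIME_PREFIX (EVENTTS=") and the trailing quote, then parse.
value = raw.split('EVENTTS="')[1].rstrip('"')
ts = datetime.strptime(value, "%Y-%m-%d %H:%M:%S.%f").replace(tzinfo=timezone.utc)
print(ts.isoformat())  # 2024-11-07T18:29:43.175000+00:00
```

If the format parses cleanly here, the remaining suspects on the Splunk side are the prefix regex and the lookahead window.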
Thank you for sharing the details. Your prompt response is greatly appreciated.

How many events are being processed: 124,878 events; duration: 184.767 seconds.
How many indexers are searching this data: one index (asvservices).

Please help me improve the performance; the duration should be 15 to 20 seconds.

Query:

index=asvservices authenticateByRedirectFinish (*)
| join request_correlation_id
    [ search index=asvservices stepup_validate ("isMatchFound\\\":true")
    | spath "policy_metadata_policy_name"
    | search "policy_metadata_policy_name" = stepup_validate
    | fields "request_correlation_id" ]
| spath "metadata_endpoint_service_name"
| spath "protocol_response_detail"
| search "metadata_endpoint_service_name"=authenticateByRedirectFinish
| rename "protocol_response_detail" as response
Format -> Wrap Results 
You can use LIKE or MATCH:

| eval region=CASE(LIKE(url, "%region1%"), "Region 1", LIKE(url, "%region2%"), "Region 2")
| eval region=CASE(MATCH(url, "region1"), "Region 1", MATCH(url, "region2"), "Region 2")
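The difference between the two: like() takes SQL-style wildcards (% for any run of characters, _ for one character), while match() takes a regular expression matched anywhere in the string. A rough Python emulation of the two predicates, using the sample URLs from this thread:

```python
import re

# Rough Python emulation of SPL's like()/match() predicates (an assumption
# for illustration, not Splunk's actual implementation).
def spl_like(value, pattern):
    # Translate SQL wildcards into an anchored regex, escaping the rest.
    regex = re.escape(pattern).replace("%", ".*").replace("_", ".")
    return re.fullmatch(regex, value) is not None

def spl_match(value, regex):
    # match() is an unanchored regex search.
    return re.search(regex, value) is not None

urls = ["abc.fromregion1.com", "def.toregion2wego.com"]
print([spl_like(u, "%region1%") for u in urls])   # [True, False]
print([spl_match(u, "region2") for u in urls])    # [False, True]
```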
Thank you very much for your comprehensive response. I have a follow-up question: in a scenario where we have two HFs, is there a way to determine which HF the data originated from when searching in Splunk Cloud? Thank you for your advice and time.
Hi Bhumi, Yes, it is from HF->indexer
I am trying to simply break down a URL to extract the region and chart the use of specific URLs over time, but I just get a NULL count of everything. How do I display the counts as separate values?

[query]
| eval region=case(url like "%region1%","Region 1",url like "%region2%","Region 2")
| timechart span=1h count by region
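One thing worth knowing here: case() returns null when no condition matches, unless you add a catch-all default (e.g. a final true(), "Other" pair), so any URL containing neither pattern lands in a NULL column. A rough Python sketch of that behavior (the sample URLs are hypothetical):

```python
from collections import Counter

# Rough emulation of case() with no default: URLs that match neither
# condition yield null (None), which timechart groups into a NULL column.
def region_of(url):
    if "region1" in url:
        return "Region 1"
    if "region2" in url:
        return "Region 2"
    return None  # what case() yields without a true() default

urls = ["abc.fromregion1.com", "x.other.com", "def.toregion2wego.com"]
counts = Counter(region_of(u) for u in urls)
print(counts)  # Counter({'Region 1': 1, None: 1, 'Region 2': 1})
```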
Hi Rick, when you say to search for field::value, do you mean at the rex part or during search? Apologies if my wording was confusing, but the rex part managed to work, and we did see the fields when we just searched the index (index=index_name) in verbose mode. However, we did not manage to see those fields when using only props.conf and transforms.conf.
Can you put a number on "taking a longer time"? How much longer than 15-20 seconds? Again I ask, how many events are being processed? Millions of events will take a long time to process no matter how efficient the search is. How many indexers are searching this data? The more indexers that participate in the search (assuming the events are evenly distributed among them), the faster the search will be.

Adding a sourcetype to the base search may help. It may also help to add a fields command immediately after the base search; that may reduce the number of fields being transported, resulting in a faster search. Place the search after the first spath to help reduce the number of events the second spath needs to process.

index=asvservices sourcetype=foo "authenticateByRedirectFinish"
| fields metadata_endpoint_service_name protocol_response_detail
| spath "metadata_endpoint_service_name"
| search "metadata_endpoint_service_name"=authenticateByRedirectFinish
| spath "protocol_response_detail"
| rename "protocol_response_detail" as response
See https://community.splunk.com/t5/Knowledge-Management/Persistent-queue-problems/td-p/703859
See SPL-248479 in the release notes, if you are using persistent queue and see the following errors in splunkd.log:

ERROR TcpInputProc - Encountered Streaming S2S error
1. "Cannot register new_channel"
2. "Invalid payload_size"
3. "Too many bytes_used"
4. "Message rejected. Received unexpected message of size"
5. "not a valid combined field name/value type for data received"

Other S2S streaming errors apply as well.

You should upgrade your HF/IHF/IUF/IDX instances (if using persistent queue) to the following patches: 9.4.0/9.3.2/9.2.4/9.1.7 and above. This patch also fixes all the known PQ-related crashes and other PQ issues.
If you are asking about Splunk Cloud, you can download the private connectivity universal forwarder app:

https://docs.splunk.com/Documentation/SplunkCloud/9.2.2403/Security/Privateconnectivityenable
https://docs.splunk.com/File:PC4.png

If this helps, please upvote.