For example:
1) One index and one source type, and the search strings are "hello", "how", and "where". Each search string returns a common log containing "id" and "name".
2) Once all 3 search strings (hello, how, where) have matched, one log should appear in Splunk within the next 5 minutes.
3) That log contains the string "completed" and also has "id" and "name".
4) In case the "completed" string is not present in the Splunk logs after 5 minutes, I want to retrieve "id" and "name" from the results of my "hello", "how", "where" string searches. Please help me with the search query.
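A minimal sketch of one possible approach, assuming a single index/source type and that id and name are already extracted fields (the index and sourcetype names here are placeholders, and the 300-second window matches the 5 minutes described above):

```
index=main sourcetype=my_sourcetype ("hello" OR "how" OR "where" OR "completed")
| stats earliest(_time) AS first_seen
        max(eval(if(searchmatch("completed"), _time, null()))) AS completed_time
        BY id name
| where isnull(completed_time) OR completed_time - first_seen > 300
| table id name
```

This returns the id/name pairs for which no "completed" event arrived within 300 seconds of the first matching event; the searchmatch() condition and the window are the parts to adapt to your actual data.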
Can you provide sample events?
Yes, all events, but some are in JSON format.
This query does show the earliest posted_timestamp. However, all the other fields in the table command (event_id, process_id, msg_timestamp, lag_in_seconds) are blank. Thanks!
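For context, stats outputs only its BY fields and aggregates, which is why the other columns come out blank. A hedged sketch of one fix, using eventstats so the aggregates are attached to every event instead of collapsing the rows (field names taken from this thread):

```
source=accountCalc type=acct.change msg="consumed" event_id="*" process_id="*" posted_timestamp="*" msg_timestamp="*"
| eval e1_t=strptime(posted_timestamp, "%FT%T"), e2_t=strptime(msg_timestamp, "%FT%T")
| eval lag_in_seconds=e1_t-e2_t, r2_posted_timestamp=posted_timestamp
| eventstats earliest(r2_posted_timestamp) AS Earliest_r2_posted_timestamp latest(r2_posted_timestamp) AS Latest_r2_posted_timestamp
| table event_id process_id msg_timestamp r2_posted_timestamp lag_in_seconds Earliest_r2_posted_timestamp Latest_r2_posted_timestamp
```

Unlike stats, eventstats keeps every original event, so event_id, process_id, and msg_timestamp remain available to the table command.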
Hi @Jasmine, "app" and "service" are not default Splunk fields, although they may be extracted in your search context. If you're using the search interface to explore data, make sure you're running searches in "Smart Mode" (preferably) or "Verbose Mode." The mode selector is just below the magnifying-glass search button.
Hi @Pravinsugi, Can you provide sample events with sensitive information redacted? Do you have two event types?
"published sourcing plan"
"published transfer order"
Or do you have four event types?
"published sourcing plan"
"published transfer order"
"published sourcing order"
"transfer order published"
Is the salesorderid field extracted from all event types, or only from "published sourcing order"?
Hi @patpro; Let's cheat and convert your split symbols_scores_params field into JSON using rex in sed mode, and then use spath to parse the JSON. Sed allows backreferences, so we can use the first capture group, the score name, as a prefix to the _score and _options strings. I've also modified your makemv regular expression to accommodate options containing commas; a right brace followed by an optional comma appears to be an appropriate boundary.

| makemv tokenizer="([^}]+}),?" symbols_scores_params
| rex mode=sed field=symbols_scores_params "s/([^(]+)\\(([^)]+)\\){([^}]*)}/{\"\\1_score\":\"\\2\",\"\\1_options\":\"\\3\"}/"
| eval symbols_scores_params="[".mvjoin(symbols_scores_params, ",")."]"
| spath input=symbols_scores_params
| rename "{}.*" as *

The eval-spath-rename sequence works around spath only operating on the first value of a multivalued field.
I have 2 strings which need to be searched in Splunk. The strings are in different indexes with different source types: one string is "published sourcing plan" and the other is "published transfer order". I need to get the "published transfer order" log from Splunk. If it is not available within 5 minutes of the "published sourcing plan" log arriving in Splunk, I need to count it, or retrieve some details like salesorderid from the "published sourcing order" log. How do I prepare the search query in Splunk? In case no log is available in Splunk for "transfer order published", I need to capture that as well.
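A minimal sketch of one possible shape for that search, assuming salesorderid is extracted from both event types (the index and sourcetype names are placeholders):

```
(index=idx_plan sourcetype=st_plan "published sourcing plan") OR (index=idx_order sourcetype=st_order "published transfer order")
| eval evt=if(searchmatch("published transfer order"), "order", "plan")
| stats earliest(eval(if(evt="plan", _time, null()))) AS plan_time
        earliest(eval(if(evt="order", _time, null()))) AS order_time
        BY salesorderid
| where isnull(order_time) OR order_time - plan_time > 300
```

This lists the salesorderid values whose "published transfer order" event never arrived, or arrived more than 300 seconds after the "published sourcing plan" event; whether salesorderid is really present in both event types is exactly what needs to be confirmed first.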
Hello, I would like to properly parse rspamd logs that look like this (2 lines sample):   2023-11-12 16:06:22 #28191(rspamd_proxy) <8eca26>; proxy; rspamd_task_write_log: action="no action", digest="107a69c58d90a38bb0214546cbe78b52", dns_req="79", filename="undef", forced_action="undef", ip="A.B.C.D", is_spam="F", len="98123", subject="foobar", head_from=""dude" <info@example.com>", head_to=""other" <other@example.net>", head_date="Sun, 12 Nov 2023 07:08:57 +0000", head_ua="nil", mid="<B3.B6.48980.82A70556@gg.mta2vrest.cc.prd.sparkpost>", qid="0E4231619A", scores="-0.71/15.00", settings_id="undef", symbols_scores_params="BAYES_HAM(-3.00){100.00%;},RBL_MAILSPIKE_VERYBAD(1.50){A.B.C.D:from;},RWL_AMI_LASTHOP(-1.00){A.B.C.D:from;},URI_COUNT_ODD(1.00){105;},FORGED_SENDER(0.30){info@example.com;bounce-1699772934217.160136898825325270136859@example.com;},MANY_INVISIBLE_PARTS(0.30){4;},ZERO_FONT(0.20){2;},BAD_REP_POLICIES(0.10){},MIME_GOOD(-0.10){multipart/alternative;text/plain;},HAS_LIST_UNSUB(-0.01){},ARC_NA(0.00){},ASN(0.00){asn:23528, ipnet:A.B.C.0/20, country:US;},DKIM_TRACE(0.00){example.com:+;},DMARC_POLICY_ALLOW(0.00){example.com;none;},FROM_HAS_DN(0.00){},FROM_NEQ_ENVFROM(0.00){info@example.com;bounce-1699772934217.160136898825325270136859@example.com;},HAS_REPLYTO(0.00){support@example.com;},MIME_TRACE(0.00){0:+;1:+;2:~;},RCPT_COUNT_ONE(0.00){1;},RCVD_COUNT_ZERO(0.00){0;},REDIRECTOR_URL(0.00){twitter.com;},REPLYTO_DOM_NEQ_FROM_DOM(0.00){},R_DKIM_ALLOW(0.00){example.com:s=scph0618;},R_SPF_ALLOW(0.00){+exists:A.B.C.D._spf.sparkpostmail.com;},TO_DN_ALL(0.00){},TO_MATCH_ENVRCPT_ALL(0.00){}", time_real="1605.197ms", user="undef" 2023-11-12 16:02:04 #28191(rspamd_proxy) <4a3599>; proxy; rspamd_task_write_log: action="no action", digest="5151f8aa4eaebc5877c7308fed4ea21e", dns_req="19", filename="undef", forced_action="undef", ip="E.F.G.H", is_spam="F", len="109529", subject="Re: barfoo", head_from="other me <other@example.net>", head_to="someone <someone@exmaple.fr>", 
head_date="Sun, 12 Nov 2023 16:02:03 +0100", head_ua="Apple Mail (2.3731.700.6)", mid="<3425840B-B955-4647-AB4D-163FC54BE820@example.net>", qid="163A215DB3", scores="-4.09/15.00", settings_id="undef", symbols_scores_params="BAYES_HAM(-2.99){99.99%;},ARC_ALLOW(-1.00){example.net:s=openarc-20230616:i=1;},MIME_GOOD(-0.10){multipart/mixed;text/plain;},APPLE_MAILER_COMMON(0.00){},ASN(0.00){asn:12322, ipnet:E.F.0.0/11, country:FR;},FREEMAIL_CC(0.00){example.com;},FREEMAIL_ENVRCPT(0.00){example.fr;example.com;},FREEMAIL_TO(0.00){example.fr;},FROM_EQ_ENVFROM(0.00){},FROM_HAS_DN(0.00){},MID_RHS_MATCH_FROM(0.00){},MIME_TRACE(0.00){0:+;1:+;2:~;},RCPT_COUNT_TWO(0.00){2;},RCVD_COUNT_ZERO(0.00){0;},TO_DN_ALL(0.00){},TO_MATCH_ENVRCPT_ALL(0.00){}", time_real="428.021ms", user="me"

The field I need to split is symbols_scores_params. I've used this:

sourcetype=rspamd user=*
| makemv tokenizer="([^,]+),?" symbols_scores_params
| mvexpand symbols_scores_params
| rex field=symbols_scores_params "(?<name>[A-Z0-9_]+)\((?<score>-?[.0-9]+)\){(?<options>[^{}]+)}"
| eval {name}_score=score, {name}_options=options

It works great, and the proper fields are created (e.g. BAYES_HAM_score, BAYES_HAM_options, etc.), but a single event is turned into a pack of 17 to 35 events. Is there a way to dedup those events and keep every new field extracted from symbols_scores_params?
Hi @Jasmine .. We may need more details from you. Please share your current Splunk search query (remove any hostnames, IP addresses, etc. before posting it here).
How to get default fields like host, app, service after using eval? After using eval, I am not able to fetch any default fields. Please advise.
Try something like this:

source=accountCalc type=acct.change msg="consumed" event_id="*" process_id="*" posted_timestamp="*" msg_timestamp="*"
| eval e1_t=strptime(posted_timestamp, "%FT%T")
| eval e2_t=strptime(msg_timestamp, "%FT%T")
| eval lag_in_seconds=e1_t-e2_t
| eval r2_posted_timestamp=posted_timestamp
| table event_id process_id msg_timestamp r2_posted_timestamp lag_in_seconds e1_t
| sort 0 e1_t
| dedup event_id process_id
index=sample_index path=*/sample_path* responseCode=200 OR responseCode=403
| timechart span=1m count by responseCode
| where '403' > 0
Thanks, if I enable maintenance mode, won't that stop data coming in, or will the indexers that are up still receive data?
Hi @RemyaT, let me understand: do you want to count only events with response_code=403, or the count of all response_codes when there's at least one 403?
If the first, you can try:

index=sample_index path=*/sample_path* response_code=403
| timechart span=1m count

If the second:

index=sample_index path=*/sample_path*
| bucket _time span=1m
| stats count(eval(response_code="200")) AS 200_count count(eval(response_code="403")) AS 403_count BY _time
| where '403_count' > 0

Ciao. Giuseppe
I have the query below to find the response code and count vs. time (in 1-minute intervals):

index=sample_index path=*/sample_path*
| bucket _time span=1m
| stats count by _time responseCode

The result shows the response code and count vs. time for each minute. But I only need the 1-minute intervals that have a 403 response code along with other response codes, and I want to skip the intervals that don't have a 403. Suppose during time1 there are only events with response code 200; I don't need that in my result. But during time2, if there are events with response codes 200 and 403, I need that in the result as time, response code, count.
Hi @djoobbani .. please check this SPL.. thanks.

source=accountCalc type=acct.change msg="consumed" event_id="*" process_id="*" posted_timestamp="*" msg_timestamp="*"
| eval e1_t=strptime(posted_timestamp, "%FT%T")
| eval e2_t=strptime(msg_timestamp, "%FT%T")
| eval lag_in_seconds=e1_t-e2_t
| eval r2_posted_timestamp=posted_timestamp
| stats earliest(r2_posted_timestamp) AS Earliest_r2_posted_timestamp, latest(r2_posted_timestamp) AS Latest_r2_posted_timestamp
| table event_id process_id msg_timestamp r2_posted_timestamp lag_in_seconds Earliest_r2_posted_timestamp Latest_r2_posted_timestamp
Nice SPL @ITWhisperer .. Hi @Kirthika .. please check this SPL (the stats logic may need to be fine-tuned):

source="testlogrex.txt" host="laptop" sourcetype="nov12"
| rex field=_raw "\|(?<msg>.+)$"
| stats sum(eval(case(msg=="**Starting**",1,msg=="Shutting down",-1))) as bad count(eval(case(msg=="**Starting**",1))) as starts
| eval good=starts-bad

This SPL gives this result:

bad  starts  good
5    7       2

The sample logs and rex used here:

source="testlogrex.txt" host="laptop" sourcetype="nov12"
| rex field=_raw "\|(?<msg>.+)$"
| table _raw msg

_raw                                      msg
2022-08-19 08:10:04.6218|Shutting down    Shutting down
2022-08-19 08:10:03.6061|dd03             dd03
2022-08-19 08:10:02.5905|fff              fff
2022-08-19 08:10:01.0593|**Starting**     **Starting**
2022-08-19 08:10:08.6843|**Starting**     **Starting**
2022-08-19 08:10:07.6686|ddd07            ddd07
2022-08-19 08:10:06.6374|fffff06          fffff06
2022-08-19 08:10:05.6218|**Starting**     **Starting**
2022-08-19 08:10:12.5905|fff12            fff12
2022-08-19 08:10:11.0593|**Starting**     **Starting**
2022-08-19 08:10:10.1530|vv10             vv10
2022-08-19 08:10:09.1530|aa09             aa09
2022-08-19 08:10:16.6374|fffff16          fffff16
2022-08-19 08:10:15.6218|**Starting**     **Starting**
2022-08-19 08:10:14.6218|Shutting down    Shutting down
2022-08-19 08:10:13.6061|**Starting**     **Starting**
2022-08-19 08:10:19.15|aa19               aa19
2022-08-19 08:10:18.6843|**Starting**     **Starting**
2022-08-19 08:10:17.6686|ddd17            ddd17
2022-08-19 08:10:20.160|vv20              vv20
Hi @joe06031990  What I usually follow is: I enable maintenance mode on the cluster manager, and once the activity on the indexers is done and they are up and running, I disable maintenance mode. Then all bucket fixup activities will complete. Some use maintenance mode and splunk offline together.
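For reference, a sketch of that sequence with the Splunk CLI, run on the cluster manager (authentication flags and install paths depend on your environment):

```
# on the cluster manager, before taking indexer peers down
splunk enable maintenance-mode

# ... perform the indexer maintenance, bring the peers back up ...

# once all peers are up again, let fixup activity resume
splunk disable maintenance-mode

# check the current state at any point
splunk show maintenance-mode
```

While maintenance mode is enabled, the cluster manager defers bucket fixup and replication activity; it does not by itself stop peers that are up from receiving data.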
@SanjayReddy Thanks for your response. I just mentioned the log format; the log file is actually recent, and a new file will be generated every day as filename.<date>. I updated my post as well.