All Posts


Hi @phildefer, I would normally recommend extracting the timestamp correctly when the data is indexed, but if you've uploaded the csv file as a lookup file, your approach would differ. How are you searching the data? How is the Date field formatted?
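For reference, a minimal sketch of the lookup-based approach, assuming the file was uploaded as a lookup called city_temperatures.csv with City, Date (formatted as YYYY-MM-DD), and AvgTemperature columns; all of these names and the date format are placeholders to adjust to the real file:

| inputlookup city_temperatures.csv
| search City="Paris"
| eval _time=strptime(Date, "%Y-%m-%d")
| timechart span=1d avg(AvgTemperature) AS avg_temperature

If the CSV was indexed as events instead, the same strptime/eval step can be applied before timechart, although fixing the timestamp recognition for the sourcetype (for example TIMESTAMP_FIELDS and TIME_FORMAT in props.conf for indexed CSV extractions) is the cleaner long-term fix.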
This is a nice idea. I have come up with this query for 2 different time frames. It's retrieving/calculating the data for shorter timeframes (e.g. up to a 3-hour range), but for longer time frames I'm getting partial data for the fields 'p90Avg_PageRenderingTime' or 'p90Avg_PageRenderingTime1'. PFA image.

index="dynatrace" sourcetype="dynatrace:usersession" earliest=-50h@h latest=-46h@h
| spath output=user_actions path="userActions{}"
| stats count by user_actions
| spath output=pp_user_action_application input=user_actions path=application
| where pp_user_action_application="*****"
| spath output=pp_user_action_name input=user_actions path=name
| where pp_user_action_name in ("")
| eval pp_user_action_name=substr(pp_user_action_name,0,60)
| spath output=pp_user_action_response_VCT input=user_actions path=visuallyCompleteTime
| stats count(pp_user_action_response_VCT) AS "Count", avg(pp_user_action_response_VCT) AS "Avg_PageRenderingTime" by pp_user_action_name
| join type=left
    [ search index="dynatrace" sourcetype="dynatrace:usersession" earliest=-50h@h latest=-46h@h
    | spath output=user_actions path="userActions{}"
    | stats count by user_actions
    | spath output=pp_user_action_application input=user_actions path=application
    | where pp_user_action_application="*****"
    | spath output=pp_user_action_name input=user_actions path=name
    | where pp_user_action_name in ("")
    | eval pp_user_action_name=substr(pp_user_action_name,0,60)
    | spath output=pp_user_action_response_VCT input=user_actions path=visuallyCompleteTime
    | eventstats p90(pp_user_action_response_VCT) AS "p90_PageRenderingTime" by pp_user_action_name
    | where pp_user_action_response_VCT<=p90_PageRenderingTime
    | stats count(pp_user_action_response_VCT) AS "Count1", avg(pp_user_action_response_VCT) AS "p90Avg_PageRenderingTime", values(p90_PageRenderingTime) by pp_user_action_name ]
| join type=left
    [ search index="dynatrace" sourcetype="dynatrace:usersession" earliest=-74h@h latest=-70h@h
    | spath output=user_actions path="userActions{}"
    | stats count by user_actions
    | spath output=pp_user_action_application input=user_actions path=application
    | where pp_user_action_application="*****"
    | spath output=pp_user_action_name input=user_actions path=name
    | where pp_user_action_name in ("")
    | eval pp_user_action_name=substr(pp_user_action_name,0,60)
    | spath output=pp_user_action_response_VCT input=user_actions path=visuallyCompleteTime
    | stats count(pp_user_action_response_VCT) AS "Count2", avg(pp_user_action_response_VCT) AS "Avg_PageRenderingTime1" by pp_user_action_name ]
| join type=left
    [ search index="dynatrace" sourcetype="dynatrace:usersession" earliest=-74h@h latest=-70h@h
    | spath output=user_actions path="userActions{}"
    | stats count by user_actions
    | spath output=pp_user_action_application input=user_actions path=application
    | where pp_user_action_application="*****"
    | spath output=pp_user_action_name input=user_actions path=name
    | where pp_user_action_name in ("")
    | eval pp_user_action_name=substr(pp_user_action_name,0,60)
    | spath output=pp_user_action_response_VCT input=user_actions path=visuallyCompleteTime
    | eventstats p90(pp_user_action_response_VCT) AS "p90_PageRenderingTime1" by pp_user_action_name
    | where pp_user_action_response_VCT<=p90_PageRenderingTime1
    | stats count(pp_user_action_response_VCT) AS "Count3", avg(pp_user_action_response_VCT) AS "p90Avg_PageRenderingTime1", values(p90_PageRenderingTime1) by pp_user_action_name ]
| eval Avg_PageRenderingTime=round(Avg_PageRenderingTime,0)/1000
| eval p90Avg_PageRenderingTime=round(p90Avg_PageRenderingTime,0)/1000
| eval Avg_PageRenderingTime1=round(Avg_PageRenderingTime1,0)/1000
| eval p90Avg_PageRenderingTime1=round(p90Avg_PageRenderingTime1,0)/1000
| table pp_user_action_name, Count, Avg_PageRenderingTime, p90Avg_PageRenderingTime, Count2, Avg_PageRenderingTime1, p90Avg_PageRenderingTime1

Any suggestions? Thanks in advance.
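One pattern that may help here, offered as an untested sketch rather than a drop-in replacement: subsearches inside join are subject to default result and runtime limits, which is a common cause of partial results on longer ranges. Covering both windows in a single base search, labelling each event's window with eval, and computing the per-window stats once avoids the joins entirely. The index, field names, and window boundaries below are taken from the query above; everything else is an assumption about the data layout.

index="dynatrace" sourcetype="dynatrace:usersession" earliest=-74h@h latest=-46h@h
| eval window=case(_time>=relative_time(now(),"-50h@h") AND _time<relative_time(now(),"-46h@h"), "current",
                   _time>=relative_time(now(),"-74h@h") AND _time<relative_time(now(),"-70h@h"), "previous")
| where isnotnull(window)
| spath output=user_actions path="userActions{}"
| mvexpand user_actions
| spath output=pp_user_action_application input=user_actions path=application
| where pp_user_action_application="*****"
| spath output=pp_user_action_name input=user_actions path=name
| eval pp_user_action_name=substr(pp_user_action_name,0,60)
| spath output=pp_VCT input=user_actions path=visuallyCompleteTime
| eventstats p90(pp_VCT) AS p90_VCT by pp_user_action_name, window
| stats count(pp_VCT) AS Count, avg(pp_VCT) AS Avg_PageRenderingTime, avg(eval(if(pp_VCT<=p90_VCT, pp_VCT, null()))) AS p90Avg_PageRenderingTime by pp_user_action_name, window
| eval Avg_PageRenderingTime=round(Avg_PageRenderingTime,0)/1000, p90Avg_PageRenderingTime=round(p90Avg_PageRenderingTime,0)/1000

The "current"/"previous" rows can then be pivoted into side-by-side columns with chart or xyseries if the layout of the original table is needed.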
Hello, I am a beginner with Splunk. I am experimenting with a CSV dataset containing the daily average temperature for different cities across the world. As a first step, I would like to see, for a given city, the graph of the average temperature over time. However, by default, the X axis on the timechart shows the timestamp of the source file, as opposed to the time field contained in each event. As a result, all events show the same date, which is probably the date the dataset was created. How do I use the "Date" field contained in each event, instead of the timestamp of the dataset file? Thanks,
Hello, I am forwarding data from an embedded system to an Enterprise instance running on a VM. The logs look like this:

acces_monitoring (indexed on Splunk; an empty Logoff_time means the session is still online):

     Access_IP        Access_time         Logoff_time
 1   192.168.200.55   1699814895.000000
 2   192.168.200.55   1699814004.000000   1699814060.000000
 3   192.168.200.55   1699811754.000000   1699812677.000000
 4   192.168.200.55   1699808364.000000   1699809475.000000
 5   192.168.200.55   1699806635.000000   1699806681.000000
 6   192.168.200.55   1699791222.000000   1699806628.000000
 7   192.168.200.55   1699791125.000000   1699791127.000000
 8   192.168.200.55   1699724540.000000   1699724541.000000
 9   192.168.200.55   1699724390.000000   1699724474.000000

command_monitoring:

     Access_IP        exec_time           executed_command
 1   192.168.200.55   1699813121.000000   cd ~
 2   192.168.200.55   1699813116.000000   cd /opt
 3   192.168.200.55   1699813110.000000   prova3
 4   192.168.200.55   1699811813.000000   cat sshd_config
 5   192.168.200.55   1699811807.000000   cd /etc/ssh
 6   192.168.200.55   1699811801.000000   cd etc
 7   192.168.200.55   1699811793.000000   cd
 8   192.168.200.55   1699811788.000000   ls
 9   192.168.200.55   1699811783.000000   e che riconosce le sessioni diverse
10   192.168.200.55   1699811776.000000   spero funziona
11   192.168.200.55   1699809221.000000   cat command_log.log
12   192.168.200.55   1699809210.000000   ./custom_shell.sh
13   192.168.200.55   1699808594.000000   CD /MEDIA
14   192.168.200.55   1699808587.000000   cd /medi
15   192.168.200.55   1699808584.000000   omar

When I try to join the two by running:

index=main source="/media/ssd1/ip_command_log/command_log.log"
| eval exec_time=strptime(exec_time, "%a %b %d %H:%M:%S %Y")
| rename ip_execut as Access_IP
| table Access_IP, exec_time, executed_command
| join type=left Access_IP
    [ search index=main source="/media/ssd1/splunk_wtmp_output.txt"
    | dedup Access_time
    | eval Access_time=strptime(Access_time, "%a %b %d %H:%M:%S %Y")
    | eval Logoff_time=if(Logoff_time="still logged in", now(), strptime(Logoff_time, "%a %b %d %H:%M:%S %Y"))
    | table Access_IP, Access_time, Logoff_time ]
| eval session_active = if(exec_time >= Access_time AND exec_time <= coalesce(Logoff_time, now()), "true", "false")
| where session_active="true"
| table Access_IP, Access_time, Logoff_time, exec_time, executed_command

it does not join against every session but only the last one (the one started at 1699814895.000000), so it does not identify any of the commands run on the embedded system in the correct session. What could be the catch?

Thanks in advance!
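One likely catch, offered as a guess: join keeps only one matching subsearch row per key by default (max=1), so every command row is paired with a single session for 192.168.200.55. A minimal sketch of the same search with max=0, so that all sessions for the IP are candidates before the exec_time window filter (everything else unchanged from the query above):

index=main source="/media/ssd1/ip_command_log/command_log.log"
| eval exec_time=strptime(exec_time, "%a %b %d %H:%M:%S %Y")
| rename ip_execut as Access_IP
| table Access_IP, exec_time, executed_command
| join type=left max=0 Access_IP
    [ search index=main source="/media/ssd1/splunk_wtmp_output.txt"
    | dedup Access_time
    | eval Access_time=strptime(Access_time, "%a %b %d %H:%M:%S %Y")
    | eval Logoff_time=if(Logoff_time="still logged in", now(), strptime(Logoff_time, "%a %b %d %H:%M:%S %Y"))
    | table Access_IP, Access_time, Logoff_time ]
| eval session_active = if(exec_time >= Access_time AND exec_time <= coalesce(Logoff_time, now()), "true", "false")
| where session_active="true"
| table Access_IP, Access_time, Logoff_time, exec_time, executed_command

It is also worth checking that Access_time, Logoff_time, and exec_time really are in the "%a %b %d %H:%M:%S %Y" text format at search time; if they are already epoch values, as the listings above suggest, the strptime calls return null and the window comparison can never be true.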
Works really great! Thanks a lot.
Could anyone please help me on this?
For example:
1) One index and one source type, and the search strings are "hello", "how", and "where". Each search string returns a common log containing "id" and "name".
2) Once all 3 search strings (hello, how, where) have matched, one more log should appear in Splunk within the next 5 minutes.
3) That log contains the string "completed" and also has "id" and "name".
4) In case the "completed" string is not available in the Splunk logs after 5 minutes, I want to retrieve "id" and "name" from my "hello"/"how"/"where" search results. Please help me with the search query.
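A minimal sketch of one way to express this correlation, assuming all four strings land in a single index and sourcetype (my_index and my_sourcetype are placeholders), that id and name are already extracted on every event, and that the 5-minute window is expressed as 300 seconds; searchmatch() tags which string each event matched:

index=my_index sourcetype=my_sourcetype ("hello" OR "how" OR "where" OR "completed")
| eval marker=case(searchmatch("completed"), "completed", searchmatch("hello"), "hello", searchmatch("how"), "how", searchmatch("where"), "where")
| eval trigger_time=if(marker!="completed", _time, null()), completed_time=if(marker="completed", _time, null())
| stats values(marker) AS markers, max(trigger_time) AS last_trigger_time, max(completed_time) AS completed_time by id, name
| search markers="hello" markers="how" markers="where"
| where isnull(completed_time) OR completed_time > last_trigger_time + 300

The rows that survive the final where are the id/name pairs for which the "completed" log never arrived, or arrived more than 5 minutes after the last of the three trigger logs.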
Can you provide sample events?
Yes, all events, but some are in JSON format.
This query does show the earliest posted_timestamp. However, all the other fields in the table command (event_id, process_id, msg_timestamp, lag_in_seconds) are blank. Thanks!
Hi @Jasmine, "app" and "servie" are not default Splunk fields, although they may be extracted in your search context. If you're using the search interface to explore data, make sure you're running searches in "Smart Mode" (preferably) or "Verbose Mode." The mode selector is just below the magnifying glass search button.
Hi @Pravinsugi, Can you provide sample events with sensitive information redacted? Do you have two event types? "published sourcing plan" "published transfer order" Or do you have four event types? "published sourcing plan" "published transfer order" "published sourcing order" "transfer order published" Is the salesorderid field extracted from all event types or only from "published sourcing order?"
Hi @patpro, let's cheat and convert your split symbols_scores_params field into JSON using rex in sed mode and then use spath to parse the JSON. Sed allows backreferences, so we can use the first capture group, the score name, as a prefix to the _score and _options strings. I've also modified your makemv regular expression to accommodate options containing commas; a right brace followed by an optional comma appears to be an appropriate boundary.

| makemv tokenizer="([^}]+}),?" symbols_scores_params
| rex mode=sed field=symbols_scores_params "s/([^(]+)\\(([^)]+)\\){([^}]*)}/{\"\\1_score\":\"\\2\",\"\\1_options\":\"\\3\"}/"
| eval symbols_scores_params="[".mvjoin(symbols_scores_params, ",")."]"
| spath input=symbols_scores_params
| rename "{}.*" as *

The eval-spath-rename sequence works around spath only operating on the first value of a multivalued field.
I have 2 strings which need to be searched in Splunk, each with a different index and a different source type. One string is "published sourcing plan" and the other is "published transfer order". I need to get the "published transfer order" log from Splunk; if it's not available within 5 minutes of the "published sourcing plan" log appearing in Splunk, I need to count it or retrieve some details like salesorderid from the "published sourcing order" log. How do I prepare the search query in Splunk? In case none of the logs for "transfer order published" are available in Splunk, I need to capture that as well.
Hello, I would like to properly parse rspamd logs that look like this (2-line sample):

2023-11-12 16:06:22 #28191(rspamd_proxy) <8eca26>; proxy; rspamd_task_write_log: action="no action", digest="107a69c58d90a38bb0214546cbe78b52", dns_req="79", filename="undef", forced_action="undef", ip="A.B.C.D", is_spam="F", len="98123", subject="foobar", head_from=""dude" <info@example.com>", head_to=""other" <other@example.net>", head_date="Sun, 12 Nov 2023 07:08:57 +0000", head_ua="nil", mid="<B3.B6.48980.82A70556@gg.mta2vrest.cc.prd.sparkpost>", qid="0E4231619A", scores="-0.71/15.00", settings_id="undef", symbols_scores_params="BAYES_HAM(-3.00){100.00%;},RBL_MAILSPIKE_VERYBAD(1.50){A.B.C.D:from;},RWL_AMI_LASTHOP(-1.00){A.B.C.D:from;},URI_COUNT_ODD(1.00){105;},FORGED_SENDER(0.30){info@example.com;bounce-1699772934217.160136898825325270136859@example.com;},MANY_INVISIBLE_PARTS(0.30){4;},ZERO_FONT(0.20){2;},BAD_REP_POLICIES(0.10){},MIME_GOOD(-0.10){multipart/alternative;text/plain;},HAS_LIST_UNSUB(-0.01){},ARC_NA(0.00){},ASN(0.00){asn:23528, ipnet:A.B.C.0/20, country:US;},DKIM_TRACE(0.00){example.com:+;},DMARC_POLICY_ALLOW(0.00){example.com;none;},FROM_HAS_DN(0.00){},FROM_NEQ_ENVFROM(0.00){info@example.com;bounce-1699772934217.160136898825325270136859@example.com;},HAS_REPLYTO(0.00){support@example.com;},MIME_TRACE(0.00){0:+;1:+;2:~;},RCPT_COUNT_ONE(0.00){1;},RCVD_COUNT_ZERO(0.00){0;},REDIRECTOR_URL(0.00){twitter.com;},REPLYTO_DOM_NEQ_FROM_DOM(0.00){},R_DKIM_ALLOW(0.00){example.com:s=scph0618;},R_SPF_ALLOW(0.00){+exists:A.B.C.D._spf.sparkpostmail.com;},TO_DN_ALL(0.00){},TO_MATCH_ENVRCPT_ALL(0.00){}", time_real="1605.197ms", user="undef"
2023-11-12 16:02:04 #28191(rspamd_proxy) <4a3599>; proxy; rspamd_task_write_log: action="no action", digest="5151f8aa4eaebc5877c7308fed4ea21e", dns_req="19", filename="undef", forced_action="undef", ip="E.F.G.H", is_spam="F", len="109529", subject="Re: barfoo", head_from="other me <other@example.net>", head_to="someone <someone@exmaple.fr>", head_date="Sun, 12 Nov 2023 16:02:03 +0100", head_ua="Apple Mail (2.3731.700.6)", mid="<3425840B-B955-4647-AB4D-163FC54BE820@example.net>", qid="163A215DB3", scores="-4.09/15.00", settings_id="undef", symbols_scores_params="BAYES_HAM(-2.99){99.99%;},ARC_ALLOW(-1.00){example.net:s=openarc-20230616:i=1;},MIME_GOOD(-0.10){multipart/mixed;text/plain;},APPLE_MAILER_COMMON(0.00){},ASN(0.00){asn:12322, ipnet:E.F.0.0/11, country:FR;},FREEMAIL_CC(0.00){example.com;},FREEMAIL_ENVRCPT(0.00){example.fr;example.com;},FREEMAIL_TO(0.00){example.fr;},FROM_EQ_ENVFROM(0.00){},FROM_HAS_DN(0.00){},MID_RHS_MATCH_FROM(0.00){},MIME_TRACE(0.00){0:+;1:+;2:~;},RCPT_COUNT_TWO(0.00){2;},RCVD_COUNT_ZERO(0.00){0;},TO_DN_ALL(0.00){},TO_MATCH_ENVRCPT_ALL(0.00){}", time_real="428.021ms", user="me"

The field I need to split is symbols_scores_params. I've used this:

sourcetype=rspamd user=*
| makemv tokenizer="([^,]+),?" symbols_scores_params
| mvexpand symbols_scores_params
| rex field=symbols_scores_params "(?<name>[A-Z0-9_]+)\((?<score>-?[.0-9]+)\){(?<options>[^{}]+)}"
| eval {name}_score=score, {name}_options=options

It works great, proper fields are created (e.g. BAYES_HAM_score, BAYES_HAM_options, etc.), but a single event is turned into a pack of 17 to 35 events. Is there a way to dedup those events and keep all the new fields extracted from symbols_scores_params?
Hi @Jasmine, we may need more details from you. Please share your current Splunk search query (remove any hostnames, IP addresses, etc. before posting it here).
How do I get default fields like host, app, servie after using eval? After using eval, I am not able to fetch any default fields. Please advise.
Try something like this:

source=accountCalc type=acct.change msg="consumed" event_id="*" process_id="*" posted_timestamp="*" msg_timestamp="*"
| eval e1_t=strptime(posted_timestamp, "%FT%T")
| eval e2_t=strptime(msg_timestamp, "%FT%T")
| eval lag_in_seconds=e1_t-e2_t
| eval r2_posted_timestamp=posted_timestamp
| table event_id process_id msg_timestamp r2_posted_timestamp lag_in_seconds e1_t
| sort 0 e1_t
| dedup event_id process_id
index=sample_index path=*/sample_path* (responseCode=200 OR responseCode=403) | timechart span=1m count by responseCode | where '403' > 0
Thanks. If I enable maintenance mode, won't that stop data coming in, or will the indexers that are up still be receiving data?