All Posts

I have the following data:

02:00:00 Item=A Result=success
02:00:05 Item=B Result=success
02:05:00 Item=A Result=fail
02:05:05 Item=B Result=success
02:10:00 Item=A Result=fail
02:10:05 Item=B Result=success
02:15:00 Item=A Result=success
02:15:05 Item=B Result=fail
02:20:00 Item=A Result=success
02:20:05 Item=B Result=fail
02:25:00 Item=A Result=success
02:25:05 Item=B Result=success
02:30:00 Item=A Result=success
02:30:05 Item=B Result=success
02:35:00 Item=A Result=success
02:35:05 Item=B Result=success
02:40:00 Item=A Result=success
02:40:05 Item=B Result=fail
02:45:00 Item=A Result=success
02:45:05 Item=B Result=success
02:50:00 Item=A Result=success
02:50:05 Item=B Result=success
02:55:00 Item=A Result=success
02:55:05 Item=B Result=success

My desired results:

Item  StartTime  EndTime   Duration
A     02:05:00   02:15:00  00:10:00
B     02:15:05   02:25:05  00:10:00
B     02:40:05   02:45:05  00:05:00

I tried transaction and streamstats but got wrong results. Can anybody here help me solve this problem? Thank you.
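One possible streamstats-based approach, sketched against the sample data above (the index/sourcetype are hypothetical, and it assumes Item and Result are already extracted fields): number each consecutive run of identical Result values per Item, take the earliest time of each run, then use the start of the following run as the end of a fail run.

```spl
index=main sourcetype=my_items
| sort 0 Item, _time
| streamstats current=f last(Result) as prev by Item
| eval new_run=if(isnull(prev) OR Result!=prev, 1, 0)
| streamstats sum(new_run) as run_id by Item
| stats earliest(_time) as StartTime by Item, run_id, Result
| sort 0 Item, -run_id
| streamstats current=f last(StartTime) as EndTime by Item
| where Result="fail"
| eval Duration=tostring(EndTime-StartTime, "duration")
| fieldformat StartTime=strftime(StartTime, "%H:%M:%S")
| fieldformat EndTime=strftime(EndTime, "%H:%M:%S")
| sort 0 Item, StartTime
| table Item StartTime EndTime Duration
```

Note that a fail run still in progress at the end of the search window will have a null EndTime, which you would need to handle (e.g. with coalesce(EndTime, now())).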
Hi Cansel, it's a SaaS Controller, so I cannot run those commands on the Controller. AppD Support also informed me that this is not a controller-side issue, but rather a general network issue.
Great @tscroggins! With that it worked perfectly for me. I didn't know the functions for processing JSON; with this I will be able to learn them. Thank you so much!
Please help me with the below.

Requirement: once 3 events are met, the next event should be published immediately; if that event is not published within 5 minutes, I need an alert.

Example: we have one customer number. For that customer number, I have to search whether the 3 event logs are available in Splunk, e.g.:

index=1 sourcetype="abc" "s1 event received" AND "s2 event received" AND "s3 event received"

When I run the above search, I get results like:

S1 received for 12345 customer, name=abz
S2 received for 12345 customer, name=abz
S3 received for 12345 customer, name=abz

If all 3 events are met for one customer, next I want to search whether a "created" message is available in Splunk for the same customer (12345). The "created" message is in a different index and sourcetype.

If the "created" message is not available for customer 12345 within 5 minutes of all 3 events being met, I need an alert with the customer number. If the "created" message arrives after 5 minutes, I also need to capture the customer number.

FYI: if we receive a "created" message in the log, the sample log (JSON format) looks like:

created: {"customer no": "12345", name: "kanunam"}

Please help me with the search query.
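Without real sample events, any answer is a guess, but the requirement described above could be sketched roughly like this (the index names, sourcetypes, rex patterns, and the -10m window are all assumptions to be replaced with your actual values): search both sourcetypes together, extract the customer number from each format, classify each event, then aggregate per customer.

```spl
((index=idx1 sourcetype="abc" ("s1 event received" OR "s2 event received" OR "s3 event received"))
 OR (index=idx2 sourcetype="created_st" "created")) earliest=-10m@m
| rex "received for (?<customer>\d+) customer"
| rex "\"customer no\"\s*:\s*\"(?<customer>\d+)\""
| eval step=case(searchmatch("s1 event received"), "s1",
                 searchmatch("s2 event received"), "s2",
                 searchmatch("s3 event received"), "s3",
                 searchmatch("created"), "created")
| stats dc(eval(if(step!="created", step, null()))) as events_met
        max(eval(if(step!="created", _time, null()))) as last_event_time
        sum(eval(if(step="created", 1, 0))) as created
        by customer
| where events_met=3 AND (created=0 OR now()-last_event_time>300)
```

Scheduled as an alert, this would return customer numbers where all three events were seen but no "created" message arrived, or where it arrived more than 5 minutes after the last of the three events.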
Hi! This is a very basic question; it's my first time working with the Splunk Enterprise platform. How do you actually go about switching on the feature to log network traffic coming into an internal network with a specific IP range? I essentially want Splunk Enterprise to act as a logger for all traffic that enters the internal network on a certain port, for example. How do I go about it? FYI - I do not want to use the Forwarder or the upload-log-files function.
Empty and null are different things. If the field is "" then it is not null, so I use | where len(company)>0
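The distinction is easy to demonstrate with a quick makeresults test:

```spl
| makeresults
| eval company=""
| eval is_null=if(isnull(company), "yes", "no"), length=len(company)
```

Here company is an empty string, so is_null comes out as "no" and length as 0 - which is why len(company)>0 filters out empty values that isnotnull() would keep.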
Thank you for your help! It works!!
Without actual sample events, this may not match your use case exactly, but it's a starting point. I've used the following events to test:

2023-11-12 00:00:00 id=1 name=a hello how where
2023-11-12 00:01:00 id=2 name=b hello how where
2023-11-12 00:03:00 id=1 name=a completed
2023-11-12 00:10:00 id=3 name=c hello how where
2023-11-12 00:10:00 id=4 name=d hello how where
2023-11-12 00:14:00 id=3 name=c completed
2023-11-12 00:16:00 id=4 name=d completed

Save the following as an alert, and schedule it to run every minute:

((index=index1 sourcetype=sourcetype1) OR (index=index2 sourcetype=sourcetype2)) earliest=-6m@m latest=@m
| addinfo
| where _time<relative_time(info_max_time, "-5m@m") OR match(_raw, "completed")
| transaction keepevicted=t id name startswith="hello AND how AND where" endswith="completed"
| where (closed_txn==0 AND NOT match(_raw, "completed")) OR duration>300
| table _time id name

You can test the search using hard-coded, 6-minute timespans instead of earliest=-6m@m latest=@m. There will be a >1 minute delay before alerts are triggered, but the 6-minute time range allows us to cover the start time +/- 30 seconds of a sliding 5-minute window.

Using the sample data, alerts would be triggered at the following times:

Execution time: ~2023-11-12 00:07:00 - no completed event =>

_time                id  name
2023-11-12 00:01:00  2   b

Execution time: ~2023-11-12 00:16:00 - completed event late (>5 minutes) =>

_time                id  name
2023-11-12 00:10:00  4   d
I found a solution for this:

index=testindex (vuln=* AND score=* AND company=*) OR (vuln=*) OR NOT (company="")

(vuln=* AND score=* AND company=*) ==> condition: vuln, score, and company all exist
(vuln=*) ==> condition: only vuln exists
NOT (company="") ==> condition: only company exists

company=* "is equivalent to" NOT (company="") "is equivalent to" isnull(company)

Any idea why company=* or isnull(company) does not work? Thank you
I had to rethink the way I was showing my job info in the UI, as an orphaned transaction could mean two different things - i.e. a 'running' start with no end message, or multiple starts - along with modifying my initial SPL to look for an aborted message when a failure occurred. It was quite a learning experience as I went through all the different permutations of keeporphans=true/false and keepevicted=true/false and then looked at the resulting transaction data.

I did notice that closed_txn defaults to null and not 0, so I had to add a step to make it default to 0:

| eval closed_txn=if(isnull(closed_txn), 0, closed_txn)
| search closed_txn=0

to find those transactions that only have a START log event.

Thanks for all your input, as it gave me a different perspective from the way I was looking at it. I was also able to rewrite some of my existing SPL to have fewer lines.
Your search and your data don't match: you are parsing time in your SPL, but your data shows it is already in epoch time. You also need to use max=0 in the join statement. Note that using join is not good practice, as it has a number of limitations and is slow. stats is generally the way to go, but in this case you're effectively using the subsearch as a lookup table for a range search, so just be aware of join's limitations.
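To illustrate the max=0 point with a hypothetical sketch (source paths and field names assumed): by default join keeps at most one subsearch row per key, so with many sessions per Access_IP only one survives; max=0 keeps them all.

```spl
index=main source="/media/ssd1/ip_command_log/command_log.log"
| table Access_IP, exec_time, executed_command
| join type=left max=0 Access_IP
    [ search index=main source="/media/ssd1/splunk_wtmp_output.txt"
      | table Access_IP, Access_time, Logoff_time ]
```

After this, each command row is paired with every session row for that Access_IP, and a range condition on exec_time can pick the matching session.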
As @bowesmana noted, this is the way. The timestamp is time zone-aware, though, so be mindful of the offset. If you prefer, you can include a time zone in your conversion, e.g. as a shortcut for UTC:

| eval _time=strptime(Date."Z", "%Y-%m-%d%Z")
Date has YYYY-MM-DD format. I managed to change the '_time' field by using the command:

eval _time=strptime(Date, "%Y-%m-%d")

Now the Time column in the events list shows the date as dd/mm/yyyy, with the actual time of 00:00:00.000.
As @tscroggins says, it's always important to get your ingest dates correctly extracted from the data in the first place. However, to extract a time from a field in the data you use the strptime() function, e.g.

| eval _time=strptime(date_field, "format_string")

which will overwrite the existing _time field with the time converted from your data field called date_field according to the format string you specify. Time format variables are documented here: https://docs.splunk.com/Documentation/Splunk/9.1.1/SearchReference/Commontimeformatvariables

E.g. this example, which you can paste into your search bar, will convert the string in my_date_field to _time:

| makeresults
| eval my_date_field="2023-11-13 08:01:02.123"
| eval _time=strptime(my_date_field, "%F %T.%Q")

Note that times are converted to epoch times, but the _time field is special in that it will show you the formatted date rather than the epoch.
Hi @phildefer, I would normally recommend extracting the timestamp correctly when the data is indexed, but if you've uploaded the csv file as a lookup file, your approach would differ. How are you searching the data? How is the Date field formatted?
This is a nice idea. I have come up with this query for 2 different time frames. It's retrieving/calculating the data for shorter timeframes (e.g. up to a 3-hour range), but for a longer time frame I'm getting partial data for the fields 'p90Avg_PageRenderingTime' and 'p90Avg_PageRenderingTime1'. PFA image.

index="dynatrace" sourcetype="dynatrace:usersession" earliest=-50h@h latest=-46h@h
| spath output=user_actions path="userActions{}"
| stats count by user_actions
| spath output=pp_user_action_application input=user_actions path=application
| where pp_user_action_application="*****"
| spath output=pp_user_action_name input=user_actions path=name
| where pp_user_action_name in ("")
| eval pp_user_action_name=substr(pp_user_action_name,0,60)
| spath output=pp_user_action_response_VCT input=user_actions path=visuallyCompleteTime
| stats count(pp_user_action_response_VCT) as "Count", avg(pp_user_action_response_VCT) as "Avg_PageRenderingTime" by pp_user_action_name
| join type=left
    [ search index="dynatrace" sourcetype="dynatrace:usersession" earliest=-50h@h latest=-46h@h
      | spath output=user_actions path="userActions{}"
      | stats count by user_actions
      | spath output=pp_user_action_application input=user_actions path=application
      | where pp_user_action_application="*****"
      | spath output=pp_user_action_name input=user_actions path=name
      | where pp_user_action_name in ("")
      | eval pp_user_action_name=substr(pp_user_action_name,0,60)
      | spath output=pp_user_action_response_VCT input=user_actions path=visuallyCompleteTime
      | eventstats p90(pp_user_action_response_VCT) as "p90_PageRenderingTime" by pp_user_action_name
      | where pp_user_action_response_VCT<=p90_PageRenderingTime
      | stats count(pp_user_action_response_VCT) as "Count1", avg(pp_user_action_response_VCT) as "p90Avg_PageRenderingTime", values(p90_PageRenderingTime) by pp_user_action_name ]
| join type=left
    [ search index="dynatrace" sourcetype="dynatrace:usersession" earliest=-74h@h latest=-70h@h
      | spath output=user_actions path="userActions{}"
      | stats count by user_actions
      | spath output=pp_user_action_application input=user_actions path=application
      | where pp_user_action_application="*****"
      | spath output=pp_user_action_name input=user_actions path=name
      | where pp_user_action_name in ("")
      | eval pp_user_action_name=substr(pp_user_action_name,0,60)
      | spath output=pp_user_action_response_VCT input=user_actions path=visuallyCompleteTime
      | stats count(pp_user_action_response_VCT) as "Count2", avg(pp_user_action_response_VCT) as "Avg_PageRenderingTime1" by pp_user_action_name ]
| join type=left
    [ search index="dynatrace" sourcetype="dynatrace:usersession" earliest=-74h@h latest=-70h@h
      | spath output=user_actions path="userActions{}"
      | stats count by user_actions
      | spath output=pp_user_action_application input=user_actions path=application
      | where pp_user_action_application="*****"
      | spath output=pp_user_action_name input=user_actions path=name
      | where pp_user_action_name in ("")
      | eval pp_user_action_name=substr(pp_user_action_name,0,60)
      | spath output=pp_user_action_response_VCT input=user_actions path=visuallyCompleteTime
      | eventstats p90(pp_user_action_response_VCT) as "p90_PageRenderingTime1" by pp_user_action_name
      | where pp_user_action_response_VCT<=p90_PageRenderingTime1
      | stats count(pp_user_action_response_VCT) as "Count3", avg(pp_user_action_response_VCT) as "p90Avg_PageRenderingTime1", values(p90_PageRenderingTime1) by pp_user_action_name ]
| eval Avg_PageRenderingTime=round(Avg_PageRenderingTime,0)/1000
| eval p90Avg_PageRenderingTime=round(p90Avg_PageRenderingTime,0)/1000
| eval Avg_PageRenderingTime1=round(Avg_PageRenderingTime1,0)/1000
| eval p90Avg_PageRenderingTime1=round(p90Avg_PageRenderingTime1,0)/1000
| table pp_user_action_name, Count, Avg_PageRenderingTime, p90Avg_PageRenderingTime, Count2, Avg_PageRenderingTime1, p90Avg_PageRenderingTime1

Any suggestions? Thanks in advance.
Hello, I am a beginner with Splunk. I am experimenting with a csv dataset containing the daily average temperature for different cities across the world. As a first step, I would like to see, for a given city, a graph of the average temperature over time. However, by default the X axis on the timechart shows the timestamp of the source file, as opposed to the time field contained in each event. As a result, all events show the same date, which is probably the date the dataset was created. How do I use the "Date" field contained in each event instead of the timestamp of the dataset file? Thanks,
Hello, I am forwarding data from an embedded system to an Enterprise instance running on a VM. The logs look like this:

acces_monitoring (indexed on Splunk; an empty Logoff_time means the session is still online):

   Access_IP        Access_time        Logoff_time
1  192.168.200.55   1699814895.000000
2  192.168.200.55   1699814004.000000  1699814060.000000
3  192.168.200.55   1699811754.000000  1699812677.000000
4  192.168.200.55   1699808364.000000  1699809475.000000
5  192.168.200.55   1699806635.000000  1699806681.000000
6  192.168.200.55   1699791222.000000  1699806628.000000
7  192.168.200.55   1699791125.000000  1699791127.000000
8  192.168.200.55   1699724540.000000  1699724541.000000
9  192.168.200.55   1699724390.000000  1699724474.000000

command_monitoring:

    Access_IP        exec_time          executed_command
1   192.168.200.55   1699813121.000000  cd ~
2   192.168.200.55   1699813116.000000  cd /opt
3   192.168.200.55   1699813110.000000  prova3
4   192.168.200.55   1699811813.000000  cat sshd_config
5   192.168.200.55   1699811807.000000  cd /etc/ssh
6   192.168.200.55   1699811801.000000  cd etc
7   192.168.200.55   1699811793.000000  cd
8   192.168.200.55   1699811788.000000  ls
9   192.168.200.55   1699811783.000000  e che riconosce le sessioni diverse
10  192.168.200.55   1699811776.000000  spero funziona
11  192.168.200.55   1699809221.000000  cat command_log.log
12  192.168.200.55   1699809210.000000  ./custom_shell.sh
13  192.168.200.55   1699808594.000000  CD /MEDIA
14  192.168.200.55   1699808587.000000  cd /medi
15  192.168.200.55   1699808584.000000  omar

When I try to join the two by running:

index=main source="/media/ssd1/ip_command_log/command_log.log"
| eval exec_time=strptime(exec_time, "%a %b %d %H:%M:%S %Y")
| rename ip_execut as Access_IP
| table Access_IP, exec_time, executed_command
| join type=left Access_IP
    [ search index=main source="/media/ssd1/splunk_wtmp_output.txt"
      | dedup Access_time
      | eval Access_time=strptime(Access_time, "%a %b %d %H:%M:%S %Y")
      | eval Logoff_time=if(Logoff_time="still logged in", now(), strptime(Logoff_time, "%a %b %d %H:%M:%S %Y"))
      | table Access_IP, Access_time, Logoff_time ]
| eval session_active=if(exec_time>=Access_time AND exec_time<=coalesce(Logoff_time, now()), "true", "false")
| where session_active="true"
| table Access_IP, Access_time, Logoff_time, exec_time, executed_command

it does not join over every session but only the last one, i.e. the one started at 1699814895.000000, and it will not identify any of the commands run on the embedded system in the correct session. What could be the catch?

Thanks in advance!
Works really great! Thanks a lot.