Splunk Search

How to detect two consecutive lines with same pattern

dpatiladobe
Explorer

I want to detect the pattern of two consecutive lines with "Received x messages". In the ideal scenario, every "Received" should be followed by an "Updated". I tried streamstats but had no luck.

8/22/17
5:30:32.542 AM

2017-08-22 01:30:32.542 INFO 3815 --- [ThreadPoolService-safe-1-thread-6256] com.adobe.ids.kafka.KafkaRestFacade : Updated consumed offset to value '56622449' for consumer 'mps_history_channel_consumer_3_prod_archiver_b030beb9' in consumer group 'mps_history'
date_hour = 1
8/22/17
5:30:17.849 AM

2017-08-22 01:30:17.849 INFO 3815 --- [ThreadPoolService-safe-1-thread-6252] c.a.ids.consumer.BasePermissionConsumer : Received 1 messages in batch for consumer 'mps_history_channel_consumer_3_prod_archiver_b030beb9'
date_hour = 1
8/22/17
5:30:17.606 AM

2017-08-22 01:30:17.606 INFO 3815 --- [ThreadPoolService-safe-1-thread-6248] c.a.ids.consumer.BasePermissionConsumer : Received 1 messages in batch for consumer 'mps_history_channel_consumer_3_prod_archiver_b030beb9'
date_hour = 1
8/22/17
5:30:17.437 AM

2017-08-22 01:30:17.437 INFO 3815 --- [ThreadPoolService-safe-1-thread-6252] com.adobe.ids.kafka.KafkaRestFacade : Updated consumed offset to value '56622448' for consumer 'mps_history_channel_consumer_3_prod_archiver_b030beb9' in consumer group 'mps_history'
date_hour = 1
8/22/17
5:30:02.602 AM

2017-08-22 01:30:02.602 INFO 3815 --- [ThreadPoolService-safe-1-thread-6248] com.adobe.ids.kafka.KafkaRestFacade : Updated consumed offset to value '56622448' for consumer 'mps_history_channel_consumer_3_prod_archiver_b030beb9' in consumer group 'mps_history'
date_hour = 1
8/22/17
5:29:56.573 AM

2017-08-22 01:29:56.573 INFO 3815 --- [ThreadPoolService-safe-1-thread-6240] c.a.ids.consumer.BasePermissionConsumer : Received 1 messages in batch for consumer 'mps_history_channel_consumer_3_prod_archiver_b030beb9'
date_hour = 1
8/22/17
5:29:56.572 AM

2017-08-22 01:29:56.572 INFO 3815 --- [ThreadPoolService-safe-1-thread-6244] c.a.ids.consumer.BasePermissionConsumer : Received 0 messages in batch for consumer 'mps_history_channel_consumer_3_prod_archiver_b030beb9'
date_hour = 1
8/22/17
5:29:47.776 AM

2017-08-22 01:29:47.776 INFO 3815 --- [ThreadPoolService-safe-1-thread-6244] com.adobe.ids.kafka.KafkaRestFacade : Updated consumed offset to value '56622447' for consumer 'mps_history_channel_consumer_3_prod_archiver_b030beb9' in consumer group 'mps_history'
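One way to sketch this (a hedged sketch, not a tested answer): extract the action word and the consumer name from each event, then use streamstats to compare each event with the previous one for the same consumer. The `action` and `prior_action` field names are illustrative; adjust the index and search terms to your environment.

```
index=ids_mps_prd ("Received" OR "Updated")
| rex field=_raw "for consumer '(?P<consumer_name>[^']+)'"
| rex field=_raw " : (?P<action>Received|Updated) "
| sort 0 consumer_name _time
| streamstats current=f last(action) as prior_action by consumer_name
| where action="Received" AND prior_action="Received"
```

The final `where` keeps only events whose immediate predecessor for the same consumer was also a "Received", i.e. the broken sequence.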


dpatiladobe
Explorer

I was able to get the result using the transaction command:

search earliest=-65m latest=-5m | rex field=_raw "for consumer '(?P<consumer_name>[^']+)'" | transaction consumer_name startswith="Updated" | search eventcount > 2


DalJeanis
Legend

What are you trying to do with it? What is the use case? Do you want to eliminate the dups, or detect them?

 your search here
 some eval or rex to get what you want to test into checkfield
| streamstats current=f last(checkfield) as priorcheckfield

If you want to get rid of dups, add this...

| where isnull(priorcheckfield) OR checkfield!=priorcheckfield

If you want to keep both dups and only the dups, add this...

| eval SecondDup=if(checkfield==priorcheckfield,1,null())
| reverse
| streamstats current=T last(SecondDup) as BothDups window=2 
| where BothDups==1
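You can test this logic on synthetic events with makeresults before pointing it at real data (the sample values here are made up):

```
| makeresults count=5
| streamstats count as n
| eval checkfield=case(n==1,"A", n==2,"B", n==3,"B", n==4,"C", n==5,"C")
| streamstats current=f last(checkfield) as priorcheckfield
| eval SecondDup=if(checkfield==priorcheckfield,1,null())
```

Rows 3 and 5 repeat the previous row's checkfield, so they come back with SecondDup=1.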

dpatiladobe
Explorer

This worked for me:

index=ids_mps_prd "mps_prod" earliest=-65m latest=-5m | rex field=_raw "for consumer '(?P<consumer_name>[^']+)'" | transaction consumer_name startswith="Updated" | search eventcount < 2 OR eventcount > 2


dpatiladobe
Explorer

When I run the query for a single consumer it works as expected, but when I run it for all consumers it does not report the exact issue.

index=ids_mps_prd "mps_history_channel_consumer_3" | transaction startswith="Received" | search eventcount > 2 OR eventcount < 2

But this one does not work:

index=ids_mps_prd "mps_prod" | rex field=_raw "for consumer '(?P<consumer_name>[^']+)'" | transaction consumer_name startswith="Received" | search eventcount > 2


DalJeanis
Legend

I'll ask again. What are you trying to do? What is your use case?


dpatiladobe
Explorer

I am trying to find pairs of lines, "Updated" followed by "Received", grouped by consumer group.
Our Java program gets messages from Kafka and updates the offset back to Kafka, so "Updated" followed by "Received" should be the sequence. If that sequence is missed somehow (due to Java scheduling or a late response from Kafka), we may have missed a message, or we are ahead of the actual offset in Kafka, which causes negative lag.
The query below worked for me:

index=ids_mps_prd "mps_prod" earliest=-65m latest=-5m | rex field=_raw "for consumer '(?P<consumer_name>[^']+)'" | transaction consumer_name startswith="Updated" | search eventcount < 2 OR eventcount > 2
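To turn this into a per-consumer summary (for example, to drive an alert), you can count the broken transactions. This is a hedged extension of the query above, not part of the original answer; the `broken_pairs` field name is illustrative:

```
index=ids_mps_prd "mps_prod" earliest=-65m latest=-5m
| rex field=_raw "for consumer '(?P<consumer_name>[^']+)'"
| transaction consumer_name startswith="Updated"
| where eventcount!=2
| stats count as broken_pairs by consumer_name
```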

