Splunk Search

Trying to generate an alert, but the problem is that the logs are different

yeisonv
Explorer

Good morning. I am trying to generate an alert for production applications when they are in "debug" mode.

The problem is that the logs are different. 

When I search:

index=wls sourcetype=wls_managedserver "debug" | stats count by host

it logically lists the hosts that meet the debug-mode condition.

I need to generate an alert that emails me which hosts have an app in debug mode, but that sends only one trace from that search per host (see the sketch after this post).

Or I could extract the fields, but that is more difficult because the log formats are different:

Host= W1422

Cluster= qa.3.3_man05

app= app.userdown or [appwork.consumer.serviceTaskExecutorBackedUpQueueConsumer- 149]

log examples:

####<Jul 29, 2020 12:07:28 PM ART> <Notice> <Stdout> <W1422> <qa3.3_man05> <mq.task.executor-1> <<WLS Kernel>> <> <> <1596035248169> <BEA-000000> <2020/07/29 12:07:28.169 [DEBUG] [mq.task.executor-1] [appwork.consumer.serviceTaskExecutorBackedUpQueueConsumer- 149] - No hay mensajes en la cola>

 

####<Jul 29, 2020 12:09:16 PM ART> <Notice> <Stdout> <W1522> <qa3.3_cl6_man01> <app.userdown> <<WLS Kernel>> <> <> <1596035356838> <BEA-000000> <[29/07/2020 12:09] DEBUG MonitoringManager.getSourceProcessor() -> Verificando processor para: javax.jms.ExceptionListener contra el tipo .persistence>

####<Jul 29, 2020 12:10:01 PM ART> <Notice> <Stdout> <W0188> <desa5.3_cl6_man01> <org.springframework.scheduling.quartz.SchedulerFactoryBean#0_Worker-7> <<WLS Kernel>> <> <> <1596035401281> <BEA-000000> <[29/07/2020 12:10] DEBUG SqlStatementLogger.logStatement() ->

 

If anyone can help me, thanks.
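One way to get a single line per host into the alert email is to collapse the results before the alert fires. A minimal sketch, using only the search above plus Splunk's default host and _raw fields (the uppercase "DEBUG" variant is included on the assumption that some apps log it that way):

index=wls sourcetype=wls_managedserver "debug" OR "DEBUG"
| stats count as debug_events, latest(_raw) as sample_trace by host

Saved as an alert that triggers when the number of results is greater than zero, the email then carries one row per host with a single sample trace instead of every matching event.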


isoutamo
SplunkTrust

Hi

If those three lines cover all of the WLS message formats you have, then you could try this.

index=_internal | head 1
| eval _raw="####<Jul 29, 2020 12:07:28 PM ART> <Notice> <Stdout> <W1422> <qa3.3_man05> <mq.task.executor-1> <<WLS Kernel>> <> <> <1596035248169> <BEA-000000> <2020/07/29 12:07:28.169 [DEBUG] [mq.task.executor-1] [appwork.consumer.serviceTaskExecutorBackedUpQueueConsumer- 149] - No hay mensajes en la cola>
####<Jul 29, 2020 12:09:16 PM ART> <Notice> <Stdout> <W1522> <qa3.3_cl6_man01> <app.userdown> <<WLS Kernel>> <> <> <1596035356838> <BEA-000000> <[29/07/2020 12:09] DEBUG MonitoringManager.getSourceProcessor() -> Verificando processor para: javax.jms.ExceptionListener contra el tipo .persistence>
####<Jul 29, 2020 12:10:01 PM ART> <Notice> <Stdout> <W0188> <desa5.3_cl6_man01> <org.springframework.scheduling.quartz.SchedulerFactoryBean#0_Worker-7> <<WLS Kernel>> <> <> <1596035401281> <BEA-000000> <[29/07/2020 12:10] DEBUG SqlStatementLogger.logStatement() ->" 
| multikv noheader=t
| rename COMMENT as "prepare sample data"
| rex "(\<[^>]+> ){3}\<(?<Host>[^>]+)>\s+<(?<Cluster>[^>]+)>\s+<(?<App>[^>]+)>.*BEA-000000>\s<(?<Message>[^>]+)>"

r. Ismo 

yeisonv
Explorer

Thank you very much for taking the time to help me.

The problem is that in production we have more than 60 applications and they are always changing. I'm trying to identify when an app is switched to "debug" mode and generate an alert from that.

So when I execute this:

index=wls sourcetype=wls_managedserver "debug" | stats count by host

It shows me the hosts where the word "debug" appears, but I see many events because the app writes the same thing several times in the log. Is there a way to list the hosts so that the email only includes one event per host?


isoutamo
SplunkTrust

Hi

Try adding the following to the end of the previous example:

 

...
| rex field=Message "\[?\d+\/\d+\/\d+\s+\d+:\d+(:\d+\.\d+)?\]?\s+\[?(?<logLevel>[^\s\]]+)\]?"
| where logLevel = "DEBUG"
| stats values(Message) as Messages by logLevel,Host

 

and then define the alert as you need (a sketch of the alert definition follows after this reply).

r. Ismo 
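For the "define the alert" step, a minimal sketch of what that could look like in savedsearches.conf, assuming the alert is managed in configuration files rather than through the UI. The stanza name, schedule, time range, and recipient address are placeholders, the search line abbreviates the two rex commands from the example above with "...", and the exact email action keys can vary by Splunk version:

[wls_debug_mode_alert]
enableSched = 1
cron_schedule = */15 * * * *
dispatch.earliest_time = -15m
dispatch.latest_time = now
counttype = number of events
relation = greater than
quantity = 0
action.email = 1
action.email.to = ops-team@example.com
action.email.sendresults = 1
search = index=wls sourcetype=wls_managedserver "debug" | rex ... | rex field=Message ... | where logLevel = "DEBUG" | stats values(Message) as Messages by logLevel, Host

The same thing can be set up in Splunk Web with Save As > Alert, using the trigger condition "Number of Results is greater than 0" and the email alert action.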

yeisonv
Explorer

Thank you.

What I saw when executing it is that it only brings back values from those three sample logs, extracting the fields I wanted, but it doesn't analyze the other logs.


isoutamo
SplunkTrust

Hi

Have you tried it like this:

index=wls sourcetype=wls_managedserver "debug" 
| rex "(\<[^>]+> ){3}\<(?<Host>[^>]+)>\s+<(?<Cluster>[^>]+)>\s+<(?<App>[^>]+)>.*BEA-000000>\s<(?<Message>[^>]+)>"
| rex field=Message "\[?\d+\/\d+\/\d+\s+\d+:\d+(:\d+\.\d+)?\]?\s+\[?(?<logLevel>[^\s\]]+)\]?"
| where logLevel = "DEBUG"
| stats values(Message) as Messages by logLevel,Host

This should find all nodes, provided the format is the same.

One possible change: BEA-000000 could be changed to BEA-[^>]+ so that the rex also matches error messages, not only informative ones (see the sketch after this reply).

r. Ismo
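For reference, a sketch of that change with only the BEA-000000 literal in the first rex replaced by BEA-[^>]+; everything else is the search from the reply above:

index=wls sourcetype=wls_managedserver "debug" 
| rex "(\<[^>]+> ){3}\<(?<Host>[^>]+)>\s+<(?<Cluster>[^>]+)>\s+<(?<App>[^>]+)>.*BEA-[^>]+>\s<(?<Message>[^>]+)>"
| rex field=Message "\[?\d+\/\d+\/\d+\s+\d+:\d+(:\d+\.\d+)?\]?\s+\[?(?<logLevel>[^\s\]]+)\]?"
| where logLevel = "DEBUG"
| stats values(Message) as Messages by logLevel,Host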

yeisonv
Explorer

Thank you very much, I had put together something similar. What if I want to send only one event found per host?


When I run the query I get many events per host, but I would like to send only one event by email.


yeisonv
Explorer

Thanks, I was able to solve it with "dedup":

index=wls sourcetype=wls_managedserver "debug" OR "DEBUG"
| rex "(\<[^>]+> ){3}\<(?<Host>[^>]+)>\s+<(?<Cluster>[^>]+)>\s+<(?<App>[^>]+)>.*BEA-000000>\s<(?<Message>[^>]+)>"
| rex field=Message "\[?\d+\/\d+\/\d+\s+\d+:\d+(:\d+\.\d+)?\]?\s+\[?(?<logLevel>[^\s\]]+)\]?"
| dedup host
| where logLevel = "DEBUG"
| stats values(Message) as Messages by logLevel, Host, Cluster
 

I'm happy, thanks.
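One ordering caveat on this solution (an editorial note, not from the thread): because dedup host runs before the where filter, the single event kept for a host is not necessarily a DEBUG line, and such a host would then disappear from the results. Filtering first and then collapsing to one row per host avoids that, for example:

index=wls sourcetype=wls_managedserver "debug" OR "DEBUG"
| rex "(\<[^>]+> ){3}\<(?<Host>[^>]+)>\s+<(?<Cluster>[^>]+)>\s+<(?<App>[^>]+)>.*BEA-000000>\s<(?<Message>[^>]+)>"
| rex field=Message "\[?\d+\/\d+\/\d+\s+\d+:\d+(:\d+\.\d+)?\]?\s+\[?(?<logLevel>[^\s\]]+)\]?"
| where logLevel = "DEBUG"
| stats latest(Message) as Message by Host, Cluster

Here stats latest(Message) keeps one row per Host and Cluster; moving "| dedup Host" to after the where clause would work as well.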
