Getting Data In

BREAK_ONLY_BEFORE not working

tmarlette
Motivator

I have a clustered system, and I'm attempting to break events at the search head level, but the events aren't breaking appropriately. I want each event to break at the "Job <myJob>" portion of the messages below.

This is an example of the log messages:
Job <myJob>, User <myuser>, Project <default>, Status <RUN>, Queue <normal>,
Command <myCommand>
Mon Apr 27 07:33:03: Submitted from host <myHost>, CWD <$HOME
>;
Mon Apr 27 07:33:04: Started on <myHost>, Execution Home </myHome>, Execution CWD <myCwd>;
Tue Apr 28 09:57:59: Resource usage collected.
The CPU time used is 1345 seconds.
MEM: 114 Mbytes; SWAP: 35.1 Gbytes; NTHREAD: 54
PGID: 9430; PIDs: 9430 9435 9437 9450 9501
PGID: 9461; PIDs: 9461

 MEMORY USAGE:
 MAX MEM: 4.2 Gbytes;  AVG MEM: 194 Mbytes

 SCHEDULING PARAMETERS:
           r15s   r1m  r15m   ut      pg    io   ls    it    tmp    swp    mem
 loadSched   -     -     -   0.9       -     -    -     -     -      -      -  
 loadStop    -     -     -   0.9       -     -    -     -     -      -      -  

 RESOURCE REQUIREMENT DETAILS:
 Combined: select[type == local] order[ut:mem]
 Effective: select[type == local] order[ut:mem] 
------------------------------------------------------------------------------

Job <myJob>, User <myUser>, Project <default>, Status <RUN>, Queue <normal>, 
                     Interactive mode, Command </myCommand >
Tue Apr 28 01:42:38: Submitted from host <myHost>, CWD </myCwd;
Tue Apr 28 01:42:38: Started on <myHost>;
Tue Apr 28 09:58:00: Resource usage collected.
                     The CPU time used is 192 seconds.
                     MEM: 11 Mbytes;  SWAP: 1 Gbytes;  NTHREAD: 15
                     PGID: 20416;  PIDs: 20416 
                     PGID: 20425;  PIDs: 20425 20427 20442 


 MEMORY USAGE:
 MAX MEM: 11 Mbytes;  AVG MEM: 10 Mbytes

 SCHEDULING PARAMETERS:
           r15s   r1m  r15m   ut      pg    io   ls    it    tmp    swp    mem
 loadSched   -     -     -   0.9       -     -    -     -     -      -      -  
 loadStop    -     -     -   0.9       -     -    -     -     -      -      -  

 RESOURCE REQUIREMENT DETAILS:
 Combined: select[type == local] order[ut:mem]
 Effective: select[type == local] order[ut:mem] 
------------------------------------------------------------------------------

Job <myJob>, User <myUser>, Project <default>, Status <RUN>, Queue <normal>, I
                     nteractive mode, Command </myCommand>
Tue Apr 28 02:47:25: Submitted from host <myHost>, CWD <myCwd
                     >;
Tue Apr 28 02:47:25: Started on <myHost>;
Tue Apr 28 09:58:26: Resource usage collected.
                     The CPU time used is 84 seconds.
                     MEM: 8 Mbytes;  SWAP: 928 Mbytes;  NTHREAD: 14
                     PGID: 25895;  PIDs: 25895 
                     PGID: 25898;  PIDs: 25898 25900 25915 


 MEMORY USAGE:
 MAX MEM: 8 Mbytes;  AVG MEM: 7 Mbytes

 SCHEDULING PARAMETERS:
           r15s   r1m  r15m   ut      pg    io   ls    it    tmp    swp    mem
 loadSched   -     -     -   0.9       -     -    -     -     -      -      -  
 loadStop    -     -     -   0.9       -     -    -     -     -      -      -  

 RESOURCE REQUIREMENT DETAILS:
 Combined: select[type == local] order[ut:mem]
 Effective: select[type == local] order[ut:mem] 

And this is my stanza in props.conf on the search head for this sourcetype:

[mySourcetype]
SHOULD_LINEMERGE = true
BREAK_ONLY_BEFORE = \Job\s.\d+.

When I test this regex on regexr.com, it matches exactly where I need to break the event, but it doesn't seem to be working here. Am I doing something wrong?

1 Solution

tmarlette
Motivator

This is because "BREAK_ONLY_BEFORE" was set at the search head level, not the indexing level. Thank you, Rich!

For "BREAK_ONLY_BEFORE" to work, it MUST be applied at the indexing tier, where event breaking actually happens.
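As a sketch, the stanza deployed at the indexing tier might look like the following. The file path and the comment about cluster deployment are assumptions for illustration, not from the original post:

```ini
# $SPLUNK_HOME/etc/system/local/props.conf on each indexer
# (in a clustered environment, push this from the cluster manager instead)
[mySourcetype]
SHOULD_LINEMERGE = true
# Matches "Job <89238764>" at the start of each record; the 'J' needs no escape
BREAK_ONLY_BEFORE = Job\s.\d+.
```

A restart (or a cluster bundle push) is needed for indexers to pick up the change, and it only affects data indexed after that point.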




richgalloway
SplunkTrust

Is <MyJob> literal text or a field? What kind of field?

Have you tried putting parens around the regex to create a matching group?

---
If this reply helps you, Karma would be appreciated.

tmarlette
Motivator

This is literal text. In the log entry it's an integer, which ends up looking like the line below. The angle brackets are in the log entry as well.

Job <89238764>

I have not tried putting parens around it. I can give it a shot.


richgalloway
SplunkTrust

Your regex should match, except there's no need to escape the 'J'.
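As a quick sanity check outside Splunk (a standalone sketch using Python's `re` module, with the sample job number from this thread), the pattern without the escaped 'J' does match:

```python
import re

# The props.conf pattern with the unnecessary \J escape removed.
# \s matches the space, each '.' matches an angle bracket.
pattern = re.compile(r"Job\s.\d+.")

sample = "Job <89238764>, User <myuser>, Project <default>, Status <RUN>"

match = pattern.search(sample)
print(match.group(0))  # -> Job <89238764>
```

Note that this only confirms the regex itself; where the stanza lives (search head vs. indexer) is a separate issue.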

---
If this reply helps you, Karma would be appreciated.

tmarlette
Motivator

I agree. I'm not sure whether I should also be putting this at the indexing tier, though. Do you know if it needs to be there?


richgalloway
SplunkTrust

Yes, you should be setting the props.conf file on your indexer(s).

---
If this reply helps you, Karma would be appreciated.

tmarlette
Motivator

I believe that would be the problem, then. Once I'm able to get that setting onto my indexers, I'll let you know the results. Currently it only resides on the search heads.

Thank you!
