Getting Data In

How to configure line breaks for multiline events?

paqua77
Explorer

Line breaks in multiline events?

Line breakers: BeforeJob and Start Backup
The Job ID is unique per job.
The sample log below contains 3 events.

If a line contains BeforeJob, the event should break before that BeforeJob line, but should NOT break before the Start Backup line that follows it (the first occurrence of Start Backup for that job).

I tried the settings below, but the rules conflict; I think I need a proper expression for BeforeJob:

BREAK_ONLY_BEFORE = BeforeJob | Start Backup
MAX_TIMESTAMP_LOOKAHEAD = 300
MUST_NOT_BREAK_AFTER = BeforeJob
NO_BINARY_CHECK = 1
pulldown_type = 1

For example, the desired boundary looks like this — break before the BeforeJob line, but not before the Start Backup line that follows it:

01-Jul 22:08 apsrd2058-dir JobId 210: End auto prune.

01-Jul 23:10 apsrd2058-dir JobId 211: shell command: run BeforeJob "/etc/bareos/make_catalog_backup_uhg.pl MyCatalog"
01-Jul 23:10 apsrd2058-dir JobId 211: Start Backup JobId 211, Job=BackupCatalog.2014-07-01_23.10.00_28
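The intended grouping can be sketched outside Splunk. This is only an illustration of the rule I want — it is not how Splunk's line-merging engine evaluates these settings, and `split_events` is a hypothetical helper, not a Splunk API:

```python
import re

# Illustration only: simulate the intended grouping, not Splunk's
# actual BREAK_ONLY_BEFORE / MUST_NOT_BREAK_AFTER semantics.
# Rule: break before a line containing "BeforeJob" or "Start Backup",
# EXCEPT that a "Start Backup" line does not start a new event when
# the previous line was a "BeforeJob" line.
def split_events(lines):
    breaker = re.compile(r"BeforeJob|Start Backup")
    events, current = [], []
    prev_had_beforejob = False
    for line in lines:
        starts_event = bool(breaker.search(line))
        if starts_event and "Start Backup" in line and prev_had_beforejob:
            starts_event = False  # keep Start Backup with its BeforeJob line
        if starts_event and current:
            events.append(current)
            current = []
        current.append(line)
        prev_had_beforejob = "BeforeJob" in line
    if current:
        events.append(current)
    return events

log = [
    'JobId 210: End auto prune.',
    'JobId 211: shell command: run BeforeJob "/etc/bareos/make_catalog_backup_uhg.pl MyCatalog"',
    'JobId 211: Start Backup JobId 211, Job=BackupCatalog.2014-07-01_23.10.00_28',
]
print(len(split_events(log)))  # 2 events: the prune tail, then BeforeJob + Start Backup together
```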

Sample Log:
01-Jul 22:08 apsrd2058-dir JobId 210: Start Backup JobId 210, Job=DBSRP0154-mySql-system-catalogs.2014-07-01_22.05.00_27
01-Jul 22:08 apsrd2058-dir JobId 210: Using Device "FileChgr3-Dev1" to write.
01-Jul 22:08 apsrd2058-sd JobId 210: Volume "OPV0037" previously written, moving to end of data.
01-Jul 22:08 apsrd2058-sd JobId 210: Ready to append to end of Volume "OPV0037" size=890611883
01-Jul 22:08 apsrd2058-sd JobId 210: Elapsed time=00:00:01, Transfer rate=119.9 K Bytes/second
01-Jul 22:08 apsrd2058-sd JobId 210: Sending spooled attrs to the Director. Despooling 309 bytes ...
01-Jul 22:08 apsrd2058-dir JobId 210: Bareos apsrd2058-dir 13.2.2 (12Nov13):
Build OS: x86_64-unknown-linux-gnu redhat Red Hat Enterprise Linux Server release 6.3 (Santiago)
JobId: 210
Job: DBSRP0154-mySql-system-catalogs.2014-07-01_22.05.00_27
Backup Level: Full
Client: "dbsrp0154-fd" 5.0.3 (04Aug10) x86_64-koji-linux-gnu,redhat,
FileSet: "mysql system catalog set" 2014-06-25 22:05:04
Pool: "File" (From Job resource)
Catalog: "MyCatalog" (From Client resource)
Storage: "File3" (From Job resource)
Scheduled time: 01-Jul-2014 22:05:00
Start time: 01-Jul-2014 22:08:03
End time: 01-Jul-2014 22:08:04
Elapsed time: 1 sec
Priority: 10
FD Files Written: 1
SD Files Written: 1
FD Bytes Written: 119,761 (119.7 KB)
SD Bytes Written: 119,964 (119.9 KB)
Rate: 119.8 KB/s
Software Compression: None
VSS: no
Encryption: no
Accurate: no
Volume name(s): OPV0037
Volume Session Id: 71
Volume Session Time: 1403751883
Last Volume Bytes: 890,732,465 (890.7 MB)
Non-fatal FD errors: 0
SD Errors: 0
FD termination status: OK
SD termination status: OK
Termination: Backup OK

01-Jul 22:08 apsrd2058-dir JobId 210: Begin pruning Jobs older than 1 month .
01-Jul 22:08 apsrd2058-dir JobId 210: No Jobs found to prune.
01-Jul 22:08 apsrd2058-dir JobId 210: Begin pruning Files.
01-Jul 22:08 apsrd2058-dir JobId 210: No Files found to prune.
01-Jul 22:08 apsrd2058-dir JobId 210: End auto prune.

01-Jul 23:10 apsrd2058-dir JobId 211: shell command: run BeforeJob "/etc/bareos/make_catalog_backup_uhg.pl MyCatalog"
01-Jul 23:10 apsrd2058-dir JobId 211: Start Backup JobId 211, Job=BackupCatalog.2014-07-01_23.10.00_28
01-Jul 23:10 apsrd2058-dir JobId 211: Using Device "FileChgr3-Dev1" to write.
01-Jul 23:10 apsrd2058-sd JobId 211: Volume "OPV0037" previously written, moving to end of data.
01-Jul 23:10 apsrd2058-sd JobId 211: Ready to append to end of Volume "OPV0037" size=890732465
01-Jul 23:10 apsrd2058-sd JobId 211: Elapsed time=00:00:01, Transfer rate=25.14 M Bytes/second
01-Jul 23:10 apsrd2058-sd JobId 211: Sending spooled attrs to the Director. Despooling 293 bytes ...
01-Jul 23:10 apsrd2058-dir JobId 211: Bareos apsrd2058-dir 13.2.2 (12Nov13):
Build OS: x86_64-unknown-linux-gnu redhat Red Hat Enterprise Linux Server release 6.3 (Santiago)
JobId: 211
Job: BackupCatalog.2014-07-01_23.10.00_28
Backup Level: Full
Client: "apsrd2058-fd" 13.2.2 (12Nov13) x86_64-unknown-linux-gnu,redhat,Red Hat Enterprise Linux Server release 6.3 (Santiago)
FileSet: "Catalog" 2014-06-25 22:25:22
Pool: "File" (From Job resource)
Catalog: "MyCatalog" (From Client resource)
Storage: "File3" (From Job resource)
Scheduled time: 01-Jul-2014 23:10:00
Start time: 01-Jul-2014 23:10:03
End time: 01-Jul-2014 23:10:04
Elapsed time: 1 sec
Priority: 11
FD Files Written: 1
SD Files Written: 1
FD Bytes Written: 25,146,795 (25.14 MB)
SD Bytes Written: 25,146,914 (25.14 MB)
Rate: 25146.8 KB/s
Software Compression: None
VSS: no
Encryption: no
Accurate: no
Volume name(s): OPV0037
Volume Session Id: 72
Volume Session Time: 1403751883
Last Volume Bytes: 915,898,455 (915.8 MB)
Non-fatal FD errors: 0
SD Errors: 0
FD termination status: OK
SD termination status: OK
Termination: Backup OK

01-Jul 23:10 apsrd2058-dir JobId 211: Begin pruning Jobs older than 2 months .
01-Jul 23:10 apsrd2058-dir JobId 211: No Jobs found to prune.
01-Jul 23:10 apsrd2058-dir JobId 211: Begin pruning Files.
01-Jul 23:10 apsrd2058-dir JobId 211: Pruned Files from 1 Jobs for client apsrd2058-fd from catalog.
01-Jul 23:10 apsrd2058-dir JobId 211: End auto prune.

01-Jul 23:10 apsrd2058-dir JobId 211: shell command: run AfterJob "/etc/bareos/delete_catalog_backup_uhg"
01-Jul 23:10 apsrd2058-dir JobId 211: AfterJob: + db_name=BUAASbareos
01-Jul 23:10 apsrd2058-dir JobId 211: AfterJob: + wd=/var/lib/bareos
01-Jul 23:10 apsrd2058-dir JobId 211: AfterJob: + max_versions=5
01-Jul 23:10 apsrd2058-dir JobId 211: AfterJob: + max_days=7
01-Jul 23:10 apsrd2058-dir JobId 211: AfterJob: ++ find /var/lib/bareos -name '*BUAASbareos*sql'
01-Jul 23:10 apsrd2058-dir JobId 211: AfterJob: ++ wc -l
01-Jul 23:10 apsrd2058-dir JobId 211: AfterJob: find: failed to restore initial working directory: Permission denied
01-Jul 23:10 apsrd2058-dir JobId 211: AfterJob: + filecount=11
01-Jul 23:10 apsrd2058-dir JobId 211: AfterJob: + echo 'filecount = 11'
01-Jul 23:10 apsrd2058-dir JobId 211: AfterJob: filecount = 11
01-Jul 23:10 apsrd2058-dir JobId 211: AfterJob: + '[' 11 -gt 5 ']'
01-Jul 23:10 apsrd2058-dir JobId 211: AfterJob: + find /var/lib/bareos -name '*BUAASbareos*sql' -ctime +7 -exec rm -f '{}' ';'
01-Jul 23:10 apsrd2058-dir JobId 211: AfterJob: find: failed to restore initial working directory: Permission denied
01-Jul 23:10 apsrd2058-dir JobId 211: AfterJob: + '[' -f /var/lib/bareos/BUAASbareos.sql ']'
01-Jul 23:10 apsrd2058-dir JobId 211: AfterJob: + rm -f /var/lib/bareos/BUAASbareos.sql
02-Jul 21:10 apsrd2058-dir JobId 212: Start Backup JobId 212, Job=DBSED1397-SqlServer.2014-07-02_21.10.00_29
02-Jul 21:10 apsrd2058-dir JobId 212: Using Device "FileChgr3-Dev1" to write.
02-Jul 21:10 apsrd2058-sd JobId 212: Volume "OPV0037" previously written, moving to end of data.
02-Jul 21:10 apsrd2058-sd JobId 212: Ready to append to end of Volume "OPV0037" size=915898455
02-Jul 21:10 dbsed1397-fd JobId 212: Generate VSS snapshots. Driver="Win64 VSS", Drive(s)="E"
02-Jul 21:10 dbsed1397-fd JobId 212: VSS Writer (BackupComplete): "Task Scheduler Writer", State: 0x1 (VSS_WS_STABLE)
02-Jul 21:10 dbsed1397-fd JobId 212: VSS Writer (BackupComplete): "VSS Metadata Store Writer", State: 0x1 (VSS_WS_STABLE)
02-Jul 21:10 dbsed1397-fd JobId 212: VSS Writer (BackupComplete): "Performance Counters Writer", State: 0x1 (VSS_WS_STABLE)
02-Jul 21:10 dbsed1397-fd JobId 212: VSS Writer (BackupComplete): "System Writer", State: 0x1 (VSS_WS_STABLE)
02-Jul 21:10 dbsed1397-fd JobId 212: VSS Writer (BackupComplete): "SqlServerWriter", State: 0x1 (VSS_WS_STABLE)
02-Jul 21:10 dbsed1397-fd JobId 212: VSS Writer (BackupComplete): "ASR Writer", State: 0x1 (VSS_WS_STABLE)
02-Jul 21:10 dbsed1397-fd JobId 212: VSS Writer (BackupComplete): "Shadow Copy Optimization Writer", State: 0x1 (VSS_WS_STABLE)
02-Jul 21:10 dbsed1397-fd JobId 212: VSS Writer (BackupComplete): "Registry Writer", State: 0x1 (VSS_WS_STABLE)
02-Jul 21:10 dbsed1397-fd JobId 212: VSS Writer (BackupComplete): "BITS Writer", State: 0x1 (VSS_WS_STABLE)
02-Jul 21:10 dbsed1397-fd JobId 212: VSS Writer (BackupComplete): "COM+ REGDB Writer", State: 0x1 (VSS_WS_STABLE)
02-Jul 21:10 dbsed1397-fd JobId 212: VSS Writer (BackupComplete): "WMI Writer", State: 0x1 (VSS_WS_STABLE)
02-Jul 21:10 apsrd2058-sd JobId 212: Elapsed time=00:00:13, Transfer rate=34.21 K Bytes/second
02-Jul 21:10 apsrd2058-sd JobId 212: Sending spooled attrs to the Director. Despooling 1,130 bytes ...
02-Jul 21:10 apsrd2058-dir JobId 212: Bareos apsrd2058-dir 13.2.2 (12Nov13):
Build OS: x86_64-unknown-linux-gnu redhat Red Hat Enterprise Linux Server release 6.3 (Santiago)
JobId: 212
Job: DBSED1397-SqlServer.2014-07-02_21.10.00_29
Backup Level: Full
Client: "dbsed1397-fd" 5.2.10 (28Jun12) Microsoft Windows Server 2008 R2 Enterprise Edition Service Pack 1 (build 7601), 64-bit,Cross-compile,Win64
FileSet: "SQL Server Set" 2014-06-25 21:36:10
Pool: "File" (From Job resource)
Catalog: "MyCatalog" (From Client resource)
Storage: "File3" (From Job resource)
Scheduled time: 02-Jul-2014 21:10:00
Start time: 02-Jul-2014 21:10:03
End time: 02-Jul-2014 21:10:17
Elapsed time: 14 secs
Priority: 10
FD Files Written: 3
SD Files Written: 3
FD Bytes Written: 443,984 (443.9 KB)
SD Bytes Written: 444,742 (444.7 KB)
Rate: 31.7 KB/s
Software Compression: 85.7 % (gzip)
VSS: yes
Encryption: no
Accurate: no
Volume name(s): OPV0037
Volume Session Id: 73
Volume Session Time: 1403751883
Last Volume Bytes: 916,344,491 (916.3 MB)
Non-fatal FD errors: 0
SD Errors: 0
FD termination status: OK
SD termination status: OK
Termination: Backup OK

02-Jul 21:10 apsrd2058-dir JobId 212: Begin pruning Jobs older than 12 months .
02-Jul 21:10 apsrd2058-dir JobId 212: No Jobs found to prune.
02-Jul 21:10 apsrd2058-dir JobId 212: Begin pruning Files.
02-Jul 21:10 apsrd2058-dir JobId 212: No Files found to prune.
02-Jul 21:10 apsrd2058-dir JobId 212: End auto prune.

1 Solution

ShaneNewman
Motivator

Props.conf

BREAK_ONLY_BEFORE=^\d+\-\w+\s\d{2}\:\d{2}
MAX_TIMESTAMP_LOOKAHEAD=150
NO_BINARY_CHECK=1
SHOULD_LINEMERGE=true
TIME_FORMAT=%d-%b %H:%M
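As a quick sanity check outside Splunk (Python here is just for illustration): the BREAK_ONLY_BEFORE pattern matches only the timestamped header lines, so the continuation lines of the job summary do not match and get merged into the preceding event, and TIME_FORMAT parses the leading timestamp:

```python
import re
from datetime import datetime

# The BREAK_ONLY_BEFORE regex from the props.conf stanza above.
breaker = re.compile(r"^\d+\-\w+\s\d{2}\:\d{2}")

# Timestamped header line -> matches, starts a new event.
print(bool(breaker.match("01-Jul 22:08 apsrd2058-dir JobId 210: End auto prune.")))  # True
# Job-summary continuation lines -> no match, merged into the prior event.
print(bool(breaker.match("JobId: 210")))                          # False
print(bool(breaker.match("Build OS: x86_64-unknown-linux-gnu")))  # False

# TIME_FORMAT=%d-%b %H:%M parses the leading timestamp. There is no
# year in the log, so strptime defaults it to 1900; Splunk fills in
# the year from context.
ts = datetime.strptime("01-Jul 22:08", "%d-%b %H:%M")
print(ts.month, ts.day, ts.hour)  # 7 1 22
```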

Then append this to your search to set up a transaction (which is what you should actually be doing):

rex field=_raw "JobId\s(?<job_id>\d+)" | sort + _time | transaction job_id
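Roughly, the rex + transaction pipeline extracts job_id from _raw and then groups events sharing the same job_id into one transaction. A sketch of that grouping in Python (illustration only — Splunk's transaction command also handles time ordering, spans, and more; note Python spells the named group `(?P<name>)` rather than SPL's `(?<name>)`):

```python
import re
from collections import OrderedDict

# Sample events, one _raw string each (paths/jobs abridged).
events = [
    "01-Jul 22:08 apsrd2058-dir JobId 210: Start Backup JobId 210",
    "01-Jul 22:08 apsrd2058-dir JobId 210: End auto prune.",
    "01-Jul 23:10 apsrd2058-dir JobId 211: Start Backup JobId 211",
]

# Same extraction as: rex field=_raw "JobId\s(?<job_id>\d+)"
rex = re.compile(r"JobId\s(?P<job_id>\d+)")

# Same grouping as: transaction job_id (minus time handling).
transactions = OrderedDict()
for raw in events:
    m = rex.search(raw)
    if m:
        transactions.setdefault(m.group("job_id"), []).append(raw)

print(list(transactions))        # ['210', '211']
print(len(transactions["210"]))  # 2 events collapsed into the JobId 210 transaction
```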

