
Oracle - How to have a single event for one log file?

mva
Explorer

Hi Splunk team,

I have an Oracle export log and I would like the whole content of the log file in a single event that includes all the lines of the file (only .log files). The log file looks like this:

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the OLAP and Data Mining options
Export done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set
Note: grants on tables/views/sequences/roles will not be exported
About to export specified users ...
. exporting pre-schema procedural objects and actions
. exporting foreign function library names for user TOTO
. exporting PUBLIC type synonyms
. exporting private type synonyms
. exporting object type definitions for user TOTO
About to export TOTO's objects ...
. exporting database links
. exporting sequence numbers
. exporting cluster definitions
. about to export TOTO's tables via Conventional Path ...
. . exporting table ACCOUNT 5 rows exported
... (many more tables and rows exported)
. exporting synonyms
. exporting views
. exporting stored procedures
. exporting operators
. exporting referential integrity constraints
. exporting triggers
. exporting indextypes
. exporting bitmap, functional and extensible indexes
. exporting posttables actions
. exporting materialized views
. exporting snapshot logs
. exporting job queues
. exporting refresh groups and children
. exporting dimensions
. exporting post-schema procedural objects and actions
. exporting statistics
Export terminated successfully without warnings.

Splunk splits this log file into 2 events:
- one with the first 35 lines
- another with the remaining lines to the end of my log file

I want it all in a single event. For the moment, I'm using the following configuration:

props.conf:

[Oracle]
SHOULD_LINEMERGE = true
MAX_EVENTS = 500000
BREAK_ONLY_BEFORE = !!!!!
CHARSET = AUTO

and my inputs.conf :

[monitor://E:\Oracle\Backup\Datapump]
disabled = false
whitelist = (?i).*.log$
sourcetype = Oracle
index = oracle
time_before_close = 60
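As a side note, the unescaped dot in that whitelist makes it looser than intended: `.log$` matches any character followed by `log`, not a literal `.log` extension. A quick check, sketched with Python's `re` (which behaves like Splunk's PCRE for this pattern):

```python
import re

# The whitelist as posted: the unescaped "." before "log" matches ANY character.
loose = re.compile(r'(?i).*.log$')
# Escaping the dot restricts the match to a literal ".log" extension.
strict = re.compile(r'(?i).*\.log$')

assert loose.search("export.log")
assert loose.search("export.xlog")       # unintended match: "x" satisfies the "."
assert strict.search("EXPORT.LOG")       # (?i) keeps it case-insensitive
assert not strict.search("export.xlog")  # literal ".log" required
```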

I have tried many things but nothing works.
Thanks !


mva
Explorer

I'm still trying to make it work. When my log files are modified, errors appear in my forwarder's log file:

01-04-2012 22:00:21.880 +0100 WARN TcpOutputFd - Connect to x.x.x.x:9997 failed. No connection could be made because the target machine actively refused it.
01-04-2012 22:00:37.521 +0100 ERROR TcpOutputFd - Connection to host=x.x.x.x:9997 failed
01-04-2012 22:03:06.650 +0100 INFO TcpOutputProc - Connected to idx=x.x.x.x:9997

Connections are allowed on port 9997, so I don't understand why the forwarder can't communicate with my Splunk server.

Does anyone have an idea?
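To help rule out the network, a minimal reachability probe (a hypothetical `port_reachable` helper, sketched in Python) can be run from the forwarder host when the errors occur. An "actively refused" message usually means nothing is listening on the port (e.g. the indexer's splunkd is down or restarting) or a firewall is resetting the connection:

```python
import socket

def port_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout.

    Hypothetical diagnostic helper; run it from the forwarder host against
    the indexer's receiving port.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (indexer address is a placeholder):
# port_reachable("x.x.x.x", 9997)
```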


mva
Explorer

Hi Splunk support and everyone,

Unfortunately, this is not working. I'm copying the .log files generated by my Oracle export and they are not indexed in Splunk, even though the log files carry a different date.

(I'm using xcopy to copy all the generated .log files)

There is nothing in my splunkd.log (on my forwarder) telling me that Splunk can't index those files.

Here is the content of my inputs.conf file :

[monitor://my_folder]
disabled = false
sourcetype = oracle
crcSalt = <SOURCE>
index = ora

[monitor://my_other_folder]
disabled = false
sourcetype = oracle_ref
crcSalt = <SOURCE>
index = ora_ref

When I write something into one of my log files, it is indexed.

Do you know why Splunk won't index my log files that are copied in every day? Is it because of xcopy (run from a .cmd script)?

Thanks in advance.
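One likely culprit, sketched below on the assumption that Splunk's monitor behaves as documented: the tailer decides whether a file is new by hashing its first 256 bytes (initCrcLength). Daily export logs that all start with the same banner therefore look like files it has already indexed, and since the copies land at the same paths every day, even crcSalt = <SOURCE> won't distinguish them. An illustrative fingerprint (using zlib.crc32; not Splunk's actual implementation):

```python
import zlib

def head_crc(data: bytes, salt: str = "", length: int = 256) -> int:
    """Sketch of how Splunk's file monitor fingerprints a file: a CRC over
    the first `length` bytes (initCrcLength defaults to 256), optionally
    combined with a crcSalt value."""
    return zlib.crc32(data[:length] + salt.encode())

# Two daily exports that begin with the same banner text look identical
# to the fingerprint, so the second is treated as "already indexed":
banner = b"Connected to: Oracle Database 10g Enterprise Edition " * 6  # > 256 bytes
file_a = banner + b"day one: 5 rows exported"
file_b = banner + b"day two: 7 rows exported"
assert head_crc(file_a) == head_crc(file_b)

# crcSalt = <SOURCE> mixes the file's path into the hash, so copies with
# DIFFERENT paths (or changing filenames) fingerprint differently:
assert head_crc(file_a, r"E:\day1\export.log") != head_crc(file_a, r"E:\day2\export.log")
```

This suggests giving each day's copy a unique filename (e.g. with a date stamp) so the salted fingerprint changes.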


mva
Explorer

I copy all the log files into another folder and index that new folder, but Splunk doesn't index them...

Every day, I delete all the log files in this destination folder and then copy the log files from my Oracle export into it. It seems that Splunk doesn't index them again, even though they have a different date and time.


_d_
Splunk Employee

Try the following in your props:

[Oracle]
LINE_BREAKER=((?!))
SHOULD_LINEMERGE=true
TRUNCATE=100000

Hope this helps.

> please upvote and accept answer if you find it useful - thanks!

mva
Explorer

Yes, I restarted my SplunkForwarder and splunkd services, but my logs are still in 2 parts. I have modified my scripts to copy all .log files into another folder at the end of my Oracle exports (using exp.exe), and I'm now trying to index only the folder the .log files are copied into.

I will come back to post my results.


mva
Explorer

Thanks for your answer, but I still have the same problem. My log file is still divided into 2 parts when exporting my Oracle user: the first with 25 lines, the second with 314.
The first part:

Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
;;;
Connecté à : Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the OLAP and Data Mining options
Estimation en cours à l'aide de la méthode BLOCKS ...
Traitement du type d'objet SCHEMA_EXPORT/TABLE/TABLE_DATA
Estimation totale à l'aide de la méthode BLOCKS : 88.25 MB
Traitement du type d'objet SCHEMA_EXPORT/USER
Traitement du type d'objet SCHEMA_EXPORT/SYSTEM_GRANT
Traitement du type d'objet SCHEMA_EXPORT/ROLE_GRANT
Traitement du type d'objet SCHEMA_EXPORT/DEFAULT_ROLE
Traitement du type d'objet SCHEMA_EXPORT/TABLESPACE_QUOTA
Traitement du type d'objet SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Traitement du type d'objet SCHEMA_EXPORT/SEQUENCE/SEQUENCE
Traitement du type d'objet SCHEMA_EXPORT/TABLE/TABLE
Traitement du type d'objet SCHEMA_EXPORT/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
Traitement du type d'objet SCHEMA_EXPORT/TABLE/INDEX/INDEX
Traitement du type d'objet SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Traitement du type d'objet SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Traitement du type d'objet SCHEMA_EXPORT/TABLE/COMMENT

And then the second event (1 minute after the first one):

Traitement du type d'objet SCHEMA_EXPORT/VIEW/VIEW
Traitement du type d'objet SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Traitement du type d'objet SCHEMA_EXPORT/TABLE/TRIGGER
Traitement du type d'objet SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
. . export : "CSSY_ATTRIBUTEINFO" 8.907 MB 44436 lignes
. . export : "CSSY_ATTRIBUTEINFOBAK" 8.907 MB 44436 lignes
. . export : "CSSY_PRINTJOB" 7.524 MB 65 lignes
. . export : "CSSY_DESCRIPTIONLOB" 2.144 MB 867 lignes
. . export : "CSSY_DESCRIPTION" 4.428 MB 25049 lignes
. . export : "CSPU_PRLINE" 1.774 MB 7506 lignes
... (the full event is 314 lines)

Any other solutions?


joonradley
Path Finder

It may be that Splunk indexes part of the file and then, on the next monitor cycle, indexes the rest. Try creating the file in another directory and then moving it into the monitored directory.

Also, did you restart Splunk after the change?


joonradley
Path Finder

Use the following to capture the entire log file as a single event:

[Oracle]
LINE_BREAKER = \Z
SHOULD_LINEMERGE = true
TRUNCATE = 0

Also try specifying a TIME_PREFIX and TIME_FORMAT here.

You may also want to add the following to your monitor stanza in inputs.conf:
crcSalt = <SOURCE>
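To illustrate why `LINE_BREAKER = \Z` keeps the file whole (assuming PCRE semantics, which Python's `re` mirrors for this anchor): `\Z` matches only at the very end of the buffer, so the only break point the line breaker can ever find is end-of-data. TRUNCATE = 0 then disables the event-length cap.

```python
import re

# \Z matches only at the very end of the buffer, so the sole candidate
# break point is end-of-data and the whole file remains one event.
end_only = re.compile(r'\Z')
sample = "line one\nline two\nline three"

positions = [m.start() for m in end_only.finditer(sample)]
assert positions == [len(sample)]
```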
