Getting Data In

Splunk forwarder transmits incomplete events for Oracle GoldenGate (GG) logs

kvallala
Explorer

Here is a section of my log. The lines in bold are not being written to Splunk. They are each on separate lines in the log, so I expect each of them to be indexed as an individual event.


Handler: "filewriter": filewriter(FileWriterHandler)


DEBUG 2020-05-14 14:32:19.000659 [pool-2-thread-1] - UserExitDataSource.getStatusReport
DEBUG 2020-05-14 14:32:19.000659 [pool-2-thread-1] - [104250] getStatusReport: Thu May 14 14:32:19 PDT 2020
INFO 2020-05-14 14:32:19.000659 [pool-2-thread-1] - Memory at Status : Max: 455.50 MB, Total: 84.50 MB, Free: 19.48 MB, Used: 65.02 MB
INFO 2020-05-14 14:32:19.000659 [pool-2-thread-1] - Status report: Thu May 14 14:32:19 PDT 2020


Status Report for UserExit

Total elapsed time: 36 days 4:44:30.013 [total = 3127470 sec = 52124 min = 868 hr ] => Total time since first event
Event processing time: 0:04:54.386 [total = 294 sec = 4 min ] => Time spent sending msgs (max: 131 ms)
Metadata process time: 0:00:00.001 [total = 1 ms ] => Time spent receiving metadata (3 tables, 93 columns)

Operations Received/Sent: 1584578 / 1584578
Rate (overall): 0 op/s (peak: 0 op/s)
(per event): 5389 op/s

Transactions Received/Sent: 37 / 37
Rate (overall): 0 tx/s (peak: 0 tx/s)
(per event): 0 tx/s

1584578 records processed as of Thu May 14 14:32:19 PDT 2020 (rate 0/sec, delta 0)


However, the log behaves like this: the entire section above is written to the log every 30 seconds, so each time I check, the line count increases by 31 (505362 - 505331 = 31):
cat qqqq.log|wc -l
505331
cat qqqq.log|wc -l
505362

I am not able to figure out why the log is only being written to Splunk partially. Please help.

1 Solution

to4kawa
Ultra Champion

props.conf

[GGLog]
# Disable line merging; rely on LINE_BREAKER alone to split events
SHOULD_LINEMERGE = false
# End each event at the 49-asterisk separator row; the next event starts at the DEBUG that follows
LINE_BREAKER = \*{49}([\r\n]+)DEBUG
NO_BINARY_CHECK = true
# 0 = never truncate long events
TRUNCATE = 0
disabled = false
pulldown_type = true

Hi @kvallala
How about this?
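A quick way to sanity-check that breaker against the raw file before deploying is a small script. This is a minimal sketch, assuming the qqqq.log from the question; the lookaround rewrite only approximates how Splunk applies LINE_BREAKER (text before the first capture group stays with the previous event, the captured newlines are discarded, and the next event starts at DEBUG):

import re

# Approximate the LINE_BREAKER \*{49}([\r\n]+)DEBUG: the previous event ends
# at the 49-asterisk row, the captured newlines are dropped, and the next
# event begins at the literal DEBUG that follows.
breaker = re.compile(r"(?<=\*{49})[\r\n]+(?=DEBUG)")

with open("qqqq.log") as f:
    events = breaker.split(f.read())

print(f"{len(events)} events found")
print("--- first event ---")
print(events[0])

If the split points land where you expect, the same regex should behave equivalently as LINE_BREAKER.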


kvallala
Explorer

@to4kawa thanks for the suggestion. I will try that and let you know. What if I wanted to treat the entire part from the Handler line to the * at the end as one single event? That entire section is written to the log at the same time.
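For reference, the LINE_BREAKER suggested above should already produce that grouping. A commented restatement of the same stanza (the comments describe standard LINE_BREAKER semantics, not anything specific to this log):

[GGLog]
SHOULD_LINEMERGE = false
# Text before the capture group (the closing 49-asterisk row) stays with the
# previous event, the captured newlines are discarded, and the next event
# begins at the DEBUG that follows; each event therefore runs from a DEBUG
# line through the Handler block's closing asterisk row.
LINE_BREAKER = \*{49}([\r\n]+)DEBUG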


kvallala
Explorer

This answer pointed me in the right direction.


to4kawa
Ultra Champion

Please provide sample logs. Meanwhile, my answer is updated.


kvallala
Explorer

Here is a sample log (you will see 3 sections, each ending with "Handler: "filewriter": filewriter(FileWriterHandler)"). Each section begins with blank lines before the first DEBUG line.
Each section is written to the log all at once, and that repeats every 30 seconds.
Along with sections in this particular format, I found the log contains other kinds of lines too. How can I ensure all of them get into Splunk?

DEBUG 2020-05-18 12:46:46.000365 [main] - == JNI == createColumnValue()
DEBUG 2020-05-18 12:46:46.000365 [main] - == JNI == createOperation(AP.AP_INVOICES_ALL, 15, 4909, 25284690, 4909, 25287278, 3, [B@1c70cf02, [B@246cc4b6)
DEBUG 2020-05-18 12:46:46.000365 [main] - create token, key='R' (R), value=AAEVU6AGUAAKBayAAP (AAEVU6AGUAAKBayAAP), isSet=HAS_VALUE (HAS_VALUE)
DEBUG 2020-05-18 12:46:46.000365 [main] - create token, key='TKN-COMMITTIMESTAMP' (TKN-COMMITTIMESTAMP), value=2020-05-18 12:46:25.000000 (2020-05-18 12:46:25.000000), isSet=HAS_VALUE (HAS_VALUE)
DEBUG 2020-05-18 12:46:46.000365 [main] - create token, key='TKN-FILESEQNO' (TKN-FILESEQNO), value= (), isSet=HAS_VALUE (HAS_VALUE)
DEBUG 2020-05-18 12:46:46.000365 [main] - create token, key='TKN-FILERBA' (TKN-FILERBA), value= (), isSet=HAS_VALUE (HAS_VALUE)
DEBUG 2020-05-18 12:46:47.000291 [pool-2-thread-1] - UserExitDataSource.getStatusReport
DEBUG 2020-05-18 12:46:47.000291 [pool-2-thread-1] -  [115511] getStatusReport: Mon May 18 12:46:47 PDT 2020
INFO  2020-05-18 12:46:47.000292 [pool-2-thread-1] - Memory at Status : Max: 455.50 MB, Total: 296.50 MB, Free: 170.68 MB, Used: 125.82 MB
INFO  2020-05-18 12:46:47.000292 [pool-2-thread-1] - Status report: Mon May 18 12:46:47 PDT 2020
*************************************************
    Status Report for UserExit
*************************************************

  Total elapsed time:        40 days 2:35:00.012 [total = 3465300 sec = 57755 min = 962 hr ]   => Total time since first event
     Event processing time:  1:35:35.979 [total = 5735 sec = 95 min = 1 hr ]   => Time spent sending msgs (max: 160 ms)
     Metadata process time:  0:00:00.005 [total = 5 ms ]   => Time spent receiving metadata (14 tables, 1978 columns)

  Operations Received/Sent:  25834852 / 25834852
     Rate (overall):         7 op/s    (peak: 2182 op/s)
          (per event):       4504 op/s

  Transactions Received/Sent: 61653 / 61653
     Rate (overall):         0 tx/s    (peak: 1 tx/s)
          (per event):       10 tx/s

 25834852 records processed as of Mon May 18 12:46:47 PDT 2020 (rate 12/sec, delta 367)

*************************************************

-------------------------------------------------
Handler: "filewriter": filewriter(FileWriterHandler)

*************************************************



DEBUG 2020-05-18 12:46:51.000167 [FutureTaskScheduler] - Scheduling Future Task for Callable oracle.goldengate.handler.filewriter.AvroFileWriterManager.FileRollTask
INFO  2020-05-18 12:46:51.000167 [FutureTaskScheduler] - New Thread Added to the pool, Name[TaskEngine_78920:132654]
DEBUG 2020-05-18 12:46:51.000167 [TaskEngine_78920] - In BeforeExecute for FileRollTask
DEBUG 2020-05-18 12:46:51.000168 [TaskEngine_78920(FileRollTask)] - Roll Task running
INFO  2020-05-18 12:46:51.000168 [TaskEngine_78920(FileRollTask)] - New Thread Added to the pool, Name[TaskEngine_78921:132655]
DEBUG 2020-05-18 12:46:51.000168 [TaskEngine_78920] - In AfterExecute for FileRollTask
DEBUG 2020-05-18 12:46:51.000168 [TaskEngine_78921] - In BeforeExecute for FileFinalizeTask
DEBUG 2020-05-18 12:46:51.000168 [TaskEngine_78920] - Execution Complete for oracle.goldengate.handler.filewriter.AvroFileWriterManager$FileRollTask, ElapsedTime[0.79772ms] CPU Time[374,353ns] Enqueue Time[0ms] Wait Count[0] Wait Time[0ms] Block Count[0] Block Time[0ms]
DEBUG 2020-05-18 12:46:52.000369 [TaskEngine_78920] - In AfterExecute for FileFinalizeTask
DEBUG 2020-05-18 12:46:52.000369 [TaskEngine_78920] - Execution Complete for oracle.goldengate.handler.filewriter.FileFinalizeManager$FileFinalizeTask, ElapsedTime[174.78134ms] CPU Time[223,899ns] Enqueue Time[0ms] Wait Count[1] Wait Time[174ms] Block Count[0] Block Time[0ms]
DEBUG 2020-05-18 12:47:17.000291 [pool-2-thread-1] - UserExitDataSource.getStatusReport
DEBUG 2020-05-18 12:47:17.000292 [pool-2-thread-1] -  [115512] getStatusReport: Mon May 18 12:47:17 PDT 2020
INFO  2020-05-18 12:47:17.000292 [pool-2-thread-1] - Memory at Status : Max: 455.50 MB, Total: 296.50 MB, Free: 165.63 MB, Used: 130.87 MB
INFO  2020-05-18 12:47:17.000292 [pool-2-thread-1] - Status report: Mon May 18 12:47:17 PDT 2020
*************************************************
    Status Report for UserExit
*************************************************

  Total elapsed time:        40 days 2:35:30.012 [total = 3465330 sec = 57755 min = 962 hr ]   => Total time since first event
     Event processing time:  1:35:35.979 [total = 5735 sec = 95 min = 1 hr ]   => Time spent sending msgs (max: 160 ms)
     Metadata process time:  0:00:00.005 [total = 5 ms ]   => Time spent receiving metadata (14 tables, 1978 columns)

  Operations Received/Sent:  25834852 / 25834852
     Rate (overall):         7 op/s    (peak: 2182 op/s)
          (per event):       4504 op/s

  Transactions Received/Sent: 61653 / 61653
     Rate (overall):         0 tx/s    (peak: 1 tx/s)
          (per event):       10 tx/s

 25834852 records processed as of Mon May 18 12:47:17 PDT 2020 (rate 0/sec, delta 0)

*************************************************

-------------------------------------------------
Handler: "filewriter": filewriter(FileWriterHandler)

*************************************************



INFO  2020-05-18 12:47:22.000195 [TaskEngine_78924] - Thread[TaskEngine_78924:132660] Removed from the pool. Tasks Executed[1] CPU Usage[0]
INFO  2020-05-18 12:47:22.000202 [TaskEngine_78921] - Thread[TaskEngine_78921:132655] Removed from the pool. Tasks Executed[2] CPU Usage[0]
INFO  2020-05-18 12:47:22.000224 [TaskEngine_78923] - Thread[TaskEngine_78923:132658] Removed from the pool. Tasks Executed[1] CPU Usage[0]
INFO  2020-05-18 12:47:22.000311 [TaskEngine_78922] - Thread[TaskEngine_78922:132657] Removed from the pool. Tasks Executed[2] CPU Usage[0]
INFO  2020-05-18 12:47:22.000369 [TaskEngine_78920] - Thread[TaskEngine_78920:132654] Removed from the pool. Tasks Executed[2] CPU Usage[0]
DEBUG 2020-05-18 12:47:47.000291 [pool-2-thread-1] - UserExitDataSource.getStatusReport
DEBUG 2020-05-18 12:47:47.000292 [pool-2-thread-1] -  [115513] getStatusReport: Mon May 18 12:47:47 PDT 2020
INFO  2020-05-18 12:47:47.000292 [pool-2-thread-1] - Memory at Status : Max: 455.50 MB, Total: 296.50 MB, Free: 165.60 MB, Used: 130.90 MB
INFO  2020-05-18 12:47:47.000292 [pool-2-thread-1] - Status report: Mon May 18 12:47:47 PDT 2020
*************************************************
    Status Report for UserExit
*************************************************

  Total elapsed time:        40 days 2:36:00.012 [total = 3465360 sec = 57756 min = 962 hr ]   => Total time since first event
     Event processing time:  1:35:35.979 [total = 5735 sec = 95 min = 1 hr ]   => Time spent sending msgs (max: 160 ms)
     Metadata process time:  0:00:00.005 [total = 5 ms ]   => Time spent receiving metadata (14 tables, 1978 columns)

  Operations Received/Sent:  25834852 / 25834852
     Rate (overall):         7 op/s    (peak: 2182 op/s)
          (per event):       4504 op/s

  Transactions Received/Sent: 61653 / 61653
     Rate (overall):         0 tx/s    (peak: 1 tx/s)
          (per event):       10 tx/s

 25834852 records processed as of Mon May 18 12:47:47 PDT 2020 (rate 0/sec, delta 0)

*************************************************

-------------------------------------------------
Handler: "filewriter": filewriter(FileWriterHandler)

*************************************************

kvallala
Explorer

As of now I don't have a dev environment, so it takes time to deploy props.conf to the indexer in the INT environment: the change has to go through an MR, and then I need to wait to see whether it really works.
I am looking at:
1. How to keep the entire section as one event; if that is not possible,
2. How to ensure each of the lines in a section is written to Splunk (this is not happening now; the lines in bold are missing).

Thanks so much for looking.


to4kawa
Ultra Champion

1. Yes, check my answer.
2. I see; you should create a transforms.conf, along the lines of the sketch below.
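One hedged sketch of what that pair might look like, assuming the goal is to route every event of this sourcetype to the indexing queue so nothing is silently dropped; the stanza name gg_keep_all is hypothetical, and whether this helps depends on why the lines are being dropped:

props.conf

[GGLog]
TRANSFORMS-keepall = gg_keep_all

transforms.conf

# Match any event (REGEX = . matches a single character anywhere)
# and send it to the indexing queue
[gg_keep_all]
REGEX = .
DEST_KEY = queue
FORMAT = indexQueue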

What's an MR? Splunk admin?


kvallala
Explorer

I mean a Migration Request: a code change that I need to submit to the enterprise Splunk admin.
Also, pardon the basic questions, as I am still a newbie to Splunk.
Thanks again; let me get back to you.


to4kawa
Ultra Champion

I'm an amateur, so please tell me:

  1. Does the Splunk admin create props.conf?
  2. Does the Splunk admin extract the fields?

kvallala
Explorer

Currently I am using my org's TEST environment with the default indexers, so they are shared by everyone who uses TEST. We need to submit a git change against the props.conf of the default indexer (app); the admins review it and push it to Splunk (probably via Jenkins).
The Splunk admin here only reviews and then accepts or rejects what we request in props.conf.


kvallala
Explorer

@to4kawa I have updated my sample log; can you please check again? My assumption that the log would only contain sections in that exact format was incorrect; it also has various other messages. How can I ensure I capture all of them?


richgalloway
SplunkTrust

What are the inputs.conf and props.conf settings?

---
If this reply helps you, Karma would be appreciated.

kvallala
Explorer

inputs.conf
[monitor:///ogg/dipc/dicloud/dirxxx/*.log]
sourcetype = GGLog
index = app

I don't have any updates to props.conf, so there is nothing for this sourcetype as of now.

I am looking at:
1. How to keep the entire section as one event; if that is not possible,
2. How to ensure each of the lines in a section is written to Splunk (this is not happening now; the lines in bold are missing).

Thanks so much for looking.


richgalloway
SplunkTrust

Every sourcetype should have props defined. Otherwise, you may get unpredictable results.
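A minimal baseline for this sourcetype might look like the following. This is a sketch rather than a tested config: the line-breaking settings repeat the accepted answer, and the timestamp settings are assumptions based on the posted samples (verify TIME_FORMAT against the actual subsecond encoding):

[GGLog]
SHOULD_LINEMERGE = false
LINE_BREAKER = \*{49}([\r\n]+)DEBUG
TRUNCATE = 0
# Timestamps follow the level token, e.g. "DEBUG 2020-05-18 12:46:46.000365"
TIME_PREFIX = ^(?:DEBUG|INFO|WARN|ERROR)\s+
TIME_FORMAT = %Y-%m-%d %H:%M:%S.%6N
MAX_TIMESTAMP_LOOKAHEAD = 30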

---
If this reply helps you, Karma would be appreciated.

kvallala
Explorer

Thanks, yes, that's what seems to be happening. However, my log has various types of events; it's not something I am creating, it's an application log.
