I've searched for an answer to this but cannot see one, so apologies if this has been answered before.
I am using DB Connect 2 to pull large volumes of data (about 60,000 events every 30 minutes from a single database) from a variety of Oracle databases into indexes. I have noticed that not all events are indexed, yet when I check the Health tab in DB Connect everything looks OK.
1. When I run the query preview on the DB Connect Operations tab, I can verify that the data is all there on the database side, but not all of it is making it into the index.
2. When I use dbxquery in a search, all events from the database are returned as expected (the rough searches I used for this and the next point are after this list).
3. I checked the _internal index and did not find any errors.
4. Decreasing the "Fetch Size" parameter (from 5000 to 800, then to 300) seems to reduce the number of missed events, but still not all of the data is indexed.
5. The indexer often runs out of free swap space, even though there is free RAM. Could this be the problem?
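Roughly the searches behind points 2 and 3, in case they help (the connection and table names here are placeholders, not the real ones, and I think this is the dbxquery syntax on DB Connect 2):

Index side, matching the sysdate-1/24 window of the input query:

index=cft_docum earliest=-60m@m latest=now
| stats count AS indexed_events

Database side via DB Connect:

| dbxquery connection="<my_oracle_connection>" query="SELECT COUNT(*) AS db_events FROM <my_table> t1 WHERE t1.TIME > sysdate-1/24"

Check of _internal for DB Connect errors (I am guessing the DB Connect logs show up under source=*dbx*):

index=_internal source=*dbx* (ERROR OR WARN)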
Any pointers on where I can look to troubleshoot would be appreciated.
We have 1 indexer and several search heads:
Splunk Enterprise Server 6.5.2
Linux, 47.1 GB Physical Memory, 12 CPU Cores
connection = ...
enable_query_wrapping = 1
index = cft_docum
input_timestamp_column_fullname = (001) NULL.TIME.TIMESTAMP
input_timestamp_column_name = TIME
interval = 60
max_rows = 5000000
mode = tail
output_timestamp_format = yyyy-MM-dd HH:mm:ss
query = SELECT /*+ opt_param('db_file_multiblock_read_count',1) */ ...
WHERE t1.TIME > sysdate-1/24\
AND t1.STATE_ID = t2.ID\
AND t1.CLASS_ID = t2.CLASS_ID\
AND t1.OBJ_ID = t3.ID\
AND t1.OBJ_ID = t4.ID
source = //...
sourcetype = ...
tail_rising_column_fullname = (001) NULL.TIME.TIMESTAMP
tail_rising_column_name = TIME
ui_query_catalog = NULL
ui_query_mode = advanced
disabled = 0
auto_disable = false
tail_rising_column_checkpoint_value = 1563454881499
fetch_size = 800
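For what it's worth, I also compared the rising column checkpoint with the newest indexed event, roughly like this (assuming the checkpoint value stored by DB Connect is epoch milliseconds, which is how it looks to me):

index=cft_docum
| stats max(_time) AS latest_indexed
| eval latest_indexed=strftime(latest_indexed, "%Y-%m-%d %H:%M:%S"), checkpoint=strftime(1563454881499/1000, "%Y-%m-%d %H:%M:%S")

If the checkpoint is well ahead of the newest indexed event, that would at least confirm rows are being read but not indexed.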
Hi all, We increased the RAM and CPU capacity on the indexer and the search head. Since then we have had a problem on these servers: free disk space on the search head drops dramatically (to 0% within a few minutes), and free swap space on the indexer drops dramatically as well. After rebooting the servers the problem temporarily disappears, but it comes back within a few days. There was no such problem before we increased the RAM and CPU capacity, and we did not find anything in the Splunk logs. Could this problem be related to the increased RAM and CPU capacity? Where else should we look for the cause? Could anyone comment on this?
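Is the introspection data the right place to catch this as it happens? Something like the following is what I had in mind (host names are placeholders, and I am assuming the usual Hostwide and DiskObjects introspection components are being collected):

Swap and memory on the indexer:

index=_introspection host=<indexer_host> component=Hostwide
| timechart span=5m avg(data.swap_used) AS swap_used avg(data.mem_used) AS mem_used

Free disk space on the search head:

index=_introspection host=<search_head_host> component=DiskObjects
| timechart span=5m min(data.available) AS available by data.mount_point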
We have 1 indexer and several search heads: Splunk Enterprise Server 6.5.2, Linux, 47.1 GB Physical Memory, 12 CPU Cores