All Apps and Add-ons

Splunk DB Connect connection to Hana

dineshraj9
Builder

Has anyone configured Splunk DB Connect app to pull data from SAP Hana DB using ngdbc.jar?

Do we need to add a new db connection type in db_connection_types.conf?

Thanks!

1 Solution

djackson_splunk
Splunk Employee

SAP provides the JDBC client, ngdbc.jar, which is found in the HANA HDB Client package.
Put that driver in: $SPLUNK_HOME/etc/apps/splunk_app_db_connect/drivers

You will need to manually create a db_connection_types.conf in:
$SPLUNK_HOME/etc/apps/splunk_app_db_connect/local

In this conf file, you need the following (the angle-bracket placeholders are filled in by DB Connect from the connection settings):

[hana]
displayName = HANA
jdbcUrlFormat = jdbc:sap://<host>:<port>/<database>
jdbcDriverClass = com.sap.db.jdbc.Driver
supportedVersions = 2.0

Also be aware, when setting up the connection, that the port is 3xx15, where xx is the HANA instance number. For example, if the instance is 90, the port is 39015. The database is the HANA SID.
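The port arithmetic above can be sketched as a small helper. This is purely illustrative, not part of DB Connect; `hana_port` is a hypothetical name, and the default service suffix 15 matches the 3xx15 pattern described here:

```python
def hana_port(instance: int, service: int = 15) -> int:
    """Build a HANA port of the form 3<instance><service>, each zero-padded.

    Example: instance 90 with the default service suffix 15 -> 39015.
    """
    return int(f"3{instance:02d}{service:02d}")

print(hana_port(90))  # 39015
print(hana_port(0))   # 30015
```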

Here is an example of a connection for HANA...

[screenshot: HANA connection configuration]

I've tested this with HANA Express Edition 2.0, Splunk 7.0.0, and Splunk DB Connect 3.1.1.

Here is a successful connection...

[screenshot: successful HANA connection]


martingutmann
New Member

Hi

I get the same error:

2018-03-20 11:04:50.927 +0100 [QuartzScheduler_Worker-5] ERROR org.easybatch.core.job.BatchJob - Unable to open record reader
java.lang.NullPointerException: null
at com.splunk.dbx.server.dbinput.recordreader.DbInputRecordReader.isOracleInput(DbInputRecordReader.java:113)
at com.splunk.dbx.server.dbinput.recordreader.DbInputRecordReader.getRowProcessor(DbInputRecordReader.java:107)
at com.splunk.dbx.server.dbinput.recordreader.DbInputRecordReader.open(DbInputRecordReader.java:91)
at org.easybatch.core.job.BatchJob.openReader(BatchJob.java:117)
at org.easybatch.core.job.BatchJob.call(BatchJob.java:74)
at org.easybatch.extensions.quartz.Job.execute(Job.java:59)
at org.quartz.core.JobRunShell.run(JobRunShell.java:202)
at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:573)

When I configure the DB Input I can pull the data in the wizard. (It's slow but it works)

I had to configure HANA in the default folder. Otherwise it would never show up as a connection type.

Any ideas?

0 Karma

martingutmann
New Member

Same problem here. Interestingly, creating the DB Input task works, but when the job is executed I get connection refused.

0 Karma

jcoates
Communicator

If you can create an input successfully but it then can't run, there are two possibilities I can think of:

  1. You're running in a different Splunk context than the one you created the input in, and can't access the DB Connect artifacts needed.
  2. The database is returning something different in production than it returned in testing, causing DB Connect or Splunk to reject the data instead of indexing it.
0 Karma

martingutmann
New Member

Hi
Thanks for your suggestions.

  • I gave the connection full read & write permission. I tried both Splunk DB Connect and Search & Reporting as the app.

  • This is a simple setup on my laptop. Both Splunk (a local installation on Windows) and HANA (running in VMware) are on the same machine.

  • I tried limiting the number of rows and columns to 1x10. Does not help.

  • I am a bit confused about the isOracleInput in the Java error log.

0 Karma


jbhojak
Loves-to-Learn Lots

Hey @djackson_splunk, @jcoates,

I am also trying to connect to the SAP database via the DB Connect app (version 3.4.2). I came across your solution, but it's not working for me.

As per your answer, I placed the ngdbc.jar file in the $SPLUNK_HOME/etc/apps/splunk_app_db_connect/drivers directory

and created db_connection_types.conf in the $SPLUNK_HOME/etc/apps/splunk_app_db_connect/local directory.

Below is my config from the db_connection_types.conf file.

displayName = SAP HANA
serviceClass = com.splunk.dbx2.DefaultDBX2JDBC
jdbcUrlFormat = jdbc:sap://:/
jdbcDriverClass = com.sap.cloud.db.jdbc
database = XXXXXXX
port = 41608

I see the connection is up in the UI, but it's not picking up the driver (note: the driver is present in the directory).

Please help me with this issue or let me know where I am going wrong with the above configuration. Your help is appreciated.

Links I referred - 

0 Karma

syedabdulkather
New Member

@djackson I have installed the ngdbc.jar driver as mentioned, and Splunk has picked up the driver.

I am able to retrieve results using SQL Explorer.

2018-07-30 04:16:14.901 -0700 INFO  c.s.dbx.server.task.listeners.JobMetricsListener - action=collect_job_metrics connection=HANA1400 jdbc_url=jdbc:sap://hostname:30015/tenant status=FAILED input_name=MYINPUT_NAME batch_size=1000 error_threshold=N/A is_jmx_monitoring=false start_time=2018-07-30_04:15:16 end_time=2018-07-30_04:16:14 duration=58477 read_count=0 write_count=0 filtered_count=0 error_count=0

However, the scheduled data input is failing with status=FAILED.

I tried enabling debug logs, but the logging is always INFO. Is there something I am missing?

0 Karma

rajeshmeea21
Explorer

Hello,

We are getting the following error while connecting to the HANA database using the Splunk DB Connect app. Any suggestions?

2018-01-23 11:20:52.412 +1100  [QuartzScheduler_Worker-6] INFO  org.easybatch.core.job.BatchJob - Job 'HanaInput' starting
2018-01-23 11:20:52.412 +1100  [QuartzScheduler_Worker-6] INFO  org.easybatch.core.job.BatchJob - Batch size: 1,000
2018-01-23 11:20:52.412 +1100  [QuartzScheduler_Worker-6] INFO  org.easybatch.core.job.BatchJob - Error threshold: N/A
2018-01-23 11:20:52.412 +1100  [QuartzScheduler_Worker-6] INFO  org.easybatch.core.job.BatchJob - Jmx monitoring: false
2018-01-23 11:20:52.413 +1100  [QuartzScheduler_Worker-6] INFO  c.s.d.s.dbinput.recordreader.DbInputRecordReader - action=db_input_record_reader_is_opened input_task="HanaInput" query=SELECT key, value FROM sys.m_host_information;
2018-01-23 11:20:52.418 +1100  [QuartzScheduler_Worker-6] ERROR org.easybatch.core.job.BatchJob - Unable to open record reader
java.lang.NullPointerException: null
    at com.splunk.dbx.server.dbinput.recordreader.DbInputRecordReader.isOracleInput(DbInputRecordReader.java:111)
    at com.splunk.dbx.server.dbinput.recordreader.DbInputRecordReader.getRowProcessor(DbInputRecordReader.java:105)
    at com.splunk.dbx.server.dbinput.recordreader.DbInputRecordReader.open(DbInputRecordReader.java:89)
    at org.easybatch.core.job.BatchJob.openReader(BatchJob.java:117)
    at org.easybatch.core.job.BatchJob.call(BatchJob.java:74)
    at org.easybatch.extensions.quartz.Job.execute(Job.java:59)
    at org.quartz.core.JobRunShell.run(JobRunShell.java:202)
    at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:573)
2018-01-23 11:20:52.418 +1100  [QuartzScheduler_Worker-6] INFO  org.easybatch.core.job.BatchJob - Job 'HanaInput' finished with status: FAILED

Thanks in advance.

0 Karma

jcoates
Communicator

2018-01-23 11:20:52.418 +1100 [QuartzScheduler_Worker-6] ERROR org.easybatch.core.job.BatchJob - Unable to open record reader

This looks like it's unable to communicate with the database. Did you follow djackson's instructions on this page? If so, I would check the database permissions for the service account you've selected.
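Before digging into database permissions, it can also help to confirm that the HANA SQL port is reachable from the Splunk host at all. A minimal sketch using only the Python standard library (the `can_connect` helper and the commented host/port values are illustrative, not part of DB Connect):

```python
import socket

def can_connect(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: check a hypothetical HANA indexserver port (3xx15 pattern).
# print(can_connect("hana.example.com", 39015))
```

If this returns False while the DB Connect wizard works, the scheduled job is likely running in a different network or host context than the one the wizard tested from.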

0 Karma

