Splunk Search

How to get the sum of a column

New Member

Hello,

I am new to Splunk and I am trying to get the sum of a column.

I run the following SQL queries on the database:
SELECT count(distinct successTransaction) FROM testDB.TranTable; -- returns 11, which is correct.
SELECT sum(successTransaction) FROM testDB.TranTable; -- returns 64152, which is correct.

I have made a MySQL connection using Splunk DB Connect.
I run the following searches in Splunk:
index=mysql_testDB_index | stats distinct_count(successTransaction) -- returns 11, which is correct.
index=mysql_testDB_index | stats sum(successTransaction) as SuccessTransaction -- returns 12253032, which is wrong.

Thanks for the help.

1 Solution

Builder

Hello,

When you use SELECT sum(successTransaction) FROM testDB.TranTable; the result is 64152 (correct).

When you use index=mysql_testDB_index | stats sum(successTransaction) as SuccessTransaction the result is 12253032 (wrong).

Mathematically: 12253032 / 64152 = 191.

The problem is that you have indexed the same events several times. DB Connect will do that when the input is configured to run on a cron schedule without any duplicate protection.

The solution is here:

https://answers.splunk.com/answers/73554/avoid-duplicate-events-using-splunk-db-connect.html

Happy Splunk
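If duplicate indexing is the cause, removing the duplicates should restore the expected sum. A sketch, assuming the table has a unique key column (here called id, as in the rising-column setup discussed later in the thread) that is indexed as a field:

index=mysql_testDB_index
| dedup id
| stats sum(successTransaction) AS SuccessTransaction

This only works around the symptom at search time; fixing the DB Connect input as described in the linked answer is the real solution.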



New Member

Thanks. There is also the dbxquery command, if you want to run the SQL directly.


Hi,

First, check whether the number of events in Splunk matches the number of rows in the database.
Check which input type you chose when setting up DB Connect: Batch or Rising.
If you are using Batch, the complete data set from your database is loaded into Splunk on every run (so you can get huge duplication).
If you are using Rising, you can set a rising column and a checkpoint, so already-indexed data won't be loaded into Splunk again.
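A quick way to see which case you are in (illustrative search; the index and field names are taken from the question above):

index=mysql_testDB_index
| stats count AS indexed_events dc(successTransaction) AS distinct_values

If indexed_events is many times larger than the number of rows in the table, the Batch input has loaded the same data repeatedly.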


New Member

Thanks for the clarification on input types. There is duplicate data in my Splunk index.


Contributor

Does the count of events on the SQL side match the count on the Splunk side?


New Member

How do I check the event count on the SQL side? I don't have a timestamp column.
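Without a timestamp column you can still compare plain row counts on both sides. On the database:

SELECT count(*) FROM testDB.TranTable;

and in Splunk:

index=mysql_testDB_index | stats count

If the Splunk count is an exact multiple of the SQL count, the same rows have been indexed more than once.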


SplunkTrust

I think addcoltotals is what you're looking for:

index=mysql_testDB_index 
| stats dc(successTransaction) AS successTransaction
| addcoltotals 

http://docs.splunk.com/Documentation/Splunk/7.0.3/SearchReference/Addcoltotals


New Member

Hi, I have now created a new SQL connection and set the input type to rising column (id).

I run the following SQL query on the database:
select sum(successDimension1), sum(successDimension2), sum(successDimension3) from testDB.TranTable
-- 64152 123787 2258371, which is correct
I run the following search in Splunk:
index=mysql_testDB_index | dedup successDimension1 successDimension2 successDimension3 | stats sum(successDimension1) AS successDimension1, sum(successDimension2) as successDimension2, sum(successDimension3) as successDimension3
-- 64152 123787 2241023: sum(successDimension3) is not coming out right.
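One likely reason: dedup on the three value columns also drops rows where the same combination of values legitimately repeats in the table, so real data is discarded along with the duplicates. Since this input now has a unique rising column (id), deduplicating on that key is safer (sketch, assuming id is indexed as a field):

index=mysql_testDB_index
| dedup id
| stats sum(successDimension1) AS successDimension1 sum(successDimension2) AS successDimension2 sum(successDimension3) AS successDimension3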


New Member

I have run this query; it shows 2 records out of 11.


Champion

Are you dumping the full data set, or using a rising column for the DB Connect input? Can you share the input conf?



New Member

I first created the identity, connection, index, and input (in the Splunk DB Connect app). I have used both input types, Batch and Rising, and I am still facing the issue. The data is not changing right now.



New Member

I first created an identity, then created an index, made the SQL connection, and selected input type Batch.
This is my inputs.conf file under C:\Program Files\Splunk\etc\system\default:

# Version 7.0.3
#
# DO NOT EDIT THIS FILE!
# Changes to default files will be lost on update and are difficult to
# manage and support.
# Please make any changes to system defaults by overriding them in
# apps or $SPLUNK_HOME/etc/system/local
# (See "Configuration file precedence" in the web documentation).
# To override a specific setting, copy the name of the stanza and
# setting to the file where you wish to override it.
#
# This file contains possible attributes and values you can use to
# configure inputs, distributed inputs and file system monitoring.

[default]
index = default
_rcvbuf = 1572864
host = $decideOnStartup
evt_resolve_ad_obj = 0
evt_dc_name=
evt_dns_name=

[blacklist:$SPLUNK_HOME\etc\auth]

[monitor://$SPLUNK_HOME\var\log\splunk]
index = _internal

[monitor://$SPLUNK_HOME\var\log\splunk\license_usage_summary.log]
index = _telemetry

[monitor://$SPLUNK_HOME\etc\splunk.version]
_TCP_ROUTING = *
index = _internal
sourcetype=splunk_version

[batch://$SPLUNK_HOME\var\spool\splunk]
move_policy = sinkhole
crcSalt =

[batch://$SPLUNK_HOME\var\spool\splunk...stash_new]
queue = stashparsing
sourcetype = stash_new
move_policy = sinkhole
crcSalt =

[fschange:$SPLUNK_HOME\etc]

# poll every 10 minutes

pollPeriod = 600

# generate audit events into the audit index, instead of fschange events

signedaudit=true
recurse=true
followLinks=false
hashMaxSize=-1
fullEvent=false
sendEventMaxSize=-1
filesPerDelay = 10
delayInMills = 100

[udp]
connection_host=ip

[tcp]
acceptFrom=*
connection_host=dns

[splunktcp]
route=has_key:_replicationBucketUUID:replicationQueue;has_key:_dstrx:typingQueue;has_key:_linebreaker:indexQueue;absent_key:_linebreaker:parsingQueue
acceptFrom=*
connection_host=ip

[script]
interval = 60.0
start_by_shell = false

[SSL]

# SSL settings
# The following provides modern TLS configuration that guarantees forward-
# secrecy and efficiency. This configuration drops support for old Splunk
# versions (Splunk 5.x and earlier).
# To add support for Splunk 5.x set sslVersions to tls and add this to the
# end of cipherSuite:
#   DHE-RSA-AES256-SHA:AES256-SHA:DHE-RSA-AES128-SHA:AES128-SHA
# and this, in case Diffie Hellman is not configured:
#   AES256-SHA:AES128-SHA

sslVersions = tls1.2
cipherSuite = ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256
ecdhCurves = prime256v1, secp384r1, secp521r1

allowSslRenegotiation = true
sslQuietShutdown = false

[script://$SPLUNK_HOME\bin\scripts\splunk-wmi.path]
disabled = 0
interval = 10000000
source = wmi
sourcetype = wmi
queue = winparsing
persistentQueueSize=200MB

# default single instance modular input restarts

[admon]
interval=60
baseline=0

[MonitorNoHandle]
interval=60

[WinEventLog]
interval=60
evt_resolve_ad_obj = 0
evt_dc_name=
evt_dns_name=

[WinNetMon]
interval=60

[WinPrintMon]
interval=60

[WinRegMon]
interval=60
baseline=0

[perfmon]
interval=300

[powershell]
interval=60

[powershell2]
interval=60
