All Topics


Hi Everyone, Our environment consists of an indexer cluster and independent SHs. ES runs on a single SH. We are seeing memory usage spikes on the indexers at certain times of the day/night. There is no consistency or pattern to this. Resource usage drops after a few hours, usually without much intervention. Sometimes a peer is considered "down" when there is excessive memory and CPU usage on that peer. When this happens, the cluster tries to recover, which causes a lot of unnecessary bucket fixup. We have not upgraded the servers recently or updated ES. I can provide more details based on your questions. Here are a few observations:

1. When memory spikes on the indexers, there are multiple executions of the datamodel accelerations running at the same instant (referring to _time) - a count of 2 or 3. Max concurrency for datamodels is set to 3. At other times (when memory usage is low), only 1 execution is seen.

2. On some days, search concurrency in the cluster was too high (over 200). I am working on reducing the number of concurrent searches allowed on the SH and available to scheduled searches. But this is also not consistent; for example, on days when we did not have that many concurrent users or searches in the environment, we still had high memory usage across the indexers.

Any help or insight would be appreciated. We are working with support as well, but it's unclear why the datamodels suddenly push the indexers to use over 80% of memory. Our machines are over-provisioned for the most part. For example, an acceleration that normally takes less than 3 GB would suddenly take over 5 GB or 9 GB of memory.

Thanks!
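To confirm whether the memory spikes line up with acceleration searches, one option is to chart acceleration-search concurrency from the scheduler logs (a sketch; it assumes default internal logging, and relies on the usual `_ACCELERATE_*` naming convention for datamodel acceleration jobs):

```
index=_internal sourcetype=scheduler savedsearch_name="_ACCELERATE_*"
| timechart span=5m count by savedsearch_name
```

Overlaying this against per-indexer memory from `index=_introspection` over the same window should show whether the two actually correlate.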
Hello All! I am trying to parse McAfee firewall logs, but the props.conf I am using doesn't seem to work. This is my props.conf:

[source::<source location>]
TIME_PREFIX = Time:\s+
TIME_FORMAT = %m/%d/%Y %H:%M:%S
LINE_BREAKER = ([\r\n]+)Time:
SHOULD_LINEMERGE = false

This is the log that I want to break at each timestamp line:

Time: 03/04/2021 17:31:18
Event: Traffic
IP Address: 172.16.0.21
Description:
Path:
Message: Blocked Incoming UDP - Source 172.16.0.21 : (54915) Destination 172.16.0.255 : (54915)
Matched Rule: Block all traffic
Time: 03/04/2021 17:31:19
Event: Traffic
IP Address: 172.16.0.21
Description:
Path:
Message: Blocked Incoming UDP - Source 172.16.0.21 : (54915) Destination 172.16.0.255 : (54915)
Matched Rule: Block all traffic
Time: 03/04/2021 17:31:20
Event: Traffic
IP Address: 172.16.0.21
Description:
Path:
Message: Blocked Incoming UDP - Source 172.16.0.21 : (54915) Destination 172.16.0.255 : (54915)
Matched Rule: Block all traffic
Time: 03/04/2021 17:31:21
Event: Traffic
IP Address: 172.16.0.21
Description:
Path:
Message: Blocked Incoming UDP - Source 172.16.0.21 : (54915) Destination 172.16.0.255 : (54915)
Matched Rule: Block all traffic
Time: 03/04/2021 17:31:22
Event: Traffic
IP Address: 172.16.0.21
Description:
Path:
Message: Blocked Incoming UDP - Source 172.16.0.21 : (54915) Destination 172.16.0.255 : (54915)
Matched Rule: Block all traffic
...

Thank you so much!
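A variant worth trying (a sketch; the sourcetype name here is made up, and the settings only take effect on the first full Splunk instance that parses the data - an indexer or heavy forwarder, not a universal forwarder):

```
[mcafee:firewall]
TIME_PREFIX = Time:\s+
TIME_FORMAT = %m/%d/%Y %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 25
LINE_BREAKER = ([\r\n]+)Time:\s+\d{2}/\d{2}/\d{4}
SHOULD_LINEMERGE = false
```

Anchoring LINE_BREAKER on the date pattern avoids breaking on any "Time:" string that might appear inside a Message field, and `MAX_TIMESTAMP_LOOKAHEAD` keeps timestamp parsing from scanning past the first line.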
Hi Everyone, I need to set an alert for when more than 10 API calls have a node response time beyond the threshold (5000 ms). Below is my search:

index=abc ns=gateway (nodeUrl ="*") Trace_Id=* "*"
| stats count by Trace_Id Span_Id ns app_name Log_Time caller nodeUrl nodeHttpStatus nodeResponseTime
| rename caller as "Caller"
| rename nodeUrl as "Node"
| rename nodeHttpStatus as "NodeHttpStatus"
| rename nodeResponseTime as "NodeResponseTime"
| fields - count
| replace "https://datagraphaccountnode/graphql" with "Account"
| where NodeResponseTime > 5000

Can someone guide me on how to set the condition in the search so that it triggers when more than 10 events have NodeResponseTime greater than 5000 ms? Thanks in advance.
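One possible way to express the condition is to count the slow calls after the existing filter and gate the alert on that count (a sketch; it assumes the fields from the search above and an alert trigger of "number of results > 0"):

```
... | where NodeResponseTime > 5000
| stats dc(Trace_Id) as slowCalls
| where slowCalls > 10
```

`dc(Trace_Id)` counts distinct slow calls; if repeated events for the same Trace_Id should each count, `count` would replace `dc(Trace_Id)`.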
Hi, I'm trying to build a Splunk query to calculate error rate breaches - essentially, how often in 5-minute intervals we surpass our error rate threshold. I'm having difficulty getting my query to work and wanted to get assistance. The issue is that I'm unable to get the actual count of the logs that match "tags{}"=error; this always returns 0 for me. What's odd is that this is a valid field on the log object and can be queried for, so I'm not sure why the count isn't being captured. Context: tags is an array of strings, so for example, tags: [info, error, metric, ...]. This is my query:

index=my-index
| bucket _time span=5m
| stats count(eval("tags{}"=error)) as errorCount, count as total by _time
| eval errorRate = (errorCount/total)
| eval breaches=if(errorRate > .05, 1, 0)
| stats sum(breaches) as breachCount, count(total) as totalSamples
| eval HowOftenWeHitOurTarget=100*(1-(breachCount/totalSamples))
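Inside `eval`, field names need single quotes and string literals need double quotes, so `"tags{}"=error` compares the literal string "tags{}" against a field named `error`, which never matches. For a multivalue field, `mvfind` is also more reliable than `=`. A sketch of a corrected stats line (rest of the query unchanged):

```
| stats count(eval(isnotnull(mvfind('tags{}', "^error$")))) as errorCount, count as total by _time
```

`mvfind` returns the index of the first matching value (or null if none), so `isnotnull(...)` flags events whose tags array contains "error".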
I have the below JSON feed that I can see from a straight search. I'm trying to get some stats, especially pools availabilityState by name and pools status.statusReason by name. I tried spath but it did not help. How do I do this? TIA

{
  clientSslProfiles: { ... }
  deviceGroups: { ... }
  httpProfiles: { ... }
  iRules: { ... }
  ltmPolicies: { ... }
  networkTunnels: { ... }
  pools: {
    /Common/Ex-pool: {
      activeMemberCnt: 0
      availabilityState: offline
      curPriogrp: 0
      enabledState: enabled
      highestPriogrp: 0
      lowestPriogrp: 0
      members: { ... }
      mr.msgIn: 0
      mr.msgOut: 0
      mr.reqIn: 0
      mr.reqOut: 0
      mr.respIn: 0
      mr.respOut: 0
      name: /Common/Ex-pool
      serverside.bitsIn: 0
      serverside.bitsOut: 0
      serverside.curConns: 0
      serverside.maxConns: 0
      serverside.pktsIn: 0
      serverside.pktsOut: 0
      serverside.totConns: 0
      status.statusReason: The children pool member(s) are down
      tenant: Common
      totRequests: 0
    }
    /Common/F2F3: {
      activeMemberCnt: 1
      availabilityState: available
      curPriogrp: 0
      enabledState: enabled
      highestPriogrp: 0
      lowestPriogrp: 0
      members: { ... }
      mr.msgIn: 0
      mr.msgOut: 0
      mr.reqIn: 0
      mr.reqOut: 0
      mr.respIn: 0
      mr.respOut: 0
      name: /Common/F2F3
      serverside.bitsIn: 0
      serverside.bitsOut: 0
      serverside.curConns: 0
      serverside.maxConns: 0
      serverside.pktsIn: 0
      serverside.pktsOut: 0
      serverside.totConns: 0
      status.statusReason: The pool is available
      tenant: Common
      totRequests: 0
    }
    /Common/2F2F: {
      activeMemberCnt: 1
      availabilityState: available
      curPriogrp: 0
      description:
      enabledState: enabled
      highestPriogrp: 0
      lowestPriogrp: 0
      members: { ... }
      mr.msgIn: 0
      mr.msgOut: 0
      mr.reqIn: 0
      mr.reqOut: 0
      mr.respIn: 0
      mr.respOut: 0
      name: /Common/2F2F
      serverside.bitsIn: 0
      serverside.bitsOut: 0
      serverside.curConns: 0
      serverside.maxConns: 0
      serverside.pktsIn: 0
      serverside.pktsOut: 0
      serverside.totConns: 0
      status.statusReason: The pool is available
      tenant: Common
      totRequests: 0
    }
  }
  serverSslProfiles: { ... }
  sslCerts: { ... }
  system: { ... }
  telemetryEventCategory: systemInfo
  telemetryServiceInfo: { ... }
  virtualServers: { ... }
}
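Since the pool objects are keyed by pool name, spath's `{}` wildcard (which is for arrays) doesn't reach them directly. One workaround is to pull the parallel value lists out of the raw event and zip them together (a sketch; the index name is a placeholder, and the regexes assume the pretty-printed `key: value` text shown above - raw JSON events would need the quoting adjusted):

```
index=f5_telemetry telemetryEventCategory=systemInfo
| rex max_match=0 "name:\s+(?<pool>\S+)"
| rex max_match=0 "availabilityState:\s+(?<availability>\w+)"
| rex max_match=0 "status\.statusReason:\s+(?<reason>[^\r\n]+)"
| eval pair=mvzip(mvzip(pool, availability, "|"), reason, "|")
| mvexpand pair
| eval pool=mvindex(split(pair,"|"),0), availability=mvindex(split(pair,"|"),1), reason=mvindex(split(pair,"|"),2)
| stats latest(availability) as availabilityState, latest(reason) as statusReason by pool
```

This relies on name, availabilityState, and status.statusReason appearing once per pool and in the same order, which holds for the sample above.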
Hi Everyone, I have one panel which consists of data like below:

_raw: 2021-03-04 04:27:13,219 INFO [Server-296] on.c.s.StandardProcessScheduler Disabling StandardControllerServiceNode versionedComponentId=null, processGroup=StandardProcessGroup
host: abc.phx.xcp.com

_raw: 2021-03-04 04:27:13,219 INFO [Server-296] on.c.s.StandardProcessScheduler Disabling StandardControllerServiceNode versionedComponentId=null, processGroup=StandardProcessGroup
host: abc.phx.vpp.com

The issue I am facing is that I want to remove duplicates on the basis of host. I used dedup, but then too many rows were removed and it's not giving me the correct value. Can someone guide me on how I can remove the duplicates? Below is my query:

<query>index=abc sourcetype=xyz source="app.log" info $process_tok1$ | rex field=_raw "(?&lt;id&gt;[A_Za-z0-9]{8}[\-][A_Za-z0-9]{4}[\-][A_Za-z0-9]{4}[\-][A_Za-z0-9]{4}[\-][A_Za-z0-9]{12})" | join type=outer id [inputlookup nifi_api_parent_e1.csv]|search $ckey$|eval ClickHere=url|rex field=url mode=sed "s/\\/\\//\\//g s/https:/https:\\//g"|dedup host | table _time _raw host id parent_chain url</query>
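If the goal is one row per host/id combination rather than one row per host overall, `dedup` accepts multiple fields; `dedup host` alone keeps only the single most recent event per host, which may be why rows disappear. A minimal sketch of the idea, isolated from the dashboard tokens:

```
index=abc sourcetype=xyz source="app.log" info
| rex field=_raw "(?<id>[A-Za-z0-9]{8}-[A-Za-z0-9]{4}-[A-Za-z0-9]{4}-[A-Za-z0-9]{4}-[A-Za-z0-9]{12})"
| dedup host id
| table _time _raw host id
```

In the full query, only the `dedup host` stage would change to `dedup host id`.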
Looking for an alternative way to forward logs to Splunk from legacy Windows Server 2003/2008 R1 machines. I don't see a universal forwarder available for these old server versions, but I would still like to bring their logs into Splunk. Please assist with ideas on how to make this happen. Thanks in advance.
I apologize in advance, as variations of this question have been asked before; I Googled it to no end and I just don't get how it applies in my case. I have a simple four-server lab, all servers running the latest version of Splunk on Windows Server 2016 (cluster master, two indexers, one deployment server). All I am trying to do is create one simple test index in the indexer cluster. I have read the online documentation downwards, upwards, and sideways, so please don't just point me at Splunk's documentation - not being rude, just so many words and so little content. Here are the steps I follow. I created a simple index.conf file under C:\Program Files\Splunk\etc\master-apps\_cluster\local on the cluster master with only this in it:

[testClusterIndex]
repFactor=auto
homePath = $SPLUNK_DB/testClusterIndex/db/
coldPath = $SPLUNK_DB/testClusterIndex/colddb/
thawedPath = $SPLUNK_DB/testClusterIndex/thaweddb/

Then I use the web UI to push the bundle, and that's when I get this warning:

Controller: [Not Critical] No spec file for: C:\Program Files\Splunk\etc\master-apps\_cluster\local\index.conf

What spec file? Where is it supposed to exist, what is it supposed to have in it, and why?
So what happens after I push the bundle? Well, my index.conf ends up in the C:\Program Files\Splunk\etc\slave-apps\_cluster\local directory on both indexers, but no new index is created under C:\Program Files\Splunk\var\lib\splunk, aka the $SPLUNK_DB default location. I have rebooted every server 17 times, and nothing. I have tried pushing a second bundle with this in it:

[testClusterIndex]
repFactor=auto
homePath = $SPLUNK_DB/testClusterIndex/db/
coldPath = $SPLUNK_DB/testClusterIndex/colddb/
thawedPath = $SPLUNK_DB/testClusterIndex/thaweddb/

[idx1]
repFactor = auto
homePath = $SPLUNK_DB/$_index_name/db/
coldPath = $SPLUNK_DB/$_index_name/colddb/
thawedPath = $SPLUNK_DB/$_index_name/thaweddb/

and it replaced that first file in the slave-apps dir, but still no new indexes on the indexers. Please help me, I'm running out of beer to figure this out.
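A likely cause of both symptoms: Splunk reads index definitions from a file named indexes.conf (plural), and the .spec files it validates against live under $SPLUNK_HOME\etc\system\README - there is an indexes.conf.spec but no index.conf.spec, which is exactly what the warning is hinting at. A file named index.conf is distributed with the bundle but never parsed as index configuration. Renaming it should make the peers create the index on the next push:

```
# C:\Program Files\Splunk\etc\master-apps\_cluster\local\indexes.conf
[testClusterIndex]
repFactor = auto
homePath = $SPLUNK_DB/testClusterIndex/db
coldPath = $SPLUNK_DB/testClusterIndex/colddb
thawedPath = $SPLUNK_DB/testClusterIndex/thaweddb
```

The peers create the directories lazily, so the folders may only appear once data arrives or after a peer restart.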
I am adding some CMK (Checkmk) data to Splunk using a custom deployment app. I will be creating a new index. I have some specific questions about the sourcetype:

1. How do I choose the sourcetype to put in the inputs.conf file? Are there guidelines or documentation that can help me choose the correct sourcetype or define a new one?
2. I read some documentation that suggested Splunk will choose the most appropriate sourcetype for you. Is this correct? If so, what should I put in the inputs.conf file?
3. If I simply make up a new sourcetype and put it in inputs.conf, does Splunk create it for me? Is doing this a bad idea?

Here is an example of the Checkmk data:

[1614356357] SERVICE ALERT: ServerNameXXX;Memory;OK;HARD;1;OK - RAM used: 10.76 GB of 15.67 GB, Swap used: 1.96 GB of 4 GB, Total virtual memory used: 12.72 GB of 19.67 GB (64.7%)
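For reference, a minimal sketch of the inputs side (the path, index, and sourcetype names here are made up). A sourcetype named in inputs.conf does not need to be pre-created anywhere - Splunk simply tags events with it - though defining timestamp and line-breaking rules for it in props.conf is good practice:

```
# inputs.conf (in the deployment app)
[monitor:///var/log/check_mk/alerts.log]
index = checkmk
sourcetype = checkmk:alert
disabled = 0
```

Letting Splunk auto-detect a sourcetype works for well-known formats, but for custom data an explicit name like this keeps searches and props stanzas predictable.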
When I go to the monitoring console and take a look at the forwarders, the console shows them as all missing yet our environment is receiving logs from many of them.  I tried to rebuild it in the monitoring console, but it fails.  Not sure how to get this fixed.
I have been working to authenticate with SAML and have been unable to figure out why we're not able to make it work. We have stripped all the security configs out to try to get it to start authenticating (with the expectation of turning them back on later), but we seem to be stuck and missing something. Full disclosure: this is the first time I've implemented SAML, and it's the first time the SAML admin has tried to hook up Splunk. My authentication.conf is as follows:

[saml]
fqdn = <our_fqdn>
entityId = <our_id>
idpSSOUrl = https://idp.address.com/idp/SSO.saml2
inboundSignatureAlgorithm = RSA-SHA256
issuerId = <our_issuer_id>
redirectPort = 8000
replicateCertificates = true
signAuthnRequest = false
signatureAlgorithm = RSA-SHA256
signedAssertion = false
sloBinding = HTTP-POST
ssoBinding = HTTP-POST
idpCertPath = $SPLUNK_HOME/etc/auth/idpCerts/
clientCert = $SPLUNK_HOME/etc/auth/mycerts/mycert.pem
caCertFile = $SPLUNK_HOME/etc/auth/mycerts/cacert.pem

[authentication]
authSettings = saml
authType = SAML

[roleMap_SAML]
admin = splunk_enterprise_admin

From splunkd.log:

ERROR Saml - Failed to parse issuer. Could not evaluate xpath expression //saml:Assertion/saml:Issuer or no matching nodes found. No value found in SAMLResponse for key=//saml:Assertion/saml:Issuer
How can I create a search for temporary users in privileged groups (Domain Admins, Enterprise Admins, Schema Admins, Account Operators, Administrators, Backup Operators, Incoming Forest Trust Builders, Server Operators)? I'm struggling.
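A possible starting point (a sketch; it assumes Windows Security logs are indexed and uses the standard group-membership-change EventCodes - 4728/4732/4756 for adds, 4729/4733/4757 for removals; index and field names vary by environment and TA):

```
index=wineventlog sourcetype=WinEventLog EventCode IN (4728, 4729, 4732, 4733, 4756, 4757)
| search Group_Name IN ("Domain Admins", "Enterprise Admins", "Schema Admins", "Account Operators",
    "Administrators", "Backup Operators", "Incoming Forest Trust Builders", "Server Operators")
| stats min(_time) as firstSeen, max(_time) as lastSeen, values(EventCode) as events by Member_Name Group_Name
```

An account showing both an add and a remove event for the same group within the search window is a candidate "temporary" member.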
Hi, after upgrading from Splunk 7.3.1 to Splunk 8.1.1 I have some issues:

1. When I go to the search page at http://IP:9000/app/search/search, the service suddenly stops! I can't find any clue in the logs. FYI, when I go to other pages the service does not fail and works correctly, but on the search page it stops.

2. It gives this error (I tried to change the password as mentioned here, but the service still crashes):

TailReader-0 Root Cause(s): The monitor input cannot produce data because splunkd's processing queues are full. This will be caused by inadequate indexing or forwarding rate, or a sudden burst of incoming data. Last 50 related messages:
03-04-2021 21:23:44.451 +0330 WARN TailReader - Could not send data to output queue (parsingQueue), retrying...
03-04-2021 21:23:38.011 +0330 INFO TailReader - State transitioning from 1 to 0 (initOrResume).
03-04-2021 21:23:38.011 +0330 INFO TailReader - State transitioning from 1 to 0 (initOrResume).
03-04-2021 21:23:38.007 +0330 INFO TailReader - batchreader0 waiting to be un-paused
03-04-2021 21:23:38.007 +0330 INFO TailReader - Starting batchreader0 thread
03-04-2021 21:23:38.007 +0330 INFO TailReader - Registering metrics callback for: batchreader0
03-04-2021 21:23:38.005 +0330 INFO TailReader - tailreader0 waiting to be un-paused
03-04-2021 21:23:38.005 +0330 INFO TailReader - Starting tailreader0 thread
03-04-2021 21:23:38.004 +0330 INFO TailReader - Registering metrics callback for: tailreader0

Any idea? Thanks,
I have a process where I load data into database tables. My log file has the following entries for each table:

TableLoad=Begin
TableName=Tablename
TableLoad=End

I have confirmed that I have the above 3 entries, in the above order, for each table. I also have a transaction defined as follows:

transaction startswith="TableLoad=Begin" endswith="TableLoad=End"

My Splunk report creates a separate line for each table, along with a status of the load for each. This works well up to the point where my log file reports an error. The error is reported correctly, but the remaining lines are missing fields. For example, the next line of the report displays the next table name in the sequence, and nothing else. Each of the remaining lines is missing fields. Any idea what I'm doing wrong?
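By default `transaction` silently evicts transactions that never see their end marker, so once an error breaks a Begin/End pairing the following events can group incorrectly. A variant to try (a sketch; `keepevicted=true` retains incomplete transactions and marks them with `closed_txn=0` so they can be spotted instead of corrupting later groupings):

```
... | transaction startswith="TableLoad=Begin" endswith="TableLoad=End" keepevicted=true
| eval loadStatus = if(closed_txn == 1, "complete", "incomplete")
```

Examining the `incomplete` rows should show which Begin event lost its End and pulled the next table's fields out of alignment.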
Hi All, Is it possible to perform an eval and then perform a lookup based on its result? If the eval returns null, then use lookupA.csv; if the eval returns a non-null value, then use lookupB.csv. Thanks!
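One common pattern is to run both lookups unconditionally and pick the result afterwards (a sketch; the lookup filenames come from the question, but `flag`, `key`, and `out` are placeholder field names):

```
| eval flag = <your eval expression>
| lookup lookupA.csv key OUTPUT out as outA
| lookup lookupB.csv key OUTPUT out as outB
| eval result = if(isnull(flag), outA, outB)
```

Both lookups execute on every row, but only the branch selected by the `if` survives into `result`.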
I have the below HTTP events from which I am trying to extract the status code, response time, and URL. I am using the rex below. It works fine for status 200, 400 and 500, but not for 30* statuses. If you look at the 302 event below, it has (302 -) where the other statuses have something like (200 95364) or (400 568). Can you help me with the expression that is missing so that it extracts all the codes? I verified it at https://regex101.com/r/bVp3gz/1 as well. (The capture-group names below were stripped by the forum; the three groups extract the status code, the byte/response count, and the referring URL.)

HTTP\/1.1\"\s(?<status_code>\d+)\s(?<response_time>\d+)\s"(?<url>[^\"]*)"

11.111.111.1 [04/Mar/2021:09:05:40 -0600] 1061614 "GET /merced/content/frag/breeze/bootstrap/fonts/icomoon.ttf?az1hj2 HTTP/1.1" 200 95364 "https://sfdfdsfsd-sfsdfasf.topms.com/mxxx/treports/prepackaged/O-Rx_Agent_MyUnacknowledgedCoachingSessions?lang=en_US" "Mozilla/5.0 (Windows NT 10.0; WOW64; Trident/7.0; rv:11.0) like Gecko" - B6613D2EAB90BFAB32BD90BA61E8280E.app1

11.111.111.11 [04/Mar/2021:09:36:41 -0600] 169017 "GET /delegate/forwarderServlet/process.do?url=%2Fmerced%2Fdashboards%2FO-Rx_Agent_HomePage_Dash%3Flang%3Den_US&appid=xxx HTTP/1.1" 302 - "https://sfdfdsfsd-sfsdfasf.topms.com/group/npm/o-rx_agent_homepage_dash" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/88.0.4324.190 Safari/537.36" - C9A3D61145B15DEFCD8BD71736242EA8.tomcat2

11.111.111.1 [04/Mar/2021:08:35:20 -0600] 17580 "GET /merced/populate?assistant=person&query=jomalyn%2520mallari&policyName=%2Fcom%2Fmerced%2Fmodels%2Femployee%2Fpolicies%2FCoachingWritePolicy&fieldName=EEDRFE HTTP/1.1" 500 977 "https://sfdfdsfsd-sfsdfasf.topms.com/mxxx/forms/BPLCoachingSessionForm?lang=en_US" "Mozilla/5.0 (Windows NT 10.0; WOW64; Trident/7.0; rv:11.0) like Gecko" - ABEAA3DED8CBC4163D544C986AA455DA.app4

11.111.111.1 [04/Mar/2021:10:00:27 -0600] 0 "GET /nice-documentation/javascripts/MercedHelpLib.js?browserId=other&minifierType=js&languageId=en_US&b=0000&t=1612576281967 HTTP/1.1" 404 1083 "https://sfdfdsfsd-sfsdfasf.topms.com/group/xxxx/o-rx_agent_homepage_dash" "Mozilla/5.0 (Windows NT 10.0; WOW64; Trident/7.0; rv:11.0) like Gecko" - A8B66CD6680C8747D4C878CAAE64B1D7.tomcat1
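The 302 events log a literal `-` where the byte count would otherwise be, so a plain `\d+` in that position fails for them. Alternation handles both cases (a sketch; the capture-group names are placeholders):

```
| rex field=_raw "HTTP\/1\.1\"\s(?<status_code>\d{3})\s(?<response_time>\d+|-)\s\"(?<url>[^\"]*)\""
```

If the `-` values should later be treated numerically, a follow-up `| eval response_time=if(response_time="-", 0, response_time)` normalizes them.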
Hello, I am encountering an issue with the event times for a specific set of logs. We have been using Splunk Cloud for a little over a year now and I haven't experienced this issue with other sources/data. Most of the servers are located in PST, but the events are being logged in UTC. The software vendor has a TA that is supposed to fix time issues at ingest, but it hasn't been updated since 2018 and Splunk won't install it on the Cloud tenant. How can I enforce the proper time zones at the HWF level? We have a HWF in each region where the logs originate. Thanks, Garry
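Assuming the HWFs do the parsing (they normally do, being full Splunk instances upstream of Splunk Cloud), props.conf on each HWF can force the timezone per sourcetype, source, or host (a sketch; the sourcetype name is a placeholder):

```
# props.conf on the heavy forwarder
[vendor:product]
TZ = UTC
```

TZ is applied at parse time on the first full instance that sees the event, so it has no effect if set only in Splunk Cloud after a HWF has already parsed the data, and it does not retroactively fix already-indexed events.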
Hi All, We are migrating SHC members from an old to a new datacenter. There are 3 members in the SHC. Please tell us which is the best approach to follow:

1. Add 1 new node to the SHC, let things replicate, and then decommission an old node. Repeat this step until all the nodes are migrated with replicated bundles.
2. Stop the old SHC members and take a backup of the SHC runtime bundles. Install 3 new members, push configs with the deployer first, and later restore the old SHC runtime bundles.

Appreciate your help on this @gcusello @somesoni2
Hi, please help me with a regex expression to capture the part of the data below that was shown in bold and underlined:

e+o.in_zpiystoc.stkdrtyini.600.1.txt.1.yyyymmddhhmmss
e+o.drlugrbuyhe.xml.1.yyyymmddhhmmss
k+d.zpiyxery.npoudatri.600.gpg.1.20210127014546.gpg

I need to ignore the starting x+y values and capture only the data present before the date format, ignoring everything after the date (including the date).
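Going by the description - skip the leading `x+y.` token, capture up to the timestamp, drop the rest - a sketch (the field name `filename` is a placeholder; the alternation covers both the literal `yyyymmddhhmmss` placeholder and a real 14-digit timestamp):

```
| rex field=filename "^[^.]+\.(?<data>.+?)\.(?:\d{14}|yyyymmddhhmmss)"
```

For `k+d.zpiyxery.npoudatri.600.gpg.1.20210127014546.gpg` this captures `zpiyxery.npoudatri.600.gpg.1`; for the first sample it captures `in_zpiystoc.stkdrtyini.600.1.txt.1`.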
My inputs are set up correctly, as I can access other tables, but I get the following error for index=_internal sourcetype="dbx*" error. I cannot launch the queries directly in Splunk even though the rights are fine. Other tables can be read from Splunk by using the inputs file (not the GUI), but I think they do not have null fields. I am trying to read tables from an MSSQL database from Splunk on CentOS with a read-only account.

2021-03-04 14:32:50.559 +0000 [QuartzScheduler_Worker-7] ERROR org.easybatch.core.job.BatchJob - Unable to open record reader
java.lang.IllegalArgumentException: argument "content" is null
at com.fasterxml.jackson.databind.ObjectMapper._assertNotNull(ObjectMapper.java:4735)
at com.fasterxml.jackson.databind.ObjectMapper.readValue(ObjectMapper.java:3433)
at com.splunk.dbx.server.dbinput.task.DbInputCheckpointRepository.parseOutput(DbInputCheckpointRepository.java:200)
at com.splunk.dbx.server.dbinput.task.DbInputCheckpointRepository.loadImpl(DbInputCheckpointRepository.java:269)
at com.splunk.dbx.server.dbinput.task.DbInputCheckpointRepository.load(DbInputCheckpointRepository.java:148)
at com.splunk.dbx.server.dbinput.task.DbInputTask.loadCheckpoint(DbInputTask.java:142)
at com.splunk.dbx.server.dbinput.recordreader.DbInputRecordReader.executeQuery(DbInputRecordReader.java:79)
at com.splunk.dbx.server.dbinput.recordreader.DbInputRecordReader.open(DbInputRecordReader.java:52)
at org.easybatch.core.job.BatchJob.openReader(BatchJob.java:140)
at org.easybatch.core.job.BatchJob.call(BatchJob.java:97)
at org.easybatch.extensions.quartz.Job.execute(Job.java:59)
at org.quartz.core.JobRunShell.run(JobRunShell.java:202)
at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:573)

Can anybody help? We have no Java expertise in house.