All Topics



I can't use drilldowns in Splunk Mobile if the dashboard was created in Dashboard Studio. We tried creating the same dashboard with the same panel in both Dashboard Studio and a classic dashboard. In the Splunk Mobile app, only the one created as a classic dashboard has the drilldown function. I cannot find any reference in the documentation to using this function through these apps.
[| makeresults
 | addinfo
 | eval earliest=relative_time(info_min_time,"@d+7h")
 | eval latest=relative_time(info_min_time,"@d+31h")
 | fields earliest latest]
| fields file_name batch_count entry_addenda_count total_debit_amount total_credit_amount
| dedup file_name
| eval total_debit_amount=total_debit_amount/100, total_credit_amount=total_credit_amount/100
| table _time file_name batch_count entry_addenda_count total_debit_amount total_credit_amount

I am using the above query, but I want to show two different time zones, PST and UTC, in the table. Right now the time shown is in UTC.
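A minimal sketch of one way to do this, assuming _time is currently rendered as UTC as described; the field names time_utc and time_pst are only illustrative, and the PST column uses a fixed -8 hour offset, so it does not account for daylight saving time:

| eval time_utc=strftime(_time, "%Y-%m-%d %H:%M:%S UTC")
| eval time_pst=strftime(_time - (8*3600), "%Y-%m-%d %H:%M:%S PST")
| table time_utc time_pst file_name batch_count entry_addenda_count total_debit_amount total_credit_amount

The first eval formats the event time as text in UTC, the second shifts the epoch value by eight hours before formatting, and the table shows both columns side by side.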
Hi Folks, I could use some help with this query.

index=address_index earliest=-30m address
    [ search index=registration_index earliest=-30m
    | `get_ip_location(src_ip)`
    | rename user as email
    | dedup email
    | table email src_ip ip_location
    | return 15 $email]
| rex field=_raw "REGEX xmlfield"
| xmlkv xmlfield
| eval email=lower(trim(EMAIL_ADDRESS))
| eval city=lower(trim(CITY))
| eval address=lower(trim(ADDRESS1))
| eval state=lower(trim(STATE))
| stats values(city) as city values(state) as state values(address) as address by email

The inner search looks for all the registrations for the past 30 minutes. The return command then passes the email to the outer search, which queries the address index for the address on file for that email.

My goal right now is to pass two parameters to the outer search: an email and the src_ip/ip_location. The problem is that when I attempt to add a second parameter to the return command, in addition to email, the query no longer works.

The ultimate goal is to build a search that queries registrations from met online, uses get_ip_location on the originating IP address, and then compares that ip_location with the address on file (which is usually in the address index). However, when I try the following query, I get no results:

index=address_index earliest=-30m address
    [ search index=registration_index earliest=-30m
    | `get_ip_location(src_ip)`
    | rename user as email
    | dedup email
    | table email src_ip ip_location
    | return 15 $email $ip_location]
| rex field=_raw "REGEX xmlfield"
| xmlkv xmlfield
| eval email=lower(trim(EMAIL_ADDRESS))
| eval city=lower(trim(CITY))
| eval address=lower(trim(ADDRESS1))
| eval state=lower(trim(STATE))
| stats values(city) as city values(state) as state values(address) as address by email ip_location

How can I pass these 2 values, $email and $ip_location, to the outer search?
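One possible approach, sketched under the assumption that the address_index events do not themselves contain an ip_location field (which would explain why filtering the outer search on its literal value returns nothing): keep return limited to the email, and re-attach src_ip/ip_location afterwards with a join on email.

index=address_index earliest=-30m address
    [ search index=registration_index earliest=-30m
    | `get_ip_location(src_ip)`
    | rename user as email
    | dedup email
    | table email
    | return 15 $email]
| rex field=_raw "REGEX xmlfield"
| xmlkv xmlfield
| eval email=lower(trim(EMAIL_ADDRESS))
| eval city=lower(trim(CITY))
| eval address=lower(trim(ADDRESS1))
| eval state=lower(trim(STATE))
| join type=left email
    [ search index=registration_index earliest=-30m
    | `get_ip_location(src_ip)`
    | rename user as email
    | dedup email
    | eval email=lower(trim(email))
    | table email src_ip ip_location]
| stats values(city) as city values(state) as state values(address) as address values(ip_location) as ip_location by email

Here the subsearch inside return only filters the outer events by email, and the join brings the registration's src_ip and ip_location back onto each result row. join has its own subsearch limits, so for larger volumes a single search across both indexes with stats by email may hold up better; the sketch just keeps the shape of the original query.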
Hi All, we are facing an issue with the Splunk Add-on for Microsoft Cloud Services event hub input. We have created multiple inputs, and almost all of them are collecting only partial logs. We are checking the event count in the Azure Log Analytics workspace and, at the same time, checking events in Splunk, and there is a random difference in event collection. There are no errors in the internal logs, although we can see some warning messages. We tried increasing the number of ingestion pipelines to 4, and we tried disabling all the inputs except one to check whether that was causing the issue.

The Splunk deployment is a single-instance test environment with 32 vCPUs and 64 GB of memory assigned, and storage with more than 800 IOPS. Not many applications are installed. A Splunk support case is also open, but so far they haven't been able to find a root cause. We need suggestions and input if someone else has faced such an issue.

A little background on the architecture: we have multiple data sources (Azure Activity & AD) sending logs to one event hub, and we are segregating the sourcetypes in Splunk by transforming the data based on category and resourceId.

Please help to resolve this issue.

Thanks, Bhaskar
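To help narrow down where the gap appears, a small sketch of a comparison search on the Splunk side (the index name is a placeholder for wherever the event hub data lands); counting per hour and per sourcetype makes it easier to line the totals up against what the Log Analytics workspace reports for the same window:

| tstats count where index=azure_eventhub by _time span=1h, sourcetype
| xyseries _time sourcetype count

If the shortfall is concentrated in particular hours or sourcetypes rather than spread evenly, that usually points at specific categories or partitions rather than an overall throughput limit.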
Hello, I've been asked to create a report that will show the number of events from the 2 previous quarters by country, the monthly average, and the quarterly percent increase:

Country   Q1'22 Total   Q1'22 Monthly Avg   Q2'22 Total   Q2'22 Monthly Avg   Q2'22 Percent Increase
US        300000        100000              330000        110000              10%
UK        60000         20000               61000         20333               2%
Canada    1200          400                 1500          500                 25%

Using this:

index=mydata earliest=-2q@q latest=-q@q
| chart dc(ID) as count_earlier by Country
| appendcols
    [ search index=mydata earliest=-q@q latest=@q
    | chart dc(ID) as count_later by Country]
| eval ave_earlier=round(count_earlier/3,0)
| eval ave_later=round(count_later/3,0)
| eval DiffPer=round(((count_later - count_earlier) / count_earlier) * 100,0)."%"
| table ReportersCountry,count_earlier,ave_earlier,count_later,ave_later,DiffPer

Now I'm trying to rename count_earlier, ave_earlier, count_later, and ave_later to be the quarter labels. I've been using:

| convert TIMEFORMAT="%m" ctime(_time) AS month
| rex field=date_year "\d{2}(?<short_year>\d{2})"
| eval quarter=case(month<=3,"Q1",month<=6,"Q2",month<=9,"Q3",month<=12,"Q4",1=1,"missing")."'".short_year

And I have been trying to use eval {} to rename the columns but haven't quite figured it out. I also tried using chart, which allows me to get the quarter headers, but then I couldn't figure out how to calculate the percent difference column. Thanks for any help in advance!
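A sketch of one possible approach, using only index=mydata and the ID field from the post: derive a quarter label from each event's own timestamp and chart by it, so the quarter names become the column headers without any manual rename, then add the monthly averages with foreach. The label uses a hyphen (Q1-22) rather than an apostrophe purely so the generated field names are easy to reference:

index=mydata earliest=-2q@q latest=@q
| eval month=tonumber(strftime(_time, "%m"))
| eval qlabel="Q".ceiling(month/3)."-".strftime(_time, "%y")
| chart dc(ID) by Country qlabel
| foreach Q* [ eval "<<FIELD>> Monthly Avg"=round('<<FIELD>>'/3,0) ]

With only two quarters in the window, the percent-increase column can then be added by referencing the two generated column names directly, for example eval "Q2-22 Percent Increase"=round((('Q2-22'-'Q1-22')/'Q1-22')*100,0)."%", or by transposing first if you want it fully dynamic.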
Hello, we are in the process of upgrading our Splunk infrastructure from version 7.x to 8.x. Before migration, we are stuck on the functionality testing of Alert Manager version 3.08 on Splunk Enterprise 8.1.4 (dev environment). We are seeing that the alerts which get triggered with Alert Manager 3.08 are not getting converted from "New" status to "auto_assigned". We have checked the configurations and logs, but we couldn't find anything that gives us clarity on this issue. Kindly help us with this issue, or if possible provide an alternate way to find the root cause.

Note: I am attaching logs from the dev environment, where there is no log entry of "Set status of incident <id> to auto_assigned".
I'm trying to get a list of fields by sourcetype without going down the route of fieldsummary, and thought analyzing the props configs would be a good place to start. I'm starting with EVAL-generated fields but not having any luck on the foreach section. Any pointers would be much appreciated.

| rest splunk_server=local /servicesNS/-/-/configs/conf-props
| table title EVAL-a*
| eval eval_fields=""
| foreach EVAL-* [ eval eval_fields=if(isnotnull(<<FIELD>>), mvappend(eval_fields,'<<MATCHSTR>>'), eval_fields) ]
| table title eval_fields *
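A sketch of what may be tripping up the foreach, assuming the goal is to collect the names of the EVAL-* attributes that are actually set in each props stanza: inside the foreach template, an attribute name containing a hyphen has to be wrapped in single quotes to be read as a field, and the text you append has to be in double quotes so it is treated as a literal string rather than looked up as a field (EVAL-a* is widened to EVAL-* here):

| rest splunk_server=local /servicesNS/-/-/configs/conf-props
| table title EVAL-*
| eval eval_fields=""
| foreach EVAL-* [ eval eval_fields=if(isnotnull('<<FIELD>>'), mvappend(eval_fields, "<<MATCHSTR>>"), eval_fields) ]
| table title eval_fields

In other words, '<<FIELD>>' reads the attribute's value (so isnotnull works), while "<<MATCHSTR>>" appends the matched part of the attribute name as text; in the original, the unquoted <<FIELD>> breaks on the hyphen and the single-quoted '<<MATCHSTR>>' is interpreted as a field lookup that returns null.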
Hi Team, I want to calculate the response time for each hour from the application logs and create a dashboard using a line graph. Please find a sample app log below:

{"TIMESTAMP":"2022-09-29 T11:31:49.038 GMT'Z","MESSAGE":"response=","LOGGER":"com.fedex.cds.ws.PerfInterceptor","THREAD":"http-nio-8080-exec-2089","LOG_LEVEL":"DEBUG","DataCenter":"1","EndUserId":"APP943415","Stanza":"etnmsMasterSubRangeStanza","ResponseTime":"268","Operation":"queryByIndex","Domain":"etnms","EAI":"APP943415","TransactionId":"ecd29878-e4f9-48db-ab29-a7fa98ba6be7","EAI_NAME":"cds","EAI_NBR":"APP943415"}
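A minimal sketch, assuming the events are reachable with a base search such as index=app sourcetype=app_logs (placeholders) and that the JSON fields are extracted automatically so ResponseTime is available; timechart buckets the values by hour, and the result renders directly as a line chart in a dashboard panel:

index=app sourcetype=app_logs LOGGER="com.fedex.cds.ws.PerfInterceptor"
| eval ResponseTime=tonumber(ResponseTime)
| timechart span=1h avg(ResponseTime) as avg_response_time

The eval makes sure ResponseTime is numeric before aggregating; avg can be swapped for max, perc95, or count depending on what the line should show.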
Hi there, I am new to this kind of analysis within Splunk, but I've been asked to create a filter on events where the closed date is before the start date. This is the search I have created but can't get working:

index=main sourcetype="CRA_Consumer_Txt_data"
| eval close_date=strftime(strptime(close_date,"%d%m%Y"),"%d/%m/%Y")
| eval start_date=strftime(strptime(start_date,"%d%m%Y"),"%d/%m/%Y")
| search close_date < start_date
| table start_date, close_date

This is an example of what is shown when I run that search:

start_date        close_date
30/04/2021        23/05/2021
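A sketch of one way around this, assuming the raw fields really are in %d%m%Y form as in the strptime calls above: comparing the reformatted %d/%m/%Y values is a string comparison, not a date comparison, so it is more reliable to compare the epoch values returned by strptime and only format the dates for display afterwards:

index=main sourcetype="CRA_Consumer_Txt_data"
| eval close_epoch=strptime(close_date,"%d%m%Y")
| eval start_epoch=strptime(start_date,"%d%m%Y")
| where close_epoch < start_epoch
| eval close_date=strftime(close_epoch,"%d/%m/%Y")
| eval start_date=strftime(start_epoch,"%d/%m/%Y")
| table start_date, close_date

The where clause does the filtering on numeric epoch seconds, and the two strftime calls only affect how the surviving rows are displayed.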
I'm getting this error after upgrading the Microsoft 365 app in Splunk:

Error in 'SearchParser': The search specifies a macro 'm365_default_index' that cannot be found. Reasons include: the macro name is misspelled, you do not have "read" permission for the macro, or the macro has not been shared with this application. Click Settings, Advanced search, Search Macros to view macro information.
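If the macro turns out to be genuinely missing rather than just not shared with the app, one way to recreate it is a macros.conf stanza along these lines (the index name and the app directory are placeholders, not values taken from the add-on); the same definition can be created in the UI under Settings > Advanced search > Search macros, and it then needs to be shared with, or created inside, the Microsoft 365 app:

# $SPLUNK_HOME/etc/apps/<m365_app_directory>/local/macros.conf
[m365_default_index]
definition = index=m365
iseval = 0

If the macro already exists in another app, adjusting its sharing/permissions so the Microsoft 365 app can read it should clear the same error.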
Hello, I have a JSON source where one of the fields has an escape character in the field name. Actually, I cannot see it in highlighted mode or in the extracted field list; there it appears as "Report Refresh Date". Only in raw view does it show as "\ufeffReport Refresh Date". Naturally, the searches do not work with what appears as "Report Refresh Date". If I select the field it appears as "'red dot' Report Refresh Date" and the search works, but it cannot be saved: it "forgets" the red dot, as does any copy/paste attempt out of the search bar. How can I get rid of this? Would adding FIELDALIAS-alias1 = "\ufeffReport Refresh Date" AS "Report_Refresh_Date" to props.conf help?
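One search-time workaround, sketched on the assumption that the key in the raw JSON really starts with the \ufeff byte-order mark: extract the value straight from _raw with rex under a clean field name, so the invisible character never has to be typed into the search bar (the pattern simply tolerates any extra characters before "Report Refresh Date" inside the key):

| rex field=_raw "\"[^\"]*Report Refresh Date\"\s*:\s*\"(?<Report_Refresh_Date>[^\"]*)\""
| table Report_Refresh_Date

A FIELDALIAS in props.conf might also work, but the source field name in the conf file would have to contain the literal BOM character rather than the text \ufeff, which is awkward to edit reliably; the rex approach sidesteps that.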
Hi there, Is it possible to forward the logs from event hub in azure cloud which is not publicly accessible into Splunk via the Microsoft cloud service add-on? Thanks in advance!
Hi all, not a question, just an attempt to save other people some time. We had the need to calculate the score of CVSS 3.1 vector strings. We first used an external Python script, but that comes with a cost, hence I decided to implement a CVSS calculator in SPL. It's already tested against a wide range of vector strings and seems pretty robust.

Here is what we created, as a macro named `cvss31`: https://github.com/niko31337/splcvss31/blob/main/cvss31.macro

To call the macro and get the relevant fields, you do something like this:

| makeresults
| eval cvss_vector_string="CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H/E:P/RL:O/RC:C/CR:M/IR:M/AR:H/MUI:R"
`cvss31`
| table cvss_vector_string, main_score, main_rating, base_score, temporal_score, environmental_score, cvss_error

and you end up with the calculated scores and ratings as columns in the result. Make sure you always check for errors in the cvss_error field...

Cheers, Niko
Hello experts, I'm trying to deploy Splunk via a Nutanix Calm blueprint. I could not install it successfully with the LOGON_USERNAME & LOGON_PASSWORD parameters. I was only able to install it with this PowerShell script:

Start-Process -Filepath "c:\_Splunk\Splunk_Silent.bat" -Wait

Inside the bat file:

msiexec.exe /I splunkforwarder-7.3.0-657388c7a488-x64-release-64bit.msi DEPLOYMENT_SERVER="<ipadd>:8089" LAUNCHSPLUNK=1 AGREETOLICENSE=Yes /quiet

But when I include the LOGON_USERNAME & LOGON_PASSWORD parameters, that's where the installation fails.
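For comparison, a sketch of how those parameters are usually passed (the domain, account name, and password are placeholders; the account is generally expected to exist already and to have the required local rights, otherwise the MSI rolls back), with verbose MSI logging added so the failing step shows up in the log:

REM placeholders: MYDOMAIN\splunksvc and <password>; /L*v writes a verbose install log for troubleshooting
msiexec.exe /i splunkforwarder-7.3.0-657388c7a488-x64-release-64bit.msi DEPLOYMENT_SERVER="<ipadd>:8089" LOGON_USERNAME="MYDOMAIN\splunksvc" LOGON_PASSWORD="<password>" LAUNCHSPLUNK=1 AGREETOLICENSE=Yes /quiet /L*v C:\_Splunk\uf_install.log

If the rollback is related to the service account, the verbose log and the Windows event log usually say so explicitly (for example a failure to log on as a service).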
Hi @gcusello, I need one more bit of help. From the log below, I am able to remove all the special characters using the script below, but I need to retain the commas in the text between Message and Details, and similarly between Message and Success. Appreciate the help.

| eval "EM"=if(isnotnull('cip:AuditMessage.MessageText'),'cip:AuditMessage.MessageText',"Data Not Found")
| rex field=EM max_match=0 "(?<ErrMes>\w+)"
| eval ErrorMessage = mvjoin(ErrMes, " ")
| rex field=ErrorMessage "Message\s+(?<ErrorResponse>.*)\s+Details\s+Message\s+(?<ErrorResponse2>.*)\s+Success"

{"@odata.context":"https://apistaging.payspace.com/odata/v1.0/11997/$metadata#Employee/$entity","Message":"The Nationality field is required., The Province field is required., The Code field is required., The Country field is required.","Details":[{"Message":"The Nationality field is required."},{"Message":"The Province field is required."},{"Message":"The Code field is required."},{"Message":"The Country field is required."}],"Success":false}
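A sketch of one way to keep the commas, assuming the EM field created in the first eval holds the JSON text shown above: instead of tokenizing with \w+ (which throws away all punctuation), pull out the whole value of the top-level Message key in one go, either with spath, since the payload is JSON, or with a rex anchored on the surrounding key names:

| spath input=EM path=Message output=ErrorResponse
| spath input=EM path=Details{}.Message output=ErrorResponse2

or, with rex, capture everything between "Message":" and ","Details" so the embedded commas survive:

| rex field=EM "\"Message\":\"(?<ErrorResponse>[^\"]*)\",\"Details\""

Both variants leave the original EM field untouched, so the existing cleanup logic can stay in place for the other cases.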
Hello Splunkers!! I have two weeks of events, week 1 & week 2, and I need to compare the events of week 1 & week 2. The highlighted red one is the new event in week 2. Likewise, I have hundreds of events in week 1 and week 2. If any new event comes in week 2, I need that as the result. Please let me know how to approach this.

Message (Week 1)                            Message (Week 2)
Template:account/backToAccount              Template:account/backToAccount
"enableEnhancedCheckout" is not defined     "enableEnhancedCheckout" is not defined
                                            "product" is not defined

Below is the query I have created so far:

index="ABC" ("ERROR" OR "EXCEPTION") earliest=-7d latest=now()
| rex field=_raw "Error\s(?<Message>.+)MulesoftAdyenNotification"
| rex field=_raw "fetchSeoContent\(\)\s(?<Exception>.+)"
| rex field=_raw "Error:(?<Error2>.+)"
| rex field=_raw "(?<ErrorM>Error in template script)+"
| rex field=_raw "(?ms)^(?:[^\\|\\n]*\\|){3}(?P<Component>[^\\|]+)"
| rex "service=(?<Service>[A-Za-z._]+)"
| rex "Sites-(?<Country>[A-Z]{2})"
| eval Error_Exception_7d=coalesce(Message,Error2,Exception,ErrorM)
| stats count by Error_Exception_7d
| sort - count
| appendcols
    [ search index="ABC" ("ERROR" OR "EXCEPTION") earliest=-14d latest=-8d
    | rex field=_raw "Error\s(?<Message>.+)MulesoftAdyenNotification"
    | rex field=_raw "fetchSeoContent\(\)\s(?<Exception>.+)"
    | rex field=_raw "Error:(?<Error2>.+)"
    | rex field=_raw "(?<ErrorM>Error in template script)+"
    | rex field=_raw "(?ms)^(?:[^\\|\\n]*\\|){3}(?P<Component>[^\\|]+)"
    | rex "service=(?<Service>[A-Za-z._]+)"
    | rex "Sites-(?<Country>[A-Z]{2})"
    | eval Error_Exception_14d=coalesce(Message,Error2,Exception,ErrorM)
    | stats count by Error_Exception_14d
    | sort - count]
| stats count by Error_Exception_14d Error_Exception_7d
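A sketch of one way to approach this without appendcols, which pairs rows by position rather than by message: search both weeks in a single query, label each event with its week, and keep only the messages that never occur in week 1. The rex extractions are copied from the post; only the week labeling and the final stats/where logic are new, and the exact time boundaries should be adjusted to whatever the two weeks really are:

index="ABC" ("ERROR" OR "EXCEPTION") earliest=-14d latest=now()
| rex field=_raw "Error\s(?<Message>.+)MulesoftAdyenNotification"
| rex field=_raw "fetchSeoContent\(\)\s(?<Exception>.+)"
| rex field=_raw "Error:(?<Error2>.+)"
| rex field=_raw "(?<ErrorM>Error in template script)+"
| eval Error_Exception=coalesce(Message,Error2,Exception,ErrorM)
| eval week=if(_time >= relative_time(now(), "-7d@d"), "week2", "week1")
| stats count(eval(week="week1")) as week1_count, count(eval(week="week2")) as week2_count by Error_Exception
| where week1_count=0 AND week2_count>0

The stats line produces one row per distinct message with its count in each week, and the where clause keeps only messages that are new in week 2.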
I'm using the DB Connect app with MongoDB. The other inputs work fine, but one input has a problem. I searched the _internal logs and encountered the error below:

[dw-63 - POST /api/connections/LifeRecord/querymetadata] ERROR c.s.d.s.a.s.d.impl.DatabaseMetadataServiceImpl - Unable to get query metadata java.sql.SQLException: Non-existing table referenced: exam at unity.parser.PTreeBuilderValidater.parseTableIdentifier(PTreeBuilderValidater.java:2551) at unity.parser.PTreeBuilderValidater.processTableRef(PTreeBuilderValidater.java:1203) at unity.parser.PTreeBuilderValidater.processTableReferences(PTreeBuilderValidater.java:1498) at unity.parser.PTreeBuilderValidater.ParseQuery(PTreeBuilderValidater.java:1113) at unity.parser.PTreeBuilderValidater.buildLQTree(PTreeBuilderValidater.java:967) at unity.parser.GlobalParser.parse(GlobalParser.java:101) at mongodb.jdbc.MongoPreparedStatement.parseQuery(MongoPreparedStatement.java:279) at mongodb.jdbc.MongoPreparedStatement.getMetaData(MongoPreparedStatement.java:249) at com.zaxxer.hikari.pool.HikariProxyPreparedStatement.getMetaData(HikariProxyPreparedStatement.java) at com.splunk.dbx.connector.connector.impl.JdbcConnectorImpl.getPrepareStatementMetaData(JdbcConnectorImpl.java:271) at com.splunk.dbx.connector.connector.impl.JdbcConnectorImpl.getQueryMetadata(JdbcConnectorImpl.java:124) at com.splunk.dbx.server.api.service.database.impl.DatabaseMetadataServiceImpl.getQueryMetadata(DatabaseMetadataServiceImpl.java:130) at com.splunk.dbx.server.api.resource.ConnectionResource.getQueryMetadata(ConnectionResource.java:166) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:566) at org.glassfish.jersey.server.model.internal.ResourceMethodInvocationHandlerFactory.lambda$static$0(ResourceMethodInvocationHandlerFactory.java:52) at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher$1.run(AbstractJavaResourceMethodDispatcher.java:124) at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.invoke(AbstractJavaResourceMethodDispatcher.java:167) at org.glassfish.jersey.server.model.internal.JavaResourceMethodDispatcherProvider$TypeOutInvoker.doDispatch(JavaResourceMethodDispatcherProvider.java:219) at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.dispatch(AbstractJavaResourceMethodDispatcher.java:79) at org.glassfish.jersey.server.model.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:469) at org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:391) at org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:80) at org.glassfish.jersey.server.ServerRuntime$1.run(ServerRuntime.java:253) at org.glassfish.jersey.internal.Errors$1.call(Errors.java:248) at org.glassfish.jersey.internal.Errors$1.call(Errors.java:244) at org.glassfish.jersey.internal.Errors.process(Errors.java:292) at org.glassfish.jersey.internal.Errors.process(Errors.java:274) at org.glassfish.jersey.internal.Errors.process(Errors.java:244) at org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:265) at org.glassfish.jersey.server.ServerRuntime.process(ServerRuntime.java:232) at 
org.glassfish.jersey.server.ApplicationHandler.handle(ApplicationHandler.java:680) at org.glassfish.jersey.servlet.WebComponent.serviceImpl(WebComponent.java:394) at org.glassfish.jersey.servlet.WebComponent.service(WebComponent.java:346) at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:366) at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:319) at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:205) at io.dropwizard.jetty.NonblockingServletHolder.handle(NonblockingServletHolder.java:50) at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1651) at io.dropwizard.servlets.ThreadNameFilter.doFilter(ThreadNameFilter.java:35) at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1638) at io.dropwizard.jersey.filter.AllowedMethodsFilter.handle(AllowedMethodsFilter.java:47) at io.dropwizard.jersey.filter.AllowedMethodsFilter.doFilter(AllowedMethodsFilter.java:41) at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1638) at com.splunk.dbx.server.api.filter.ResponseHeaderFilter.doFilter(ResponseHeaderFilter.java:30) at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1638) at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:567) at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:233) at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1377) at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:188) at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:507) at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:186) at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1292) at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141) at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127) at com.codahale.metrics.jetty9.InstrumentedHandler.handle(InstrumentedHandler.java:249) at io.dropwizard.jetty.RoutingHandler.handle(RoutingHandler.java:52) at org.eclipse.jetty.server.handler.gzip.GzipHandler.handle(GzipHandler.java:717) at org.eclipse.jetty.server.handler.RequestLogHandler.handle(RequestLogHandler.java:54) at org.eclipse.jetty.server.handler.StatisticsHandler.handle(StatisticsHandler.java:173) at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127) at org.eclipse.jetty.server.Server.handle(Server.java:501) at org.eclipse.jetty.server.HttpChannel.lambda$handle$1(HttpChannel.java:383) at org.eclipse.jetty.server.HttpChannel.dispatch(HttpChannel.java:556) at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:375) at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:273) at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:311) at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:105) at org.eclipse.jetty.io.ChannelEndPoint$1.run(ChannelEndPoint.java:104) at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:336) at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:313) at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:171) at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:129) at 
org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:375) at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:806) at org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:938) at java.base/java.lang.Thread.run(Thread.java:829)

This error occurred when restarting the app. Then, when the scheduled run time arrived, I encountered the error below:

[QuartzScheduler_Worker-10] ERROR org.easybatch.core.job.BatchJob - Unable to open record reader com.splunk.dbx.server.exception.ConfMigrationFailException: Fail to migrate conf input, stanza name: <input name>. at com.splunk.dbx.server.dbinput.recordreader.DbInputRecordReader.migrateInputConfiguration(DbInputRecordReader.java:104) at com.splunk.dbx.server.dbinput.recordreader.DbInputRecordReader.open(DbInputRecordReader.java:50) at org.easybatch.core.job.BatchJob.openReader(BatchJob.java:140) at org.easybatch.core.job.BatchJob.call(BatchJob.java:97) at org.easybatch.extensions.quartz.Job.execute(Job.java:59) at org.quartz.core.JobRunShell.run(JobRunShell.java:202) at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:573)

I need help. What is going wrong in this situation? The other scheduled input jobs run fine.
Hi, I added a drilldown in a dashboard. With this piece of code, I have a list of equipment in the drilldown:

|inputlookup lookup.csv
|stats count by id
|fields id

Is it possible to add an "All" option, like the filter in Excel? If yes, how can I do it, please? If not, does anyone have an idea of what I could do instead? Basically, I want to be able to choose one single piece of equipment or all equipment.

Thanks, Julia
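A sketch of the usual pattern for this in a classic (Simple XML) dashboard, assuming a dropdown input whose token is used in the panel searches as id=$equip_id$ (the token name is a placeholder): add a static "All" choice whose value is a wildcard, so selecting it matches every id from the lookup.

<input type="dropdown" token="equip_id" searchWhenChanged="true">
  <label>Equipment</label>
  <!-- static "All" entry shown above the dynamically populated list -->
  <choice value="*">All</choice>
  <search>
    <query>| inputlookup lookup.csv | stats count by id | fields id</query>
  </search>
  <fieldForLabel>id</fieldForLabel>
  <fieldForValue>id</fieldForValue>
  <default>*</default>
</input>

The panel search then filters with something like id=$equip_id$ (or id="$equip_id$"), so the wildcard value returns all equipment while any specific selection returns just that one.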
Hello everyone, I am fairly new to Splunk and learning on the fly, so it would be super nice if someone could help me solve this issue or guide me on how to deal with it for now.

We have an index cluster of 5 nodes across 2 sites. Site 1 has 2 nodes with 9.1 TB each and site 2 has 3 nodes with 20 TB each... don't ask me why; there was some confusion with the initial installation of the machines. So we have almost full disks in site 1 (98% of storage used) and around 50% on the machines in site 2. Yesterday we added another instance to site 1 with 20 TB of disk, but it does not seem to offload the other 2 nodes in site 1.

What are our options here? Shall we run index rebalancing from the manager node? Any guidance will be much appreciated.

Regards
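For reference, a sketch of the data rebalance commands as they are normally run on the cluster manager (from $SPLUNK_HOME/bin); data rebalance redistributes existing bucket copies across the peers, and in a multisite cluster it still has to respect the site replication factor, so copies assigned to site 1 should spread across the site 1 peers, including the new one:

# on the cluster manager node
splunk rebalance cluster-data -action start
# check progress, or stop the rebalance if needed
splunk rebalance cluster-data -action status
splunk rebalance cluster-data -action stop

Newly indexed data will also tend to favor the emptier peer over time, but rebalancing is usually the quicker way to relieve peers that are already near full.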