All Topics

I need a list of indexes that were created in the last 30 days, along with the creation date of each of those indexes. I have used this query:

| rest /services/data/indexes
| search totalEventCount > 0
| eval now=strftime(now(), "%Y-%m-%d")
| stats first(minTime) as first_date first(now) as now first(currentDBSizeMB) as currentDBSizeMB by title
| eval comparison_date=now()-30*86400
| sort - first_date
| eval first_date=strptime(first_date,"%Y-%m-%dT%H:%M:%S.%6N")
| eval status=if(first_date
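For reference, a minimal sketch of the 30-day comparison the query above is building toward. Note the assumptions: it treats minTime (the earliest event in the index) as a proxy for creation date, since the REST endpoint does not expose a true creation timestamp, and the exact timestamp format of minTime may vary by Splunk version.

```spl
| rest /services/data/indexes
| search totalEventCount > 0
| eval first_epoch=strptime(minTime, "%Y-%m-%dT%H:%M:%S%z")
| where first_epoch >= now() - 30*86400
| eval first_date=strftime(first_epoch, "%Y-%m-%d")
| table title first_date currentDBSizeMB
```

Comparing epoch numbers directly (rather than formatted strings) avoids the string-vs-number pitfalls in the original query.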
I am using the Python SDK to connect to Splunk. After running my Python script I get this error. Please help me solve it. A similar question was asked here, but no resolution has been found yet: https://answers.splunk.com/answers/23655/fatal-unable-to-read-the-job-status.html
Hi, I want to show running applications on a line chart. The process start time is _time, and each event has a duration and an application. I want a chart by application where the line goes up at _time and comes back down after the given duration. For example, here are three events:

_time                host  duration  Application
2020-04-21 16:51:29  ABC   01:43     XXXX
2020-04-21 16:46:29  ABC   03:56     XXXX
2020-04-21 16:56:29  ABC   06:43     YYYY

So for the first event, the line should go up at 2020-04-21 16:51:29 and come back down after the 01:43 duration. Is this possible? Please suggest.
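One common sketch for this kind of on/off chart (assuming duration is MM:SS and the field names above; adjust the split if it is really HH:MM): emit a +1 marker at the start time and a -1 marker at start + duration, then run a cumulative sum per application:

```spl
... your base search ...
| eval dur_sec = 60 * tonumber(mvindex(split(duration, ":"), 0)) + tonumber(mvindex(split(duration, ":"), 1))
| eval marker = mvappend(_time . ",1", (_time + dur_sec) . ",-1")
| mvexpand marker
| eval _time = tonumber(mvindex(split(marker, ","), 0)), delta = tonumber(mvindex(split(marker, ","), 1))
| sort 0 _time
| streamstats sum(delta) as running by Application
| timechart span=1m max(running) by Application
```

The running sum rises at each start and falls after each duration, which gives the up/down line shape per application.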
hello, i have this query:

| tstats count as daily_count summariesonly=true allow_old_summaries=true from datamodel="events_prod" by events.eventtype events.tail_id sourcetype _time span=1d
| eval day=strftime(_time, "%Y-%m-%d")
| multireport
    [ table daily_count, events.eventtype, day, events.tail_id, sourcetype ]
    [ stats values(events.eventtype) as events.eventtype, values(day) as day, values(events.tail_id) as events.tail_id
      | mvexpand events.eventtype
      | mvexpand day
      | mvexpand events.tail_id
      | eval daily_count=0 ]
| eventstats first(sourcetype) as sourcetype by events.eventtype
| stats first(daily_count) as daily_count by events.eventtype, day, events.tail_id, sourcetype
| rename day as _time
| streamstats sum(daily_count) as general by events.tail_id sourcetype time_window=30d
| where general!=0
| streamstats sum(daily_count) as monthly_count by events.eventtype events.tail_id time_window=30d
| table events.eventtype, monthly_count

This calculates the number of events for each eventtype over a period of 30 days. The calculation also needed to include days with no events, so I added rows with an event count of 0 to the query. Now I want to remove from the calculation the rows whose sourcetype and tail_id had no events at all in the last 30 days, and also remove the rows whose daily count = 0 within that empty time period. What should I add to my query? Thanks
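One possible sketch of the extra filter (assuming the field names from the query above): total the activity per tail_id/sourcetype combination, keep only combinations with any events, then drop the zero-count filler rows:

```spl
...
| eventstats sum(daily_count) as window_total by events.tail_id, sourcetype
| where window_total > 0 AND NOT (daily_count = 0 AND monthly_count = 0)
| table events.eventtype, monthly_count
```

Caveat: eventstats here totals the whole search window; if a strict rolling 30 days is needed, reuse the streamstats time_window=30d pattern already in the query instead.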
Hi, I just tried to deploy a Splunk ES Sandbox and also registered a new account at the same time. The flow was roughly as follows:

1. I registered a user account
2. I received a Splunk ES Sandbox email with the instance URL
3. I verified my email address
4. I attempted to access the URL but got this error at https://exsso.splunk.com/idp/SSO.saml2:

Error
Unexpected System Error
Sorry for the inconvenience. Please contact your administrator for assistance and provide the reference number below to help locate and correct the problem.
Reference#: wcexsv

5. I attempted to access the instance from my account while logged in on Splunk.com and got a 404 stating "Invalid Account in URL".

I would appreciate some help, as I'm eager to try out Splunk.
Dear All, I have a problem. I'm wondering if anyone has experienced an issue where the disk data displayed by AppDynamics is not the same as the data on the server. AppDynamics displays disk usage on OPT at 53%, while on the server it is 56%; there is a difference of 3%, as attached. Please advise. Shandi Aji
I want to add an image/logo to an XML dashboard. The image is in a GitHub repository; please find the image link below. Please let me know how I can achieve this. https://raw.githubusercontent.com/rajapatra/Test/master/bg.jpg Thanks in advance.
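A minimal Simple XML sketch (the dashboard label and styling are placeholders): an html panel whose img tag points at the raw GitHub URL:

```xml
<dashboard>
  <label>Logo demo</label>
  <row>
    <panel>
      <html>
        <img src="https://raw.githubusercontent.com/rajapatra/Test/master/bg.jpg"
             alt="logo" style="width:200px"/>
      </html>
    </panel>
  </row>
</dashboard>
```

Note that viewers' browsers must be able to reach githubusercontent.com; copying the file into the app's appserver/static directory avoids the external dependency.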
I have a dashboard where one can fill in the input values and hit the submit button. I execute a query and show the results in a table. If someone clicks on a row (drilldown set on row), it opens another dashboard and shows additional information for the clicked row. In that second dashboard, I had to execute the search query again (with tokens received from the first dashboard). Since I have all the fields from the first dashboard (some fields are hidden), I can pass them to the second dashboard by setting tokens. Now, how can I use the values of the passed tokens in the table of the second dashboard, so I can avoid running the search query again? I tried to use an HTML table, but with the styling and all, I think it's overkill for such a simple task. Any help would be appreciated.
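One lightweight sketch for the second dashboard (the token names $field1$ and $field2$ are placeholders for whatever the drilldown actually passes): build the table from the tokens with makeresults instead of re-running the original search:

```spl
| makeresults
| eval field1="$field1$", field2="$field2$"
| table field1, field2
```

This is still technically a search, but it touches no indexes, so it is effectively free compared to repeating the original query.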
Hi All! I want to get health rule violations via the API from the controller. For applications, it works well; I followed this guide: https://docs.appdynamics.com/display/PRO45/Events+and+Action+Suppression+API However, I also want to extract the health rules and the violations for Servers and Databases, but I did not find the correct API endpoint for these. Has anyone tried something similar before? Any clue/hint would be much appreciated. Best, Sandor
We created certificates and provided all the information in web.conf:

[settings]
enableSplunkWebSSL = 1
privKeyPath = /opt/splunk/etc/auth/mycerts/CertAwsDev/private.key
serverCert = /opt/splunk/etc/auth/mycerts/CertAwsDev/Cert.pem
httpport = 443

But port 443 is not being established on the server (checked with netstat -aen | grep 443). For this reason the instance shows as unhealthy (502) in the AWS target groups.
Please help me with indexing a value derived from the source field into a new field at index time. Please help with transforms.conf/props.conf. I need to extract from the source field only the script name into the new field. The source field value will be /splunk_home/etc/apps/bin/python.py
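A hedged sketch of an index-time extraction (the stanza name script_name_extract, the field name script_name, and the sourcetype are placeholders; the regex assumes the script name is the final path segment):

```ini
# transforms.conf
[script_name_extract]
SOURCE_KEY = MetaData:Source
REGEX = ([^/]+)$
FORMAT = script_name::$1
WRITE_META = true

# props.conf
[your_sourcetype]
TRANSFORMS-scriptname = script_name_extract

# fields.conf (so the indexed field is searchable without an extraction)
[script_name]
INDEXED = true
```

This must be deployed on the indexer or heavy forwarder that parses the data; for /splunk_home/etc/apps/bin/python.py it would index script_name=python.py.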
How can I extract the content of the first bracket if it is a string? e.g.:

2020-04-21 23:59:59,093 INFO xxx.xxx-zz-00000 [process] start[ppp] time[00] tag[xxx]
2020-04-21 23:59:59,093 INFO xxx.xxx-zz-00000 [1234567] start[ppp] time[00] tag[xxx]
....

Expected result: process

I have a huge log file and need to extract the process with these conditions:
1 - the content of the first bracket
2 - it must be a string, not a number!

Thanks,
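The pattern can be prototyped outside Splunk first. A sketch in Python, assuming "string" means purely alphabetic (if mixed alphanumerics like abc123 should also match, widen the character class):

```python
import re

# Match the first [...] on the line, capturing it only when the content
# is purely alphabetic. [^\[]* cannot skip past a '[', so the engine is
# pinned to the first bracket and fails cleanly when it holds a number.
PATTERN = re.compile(r"^[^\[]*\[([A-Za-z]+)\]")

def first_bracket_string(line):
    """Return the first bracketed token if it is alphabetic, else None."""
    m = PATTERN.match(line)
    return m.group(1) if m else None

print(first_bracket_string(
    "2020-04-21 23:59:59,093 INFO xxx.xxx-zz-00000 [process] start[ppp] time[00] tag[xxx]"))  # process
print(first_bracket_string(
    "2020-04-21 23:59:59,093 INFO xxx.xxx-zz-00000 [1234567] start[ppp] time[00] tag[xxx]"))  # None
```

In SPL the equivalent would be | rex "^[^\[]*\[(?<process>[A-Za-z]+)\]".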
Hi All, I need your suggestions and guidance on the following. To get more space on the dashboard, I want to move the input filters into the header section itself (attached in the pic). Edited: @to4kawa please find the attached pic; let me know if you still need clarification.
Hi, I would like to extract field values from the UI using field transformations and field extractions from Settings. I have added the field extraction and referenced a field transform. I follow the same naming conventions as my other field extractions with transforms, and those work well.

props.conf:
[sourcetype]
REPORT-IP = REPORT-IP

transforms.conf:
[REPORT-IP]
FORMAT = IP::$1
MV_ADD = 1
REGEX = c=IN IP4 (\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})
SOURCE_KEY = list

I have verified the regex and it captures correctly when checked standalone; however, I don't see the expected multivalue field in Splunk with the values extracted. Is anything else missing? Thanks
Hi Splunk users, I know that we can change the ownership of alerts in this file: /etc/apps/App_Name/metadata/local.meta. But I need a strategy to change the ownership of alerts in bulk (we have too many alerts). This is an example of one of the alert stanzas in my local.meta file:

[savedsearches/myAlert1]
export = none
owner = admin
version = 7.3.0
modtime = 1577505159.261205000

I tried to replace the stanza head with [savedsearches/*], hoping that the wildcard would represent all of the alerts in this app, but it did not work. Looking for suggestions. Regards.
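Stanza heads in .meta files do not support wildcards, so one workaround is to script the edit. A sketch (assuming GNU sed; the miniature local.meta built here, the old owner admin, and the new owner newowner are placeholders for the real file and users):

```shell
# Demo file: a miniature local.meta with two alert stanzas.
cat > local.meta <<'EOF'
[savedsearches/myAlert1]
export = none
owner = admin

[savedsearches/myAlert2]
export = none
owner = admin
EOF

cp local.meta local.meta.bak                             # always keep a backup
sed -i 's/^owner = admin$/owner = newowner/' local.meta  # bulk ownership change
grep -c '^owner = newowner$' local.meta                  # prints 2
```

Splunk generally only picks up direct .meta edits on restart, and reassigning knowledge objects through the REST ACL endpoint is the supported route, so treat direct edits as a last resort.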
Hello all, I have RHEL 8.1 with a Linux 4.x kernel. splunk-8.0.2-a7f645ddaf91-linux-2.6-x86_64.rpm should be the right version for my system, right? I'm just a bit concerned seeing the "2.6" in the RPM package name.

8.0.2:
splunk-8.0.2-a7f645ddaf91-linux-2.6-x86_64.rpm
splunk-8.0.2-a7f645ddaf91-linux-2.6-amd64.deb
splunk-8.0.2-a7f645ddaf91-Linux-x86_64.tgz

Thanks in advance!
I'm using the TA pfsense app and am trying to fix some sourcetype extraction issues. The app is supposed to use a transform to extract the sourcetype. Most logs have a prepended timestamp, but nginx does not. There is a regex that uses a non-capture group to grab the timestamp in most logs and then selects the log type for the sourcetype. I edited this and added an "or" alternative to also catch the nginx logs, but it does not seem to work. So I then created a second sourcetype transform for the nginx logs, but that is not working either. What is the proper way to parse the same log stream multiple times inline with a regex and use transforms to label both kinds of logs with their proper sourcetypes? I can't seem to find a good method for this in the docs. Thanks.
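For what it's worth, a sketch of chaining two sourcetype-override transforms (the stanza names, the pfsense sourcetype, and both regexes are illustrative placeholders, not the TA's actual patterns); entries in a TRANSFORMS list run in order and each is tried independently against the event:

```ini
# props.conf
[pfsense]
TRANSFORMS-sourcetype = pfsense_timestamped, pfsense_nginx

# transforms.conf
[pfsense_timestamped]
REGEX = ^(?:\w{3}\s+\d+\s[\d:]+)\s\S+\s(filterlog|dhcpd)
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::pfsense:$1

[pfsense_nginx]
REGEX = \snginx(?:\[|:)
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::pfsense:nginx
```

Since every matching transform writes to the same destination key, the last match wins; order the list from general to specific.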
Having trouble establishing a connection to Teradata with Splunk DBX v3.3.0.

Using:
JDK v1.8.0_131
Teradata JDBC driver v16.20
terajdbc4.jar saved in /Applications/Splunk/etc/apps/splunk_app_db_connect/drivers
tdgssconfig.jar saved in /Applications/Splunk/etc/apps/splunk_app_db_connect/drivers/terajdbc4-libs

/Applications/Splunk/etc/apps/splunk_app_db_connect/local/db_connections.conf has the settings below, populated from the identities and connections setup:

[Splunk_TD]
connection_type = teradata
database = js255063
disabled = 0
host = host.teradata.com
identity = SPlunk_to_TD
jdbcUseSSL = false
localTimezoneConversionEnabled = false
port = 1025
readonly = false
timezone = Australia/Sydney
customizedJdbcUrl =

When trying to save the above connection settings, I get this error:

2020-04-22 13:08:22.479 +1000 [dw-59 - POST /api/connections/status] ERROR io.dropwizard.jersey.errors.LoggingExceptionMapper - Error handling a request: ddf18f64885b8955
java.lang.NullPointerException: null
    at com.splunk.dbx.connector.logger.AuditLogger.replace(AuditLogger.java:50)
    at com.splunk.dbx.connector.logger.AuditLogger.error(AuditLogger.java:44)
    at com.splunk.dbx.server.api.service.database.impl.DatabaseMetadataServiceImpl.getStatus(DatabaseMetadataServiceImpl.java:159)
    at com.splunk.dbx.server.api.service.database.impl.DatabaseMetadataServiceImpl.getConnectionStatus(DatabaseMetadataServiceImpl.java:116)
    at com.splunk.dbx.server.api.resource.ConnectionResource.getConnectionStatusOfEntity(ConnectionResource.java:72)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.glassfish.jersey.server.model.internal.ResourceMethodInvocationHandlerFactory.lambda$static$0(ResourceMethodInvocationHandlerFactory.java:52)
    at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher$1.run(AbstractJavaResourceMethodDispatcher.java:124)
    at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.invoke(AbstractJavaResourceMethodDispatcher.java:167)
    at org.glassfish.jersey.server.model.internal.JavaResourceMethodDispatcherProvider$TypeOutInvoker.doDispatch(JavaResourceMethodDispatcherProvider.java:219)
    at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.dispatch(AbstractJavaResourceMethodDispatcher.java:79)
    at org.glassfish.jersey.server.model.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:469)
    at org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:391)
    at org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:80)
    at org.glassfish.jersey.server.ServerRuntime$1.run(ServerRuntime.java:253)
    at org.glassfish.jersey.internal.Errors$1.call(Errors.java:248)
    at org.glassfish.jersey.internal.Errors$1.call(Errors.java:244)
    at org.glassfish.jersey.internal.Errors.process(Errors.java:292)
    at org.glassfish.jersey.internal.Errors.process(Errors.java:274)
    at org.glassfish.jersey.internal.Errors.process(Errors.java:244)
    ...
I am using Splunk version 7.3.2. I am trying to find the runtime input configuration on a Splunk heavy forwarder using a REST API endpoint. I have noticed a difference in results between accessing the input configuration on the heavy forwarder with btool and with the Splunk API endpoint: some of the monitor input stanzas that btool shows on the Splunk instance are not part of the API call results.

REST API call: a GET request to retrieve the configured inputs from the endpoint https://somethingishere:8089/services/data/inputs/monitor

btool command (filtered on the keyword syslog): ./splunk btool inputs list

Is there an endpoint to capture the currently running monitor input configurations, or another way of doing this?
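For completeness, a sketch of the REST call (the host name is taken from the question and the credentials are placeholders; count=-1 asks for all entries instead of the default first page of 30, which by itself can hide stanzas):

```shell
curl -k -u admin:changeme \
  "https://somethingishere:8089/services/data/inputs/monitor?count=-1"
```

One further caveat: REST results are scoped to the calling user's app/user visibility, while btool merges every inputs.conf on disk regardless of app visibility, which can also account for stanzas appearing in one listing and not the other.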
My Splunk version is 8.0.1 on Windows. I am using DB Connect and dbxquery to get data from an Oracle 11g database; the driver is ojdbc6. I search with a very simple query:

| dbxquery connection="orcl_local" query="select * from dept"

The table is a newly created testing table with 3 columns and only 14 records. I get some records back from the query, but with data missing in the last record for the last column (blank). After inserting more records into the table, sometimes 1-2 records are missing, and sometimes data is missing in other columns as well.