Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

I don't know what to specify in the TIME_FORMAT so that it captures the date (<ActionDate>) and time (<ActionTime>), whose values are split across separate lines.

XML file:

<Interceptor>
  <AttackCoords>-80.33100097073213,25.10742916222947</AttackCoords>
  <Outcome>Interdiction</Outcome>
  <Infiltrators>23</Infiltrators>
  <Enforcer>Ironwood</Enforcer>
  <ActionDate>2013-04-24</ActionDate>
  <ActionTime>00:07:00</ActionTime>
  <RecordNotes></RecordNotes>
  <NumEscaped>0</NumEscaped>
  <LaunchCoords>-80.23429525620114,24.08680387475695</LaunchCoords>
  <AttackVessel>Rustic</AttackVessel>
</Interceptor>

This is the configuration I have in my props.conf:

BREAK_ONLY_BEFORE_DATE =
DATETIME_CONFIG =
LINE_BREAKER = </Interceptor>([\r\n]+)
NO_BINARY_CHECK = true
SHOULD_LINEMERGE = false
category =
disabled = false
pulldown_type = true
TIME_FORMAT = %Y-%m-%d %H:%M:%S
TIME_PREFIX = <ActionDate>

The TIME_FORMAT is the part I need to correct. I tried the following, but it didn't work:

TIME_FORMAT = %Y-%m-%d</ActionDate>%n<ActionTime>%H:%M:%S

Any ideas?
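For intuition on why a format like this can work at all: strptime-style patterns (which Splunk's TIME_FORMAT is modeled on) treat any text between conversion specifiers as literal characters that must appear in the event, and in Python's strptime a whitespace character in the pattern matches any run of whitespace, including a newline. The sketch below only demonstrates those strptime mechanics in Python; whether Splunk's own parser accepts the same literal-text trick for this event still needs to be verified, and both elements must fall within the timestamp lookahead window.

```python
from datetime import datetime

# The date and time live in separate XML elements on separate lines,
# so the pattern treats the intervening markup as literal text.
raw = "<ActionDate>2013-04-24</ActionDate>\n<ActionTime>00:07:00</ActionTime>"

# The single space in the pattern matches the newline between the elements.
fmt = "<ActionDate>%Y-%m-%d</ActionDate> <ActionTime>%H:%M:%S</ActionTime>"

ts = datetime.strptime(raw, fmt)
print(ts)  # 2013-04-24 00:07:00
```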
I am getting only 100 results using this option; could someone suggest how we can get all of them?

import splunklib.client as client
import splunklib.results as results

service = client.connect(host='SPLUNKSERVER', port=8089,
                         username='admin', password='PASSWORD')

kwargs_oneshot = {"earliest_time": "-2h", "latest_time": "now"}
searchquery_oneshot = "| metadata type=hosts index=wineventlog"

oneshotsearch_results = service.jobs.oneshot(searchquery_oneshot, **kwargs_oneshot)
reader = results.ResultsReader(oneshotsearch_results)
for item in reader:
    print(item)
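The 100-row ceiling is typically the oneshot endpoint's default for its `count` parameter; passing `"count": 0` in `kwargs_oneshot` alongside the time bounds is the commonly cited way to lift that cap — treat that as a suggestion to verify rather than a guarantee. Independently, a normal (non-oneshot) search job can be paged through with `job.results(offset=..., count=...)`; the sketch below shows that paging loop with the REST call stubbed out so the loop itself is runnable:

```python
def fetch_all(fetch_page, page_size=100):
    """Page through results with offset/count until a page comes back empty --
    the same loop works with splunklib's job.results(offset=..., count=...)."""
    rows, offset = [], 0
    while True:
        page = fetch_page(offset=offset, count=page_size)
        if not page:
            break
        rows.extend(page)
        offset += len(page)
    return rows

# Stand-in for the real REST call so the paging logic is runnable here.
data = [{"host": "host%d" % i} for i in range(250)]
def fake_page(offset, count):
    return data[offset:offset + count]

all_rows = fetch_all(fake_page)
print(len(all_rows))  # 250
```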
We are attempting to add three nodes to an existing SHC. We are able to add them; however, they do not survive a restart without going back to a "DOWN" state. The fix is to stop Splunk, clean raft, and add the member back. However, doing this only changes its status to "UP" until the next restart.

/opt/splunk/bin/splunk clean raft
/opt/splunk/bin/splunk start
/opt/splunk/bin/splunk add shcluster-member -current_member_uri https://<url>:8089

Any ideas? Should we perform a bootstrap as shown here? https://docs.splunk.com/Documentation/Splunk/8.0.4/DistSearch/Handleraftissues
We recently upgraded from 5.0.4 to 8.0.3 ... and one of the remaining issues from that upgrade is a couple of views that are no longer functioning. The message is: "Looks like this view is using Advanced XML, which has been removed from Splunk Enterprise." I have very little HTML experience and even less when it's combined with Splunk. Can anyone help me identify the Advanced XML that is referenced in the message? Here is my view source.

<view autoCancelInterval="90" isSticky="False" isVisible="true" onunloadCancelJobs="true" template="dashboard.html"> <label>PROD</label> <module name="SideviewUtils" layoutPanel="appHeader" /> <module name="AccountBar" layoutPanel="appHeader" /> <module name="AppBar" layoutPanel="appHeader" /> <module name="Message" layoutPanel="messaging"> <param name="filter">*</param> <param name="maxSize">2</param> <param name="clearOnJobDispatch">False</param> </module> <module name="TimeRangePicker" layoutPanel="panel_row1_col1" autoRun="True"> <module name="SideviewUtils" layoutPanel="appHeader" /> <module name="HTML"> <param name="html"><![CDATA[Instructions: Click on the javathreadid field to search for the corresponding SelectSite apps_log events]]></param> </module> <param name="searchWhenChanged">True</param> <param name="selected">Last 30 minutes</param> <module name="Search" layoutPanel="panel_row2_col1" autoRun="True"> <param name="search"><![CDATA[index=db2 sourcetype=lockevents participant_no=1 | rex field=_raw "stmt_text=(?<rqstr_sql>.*)" | rex field=_raw "event_timestamp=(?<lock_ts>.*)" | eval rqstr_appserver=case( hostname="XX.XXX.XXX.45", "serverA.mycompany.com", hostname="XX.XXX.XXX.43", "serverB.mycompany.com", hostname="XX.XXX.XXX.23", "serverC.mycompany.com", hostname="XX.XXX.XXX.41", "serverD.mycompany.com", hostname="XX.XXX.XXX.46", "serverE.mycompany.com", hostname="XX.XXX.XXX.24", "serverF.mycompany.com" ) | rename javathreadid as rqstr_javathreadid, application_handle as rqstr_apphandle | join type=inner 
event_id [ search index=db2 sourcetype=lockevents participant_no=2 | rex field=_raw "stmt_text=(?<owner_sql>.*)" | rename javathreadid as owner_javathreadid, application_handle as owner_apphandle | eval owner_appserver=case( hostname="XX.XXX.XXX.45", "server.mycompany.com", hostname="XX.XXX.XXX.43", "server.mycompany.com", hostname="XX.XXX.XXX.23", "server.mycompany.com", hostname="XX.XXX.XXX.41", "server.mycompany.com", hostname="XX.XXX.XXX.46", "serverE.mycompany.com", hostname="XX.XXX.XXX.24", "serverF.mycompany.com" ) | fields event_id, owner_sql, owner_javathreadid, owner_apphandle, owner_appserver ] | fillnull value="None Available" | dedup lock_ts, host, event_id, event_type, rqstr_javathreadid, owner_javathreadid, rqstr_apphandle, owner_apphandle, tablename, rqstr_appserver, owner_appserver | table lock_ts, host, event_id, event_type, rqstr_javathreadid, owner_javathreadid, rqstr_apphandle, owner_apphandle, tablename, rqstr_appserver, owner_appserver | sort -lock_ts, -event_id +participant_no]]></param> <module name="Pager"> <module name="SimpleResultsTable" > <param name="drilldown">row</param> <module name="Search" layoutPanel="panel_row3_col1"> <param name="search"><![CDATA[(index=db2 host="$click.fields.host$") OR (index=db2uit host="$click.fields.host$") event_id=$click.fields.event_id$ application_handle=$click.fields.rqstr_apphandle$ | rex field=_raw "stmt_text=(?<sql>.*)" | rex field=_raw "event_timestamp=(?<lock_ts>.*)" | rex field=_raw "stmt_first_use_time=(?<stmt_first_use_ts>.*)" | rex field=_raw "stmt_last_use_time=(?<stmt_last_use_ts>.*)" | table activity_id, stmt_first_use_ts, stmt_last_use_ts, sql | sort -activity_id ]]></param> <module name="HTML"> <param name="html"><![CDATA[<h3>Requester SQL Statement Search:</h3><b>$search$</b>]]></param> </module> <module name="SimpleResultsTable"></module> </module> <module name="Search" layoutPanel="panel_row3_col2"> <param name="search"><![CDATA[(index=db2 host="$click.fields.host$") OR (index=db2uit 
host="$click.fields.host$") event_id=$click.fields.event_id$ application_handle=$click.fields.owner_apphandle$ | rex field=_raw "stmt_text=(?<sql>.*)" | rex field=_raw "event_timestamp=(?<lock_ts>.*)" | rex field=_raw "stmt_first_use_time=(?<stmt_first_use_ts>.*)" | rex field=_raw "stmt_last_use_time=(?<stmt_last_use_ts>.*)" | table activity_id, stmt_first_use_ts, stmt_last_use_ts, sql | sort -activity_id ]]></param> <module name="HTML"> <param name="html"><![CDATA[<h3>Owner SQL Statement Search:</h3><b>$search$</b>]]></param> </module> <module name="SimpleResultsTable"></module> </module> <module name="Search" layoutPanel="panel_row4_col1"> <param name="search"><![CDATA[(index=ss host="$click.fields.rqstr_appserver$") OR (index=ssuit host="$click.fields.rqstr_appserver$") $click.fields.rqstr_javathreadid$ ]]></param> <module name="HTML"> <param name="html"><![CDATA[<h3>Requester Search:</h3><b>$search$</b>]]></param> </module> <module name="SoftWrap"> <module name="RowNumbers"> <module name="MaxLines"> <param name="options"> <list> <param name="text">5</param> <param name="selected">True</param> <param name="value">5</param> </list> <list> <param name="text">10</param> <param name="value">10</param> </list> <list> <param name="text">20</param> <param name="value">20</param> </list> <list> <param name="text">50</param> <param name="value">50</param> </list> <list> <param name="text">100</param> <param name="value">100</param> </list> <list> <param name="text">200</param> <param name="value">200</param> </list> <list> <param name="text">All</param> <param name="value">0</param> </list> </param> <module name="Segmentation"> <param name="options"> <list> <param name="text">inner</param> <param name="selected">True</param> <param name="value">inner</param> </list> <list> <param name="text">outer</param> <param name="value">outer</param> </list> <list> <param name="text">full</param> <param name="value">full</param> </list> <list> <param name="text">raw</param> <param 
name="value">raw</param> </list> </param> <module name="Events"> <param name="allowTermClicks">False</param> <param name="fields">series source kb eps</param> <param name="resizeMode">fixed</param> <param name="height">300px</param> </module> </module> </module> </module> </module> </module> <module name="Search" layoutPanel="panel_row4_col2"> <param name="search"><![CDATA[(index=ss host="$click.fields.owner_appserver$") OR (index=ssuit host="$click.fields.owner_appserver$") $click.fields.owner_javathreadid$ ]]></param> <module name="HTML"> <param name="html"><![CDATA[<h3>Owner Search:</h3><b>$search$</b>]]></param> </module> <module name="SoftWrap"> <module name="RowNumbers"> <module name="MaxLines"> <param name="options"> <list> <param name="text">5</param> <param name="selected">True</param> <param name="value">5</param> </list> <list> <param name="text">10</param> <param name="value">10</param> </list> <list> <param name="text">20</param> <param name="value">20</param> </list> <list> <param name="text">50</param> <param name="value">50</param> </list> <list> <param name="text">100</param> <param name="value">100</param> </list> <list> <param name="text">200</param> <param name="value">200</param> </list> <list> <param name="text">All</param> <param name="value">0</param> </list> </param> <module name="Segmentation"> <param name="options"> <list> <param name="text">inner</param> <param name="selected">True</param> <param name="value">inner</param> </list> <list> <param name="text">outer</param> <param name="value">outer</param> </list> <list> <param name="text">full</param> <param name="value">full</param> </list> <list> <param name="text">raw</param> <param name="value">raw</param> </list> </param> <module name="Events"> <param name="allowTermClicks">False</param> <param name="fields">series source kb eps</param> <param name="resizeMode">fixed</param> <param name="height">300px</param> </module> </module> </module> </module> </module> </module> </module> 
</module> </module> </module> </view>  
Hi folks, we would like our load balancer to check Splunk search head health before redirecting traffic. Could you please share the URL that should be configured on the load balancer? We are looking at using /debug/echo?ping=OK and parsing for the OK, but that is less than optimal because debug has to be enabled in web.conf. Please suggest an alternative.
My Splunk services won't start:

06-17-2020 11:13:37.100 -0400 ERROR BTreeCP - open failed to restore checkpoint in btree='/opt/splunk/var/lib/splunk/fishbucket/splunk_private_db', it may be corrupted -- run `SPLUNK_HOME/bin/btprobe -d '/opt/splunk/var/lib/splunk/fishbucket' -r` to attempt to repair

This instance runs Enterprise Security plus a heavy forwarder and is on version 7.x; all my other servers are on 8.x. Due to a lot of dependencies on Python 2.7 we did not upgrade this one, but it worked fine for many months.
Hi, I use the saved search below to append data every 15 minutes to a lookup file. I use the lookup file in my dashboard, update a field in each record, and use outputlookup to store the modified information. Currently I am not able to see any new records in the lookup file; when I searched, I could see only 10,000 records in the statistics. If I remove old records from the lookup, then I am able to load new data into it. Is there any solution here? I want to store at least a month of data in the lookup, and every day close to 2,000 to 7,000 records will be written to the lookup file.

Saved search:

index="application_xyz" source=abc.log SEVERITY_LEVEL=ERROR
| eval Date_Time = strftime(_time,"%d-%m-%Y %H:%M:%S")
| eval "Package_Name, Procedure_Name, File_Name, Host_Name" = PACKAGE_NAME+","+PROCEDURE_NAME+","+FILE_NAME+","+HOST_NAME
| makemv delim="," "Package_Name, Procedure_Name, File_Name, Host_Name"
| rename RECORD_NUMBER as ID PLATFORM_INSTANCE AS Platform INSTITUTION_NUMBER AS Institution MESSAGE_TEXT as Message_Text PROGRAM_ID as Program_ID PROGRAM_RUN_ID as Program_Run_ID PROGRAM_NAME as Program_Name
| eval Ack = "", Ack_By = ""
| table ID Platform Institution Date_Time Program_ID, Program_Run_ID, Program_Name "Package_Name, Procedure_Name, File_Name, Host_Name" Message_Text Ack
| outputlookup ram_error.csv append=true
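A hard stop at exactly 10,000 rows usually points at a configured result or lookup limit (for example in limits.conf) rather than data loss, so that is worth checking first. Independent of the limit, keeping only the last month of rows bounds the file size; in SPL that would be roughly `| inputlookup ram_error.csv | where <age filter> | outputlookup ram_error.csv` on a schedule. The Python sketch below illustrates the pruning idea offline, using the `Date_Time` format the saved search produces (row contents are hypothetical):

```python
from datetime import datetime, timedelta

def prune_rows(rows, now, days=30, fmt="%d-%m-%Y %H:%M:%S"):
    """Keep only rows whose Date_Time falls within the last `days` days."""
    cutoff = now - timedelta(days=days)
    return [r for r in rows
            if datetime.strptime(r["Date_Time"], fmt) >= cutoff]

rows = [
    {"ID": "1", "Date_Time": "01-05-2020 10:00:00"},  # older than 30 days
    {"ID": "2", "Date_Time": "15-06-2020 10:00:00"},  # recent
]
recent = prune_rows(rows, now=datetime(2020, 6, 18))
print([r["ID"] for r in recent])  # ['2']
```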
The first time the dashboard loads, the change condition is detected and the correct part is substituted in the Web Request panel. When I change the value in the drop-down, the change condition is not recognized, and the query in the panel ends up like "cart | timechart count".

<form>
  <label>Test</label>
  <fieldset submitButton="false" autoRun="true">
    <input type="time" token="timePicker">
      <label />
      <default>
        <earliest>-4h@m</earliest>
        <latest>now</latest>
      </default>
    </input>
    <input type="dropdown" token="serviceName">
      <label>Service</label>
      <choice value="">Select a Service</choice>
      <choice value="lpd">LPD</choice>
      <choice value="cart">Cart</choice>
      <choice value="identity">Identity</choice>
      <default>Select a Service</default>
      <initialValue>Select a Service</initialValue>
      <change>
        <condition value="lpd">
          <set token="serviceName">index=test1</set>
        </condition>
        <condition value="cart">
          <set token="serviceName">index=test2</set>
        </condition>
        <condition value="identity">
          <set token="serviceName">index=test3</set>
        </condition>
      </change>
    </input>
    <input type="radio" token="byHost" searchWhenChanged="true">
      <label>By Host</label>
      <choice value="by host">Yes</choice>
      <choice value="">No</choice>
      <default>by host</default>
      <initialValue>by host</initialValue>
    </input>
  </fieldset>
  <row>
    <panel>
      <title>Web Request</title>
      <chart>
        <search>
          <query>$serviceName$ | timechart count</query>
          <earliest>$timePicker.earliest$</earliest>
          <latest>$timePicker.latest$</latest>
        </search>
        <option name="charting.chart">line</option>
      </chart>
    </panel>
  </row>
</form>
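One plausible cause, offered as a guess: the `<change>` handler sets a token with the same name as the drop-down's own token (`serviceName`), so the input's value is overwritten and subsequent change events no longer match any `<condition value="...">`. Setting a different token (the name `serviceIndex` below is hypothetical) and referencing that token in the panel query avoids the clash:

```xml
<input type="dropdown" token="serviceName">
  <label>Service</label>
  <choice value="lpd">LPD</choice>
  <choice value="cart">Cart</choice>
  <choice value="identity">Identity</choice>
  <change>
    <!-- Set a *different* token; reusing "serviceName" here overwrites
         the drop-down's own value. -->
    <condition value="lpd"><set token="serviceIndex">index=test1</set></condition>
    <condition value="cart"><set token="serviceIndex">index=test2</set></condition>
    <condition value="identity"><set token="serviceIndex">index=test3</set></condition>
  </change>
</input>
```

The panel query would then become `$serviceIndex$ | timechart count`.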
Hi, I'm trying to use DB-Connect to get logs from a MySQL database that I have hosted on another device on the same network. I am unable to save the connection. Error is as such: 2020-06-17 22:00:02.526 +0800 [dw-68 - POST /api/connections/status] ERROR io.dropwizard.jersey.errors.LoggingExceptionMapper - Error handling a request: 8e37d8d073fb043c java.lang.NullPointerException: null at com.splunk.dbx.connector.logger.AuditLogger.replace(AuditLogger.java:50) at com.splunk.dbx.connector.logger.AuditLogger.error(AuditLogger.java:44) at com.splunk.dbx.server.api.service.database.impl.DatabaseMetadataServiceImpl.getStatus(DatabaseMetadataServiceImpl.java:159) at com.splunk.dbx.server.api.service.database.impl.DatabaseMetadataServiceImpl.getConnectionStatus(DatabaseMetadataServiceImpl.java:116) at com.splunk.dbx.server.api.resource.ConnectionResource.getConnectionStatusOfEntity(ConnectionResource.java:72) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source) at java.lang.reflect.Method.invoke(Unknown Source) at org.glassfish.jersey.server.model.internal.ResourceMethodInvocationHandlerFactory.lambda$static$0(ResourceMethodInvocationHandlerFactory.java:52) at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher$1.run(AbstractJavaResourceMethodDispatcher.java:124) at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.invoke(AbstractJavaResourceMethodDispatcher.java:167) at org.glassfish.jersey.server.model.internal.JavaResourceMethodDispatcherProvider$TypeOutInvoker.doDispatch(JavaResourceMethodDispatcherProvider.java:219) at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.dispatch(AbstractJavaResourceMethodDispatcher.java:79) at org.glassfish.jersey.server.model.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:469) at 
org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:391) at org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:80) at org.glassfish.jersey.server.ServerRuntime$1.run(ServerRuntime.java:253) at org.glassfish.jersey.internal.Errors$1.call(Errors.java:248) at org.glassfish.jersey.internal.Errors$1.call(Errors.java:244) at org.glassfish.jersey.internal.Errors.process(Errors.java:292) at org.glassfish.jersey.internal.Errors.process(Errors.java:274) at org.glassfish.jersey.internal.Errors.process(Errors.java:244) at org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:265) at org.glassfish.jersey.server.ServerRuntime.process(ServerRuntime.java:232) at org.glassfish.jersey.server.ApplicationHandler.handle(ApplicationHandler.java:680) at org.glassfish.jersey.servlet.WebComponent.serviceImpl(WebComponent.java:392) at org.glassfish.jersey.servlet.WebComponent.service(WebComponent.java:346) at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:365) at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:318) at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:205) at io.dropwizard.jetty.NonblockingServletHolder.handle(NonblockingServletHolder.java:50) at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1617) at io.dropwizard.servlets.ThreadNameFilter.doFilter(ThreadNameFilter.java:35) at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1604) at io.dropwizard.jersey.filter.AllowedMethodsFilter.handle(AllowedMethodsFilter.java:47) at io.dropwizard.jersey.filter.AllowedMethodsFilter.doFilter(AllowedMethodsFilter.java:41) at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1604) at com.splunk.dbx.server.api.filter.ResponseHeaderFilter.doFilter(ResponseHeaderFilter.java:30) at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1604) at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:545) at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:233) at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1297) at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:188) at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:485) at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:186) at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1212) at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141) at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127) at com.codahale.metrics.jetty9.InstrumentedHandler.handle(InstrumentedHandler.java:249) at io.dropwizard.jetty.RoutingHandler.handle(RoutingHandler.java:52) at org.eclipse.jetty.server.handler.gzip.GzipHandler.handle(GzipHandler.java:717) at org.eclipse.jetty.server.handler.RequestLogHandler.handle(RequestLogHandler.java:54) at org.eclipse.jetty.server.handler.StatisticsHandler.handle(StatisticsHandler.java:173) at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127) at org.eclipse.jetty.server.Server.handle(Server.java:500) at org.eclipse.jetty.server.HttpChannel.lambda$handle$1(HttpChannel.java:383) at org.eclipse.jetty.server.HttpChannel.dispatch(HttpChannel.java:547) at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:375) at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:270) at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:311) at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:103) at org.eclipse.jetty.io.ChannelEndPoint$2.run(ChannelEndPoint.java:117) at 
org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:336) at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:313) at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:171) at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:129) at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:388) at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:806) at org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:938) at java.lang.Thread.run(Unknown Source)

I have verified that I am able to connect to the database from the Splunk instance through DbVisualizer. Even if I install Splunk on the database server itself, it is still unable to save the connection.

Versions:
Java: 1.8.0_251
DB Connect: 3.3.1
JDBC driver for MySQL: mysql-connector-java-8.0.20
Hi, I have a requirement to calculate how much data is flowing in and out of my web servers. I have a few fields in my logs:

requested_content -> the page/asset/image/CSS being requested
size -> the size of that page/asset/image/CSS

Is there a way to calculate the bandwidth per day or per week? I am not sure how to start with the query, so any help or advice is appreciated.
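In SPL this is roughly a daily sum of the size field, e.g. `index=web | timechart span=1d sum(size) AS bytes` (index name hypothetical). For intuition about what that aggregation is doing, here is the same calculation sketched in Python over a few made-up log records:

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical records carrying the two fields described in the question.
events = [
    {"_time": "2020-06-16 10:00:00", "requested_content": "/index.html", "size": 5120},
    {"_time": "2020-06-16 10:00:01", "requested_content": "/app.css",    "size": 2048},
    {"_time": "2020-06-17 09:30:00", "requested_content": "/logo.png",   "size": 10240},
]

# Bucket by calendar day and sum the transferred bytes.
daily_bytes = defaultdict(int)
for e in events:
    day = datetime.strptime(e["_time"], "%Y-%m-%d %H:%M:%S").date()
    daily_bytes[day] += e["size"]

for day, total in sorted(daily_bytes.items()):
    print(day, total, "bytes")
```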
I am attempting to index just a few interesting events from an application's log files. These are unstructured text files. I do not want to index the entire log files, as those are at least 400 MB per file; the events that I want to extract may not even add up to 4 MB per day.

If I run a search with regex on the complete logs that were already indexed in a test run, I get just the required events, so this works:

index=someindex sourcetype=somesourcetype | regex _raw="my_regex_to_look_for_specific_text"

But when I add the same regex as a whitelist for future events, it does not index any new logs at all. If I take off the whitelist, the logs come in.

[monitor://E:\Program Files\some app\Logs\...\servername_LOGTYPE_*.txt]
disabled=0
index=someindex
sourcetype=somesourcetype
renderXml=false
whitelist1 = _raw = "my_regex_to_look_for_specific_text"

The documentation covers a lot on whitelisting file names, and not content within the files: https://docs.splunk.com/Documentation/Splunk/8.0.4/Data/Whitelistorblacklistspecificincomingdata

The only piece relevant to what I'm attempting is an example that blacklists the EventCode field with the value 4662:

[WinEventLog:Security]
blacklist1 = EventCode = "4662" Message = "Account Name:\s+(example account)"

The only difference I can see is that my logs are unstructured and do not have fields parsed by Splunk, which leaves _raw as the only field for my whitelist. Is there a way to whitelist specific content in the _raw field, or any other way?
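One likely explanation, offered as a guess: in inputs.conf monitor stanzas, whitelist/blacklist match the file *path*, not event content, so a `_raw` whitelist can end up excluding every file. The documented way to filter by content is at parse time (on an indexer or heavy forwarder, not a universal forwarder): route everything to the nullQueue, then pull matching events back to the indexQueue. A sketch using the placeholder regex and sourcetype from the question:

```conf
# props.conf
[somesourcetype]
TRANSFORMS-filter = setnull, setparsing

# transforms.conf
[setnull]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[setparsing]
REGEX = my_regex_to_look_for_specific_text
DEST_KEY = queue
FORMAT = indexQueue
```

Order matters: the catch-all nullQueue transform runs first, and the second transform re-queues only the events whose raw text matches the regex.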
Hi everyone,

We are receiving the data below from a HEC token into Splunk:

{
  "mirId": "Mule-111",
  "appVersion": "v1",
  "businessGroup": "Ecomm-Direct2Customer",
  "compress": false,
  "appName": "dev-pross-Ecomm-int-v1",
  "relational_correlationId": "22572801-b09e-11ea-9659-023335c1afde",
  "tracePointDescription": "Capture payload",
  "threadName": "[MuleRuntime].cpuLight.13: [adaptive-logger-test].adaptive-loggerFlow.CPU_LITE @6dbce5f9",
  "content": {
    "exception": "",
    "payload": "https://s3.console.aws.amazon.com/s3/object/unilever-ai-operationalframework/LEVEREDGE/prod/dispatchAdvice/c96d4fa0-a5fb-11ea-95a5-0281bc7d1468ERROR-2020-06-04T00:39:25.154Z1591231167.txt?region=us-east-2&amp;tab=overview",
    "businessFields": { },
    "category": "org.unilever.apps.adaptiveloggertest"
  },
  "environment": "TJ-Ecomm-Dev",
  "LogMessage": "Test-TJ-SCHED",
  "correlationId": "227425e0-b09e-11ea-9659-023335c1afde",
  "interfaceName": "Process finance Layer",
  "tracePoint": "START",
  "timestamp": "2020-06-17T13:26:21.759Z"
}

We are trying to create an indexed field for correlationId at index time using transforms.conf, props.conf, and fields.conf. How do we link these conf files with inputs.conf? Please help me with this.

Thanks & Regards,
Manikanth
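For what it's worth, the conf files link to inputs.conf indirectly: the HEC token's inputs.conf stanza assigns a sourcetype, props.conf keys the transform off that sourcetype, and fields.conf tells search-time processing the field is indexed. A sketch of that standard indexed-extraction wiring (the sourcetype name is a placeholder for whatever your HEC token sets):

```conf
# transforms.conf
[corrid_indexed]
REGEX = "correlationId"\s*:\s*"([^"]+)"
FORMAT = correlationId::$1
WRITE_META = true

# props.conf -- stanza name = the sourcetype assigned to the HEC token in inputs.conf
[your:hec:sourcetype]
TRANSFORMS-corrid = corrid_indexed

# fields.conf
[correlationId]
INDEXED = true
```

These stanzas belong on the component doing the parsing (indexers, or the heavy forwarder receiving the HEC traffic), not on search heads alone.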
Greetings, I have a search string for the event and have been asked to figure out how to create a report that emails only if there were none of these events in a 24-hour period, looking back 35 days.

index=myindex tag::host=DDD field1="AA" field2="9" field3="GGGG" field4="888"
| table _time host field1 field2 field3 field4

Thanks in advance for any suggestions!
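One common pattern for "alert when a day had zero events": bucket the 35-day window by day, keep only the empty days, and configure the alert to fire when the search returns any results. A hedged SPL sketch built on the search above (verify the time bounds against your reporting needs):

```spl
index=myindex tag::host=DDD field1="AA" field2="9" field3="GGGG" field4="888" earliest=-35d@d latest=@d
| timechart span=1d count
| where count = 0
```

`timechart` emits a row for every day in the range, so days with no matching events appear with count 0 and survive the `where` filter.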
We are seeing tens of thousands of these events daily from Splunk trying to parse the timestamp of events in our IHS stats_log files:

06-17-2020 08:09:29.089 -0400 WARN DateParserVerbose - Failed to parse timestamp in first MAX_TIMESTAMP_LOOKAHEAD (27) characters of event. Defaulting to timestamp of previous event (Wed Jun 17 07:53:07 2020).

This is our current props.conf stanza:

[ihs_stats_log]
SHOULD_LINEMERGE=false
MAX_TIMESTAMP_LOOKAHEAD=27
TIME_PREFIX=\[
TIME_FORMAT=%Y %m %d %H:%M:%S:%3N
TZ = GMT
TRANSFORMS-null = ihs_stats_setnull, ihs_stats_setnull_2

And each event looks something like this:

[2020 06 17 04:01:54:505],EMRPROF5,XML_FEB_S_P,SSL,indpoma4,WZ ,9RT0221A,DAL_PERSISTENT,DAL_PERSISTENT,10.200.142.36,1,0,917,39,101,0,7152,140,TAPFIGA ,0000,-

Can anyone see where we went wrong with our props.conf and why it's not recognizing those event timestamps? Thanks in advance for your reply.
Hi community, my Isilon OS is OneFS version 8.1.2.0. I've got the following errors, depending on the Splunk and add-on version combination:

Encountered the following error while trying to update: Error while posting to url=/servicesNS/nobody/TA_EMC-Isilon/isiloncustom/isilonendpoint/setupentity Error occurred while authenticating to server

- Splunk 7.3.6 and emc-isilon-add-on-for-splunk-enterprise_240 > working
- Splunk 7.3.6 and emc-isilon-add-on-for-splunk-enterprise_250 > working
- Splunk 7.3.6 and emc-isilon-add-on-for-splunk-enterprise_260 > not working
- Splunk 7.3.6 and dell-emc-isilon-add-on-for-splunk-enterprise_270 > not working
- Splunk 8.0.4.1 and emc-isilon-add-on-for-splunk-enterprise_260 > not working
- Splunk 8.0.4.1 and dell-emc-isilon-add-on-for-splunk-enterprise_270 > not working

My opinion is that there is a problem with the check-certificate option: all working add-on versions have a checkbox to enable or disable certificate checking. My problem is combining Splunk 8 with my Isilon system. Please advise - Markus
Can someone please give me a list of the 'options' that I can use to modify my Donut Chart? I need to display the count per slice. I am using the aplura_viz Donut visualization. Thanks.
Hello, I use cp_log_export on my Check Point management server to send logs (CEF format) to my syslog-ng server, and on the same server a Splunk forwarder sends the logs to my Splunk server. I get too many logs from Check Point, and I would like to know whether I can apply some filters on the Splunk forwarder or the syslog-ng server first. At the moment, in syslog-ng I filter only on the source IP of my Check Point management server, and in the Splunk forwarder on the sourcetype (checkpoint:cef).

After looking at the logs in Splunk, I would like to keep only the logs whose cef_product is MTA. Can I filter that on the Splunk forwarder or in syslog-ng? I would prefer to filter in syslog-ng, to reduce the amount of logs on my syslog server.

Regards,
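Filtering in syslog-ng before the data reaches the forwarder is usually the better spot: a universal forwarder cannot drop events by content (that requires a heavy forwarder with props/transforms). A hedged syslog-ng sketch with placeholder source/destination names; `cef_product` is the Splunk-extracted field name, so the raw CEF line should be checked first to confirm how the product name actually appears (in CEF headers the product sits between pipe delimiters):

```conf
filter f_checkpoint_mta {
    # keep only CEF events whose device-product header field is MTA
    match("|MTA|" value("MESSAGE") type(string) flags(substring));
};

log {
    source(s_network);            # placeholder network source
    filter(f_checkpoint_mta);
    destination(d_to_splunk);     # placeholder file destination read by the forwarder
};
```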
Hello, we are using AppDynamics Controller build 20.4.2-3673 (controller time: 11:53:18 AM [GMT+0000] 17/06/2020). We want to uninstall/delete the database agent from 5 servers. As per the documentation, we stopped the JVM agent and deleted its folder from those servers, but the licenses are still not released. Can someone please help us with this?

BR, Ganesh
I am looking to create a proof of concept that integrates with Splunk via API queries; however, it appears the Splunk Enterprise trial doesn't allow it. I have also seen on this forum that the Splunk Cloud trial doesn't allow API access. Is there a version that allows API calls?
Hi,

Query 1:

| pivot mongo ServerStatus max(currentConnections) SPLITCOL host
| fieldsummary
| fields field, max
| rename field as host, max as max_host
| stats sum(max_host) as Total
| search Total>20000

This displays the total number of connections as expected. I want to add an if-condition: whenever the condition is met, I want to display the connections per host as in the query below. I tried including the search inside an eval like this, but it did not help in my case:

eval flag=if(condition, [SEARCH QUERY], null())

Query 2:

| pivot mongo ServerStatus max(currentConnections) SPLITCOL host
| fieldsummary
| fields field, max
| rename field AS host, max AS max_host
| eval host=host." (".max_host.")"
| fields host
| mvcombine delim=" , " host
| nomv host

Result:

host1 (1414) host2 (1415) host3 (1416) host4 (3532)

Both queries work as expected, but I'm looking for a way to connect them on a condition: execute Query 2 only when the total number of connections exceeds the threshold. Any help is appreciated, thank you.
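One way to avoid running two searches: compute the grand total alongside the per-host rows with eventstats, so the per-host list is only produced when the threshold is met. A hedged SPL sketch combining the two queries above (verify against your data, since fieldsummary's output types can need coercion before summing):

```spl
| pivot mongo ServerStatus max(currentConnections) SPLITCOL host
| fieldsummary
| fields field, max
| rename field AS host, max AS max_host
| eventstats sum(max_host) AS Total
| where Total > 20000
| eval host=host." (".max_host.")"
| stats values(Total) AS Total list(host) AS host
| nomv host
```

Unlike `stats`, `eventstats` attaches the total to every row without collapsing them, which is what lets the per-host detail survive the threshold check.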