All Topics

Hi team,

message_id                     status       time
2020-01-21T13:09:14.416164Z    PROCESSED    2020-02-19T01:50:05.55630875Z
2020-01-21T13:09:14.416164Z    PROCESSING   2020-02-19T01:50:04.621606854Z
2020-01-21T13:09:44.586501Z    ERROR        2020-02-19T01:50:04.305742277Z
2020-01-21T13:09:44.586501Z    PROCESSING   2020-02-19T01:50:04.233225192Z
2020-01-21T13:09:44.586416Z    PROCESSED    2020-02-19T01:50:04.142651435Z
2020-01-21T13:09:44.586416Z    PROCESSING   2020-02-19T01:50:03.826457927Z
2020-01-21T13:09:44.586321Z    PROCESSED    2020-02-19T01:50:03.745964666Z
2020-01-21T13:09:44.586321Z    PROCESSING   2020-02-19T01:50:03.449583679Z
2020-01-21T13:09:44.586190Z    PROCESSED    2020-02-19T01:50:03.337887858Z
2020-01-21T13:09:44.586190Z    PROCESSING   2020-02-19T01:50:03.086329734Z
2020-01-21T13:09:44.586063Z    PROCESSED    2020-02-19T01:50:03.00531639Z
2020-01-21T13:09:44.586063Z    PROCESSING   2020-02-19T01:50:02.735821778Z

I have three columns: message_id, status, and time. I need the count for each value of the status column (PROCESSED = ?, PROCESSING = ?, ERROR = ?). Then, once I have the counts for ERROR, PROCESSED, and PROCESSING, I need to compute:

Total = ERROR + PROCESSED - PROCESSING

I'm using the query below to get the total, but it does not work:

|rex field=log ".* Updated the Message Id : (?<message_id>[^ ]). status : (?<status>.*)" | table message_id, status, time | stats count by status | eval total = ERROR + PROCESSED - PROCESSING
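A note on the query above: `stats count by status` produces one row per status value, so ERROR, PROCESSED, and PROCESSING never exist as field names for the final eval to reference. One way to get all three counts onto a single row is conditional counting; this is a sketch that assumes the rex extraction is already working:

```
| rex field=log "Updated the Message Id : (?<message_id>[^ ]+)\. status : (?<status>.*)"
| stats count(eval(status=="PROCESSED"))  AS PROCESSED
        count(eval(status=="PROCESSING")) AS PROCESSING
        count(eval(status=="ERROR"))      AS ERROR
| eval total = ERROR + PROCESSED - PROCESSING
```

Because all counts now live on one result row, the eval can reference them directly.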
Hi Team, we are using Splunk Cloud in our environment. We were previously on Splunk Cloud version 7.1.6.2; on that version, when creating an index I could also set a Max Size value for each index. We have since upgraded to 7.2.9.1, and now when I navigate to Settings -> Indexes and try to create a new index, there is no Max Size option. Can anyone explain why the Max Size field was removed in 7.2.9.1, and what default maximum size is assigned when an index is created? A few of our indexes will grow quite large, so how does this work? Or is this an issue with 7.2.9.1 that upgrading to the latest version would fix? Kindly check and advise.
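For context, on Splunk Enterprise the per-index size cap the old UI field mapped to is `maxTotalDataSizeMB` in indexes.conf, with a documented default of 500000 MB (roughly 500 GB) when unset. Splunk Cloud does not let you edit indexes.conf directly, so treat this only as a reference sketch; the index name and size are placeholders:

```
# indexes.conf (Splunk Enterprise reference; not directly editable in Splunk Cloud)
[my_index]
homePath   = $SPLUNK_DB/my_index/db
coldPath   = $SPLUNK_DB/my_index/colddb
thawedPath = $SPLUNK_DB/my_index/thaweddb
# Maximum total size of hot/warm/cold data, in MB.
# If omitted, the documented default is 500000 (about 500 GB).
maxTotalDataSizeMB = 102400
```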
We are running Splunk 6.3.3 in a clustered environment (indexer cluster and search head cluster). Our indexes are configured as follows:

[indexname]
repFactor = auto
homePath = $SPLUNK_DB/indexname/db
coldPath = $SPLUNK_COLD_DB/indexname/colddb
thawedPath = $SPLUNK_DB/indexname/thaweddb
maxWarmDBCount = 60
frozenTimePeriodInSecs = 2592000

$SPLUNK_DB is a physical drive mounted directly on the indexer at /Splunk. $SPLUNK_COLD_DB is an NFS volume mounted on the indexers at /Data. $SPLUNK_COLD_DB is already at its maximum (15 TB) and we cannot increase its size any further. How can we add another cold storage location and start sending events to it, while the existing $SPLUNK_COLD_DB remains available for users to search events from? Thanks.
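One constraint worth noting: an index has a single `coldPath`, so a single index cannot write cold buckets to two mounts at once. Volume stanzas in indexes.conf let you cap and organize usage per mount, so new or migrated indexes can target a second mount while existing buckets stay searchable where they are. A sketch with hypothetical mount points and sizes:

```
# indexes.conf sketch -- volume definitions (paths and sizes are placeholders)
[volume:cold_old]
path = /Data
maxVolumeDataSizeMB = 15000000

[volume:cold_new]
path = /Data2
maxVolumeDataSizeMB = 15000000

# A new or migrated index can place its cold buckets on the new volume:
[newindexname]
homePath   = $SPLUNK_DB/newindexname/db
coldPath   = volume:cold_new/newindexname/colddb
thawedPath = $SPLUNK_DB/newindexname/thaweddb
```

Moving an existing index's coldPath requires stopping the indexer and relocating the bucket directories, so growing the filesystem underneath (e.g. via LVM) is often the less disruptive route.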
Hi, when we pull logs from S3 buckets through the AWS TA we get this error:

02-11-2020 07:43:09.395 +0000 ERROR AdminManagerExternal - Unexpected error "<class 'boto.exception.S3ResponseError'>" from python handler: "S3ResponseError: 400 Bad Request\n".

We are using an IAM role to fetch the logs from AWS. AWS TA version 4.6.1, Splunk HF version 7.2. Does anybody know what the issue is?
Hi, I am using a Splunk indexer as a deployment server. I have installed forwarders on about 15 machines and am fetching logs from them. However, after creating the server classes and deploying apps, only some machines are phoning home; the other machines are not. Can you please help me identify the issue? I have installed Splunk Enterprise 8.x on Windows Server 2016 on a trial license. Thanks
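When some forwarders phone home and others stay silent, the first thing to compare is deploymentclient.conf on a working versus a silent forwarder; a minimal sketch (the hostname below is a placeholder):

```
# $SPLUNK_HOME/etc/system/local/deploymentclient.conf on each forwarder
[deployment-client]

[target-broker:deploymentServer]
# Must resolve from the forwarder, and port 8089 must be reachable
targetUri = deployment-server.example.com:8089
```

You can verify what a forwarder actually loaded with `splunk btool deploymentclient list --debug`, and check splunkd.log on the silent forwarders for deployment-client connection errors.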
Hi, The Android build fails when using the latest android gradle plugin (3.5.3) and the latest appdynamics gradle plugin. Here's the error: FAILURE: Build failed with an exception. * What went wrong: Execution failed for task ':app:transformClassesWithAppDynamicsForUatRelease'. > No such property: androidBuilder for class: com.android.build.gradle.AppPlugin * Try: Run with --info or --debug option to get more log output. Run with --scan to get full insights. * Exception is: org.gradle.api.tasks.TaskExecutionException: Execution failed for task ':app:transformClassesWithAppDynamicsForUatRelease'. at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter$3.accept(ExecuteActionsTaskExecuter.java:151) at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter$3.accept(ExecuteActionsTaskExecuter.java:148) at org.gradle.internal.Try$Failure.ifSuccessfulOrElse(Try.java:191) at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.execute(ExecuteActionsTaskExecuter.java:141) at org.gradle.api.internal.tasks.execution.ResolveBeforeExecutionStateTaskExecuter.execute(ResolveBeforeExecutionStateTaskExecuter.java:75) at org.gradle.api.internal.tasks.execution.ValidatingTaskExecuter.execute(ValidatingTaskExecuter.java:62) at org.gradle.api.internal.tasks.execution.SkipEmptySourceFilesTaskExecuter.execute(SkipEmptySourceFilesTaskExecuter.java:108) at org.gradle.api.internal.tasks.execution.ResolveBeforeExecutionOutputsTaskExecuter.execute(ResolveBeforeExecutionOutputsTaskExecuter.java:67) at org.gradle.api.internal.tasks.execution.ResolveAfterPreviousExecutionStateTaskExecuter.execute(ResolveAfterPreviousExecutionStateTaskExecuter.java:46) at org.gradle.api.internal.tasks.execution.CleanupStaleOutputsExecuter.execute(CleanupStaleOutputsExecuter.java:94) at org.gradle.api.internal.tasks.execution.FinalizePropertiesTaskExecuter.execute(FinalizePropertiesTaskExecuter.java:46) at 
org.gradle.api.internal.tasks.execution.ResolveTaskExecutionModeExecuter.execute(ResolveTaskExecutionModeExecuter.java:95) at org.gradle.api.internal.tasks.execution.SkipTaskWithNoActionsExecuter.execute(SkipTaskWithNoActionsExecuter.java:57) at org.gradle.api.internal.tasks.execution.SkipOnlyIfTaskExecuter.execute(SkipOnlyIfTaskExecuter.java:56) at org.gradle.api.internal.tasks.execution.CatchExceptionTaskExecuter.execute(CatchExceptionTaskExecuter.java:36) at org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter$1.executeTask(EventFiringTaskExecuter.java:73) at org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter$1.call(EventFiringTaskExecuter.java:52) at org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter$1.call(EventFiringTaskExecuter.java:49) at org.gradle.internal.operations.DefaultBuildOperationExecutor$CallableBuildOperationWorker.execute(DefaultBuildOperationExecutor.java:416) at org.gradle.internal.operations.DefaultBuildOperationExecutor$CallableBuildOperationWorker.execute(DefaultBuildOperationExecutor.java:406) at org.gradle.internal.operations.DefaultBuildOperationExecutor$1.execute(DefaultBuildOperationExecutor.java:165) at org.gradle.internal.operations.DefaultBuildOperationExecutor.execute(DefaultBuildOperationExecutor.java:250) at org.gradle.internal.operations.DefaultBuildOperationExecutor.execute(DefaultBuildOperationExecutor.java:158) at org.gradle.internal.operations.DefaultBuildOperationExecutor.call(DefaultBuildOperationExecutor.java:102) at org.gradle.internal.operations.DelegatingBuildOperationExecutor.call(DelegatingBuildOperationExecutor.java:36) at org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter.execute(EventFiringTaskExecuter.java:49) at org.gradle.execution.plan.LocalTaskNodeExecutor.execute(LocalTaskNodeExecutor.java:43) at org.gradle.execution.taskgraph.DefaultTaskExecutionGraph$InvokeNodeExecutorsAction.execute(DefaultTaskExecutionGraph.java:355) at 
org.gradle.execution.taskgraph.DefaultTaskExecutionGraph$InvokeNodeExecutorsAction.execute(DefaultTaskExecutionGraph.java:343) at org.gradle.execution.taskgraph.DefaultTaskExecutionGraph$BuildOperationAwareExecutionAction.execute(DefaultTaskExecutionGraph.java:336) at org.gradle.execution.taskgraph.DefaultTaskExecutionGraph$BuildOperationAwareExecutionAction.execute(DefaultTaskExecutionGraph.java:322) at org.gradle.execution.plan.DefaultPlanExecutor$ExecutorWorker$1.execute(DefaultPlanExecutor.java:134) at org.gradle.execution.plan.DefaultPlanExecutor$ExecutorWorker$1.execute(DefaultPlanExecutor.java:129) at org.gradle.execution.plan.DefaultPlanExecutor$ExecutorWorker.execute(DefaultPlanExecutor.java:202) at org.gradle.execution.plan.DefaultPlanExecutor$ExecutorWorker.executeNextNode(DefaultPlanExecutor.java:193) at org.gradle.execution.plan.DefaultPlanExecutor$ExecutorWorker.run(DefaultPlanExecutor.java:129) at org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:63) at org.gradle.internal.concurrent.ManagedExecutorImpl$1.run(ManagedExecutorImpl.java:46) at org.gradle.internal.concurrent.ThreadFactoryImpl$ManagedThreadRunnable.run(ThreadFactoryImpl.java:55) Caused by: groovy.lang.MissingPropertyException: No such property: androidBuilder for class: com.android.build.gradle.AppPlugin at com.appdynamics.android.gradle.TransformBasedPlugin$ADTransform.transform(TransformBasedPlugin.groovy:200) at com.android.build.api.transform.Transform.transform(Transform.java:302) at com.android.build.gradle.internal.pipeline.TransformTask$2.call(TransformTask.java:239) at com.android.build.gradle.internal.pipeline.TransformTask$2.call(TransformTask.java:235) at com.android.builder.profile.ThreadRecorder.record(ThreadRecorder.java:106) at com.android.build.gradle.internal.pipeline.TransformTask.transform(TransformTask.java:230) at org.gradle.internal.reflect.JavaMethod.invoke(JavaMethod.java:103) at 
org.gradle.api.internal.project.taskfactory.IncrementalTaskInputsTaskAction.doExecute(IncrementalTaskInputsTaskAction.java:46) at org.gradle.api.internal.project.taskfactory.StandardTaskAction.execute(StandardTaskAction.java:41) at org.gradle.api.internal.project.taskfactory.AbstractIncrementalTaskAction.execute(AbstractIncrementalTaskAction.java:25) at org.gradle.api.internal.project.taskfactory.StandardTaskAction.execute(StandardTaskAction.java:28) at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter$5.run(ExecuteActionsTaskExecuter.java:404) at org.gradle.internal.operations.DefaultBuildOperationExecutor$RunnableBuildOperationWorker.execute(DefaultBuildOperationExecutor.java:402) at org.gradle.internal.operations.DefaultBuildOperationExecutor$RunnableBuildOperationWorker.execute(DefaultBuildOperationExecutor.java:394) at org.gradle.internal.operations.DefaultBuildOperationExecutor$1.execute(DefaultBuildOperationExecutor.java:165) at org.gradle.internal.operations.DefaultBuildOperationExecutor.execute(DefaultBuildOperationExecutor.java:250) at org.gradle.internal.operations.DefaultBuildOperationExecutor.execute(DefaultBuildOperationExecutor.java:158) at org.gradle.internal.operations.DefaultBuildOperationExecutor.run(DefaultBuildOperationExecutor.java:92) at org.gradle.internal.operations.DelegatingBuildOperationExecutor.run(DelegatingBuildOperationExecutor.java:31) at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.executeAction(ExecuteActionsTaskExecuter.java:393) at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.executeActions(ExecuteActionsTaskExecuter.java:376) at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.access$200(ExecuteActionsTaskExecuter.java:80) at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter$TaskExecution.execute(ExecuteActionsTaskExecuter.java:213) at org.gradle.internal.execution.steps.ExecuteStep.lambda$execute$0(ExecuteStep.java:32) at 
org.gradle.internal.execution.steps.ExecuteStep.execute(ExecuteStep.java:32) at org.gradle.internal.execution.steps.ExecuteStep.execute(ExecuteStep.java:26) at org.gradle.internal.execution.steps.CleanupOutputsStep.execute(CleanupOutputsStep.java:58) at org.gradle.internal.execution.steps.CleanupOutputsStep.execute(CleanupOutputsStep.java:35) at org.gradle.internal.execution.steps.ResolveInputChangesStep.execute(ResolveInputChangesStep.java:48) at org.gradle.internal.execution.steps.ResolveInputChangesStep.execute(ResolveInputChangesStep.java:33) at org.gradle.internal.execution.steps.CancelExecutionStep.execute(CancelExecutionStep.java:39) at org.gradle.internal.execution.steps.TimeoutStep.executeWithoutTimeout(TimeoutStep.java:73) at org.gradle.internal.execution.steps.TimeoutStep.execute(TimeoutStep.java:54) at org.gradle.internal.execution.steps.CatchExceptionStep.execute(CatchExceptionStep.java:35) at org.gradle.internal.execution.steps.CreateOutputsStep.execute(CreateOutputsStep.java:51) at org.gradle.internal.execution.steps.SnapshotOutputsStep.execute(SnapshotOutputsStep.java:45) at org.gradle.internal.execution.steps.SnapshotOutputsStep.execute(SnapshotOutputsStep.java:31) at org.gradle.internal.execution.steps.CacheStep.executeWithoutCache(CacheStep.java:201) at org.gradle.internal.execution.steps.CacheStep.execute(CacheStep.java:70) at org.gradle.internal.execution.steps.CacheStep.execute(CacheStep.java:45) at org.gradle.internal.execution.steps.BroadcastChangingOutputsStep.execute(BroadcastChangingOutputsStep.java:49) at org.gradle.internal.execution.steps.StoreSnapshotsStep.execute(StoreSnapshotsStep.java:43) at org.gradle.internal.execution.steps.StoreSnapshotsStep.execute(StoreSnapshotsStep.java:32) at org.gradle.internal.execution.steps.RecordOutputsStep.execute(RecordOutputsStep.java:38) at org.gradle.internal.execution.steps.RecordOutputsStep.execute(RecordOutputsStep.java:24) at 
org.gradle.internal.execution.steps.SkipUpToDateStep.executeBecause(SkipUpToDateStep.java:96) at org.gradle.internal.execution.steps.SkipUpToDateStep.lambda$execute$0(SkipUpToDateStep.java:89) at org.gradle.internal.execution.steps.SkipUpToDateStep.execute(SkipUpToDateStep.java:54) at org.gradle.internal.execution.steps.SkipUpToDateStep.execute(SkipUpToDateStep.java:38) at org.gradle.internal.execution.steps.ResolveChangesStep.execute(ResolveChangesStep.java:77) at org.gradle.internal.execution.steps.ResolveChangesStep.execute(ResolveChangesStep.java:37) at org.gradle.internal.execution.steps.legacy.MarkSnapshottingInputsFinishedStep.execute(MarkSnapshottingInputsFinishedStep.java:36) at org.gradle.internal.execution.steps.legacy.MarkSnapshottingInputsFinishedStep.execute(MarkSnapshottingInputsFinishedStep.java:26) at org.gradle.internal.execution.steps.ResolveCachingStateStep.execute(ResolveCachingStateStep.java:90) at org.gradle.internal.execution.steps.ResolveCachingStateStep.execute(ResolveCachingStateStep.java:48) at org.gradle.internal.execution.impl.DefaultWorkExecutor.execute(DefaultWorkExecutor.java:33) at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.execute(ExecuteActionsTaskExecuter.java:120) ... 35 more

My top-level gradle file looks like this:

// Top-level build file where you can add configuration options common to all sub-projects/modules.
buildscript {
    repositories {
        mavenCentral()
        jcenter()
        maven {
            url 'https://maven.google.com/'
            name 'Google'
        }
    }
    dependencies {
        classpath 'com.android.tools.build:gradle:3.5.3'
        classpath 'com.appdynamics:appdynamics-gradle-plugin:4.5.6.1384'
        // NOTE: Do not place your application dependencies here; they belong
        // in the individual module build.gradle files
    }
}

allprojects {
    repositories {
        mavenCentral()
        jcenter()
        maven { url 'https://maven.google.com' }
        maven {
            url 'https://maven.google.com/'
            name 'Google'
        }
    }
}
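The `No such property: androidBuilder` failure comes from the AppDynamics transform reaching into an Android Gradle Plugin internal (`androidBuilder`) that no longer exists in AGP 3.5.x. The usual resolution is to move to an AppDynamics plugin release that declares support for AGP 3.5; the version below is deliberately a placeholder, so check the AppDynamics release notes for the first compatible release:

```groovy
// Top-level build.gradle sketch -- the AppDynamics version is a placeholder
buildscript {
    repositories {
        google()
        mavenCentral()
        jcenter()
    }
    dependencies {
        classpath 'com.android.tools.build:gradle:3.5.3'
        // Replace X.Y.Z with a release documented as compatible with AGP 3.5.x
        classpath 'com.appdynamics:appdynamics-gradle-plugin:X.Y.Z'
    }
}
```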
Hey Splunkers, when you log in to the Splunk portal and right-click on the page > Inspect > Application > Cookies > |splunk URL|, you see the list of tokens for the application at your URL. I want to change the expiry value of splunkweb_csrf_token_443 to "session". Any kind of help is much appreciated. TIA
I would like to know if Splunk Add-on for Apache Web server is compatible with Oracle HTTP Server Powered by Apache version 1.3.19
I have the same question that was asked about this other app: https://answers.splunk.com/answers/625373/does-this-app-support-fetching-data-via-proxy.html

Does anybody have a code snippet that can be dropped into this Jira Issues Collector to allow it to connect from behind a proxy? Below is the HTTP error seen in index=_internal:

file=http.py, func_name=_initialize_connection, code_line_no=253 INFO Proxy is not enabled for http connection.
file=http.py, func_name=_retry_send_request_if_needed, code_line_no=220 INFO Invoking request to [https://xxxxxx.atlassian.net/xxxxxx] using [GET] method
file=http.py, func_name=_retry_send_request_if_needed, code_line_no=228 ERROR Could not send request
File "/opt/splunk/etc/apps/TA-jira-issues-collector/bin/ta_jira_issues_collector/aob_py2/cloudconnectlib/core/engine.py", line 291, in _send_request response = self._client.send(request)
File "/opt/splunk/etc/apps/TA-jira-issues-collector/bin/ta_jira_issues_collector/aob_py2/cloudconnectlib/core/http.py", line 275, in send url, request.method, request.headers, request.body
File "/opt/splunk/etc/apps/TA-jira-issues-collector/bin/ta_jira_issues_collector/aob_py2/cloudconnectlib/core/http.py", line 229, in _retry_send_request_if_needed raise HTTPError('HTTP Error %s' % str(err))
HTTPError: HTTP Error Unable to find the server at xxxxxx.atlassian.net
The stanza [CLIENT_USER_xml] in transforms.conf has an error in it - the "REGEX = " needs to be removed from the start of the second line.
For this use case, see the message below; there are two parts we would like to extract. I can extract the 1st part fine, but I cannot extract the 2nd part.

Needed information from the Event ID message:

1st part (--Header-- / --Data Results--):
Account Name: test01
New Process Name: C:\Program Files\WinZip\Utils
ComputerName= server001

2nd part (this is the only information we need from the multi-line error message):
Client IP address: 10.10.00.10:34567
Identity the client attempted to authenticate as: Test\SVC_testLDAP

===================================
See the Event ID error below as an example:

LogName=Security
SourceName=Microsoft Windows security auditing.
EventCode=4888
EventType=0
Type=Information
ComputerName= server001
TaskCategory=Process Creation
OpCode=Info
RecordNumber=934605653
Keywords=Audit Success
Message=A new process has been created.
Creator Subject:
  Security ID: NT AUTHORITY\SYSTEM
  Account Name: test01
  Account Domain: Test
  Logon ID: 12345test
Target Subject:
  Security ID: NULL SID
  Account Name: Test
  Account Domain: -
  Logon ID: 0x0
Process Information:
  New Process ID: 0x2030
  New Process Name: C:\Program Files\etc\zip.exe
  Token Elevation Type: TokenElevationTypeDefault (1)
  Creator Process ID: 0ss0x
Message=The following client performed a SASL (Negotiate/Kerberos/NTLM/Digest) LDAP bind without requesting signing (integrity verification), or performed a simple bind over a clear text (non-SSL/TLS-encrypted) LDAP connection.
Client IP address: 10.10.00.10:34567
Identity the client attempted to authenticate as: Test\SVC_testLDAP
Binding Type: 3
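For the 2nd part, both lines have stable prefixes in the sample event, so a pair of rex extractions should work; a sketch based only on the example above:

```
| rex "Client IP address:\s+(?<client_ip>\d{1,3}(?:\.\d{1,3}){3}):(?<client_port>\d+)"
| rex "Identity the client attempted to authenticate as:\s+(?<bind_identity>[^\r\n]+)"
| table client_ip client_port bind_identity
```

Events without those lines simply leave the fields null, so the same search can run across the mixed event types.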
In Splunk version 7.1.4, when I specify a date range with the time picker, the end time is shown as yyyy/mm/dd 00:00:00, as in the figure below. Is it possible to select yyyy/mm/dd 24:00:00 instead? In version 7.0.1, I was able to select yyyy/mm/dd 24:00:00, as shown in the second figure. I do not know whether this is due to the version difference. I would appreciate an answer.
All, has anyone ever posted to HEC using PHP? Got a working example? Or can you see where I am going wrong?

<?php
$ch = curl_init("https://10.x.x.x:8088/services/collector/event");
curl_setopt($ch, CURLOPT_HEADER, 0);
curl_setopt($ch, CURLOPT_POST, 1);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_USERPWD, "x:MYTOKEN");
curl_setopt($ch, CURLOPT_POSTFIELDS, "number=10");
$output = curl_exec($ch);
curl_close($ch);
echo $output;
?>
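Two things stand out against the HEC documentation: the /services/collector/event endpoint expects a JSON body with an `event` key (a bare form body like `number=10` is rejected), and authentication is normally sent as an `Authorization: Splunk <token>` header. A sketch with the IP and token as placeholders; the disabled certificate verification is for a lab with a self-signed HEC certificate only:

```php
<?php
$ch = curl_init("https://10.x.x.x:8088/services/collector/event");
// HEC requires the payload wrapped in an "event" key
$payload = json_encode(array("event" => array("number" => 10)));
curl_setopt($ch, CURLOPT_POST, 1);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_HTTPHEADER, array(
    "Authorization: Splunk MYTOKEN",
    "Content-Type: application/json"
));
curl_setopt($ch, CURLOPT_POSTFIELDS, $payload);
// Lab only: skip verification for a self-signed HEC cert; never in production
curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false);
$output = curl_exec($ch);
curl_close($ch);
echo $output;
?>
```

On success HEC responds with a small JSON body containing `"text":"Success"` and `"code":0`.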
I have created a template dashboard that I will use to create dashboards for many customers. It is set up to display all the services we provide, regardless of whether a particular customer has a given service installed. So if the customer doesn't have that service, the panel shows a "No Results Found" message. Rather than have a customer view the report and worry that we might not be getting data from them (some end users don't know which services their site has installed), I want to display a custom message explaining that there may either be no data points or they may not have the service. I have tried just about every suggestion I could find on similar posts, but without the success others had; I'm hoping someone out there has another suggestion. Here is my search query:

index=indexname sourcetype=sourcename
    [| inputlookup MasterList.csv | search CustomerName="*CustomerName*" | table propertyId | format]
| lookup MasterList.csv propertyId
| timechart span=1mon count by CustomerName
| fields _time *customerOne
| appendpipe
    [ stats count
    | eval NoResults="There is no data for this time period. This could possibly be due to the service not being available"
    | where count=0
    | table NoResults]

However, this is what the panel looks like in the dashboard: Someone else suggested adding _time=now() to the append, but then I get this:

| appendpipe
    [ stats count
    | eval NoResults="There is no data for this time period. This could possibly be due to the service not being available", _time=now()
    | where count=0
    | fields - count]

Is there any way to NOT display the timechart and show just the custom NoResults text?
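An alternative to appendpipe: in Simple XML you can swap the entire chart for an HTML panel when the search returns nothing, driven by a token set from the job's result count. A sketch only (the query is abbreviated and the token name is arbitrary):

```xml
<search id="base">
  <query>index=indexname sourcetype=sourcename ... | timechart span=1mon count by CustomerName</query>
  <done>
    <condition match="'job.resultCount' &gt; 0">
      <set token="show_chart">true</set>
    </condition>
    <condition>
      <unset token="show_chart"></unset>
    </condition>
  </done>
</search>
<panel>
  <chart depends="$show_chart$">
    <search base="base"/>
  </chart>
  <html rejects="$show_chart$">
    <p>There is no data for this time period. This could be because the service is not installed at this site.</p>
  </html>
</panel>
```

Because the chart has `depends` and the HTML has `rejects` on the same token, exactly one of the two renders, so the empty timechart never appears.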
All, I enabled the packages input in Splunk_TA_nix on my CentOS 7 box. I get 790 packages back; however, when I get the same data from the command line I get 796 packages:

# rpm --query --all | wc
796 796 26418

Something seems off. Any ideas? Output from btool for the sourcetype:

[package]
ADD_EXTRA_TIME_FIELDS = True
ANNOTATE_PUNCT = True
AUTO_KV_JSON = true
BREAK_ONLY_BEFORE =
BREAK_ONLY_BEFORE_DATE = True
CHARSET = UTF-8
DATETIME_CONFIG = CURRENT
DEPTH_LIMIT = 1000
HEADER_MODE =
KV_MODE = multi
LEARN_MODEL = true
LEARN_SOURCETYPE = true
LINE_BREAKER = ^((?!))$
LINE_BREAKER_LOOKBEHIND = 100
MATCH_LIMIT = 100000
MAX_DAYS_AGO = 2000
MAX_DAYS_HENCE = 2
MAX_DIFF_SECS_AGO = 3600
MAX_DIFF_SECS_HENCE = 604800
MAX_EVENTS = 256
MAX_TIMESTAMP_LOOKAHEAD = 128
MUST_BREAK_AFTER =
MUST_NOT_BREAK_AFTER =
MUST_NOT_BREAK_BEFORE =
SEGMENTATION = indexing
SEGMENTATION-all = full
SEGMENTATION-inner = inner
SEGMENTATION-outer = outer
SEGMENTATION-raw = none
SEGMENTATION-standard = standard
SHOULD_LINEMERGE = false
TRANSFORMS =
TRUNCATE = 1000000
detect_trailing_nulls = false
maxDist = 100
priority =
sourcetype =
All, I am creating an app and was hoping to set the default theme to dark mode. Is there a simple XML or conf file I should edit? It looks like each dashboard needs theme="dark" on its root element; I'm hoping for a global setting. Thanks -Daniel
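For reference, the per-dashboard form is the theme attribute on the root element of each Simple XML dashboard; I'm not aware of a documented app-wide default, so this sketch shows only the per-dashboard setting (label is a placeholder):

```xml
<dashboard theme="dark">
  <label>My Dashboard</label>
  <!-- rows and panels as usual -->
</dashboard>
```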
I have CSV files that are point-in-time snapshots of a configuration. If any part of the CSV changes, I'd like the contents of the entire CSV file to be re-indexed, not just the lines that changed. I hope to reference each "version" of the CSV's contents in Splunk by its index time. I've tried the different options for CHECK_METHOD in props.conf, but Splunk continues to index only the lines that changed rather than the entire file.

inputs.conf:
[monitor://C:\baselines\BaselinePorts.csv]
index = tracking
sourcetype = baselines

props.conf:
[baselines]
FIELD_DELIMITER = ,
HEADER_FIELD_DELIMITER = ,
CHECK_METHOD = endpoint_md5
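One detail from the props.conf spec that may explain this: `endpoint_md5` (the default) only checksums the head and tail of the file, whereas `CHECK_METHOD = modtime` re-indexes the entire file whenever its modification time changes. The spec also applies CHECK_METHOD at input time in a source-scoped stanza, not a sourcetype stanza, so a sketch against the monitor above would be:

```
# props.conf -- CHECK_METHOD is applied in a [source::...] stanza
[source::C:\baselines\BaselinePorts.csv]
CHECK_METHOD = modtime
```

Note this will duplicate the whole file in the index on every change, which sounds like exactly the versioning behavior described.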
Hi all, first time posting here, and it's the first time I've played with Splunk. I downloaded and installed it on Windows 10 (which already seems like a mistake; Splunk's handling of syslog isn't great from what I've seen so far, and one source on UDP/514 is bonkers!). I've used a lot of different SIEM tools in my time and have worked in IT security for a number of years, so I have a strong understanding of the usual process for log ingestion. It can usually be split into the following categories (this isn't a 'standard', it's just how I work things out in my head):

1. Collection - how the logs are transported towards a SIEM (syslog, agent, API, etc.)
2. Ingestion - accepting the log sources into a SIEM.
3. Parsing - figuring out which logs belong to which log source types (i.e. a Cisco ASA log belongs to Cisco ASA version 'x').
4. Indexing - putting the correct sections of logs into the correct columns (authentication event ID into that column).
5. Association/Rules - understanding which events are successful logins and creating rules to link what you want to be alerted on.
6. Searching and alerting - how alerts are displayed, and searching through historical data using correlation rules etc.

I understand Splunk isn't a 'SIEM' as such, so I'm not expecting to do the correlation bit just yet (there's probably an advanced way of achieving this), but I'm currently struggling with what I think is 3, 4, and 5. I've managed to get some of my logs into Splunk (3 x Windows devices and 1 x ...), so I'm pretty happy with the collection and ingestion side of things. But I've downloaded and installed two separate security 'apps' (InfoSec App for Splunk and Splunk Security Essentials) and neither appears to understand the logs being ingested. For instance, if I navigate to the InfoSec App for Splunk and go to Continuous Monitoring -> Firewalls or Network Traffic, I get absolutely nothing.
See below: However, I know the logs are arriving, because if I go to Search & Reporting and type the hostname in, I get results back. I'm using a Sophos NGFW as my core firewall, with all sorts of features enabled (IPS, URL filtering, DNS alerting, QoS, etc.). The issue is that the apps don't appear to be seeing the logs, which makes me think it's something to do with categories 3 to 5; I just don't know which one. I've downloaded, installed, and accelerated CIM, and I've installed the add-on (which I thought covered the parsing and indexing stages), which leads me to believe it could be an association/rules issue. (Apparently I can't post links; the add-on is "Sophos XG Technical Add-on".) My major problem here is that I simply do not understand Splunk well enough to figure this out, so I was hoping some of you lovely people could help! Best,
| inputlookup scanner_visibility.csv
| lookup visibility_blue.csv Acronym AS application local=t OUTPUTNEW "Risk Score"
| lookup server_dump.csv Acronym AS application local=t OUTPUTNEW "Authorization Removal Date"
| rename norton_assets as norton
| lookup servertypes_scanner_weights.csv servertype OUTPUTNEW norton_weight nessus_weight metasploit_weight nexpose_weight
| eval norton = if(like(norton, "%2019") AND relative_time(now(), "-30d@d") < strptime(norton,"%m/%d/%Y"), norton_weight, 0)
| eval nessus = if(like(nessus, "%2019") AND relative_time(now(), "-30d@d") < strptime(nessus,"%m/%d/%Y"), nessus_weight, 0)
| eval metasploit = if(like(metasploit, "%2019") AND relative_time(now(), "-30d@d") < strptime(metasploit,"%m/%d/%Y"), metasploit_weight, 0)
| eval nexpose = if(like(nexpose, "%2019") AND relative_time(now(), "-30d@d") < strptime(nexpose,"%m/%d/%Y"), nexpose_weight, 0)
| eventstats count(ip) as total sum(norton) as norton_count sum(nessus) as nessus_count sum(metasploit) as metasploit_count sum(nexpose) as nexpose_count
| eval norton_score = round(((norton_count / total)*100), 2)
| eval nessus_score = round(((nessus_count / total)*100), 2)
| eval metasploit_score = round(((metasploit_count / total)*100), 2)
| eval nexpose_score = round(((nexpose_count / total)*100), 2)
| eval date = strftime(now(), "%m/%d/%Y")
| eval _time = strptime(date, "%m/%d/%Y")
| fields _time date norton_score nessus_score metasploit_score nexpose_score

Above are two abbreviated/dummy-data CSVs (visibility and scoring) and SPL that generates a risk score from them. The code works, but we want more accurate scores. The visibility CSV has dates for when a particular system was seen by a scanner, and servertypes_scanner_weights.csv assigns each a weight. Those weights are added up using "sum" and then divided by the total to get a % score.
The issue is that some of our devices cannot be seen by a few of the scanners. For example, the Cisco_ASA and Juniper_Switch cannot be seen by the Metasploit and Nexpose scanners, so they don't get weights for those scanners, and we increase the weights for the scanners that can see those two devices. Any ideas, in SPL, on how to exclude systems that can't be seen by the Metasploit and Nexpose scanners so they are not counted in the "sum" of the final score for each scanner?
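One approach (a sketch, assuming a servertype field exists on each row): null out the weights that don't apply before summing, since `sum()` ignores null values, and compute a per-scanner denominator from only the rows that still carry a weight:

```
| eval metasploit_weight = if(servertype=="Cisco_ASA" OR servertype=="Juniper_Switch", null(), metasploit_weight)
| eval nexpose_weight    = if(servertype=="Cisco_ASA" OR servertype=="Juniper_Switch", null(), nexpose_weight)
| eventstats sum(metasploit) as metasploit_count
             count(eval(isnotnull(metasploit_weight))) as metasploit_total
             sum(nexpose) as nexpose_count
             count(eval(isnotnull(nexpose_weight))) as nexpose_total
| eval metasploit_score = round((metasploit_count / metasploit_total) * 100, 2)
| eval nexpose_score    = round((nexpose_count / nexpose_total) * 100, 2)
```

These evals would need to run before the existing per-scanner evals so that excluded devices contribute neither to the numerator nor the denominator.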
Hi fellow Splunk users, I need help setting up a search query (later to be saved as an alert) to check failed login attempts to our EC2 instances. In my organization we don't allow SSH login. On top of that, I also want to see whether anyone has tried to change any sensitive config files inside an instance. Logs are already coming in from AWS CloudTrail; below is what I have so far. Thanks in advance for all the help and input.

index="main" sourcetype="aws:cloudtrail" | spath errorCode | search errorCode=AccessDenied

{ [-]
  awsRegion: eu-west-1
  errorCode: AccessDenied
  errorMessage: User: User is not authorized to perform: glue:GetSecurityConfigurations
  eventID: faf2053d-2bd2-41b3-93ff-a7e841979cea
  eventName: GetSecurityConfigurations
  eventSource: glue.amazonaws.com
  eventTime: 2020-02-20T20:43:14Z
  eventType: AwsApiCall
  eventVersion: 1.05
  recipientAccountId: 155166966842
  requestID: 86a20648-687a-4c3e-9f4a-ce07f1704217
  requestParameters: null
  responseElements: null
  sourceIPAddress: 18.221.72.80
  userAgent: aws-sdk-java/1.11.699 Linux/4.14.77-70.59.amzn1.x86_64 Java_HotSpot(TM)_64-Bit_Server_VM/25.202-b08 java/1.8.0_202 groovy/2.4.15 vendor/Oracle_Corporation
  userIdentity: { [+] }
}
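One caveat worth noting: CloudTrail records AWS API activity, not OS-level logins, so SSH attempts against an instance will not appear there; those would come from the instance's own auth logs (e.g. /var/log/secure on Amazon Linux), and config-file changes would need file-integrity monitoring on the host. For the API side, grouping the denials usually makes a cleaner alert than raw events; a sketch (the count threshold is arbitrary):

```
index="main" sourcetype="aws:cloudtrail" errorCode=AccessDenied
| stats count min(_time) as first_seen max(_time) as last_seen
        by userIdentity.arn eventSource eventName sourceIPAddress
| where count > 5
```

Saved as an alert, this fires on repeated denials by the same principal rather than on every single AccessDenied event.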