All Posts


Ok, thank you. In one of the cases where my alert didn't trigger, TimeIndexed = 2024-04-04 01:01:59 and _time = 04/04/2024 00:56:08.600.
The simple answer is no - once the event is there, it is there until it expires, i.e. passes the retention period, or it is deleted (which is a complex and risky operation). A more complex answer is yes, but it involves the delete command, which has its challenges and is restricted by role, i.e. only those who have been assigned the role can delete events, and even then they aren't actually deleted, they are just made invisible to searches. A pragmatic answer (as you hinted at) is that you deduplicate or merge the events with the same event_id through your searches, which, as you point out, has its own challenges.
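If you go the search-time route, here is a minimal sketch of the dedup/merge approach; the index and sourcetype names are placeholders for your actual DLP data:

index=your_dlp_index sourcetype=your_dlp_sourcetype
| dedup event_id sortby -_time

or, to merge rather than drop, keep the most recent value of every field per event_id:

index=your_dlp_index sourcetype=your_dlp_sourcetype
| stats latest(*) as * by event_id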
There's an app for that: https://splunkbase.splunk.com/app/3465. See the Splunk Validated Architectures document (https://www.splunk.com/en_us/resources/splunk-validated-architectures.html) for suggested specs.
The stats command can group results by multiple fields.  Try | stats count by Service,Operation
Do you mean something like this? | stats count by Service, Operation
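As a hedged variation, if the layout you described (one row per service with a column per operation) is the goal, chart gives that shape directly; Service and Operation here stand in for whatever your fields are actually called:

index=your_index
| chart count over Service by Operation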
Using the new appdynamics-gradle-plugin 24.1.0 makes the build fail with the following exception:

Execution failed for task ':ControlCenterApp:expandReleaseArtProfileWildcards'.
> A failure occurred while executing com.android.build.gradle.internal.tasks.ExpandArtProfileWildcardsTask$ExpandArtProfileWildcardsWorkAction
> invalid END header (bad central directory offset)

The relevant part of the stack trace:

...
Caused by: java.util.zip.ZipException: invalid END header (bad central directory offset)
at com.android.tools.profgen.ArchiveClassFileResourceProvider.getClassFileResources(ClassFileResource.kt:62)
at com.android.tools.profgen.ProfgenUtilsKt.expandWildcards(ProfgenUtils.kt:60)
at com.android.build.gradle.internal.tasks.ExpandArtProfileWildcardsTask$ExpandArtProfileWildcardsWorkAction.run(ExpandArtProfileWildcardsTask.kt:120)
at com.android.build.gradle.internal.profile.ProfileAwareWorkAction.execute(ProfileAwareWorkAction.kt:74)
at org.gradle.workers.internal.DefaultWorkerServer.execute(DefaultWorkerServer.java:63)
at org.gradle.workers.internal.NoIsolationWorkerFactory$1$1.create(NoIsolationWorkerFactory.java:66)
at org.gradle.workers.internal.NoIsolationWorkerFactory$1$1.create(NoIsolationWorkerFactory.java:62)
at org.gradle.internal.classloader.ClassLoaderUtils.executeInClassloader(ClassLoaderUtils.java:100)
at org.gradle.workers.internal.NoIsolationWorkerFactory$1.lambda$execute$0(NoIsolationWorkerFactory.java:62)
at org.gradle.workers.internal.AbstractWorker$1.call(AbstractWorker.java:44)
at org.gradle.workers.internal.AbstractWorker$1.call(AbstractWorker.java:41)
at org.gradle.internal.operations.DefaultBuildOperationRunner$CallableBuildOperationWorker.execute(DefaultBuildOperationRunner.java:204)
at org.gradle.internal.operations.DefaultBuildOperationRunner$CallableBuildOperationWorker.execute(DefaultBuildOperationRunner.java:199)
at org.gradle.internal.operations.DefaultBuildOperationRunner$2.execute(DefaultBuildOperationRunner.java:66)
at org.gradle.internal.operations.DefaultBuildOperationRunner$2.execute(DefaultBuildOperationRunner.java:59)
at org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:157)
at org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:59)
at org.gradle.internal.operations.DefaultBuildOperationRunner.call(DefaultBuildOperationRunner.java:53)
at org.gradle.internal.operations.DefaultBuildOperationExecutor.call(DefaultBuildOperationExecutor.java:73)
at org.gradle.workers.internal.AbstractWorker.executeWrappedInBuildOperation(AbstractWorker.java:41)
at org.gradle.workers.internal.NoIsolationWorkerFactory$1.execute(NoIsolationWorkerFactory.java:59)
at org.gradle.workers.internal.DefaultWorkerExecutor.lambda$submitWork$0(DefaultWorkerExecutor.java:170)
at org.gradle.internal.work.DefaultConditionalExecutionQueue$ExecutionRunner.runExecution(DefaultConditionalExecutionQueue.java:187)
at org.gradle.internal.work.DefaultConditionalExecutionQueue$ExecutionRunner.access$700(DefaultConditionalExecutionQueue.java:120)
at org.gradle.internal.work.DefaultConditionalExecutionQueue$ExecutionRunner$1.run(DefaultConditionalExecutionQueue.java:162)
at org.gradle.internal.Factories$1.create(Factories.java:31)
at org.gradle.internal.work.DefaultWorkerLeaseService.withLocks(DefaultWorkerLeaseService.java:264)
at org.gradle.internal.work.DefaultWorkerLeaseService.runAsWorkerThread(DefaultWorkerLeaseService.java:128)
at org.gradle.internal.work.DefaultWorkerLeaseService.runAsWorkerThread(DefaultWorkerLeaseService.java:133)
at org.gradle.internal.work.DefaultConditionalExecutionQueue$ExecutionRunner.runBatch(DefaultConditionalExecutionQueue.java:157)
at org.gradle.internal.work.DefaultConditionalExecutionQueue$ExecutionRunner.run(DefaultConditionalExecutionQueue.java:126)
... 2 more

Environment: com.android.tools.build:gradle:8.3.1

Any ideas what is happening here? It sounds like there is something wrong with the library packaging itself, but that's just a guess; in fact, I could open the renamed jar with a zip tool myself.

Cheers, Jerg
Yes, extract the full timestamp (including the date), then parse it with strptime() into an epoch time value (the number of seconds since 1970), then format it with strftime() using the relevant time variables. See "Date and time format variables" in the Splunk documentation.
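For the AM/PM question, here is a minimal sketch of that chain, assuming a JSON @timestamp like the one in the rex elsewhere in this thread (the field name ts is just an illustration):

| rex field=_raw "\"@timestamp\":\"(?<ts>\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2})"
| eval Time12=strftime(strptime(ts, "%Y-%m-%dT%H:%M:%S"), "%I:%M %p")

%I gives the 12-hour clock and %p appends AM or PM.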
Hello, I am trying to install the Splunk Operator using OperatorHub in an OpenShift cluster through the web console, but it is failing with the error below:

installing: waiting for deployment splunk-operator-controller-manager to become ready: deployment "splunk-operator-controller-manager" not available: Deployment does not have minimum availability.

I also tried to follow the steps in this document: How to deploy Splunk in an OpenShift environment | by Eduardo Patrocinio | Medium, but hit the following in the CLI as well:

oc apply -f https://raw.githubusercontent.com/patrocinio/openshift/main/splunk/standalone-standalone.yaml
error: resource mapping not found for name: "standalone" namespace: "splunk-operator" from "https://raw.githubusercontent.com/patrocinio/openshift/main/splunk/standalone-standalone.yaml": no matches for kind "Standalone" in version "enterprise.splunk.com/v1alpha2"
ensure CRDs are installed first

My requirement is to create an instance of Splunk in a namespace in OCP. Kindly help on this. Splunk Operator for Kubernetes
Hello! So here is a doozy. We have blacklists in place using regex. In particular this one:

[WinEventLog://Microsoft-Windows-Sysmon/Operational]
source = XmlWinEventLog:Microsoft-Windows-Sysmon/Operational
disabled = false
start_from = oldest
current_only = 0
evt_resolve_ad_obj = 1
checkpointInterval = 5
blacklist1 = $XmlRegex= (C:\\Program Files\\Preton\\PretonSaver\\PretonService.exe)

along with some other blacklists in sequence as blacklist2, blacklist3, etc. Now, on my and some colleagues' local machines, where I have tested the inputs.conf file to see if we are blacklisting correctly, it all works; no logs come into Splunk Cloud. BUT when this is deployed to the estate using our deployment server, the blacklist doesn't work. Does anybody know where I can troubleshoot this from? I've run btool and the confs come out fine. We are local UF agents going to a DS and then to Splunk Cloud.
Hi Splunkers, let me provide a bit of background. We are ingesting logs into Splunk via an API from our DLP service provider, using HEC. The script that runs the API reads the last 24 hours of events, gets the results in JSON format, and sends them to Splunk Cloud using HEC. The problem I have is that with every pull some of the events get duplicated, e.g. the event_id gets duplicated because the events change status on a daily basis and the script catches that. So reporting is a nightmare. I know I can deduplicate, but is there an easier way for Splunk to remove those duplicates and maintain the same event_id?
Did you ever resolve this issue?
Hi Everyone, I was wondering how I would go about a stats count for two fields where one field depends on the other. For example, we have a field called service version and another called service operation. Operations aren't necessarily exclusive to a version, but I was wondering whether it is possible to do a stats count that would display something like: Service 1 - Operation 1: *how many* - Operation 2: *how many*, and so on. Is something like this possible?
Hello Everyone, I've encountered an issue where certain customers appear to have duplicate ELB access logs. During a routine check, I noticed instances of identical events being logged with the exact same timestamp, which shouldn't normally occur. I'm using the Splunk_TA_aws app for ingesting logs, specifying each S3 bucket and the corresponding ELB log prefix as inputs. My search pattern is index=customer-index-{customer_name} sourcetype="aws:elb:accesslogs", aimed at isolating the data per customer. Upon reviewing the original logs directly within the S3 buckets, I confirmed that the duplicates are not present at the source; they only appear once ingested into Splunk. This leads me to wonder if there might be a configuration or processing step within Splunk or the AWS Add-on that could be causing these duplicates. Has anyone experienced a similar issue or could offer insights into potential causes or solutions? Any advice or troubleshooting tips would be greatly appreciated. Here we can see the same timestamp for the logs (screenshot not included). If I add | dedup _raw, the number of events goes down to 6,535 from 12,710. Thank you in advance for your assistance.
Is there a way to add AM or PM according to the time?
Yeah, my sincerest apologies; I can have difficulty at times accurately describing what I'm looking for. I'll definitely check out the query below. But essentially I'm just looking for a date value and a request value that don't change day to day unless the request value is higher on a different date. Hopefully that's a more accurate description.
This is a little vague, so I will make some assumptions. Assuming you want a daily count of events, and to just keep the highest one, you could do this:

| bin _time span=1d
| stats count by _time
| eventstats max(count) as max
| where count==max
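If you want to try the shape of that output without real data, here is a self-contained mock; gentimes and random() just stand in for your actual daily counts, and peak_day is an arbitrary display field:

| gentimes start=-30
| eval _time=starttime, count=(random() % 1000)
| eventstats max(count) as max
| where count==max
| eval peak_day=strftime(_time, "%Y-%m-%d")
| table peak_day count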
It is a system field called _indextime - you could rename it without the leading _ so it becomes visible. If you want to use it, you may need to include it in the stats command since this command only keeps fields which are explicitly named.
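A minimal sketch of both suggestions (your_index is a placeholder); it also computes the indexing lag, which is often the reason for looking at _indextime in the first place:

index=your_index
| eval index_time=strftime(_indextime, "%Y-%m-%d %H:%M:%S"), lag_seconds=_indextime - _time
| table _time index_time lag_seconds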
Hi all, I was wondering if there's a way to create a search that I can add to a dashboard that will present the peak day and what the volume was over a 30-day period. Essentially, when loading the dashboard, I was hoping it could keep whichever day the peak occurred on and not replace it until a larger peak occurs, assuming that's even possible. I may have worded this poorly, so feel free to ask any questions about what I'm trying to achieve.
|rex field=_raw "\"@timestamp\":\"\d{4}-\d{2}-\d{2}T(?<Time>\d{2}:\d{2})"
Where can I see the index time?