All Posts

Recent versions of the Splunk Universal Forwarder install with a least-privileged user that doesn't have the ability to pull Windows event logs by default. This is different from older versions of the UF, which could pull all logs by default. I'm not familiar with what you can do with Intune, but do you have the ability to specify command line arguments to run with the deployment? I'm thinking you may need to add something like WINEVENTLOG_SEC_ENABLE=1 (or similar). See https://docs.splunk.com/Documentation/Forwarder/9.2.1/Forwarder/InstallaWindowsuniversalforwarderfromaninstaller#Supported_command_line_flags for details along with other supported options.
I am trying to instrument a Xamarin Forms app with the following configuration:

var config = AppDynamics.Agent.AgentConfiguration.Create("<EUM_APP_KEY>");
config.CollectorURL = "http://<IP_Address>:7001";
config.CrashReportingEnabled = true;
config.ScreenshotsEnabled = true;
config.LoggingLevel = AppDynamics.Agent.LoggingLevel.All;
AppDynamics.Agent.Instrumentation.EnableAggregateExceptionHandling = true;
AppDynamics.Agent.Instrumentation.InitWithConfiguration(config);

In the documentation, it is stated that: "The Xamarin Agent does not support automatic instrumentation for network requests made with any library. You will need to manually instrument HTTP network requests regardless of what library is used." (Mobile RUM Supported Environments (appdynamics.com)) Yet, in these links, Customize the Xamarin Instrumentation (appdynamics.com) and Instrument Xamarin Applications (appdynamics.com), I see that I can use the AppDynamics.Agent.AutoInstrument.Fody package to automatically instrument the network requests.
I tried to follow the approach of automatic instrumentation using both AppDynamics.Agent and AppDynamics.Agent.AutoInstrument.Fody packages as per the docs, but I get the below error when building the project:

MSBUILD : error : Fody: An unhandled exception occurred:
MSBUILD : error : Exception:
MSBUILD : error : Failed to execute weaver /Users/username/.nuget/packages/appdynamics.agent.autoinstrument.fody/2023.12.0/build/../weaver/AppDynamics.Agent.AutoInstrument.Fody.dll
MSBUILD : error : Type:
MSBUILD : error : System.Exception
MSBUILD : error : StackTrace:
MSBUILD : error : at InnerWeaver.ExecuteWeavers () [0x0015a] in C:\projects\fody\FodyIsolated\InnerWeaver.cs:222
MSBUILD : error : at InnerWeaver.Execute () [0x000fe] in C:\projects\fody\FodyIsolated\InnerWeaver.cs:112
MSBUILD : error : Source:
MSBUILD : error : FodyIsolated
MSBUILD : error : TargetSite:
MSBUILD : error : Void ExecuteWeavers()
MSBUILD : error : Sequence contains more than one matching element
MSBUILD : error : Type:
MSBUILD : error : System.InvalidOperationException
MSBUILD : error : StackTrace:
MSBUILD : error : at System.Linq.Enumerable.Single[TSource] (System.Collections.Generic.IEnumerable`1[T] source, System.Func`2[T,TResult] predicate) [0x00045] in /Users/builder/jenkins/workspace/build-package-osx-mono/2020-02/external/bockbuild/builds/mono-x64/external/corefx/src/System.Linq/src/System/Linq/Single.cs:71
MSBUILD : error : at AppDynamics.Agent.AutoInstrument.Fody.ImportedReferences.LoadTypes () [0x000f1] in /opt/buildAgent/work/2ab48d427bca2ab0/sdk/AppDynamics.Agent.AutoInstrument.Fody/ImportedReferences.cs:117
MSBUILD : error : at AppDynamics.Agent.AutoInstrument.Fody.ImportedReferences..ctor (ModuleWeaver weaver) [0x0000d] in /opt/buildAgent/work/2ab48d427bca2ab0/sdk/AppDynamics.Agent.AutoInstrument.Fody/ImportedReferences.cs:90
MSBUILD : error : at ModuleWeaver.Execute () [0x00000] in /opt/buildAgent/work/2ab48d427bca2ab0/sdk/AppDynamics.Agent.AutoInstrument.Fody/ModuleWeaver.cs:17
MSBUILD : error : at InnerWeaver.ExecuteWeavers () [0x000b7] in C:\projects\fody\FodyIsolated\InnerWeaver.cs:204
MSBUILD : error : Source:
MSBUILD : error : System.Core
MSBUILD : error : TargetSite:
MSBUILD : error : Mono.Cecil.MethodDefinition Single[MethodDefinition](System.Collections.Generic.IEnumerable`1[Mono.Cecil.MethodDefinition], System.Func`2[Mono.Cecil.MethodDefinition,System.Boolean])

Any help please? Thank you
Perfect. Thank you!
Hi @corti77, if you have a list of monitored hosts you could run a search like the following:

| tstats count WHERE index=* BY host
| append [ | inputlookup perimeter.csv | eval count=0 | fields host count ]
| stats sum(count) AS total BY host
| where total=0

If you don't have this list, you can identify the hosts that sent logs in the last 30 days but not in the last hour:

| tstats count latest(_time) AS _time WHERE index=* BY host
| eval period=if(_time>now()-3600,"last_Hour","Previous")
| stats dc(period) AS period_count values(period) AS period BY host
| where period_count=1 AND period="Previous"

Ciao. Giuseppe
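At its core, the first search computes a set difference between the hosts expected from the lookup and the hosts actually seen in the index. A minimal sketch of that logic in Python, with hypothetical host names:

```python
# Hosts listed in the lookup (perimeter.csv) vs. hosts that sent data.
expected_hosts = {"web01", "web02", "db01"}   # hypothetical lookup contents
reporting_hosts = {"web01", "db01"}           # hypothetical hosts seen in the index

# Hosts in the lookup with no data are the ones the alert should flag
# (the equivalent of total=0 in the SPL above).
silent = expected_hosts - reporting_hosts
print(sorted(silent))  # ['web02']
```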
Is renderXml = 1 set on the input for both working and non-working? And any chance what you have working locally includes delimiters around the regex? Ex:

blacklist1 = $XmlRegex="(C:\\Program Files\\Preton\\PretonSaver\\PretonService.exe)"

From inputs.conf - Splunk Documentation:
* key=regex format: [...]
* The regex consists of a leading delimiter, the regex expression, and a trailing delimiter. Examples: %regex%, *regex*, "regex" [...]
Hello @Nimi1, the two points below might be the reason for duplicate events being ingested in Splunk:
1 - If you have multiple forwarders and you've enabled the same input (like ELB access logs) on all of them, each forwarder independently sends the same logs, resulting in repeated entries in your Splunk data.
2 - If you've mistakenly enabled multiple inputs for the same source within Splunk, like ELB access logs, the same logs can be ingested multiple times, leading to duplicates.
If this reply helps you, Karma would be appreciated.
The timeframe field is not terminated by a pipe, like the other fields, so the regex doesn't match.  Try this: | rex "(\w+)=([^\|\"]+)"
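To see why the missing trailing pipe matters, here is a quick Python sketch with a hypothetical pipe-delimited event (Python's re syntax is close enough to Splunk's PCRE for this case):

```python
import re

# Hypothetical sample event: pipe-delimited key=value pairs, where the
# final field (timeframe) has no trailing pipe.
sample = 'user=alice|action=login|timeframe=last_24h'

# A pattern that requires a trailing pipe misses the last field:
strict = re.findall(r'(\w+)=([^|"]+)\|', sample)

# Excluding the pipe from the value class, without requiring a trailing
# one, matches every field including the last:
relaxed = re.findall(r'(\w+)=([^|"]+)', sample)

print(strict)   # [('user', 'alice'), ('action', 'login')]
print(relaxed)  # [('user', 'alice'), ('action', 'login'), ('timeframe', 'last_24h')]
```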
Hi, By chance, I discovered that a power user with admin rights disabled the sysmon agent and the Splunk forwarder on his computer to gain some extra CPU for his daily tasks. To react quicker to this type of "issue" in the future, I would like to have some alerts in Splunk informing me whenever any host, initially domain-joined workstations and servers, might have stopped reporting events. Did someone implement something like this already? Any good article to follow? I plan to create a lookup table using ldapsearch and then an alert detecting which hosts from that table are not present in a basic host listing search on the sysmon index, for example. Any other better or simpler approach? Many thanks
OK, so the next time period after it was indexed would be 2024-04-04 01:00 to 2024-04-04 01:04:59, which doesn't include 04/04/2024 00:56:08.600, which is why your alert didn't get triggered.
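The arithmetic can be sketched in Python using the timestamps from this thread, assuming a 5-minute alert schedule (the exact schedule is an assumption):

```python
from datetime import datetime

# Timestamps from the thread: the event's _time vs. when it was indexed.
event_time = datetime(2024, 4, 4, 0, 56, 8, 600000)
indexed_at = datetime(2024, 4, 4, 1, 1, 59)

# Assuming a 5-minute schedule, the first search window that runs after
# indexing covers 01:00:00 - 01:04:59.
window_start = datetime(2024, 4, 4, 1, 0, 0)
window_end = datetime(2024, 4, 4, 1, 5, 0)

# The event was INDEXED inside that window, but its _time is before it,
# so a search over that window never sees the event.
print(window_start <= indexed_at < window_end)   # True
print(window_start <= event_time < window_end)   # False: alert never fires
```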
Ok, thank you. In one of the cases where my alert didn't get triggered:
TimeIndexed = 2024-04-04 01:01:59
_time = 04/04/2024 00:56:08.600
The simple answer is no - once the event is there, it is there until it expires, i.e. passes the retention period, or is deleted (which is a complex and risky operation). A more complex answer is yes, but it involves the delete command, which has its challenges and is restricted by role, i.e. only those who have been assigned the role can delete events, and even then, events aren't actually deleted, they are just made invisible to searches. A pragmatic answer (as you hinted at) is that you deduplicate or merge the events with the same event_id through your searches, which, as you point out, has its own challenges.
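The search-time merge idea can be illustrated outside Splunk: keep only the most recent record per event_id, which is what a sort - _time followed by dedup event_id does in SPL. A Python sketch with made-up events and field names:

```python
# Hypothetical events: the same event_id appears on multiple daily pulls,
# each time with an updated status and a later timestamp.
events = [
    {"event_id": "A1", "status": "open",     "_time": 100},
    {"event_id": "A1", "status": "resolved", "_time": 200},
    {"event_id": "B2", "status": "open",     "_time": 150},
]

# Walk the events in time order; later records overwrite earlier ones,
# so only the latest status per event_id survives.
latest = {}
for ev in sorted(events, key=lambda e: e["_time"]):
    latest[ev["event_id"]] = ev

print(sorted(latest), latest["A1"]["status"])  # ['A1', 'B2'] resolved
```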
There's an app for that at https://splunkbase.splunk.com/app/3465 See the Splunk Validated Architectures document (https://www.splunk.com/en_us/resources/splunk-validated-architectures.html) for suggested specs.
The stats command can group results by multiple fields.  Try | stats count by Service,Operation
Do you mean something like this? | stats count by Service, Operation
Using the new appdynamics-gradle-plugin 24.1.0 makes the build fail with the following exception:

Execution failed for task ':ControlCenterApp:expandReleaseArtProfileWildcards'.
> A failure occurred while executing com.android.build.gradle.internal.tasks.ExpandArtProfileWildcardsTask$ExpandArtProfileWildcardsWorkAction
> invalid END header (bad central directory offset)

The relevant stacktrace part:

...
Caused by: java.util.zip.ZipException: invalid END header (bad central directory offset)
at com.android.tools.profgen.ArchiveClassFileResourceProvider.getClassFileResources(ClassFileResource.kt:62)
at com.android.tools.profgen.ProfgenUtilsKt.expandWildcards(ProfgenUtils.kt:60)
at com.android.build.gradle.internal.tasks.ExpandArtProfileWildcardsTask$ExpandArtProfileWildcardsWorkAction.run(ExpandArtProfileWildcardsTask.kt:120)
at com.android.build.gradle.internal.profile.ProfileAwareWorkAction.execute(ProfileAwareWorkAction.kt:74)
at org.gradle.workers.internal.DefaultWorkerServer.execute(DefaultWorkerServer.java:63)
at org.gradle.workers.internal.NoIsolationWorkerFactory$1$1.create(NoIsolationWorkerFactory.java:66)
at org.gradle.workers.internal.NoIsolationWorkerFactory$1$1.create(NoIsolationWorkerFactory.java:62)
at org.gradle.internal.classloader.ClassLoaderUtils.executeInClassloader(ClassLoaderUtils.java:100)
at org.gradle.workers.internal.NoIsolationWorkerFactory$1.lambda$execute$0(NoIsolationWorkerFactory.java:62)
at org.gradle.workers.internal.AbstractWorker$1.call(AbstractWorker.java:44)
at org.gradle.workers.internal.AbstractWorker$1.call(AbstractWorker.java:41)
at org.gradle.internal.operations.DefaultBuildOperationRunner$CallableBuildOperationWorker.execute(DefaultBuildOperationRunner.java:204)
at org.gradle.internal.operations.DefaultBuildOperationRunner$CallableBuildOperationWorker.execute(DefaultBuildOperationRunner.java:199)
at org.gradle.internal.operations.DefaultBuildOperationRunner$2.execute(DefaultBuildOperationRunner.java:66)
at org.gradle.internal.operations.DefaultBuildOperationRunner$2.execute(DefaultBuildOperationRunner.java:59)
at org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:157)
at org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:59)
at org.gradle.internal.operations.DefaultBuildOperationRunner.call(DefaultBuildOperationRunner.java:53)
at org.gradle.internal.operations.DefaultBuildOperationExecutor.call(DefaultBuildOperationExecutor.java:73)
at org.gradle.workers.internal.AbstractWorker.executeWrappedInBuildOperation(AbstractWorker.java:41)
at org.gradle.workers.internal.NoIsolationWorkerFactory$1.execute(NoIsolationWorkerFactory.java:59)
at org.gradle.workers.internal.DefaultWorkerExecutor.lambda$submitWork$0(DefaultWorkerExecutor.java:170)
at org.gradle.internal.work.DefaultConditionalExecutionQueue$ExecutionRunner.runExecution(DefaultConditionalExecutionQueue.java:187)
at org.gradle.internal.work.DefaultConditionalExecutionQueue$ExecutionRunner.access$700(DefaultConditionalExecutionQueue.java:120)
at org.gradle.internal.work.DefaultConditionalExecutionQueue$ExecutionRunner$1.run(DefaultConditionalExecutionQueue.java:162)
at org.gradle.internal.Factories$1.create(Factories.java:31)
at org.gradle.internal.work.DefaultWorkerLeaseService.withLocks(DefaultWorkerLeaseService.java:264)
at org.gradle.internal.work.DefaultWorkerLeaseService.runAsWorkerThread(DefaultWorkerLeaseService.java:128)
at org.gradle.internal.work.DefaultWorkerLeaseService.runAsWorkerThread(DefaultWorkerLeaseService.java:133)
at org.gradle.internal.work.DefaultConditionalExecutionQueue$ExecutionRunner.runBatch(DefaultConditionalExecutionQueue.java:157)
at org.gradle.internal.work.DefaultConditionalExecutionQueue$ExecutionRunner.run(DefaultConditionalExecutionQueue.java:126)
... 2 more

Environment: com.android.tools.build:gradle:8.3.1

Any ideas what is happening here?
It kind of sounds like there is something wrong with the library packaging itself... but that's just a guess, and in fact I could open the renamed jar with a zip tool myself. Cheers, Jerg
Yes, extract the full timestamp (including the date), then parse it with strptime() into an epoch time value (number of seconds since 1970), then format it with strftime() using the relevant time variables. See Date and time format variables - Splunk Documentation.
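For example, in Python (whose strptime/strftime format codes largely match SPL's eval functions), with a hypothetical raw timestamp:

```python
from datetime import datetime

# Hypothetical raw timestamp extracted from an event.
raw = "2024-04-04 00:56:08"

# strptime: parse the string into a datetime; in SPL, strptime()
# returns epoch seconds directly.
parsed = datetime.strptime(raw, "%Y-%m-%d %H:%M:%S")
epoch = parsed.timestamp()  # epoch value depends on the local timezone

# strftime: reformat using the desired time format variables.
print(parsed.strftime("%d/%m/%Y %H:%M"))  # 04/04/2024 00:56
```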
Hello, I am trying to install the Splunk Operator using OperatorHub in an OpenShift cluster through the web console, but it is failing with the error below:

installing: waiting for deployment splunk-operator-controller-manager to become ready: deployment "splunk-operator-controller-manager" not available: Deployment does not have minimum availability.

I also tried to follow the steps mentioned in this document: How to deploy Splunk in an OpenShift environment | by Eduardo Patrocinio | Medium, but hit the below in the CLI as well:

oc apply -f https://raw.githubusercontent.com/patrocinio/openshift/main/splunk/standalone-standalone.yaml
error: resource mapping not found for name: "standalone" namespace: "splunk-operator" from "https://raw.githubusercontent.com/patrocinio/openshift/main/splunk/standalone-standalone.yaml": no matches for kind "Standalone" in version "enterprise.splunk.com/v1alpha2" ensure CRDs are installed first

My requirement is to create an instance of Splunk in a namespace in OCP. Kindly help with this. Splunk Operator for Kubernetes
Hello! So here is a doozy. We have blacklists in place using regex. In particular this one:

[WinEventLog://Microsoft-Windows-Sysmon/Operational]
source = XmlWinEventLog:Microsoft-Windows-Sysmon/Operational
disabled = false
start_from = oldest
current_only = 0
evt_resolve_ad_obj = 1
checkpointInterval = 5
blacklist1 = $XmlRegex= (C:\\Program Files\\Preton\\PretonSaver\\PretonService.exe)

along with some other blacklists in sequence as blacklist2, blacklist3 etc. Now, on my and some colleagues' local machines, where I have tested the inputs.conf file to see if we are blacklisting correctly, it all works, no logs come into Splunk Cloud. BUT when this is deployed using our deployment server to the estate, the blacklist doesn't work. Does anybody know where I can troubleshoot this from? I've run btool and the confs come out fine. We are local UF agents going to a DS then to Splunk Cloud.
Hi Splunkers, Let me provide a bit of background. We are ingesting logs into Splunk from our DLP service provider's API using HEC. The script that runs the API reads the last 24 hours of events, gets the results in JSON format, and sends them to Splunk Cloud using HEC. The problem I have is that on every pull some of the events get duplicated, e.g. the event_id gets duplicated because events change their status on a daily basis, so the script catches that. Reporting is a nightmare as a result. I know I can deduplicate, but is there an easier way for Splunk to remove those duplicates and maintain the same event_id?
Did you ever resolve this issue?