All Posts

Hello, I have 2 searches whose results I want to do math on. Each search looks for a specific string and dedups based on an ID.

The first search:

index=anon_index source="*source.log" "call to ODM at */file_routing"
| dedup MSGID
| stats count

The second search:

index=anon_index source="*source.log" "message was published to * successfully"
| dedup MSGID
| append [search index="index2" "ROUTING failed. Result sent to * response queue tjhis.MSG.RES.xxx" source=MSGTBL* OR source=MSG_RESPONSE OR source=*source.log | dedup MSGID]
| stats count

What I'd like to be able to do is take the count from the first search and subtract the second: for example, if the first was 1000 and the second was 500, I'd like to show the difference. At some point I'd also like to show, in a panel, the IDs that were in the first set but not in the second. Thanks for the assistance!
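One possible sketch for both the difference and the missing IDs in a single search, assuming MSGID is the shared key (the index2 append is left out for brevity). Note this counts IDs seen in the first search but not the second, which equals the plain subtraction only when the second set is a subset of the first:

index=anon_index source="*source.log" "call to ODM at */file_routing"
| dedup MSGID
| eval set="first"
| append [search index=anon_index source="*source.log" "message was published to * successfully" | dedup MSGID | eval set="second"]
| stats values(set) as sets by MSGID
| where mvcount(sets)=1 AND sets="first"
| stats count as difference values(MSGID) as missing_ids

The final stats gives the difference in one cell and the list of MSGIDs for the panel in another.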
Thank you. This is a very good suggestion, but unfortunately all the old servers are decommissioned. We only have the data backup in bucket form; I am pretty sure they are warm and cold. Hence it was decided to stand up a standalone instance, mount the data backup storage, and start searching it.
It is just old data. Both setups ran in parallel for about a month; once all the log sources had shifted successfully to the new setup, we stopped using the old one. I am sure most of the data will be in warm and cold buckets, since the buckets should have rolled to warm when we stopped/restarted the old Splunk.
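For anyone attempting the same, a minimal indexes.conf sketch for such a standalone instance, assuming the backed-up warm/cold buckets for a hypothetical index old_index are copied under its thaweddb path on a mount like /mnt/old_backup (bucket directory names must not collide; thawed is the usual home for restored buckets):

[old_index]
homePath   = $SPLUNK_DB/old_index/db
coldPath   = $SPLUNK_DB/old_index/colddb
thawedPath = /mnt/old_backup/old_index/thaweddb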
Hi

Project/environment setup:
1. Android Gradle Plugin: 8.1.1
2. AppDynamics Android agent: 23.7.1
3. Gradle version: 8.3
4. Java 17

In our company we need to migrate to a new Maven repository (GitLab) and need to use header credentials for this new Maven repo, exactly as described in the GitLab docs here: https://docs.gitlab.com/ee/user/packages/maven_repository/?tab=gradle#edit-the-client-configuration

So, basically, our Maven repository configuration looks approximately like this:

maven {
    url = uri("https://repo.url")
    name = "GitLab"
    credentials(HttpHeaderCredentials::class) {
        name = "TokenHeader"
        value = "TokenValue"
    }
    authentication {
        create("header", HttpHeaderAuthentication::class)
    }
}

The problem is that once we add this to the project configuration, the project starts failing at the Gradle configuration stage with the stack trace below:

FAILURE: Build failed with an exception.

* What went wrong:
A problem occurred configuring project ':app'.
> Can not use getCredentials() method when not using PasswordCredentials; please use getCredentials(Class)
...

* Exception is:
org.gradle.api.ProjectConfigurationException: A problem occurred configuring project ':app'.
...
Caused by: java.lang.IllegalStateException: Can not use getCredentials() method when not using PasswordCredentials; please use getCredentials(Class)
    at org.gradle.api.internal.artifacts.repositories.AuthenticationSupporter.getCredentials(AuthenticationSupporter.java:62)
    ...
    at org.gradle.internal.metaobject.AbstractDynamicObject.getProperty(AbstractDynamicObject.java:60)
    at org.gradle.api.internal.artifacts.repositories.DefaultMavenArtifactRepository_Decorated.getProperty(Unknown Source)
    at com.appdynamics.android.gradle.DependencyInjector$1$_afterEvaluate_closure2$_closure6.doCall(DependencyInjector.groovy:93)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    ...
    at org.gradle.api.internal.artifacts.DefaultArtifactRepositoryContainer.addRepository(DefaultArtifactRepositoryContainer.java:88)
    at org.gradle.api.internal.artifacts.dsl.DefaultRepositoryHandler.maven(DefaultRepositoryHandler.java:161)
    at org.gradle.api.internal.artifacts.dsl.DefaultRepositoryHandler.maven(DefaultRepositoryHandler.java:167)
    at org.gradle.api.artifacts.dsl.RepositoryHandler$maven.call(Unknown Source)
    at com.appdynamics.android.gradle.DependencyInjector$1$_afterEvaluate_closure2.doCall(DependencyInjector.groovy:89)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
    at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at com.appdynamics.android.gradle.DependencyInjector$1.afterEvaluate(DependencyInjector.groovy:79)
    at jdk.internal.reflect.GeneratedMethodAccessor2628.invoke(Unknown Source)
    at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at org.gradle.configuration.internal.DefaultListenerBuildOperationDecorator$BuildOperationEmittingInvocationHandler$1.lambda$run$0(DefaultListenerBuildOperationDecorator.java:255)
    ...

From what I understood, the AppDynamics plugin is inspecting the Maven repositories declared in the project via some reflection API, and it assumes that every Maven repository authenticates via username/password. I also found that this is somehow related to the `adeum.dependencyInjection.enabled` property, but it is poorly documented.
Well, I have not found any documentation about what it does at all, only a single sentence mentioning it here: https://docs.appdynamics.com/appd/22.x/22.3/en/end-user-monitoring/mobile-real-user-monitoring/instrument-android-applications/troubleshoot-the-android-instrumentation#id-.TroubleshoottheAndroidInstrumentationv22.1-EnforceaDifferentRuntimeVersionfromthePluginVersion

Anyway, after disabling this option the project compiles, but the app crashes at runtime when built. So, the questions are:
1. Is there any way to use a Maven repository with a non-username/password authentication option together with the AppDynamics plugin? We are not allowed to use different authentication for it, so the AppDynamics plugin is becoming a blocker for building the project.
2. Is there any documentation or knowledge about `adeum.dependencyInjection.enabled`? It seems to be directly related to the issue.
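For reference, the flag from question 2 is toggled like this (a hypothetical sketch; as noted above its effect is poorly documented, and in this report the build then configured and compiled but the app crashed at runtime):

# gradle.properties
adeum.dependencyInjection.enabled=false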
We are moving into a container environment and plan to manage the logs via Splunk Cloud.  We'd like to be able to programmatically create the Event Collector, the main index and additional indexes.  Has anyone done anything like that?  We would need to both create and upgrade.
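Not a definitive answer, but one possible sketch against the Splunk Cloud Admin Config Service (ACS) API, assuming ACS is enabled for the stack and a JWT token is at hand; the stack name, index name, token name, and payload fields here are illustrative and should be checked against the current ACS documentation:

import requests

STACK = "example-stack"   # hypothetical Splunk Cloud stack name
TOKEN = "<jwt-token>"     # placeholder authentication token
BASE = f"https://admin.splunk.com/{STACK}/adminconfig/v2"
HEADERS = {"Authorization": f"Bearer {TOKEN}", "Content-Type": "application/json"}

# Create an event index (upgrades would use a PATCH to the same resource)
resp = requests.post(f"{BASE}/indexes", headers=HEADERS, json={
    "name": "container_logs",   # hypothetical index name
    "datatype": "event",
    "searchableDays": 90,
})
resp.raise_for_status()

# Create an HEC token that writes to that index
resp = requests.post(f"{BASE}/inputs/http-event-collectors", headers=HEADERS, json={
    "name": "container-hec",    # hypothetical token name
    "defaultIndex": "container_logs",
})
resp.raise_for_status()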
Hello @thambisetty, could you explain what each line does, if possible?

How can we build this same missing_number logic into an if condition and throw an error if we miss any number in between? In our case the sequence also increases by one.

Each Doc_ID (AGDGYS8737vdhh = file_name, like the category mentioned in this post) has a set of sequence_numbers that increase one by one. If the sequence does not increase by one, we conclude that some number was missed in between, and we want to trigger an alert then and there.
Thank you. Maybe I am not being clear enough. I apologize.

index=windowsevent sourcetype="Script:InstalledApps" NOT DisplayName="Carbon Black Cloud Sensor 64-bit"
| dedup host
| table host

When I run this, it returns all the hosts I have in Splunk, and many of those hosts do have the Carbon Black Cloud sensor installed.
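If the goal is hosts where the sensor is absent entirely (rather than events that merely lack that DisplayName), one sketch, assuming every host reports at least one installed app in this sourcetype:

index=windowsevent sourcetype="Script:InstalledApps"
| stats values(DisplayName) as apps by host
| where isnull(mvfind(apps, "Carbon Black Cloud Sensor 64-bit"))
| table host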
Hello @ITWhisperer, @bowesmana & @yuanliu,

Thanks. One more question: how can I put logic in place to find a missing number in this sequence of dynamically changing numbers? There is no pattern except that it increases by one. Is there any way to build logic around this increase-by-one behaviour and trigger an alert when it does not increase by one, which indicates a number was missed?

00000000000000875115,00000000000000875116,00000000000000875118

In this case 00000000000000875117 is missing.
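A possible sketch of the alert logic, assuming sequence_number is already extracted and the numbers are grouped by an identifier such as the Doc_ID mentioned earlier in the thread; an alert can then fire whenever this search returns results:

... your base search ...
| sort 0 Doc_ID sequence_number
| streamstats current=f window=1 last(sequence_number) as prev_seq by Doc_ID
| eval gap = tonumber(sequence_number) - tonumber(prev_seq)
| where gap > 1

tonumber() strips the leading zeros, so consecutive values differ by exactly 1 and any gap greater than 1 marks a missing number.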
The pattern may be:

Sequence Numbers:00000000000000872510,00000000000000872511,00000000000000872512,00000000000000872513,00000000000000872514

I need to extract only the numbers, without the commas, and display them in a table like:

00000000000000872510
00000000000000872511
00000000000000872512
00000000000000872513
00000000000000872514
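One sketch, assuming the values are always 20-digit, zero-padded numbers (the index name is illustrative):

index=your_index "Sequence Numbers:"
| rex max_match=0 "(?<sequence_number>\d{20})"
| mvexpand sequence_number
| table sequence_number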
Hello, I tested by following the instructions. It only worked with the sample IP provided by Splunk; it didn't work when I tried to compare IPv6 with IPv6 (see below). It looks like CIDR match only works if an IP is part of a subnet. In my environment I tried to compare compressed IPv6 with expanded IPv6. Thanks for your help.

IP from Splunk, expected = TRUE:

| makeresults
| eval ip="2001:0db8:ffff:ffff:ffff:ffff:ffff:ff99"
| lookup ipv6test ip OUTPUT expected

_time                 expected  ip
2023-09-07 09:08:54   TRUE      2001:0db8:ffff:ffff:ffff:ffff:ffff:ff99

IP from our test, expected = empty:

| makeresults
| eval ip="2001:db8:3333:4444:5555:6666::2101"
| lookup ipv6test ip OUTPUT expected

_time                 expected  ip
2023-09-07 09:10:54             2001:db8:3333:4444:5555:6666::2101

CSV table:

ip                                            expected
2001:0db8:ffff:ffff:ffff:ffff:ffff:ff00/120   true
2001:db8:3333:4444:5555:6666::2101/64         test mask
2001:db8:3333:4444:5555:6666::2101            test with mask
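One thing worth double-checking: lookup-side CIDR matching only applies when match_type is declared on the lookup definition in transforms.conf; a minimal sketch, assuming the lookup is named ipv6test (note this by itself does not normalize compressed vs. expanded IPv6 notation, which may still explain the mismatch above):

[ipv6test]
filename = ipv6test.csv
match_type = CIDR(ip)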
Noting your input on the SH not being the best option as the HEC input collector. Anyway, your tip was the correct one and allowed me to filter the data. You made my day, thanks!
Hi @vijreddy30, check the sharing options: maybe you have different roles for some users. Did you check that the aliases aren't private? Ciao. Giuseppe
import requests

url = "https://splunkbase.splunk.com/api/v1/app/"
limit = 1
url2 = "https://splunkbase.splunk.com/api/v1/app/"

# Write the CSV header
with open(r"C:\Users\denis.zarfin\PycharmProjects\pythonProject2\main.txt", 'w') as f:
    f.write("name" + ", " + "uid" + ", " + "title" + ", " + '\n')

offset = -1
try:
    while True:
        offset += 1
        try:
            # Page through the app listing one app at a time
            response = requests.get(url, params={"limit": limit, "offset": offset})
            data = response.json()
            for i in data["results"]:
                # Fetch the release list for this app and keep only the latest
                url2 = str(url2) + str(i["uid"]) + "/release/"
                response2 = requests.get(url2)
                data2 = response2.json()
                data2 = data2[0:1]
                for j in data2:
                    a = str(j["name"])
                    b = str(i["uid"])
                    c = str(i["title"])
                    with open(r"C:\Users\denis.zarfin\PycharmProjects\pythonProject2\main.txt", 'a') as f:
                        f.write(a + ", " + b + ", " + c + ", " + '\n')
                url2 = "https://splunkbase.splunk.com/api/v1/app/"
        except Exception:
            pass
        print(offset, a, b, c)
        if offset > 2700:
            break
except Exception:
    pass
print("ok")

That one exports the results to CSV... but it's not that good. In the end I want to get 2 JSONs. I was able to do it "manually" with:

import requests

for app_id in range(0, 1, 1):
    url = f'https://splunkbase.splunk.com/api/v1/app/?offset={app_id}&limit=1'
    data = requests.get(url).json()
    # "results" is a list, so index into it before reading "uid"
    print(f'Name: {data["results"][0]["uid"]}')
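A possible sketch of the paging loop that writes one JSON file directly instead, assuming the API keeps returning a non-empty "results" list until the catalogue is exhausted (the page size and output filename are illustrative):

import json
import requests

URL = "https://splunkbase.splunk.com/api/v1/app/"
apps = []
offset = 0
while True:
    # Assumes the endpoint accepts a larger page size than limit=1
    data = requests.get(URL, params={"limit": 100, "offset": offset}).json()
    results = data.get("results", [])
    if not results:
        break
    apps.extend(results)
    offset += len(results)

# One JSON document with every app record
with open("apps.json", "w") as f:
    json.dump(apps, f, indent=2)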
Ugh. Using a SH as an input collector is... kinda unusual. And not a very beautiful architecture. Anyway, remember that your events are parsed and processed _only_ on the first "heavy" component (based on the Splunk Enterprise install package; not the UF) in the event's path (except for ingest actions; those can happen on indexers even on parsed data). So if you're ingesting HEC on the SH, you need those props/transforms on the SH. And in order to filter on the source field you need:

MetaData:Source : The source associated with the event. The value must be prefixed by "source::"
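Concretely, a sketch of the transforms.conf stanza from the question, adjusted per the above (untested):

[ignore_jenkins_logs]
SOURCE_KEY = MetaData:Source
REGEX = DBCompilation
DEST_KEY = queue
FORMAT = nullQueue

Since the matched value carries the source:: prefix, an unanchored REGEX like this still matches anywhere in the source.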
Hi Team,

We wrote field aliases in props.conf. The field aliases show up under Interesting Fields in the Dev and QA environments; the same configuration was applied in the Prod environment, but Prod is not showing the field aliases under Interesting Fields.

Please help me.

Regards, Vijay K
I'm receiving the HEC directly on the search head and have the props/transforms set up on both the SH and the indexers. The sourcetype is "jenkins_log" and the log I want to avoid has "DBCompilation" in the source field. This is what I'm trying to achieve.

In props.conf:

[jenkins_log]
TRANSFORMS-override = ignore_jenkins_logs

In transforms.conf:

[ignore_jenkins_logs]
SOURCE_KEY = fields:source
REGEX = DBCompilation
DEST_KEY = queue
FORMAT = nullQueue
Hi @jip31, good for you, see next time! Ciao and happy splunking. Giuseppe. P.S.: Karma Points are appreciated by all the contributors.
In my case, yes. Check your props.conf and try with [httpevent] to see if that helps. I had to do a mixture of both to get it to work.
The first question is: are your props/transforms on the same host where you're receiving data with your HEC input(s)? Or are you trying to receive HEC on a HF and filter with props/transforms on the indexers?