All Posts

Subsearches are limited to 50,000 events - could this be the reason your subsearch is not showing any results? Have you tried a shorter timeframe, or tried fragmenting your subsearch in some way, e.g. splitting by source?
Wow... This worked! Thank you very much. This has been a journey.  Looks like I just need to shorten my relative time to avoid the max results and timeouts for the "map" function but that is totally worth it. 
Hi, I have several dashboards that use the custom JS to create tabs from this 2015 Splunk blog: Making a dashboard with tabs (and searches that run when clicked) | Splunk. The custom JS from the blog and the tabs worked perfectly on Splunk version 8.1.3; however, after upgrading to version 9.1.0.1, the custom JavaScript that powers the tabs no longer works. When loaded, the dashboards show an error that says: "A custom JavaScript error caused an issue loading your dashboard." Does anyone know how to update the JavaScript from this blog post to be compatible with Splunk version 9.1.0.1 and jQuery 3.5? I have not been able to find any other Splunk questions referencing this same issue. I can provide the full JavaScript from the blog post in this message if necessary; it is also on the blog post.
Yes, I have results from both subsearches. But I still don't see Counter in the results, which is weird.
Hello all, we are getting an error in the analytics agent part of the SAP ABAP status logs, as shown below:

- Analytics agent connection:
> ERROR: connection check failed:
> Ping: 1 ms
> HTTP response ended with error: HTTP communication failed (code 411: Connect to eutehtas001:9090 failed: NIECONN_REFUSED(-10))
> HTTP server ******* (URI '/_ping') responded with status code 404 (Connection Refused)
> Analytics Agent was not reached by server *********_EHT_11V1http://**********:9090. Is it running?

Can anyone tell me why this error is occurring? Regards, Abhiram
There doesn't appear to be anything wrong with the search as you have presented it - are you certain you have results from the subsearch?

index=summary source="abc1" OR source="abc2"
| timechart span=1h sum(xyz) as Counter
| table Counter
I want to allow users of a specific role to access the user dropdown menu so that they can log out, but I want to prevent them from modifying the user preferences (time zone, default app, etc.). I have determined how to prevent these users from modifying their own password with role capabilities. Without the list_all_objects role capability, the user dropdown doesn't display, making it difficult to log out. With the list_all_objects capability active for the role, the user now has access to adjust the user preferences. Thanks in advance!
I am trying to get data from 2 indexes and combine them via appendcols. The search is:

index="anon" sourcetype="test1" localDn=*aaa*
| fillnull release_resp_succ update_resp_succ release_req update_req n40_msg_written_to_disk create_req value=0
| eval Number_of_expected_CDRs = release_req+update_req
| eval Succ_CDRs=release_resp_succ+update_resp_succ
| eval Missing_CDRs=Number_of_expected_CDRs-Succ_CDRs-n40_msg_written_to_disk
| timechart span=1h sum(Number_of_expected_CDRs) as Expected_CDRs sum(Succ_CDRs) as Successful_CDRs sum(Missing_CDRs) as Missing_CDRs sum(n40_msg_written_to_disk) as Written sum(create_req) as Create_Request
| eval Missed_CDRs_%=round((Missing_CDRs/Expected_CDRs)*100,2)
| table *
| appendcols [| search index=summary source="abc1" OR source="abc2" | timechart span=1h sum(xyz) as Counter | table Counter]

But I am getting output from just the first search. The appendcols search is just not giving the Counter field in the output.
Try something like this

index=anon_index source="*source.log" "message was published to * successfully"
| dedup MSGID
| append [search index="index2" "ROUTING failed. Result sent to * response queue tjhis.MSG.RES.xxx" source=MSGTBL* OR source=MSG_RESPONSE OR source=*source.log | dedup MSGID]
| stats count as firstcount
| appendcols [ search index=anon_index source="*source.log" "call to ODM at */file_routing" | dedup MSGID | stats count as secondcount]
| eval diff=firstcount-secondcount
Yeah, thank you in advance.
Try something like this

| rex field=MessageText max_match=0 "(?<SequenceNumber>\d+)"
| table SequenceNumber
| eval fullSequence = mvrange(tonumber(mvindex(SequenceNumber,0)),tonumber(mvindex(SequenceNumber,-1))+1)
| eval missing=mvmap(fullSequence,if(isnull(mvfind(SequenceNumber,fullSequence)),fullSequence,NULL()))
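For readers who want to sanity-check what the mvrange/mvmap combination above is doing, here is a minimal Python sketch of the same gap-detection idea (the input values are made up for illustration): build the full range from the first to the last observed number, then report every value absent from the observed set.

```python
def find_missing(sequence_numbers):
    """Return the numbers missing from a sequence, mirroring the
    mvrange(first, last+1) plus mvmap/mvfind logic in the SPL above."""
    nums = sorted(int(n) for n in sequence_numbers)
    full = range(nums[0], nums[-1] + 1)            # like mvrange(first, last+1)
    observed = set(nums)
    return [n for n in full if n not in observed]  # like the mvmap/isnull check

print(find_missing(["101", "102", "104", "107"]))  # [103, 105, 106]
```

This is only a sketch of the logic, not a replacement for the SPL; the search above does the same thing entirely inside Splunk.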
Hi all, we have a source which comes in via HEC into an index. The sourcetyping currently is dynamic. We then route the data to a specific index based on an indexed label. Here comes the catch: if we have another indexed field called label, we want to clone that event into a new index and sourcetype.

props.conf

[(?::){0}kube:container:*]
TRANSFORMS-route_by_domain_label = route_by_domain_label

transforms.conf (we route the data based on a custom label, named k8s_label for the example here; for sensitive data we also have a label called label_sensitive)

[route_index_by_label_domain]
SOURCE_KEY = field:k8s_label
REGEX = index_domain_(\w+)
FORMAT = indexname_$1
DEST_KEY = _MetaData:Index

[clone_when_sensitive]
SOURCE_KEY = field:label_sensitive
REGEX = true
DEST_KEY = _MetaData:Sourcetype
#CLONE_SOURCETYPE = sensitive_events
FORMAT = sourcetype::sensitive_events
Just released in Splunk Enterprise Security 7.2.0, this is now a feature. Splunk Idea ESSID-I-67: Ability to configure multiple drill-down searches for notable
Hi @vikas_gopal, the main problem is that you probably have the backup in clustered format: I'm not sure that it's possible to restore it without a cluster! Let me know if I can help you more. Ciao. Giuseppe P.S.: Karma Points are appreciated
Hello, I have 2 searches that I want to do math on the results of. Each search looks for a specific string and dedups based on an ID. The first search:

index=anon_index source="*source.log" "call to ODM at */file_routing"
| dedup MSGID
| stats count

The second search:

index=anon_index source="*source.log" "message was published to * successfully"
| dedup MSGID
| append [search index="index2" "ROUTING failed. Result sent to * response queue tjhis.MSG.RES.xxx" source=MSGTBL* OR source=MSG_RESPONSE OR source=*source.log | dedup MSGID]
| stats count

What I'd like to be able to do is take the result from the first set and subtract the second. For example, if the first set was 1000 and the second was 500, I'd like to show the difference. At some point I'd also like to show the IDs that were in the first set but not in the second, and display those in a panel. Thanks for the assistance!
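The count difference is simple arithmetic, but the second goal (IDs present in the first set and absent from the second) is a set difference. Here is a minimal Python sketch of that logic, with made-up MSGIDs purely for illustration:

```python
first_ids = {"MSG-001", "MSG-002", "MSG-003", "MSG-004"}   # e.g. from the first search
second_ids = {"MSG-002", "MSG-004"}                        # e.g. from the second search

diff = len(first_ids) - len(second_ids)         # the numeric difference (1000 - 500 style)
only_in_first = sorted(first_ids - second_ids)  # IDs seen only in the first search

print(diff)            # 2
print(only_in_first)   # ['MSG-001', 'MSG-003']
```

In SPL the same idea can roughly be expressed by combining both searches and grouping with `stats count by MSGID`, then keeping the MSGIDs that appear on only one side; the Python above is just to illustrate the set logic, not a drop-in answer.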
Thank you. This is a very good suggestion, but unfortunately all the old servers are decommissioned. We only have the data backup in bucket form. I am pretty sure they are warm and cold. Hence it was decided to set up a standalone instance, mount the data backup storage, and start searching it.
It is just old data. Both setups were running in parallel for a month or so; once all the log sources had shifted successfully to the new setup, we stopped using the old one. I am sure most of the data will be in warm and cold buckets, as the buckets should have moved to warm when we stopped/restarted the old Splunk.
Hi, project/environment setup:
1. Android Gradle Plugin: 8.1.1
2. AppDynamics Android agent: 23.7.1
3. Gradle version: 8.3
4. Java 17

In our company we need to migrate to a new Maven repository (GitLab) and need to use header credentials for this new Maven repo, exactly as described in the GitLab docs here: https://docs.gitlab.com/ee/user/packages/maven_repository/?tab=gradle#edit-the-client-configuration So, basically, our Maven repository configuration looks approximately like this:

maven {
    url = uri("https://repo.url")
    name = "GitLab"
    credentials(HttpHeaderCredentials::class) {
        name = "TokenHeader"
        value = "TokenValue"
    }
    authentication {
        create("header", HttpHeaderAuthentication::class)
    }
}

The problem is that once we add this to the project configuration, the project starts failing at the Gradle configuration stage with the stacktrace below:

FAILURE: Build failed with an exception.

* What went wrong:
A problem occurred configuring project ':app'.
> Can not use getCredentials() method when not using PasswordCredentials; please use getCredentials(Class)
...

* Exception is:
org.gradle.api.ProjectConfigurationException: A problem occurred configuring project ':app'.
...
Caused by: java.lang.IllegalStateException: Can not use getCredentials() method when not using PasswordCredentials; please use getCredentials(Class)
at org.gradle.api.internal.artifacts.repositories.AuthenticationSupporter.getCredentials(AuthenticationSupporter.java:62)
...
at org.gradle.internal.metaobject.AbstractDynamicObject.getProperty(AbstractDynamicObject.java:60)
at org.gradle.api.internal.artifacts.repositories.DefaultMavenArtifactRepository_Decorated.getProperty(Unknown Source)
at com.appdynamics.android.gradle.DependencyInjector$1$_afterEvaluate_closure2$_closure6.doCall(DependencyInjector.groovy:93)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
...
at org.gradle.api.internal.artifacts.DefaultArtifactRepositoryContainer.addRepository(DefaultArtifactRepositoryContainer.java:88)
at org.gradle.api.internal.artifacts.dsl.DefaultRepositoryHandler.maven(DefaultRepositoryHandler.java:161)
at org.gradle.api.internal.artifacts.dsl.DefaultRepositoryHandler.maven(DefaultRepositoryHandler.java:167)
at org.gradle.api.artifacts.dsl.RepositoryHandler$maven.call(Unknown Source)
at com.appdynamics.android.gradle.DependencyInjector$1$_afterEvaluate_closure2.doCall(DependencyInjector.groovy:89)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at com.appdynamics.android.gradle.DependencyInjector$1.afterEvaluate(DependencyInjector.groovy:79)
at jdk.internal.reflect.GeneratedMethodAccessor2628.invoke(Unknown Source)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at org.gradle.configuration.internal.DefaultListenerBuildOperationDecorator$BuildOperationEmittingInvocationHandler$1.lambda$run$0(DefaultListenerBuildOperationDecorator.java:255)
...

From what I understood, the AppDynamics plugin somehow inspects the Maven repositories declared in the project, working with them via some reflection API, and assumes that every Maven repository uses username/password authentication. I also found that this is somehow related to the `adeum.dependencyInjection.enabled` property, but it is poorly documented.
Well, I have not found any documentation about what it does at all, only a single sentence mentioning it here: https://docs.appdynamics.com/appd/22.x/22.3/en/end-user-monitoring/mobile-real-user-monitoring/instrument-android-applications/troubleshoot-the-android-instrumentation#id-.TroubleshoottheAndroidInstrumentationv22.1-EnforceaDifferentRuntimeVersionfromthePluginVersion Anyway, after disabling this option the project compiles, but the app crashes at runtime when built. So, the questions are:
1. Is there any way to use a Maven repository with a non username/password authentication option together with the AppDynamics plugin? We are not allowed to use different authentication for it, so the AppDynamics plugin is becoming a blocker for building the project.
2. Is there any documentation or knowledge about `adeum.dependencyInjection.enabled`? It seems to be directly related to the issue.
We are moving into a container environment and plan to manage the logs via Splunk Cloud.  We'd like to be able to programmatically create the Event Collector, the main index and additional indexes.  Has anyone done anything like that?  We would need to both create and upgrade.
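I haven't done this against Splunk Cloud specifically, but indexes and HEC tokens are both manageable over REST (on Splunk Enterprise via `services/data/indexes` and the HTTP input endpoint; Splunk Cloud generally goes through the ACS Admin Config Service instead, so verify the paths for your stack). As a rough illustration only, here is a Python sketch that builds the request URL and form body for the two create calls; the host, endpoint paths, and parameter names are assumptions to check against your deployment's docs before use.

```python
def build_create_index_request(base_url, index_name, max_size_mb=500000):
    """Build (url, form_body) for creating an index over the management
    REST port. Path as on Splunk Enterprise; Splunk Cloud uses ACS instead."""
    url = f"{base_url}/services/data/indexes"
    body = {"name": index_name, "maxTotalDataSizeMB": max_size_mb}
    return url, body

def build_create_hec_token_request(base_url, token_name, default_index):
    """Build (url, form_body) for creating an HTTP Event Collector token
    (hypothetical path; verify against your Splunk version's REST reference)."""
    url = f"{base_url}/servicesNS/nobody/splunk_httpinput/data/inputs/http"
    body = {"name": token_name, "index": default_index}
    return url, body

url, body = build_create_index_request("https://splunk.example.com:8089", "container_logs")
print(url)   # https://splunk.example.com:8089/services/data/indexes
print(body)  # {'name': 'container_logs', 'maxTotalDataSizeMB': 500000}
```

Upgrades would be the same idea with a POST to the existing object's URL instead of the collection; wrapping these in your container platform's provisioning step keeps the whole thing declarative.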
Hello @thambisetty, could you explain what each line does, if possible? Also, how can we build this same missing-number logic into an if condition and throw an error if we miss any number in between? In our case the sequence also increases by one. Each Doc_ID (AGDGYS8737vdhh = file_name, like the category mentioned in this post) has a set of sequence_numbers that increase one by one. If the sequence does not increase by one, we conclude that some number was missed in between, and we can trigger an alert then and there.
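To make the per-Doc_ID alert idea above concrete, here is a small Python sketch of the condition being described (the Doc_ID and event values are illustrative): group sequence numbers by Doc_ID, and flag any document whose numbers do not increase strictly by one.

```python
def docs_with_gaps(events):
    """events: list of (doc_id, sequence_number) pairs.
    Returns {doc_id: [missing numbers]} for every document whose
    sequence does not increase strictly by one."""
    by_doc = {}
    for doc_id, seq in events:
        by_doc.setdefault(doc_id, []).append(int(seq))
    gaps = {}
    for doc_id, nums in by_doc.items():
        nums.sort()
        observed = set(nums)
        missing = [n for n in range(nums[0], nums[-1] + 1) if n not in observed]
        if missing:                 # this is the "if" that would fire the alert
            gaps[doc_id] = missing
    return gaps

events = [("AGDGYS8737vdhh", 1), ("AGDGYS8737vdhh", 2), ("AGDGYS8737vdhh", 4)]
print(docs_with_gaps(events))  # {'AGDGYS8737vdhh': [3]}
```

In SPL the same grouping can roughly be achieved by collecting the numbers per document (e.g. `stats list(SequenceNumber) by Doc_ID`) before applying the mvrange/mvmap logic from the earlier answer, then alerting whenever the resulting missing field is non-empty.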