All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Sorry, everyone, for posting multiple times about my issue. I am summarising everything here; please help me with a solution.

We created a single summary index for all applications and are afraid of giving teams access to it, because anyone who can see it can see other apps' summary data, which is a security issue. We have created a dashboard on the summary index and disabled "open in search". At some point we need to give teams access to the summary index, and if they search index=*, both their restricted index and this summary index will show up, which is risky. Is there any way to restrict users from running index=*? NOTE: we already use RBAC to restrict users to their specific indexes, but this summary index holds summarised data for all of them. Any way to restrict this? In the dashboard we restrict them by requiring a field to be selected before the summary-index panel renders with filtering applied. How do people handle this type of situation?

We will create two indexes per application, one for non-prod and one for prod logs, in the same Splunk. They create two AD groups (np and prod). We will create the indexes and roles and assign them to the respective AD groups, and one user will have access to both groups. Since it is a single summary index, I thought of filtering it at role level using srchFilter and a service field, so as to stop one user seeing other apps' summary data. I extracted the service field from the raw data and ingested it into the summary index so that it picks up the service field values; I then use this field in srchFilter to restrict users. We only need the summary index for prod data (indexes), not non-prod data.

Below is the role created for non-prod:

[role_abc]
srchIndexesAllowed = non_prod
srchIndexesDefault = non_prod

Below is the role created for prod:

[role_xyz]
srchIndexesAllowed = prod;opco_summary
srchIndexesDefault = prod
srchFilter = (index=prod OR (index=opco_summary service=juniper-prod))

On another post I received a comment that indexed fields should use ::, but here these two fields (index, service) are not indexed fields, hence I used =.

My doubt is: if a user with these two roles searches only index=non_prod, will he see results or not? How does this search work in the backend? Is there any way to test? Also, a few users are part of 6-8 AD groups (6-8 indexes); how does srchFilter work there? Please clarify. And what if a user runs index=non_prod: can he still see non-prod logs or not?

If there is no other way than creating a separate summary index for each application, we will do it. But is there any way to do that quickly rather than manually? I don't have coding knowledge to automate it.
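
In case it helps, a common pattern is one role per application team, each allowed into the shared summary index but pinned to its own service value through srchFilter. A minimal sketch in authorize.conf; the role and index names below (role_app1_prod, app1_prod) are placeholders:

# authorize.conf -- hedged sketch, one role per app team
[role_app1_prod]
srchIndexesAllowed = app1_prod;opco_summary
srchIndexesDefault = app1_prod
# Appended as an implicit AND to every search this role runs
srchFilter = (index=app1_prod OR (index=opco_summary service=juniper-prod))

Two caveats to verify before handing out access: the authorize.conf spec says search filters behave reliably only with indexed and default fields, and summary events written by collect usually carry their fields as search-time key=value extractions, so test that the service filter actually excludes other apps' events.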
Hello Splunkers, I hope you are all doing well. I am preparing to take the SPLK-3001 exam, and I want to know about the self-study guide, and which version of ES the exam covers: is it V7 or V8? Thanks in advance!
Windows Server 2022. I have tried installing JRE 24 and Java 8, but it doesn't let me save the JAVA_HOME path and throws the error below:

FileNotFoundError: [WinError 2] The system cannot find the file specified
validate java command: java

Any help would be appreciated!
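
Not a definitive fix, but that FileNotFoundError usually means no java executable is found at the configured path. A quick sanity check from an elevated command prompt; the install path below is an assumption, so substitute your actual JDK directory:

REM Confirm Windows can actually find a java binary
where java
java -version

REM Set JAVA_HOME machine-wide (path is a placeholder; point it at the JDK root, not bin)
setx JAVA_HOME "C:\Program Files\Java\jdk-1.8" /M

After setting it, reopen the console (setx does not affect the current session) and retry saving the path.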
We will create two indexes per application, one for non-prod and one for prod logs, in the same Splunk. They create two AD groups (np and prod). We will create the indexes and roles and assign them to the respective AD groups. Up to here it is good.

Now we have created a single summary index for all the prod indexes' data, and we need to give all app teams access to that index. Since it is a single summary index, I thought of filtering it at role level using srchFilter and a service field, so as to stop one user seeing other apps' summary data.

Below is the role created for non-prod:

[role_abc]
srchIndexesAllowed = non_prod
srchIndexesDefault = non_prod

Below is the role created for prod:

[role_xyz]
srchIndexesAllowed = prod;opco_summary
srchIndexesDefault = prod
srchFilter = (index::prod OR (index::opco_summary service::juniper-prod))

I am not sure whether = or :: is the one that works here. When I test in the UI, it gives a warning when I use =, but when I use ::, the search preview returns no results. Which should I use?

My doubt is: if a user with these two roles searches only index=non_prod, will he see results or not? How does this search work in the backend? Is there any way to test? Also, a few users are part of 6-8 AD groups (6-8 indexes); how does srchFilter work there? Please clarify.
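
One way to test, assuming you can create a throwaway account: assign only the roles in question to a test user, log in as them, and compare both sides of the filter. For example, run these as the test user:

index=opco_summary | stats count by service

index=non_prod | stats count

| rest /services/authentication/current-context splunk_server=local | table username roles

On the backend, the role's srchFilter is appended to every search the user runs as an extra condition, and when a user holds multiple roles the filters are combined with OR by default (controlled by srchFilterSelecting in authorize.conf). So for your users in 6-8 AD groups the effective filter is the union of all their roles' filters, and a plain index=non_prod search should still return non-prod events as long as one of their roles allows that index. Also worth noting: the srchFilter spec says filters behave reliably only with indexed and default fields, which may explain why service:: returns nothing if service is only a search-time extraction in your summary data.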
Hello all, I am working on a Splunk query which is supposed to filter some logs using data from a lookup. Consider a field called host. I have a list of hosts stored in a lookup (let's call the lookup hostList.csv). Now, I want to retrieve the list of servers from the hostList.csv lookup and then filter the host field with the retrieved list. Note: I don't want to use the map command for this. If there is any other way to pull off this logic, please help me with an example query and explanation. Thank you!
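
A common way to do this without map is a subsearch over inputlookup, which expands into an OR of host=... terms. A sketch; the index and sourcetype are placeholders, and it assumes the lookup column is literally named host (rename it in the subsearch if not):

index=your_index sourcetype=your_sourcetype
    [| inputlookup hostList.csv | fields host ]
| ...

The subsearch returns its rows as (host="a" OR host="b" OR ...), so only events whose host appears in hostList.csv survive the initial search.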
I am finding the Cisco documentation and support hard to follow. The NetViz agent is installed and running; the Java agent is installed but not working. Cisco Support is advising me that I need a standalone Java application to attach the Java agent to, but I haven't read this in the Network Visibility guidance. I'm confused: can I add this to the app agent? Has anyone got steps for this?
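
For what it's worth, the AppDynamics Java agent is normally attached to your existing application's JVM rather than to a separate standalone program, by adding a -javaagent flag to that JVM's startup options. A sketch, with the install path and jar name as assumptions that vary by version:

java -javaagent:/opt/appdynamics/javaagent/javaagent.jar -jar your-application.jar

If the application runs under an app server (Tomcat, etc.), the same flag goes into that server's JVM options instead.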
I onboarded one production log source to Splunk, but after restarting the UF I am not able to see the recent logs, and I am also not able to see the recent internal logs. How do I fix this issue? Please help.
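
Some non-authoritative first checks, since missing _internal events usually point at the forwarder's output rather than your new input (paths and host name below are placeholders):

# On the forwarder: is it running, and where is it forwarding?
$SPLUNK_HOME/bin/splunk status
$SPLUNK_HOME/bin/splunk list forward-server

# Look for TcpOutputProc / connection errors in the forwarder's own log
tail -100 $SPLUNK_HOME/var/log/splunk/splunkd.log

Then on the search head, check when the forwarder last phoned home:

index=_internal host=<your_uf_host> | stats max(_time) as last_seen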
https://splunkbase.splunk.com/app/3079 Qmulos - developer: is this a free or paid app? If paid, where can I find pricing? Thanks, A
I need to filter a list of timestamps which are less than _time.

This works:

| makeresults count=1
| eval timestamps = mvappend("1570000000", "1570000020")
| eval older = mvfilter(timestamps < 1570000010)

but the compared value is whatever is in _time. This does not work:

| makeresults count=1
| eval timestamps = mvappend("1570000000", "1570000020")
| eval _time = 1570000010
| eval older = mvfilter(timestamps < _time)

I know the timestamps work, because this does work:

| makeresults count=1
| eval timestamps = mvappend("1570000000", "1570000020")
| eval older = mvfilter(timestamps < now())

Why do now() and static values work, but this does not?

| makeresults count=1
| eval timestamps = mvappend("1570000000", "1570000020")
| eval now_time = now()
| eval older = mvfilter(timestamps < now_time)

How can I get a variable in there to compare, since I need to compare the list to _time?
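
The behaviour matches the documented limitation of mvfilter: its predicate can reference only one field, the multivalue field being filtered, so literals and function calls like now() evaluate fine but a second field such as now_time or _time is not visible. One workaround is mvmap, whose per-value expression can reference other fields; a sketch:

| makeresults count=1
| eval timestamps = mvappend("1570000000", "1570000020")
| eval _time = 1570000010
| eval older = mvmap(timestamps, if(tonumber(timestamps) < _time, timestamps, null()))

Inside mvmap, timestamps refers to each value in turn, and null() drops the values you don't want, leaving only entries older than _time.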
Hi Community, I'm exploring ways to ingest data into Splunk Cloud from an Amazon S3 bucket which has multiple directories and multiple files to be ingested into Splunk. I have assessed the Generic S3, SQS-based S3, and Data Manager inputs for AWS available in Splunk, but I am not getting the required outcome. My use case is given below.

There's an S3 bucket named exampledatastore; in it there's a directory named statichexcodedefinition, and under that there are multiple message IDs and dates. The example S3 structure is:

s3://exampledatastore/statichexcodedefinition/{messageId}/functionname/{date}/* - functionnameattribute

The {messageId} and {date} values are dynamic, and I have a start date to begin with, but the messageId varies. Please can you assist me on how to get this data into Splunk? Many thanks!
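
One angle, sketched without knowing your environment: the SQS-based S3 input is event-driven, so the dynamic {messageId} and {date} path segments don't matter; you filter the bucket's event notification on the static prefix only, and every new object under it is announced on the queue for the add-on to collect. The AWS-side wiring might look like this (queue ARN and account are placeholders):

aws s3api put-bucket-notification-configuration \
  --bucket exampledatastore \
  --notification-configuration '{
    "QueueConfigurations": [{
      "QueueArn": "arn:aws:sqs:us-east-1:123456789012:splunk-ingest-queue",
      "Events": ["s3:ObjectCreated:*"],
      "Filter": {"Key": {"FilterRules": [
        {"Name": "prefix", "Value": "statichexcodedefinition/"}
      ]}}
    }]
  }'

The caveat is that notifications only cover objects created after this wiring exists; for the historical backlog from your start date, a one-off Generic S3 ingest (or re-copying the old objects to retrigger notifications) is usually needed alongside it.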
Hello Splunkers, hardcoded time parameters inside a simple search don't work in v9.4.3; the search only takes the input from the time picker presets. Do you also experience a similar issue? With

index=index earliest="-7d@d" latest="-1m@m"

and my preset set to "Last 15 minutes", I get this output:

earliestTime: 07/25/2025 10:40:01.636
latestTime: 07/25/2025 10:52:59.564

Very strange. Nothing is mentioned about this in the release notes.
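
For comparison, inline earliest/latest are documented to override the time range picker, so a minimal repro is useful when reporting this. addinfo exposes the boundaries the search actually ran with:

index=_internal earliest=-7d@d latest=-1m@m
| addinfo
| stats values(info_min_time) as info_min_time, values(info_max_time) as info_max_time

If info_min_time comes back as the picker's 15-minute window rather than -7d@d, that supports a regression; if it shows -7d@d, then whatever is displaying earliestTime/latestTime may be reading a different search's metadata.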
Can anyone please confirm whether the AppDynamics machine agent supports TLS 1.3 or not? We are using Java agent 25.4.0.37061 on the Linux x64 platform. If anyone can suggest an answer or point me towards the relevant documentation? Thanks
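
I can't confirm the agent side, but two hedged checks may narrow it down: the agent runs on a bundled or system JRE, and Java supports TLS 1.3 from Java 11 (and Java 8 from update 261), so identifying which JRE the agent uses is half the answer. The endpoint side can be probed directly (requires OpenSSL 1.1.1 or later; hostname is a placeholder):

openssl s_client -tls1_3 -connect your-controller.example.com:443 -servername your-controller.example.com </dev/null

A successful handshake here plus a TLS-1.3-capable JRE under the agent is good evidence the combination works, but the authoritative statement has to come from the AppDynamics compatibility documentation.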
Hello folks, we are currently upgrading splunkforwarder to 9.4.x (from 8.x). We build a Splunk sidecar image for our k8s application, and I noticed that the same procedure which worked in forwarder 8.x doesn't work anymore in 9.4.x. During docker image startup, the process clearly hangs and waits for interaction:

bash-4.4$ ps -ef
UID PID PPID C STIME TTY TIME CMD
splunkf+ 1 0 0 02:11 ? 00:00:00 /bin/bash /entrypoint.sh
splunkf+ 59 1 99 02:11 ? 00:01:25 /opt/splunkforwarder/bin/splunk edit user admin -password XXXXXXXX -role admin -auth admin:xxxxxx --answer-yes --accept-license --no-prompt
splunkf+ 61 0 0 02:12 pts/0 00:00:00 /bin/bash
splunkf+ 68 61 0 02:12 pts/0 00:00:00 ps -ef

bash-4.4$ rpm -qa | grep splunkforwarder
splunkforwarder-9.4.3-237ebbd22314.x86_64

There is a workaround of adding "tty: true" to the k8s deployment template, but that would take a lot of effort in our environment. Any idea if a newer version has a fix, or whether any splunk command parameter can be used to bypass the tty requirement? Thanks.
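
Not a confirmed fix for the tty behaviour, but a documented way to sidestep the interactive command entirely: seed the admin credentials with user-seed.conf before first start, so the entrypoint never needs to run splunk edit user at all. A sketch (password is a placeholder):

# $SPLUNK_HOME/etc/system/local/user-seed.conf
# Read once at first startup to create the admin account non-interactively
[user_info]
USERNAME = admin
PASSWORD = <your-password-here>

Bake this file into the image (or mount it as a secret) and drop the splunk edit user call from entrypoint.sh; the file is consumed on first start.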
I am trying to find the time taken by our processes. I wrote a basic query that fetches a start time, an end time, and the difference for a particular interaction, using max and min to find the start and end times. But I am not sure how to find multiple process start and end times by looking at the messages.

index=application_na sourcetype=my_logs:hec appl="*" message="***" interactionid=12345
| table interactionid, seq, _time, host, severity, message, msgsource
| sort _time
| stats min(_time) as StartTime, max(_time) as EndTime by interactionid
| eval Difference=EndTime-StartTime
| fieldformat StartTime=strftime(StartTime, "%Y-%m-%d %H:%M:%S.%3N")
| fieldformat EndTime=strftime(EndTime, "%Y-%m-%d %H:%M:%S.%3N")
| fieldformat Difference=tostring(Difference,"duration")
| table interactionid, StartTime, EndTime, Difference

I have messages that look like this (all for interactionid 12345):

2025-06-26 07:55:56.317  TimeMarker: WebService: Received request. (DoPayment - ID:1721 Amount:16 Acc:1234)
2025-06-26 07:55:56.717  OtherApp: -> Sending request with timeout value: 15
2025-06-26 07:55:57.512  TimeMarker: OtherApp: Received result from OtherApp (SALE - ID:1721 Amount:16.00 Acc:1234)
2025-06-26 07:55:58.017  TimeMarker: WebService: Sending result @20234ms. (DoPayment - ID:1721 Amount:16 Acc:1234)

So, I want to get the time taken by OtherApp from when it received a request to when it responded back to my app, and then the total time taken by my service DoPayment. Is this achievable? The output I am looking for is one row per interactionid with the columns: DoPayment Start, OtherApp Start, OtherApp End, DoPayment End.
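
This looks achievable by tagging each event with a marker derived from the message text and pivoting with stats. A sketch; the match() patterns are assumptions to tune against your real messages:

index=application_na sourcetype=my_logs:hec appl="*" interactionid=12345
| eval marker=case(
    match(message, "WebService: Received request"),  "DoPaymentStart",
    match(message, "Sending request with timeout"),  "OtherAppStart",
    match(message, "Received result from OtherApp"), "OtherAppEnd",
    match(message, "WebService: Sending result"),    "DoPaymentEnd")
| where isnotnull(marker)
| stats min(eval(if(marker="DoPaymentStart", _time, null()))) as DoPaymentStart,
        min(eval(if(marker="OtherAppStart", _time, null()))) as OtherAppStart,
        max(eval(if(marker="OtherAppEnd", _time, null()))) as OtherAppEnd,
        max(eval(if(marker="DoPaymentEnd", _time, null()))) as DoPaymentEnd
        by interactionid
| eval OtherAppDuration=OtherAppEnd-OtherAppStart, DoPaymentDuration=DoPaymentEnd-DoPaymentStart
| fieldformat OtherAppDuration=tostring(OtherAppDuration, "duration")
| fieldformat DoPaymentDuration=tostring(DoPaymentDuration, "duration")

Add fieldformat strftime calls for the four timestamp columns, as in your original query, if you want them human-readable.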
I have a dotnet application logging template-formatted log messages with the Serilog library. Since everything is in JSON format, the logs are great for filtering results when I know the fields to use, but I am having a hard time just reading logs when I don't know the available fields. For example, the application might log things like:

Log.Information("Just got a request {request} in endpoint {endpoint} with {httpMethod}", request, endpoint, httpMethod);

and in Splunk I will see something like:

{
  "msg": {
    "@mt": "Just got a request {request} in endpoint {endpoint} with {httpMethod}",
    "@sp": "11111",
    "request": "some_data",
    "endpoint": "some_url",
    "httpMethod": "POST"
  }
}

This is awesome for building queries with msg.request or msg.endpoint, but since the application logs pretty much everything using these Serilog message templates, when I am just doing investigations I have a hard time making readable results, because everything is hidden behind a placeholder. I am trying to achieve something like:

<some_guid> index=some_index | table _time msg.@mt

but of course msg.@mt just gives me the log line with the placeholders. How can I bring back the full log line in the table with the actual values?
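
One search-time approach, sketched rather than battle-tested: start from the template and substitute each msg.* field into its placeholder with foreach and replace (<<MATCHSEG1>> is the part of the field name matched by the wildcard):

<some_guid> index=some_index
| eval rendered='msg.@mt'
| foreach msg.* [ eval rendered=replace(rendered, "\{<<MATCHSEG1>>\}", tostring('<<FIELD>>')) ]
| table _time rendered

Alternatively, on the logging side, Serilog's RenderedCompactJsonFormatter emits an @m field containing the already-rendered message alongside @mt, which would let you simply table msg.@m.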
I've developed TAs previously, and when using python2 everything worked just fine. But now, using python3 with Splunk 9.x, it seems nothing works. I am trying to develop a TA that makes some REST calls out to a 3rd-party service and then uses those values in some local confs. It's been a nightmare to make this work.

I started with a modular input design, but contrary to the docs, my Python code would never receive a Splunk token on STDIN. I literally had this working perfectly in a python2 TA. This time? It doesn't matter how or when I attempt to read STDIN, the python3 code *NEVER RECEIVES ANYTHING*. Finally I just gave up on this.

Next try was with a scripted input; at least this **bleep** thing does receive a token on STDIN. Great, that token can be used with the SDK, right? RIGHT??? Well, no, because 1) splunklib is not installed/included in the Splunk Python env, 2) attempting to use the system Python causes the whole **bleep** thing to crash, and 3) including splunklib inside the TA and attempting to import it by manipulating Python paths is also horribly broken.

If we munge the Python system paths thusly, we can in theory import our included libs (not concerned if this is idiomatic Python; it works, m'kay?):

import os, sys
modules = sys.argv[0].split('/')[:-2]
modules.append('lib')
sys.path.append('/'.join(modules))

This inserts our local lib path into Python's lib search dirs, and it works to find splunklib. But then splunklib fails to load completely, since:

ImportError: libssl.so.1.0.0: cannot open shared object file: No such file or directory

This is true even if LD_LIBRARY_PATH points to a dir containing libssl.so.1.0.0. I suspect this is due to the fact that Splunk is also doing an LD_PRELOAD="libdlwrapper.so". I don't know what this library is or what it's doing, but I also suspect it's breaking my env and preventing anything from running.

But it doesn't actually matter. If I remove my "import splunklib" and just leave the REST client to attempt its HTTPS request, that too is apparently horribly broken:

...(Caused by SSLError("Can't connect to HTTPS URL because the SSL module is not available"))

What in the everloving fsck is going on here??!? Best I can tell, these two things are now true: 1) splunklib cannot be used from a TA, and 2) TAs cannot make HTTPS requests.

This is happening in a clean-room environment with a fresh Splunk install on a host that is not running SELinux or AppArmor or any other MAC system that might interfere. This is very much a problem with Splunk and splunklib. So, how exactly can splunklib be used in TAs? And how exactly can TAs execute HTTPS requests?
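
For what it's worth, here is the pattern that has worked for me, under two assumptions: the layout is <app>/bin/script.py with splunklib vendored at <app>/lib/splunklib, and the script runs under Splunk's own interpreter (scripted inputs launched by splunkd do; for ad-hoc runs use $SPLUNK_HOME/bin/splunk cmd python3, which also sets up Splunk's bundled OpenSSL so the ssl module loads):

import os
import sys

# Resolve <app>/lib relative to this file rather than sys.argv[0],
# so it works however Splunk launches the script.
APP_ROOT = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
sys.path.insert(0, os.path.join(APP_ROOT, 'lib'))

import splunklib.client as client  # vendored at <app>/lib/splunklib

# With passAuth set on the scripted input stanza, the session key arrives on stdin
session_key = sys.stdin.readline().strip()
service = client.connect(token=session_key, owner='nobody', app='my_ta')

The libssl.so.1.0.0 error in particular smells like the script being run outside splunk cmd, where Splunk's library path and LD_PRELOAD environment isn't assembled for you.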
A week ago I created a summary index named waf_opco_yes_summary and it is working fine. Now they have asked me to change the index name to opco_yes_summary; the data already in the existing summary index should move to this new index, and the old index shouldn't be visible anywhere, either in dashboards or in searches. It should be deleted and all its data moved to the new index. What can I do here?

One more problem: we created a single summary index for all applications and are afraid of giving access to it, because anyone who can see it can see other apps' summary data, which is a security issue. We have created a dashboard on the summary index and disabled "open in search". At some point we need to give teams access to the summary index, and if they search index=*, both their restricted index and this summary index will show up, which is risky. Is there any way to restrict users from running index=*? NOTE: we already use RBAC to restrict users to their specific indexes, but this summary index shows summarised data for all of them. Any way to restrict this? We can't create a summary index for each application. In the dashboard we restrict them by requiring a field to be selected before the summary-index panel renders with filtering applied. How do people handle this type of situation?
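
On the rename: indexes can't be renamed in place, so one hedged sequence is to create opco_yes_summary, copy the old events across with collect, repoint the dashboards and the summary-populating search, and then clean out the old index. A sketch:

Backfill, run over the full time range of the old index:

index=waf_opco_yes_summary | collect index=opco_yes_summary

Then, after verifying the counts match, remove the old data from the CLI (this requires stopping splunkd on the indexer):

$SPLUNK_HOME/bin/splunk stop
$SPLUNK_HOME/bin/splunk clean eventdata -index waf_opco_yes_summary

Finally remove the waf_opco_yes_summary stanza from indexes.conf so the old index disappears from listings. Events copied with collect keep their original _time, so spot-check a dashboard panel over a historical range after the move.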
Hello! I have the following query with the provided fields to track consumption data for customers:

action=load OR action=Download customer!="" publicationId="*" topic="*"
| eval Month=strftime(_time, "%b-%y")
| stats count by customer, Month, product, publicationId, topic
| streamstats count as product_rank by customer, Month
| where product_rank <= 5
| table customer, product, publicationId, topic, count, Month

However, I do not believe it is achieving what I aim for. The data is structured as follows: products > publication IDs within those products > topics within those specific publication IDs. What I am trying to accomplish is to find the top 5 products per customer per month, then for each of those 5 products the top 5 publicationIds within them, and then for each publicationId the top 5 topics within it.
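
A sketch of one way to get the nested top-5s, assuming "top" means highest event count at each level: compute per-level totals with eventstats, sort them to the front, then rank each level with streamstats dc():

action=load OR action=Download customer!="" publicationId="*" topic="*"
| eval Month=strftime(_time, "%b-%y")
| stats count by customer, Month, product, publicationId, topic
| eventstats sum(count) as product_total by customer, Month, product
| eventstats sum(count) as pub_total by customer, Month, product, publicationId
| sort 0 customer Month -product_total -pub_total -count
| streamstats dc(product) as product_rank by customer, Month
| where product_rank <= 5
| streamstats dc(publicationId) as pub_rank by customer, Month, product
| where pub_rank <= 5
| streamstats dc(topic) as topic_rank by customer, Month, product, publicationId
| where topic_rank <= 5
| table customer, Month, product, publicationId, topic, count

The dc() trick gives every row belonging to the Nth-largest product the rank N, so whole products (and, within them, whole publicationIds) are kept or cut together. That is where the original streamstats count approach went wrong: it ranked individual rows, not products.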
Hello Community, I am setting up an OpenTelemetry application within a Docker on-prem environment, as per the steps outlined in this documentation: https://lantern.splunk.com/Observability/Product_Tips/Observability_Cloud/Setting_up_the_OpenTelemetry_Demo_in_Docker However, the OTel Collector throws the error below while connecting to the ingestion URL:

tag=otel-collector 2025-07-24T12:49:17.841Z info internal/retry_sender.go:133 Exporting failed. Will retry the request after interval. {"resource": {"service.instance.id": "45be9d90-2946-4ae4-8cd9-f0edff3bc822", "service.name": "otelcol", "service.version": "v0.129.0"}, "otelcol.component.id": "otlphttp", "otelcol.component.kind": "exporter", "otelcol.signal": "traces", "error": "failed to make an HTTP request: Post \"https://ingest.us1.signalfx.com/v2/trace/otlp\": net/http: TLS handshake timeout", "interval": "27.272487507s"}

I tried with curl and it hangs while negotiating TLS:

[root@kyn-app-01 opentelemetry-demo]# curl -v https://api.us1.signalfx.com/v2/apm/correlate/host.name/test/service \
  -X PUT \
  -H "X-SF-TOKEN: accesstoken" \
  -H "Content-Type: application/json" \
  -d '{"value": "test"}'
* Trying 54.203.64.116:443...
* Connected to api.us1.signalfx.com (54.203.64.116) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* CAfile: /etc/pki/tls/certs/ca-bundle.crt
* TLSv1.0 (OUT), TLS header, Certificate Status (22):
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* OpenSSL SSL_connect: Connection reset by peer in connection to api.us1.signalfx.com:443
* Closing connection 0
curl: (35) OpenSSL SSL_connect: Connection reset by peer in connection to api.us1.signalfx.com:443

So, kindly suggest what went wrong and how to fix it. Note: the firewall is disabled and there is no proxy. Regards, Eshwar
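
A reset right after the Client Hello usually means something on the network path is dropping or inspecting TLS rather than a collector misconfiguration. Two hedged checks from the same host to narrow it down:

# Does a bare TLS handshake complete at all?
openssl s_client -connect ingest.us1.signalfx.com:443 -servername ingest.us1.signalfx.com </dev/null

# Rule out path-MTU/fragmentation problems (1472 = 1500 minus IP+ICMP headers)
ping -M do -s 1472 ingest.us1.signalfx.com

If openssl is also reset, the problem is upstream of Docker and the collector entirely; transparent TLS inspection or an IPS on the egress path is a common culprit even when no explicit proxy is configured.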
I am attempting to run a query that will find the status of 3 services and list which ones are failed and which ones are running. I only want to display the hosts that failed, along with the statuses of those services. The end goal is to create an alert.

The following query produces no results:

index="server" host="*" source="Unix:Service"
| eval IPTABLES = if(UNIT=iptables.service AND (ACTIVE="failed" OR ACTIVE="inactive"), "failed", "OK")
| eval AUDITD = if(UNIT=auditd.service AND (ACTIVE="failed" OR ACTIVE="inactive"), "failed", "OK")
| eval CHRONYD = if(UNIT=chronyd.service AND (ACTIVE="failed" OR ACTIVE="inactive"), "failed", "OK")
| dedup host
| table host IPTABLES AUDITD CHRONYD

This query works:

index="server" host="*" source="Unix:Service" UNIT=iptables.service
| eval IPTABLES = if(ACTIVE="failed" OR ACTIVE="inactive", "failed", "OK")
| dedup host
| table host IPTABLES

How can I get the query to produce the following results?

host      IPTABLES   AUDITD   CHRONYD
server1   failed     OK       OK
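
A sketch of one way to do this. Note that inside eval, iptables.service without quotes is not treated as a string literal (eval string comparisons need double quotes), which is one reason the first query returns nothing; the dedup host also keeps only one arbitrary event per host, losing the other two services. Taking the latest state per host and service and pivoting instead:

index="server" source="Unix:Service" (UNIT="iptables.service" OR UNIT="auditd.service" OR UNIT="chronyd.service")
| stats latest(ACTIVE) as state by host, UNIT
| eval status=if(state="failed" OR state="inactive", "failed", "OK")
| eval service=upper(replace(UNIT, "\.service$", ""))
| xyseries host service status
| where IPTABLES="failed" OR AUDITD="failed" OR CHRONYD="failed"

xyseries pivots to one row per host with a column per service, and the final where keeps only hosts with at least one failed service, which is the shape you want for an alert.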