One more thing I'd check would be to call btool with a user and app context. It seems a bit illogical for the command to behave differently depending on that context, in addition to requiring a capability for a user to run it at all, but it's worth checking. If it still shows that the setting should effectively be false, it might be worth creating a support case.
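For example, something along these lines (the conf file, app and user names below are just placeholders; substitute the ones you are actually troubleshooting):

# Show the effective configuration, with source files, for a given app and user context
$SPLUNK_HOME/bin/splunk btool limits list --debug --app=search --user=admin

The --debug flag shows which file each effective setting comes from, which usually makes it obvious where an unexpected value is being set.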
Hey, I never heard back from anyone since posting the output of btool.  Any suggestions why this setting is not working as documented in the Splunk documentation?
A week ago I created a summary index named waf_opco_yes_summary and it is working fine. Now I have been asked to rename it to opco_yes_summary: all the data in the existing summary index should be moved to the new index, and the old index should then be deleted so it isn't visible anywhere, either in dashboards or in searches. What can I do here?

One more problem: we created a single summary index for all applications, and I'm worried about giving teams access to it, because anyone with access could see other applications' summary data, which would be a security issue. We have built a dashboard on the summary index and disabled "Open in Search". At some point we will need to give them access to the summary index, and if they search index=*, both their restricted indexes and this shared summary index will show up, which is risky. Is there any way to restrict users from running index=*? NOTE: we already use RBAC to restrict users to their specific indexes, but this summary index contains summarised data for all of them. Is there any way to restrict this? We can't create a separate summary index for each application. In the dashboard we currently restrict them by requiring a field to be selected before the panel that uses the summary index is shown, with filtering applied. How do people handle this type of situation?
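The only idea I have so far for moving the data is to re-collect the old summary events into the new index, something like this (just a sketch; it assumes opco_yes_summary has already been created, and I'm not sure it preserves all the original fields correctly):

index=waf_opco_yes_summary
| collect index=opco_yes_summary

Would that be a reasonable approach, or is there a better way?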
Hi @Eshwar

I know you mentioned the firewall is disabled, but I wanted to check - is there any corporate firewall/proxy in place (these are often transparent) between your host and the internet? I've seen this error countless times with firewalls blocking the traffic, even when no firewall is running on the host itself.

Another possibility is ciphers - what OS are you running? Is it relatively modern and up to date?

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
Try something like this (essentially, you need to calculate each "top 5" and eliminate the stats events for each level, before calculating the next "top 5" for the next level).

action=load OR action=Download customer!="" publicationId="*" topic="*"
| eval Month=strftime(_time, "%b-%y")
| stats count by customer, Month, product, publicationId, topic
| eventstats sum(count) as product_count by customer Month product
| sort 0 customer Month -product_count
| streamstats dc(product) as product_rank by customer, Month
| where product_rank <= 5
| eventstats sum(count) as publicationId_count by customer Month product publicationId
| sort 0 customer Month product -publicationId_count
| streamstats dc(publicationId) as publicationId_rank by customer Month product
| where publicationId_rank <= 5
| eventstats sum(count) as topic_count by customer Month product publicationId topic
| sort 0 customer Month product publicationId -topic_count
| streamstats dc(topic) as topic_rank by customer Month product publicationId
| where topic_rank <= 5
| table customer, product, publicationId, topic, count, Month
Excellent point. Sadly, I knew that... Must have been a brain cramp.
1. The host=* condition is completely unnecessary. It doesn't narrow your search, and every event must have the host field anyway. It's a purely aesthetic remark, but bloating the search makes it less readable.

2. The dedup command works differently than you might expect. After "dedup host" you will be left with just one event per host, containing data for the first service returned by the initial search. All subsequent services for that host will be discarded. I don't think that's what you want.
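If the goal is one row per host with a column per service, a stats-based approach is usually a better fit than dedup. Something along these lines might work (just a sketch, assuming UNIT and ACTIVE hold the values shown in your searches):

index="server" source="Unix:Service" UNIT IN ("iptables.service", "auditd.service", "chronyd.service")
| eval status=if(ACTIVE="failed" OR ACTIVE="inactive", "failed", "OK")
| eval service=upper(replace(UNIT, "\.service$", ""))
| stats latest(status) as status by host service
| xyseries host service status

That should give you one row per host with IPTABLES, AUDITD and CHRONYD columns, which you can then filter for failed services before alerting.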
Actually, the additional step is just to do a rolling restart after the config changes, and it will be rebalanced.
Hello, @LoMueller, and thanks, @livehybrid. My contact information is in the TA's README and default/app.conf. I am leaving for a trip this week, but I will be back on August 18. I can add a configuration file to set up a proxy for Python. Please send me a reminder after the 18th, @LoMueller, and I will try to get that done for you.
Yes, this works:

index="server" host="*" source="Unix:Service" UNIT=iptables.service
| eval IPTABLES = if(ACTIVE="failed" OR ACTIVE="inactive", "failed", "OK")
| dedup host
| table host IPTABLES
It is from the TA-nix add-on.
Hello! I have the following query with the provided fields to track consumption data for customers.

action=load OR action=Download customer!="" publicationId="*" topic="*"
| eval Month=strftime(_time, "%b-%y")
| stats count by customer, Month, product, publicationId, topic
| streamstats count as product_rank by customer, Month
| where product_rank <= 5
| table customer, product, publicationId, topic, count, Month

However, I do not believe it is achieving what I am aiming for. The data is structured as follows: products > publication IDs within those products > topics within those specific publication IDs. What I am trying to accomplish is to find the top 5 products per customer per month, then for each of those 5 products the top 5 publicationIds within them, and then for each publicationId the top 5 topics within it.
Hello Community,

I am setting up an OpenTelemetry demo application in an on-prem Docker environment, following the steps outlined in this documentation: https://lantern.splunk.com/Observability/Product_Tips/Observability_Cloud/Setting_up_the_OpenTelemetry_Demo_in_Docker

However, the OTel Collector throws the error below while connecting to the ingestion URL:

tag=otel-collector 2025-07-24T12:49:17.841Z info internal/retry_sender.go:133 Exporting failed. Will retry the request after interval. {"resource": {"service.instance.id": "45be9d90-2946-4ae4-8cd9-f0edff3bc822", "service.name": "otelcol", "service.version": "v0.129.0"}, "otelcol.component.id": "otlphttp", "otelcol.component.kind": "exporter", "otelcol.signal": "traces", "error": "failed to make an HTTP request: Post \"https://ingest.us1.signalfx.com/v2/trace/otlp\": net/http: TLS handshake timeout", "interval": "27.272487507s"}

I also tried with curl, and it hangs during the TLS handshake:

[root@kyn-app-01 opentelemetry-demo]# curl -v https://api.us1.signalfx.com/v2/apm/correlate/host.name/test/service \
  -X PUT \
  -H "X-SF-TOKEN: accesstoken" \
  -H "Content-Type: application/json" \
  -d '{"value": "test"}'
* Trying 54.203.64.116:443...
* Connected to api.us1.signalfx.com (54.203.64.116) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* CAfile: /etc/pki/tls/certs/ca-bundle.crt
* TLSv1.0 (OUT), TLS header, Certificate Status (22):
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* OpenSSL SSL_connect: Connection reset by peer in connection to api.us1.signalfx.com:443
* Closing connection 0
curl: (35) OpenSSL SSL_connect: Connection reset by peer in connection to api.us1.signalfx.com:443

So, kindly suggest what went wrong and how to fix it.

Note: The firewall is disabled and there is no proxy.

Regards,
Eshwar
Ensures Splunk Cloud Platform uptime and security

Splunk continuously monitors the status of your Splunk Cloud Platform environment to help ensure uptime and availability. See the Monitoring section. We look at various health and performance variables such as the ability to log in, ingest data, access Splunk Web and perform searches.

Splunk maintains the following:
- A rolling 30-day history of health and utilization data to help ensure uptime and assist troubleshooting of your Splunk Cloud Platform.
- A rolling 7-day daily backup of your ingested data and configuration files to ensure data durability. Note that the backups are accessible only by Splunk and at their discretion to leverage as the situation dictates.
- The encryption keys, when you purchase an encryption-at-rest subscription.

See the Data retention section in Storage. See also the information in the Users and Authentication section regarding the Splunk Admin and system user roles, and the certification of Splunk Cloud Platform by independent third-party auditors to meet SOC 2 Type II and ISO 27001 security standards.

LINK TO FULL DOC: https://docs.splunk.com/Documentation/SplunkCloud/latest/Service/SplunkCloudservice
It depends on your actual data. Please share some sample representative events.
Hi @cdevoe57

Does this bit on its own work?

index="server" host="*" source="Unix:Service" UNIT=iptables.service

If not, how about

index="server" host="*" source="Unix:Service" UNIT="iptables.service"

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
I am attempting to run a query that will find the status of 3 services and list which ones are failed and which ones are running. I only want to display the hosts with a failed service and the statuses of those services. The end goal is to create an alert.

The following query produces no results:

index="server" host="*" source="Unix:Service"
| eval IPTABLES = if(UNIT=iptables.service AND (ACTIVE="failed" OR ACTIVE="inactive"), "failed", "OK")
| eval AUDITD = if(UNIT=auditd.service AND (ACTIVE="failed" OR ACTIVE="inactive"), "failed", "OK")
| eval CHRONYD = if(UNIT=chronyd.service AND (ACTIVE="failed" OR ACTIVE="inactive"), "failed", "OK")
| dedup host
| table host IPTABLES AUDITD CHRONYD

This query works:

index="server" host="*" source="Unix:Service" UNIT=iptables.service
| eval IPTABLES = if(ACTIVE="failed" OR ACTIVE="inactive", "failed", "OK")
| dedup host
| table host IPTABLES

How can I get the query to produce the following results?

host       IPTABLES   AUDITD   CHRONYD
server1    failed     OK       OK
Hi @yuvaraj_m91

Try the following:

| table _raw
| eval parent_keys=json_array_to_mv(json_keys(_raw))
| mvexpand parent_keys
| eval _raw=json_extract(_raw, parent_keys)
| eval child_keys=json_keys(item)
| spath
| fields - _*
| transpose 0 header_field=parent_keys

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing