All Posts

Yes, this works:

index="server" host="*" source="Unix:Service" UNIT=iptables.service
| eval IPTABLES = if(ACTIVE="failed" OR ACTIVE="inactive", "failed", "OK")
| dedup host
| table host IPTABLES
It is from the Splunk Add-on for Unix and Linux (TA-nix).
Hello! I have the following query with the provided fields to track consumption data for customers:

action=load OR action=Download customer!="" publicationId="*" topic="*"
| eval Month=strftime(_time, "%b-%y")
| stats count by customer, Month, product, publicationId, topic
| streamstats count as product_rank by customer, Month
| where product_rank <= 5
| table customer, product, publicationId, topic, count, Month

However, I do not believe it is achieving what I am aiming for. The data is structured as follows: products > publication IDs within those products > topics within those specific publication IDs. What I am trying to accomplish is to find the top 5 products per customer per month, then for each of those 5 products the top 5 publication IDs within them, and then for each publication ID the top 5 topics within it.
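A staged approach may get closer to the nested top-5 goal: aggregate once at the finest level, then rank each level separately. This is only a sketch built on the field names from the query above (customer, Month, product, publicationId, topic), and it assumes each level should be ranked by its summed count:

```
action=load OR action=Download customer!="" publicationId="*" topic="*"
| eval Month=strftime(_time, "%b-%y")
| stats count by customer, Month, product, publicationId, topic
| eventstats sum(count) as prod_total by customer, Month, product
| eventstats sum(count) as pub_total by customer, Month, product, publicationId
| sort 0 customer, Month, - prod_total, - pub_total, - count
| streamstats dc(product) as prod_rank by customer, Month
| where prod_rank <= 5
| streamstats dc(publicationId) as pub_rank by customer, Month, product
| where pub_rank <= 5
| streamstats count as topic_rank by customer, Month, product, publicationId
| where topic_rank <= 5
| table customer, Month, product, publicationId, topic, count
```

The eventstats totals let the sort place the largest products (and, within each product, the largest publication IDs) first, so the streamstats distinct counts act as rank numbers at each level of the hierarchy.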
Hello Community, I am setting up an OpenTelemetry application within a Docker on-prem environment, following the steps outlined in this documentation: https://lantern.splunk.com/Observability/Product_Tips/Observability_Cloud/Setting_up_the_OpenTelemetry_Demo_in_Docker However, the OTel Collector throws the errors below while connecting to the ingestion URL:

tag=otel-collector 2025-07-24T12:49:17.841Z info internal/retry_sender.go:133 Exporting failed. Will retry the request after interval. {"resource": {"service.instance.id": "45be9d90-2946-4ae4-8cd9-f0edff3bc822", "service.name": "otelcol", "service.version": "v0.129.0"}, "otelcol.component.id": "otlphttp", "otelcol.component.kind": "exporter", "otelcol.signal": "traces", "error": "failed to make an HTTP request: Post \"https://ingest.us1.signalfx.com/v2/trace/otlp\": net/http: TLS handshake timeout", "interval": "27.272487507s"}

I also tried with curl, and it hangs during the TLS handshake:

[root@kyn-app-01 opentelemetry-demo]# curl -v https://api.us1.signalfx.com/v2/apm/correlate/host.name/test/service \
  -X PUT \
  -H "X-SF-TOKEN: accesstoken" \
  -H "Content-Type: application/json" \
  -d '{"value": "test"}'
* Trying 54.203.64.116:443...
* Connected to api.us1.signalfx.com (54.203.64.116) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* CAfile: /etc/pki/tls/certs/ca-bundle.crt
* TLSv1.0 (OUT), TLS header, Certificate Status (22):
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* OpenSSL SSL_connect: Connection reset by peer in connection to api.us1.signalfx.com:443
* Closing connection 0
curl: (35) OpenSSL SSL_connect: Connection reset by peer in connection to api.us1.signalfx.com:443

So, kindly suggest what went wrong and how to fix it. Note: the firewall is disabled and there is no proxy. Regards, Eshwar
Ensures Splunk Cloud Platform uptime and security

Splunk continuously monitors the status of your Splunk Cloud Platform environment to help ensure uptime and availability. See the Monitoring section. We look at various health and performance variables such as the ability to log in, ingest data, access Splunk Web, and perform searches. Splunk maintains the following:

- A rolling 30-day history of health and utilization data to help ensure uptime and assist troubleshooting of your Splunk Cloud Platform.
- A rolling 7-day daily backup of your ingested data and configuration files to ensure data durability. Note that the backups are accessible only by Splunk, and it is at Splunk's discretion to leverage them as the situation dictates.
- The encryption keys, when you purchase an encryption-at-rest subscription. See the Data retention section in Storage.

See also the information in the Users and Authentication section regarding the Splunk Admin and system user roles, and the certification of Splunk Cloud Platform by independent third-party auditors to meet SOC 2 Type II and ISO 27001 security standards.

Link to full doc: https://docs.splunk.com/Documentation/SplunkCloud/latest/Service/SplunkCloudservice
It depends on your actual data. Please share some sample representative events.
Hi @cdevoe57

Does this bit on its own work?

index="server" host="*" source="Unix:Service" UNIT=iptables.service

If not, how about:

index="server" host="*" source="Unix:Service" UNIT="iptables.service"

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing.
I am attempting to run a query that will find the status of 3 services and list which ones are failed and which ones are running. I only want to display the hosts that failed and the statuses of those services. The end goal is to create an alert. The following query produces no results:

index="server" host="*" source="Unix:Service"
| eval IPTABLES = if(UNIT=iptables.service AND (ACTIVE="failed" OR ACTIVE="inactive"), "failed", "OK")
| eval AUDITD = if(UNIT=auditd.service AND (ACTIVE="failed" OR ACTIVE="inactive"), "failed", "OK")
| eval CHRONYD = if(UNIT=chronyd.service AND (ACTIVE="failed" OR ACTIVE="inactive"), "failed", "OK")
| dedup host
| table host IPTABLES AUDITD CHRONYD

This query works:

index="server" host="*" source="Unix:Service" UNIT=iptables.service
| eval IPTABLES = if(ACTIVE="failed" OR ACTIVE="inactive", "failed", "OK")
| dedup host
| table host IPTABLES

How can I get the query to produce the following results?

host      IPTABLES  AUDITD   CHRONYD
server1   failed    OK       OK
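One pattern that may produce the table described above is to compute a per-event status and then pivot with stats and xyseries instead of dedup. This is only a sketch, assuming the UNIT and ACTIVE fields behave as in the working query; note also that inside eval, string literals such as "iptables.service" must be quoted (unquoted, they are read as field names), which may be why the three-eval query returns nothing:

```
index="server" host="*" source="Unix:Service" UNIT IN ("iptables.service", "auditd.service", "chronyd.service")
| eval status=if(ACTIVE="failed" OR ACTIVE="inactive", "failed", "OK")
| eval service=upper(mvindex(split(UNIT, "."), 0))
| stats latest(status) as status by host, service
| xyseries host service status
| search IPTABLES="failed" OR AUDITD="failed" OR CHRONYD="failed"
```

The final search keeps only hosts where at least one of the three services failed, which fits the alerting goal.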
Hi @yuvaraj_m91

Try the following:

| table _raw
| eval parent_keys=json_array_to_mv(json_keys(_raw))
| mvexpand parent_keys
| eval _raw=json_extract(_raw, parent_keys)
| eval child_keys=json_keys(item)
| spath
| fields - _*
| transpose 0 header_field=parent_keys

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing.
Hi @livehybrid , Yes, you've hit on my exact point. I'm trying to determine the best way to contact support – specifically, if their assistance is limited to paying customers or if there's an avenue for the general public to inquire. This is precisely why I brought my question to the Splunk forum. If you have any information on how to reach the Splunk or TruSTAR technical teams, I would greatly appreciate your guidance.
Hi @TestUser , Could you please frame your question with more detail and clarity, so that other Splunkers can help answer it?
Hi @LoMueller  Based on the existing code for this TA (https://github.com/thatfrankwayne/TA_oui-lookup), it's using the Python requests library, so it should be possible to implement a proxy for this with some code changes. There are no contact details for the author ( @frankwayne ) in the app, so I would recommend raising an issue on GitHub with the feature request; hopefully it can then be worked into a future version.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing.
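For reference, the requests library that the TA uses does support per-session proxies, so a future version could wire a configured proxy URL into its download call. A minimal sketch, assuming a hypothetical PROXY_URL setting (both the name and the proxy address below are made up for illustration):

```python
import requests

# Hypothetical configuration value; a real TA would read this from its setup page
PROXY_URL = "http://proxy.example.com:3128"

session = requests.Session()
if PROXY_URL:
    # requests routes both schemes through the proxy when both keys are set
    session.proxies.update({"http": PROXY_URL, "https": PROXY_URL})

# The OUI download itself would then be something like:
# response = session.get(oui_url, timeout=60)
```

Using a Session keeps the proxy configuration in one place, so every request the TA makes inherits it.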
Is it possible to add support for using a proxy URL to download the vendor OUI file? Thanks
Assuming that this is an accurate representation of your data, you could try something like this:

| eval array=json_array_to_mv(json_keys(_raw),false())
| foreach array mode=multivalue
    [| eval set=mvappend(set,json_extract(_raw,<<ITEM>>))]
| eval row=mvrange(0,mvcount(array))
| mvexpand row
| eval key=mvindex(array,row)
| eval fields=mvindex(set,row)
| table key fields
| fromjson fields
| fields - fields
| transpose 0 header_field=key column_name=samples
There is no such functionality in SimpleXML. And it's understandable to some extent - dashboards are meant for interactive work. You probably could create something like that with custom JS but it won't be easy. 
3, 4 and partially 7 - not really.

3. Indexed fields - unless they contain additional metadata not present in the original events - are usually best avoided entirely. There are other ways of achieving the same result.

4. You can't use tstats instead of a stats-based search just because the field is a number. It requires specific types of data. It is true, though, that if you can use tstats instead of normal stats, it's way faster.

7. Wildcards at the beginning of a search term should not merely be "avoided"; they should not be used at all unless you have a very good reason for using them, know and understand the performance impact, and can significantly limit the searched events using other means. The remark about regexes is generally valid, but this is most often not the main reason for performance problems.
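To illustrate the point about tstats: it reads only indexed fields and metadata (host, source, sourcetype, _time) or fields in an accelerated data model, so a search-time numeric field cannot simply be swapped in. A sketch of the kind of search where tstats does apply (the index name is assumed for illustration):

```
| tstats count where index=web by host, sourcetype, _time span=1h
```

By contrast, something like stats avg(response_time) over a field extracted at search time has no tstats equivalent unless that field is indexed or part of an accelerated data model.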
Hi @nagendra1111 , Splunk does not currently support sending dashboard panel searches directly to background jobs from the UI.
Hi thahir, the btool output doesn't find any lang setting. I think Splunk tries to honor the language preference of the browser. When we change to the English language in the browser, en-GB is added correctly into the link and the link works just fine. In our browser (e.g. MS Edge) we have multiple language packs installed. There's one named "German (Germany)" and another one called "German". I tested different languages. Our test environment can handle all of them. Our production environment messes up when we use the language pack "German" without the country in brackets.