All Posts

Hello! I have the following query with the provided fields to track consumption data for customers:

action=load OR action=Download customer!="" publicationId="*" topic="*"
| eval Month=strftime(_time, "%b-%y")
| stats count by customer, Month, product, publicationId, topic
| streamstats count as product_rank by customer, Month
| where product_rank <= 5
| table customer, product, publicationId, topic, count, Month

However, I do not believe it is achieving what I am aiming for. The data is structured as follows: Products > Publication IDs within those products > Topics within those specific publication IDs. What I am trying to accomplish is to find the top 5 products per customer per month, then for each of those 5 products the top 5 publicationIds within them, and then for each publicationId the top 5 topics within it.
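For reference, one pattern often used for nested top-Ns like this pre-computes totals per level with eventstats, sorts so each level streams through in descending order, and then ranks with streamstats dc(). A rough, untested sketch, assuming count is the ranking measure at every level:

action=load OR action=Download customer!="" publicationId="*" topic="*"
| eval Month=strftime(_time, "%b-%y")
| stats count by customer, Month, product, publicationId, topic
| eventstats sum(count) as product_total by customer, Month, product
| eventstats sum(count) as pub_total by customer, Month, product, publicationId
| sort 0 customer, Month, -product_total, -pub_total, -count
| streamstats dc(product) as product_rank by customer, Month
| where product_rank <= 5
| streamstats dc(publicationId) as pub_rank by customer, Month, product
| where pub_rank <= 5
| streamstats count as topic_rank by customer, Month, product, publicationId
| where topic_rank <= 5
| table customer, Month, product, publicationId, topic, count

Because the sort puts whole products (and publications within them) in descending order of their totals, each streamstats dc() yields the rank of the current product or publication, and the plain streamstats count then ranks topics within each publication.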
Hello Community, I am setting up an OpenTelemetry application within a Docker on-prem environment, following the steps outlined in the documentation below:
https://lantern.splunk.com/Observability/Product_Tips/Observability_Cloud/Setting_up_the_OpenTelemetry_Demo_in_Docker
However, the OTel Collector throws the error below while connecting to the ingestion URL:

tag=otel-collector 2025-07-24T12:49:17.841Z info internal/retry_sender.go:133 Exporting failed. Will retry the request after interval. {"resource": {"service.instance.id": "45be9d90-2946-4ae4-8cd9-f0edff3bc822", "service.name": "otelcol", "service.version": "v0.129.0"}, "otelcol.component.id": "otlphttp", "otelcol.component.kind": "exporter", "otelcol.signal": "traces", "error": "failed to make an HTTP request: Post \"https://ingest.us1.signalfx.com/v2/trace/otlp\": net/http: TLS handshake timeout", "interval": "27.272487507s"}

I also tried with curl, and it hangs during the TLS handshake:

[root@kyn-app-01 opentelemetry-demo]# curl -v https://api.us1.signalfx.com/v2/apm/correlate/host.name/test/service \
  -X PUT \
  -H "X-SF-TOKEN: accesstoken" \
  -H "Content-Type: application/json" \
  -d '{"value": "test"}'
* Trying 54.203.64.116:443...
* Connected to api.us1.signalfx.com (54.203.64.116) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* CAfile: /etc/pki/tls/certs/ca-bundle.crt
* TLSv1.0 (OUT), TLS header, Certificate Status (22):
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* OpenSSL SSL_connect: Connection reset by peer in connection to api.us1.signalfx.com:443
* Closing connection 0
curl: (35) OpenSSL SSL_connect: Connection reset by peer in connection to api.us1.signalfx.com:443

So, kindly suggest what went wrong and how to fix it.
Note: Firewall is disabled and there is no proxy.
Regards, Eshwar
Ensures Splunk Cloud Platform uptime and security
Splunk continuously monitors the status of your Splunk Cloud Platform environment to help ensure uptime and availability. See the Monitoring section. We look at various health and performance variables, such as the ability to log in, ingest data, access Splunk Web, and perform searches. Splunk maintains the following:
- A rolling 30-day history of health and utilization data, to help ensure uptime and assist in troubleshooting your Splunk Cloud Platform.
- A rolling 7-day daily backup of your ingested data and configuration files, to ensure data durability. Note that the backups are accessible only by Splunk, at its discretion, to leverage as the situation dictates.
- The encryption keys, when you purchase an encryption-at-rest subscription. See the Data retention section in Storage.
See also the information in the Users and Authentication section regarding the Splunk Admin and system user roles, and the certification of Splunk Cloud Platform by independent third-party auditors to meet SOC 2 Type II and ISO 27001 security standards.
Link to the full doc: https://docs.splunk.com/Documentation/SplunkCloud/latest/Service/SplunkCloudservice
It depends on your actual data. Please share some sample representative events.
Hi @cdevoe57
Does this bit on its own work?

index="server" host="*" source="Unix:Service" UNIT=iptables.service

If not, how about:

index="server" host="*" source="Unix:Service" UNIT="iptables.service"

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
I am attempting to run a query that will find the status of 3 services and list which ones are failed and which ones are running. I only want to display the hosts that failed and the statuses of those services. The end goal is to create an alert.

The following query produces no results:

index="server" host="*" source="Unix:Service"
| eval IPTABLES = if(UNIT=iptables.service AND (ACTIVE="failed" OR ACTIVE="inactive"), "failed", "OK")
| eval AUDITD = if(UNIT=auditd.service AND (ACTIVE="failed" OR ACTIVE="inactive"), "failed", "OK")
| eval CHRONYD = if(UNIT=chronyd.service AND (ACTIVE="failed" OR ACTIVE="inactive"), "failed", "OK")
| dedup host
| table host IPTABLES AUDITD CHRONYD

This query works:

index="server" host="*" source="Unix:Service" UNIT=iptables.service
| eval IPTABLES = if(ACTIVE="failed" OR ACTIVE="inactive", "failed", "OK")
| dedup host
| table host IPTABLES

How can I get the query to produce the following results?

host       IPTABLES   AUDITD   CHRONYD
server1    failed     OK       OK
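For reference, one way a host-by-service status table like this is often built avoids the per-service evals entirely and pivots with xyseries instead. A rough, untested sketch, assuming one status event per host and UNIT; the column names come out as the raw UNIT values and can be renamed afterwards:

index="server" host="*" source="Unix:Service" (UNIT="iptables.service" OR UNIT="auditd.service" OR UNIT="chronyd.service")
| eval status=if(ACTIVE="failed" OR ACTIVE="inactive", "failed", "OK")
| stats latest(status) as status by host, UNIT
| xyseries host UNIT status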
Hi @yuvaraj_m91
Try the following:

| table _raw
| eval parent_keys=json_array_to_mv(json_keys(_raw))
| mvexpand parent_keys
| eval _raw=json_extract(_raw, parent_keys)
| eval child_keys=json_keys(_raw)
| spath
| fields - _*
| transpose 0 header_field=parent_keys

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
Hi @livehybrid , Yes, you've hit on my exact point. I'm trying to determine the best way to contact support – specifically, if their assistance is limited to paying customers or if there's an avenue for the general public to inquire. This is precisely why I brought my question to the Splunk forum. If you have any information on how to reach the Splunk or TruSTAR technical teams, I would greatly appreciate your guidance.
Hi @TestUser ,
Could you please frame your question with more details and clarity, so that it is easier for other Splunkers to help you?
Hi @LoMueller
Based on the existing code for this TA (https://github.com/thatfrankwayne/TA_oui-lookup), it's using the Python requests library, so it should be possible to implement a proxy for this with some code changes. There are no contact details for the author ( @frankwayne ) in the app, so I would recommend raising an issue on GitHub with the feature request; hopefully it can then be worked into a future version.
Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
Is it possible to add the option of using a proxy URL to download the vendor OUI file? Thanks
Assuming that this is an accurate representation of your data, you could try something like this:

| eval array=json_array_to_mv(json_keys(_raw),false())
| foreach array mode=multivalue
    [| eval set=mvappend(set,json_extract(_raw,<<ITEM>>))]
| eval row=mvrange(0,mvcount(array))
| mvexpand row
| eval key=mvindex(array,row)
| eval fields=mvindex(set,row)
| table key fields
| fromjson fields
| fields - fields
| transpose 0 header_field=key column_name=samples
There is no such functionality in SimpleXML. And it's understandable to some extent - dashboards are meant for interactive work. You could probably create something like that with custom JS, but it won't be easy.
3, 4 and partially 7 - not really.
3. Indexed fields - unless they contain additional metadata not present in the original events - are usually best avoided entirely. There are other ways of achieving the same result.
4. You can't use tstats instead of a stats-based search just because the field is a number; it requires specific types of data. It is true, though, that if you can use tstats instead of normal stats, it's way faster.
7. Wildcards at the beginning of a search term should not merely be "avoided"; they should not be used at all unless you have a very good reason for using them, know and understand the performance impact, and can significantly limit the searched events by other means. The remark about regexes is generally valid, but this is most often not the main reason for performance problems.
Hi @nagendra1111 , Splunk does not currently support sending dashboard panel searches directly to background jobs from the UI.
Hi thahir,
the btool output doesn't find any lang setting. I think Splunk tries to honor the language preference of the browser. When we change to the English language in the browser, en-GB is added correctly into the link and the link works just fine.
In our browser (e.g. MS Edge) we have multiple language packs installed. There's one named "German (Germany)" and another one called "German". I tested different languages. Our test environment can handle all of them. Our production messes up when we use the language pack "German", without the country in brackets.
{
  "abcdxyz" : {
    "transaction" : "abcdxyz",
    "sampleCount" : 60,
    "errorCount" : 13,
    "errorPct" : 21.666666,
    "meanResTime" : 418.71666666666664,
    "medianResTime" : 264.5,
    "minResTime" : 0.0,
    "maxResTime" : 4418.0,
    "pct1ResTime" : 368.4,
    "pct2ResTime" : 3728.049999999985,
    "pct3ResTime" : 4418.0,
    "throughput" : 0.25086548592644625,
    "receivedKBytesPerSec" : 0.16945669591340123,
    "sentKBytesPerSec" : 0.3197146692547623
  },
  "efghxyz" : {
    "transaction" : "efghxyz",
    "sampleCount" : 60,
    "errorCount" : 13,
    "errorPct" : 21.666666,
    "meanResTime" : 421.8,
    "medianResTime" : 32.0,
    "minResTime" : 0.0,
    "maxResTime" : 3566.0,
    "pct1ResTime" : 3258.5,
    "pct2ResTime" : 3497.6,
    "pct3ResTime" : 3566.0,
    "throughput" : 0.24752066797577596,
    "receivedKBytesPerSec" : 0.34477244084256037,
    "sentKBytesPerSec" : 0.08463804872238082
  },
  "ijklxyz" : {
    "transaction" : "ijklxyz",
    "sampleCount" : 60,
    "errorCount" : 13,
    "errorPct" : 21.666666,
    "meanResTime" : 27.733333333333338,
    "medianResTime" : 27.5,
    "minResTime" : 0.0,
    "maxResTime" : 241.0,
    "pct1ResTime" : 41.599999999999994,
    "pct2ResTime" : 52.699999999999974,
    "pct3ResTime" : 241.0,
    "throughput" : 0.25115636576738737,
    "receivedKBytesPerSec" : 0.3331214746541367,
    "sentKBytesPerSec" : 0.08588125143891667
  },
  "mnopxyz" : {
    "transaction" : "mnopxyz",
    "sampleCount" : 60,
    "errorCount" : 13,
    "errorPct" : 21.666666,
    "meanResTime" : 491.74999999999994,
    "medianResTime" : 279.5,
    "minResTime" : 0.0,
    "maxResTime" : 4270.0,
    "pct1ResTime" : 381.29999999999995,
    "pct2ResTime" : 4076.55,
    "pct3ResTime" : 4270.0,
    "throughput" : 0.2440254437195985,
    "receivedKBytesPerSec" : 0.16483632755942018,
    "sentKBytesPerSec" : 0.2839297997262848
  }
}

I need to create a table view from the above log event, which was captured as a single event, in the format below (one column per transaction, one row per metric):

samples                 abcdxyz   efghxyz   ijklxyz   mnopxyz
transaction
sampleCount
errorCount
errorPct
meanResTime
medianResTime
minResTime
maxResTime
pct1ResTime
pct2ResTime
pct3ResTime
throughput
receivedKBytesPerSec
sentKBytesPerSec
Hi @zaks191 ,
Please consider the points below for better performance in your environment.
1. Be Specific in Searches: Always use index= and sourcetype=, and add unique terms early in your search string to narrow down data quickly.
2. Filter Early, Transform Late: Place filtering commands (like where, search) at the beginning and transforming commands (stats, chart) at the end of your SPL.
3. Leverage Index-Time Extractions: Ensure critical fields are extracted at index time for faster searching, especially with JSON data.
4. Utilize tstats: For numeric or indexed data, tstats is highly efficient as it operates directly on pre-indexed data (.tsidx files), making it much faster than search | stats (see the sketch after this list).
5. Accelerate Data Models: Define and accelerate data models for frequently accessed structured data. This pre-computes summaries, allowing tstats searches to run extremely fast.
6. Accelerate Reports: For specific, repetitive transforming reports, enable report acceleration to store pre-computed results.
7. Minimize Wildcards and Regex: Avoid leading wildcards (*term) and complex, unanchored regular expressions, as they are resource-intensive.
8. Optimize Lookups: For large lookups, consider KV Store lookups or pre-generate summaries via scheduled searches.
9. Use the Job Inspector: Regularly analyze slow searches with the Job Inspector to pinpoint bottlenecks (e.g., search head vs. indexer processing).
10. Review limits.conf (Carefully): While not a primary fix, review settings like max_mem_usage_mb or max_keymap_rows in limits.conf after monitoring resource usage, but proceed with caution and thorough testing.
11. Set Up Alerts for Expensive Searches: Use internal metrics to detect problematic searches.
12. Monitor and Limit User Search Concurrency: Users running unbounded or wide-time-range ad hoc searches can harm performance.
Happy Splunking
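To illustrate point 4, here is a minimal sketch contrasting the two forms; the index and sourcetype names are hypothetical. The event-scanning version:

index=web sourcetype=access_combined
| stats count by host

versus the tsidx-backed version, which reads only index-time fields and never touches the raw events:

| tstats count where index=web sourcetype=access_combined by host

Both return a count per host, but the tstats form usually completes far faster because it works directly against the .tsidx files.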