All Posts


We are in the midst of a virtualization project, and we are looking for a way to sanity-check all the different components. I know the MC (Monitoring Console) covers some of it, but I'm not sure it covers all aspects. I'm thinking about a scripted input and a dedicated dashboard to monitor and verify all the settings. Do you have any other suggestions, by any chance?
Hello, I put Table 1 into a CSV file called "StudentRank.csv" and tried to use a subsearch, but it didn't seem to work. See below:

index=student [search | inputlookup StudentRank.csv | head 2 | table StudentID]

I also tried appendpipe and append; they worked, but they showed all students in the CSV table (4 students):

| append [ | inputlookup StudentRank.csv | head 2 | table StudentID]

My goal is similar to:

| search StudentID=101 ... repeat until StudentID=102 (N out of Total; in this example, 2 out of 4)

The second search has a lot of detail. Is it possible to pass a token in a scheduled search, like passing a token from a dropdown selection box in a Splunk dashboard? Thank you!!
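For reference, a minimal sketch of the form this kind of lookup-driven subsearch usually takes, assuming the events in index=student actually carry a StudentID field matching the lookup column (the field name comes from the post; everything else is illustrative):

index=student [ | inputlookup StudentRank.csv | head 2 | table StudentID ]

Splunk expands the subsearch into an OR of the returned values (e.g. StudentID=101 OR StudentID=102) before running the outer search, which is the "N out of Total" filtering described above.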
Hi, It is possible to generate these types of custom command line metrics using the "smartagent/exec" receiver: https://docs.splunk.com/observability/en/gdi/monitors-databases/exec-input.html

It can be tricky to get the format and approach just right, so here are some tips:

1) Put your command in an external script so it's easier to format the output in an acceptable format and it's also easier to format the call from your receiver. The default format is "influx", so an example of the output you want to generate would look like this:

printerqueue,printer=myprinter length=5

That output would generate a metric named "printerqueue.length" with a value of 5 and a tagname of "printer" and a tagvalue of "myprinter". Your external script might look like this:

#!/bin/sh
echo printerqueue,printer=myprinter length=$(lpstat -o | wc -l)

2) You'll need to define a receiver in your OTel config (e.g. agent_config.yaml):

receivers:
  smartagent/exec:
    type: telegraf/exec
    command: "/PATH/TO/printerqueue_script.sh"
    telegrafParser:
      dataFormat: "influx"

3) Don't forget to place your new receiver in your metrics pipeline and restart your OTel collector:

service:
  pipelines:
    metrics:
      receivers: [hostmetrics, otlp, signalfx, smartagent/signalfx-forwarder, smartagent/exec]
Add the operating system to the list of values returned by stats rather than as one of the group-by options.

| stats sparkline(sum(event_count)) AS event_count_sparkline, sum(event_count) AS total_events, values(operatingSystem) AS operatingSystems BY host
My current search is:

| tstats count AS event_count WHERE index=* BY host, _time span=1h
| append [ | inputlookup Domain_Computers | fields cn, operatingSystem, operatingSystemVersion | eval host = coalesce(host, cn)]
| fillnull value="0" total_events
| stats sparkline(sum(event_count)) AS event_count_sparkline sum(event_count) AS total_events BY host

How do I get operatingSystem to display in my table? When I add it to the end of my search (BY host, operatingSystem), my stats break in the table.
Changing the threshold will restore indexing, but is just kicking the can down the road.  As @PickleRick said, you should find out where the space is being used.  It may be necessary to add storage or reduce the retention time on one or more of the larger indexes. Make sure indexed data is on a separate mount point from the OS and from Splunk configs.  That will keep a huge core dump or lookup file from blocking indexing.
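As a sketch of the retention side of that advice, assuming a hypothetical large index named myindex and a dedicated data mount at /splunkdata (both placeholders), the relevant indexes.conf settings would look roughly like this:

[myindex]
homePath = /splunkdata/myindex/db
coldPath = /splunkdata/myindex/colddb
thawedPath = /splunkdata/myindex/thaweddb
# roughly 30 days of retention; data older than this is frozen (deleted by default)
frozenTimePeriodInSecs = 2592000
# cap the total size of the index on disk
maxTotalDataSizeMB = 100000

Pointing homePath/coldPath at a separate mount is what keeps a runaway core dump or lookup file on the OS filesystem from blocking indexing.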
Ismo, Appreciate the response.    
This is great info, thanks for providing the explanation. However, I only have two options, Export PDF and Print; I couldn't see "Schedule PDF delivery".
You may need to adjust the umask setting for the splunk account.
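A minimal sketch, assuming Splunk is started from a login shell as the splunk account; the file where the umask belongs depends on how splunkd is actually launched (shell profile, init script, or a UMask= line in a systemd unit), and 027 here is only an example value:

# as the splunk user
echo "umask 027" >> ~/.bash_profile
# restart so newly created files pick up the new umask
$SPLUNK_HOME/bin/splunk restart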
OK. So this search:

sourcetype="mykube.source" "failed request"
| rex "failed request:(?<request_id>[\w-]+)"
| table request_id
| head 1

will give you a single result with a single field. Now if you do something like this:

index=some_other_index sourcetype="whatever" [ sourcetype="mykube.source" "failed request" | rex "failed request:(?<request_id>[\w-]+)" | table request_id | head 1 ]

Splunk will look in some_other_index for events with a sourcetype of whatever and the request_id value returned from the subsearch.
Hi @gcusello @PickleRick, yes, you are correct; I just did `head 1` to see if my query works fine or not. My second search is: whatever request_id I received, I want to search for that request_id itself in the Splunk logs. When I searched with a hard-coded request id in Splunk, I saw the whole Java object as a string; my main goal is to extract data from that object. I hope this makes sense.
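Since the exact layout of that Java object string isn't shown, this is only a sketch of the usual follow-up: pipe the joined search through additional rex commands matching whatever key=value pairs the object actually contains (orderId and status below are purely hypothetical field names):

index=some_other_index sourcetype="whatever" [ sourcetype="mykube.source" "failed request" | rex "failed request:(?<request_id>[\w-]+)" | table request_id | head 1 ]
| rex "orderId=(?<orderId>\d+)"
| rex "status=(?<status>\w+)"
| table request_id, orderId, status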
It's not that easy. Filling the disk to the brim is not very healthy performance-wise. It's good to leave at least a few percent of the disk space free (depending on the size of the filesystem and its use characteristics) so that the data doesn't get too fragmented. The question is: do you even know what uses up your space, and do you know which of this data is important? (Also, how do you manage your space, and what did you base your limits on?)
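A quick sketch of one way to answer the "what uses up the space" question from a search head, using the data/indexes REST endpoint (the field names below come from that endpoint; adjust for a distributed setup as needed):

| rest /services/data/indexes
| table title, currentDBSizeMB, maxTotalDataSizeMB, frozenTimePeriodInSecs
| sort - num(currentDBSizeMB)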
It's typically enough to have:
1) A well-configured timezone on the server itself.
2) The TA_windows add-on on the HF for proper index-time parsing (of course you also need it on the SHs - in your case, in your Cloud instance - for search-time extractions, eventtypes and so on, but that's another story).
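If the server timezone can't be relied on, a per-sourcetype override in props.conf on the HF is a common fallback; a minimal sketch, where the sourcetype stanza and zone are placeholders:

[WinEventLog:Security]
TZ = Europe/Paris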
When the search completes, the done stanza is executed and in this instance sets a token using the job information from the search.  
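A minimal Simple XML sketch of that pattern, with a placeholder query and token names; $job.sid$ and $job.resultCount$ are job properties exposed to the done handler:

<search>
  <query>index=_internal | head 100 | stats count</query>
  <earliest>-15m</earliest>
  <latest>now</latest>
  <done>
    <set token="search_sid">$job.sid$</set>
    <set token="result_count">$job.resultCount$</set>
  </done>
</search>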
Edit: it seems I must change the parameter "Pause indexing if free disk space (in MB) falls below" from 5000 to 4000, for example? Am I right?
Dev docs have been nicely updated over the last little while! Shout-out to tedd! https://dev.splunk.com/enterprise/docs/developapps/manageknowledge/secretstorage There are API and SDK examples, and a nice post on how to control secret access, which has gotten better and could still be better with more people pushing on it: https://dev.splunk.com/enterprise/docs/developapps/manageknowledge/secretstorage/secretstoragerbac Your app just needs the proper role and capabilities to interact with the storage endpoint, and access can be scoped further from there.
Very useful @richgalloway. Thanks a lot.
Thanks a lot @PickleRick. Regarding Windows, we have a UF installed on each data source; they send files to a dedicated HF that then forwards the data to Splunk Cloud.
I get "Error: CLIENT_PLUGIN_AUTH is required" when trying to setup a collector to connect to 3 older Mysql db systems. AppDynamics Controller build 23.9.2-1074  mysql Ver 14.14 Distrib 5.1.73 RHEL... See more...
I get "Error: CLIENT_PLUGIN_AUTH is required" when trying to setup a collector to connect to 3 older Mysql db systems. AppDynamics Controller build 23.9.2-1074  mysql Ver 14.14 Distrib 5.1.73 RHEL 6.1 Is there a way in the collector to change the MySQL JDBC driver to a lower version?