Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

My current search is:

| tstats count AS event_count WHERE index=* BY host, _time span=1h
| append
    [ | inputlookup Domain_Computers
      | fields cn, operatingSystem, operatingSystemVersion
      | eval host = coalesce(host, cn) ]
| fillnull value="0" total_events
| stats sparkline(sum(event_count)) AS event_count_sparkline sum(event_count) AS total_events BY host

How do I get operatingSystem to display in my table? When I add it to the end of my search (BY host, operatingSystem), my stats break in the table.
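One way to keep the stats keyed by host only and attach operatingSystem afterwards is to enrich after the stats, e.g. with the lookup command (a sketch - this assumes Domain_Computers is available as a lookup definition keyed on cn):

| tstats count AS event_count WHERE index=* BY host, _time span=1h
| stats sparkline(sum(event_count)) AS event_count_sparkline sum(event_count) AS total_events BY host
| lookup Domain_Computers cn AS host OUTPUT operatingSystem

Because the lookup runs after the BY host aggregation, it adds operatingSystem as a plain column without splitting the stats rows.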
Changing the threshold will restore indexing, but is just kicking the can down the road.  As @PickleRick said, you should find out where the space is being used.  It may be necessary to add storage or reduce the retention time on one or more of the larger indexes. Make sure indexed data is on a separate mount point from the OS and from Splunk configs.  That will keep a huge core dump or lookup file from blocking indexing.
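To see where the space is going, a couple of standard commands are usually enough (a sketch; the paths assume a default /opt/splunk install - adjust for your environment):

```shell
SPLUNK_HOME=${SPLUNK_HOME:-/opt/splunk}

# Which filesystem holds the Splunk data, and how full is it?
df -h "$SPLUNK_HOME" 2>/dev/null || df -h /

# Which index directories are the biggest consumers?
du -sh "$SPLUNK_HOME"/var/lib/splunk/* 2>/dev/null | sort -rh | head -10
```

The du output sorted largest-first points straight at the indexes worth trimming or moving to bigger storage.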
Ismo, Appreciate the response.    
This is great info, thanks for providing the explanation. However, I only have two options: Export PDF and Print; I can't see "Schedule PDF delivery".
You may need to adjust the umask setting for the splunk account.
OK. So this search:

sourcetype="mykube.source" "failed request"
| rex "failed request:(?<request_id>[\w-]+)"
| table request_id
| head 1

will give you a single result with a single field. Now if you do something like this:

index=some_other_index sourcetype="whatever"
    [ search sourcetype="mykube.source" "failed request"
      | rex "failed request:(?<request_id>[\w-]+)"
      | table request_id
      | head 1 ]

Splunk will look in some_other_index for events with a sourcetype of "whatever" and the request_id value returned from the subsearch.
Hi @gcusello @PickleRick, yes, you are correct; I only added `head 1` to check whether my query works. My second search is: whatever request_id I received, I want to search for that request_id itself in the Splunk logs. When I searched with a hard-coded request id in Splunk, I saw the whole Java object as a string; my main goal is to extract data from that object. I hope this makes sense.
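For pulling fields out of that Java-object string once the events are found, a rex with named capture groups is the usual tool (a sketch - the field names userId and status here are purely hypothetical; match them to your actual object's toString format):

index=some_other_index sourcetype="whatever" request_id="abc-123"
| rex "userId=(?<user_id>\w+).*status=(?<status>\w+)"
| table request_id, user_id, status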
It's not that easy. Filling the disk to the brim is not very healthy performance-wise. It's good to leave at least a few percent of the disk space free (depending on the size of the filesystem and its usage characteristics) so that the data doesn't get too fragmented. The question is: do you even know what is using up your space, and do you know which of that data is important? (Also: how do you manage your space, and what did you base your limits on?)
It's typically enough to have:
1) A well-configured timezone on the server itself.
2) The TA_windows add-on on the HF for proper index-time parsing. (Of course you also need it on the SHs - in your case, on your Cloud instance - for search-time extractions, eventtypes and so on, but that's another story.)
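If timestamps still come in shifted after that, a per-sourcetype TZ override in props.conf on the HF is the usual knob (a sketch; the stanza name and zone here are assumptions - use your actual sourcetype and the source host's zone):

[WinEventLog]
TZ = Europe/Paris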
When the search completes, the <done> stanza is executed; in this instance it sets a token using the job information from the search.
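In Simple XML that pattern looks roughly like this (a sketch; the token name sid and the query are just examples):

<search>
  <query>index=_internal | head 1</query>
  <done>
    <set token="sid">$job.sid$</set>
  </done>
</search>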
Edit: It seems I must change the parameter "Pause indexing if free disk space (in MB) falls below" from 5000 to, for example, 4000? Am I right?
The dev docs have been nicely updated over the last little while - shout-out to tedd! https://dev.splunk.com/enterprise/docs/developapps/manageknowledge/secretstorage There are API and SDK examples, and a nice post on how to control secret access, which has gotten better and could still improve with more people pushing on it: https://dev.splunk.com/enterprise/docs/developapps/manageknowledge/secretstorage/secretstoragerbac Your app just needs the proper role and capabilities to interact with the storage endpoint, and access can be scoped further from there.
Very useful @richgalloway. Thanks a lot.
Thanks a lot @PickleRick. Regarding Windows, we have a UF installed on each data source; they send files to a dedicated HF that then forwards data to Splunk Cloud.
I get "Error: CLIENT_PLUGIN_AUTH is required" when trying to set up a collector to connect to 3 older MySQL systems.

AppDynamics Controller build 23.9.2-1074
mysql Ver 14.14 Distrib 5.1.73
RHEL 6.1

Is there a way in the collector to change the MySQL JDBC driver to a lower version?
https://docs.splunk.com/Documentation/Splunk/9.1.3/Search/Aboutsubsearches Watch out, however, because subsearches have their limitations: if your subsearch is either long-running or returns many events, it may get silently finalized and you might not get proper results (wrong results or no results at all). The question is whether you need this Report 1 anyway, or whether it is just part of the functionality you want to achieve - because it could probably be done differently with a single search.
Hi @nlloyd, see "how to store encrypted credentials in Splunk" at https://www.splunk.com/en_us/blog/security/storing-encrypted-credentials.html - in other words, you have to run the script via Splunk so you can store credentials in encrypted form in Splunk conf files. Then you can see here https://www.splunk.com/en_us/blog/tips-and-tricks/enable-first-run-app-configuration-with-setup-pages.html#:~:text=A%20setup%20page%20is%20a,the%20Splunk%20Web%20user%20interface. how to configure your add-on to show a setup page where the password can be entered and stored in a conf file in encrypted form. Ciao. Giuseppe
Hello, how do I pass data/a token from one report to another report? Thank you for your help.

I am trying to run a weekly report that produces the top 4 students (out of 100); once I find the top 4 students, I will run another report that provides detailed information about grades for those 4 students. For example:

Report 1
StudentID | Name     | GPA | Percentile | Email
101       | Student1 | 4   | 100%       | Student1@email.com
102       | Student2 | 3   | 90%        | Student2@email.com
103       | Student3 | 2   | 70%        | Student3@email.com
104       | Student4 | 1   | 40%        | Student4@email.com

Report 2
StudentID | Course  | Grade
101       | Math    | 100
101       | English | 95
102       | Math    | 90
102       | English | 90
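One way to sketch this is to embed Report 1 as a subsearch feeding its StudentID values into Report 2 (the index and field names here are assumptions based on the example tables):

index=grades
    [ search index=students | sort - GPA | head 4 | fields StudentID ]
| table StudentID, Course, Grade

The bracketed search runs first and its StudentID values become the filter for the outer search, so Report 2 only returns grades for the top 4 students.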
Hello, if I put your suggested search into a plain search in Splunk, it didn't work, but I was able to create a dashboard using your search. I was also able to export to PDF manually by clicking Export => Download PDF.
1) How do I schedule a dashboard as a PDF? Should I create the dashboard first, then put it in a report? My goal is to send an email once a week with a report for a specific time frame (e.g. 30 days) to determine a ranking.
2) What is the purpose of token=sid and the <done> element?
Thanks
Hi all, Very new to Splunk so apologies if this is a very basic question. I've looked around and haven't found a conclusive answer so far. I'm building an app that will require an API token from a 3rd party system during the setup step. What I don't understand is how I can store that API token via a call to storage/passwords without also requiring the user to enter their Splunk credentials or a Splunk API token. Would really appreciate if someone could point out how I can do this! Ideally, I'm looking to use the JS SDK, so I'd need some way to create an instance of the Service object without needing admin user credentials being manually entered.  Thanks in advance!
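A minimal stdlib sketch of calling the storage/passwords REST endpoint, assuming the default management port and that your page already holds a session key (when a setup page runs inside Splunk Web, the logged-in user's session can authenticate the call, so no extra credentials need to be typed). The app name, realm, and credentials below are placeholders:

```python
import ssl
import urllib.parse
import urllib.request

SPLUNK_BASE = "https://localhost:8089"  # assumption: default management port
APP = "myapp"                           # placeholder app name

def build_storage_password_request(username, password, realm, session_key):
    """Build a POST request for servicesNS/nobody/<app>/storage/passwords."""
    url = f"{SPLUNK_BASE}/servicesNS/nobody/{APP}/storage/passwords"
    body = urllib.parse.urlencode(
        {"name": username, "password": password, "realm": realm}
    ).encode()
    req = urllib.request.Request(url, data=body, method="POST")
    # Session-key auth: no username/password needed in the request itself
    req.add_header("Authorization", f"Splunk {session_key}")
    return req

def store_secret(username, password, realm, session_key):
    """Send the request; needs a running Splunk instance to succeed."""
    req = build_storage_password_request(username, password, realm, session_key)
    ctx = ssl._create_unverified_context()  # dev only: self-signed certs
    with urllib.request.urlopen(req, context=ctx) as resp:
        return resp.status

# store_secret("api_user", "s3cr3t-token", "my_realm", session_key)
```

The JS SDK follows the same shape: create the Service with a sessionKey instead of username/password, then use the storagePasswords collection.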