All Posts

Splunk requires a hierarchical file system.  You can, however, move your frozen data to an object store, if you wish.
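If you go down that path, the usual hook is coldToFrozenScript in indexes.conf, which Splunk invokes for each bucket as it freezes. A minimal sketch, assuming a hypothetical upload script you would write yourself (coldToFrozenScript itself is a standard indexes.conf setting):

# indexes.conf on the indexers (sketch; upload_frozen_to_s3.py is a hypothetical script, not shipped with Splunk)
[my_index]
coldToFrozenScript = "$SPLUNK_HOME/bin/python" "$SPLUNK_HOME/bin/upload_frozen_to_s3.py"

Splunk passes the path of the freezing bucket as the script's final argument; the script is then responsible for copying the bucket to the object store.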
A majority of these are blacklisted from bundle replication and only exist on the SH cluster. I will check this out still. Thanks!
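For reference, bundle-replication blacklisting is configured in distsearch.conf on the search head. A minimal sketch (the entry name and path pattern are illustrative, while the [replicationBlacklist] stanza is standard):

# distsearch.conf on the search head (sketch; the lookup path is illustrative)
[replicationBlacklist]
big_lookup = apps[/\\]myapp[/\\]lookups[/\\]huge_lookup\.csv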
It depends on how often bundles are rebuilt on your system.  Start with 4 hours and add or subtract as necessary.
If you have CLI access then you can get that by looking at $SPLUNK_HOME/share/splunk/3rdparty/Copyright-for-CherryPy-*.txt.
When performing a query that creates a summary report, the associated search.log file shows:
ResultsCollationProcessor - writing remote_event_providers.csv to disk
Then two hours later it reports:
StatsBuffer::read: Row is too large for StatsBuffer, resizing buffer. row_size=77060 needed_space=11536 free_space=153653063
This is soon followed by lots of roughly minute-by-minute output of:
SummaryIndexProcessor - Using tmp_file=/opt/splunk/..../RMD....tmp
What might be happening in that two-hour window? We are running Splunk Enterprise 9.1.1 under Linux. @koronb_splunk @C_Mooney
For index time I applied the settings on the Heavy Forwarders, and for search time I tried them on the Search Head.
Hey @richgalloway , thanks for the reply! Would I need to run this over a larger time range to capture as many lookups as possible? Roughly how large a range would give the best results?
As you can see from my run-anywhere example, it does work. How have you actually implemented my suggestion? What results do you get? What do your actual events look like?
What results do you get then?
Hello @richgalloway , could you please tell me why you recommend block storage over object storage?
Hi ITWhisperer, thanks for sharing it. Unfortunately, when I run your code I receive no results.
I tried both index time and search time, but neither worked.
No, this is not working.
Where do you have those settings applied? Remember that index-time settings (like line breaking and timestamp recognition/parsing) go on the indexing tier (HFs/indexers), while search-time settings are needed on the search tier (it doesn't hurt to have the full set of settings on both tiers; unneeded settings are simply not used there).
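To make that concrete, a props.conf sketch (the sourcetype and patterns are illustrative; the setting names are standard):

# props.conf (sketch; sourcetype and patterns are illustrative)
[my:sourcetype]
# Index-time: applied where the data is parsed (HFs/indexers)
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S
# Search-time: applied on the search head(s)
EXTRACT-level = level=(?<level>\w+)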
No. Single-site buckets will not be converted to multisite. That's why it's worth considering creating your installation as a one-site multisite cluster from the beginning, so that if you ever need to extend your cluster you don't have to "convert" it to multisite but can simply add another site to it.
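A sketch of what that starting point could look like in server.conf on the cluster manager (the factor values are illustrative; the settings themselves are standard clustering options):

# server.conf on the cluster manager (sketch; factors are illustrative)
[general]
site = site1

[clustering]
mode = manager
multisite = true
available_sites = site1
site_replication_factor = origin:2,total:2
site_search_factor = origin:1,total:1

Extending later would then mean adding site2 to available_sites and assigning the new peers to it, rather than converting the cluster.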
You can try like this: | makeresults | eval Title="title", 'First name'=1, 'Second name'=0 | foreach * [ eval '<<FIELD>>'=if("<<MATCHSTR>>"=="Title", '<<FIELD>>', if('<<FIELD>>'==1, "Yes", "No")) ] Note that single-quoting <<FIELD>> keeps field names with spaces intact, and == (not =) is the comparison operator inside eval.
It may not be all-inclusive, but you can get lookup file sizes from the audit index. index=_audit isdir=0 size lookups action IN (update created modified add) | stats latest(eval(size/1024/1024)) as size_mb latest(_time) as _time by path
So you need to do: <your search> | spath input=stdout This way you'll parse the contents of the stdout field.
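A run-anywhere sketch, with made-up JSON standing in for your actual stdout content:

| makeresults
| eval stdout="{\"status\": \"ok\", \"count\": 3}"
| spath input=stdout

This should give you status and count as separate fields extracted from stdout.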
thanks, @PickleRick - this almost worked. The only thing is that the columns "Agent 1, Agent 2, Agent 3 ...." are actual agent names, so the below will not work. How can I use this foreach so it includes all columns except the "Queue" column? | foreach "Agent*" Thank you. Edit: I was able to handle spaces within the field names by referring to this link: https://community.splunk.com/t5/Splunk-Search/Foreach-fails-if-field-contains-colon-or-dot/m-p/487408
I would like to find a way to list the dependencies between dashboards and indexes. I'm using the following query to get the list of all the dashboards using the index "oracle", which is an event index.

| rest splunk_server="local" "/servicesNS/-/-/data/ui/views"
| search "eai:data"="*index=oracle*"
| eval Type="Dashboards"
| table Type title eai:acl.app author eai:acl.perms.read

This query works fine, but not for a metrics index. Am I missing something?