When performing a query that creates a summary report, the associated search.log file shows:

ResultsCollationProcessor - writing remote_event_providers.csv to disk

Then, two hours later, it reports:

StatsBuffer::read: Row is too large for StatsBuffer, resizing buffer. row_size=77060 needed_space=11536 free_space=153653063

This is soon followed by many roughly minute-by-minute messages of the form:

SummaryIndexProcessor - Using tmp_file=/opt/splunk/..../RMD....tmp

What might be happening in that two-hour window? We are running Splunk Enterprise 9.1.1 under Linux. @koronb_splunk @C_Mooney
Hey @richgalloway , Thanks for the reply! Would I need to run this over a larger time range to get as many lookups in as possible? Approximately how large a range would give the best results?
As you can see from my runanywhere example, it does work. How have you actually implemented my suggestion? What results do you get? What do your actual events look like?
Where do you have those settings applied? Remember that index-time settings (like line breaking and timestamp recognition/parsing) go on the indexing tier (HFs/indexers), while search-time settings are needed on the search tier. (It doesn't hurt to have the full set of settings on both tiers; unneeded settings are simply ignored there.)
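For illustration only (the sourcetype name and values below are made-up examples, not anyone's actual config), a props.conf stanza mixing the two kinds of settings might look like this:

```
# props.conf -- illustrative sketch, not a real deployment
[my:example:sourcetype]

# Index-time settings: only honored on the parsing tier (HFs/indexers)
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3N%z
MAX_TIMESTAMP_LOOKAHEAD = 30

# Search-time settings: only honored on the search head(s)
KV_MODE = json
EXTRACT-level = (?<level>INFO|WARN|ERROR)
```

Shipping the whole file to both tiers is safe: each tier simply ignores the settings it doesn't use.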
No. Single-site buckets will not be converted to multisite. That's why it's worth considering creating your installation as a one-site multisite cluster from the beginning so that if at any point you need to extend your cluster you don't have to "convert" it to multisite but simply add another site to it.
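For illustration, a one-site multisite setup on the cluster manager might look roughly like this in server.conf (a sketch; the site name and factor values are example assumptions and must match your own copy counts):

```
# server.conf on the cluster manager -- illustrative sketch
[general]
site = site1

[clustering]
mode = manager
multisite = true
available_sites = site1
site_replication_factor = origin:2, total:3
site_search_factor = origin:1, total:2
```

Extending later then means adding site2 to available_sites and adjusting the site factors, rather than converting a single-site cluster.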
It may not be all-inclusive, but you can get lookup file sizes from the audit index.

index=_audit isdir=0 size lookups action IN (update created modified add)
| stats latest(eval(size/1024/1024)) as size_mb latest(_time) as _time by path
Thanks, @PickleRick - this almost worked. The only thing is that the columns "Agent 1, Agent 2, Agent 3 ...." are actual agent names, so the below will not work. How can I use this foreach so that it includes all columns except the "Queue" column?

| foreach "Agent*"

Thank you.

Edit: I was able to handle spaces within the field names by referring to this link: https://community.splunk.com/t5/Splunk-Search/Foreach-fails-if-field-contains-colon-or-dot/m-p/487408
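When the columns share no common prefix, one option (a sketch only, assuming the goal is to aggregate the per-agent columns; agent_total is a hypothetical field name) is to iterate over all fields and skip "Queue" inside the foreach body:

```
| foreach * [ eval agent_total = if("<<FIELD>>"=="Queue",
      agent_total,
      coalesce(agent_total, 0) + coalesce('<<FIELD>>', 0)) ]
```

The quoted "<<FIELD>>" compares the field name itself, while '<<FIELD>>' references its value, so fields with spaces in their names are still handled.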
I would like to find a way to list the dependencies between dashboards and indexes. I'm using the following query to get the list of all the dashboards using the index oracle, which is an event index.

| rest splunk_server="local" "/servicesNS/-/-/data/ui/views"
| search "eai:data"="*index=oracle*"
| eval Type="Dashboards"
| table Type title eai:acl.app author eai:acl.perms.read

This query works fine, but not for a metrics index. Am I missing something?
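One thing worth checking (a sketch, not a confirmed fix): dashboards built on a metrics index typically query it with | mstats ... WHERE index=<name>, and some reference the index only through a macro, so the literal string "index=oracle" may not appear in eai:data at all. Broadening the match to the bare index name may surface them:

```
| rest splunk_server="local" "/servicesNS/-/-/data/ui/views"
| search "eai:data"="*oracle*"
| eval Type="Dashboards"
| table Type title eai:acl.app author eai:acl.perms.read
```

This will over-match (any dashboard containing the word "oracle"), so treat the result as a candidate list to review rather than an exact dependency map.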