All Posts
Hey @richgalloway, thanks for the reply! Would I need to run this over a larger time range to get as many lookups in as possible? Approximately how large a range would get the best results?
As you can see from my runanywhere example, it does work. How have you actually implemented my suggestion? What results do you get? What do your actual events look like?
What results do you get then?
Hello @richgalloway, could you please tell me why block storage over object storage?
Hi ITWhisperer, thanks for sharing it. Unfortunately, when I run your code I receive no results.
I tried both index time and search time but neither worked.
No, this is not working.
Where do you have those settings applied? Remember that index-time settings (like line-breaking, timestamp recognition/parsing) go to indexing tier (HFs/indexers) while search-time settings are needed on the search tier (it doesn't hurt to have the full set of settings on both tiers - unneeded settings are just not used there).
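As a sketch of that split (the sourcetype name and patterns below are placeholders, not taken from the thread):

```
# props.conf on the indexing tier (HFs/indexers) - index-time settings
[my_sourcetype]
LINE_BREAKER = ([\r\n]+)
SHOULD_LINEMERGE = false
TIME_PREFIX = timestamp=
TIME_FORMAT = %Y-%m-%dT%H:%M:%S

# props.conf on the search head(s) - search-time settings
[my_sourcetype]
KV_MODE = json
```

As noted above, deploying the full stanza to both tiers is harmless: each tier simply ignores the settings it doesn't apply.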
No. Single-site buckets will not be converted to multisite. That's why it's worth considering creating your installation as a one-site multisite cluster from the beginning so that if at any point you need to extend your cluster you don't have to "convert" it to multisite but simply add another site to it.
You can try like this:

| makeresults
| eval Title="title", 'First name'=1, 'Second name'=0
| foreach "*"
    [ eval <<FIELD>>=if("<<MATCHSTR>>"=="Title", "Title", if('<<FIELD>>'==1, "Yes", "No")) ]
It may not be all-inclusive, but you can get lookup file sizes from the audit index. index=_audit isdir=0 size lookups action IN (update created modified add) | stats latest(eval(size/1024/1024)) as size_mb latest(_time) as _time by path
So you need to do:

<your search>
| spath input=stdout

This way you'll parse the contents of the stdout field.
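As a runanywhere illustration (the JSON content here is invented for the example), parsing a JSON string held in a stdout field looks like:

```spl
| makeresults
| eval stdout="{\"status\":\"ok\",\"user\":{\"name\":\"alice\"}}"
| spath input=stdout
| table status user.name
```

spath flattens the nested object, so the inner key becomes a dotted field name (user.name).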
thanks, @PickleRick - this almost worked. The only thing is that the columns "Agent 1, Agent 2, Agent 3 ..." are actual agent names, so the below will not work. How can I use this foreach so it includes all columns except the "Queue" column? | foreach "Agent*" Thank you. Edit: I was able to handle spaces within the field names by referring to the link below: https://community.splunk.com/t5/Splunk-Search/Foreach-fails-if-field-contains-colon-or-dot/m-p/487408
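One way to handle this (a sketch reusing the Yes/No logic from the earlier suggestion) is to iterate over all columns and leave "Queue" untouched by comparing the field name inside the loop:

```spl
| foreach *
    [ eval <<FIELD>>=if("<<MATCHSTR>>"=="Queue", '<<FIELD>>', if('<<FIELD>>'==1, "Yes", "No")) ]
```

The double-quoted "<<MATCHSTR>>" expands to the field's name as a string, while the single-quoted '<<FIELD>>' references the field's value, so the Queue column passes through unchanged regardless of what the agent columns are called.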
I would like to find a way to list the dependency between dashboards and indexes. I'm using the following query to get the list of all the dashboards using the index oracle, which is an event index:

| rest splunk_server="local" "/servicesNS/-/-/data/ui/views"
| search "eai:data"="*index=oracle*"
| eval Type="Dashboards"
| table Type title eai:acl.app author eai:acl.perms.read

This query works fine for the event index but not for the metrics index. Am I missing something?
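One possible explanation (an assumption, not confirmed in the thread): dashboards built on a metrics index usually query it via mstats, and those searches often quote the index name (index="oracle"), which the literal pattern *index=oracle* won't match because of the quote character. A loosened wildcard may catch both spellings:

```spl
| rest splunk_server="local" "/servicesNS/-/-/data/ui/views"
| search "eai:data"="*index=*oracle*"
| eval Type="Dashboards"
| table Type title eai:acl.app author eai:acl.perms.read
```

This is a broader substring match, so it may also pull in false positives (e.g. other index names containing "oracle"); inspect eai:data on the results to confirm.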
I would like to know if this feature has been added now?
Hey Splunkers, I wanted to get a list of all the lookup files on my SH and their file sizes, along with other data. I can't get the size from the REST API. Appreciate any and all answers. Below are the searches I've been trying to use:

| rest /servicesNS/-/-/data/lookup-table-files
| rename "eai:acl.app" as app, "eai:acl.owner" as owner, "eai:acl.sharing" as sharing, "eai:data" as path
| table title owner sharing app
| foreach title
    [| inputlookup <<FIELD>>
     | foreach * [| eval b_<<FIELD>>=len(<<FIELD>>) + 1 ]
     | addtotals b_* fieldname=b
     | stats sum(b) as b
     | eval mb=b/1000/1000, gb=mb/1000
     | fields mb]

The foreach does not allow non-streaming commands, so this does not work. Using a direct eval like below:

| rest /servicesNS/-/-/data/lookup-table-files
| rename "eai:acl.app" as app, "eai:acl.owner" as owner, "eai:acl.sharing" as sharing, "eai:data" as path
| table title owner sharing app
| eval size= [| inputlookup 'title'
     | foreach * [| eval b_<<FIELD>>=len(<<FIELD>>) + 1 ]
     | addtotals b_* fieldname=b
     | stats sum(b) as b
     | eval mb=b/1000/1000, gb=mb/1000
     | fields mb]

This also does not work, since the inner search cannot see the outer values. I have been trying to work with subsearches, foreach, and the map command, but couldn't get anywhere. Thanks in advance, folks.
That was one of my theories, but unfortunately, after checking, we do have some missing events. We only receive random events in XML and all events in wineventlog format.
Excellent! Is there a way of doing this directly with SPL?
Hello!! Thanks for your answer! You are indeed correct! The top level of the event is treated as JSON, but nested in the "log" variable, the "stdout" variable has another dictionary within it that is being treated as a string, making it difficult to work with in SPL. I did my research and it seems this might be an issue with the way the data is being parsed before arriving at Splunk. Until I can check that, I guess I'm stuck with searching for string literals. Thank you for your time and help!!
Given that this doesn't appear to be wholly correct JSON, you could start with something like this:

| rex "DETAILS: (?<details>\[.*\])"
| spath input=details
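For instance, with a made-up event containing a DETAILS array (the field names and values here are illustrative only):

```spl
| makeresults
| eval _raw="DETAILS: [{\"code\":500,\"msg\":\"timeout\"}]"
| rex "DETAILS: (?<details>\[.*\])"
| spath input=details
```

Since details is a JSON array, spath exposes the extracted fields with a {} prefix, e.g. {}.code and {}.msg, which you can rename afterwards if needed.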