All Posts

In which panel, and which value is negative? In any case, you can open any panel in Search and see where that value comes from. Most probably there's an initial REST call that returns wrong values, but you have to double-check that. Did you restart splunkd on the server(s) where you added storage, or did you just extend the filesystem on the fly?
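If the panel in question is one of the stock disk-usage panels, it typically pulls its numbers from the partitions-space REST endpoint. A quick way to inspect the raw values (a sketch only; adjust splunk_server filtering to your deployment) is:

```
| rest splunk_server=* /services/server/status/partitions-space
| table splunk_server, mount_point, capacity, free
```

Comparing capacity and free here against what the panel renders should show whether the REST data itself is stale after the storage change.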
1. Check your _internal index for possible messages regarding this source.
2. Are your sourcetypes properly defined, or are you mostly just relying on defaults? I suspect this data source hasn't been properly onboarded. Most importantly, do you have line merging disabled and a properly defined line breaker? (And do you have event breakers set properly?)
3. Did you verify that the rest of those events is really not ingested, or is it maybe just not indexed at the right time? The way to test this would be to run a real-time search (that's one of the very few cases where real-time searches make sense) narrowed down to this problematic source, and see whether the data shows up and what timestamp is being parsed from it.
4. Thruput has nothing to do with it. It would only make your downstream pipe get clogged, but your data would eventually trickle down to the indexer(s).
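As a sketch of what "properly onboarded" usually looks like in props.conf (the sourcetype name, line-breaker regex, and timestamp format below are placeholders for illustration, not taken from this thread):

```
[my_custom_sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 19
EVENT_BREAKER_ENABLE = true
EVENT_BREAKER = ([\r\n]+)
```

The EVENT_BREAKER settings matter on universal forwarders so that chunks are split on event boundaries before they reach the indexers.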
It's up to your OS and/or Splunk admins to solve. For some reason, the filesystem on which the dispatch directory is located is filled to the brim. It might be just the dispatch data, but if it's on the same filesystem as, for example, Splunk's internal logs, and maybe OS logs and other stuff, there could be other places you need to look for free space.
Here's some good information about the dispatch directory: https://docs.splunk.com/Documentation/Splunk/9.3.1/Search/Dispatchdirectoryandsearchartifacts Splunk normally does age things out, but read the doc above. Perhaps the disk is full for other reasons? https://community.splunk.com/t5/Splunk-Search/Splunk-says-dispatch-directory-is-full-but-when-I-go-to-the/m-p/370243 One thing that can cause your dispatch directory to grow is adjusting the time to live (TTL) of jobs.
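To see whether it's actually search artifacts eating the space, a quick check from the shell (a sketch, assuming a default /opt/splunk install; set SPLUNK_HOME if yours differs):

```shell
# Locate the dispatch directory (falls back to the default install path)
DISPATCH="${SPLUNK_HOME:-/opt/splunk}/var/run/splunk/dispatch"

# Total space used by search artifacts
du -sh "$DISPATCH" 2>/dev/null || echo "dispatch directory not found at $DISPATCH"

# Ten largest individual artifacts, biggest first
du -sk "$DISPATCH"/* 2>/dev/null | sort -rn | head -10
```

If a handful of huge artifacts dominate, look at who owns those jobs and whether their TTLs were raised.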
There is a Splunk-supported TA for McAfee ePO: https://splunkbase.splunk.com/app/5085 The log ingestion is via syslog (as far as I remember from a few years back, ePO exports events over a TLS-protected TCP stream). The rest you'll find in the docs; it's a Splunk-supported app, so it has relatively good docs.
1. This is not your whole event, since you're doing spath to get it.
2. Don't search for "*tanium*". A wildcard at the beginning of a search term will make Splunk read all raw events.
3. We don't know your data. How can we know why your results are "wrong"? Maybe some of your extractions don't work and you get nulls. Dedups or mvzips on them will yield null results.
4. There are two typical ways of debugging SPL searches. One is to start from the beginning and add commands until their results stop making sense. The other is to start from the end and remove commands until the results start making sense.
Hi, if an environment encounters the error 'Search not executed: The minimum free disk space (5000MB) reached for /opt/splunk/var/run/splunk/dispatch' multiple times (meaning the issue persists even after cleaning the dispatch directory), what corrective actions should be taken? Should the dispatch directory be cleaned regularly? This is for a standalone environment.
What do you mean by "split"? This is obviously not an event but a result of a search. So adjust your search so it doesn't merge all results into multivalued fields (which, by the way, give you no guarantee that "the same" row from each of those fields corresponds to the same event in the original data, or whatever data you're summarizing from).
This is what I have in "server.conf", in addition to what I have in "web.conf":

[httpServer]
disableDefaultPort = false
mgmtMode = tcp

After that, splunkd starts listening on TCP port 8089.
Please help me to extract multiple values from one single value.  
Here's what I ended up doing; it seems to work!

| rex max_match=0 field=Tags "(?<namevalue>[^:, ]+:[^,]+)"
| mvexpand namevalue
| rex field=namevalue "(?<name>[^:]+):(?<value>.*)"
| eval {name}=value

The confusion about seeing only one of the fields being extracted was a result of the mvexpand. I didn't realize it created NEW events, one for each field. Makes sense now... thank you!
A subsearch gets executed first, and if it completes successfully (which might not happen: subsearches have limitations, and throwing heavy raw-data-based searches into them is not a good idea), it returns a set of conditions or a search string which gets substituted into the main search. So your search as it is makes no sense syntactically, because the rex command doesn't take a subsearch as an additional argument. If anything, you'd need to do <something> | search [ your subsearch here ]
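Putting that together with the search from the question (a sketch only; the index, source, and field names come from the question, and the regexes are untested assumptions about the data):

```
index=index1 source="/somefile.log"
| rex field=uri "/path/with/id/(?<some_id>[^/]+)/"
| search
    [ search index=index2 source="/another.log" "condition-i-want-to-find"
      | rex field=_raw "some_id:(?<some_id>[^,]+)"
      | dedup some_id
      | fields some_id ]
```

The subsearch returns its some_id values as field=value conditions, so the outer search keeps only events whose extracted some_id matches one of them. Note the outer rex must run before the search filter so the field exists to match against.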
| spath input=json output=device path=audit.result.devices{}
| mvexpand device
| spath input=device path=whatever.whatever
Unfortunately, I am not the manager of our Splunk installation (and have no access to it), so I can't provide any info about our setup, config files, etc. I'll see if I can get that info to you from one of our ops folks. - Tim    
I am trying to take the results of one search, extract a field from those results (named "id"), take all of those values (deduped), and use them to get results from another search. Unfortunately, the second search doesn't have this field name directly in the sourcetype either, so it has to be extracted with rex. I've been having issues with this, though. From what I've read, I need to use the subsearch to extract the ids for the outer search. It's not working, though. Each search is from a completely different data set that has very little in common.

index=index1 source="/somefile.log" uri="/path/with/id/some_id/"
| rex field=uri "/path/with/id/(?<some_id>[^/]+)/*"
    [ search index=index2 source="/another.log" "condition-i-want-to-find"
      | rex field=_raw "some_id:(?<some_id>[^,]+),*"
      | dedup some_id
      | fields some_id ]

I've tried a bunch of variations of this with no luck, including renaming field some_id to "search", as some have said that would help. I don't necessarily need the original uri="/path/with/id/some_id" in the outer search, but that would be nice to limit those results.
When you tested in the CLI, did you use Splunk's Python interpreter (splunk cmd python ...)? If not, then there may be differences between the environments that prevent the command from running. Verify that all imported modules are available via Splunk; those that are not should be added to your command's bin/lib directory. Check python.log for messages that might explain why the command isn't working.
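One way to spot such environment differences is to run the same tiny script under both interpreters and compare the output (a sketch; env_check.py is a hypothetical file name):

```python
# env_check.py: print which interpreter is running and where it looks for modules
import sys

print("interpreter:", sys.executable)
print("version:", sys.version.split()[0])
for path in sys.path:
    print("search path:", path)
```

Run it once with your system Python (python3 env_check.py) and once through Splunk (splunk cmd python3 env_check.py); a module directory present in one sys.path but missing from the other usually explains the "works in the CLI, fails in Splunk" symptom.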
@PickleRick - Can you please share sample syntax?
Hi, please help me in extracting multivalue fields from email body logs.

LOG:

"Computer Name","Patch List Name","Compliance Status","Patch List Name1","Compliance Status1","OS Type1"
"XXXX.emea.intra","ACN - Windows Server - PL - Up to Oct24","Compliant","[ACN - Windows Server - PL - Up to Aug24] + [ACN - Windows Server - PL - Sep24]","Compliant","Windows"
"XXXX.na.intra","ACN - Windows Server - PL - Up to Oct24","Compliant","[ACN - Windows Server - PL - Up to Aug24] + [ACN - Windows Server - PL - Sep24]","Compliant","Windows"

The fields I want to extract are: "Computer Name","Patch List Name","Compliance Status","Patch List Name1","Compliance Status1","OS Type1"

I have applied rex to bring out all the fields. The rex gives me a total of 3131 computer names, but when I use the mvexpand command to expand them into multiple rows, it gives me only 1500 results; not sure why the rest are getting truncated. Attaching the search queries for reference:

index=mail "*tanium*"
| spath=body
| rex field=body max_match=0 "\"(?<Computer_name>.*)\",\"ACN"
| rex field=body max_match=0 "\"(?<Computer_name1>.*)\",\"\[n"
| rex field=Computer_name1 max_match=0 "(?<Computer_name2>.*)\",\"\[n"
| eval Computer_name=mvappend(Computer_name,Computer_name2)
| table Computer_name
| dedup Computer_name
| mvexpand Computer_name
| makemv Computer_name delim=","

index=mail "*tanium*"
| spath=body
| rex field=body max_match=0 "\"(?<Computer_name>.*)\",\"ACN"
| rex field=body max_match=0 "\"(?<Computer_name1>.*)\",\"\[n"
| rex field=Computer_name1 max_match=0 "(?<Computer_name2>.*)\",\"\[n"
| eval Computer_name=mvappend(Computer_name,Computer_name2)
| rex field=body max_match=0 "\,(?<Patch_List_Name1>.*)\"\["
| rex field=Patch_List_Name1 max_match=0 "\"(?<Patch_List_Name>.*)\",\""
| rex field=Patch_List_Name1 max_match=0 "\",\"(?<Compliance_status>.*)\""
| table Computer_name Patch_List_Name Compliance_status
| dedup Computer_name Patch_List_Name Compliance_status
| eval tagged=mvzip(Computer_name,Patch_List_Name)
| eval tagged=mvzip(tagged,Compliance_status)
| mvexpand tagged
| makemv tagged delim=","
| eval Computer_name=mvindex(tagged,0)
| eval Patch_List_Name=mvindex(tagged,1)
| eval Compliance_status=mvindex(tagged,-1)
| table Computer_name Patch_List_Name Compliance_status
Hi @mwolfe, good for you, see you next time! Ciao and happy splunking, Giuseppe. P.S.: Karma Points are appreciated by all the contributors.