All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


I'm looking for a query to display a list of jobs stuck in the queue over the past 7 days. Does anyone know the query?
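Not a definitive answer, but as a starting sketch (assuming the scheduler's internal logs are searchable in your environment), something like this surfaces scheduled searches that did not complete normally over the past 7 days:

```spl
index=_internal sourcetype=scheduler earliest=-7d status!="success"
| stats count by savedsearch_name, status, reason
```

For searches queued right now (rather than historically), `| rest /services/search/jobs | search dispatchState="QUEUED"` is another angle to try.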
Hi @mackey  If you have ES, it has a framework called "Threat Intelligence" for managing threat feeds, detecting threats, and sending alerts. You should explore this functionality, as it can be quite beneficial. Additionally, several other high-quality sources of threat data are available in that framework and just need to be activated if required; or, if you have your own custom feeds, you can integrate them as custom lookups in threat intelligence. As mentioned by @gcusello, you have two options; explore them as per your requirement. For more info on this, please refer to the docs below:

https://lantern.splunk.com/Security/UCE/Guided_Insights/Threat_intelligence/Using_threat_intelligence_in_Splunk_Enterprise_Security
https://dev.splunk.com/enterprise/docs/devtools/enterprisesecurity/threatintelligenceframework/
https://www.splunk.com/en_us/pdfs/feature-brief/splunk-threat-intelligence-management.pdf

If this helps, accept the answer by upvoting!! Happy Splunking!!
Yes, I have access to ES.
Hello @PickleRick, Yes, this is a search over email logs, and it is giving me one result. I need that search to produce multiple rows, not a single multivalued one: as you can see in my snippet, the statistics show 1 rather than the 3131 that is actually in the data. LOGS: I need this 3131 split into multiple rows along with my other fields, as shown in the previous screenshot. When I do mvexpand Computer_name it comes out as 3131, but as soon as I apply the other fields it no longer shows the data.
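A common pattern for this situation (the second field name here is a placeholder — substitute your actual fields) is to zip the multivalue fields together before expanding, so that related values stay paired row by row instead of one field expanding while the others stay multivalued:

```spl
| eval pair=mvzip(Computer_name, Other_Field, "|||")
| mvexpand pair
| eval Computer_name=mvindex(split(pair, "|||"), 0),
       Other_Field=mvindex(split(pair, "|||"), 1)
| fields - pair
```

If you have more than two multivalue fields, you can nest mvzip calls (e.g. `mvzip(a, mvzip(b, c, "|||"), "|||")`) and pull the pieces apart with additional mvindex/split evals.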
Whereas the syntax problem that @PickleRick pointed out can be rectified by adding a pipe like this

index=index1 source="/somefile.log" uri="/path/with/id/some_id/"
| rex field=uri "/path/with/id/(?<some_id>[^/]+)/*"
| search
    [ search index=index2 source="/another.log" "condition-i-want-to-find"
    | rex field=_raw "some_id:(?<some_id>[^,]+),*"
    | dedup some_id
    | fields some_id ]

this method loses the efficiency advantage of the subsearch, because the filtering happens only after the main search has retrieved its events. To improve efficiency, renaming the field some_id to "search", as some have suggested, actually will help. (In part because / is a hard separator in Splunk.) You just need to add a format command:

index=index1 source="/somefile.log" uri="/path/with/id/some_id/"
    [ search index=index2 source="/another.log" "condition-i-want-to-find"
    | rex field=_raw "some_id:(?<search>[^,]+),*"
    | dedup search
    | fields search
    | format ]
| rex field=uri "/path/with/id/(?<some_id>[^/]+)/*"

Here is an emulation. Play with it and compare with your data.

index = _internal log/splunk
``` the above emulates index=index1 source="/somefile.log" uri="/path/with/id/some_id/" ```
    [ makeresults format=csv data="search
supervisor.log
splunkd_ui_access.log"
``` the above emulates [ search index=index2 source="/another.log" "condition-i-want-to-find" | rex field=_raw "some_id:(?<search>[^,]+),*" | dedup search | fields search ] ```
    | format ]
| rex field=series "log/splunk/(?<some_id>[^\"]+)"
``` emulates | rex field=uri "/path/with/id/(?<some_id>[^/]+)/*" ```
| stats count by some_id

On my laptop, it gives

some_id                count
splunkd_ui_access.log  59
supervisor.log         1045

As you can see, among all the logs, the output is limited to the two values in the subsearch.
Hello, I'm still new to Splunk. Recently I was testing the BrowsingHistoryView Add-on for Splunk. I was able to deploy it and push it to the Windows clients. However, it is not working properly: BrowsingHistoryView.exe does not run fully under the virtual Splunk account, but if I run the loader .bat script under my own account it works perfectly. Can anyone help with this? Thank you.
If I put a lower TTL value on the dispatch directory, would that be a good idea in this case?
Hi, I have a Splunk server that has tonnes of data in it. What we would like to do is have a dedicated search head that runs a search/lookup, then exports the data it finds to an S3 bucket for another system to ingest and analyze. I have looked at several add-ons, including Export Everything and S3 Uploader for Splunk, but neither of them has clear instructions and I am having issues. Are there any resources that clearly explain how to set up the connection to export search results from Splunk into an S3 bucket?
Did you find a solution to this? My problem is that it triggers on all SHC members, and when I assign a notable on one SH the assignment is not reflected on the other SHs.
In which panel and which value is negative? Anyway, you can open any panel in search and see where this value comes from. Most probably there is an initial rest call which returns wrong values but you have to double-check that. Did you restart splunkd on the server(s) where you added storage or did you just extend the filesystem on the fly?
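For example, if the panel happens to be built on the disk-partitions REST endpoint (an assumption — open the actual panel to confirm its search), you can run it standalone and inspect the raw numbers it returns:

```spl
| rest /services/server/status/partitions-space
| table splunk_server, mount_point, capacity, free
| eval pct_used=round((capacity-free)/capacity*100, 2)
```

A negative or nonsensical pct_used here would point to the REST endpoint reporting stale capacity values, which a splunkd restart after the storage change typically clears up.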
1. Check your _internal for possible messages regarding this source.
2. Are your sourcetypes properly defined, or are you mostly just relying on defaults? I suspect this data source hasn't been properly onboarded. Most importantly: do you have line merging disabled and a properly defined line breaker? (And do you have event breakers set properly?)
3. Did you verify that the rest of those events are really not ingested, or are they maybe just not indexed at the right time? The way to test it would be to run a real-time search (that's one of the very few cases where real-time searches make sense) narrowed down to this problematic source and see whether the data shows up and what timestamp is being parsed from it.
4. Thruput has nothing to do with it. It would only make your downstream pipe get clogged, but your data would finally trickle down to the indexer(s).
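To illustrate the onboarding points above, a minimal props.conf sketch for a single-line source (the stanza name and timestamp format are placeholders — adapt them to your actual sourcetype and data):

```ini
[my_custom_sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 25
```

For the event-breaker point, the corresponding universal forwarder settings would be EVENT_BREAKER_ENABLE = true and EVENT_BREAKER = ([\r\n]+) in the same stanza on the UF.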
It's up to your OS and/or Splunk admins to solve. For some reason the filesystem on which the dispatch directory is located is filled up to the brim. It might be just the dispatch data but if it's on the same filesystem as - for example - Splunk's internal logs and maybe OS logs and other stuff there could be other places you need to be looking for free space.
Here's some good information about the dispatch directory:
https://docs.splunk.com/Documentation/Splunk/9.3.1/Search/Dispatchdirectoryandsearchartifacts
Splunk normally does age things out, but read the doc above. Perhaps the disk is full for other reasons?
https://community.splunk.com/t5/Splunk-Search/Splunk-says-dispatch-directory-is-full-but-when-I-go-to-the/m-p/370243
One thing that can cause your dispatch directory to grow is adjusting the time to live (TTL) of jobs.
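On the TTL point, artifact lifetime is controlled in a couple of places. A sketch for reference (the stanza name is a placeholder and the values are illustrative defaults, not recommendations):

```ini
# limits.conf - lifetime of ad-hoc search artifacts, in seconds
[search]
ttl = 600

# savedsearches.conf - per scheduled search; 2p means twice the scheduled period
[my_scheduled_search]
dispatch.ttl = 2p
```

Raising these keeps artifacts around longer and grows dispatch; lowering them makes artifacts age out sooner.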
There is a Splunk-supported TA for McAfee ePO: https://splunkbase.splunk.com/app/5085 The log ingestion is via syslog (as far as I remember from a few years back, ePO exports events over a TLS-protected TCP stream). The rest you'll find in the docs; it's a Splunk-supported app, so it has relatively good docs.
1. This is not your whole event, since you're doing spath to get it.
2. Don't search for "*tanium*". Wildcards at the beginning of a search term will make Splunk read all raw events.
3. We don't know your data. How can we know why your results are "wrong"? Maybe some of your extractions don't work and you get nulls. Dedups or mvzips on them will yield null results.
4. There are two typical ways of debugging SPL searches. One is to start from the beginning and add commands until their results stop making sense. The other is to start from the end and remove commands until the results start making sense.
Hi, if an environment encounters the error 'Search not executed: The minimum free disk space (5000MB) reached for /opt/splunk/var/run/splunk/dispatch' multiple times (meaning the issue persists even after cleaning the dispatch directory), what corrective actions should be taken? Should the dispatch directory be cleaned regularly? This is for a standalone environment.
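For context, the 5000MB threshold in that message comes from server.conf and can be tuned, though lowering it only treats the symptom — freeing or adding disk is the real fix. A sketch (the value is an example, not a recommendation):

```ini
# server.conf - threshold below which Splunk refuses to run searches, in MB
[diskUsage]
minFreeSpace = 2000
```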
What do you mean by "split"? This is obviously not an event but a result of a search. So adjust your search to not merge all results into multivalue fields (which, by the way, give you no guarantee that "the same" row from each of those fields corresponds to the same event in the original data or whatever data you're summarizing it from).
This is what I have in "server.conf", in addition to what I have in "web.conf":

[httpServer]
disableDefaultPort = false
mgmtMode = tcp

After that, splunkd starts listening on TCP port 8089.
Please help me to extract multiple values from one single value.
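As a generic sketch (the field name and delimiter are assumptions — adjust to your data), split() turns a delimited single value into a multivalue field, and mvexpand turns that multivalue field into one row per value:

```spl
| eval parts=split(my_field, ",")
| mvexpand parts
| rename parts AS my_field_value
```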