All Posts

It could be a number of things as to why the data is not coming through or not showing.
1. Does whatever you are monitoring have read permissions?
2. Check for typos (index name etc.)
You can also check the internal logs for clues:
index=_internal sourcetype=splunkd host=neo log_level=INFO component=WatchedFile
| table host, _time, component, event_message, log_level
| sort - _time
What is the output of this command? It shows what is being monitored (assuming it's a Linux host):
/opt/splunk/bin/splunk list inputstatus
Are you able to show us your inputs.conf and describe what you are trying to monitor?
I made my configuration for inputs.conf to ingest data into Splunk but I am not getting data. While investigating whether there was an issue, I realised the configured source is not showing any data and I can't see the source path in the index in Splunk. Is there a reason why I am not seeing the source after configuring inputs.conf?
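For comparison, a minimal monitor stanza on the forwarder usually looks something like the sketch below; the path, index, and sourcetype here are placeholders rather than your actual settings, and the named index must already exist on the indexers:

# inputs.conf (example values only)
[monitor:///var/log/myapp/app.log]
index = myapp_index
sourcetype = myapp:log
disabled = false

If the monitored path or the index name does not match what you are searching for, the source will not show up in the index.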
That worked, here is the updated SPL using your concept.

| eval soar_uuid = id + "_RecordedFuture"
| append [search index=rf-alerts soar_uuid]
| eventstats count by soar_uuid
| where count < 2
| table soar_uuid, triggered, rule.name, title, classification, url, count
Splunk Works apps are unsupported.  They're created by Splunk employees contributing to the community in an unofficial capacity.  If the app is not updated it could be because the author has moved on to a different project or may have left Splunk.
Your output looks correct. Is it not what you expected? If not, what did you expect?
My output:
Inbound file processed successfully GL1025pcardBCAXX8595143691007
Inbound file processed successfully GL1025pcardBCAXX8595144691006
Inbound file processed successfully GL1025pcardBCAXX8732024191001
Inbound file processed successfully GL1025transBCAXX8277966711002
File put Succesfully GL1025pcardBCAXX8595143691007
File put Succesfully GL1025pcardBCAXX8595144691006
File put Succesfully GL1025pcardBCAXX8732024191001
File put Succesfully GL1025transBCAXX8277966711002

In the OR condition I mentioned both keywords because some of the message fields don't have "File put Succesfully". That is why I gave both strings in the mvdedup.
As I suggested, it might be your data, because the way you appear to be doing it should work. Can you identify values of field1 which should have joined but don't appear to have joined?

Also, bear in mind that sub-searches (as used by your inner search on the join) are limited to 50,000 events, so it could be that the missing inner events have fallen outside the 50k limit. Try reducing the timeframe for your search to see if there is a point at which you get the results you expect.
Network data can be notorious for sending large volumes of data - where possible, filter at source.

It's also worth thinking about how you're sending the network data to Splunk. The better syslog options are:
1. Splunk's free SC4S (containerised syslog under the hood)
2. Have a syslog server (rsyslog or syslog-ng), send the data there, then let a UF pick it up from there and send it to Splunk (see the sketch below).

Many people set up TCP/UDP ports on a HF or on the Splunk indexers; this can have various implications for large environments (not saying you can't do it), so it's not ideal for production, but it's OK for testing or small environments.
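As a rough sketch of option 2, assuming syslog-ng writes one directory per sending device under /var/log/syslog-ng/, the UF's inputs.conf might look something like this (the path, index, and sourcetype are placeholders):

# inputs.conf on the UF picking up files written by syslog-ng (example values only)
[monitor:///var/log/syslog-ng/*/network.log]
index = network
sourcetype = syslog
host_segment = 4
disabled = false

host_segment = 4 takes the originating device name from the fourth path segment (var / log / syslog-ng / <device>) rather than reporting everything under the syslog server's hostname.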
If the left side is a subset of the right side then the left side will be the result of a Left Join.
match uses regex, so the * at the end of each string is probably superfluous (unless you were matching for "File put Succesfull" or "File put Succesfullyyyyy"). Other than that, it looks like your mvfilter/mvdedup should work. Please can you share some example events for which this is not working?
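For reference, a sketch of the kind of expression being discussed; the field name messages is a placeholder, and the two literal strings are taken from the output shared above (including the "Succesfully" spelling that appears in the data):

| eval kept = mvfilter(match(messages, "Inbound file processed successfully") OR match(messages, "File put Succesfully"))
| eval kept = mvdedup(kept)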
Splunk is very security focused and has many apps and various options for data etc. A couple of tips:

When I first started, I had a play with this app; it's a great way to learn about security datamodels and dashboards. It's not the Splunk ES SIEM, but it's still very good to have a play with and learn from. https://splunkbase.splunk.com/app/4240

The other one to install and have a play with is Security Essentials - this provides a lot of security content and again is good to learn from (it is not a SIEM or monitoring app), and it's my go-to for security use cases. https://splunkbase.splunk.com/app/3435
Hello, thanks a lot for your answer. After a few tests, the same bug happens when we import one day of logs (500 MB) in the debug environment, so the problem seems to come from the Cisco logs themselves. We will try activating/deactivating the transformations in the props.conf file (I will start with the lookups) and I will keep the community up to date! Do not hesitate to suggest other actions we should take! Thanks for your help!
I'm expecting 0 results because the 1000 events in the left query are a subset of the 2000 events in the right query. But in reality I'm getting 1000 events when using a left join, which seems incorrect.
Assuming there is only one event for each soar_uuid in either of the two searches, i.e. it is unique in its search but possibly duplicated in the other search, you could do something like this

| localop
| rest .... ```first search key field```
| eval soar_uuid = id + "_RecordedFuture"
| append [search index=rf-alerts soar_uuid]
| eventstats count by soar_uuid
| table soar_uuid, triggered, rule.name, title, classification, url, count

count would then be 2 if it is duplicated in the appended search.
So, what are you saying is not working - are you getting no results when you were expecting some, or results when you were expecting none? Either way, it sounds like a problem with your data not being as you expected. Are the fields extracted as you expected? Do you have any "hidden" blank spaces which cause the join to give unexpected results?

You could try including the "inner" search in the main search and setting a field based on whether the event can be identified as coming from the inner search or not, e.g.

| eval inner=if(index="index1", "true", "false")
| stats values(inner) as innerouter by field1

If innerouter has both "true" and "false" then the value in field1 appeared in the inner search and at least one of the outer searches, etc.
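Putting that together, a rough sketch of the combined check might look like this; index1, index2, and field1 are placeholders standing in for the actual indexes and join key in your searches:

index=index1 OR index=index2
| eval inner = if(index="index1", "true", "false")
| stats values(inner) as innerouter by field1
| where mvcount(innerouter) < 2

Any field1 values left after the where clause appeared in only one of the two searches, which is one way to spot the values that failed to join.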
The Carbon Black Response app for SOAR doesn't allow you to quarantine/unquarantine if the device is offline. In the Carbon Black interface/API this is just a flag that is set, so if devices are offline it prevents them from re-connecting. This is the desired behaviour, but it seems from the Carbon Black Response app code that a check to see if the device is online has been added. Can this be removed?
@ITWhisperer "not expecting it to remove any events" means that I should be able to see either 1000 results after loading the query or no results (0 results), because the events are matching perfectly.
This didn't work...
If you need to preserve the original field then you aren't renaming. Use the eval command to create a new field based on the old one.

| eval NewIDs = case(OriginalIDs="P1D", "Popcorn",
                     OriginalIDs="B4D", "Banana",
                     OriginalIDs="O5D", "Opp",
                     1==1, OriginalIDs)
Splunk heavy forwarders (and indexers) send data to third-party services in syslog format only.  They do not (and cannot) listen to data on a search head (SHs do not have data). Place heavy forwarders in front of your indexers to route data both to the indexers and to ELK.  See https://docs.splunk.com/Documentation/Splunk/9.2.1/Forwarding/Routeandfilterdatad#Replicate_a_subset_of_data_to_a_third-party_system for more information. Be aware that this is a bit of a Science Project.  Splunk does not make it easy to switch to a competing product.
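Very roughly, the syslog-routing piece described on that page looks something like the sketch below on the heavy forwarder; the group name, target host and port, and sourcetype are placeholders, and the linked page also covers keeping the normal route to the indexers alongside it:

# outputs.conf - define the syslog output group (host and port are examples)
[syslog:elk_syslog]
server = elk.example.com:514

# props.conf - attach the routing transform to a (placeholder) sourcetype
[my_sourcetype]
TRANSFORMS-route_to_elk = route_to_elk

# transforms.conf - send matching events to the syslog group
[route_to_elk]
REGEX = .
DEST_KEY = _SYSLOG_ROUTING
FORMAT = elk_syslog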