All Posts


Your output looks correct. Is it not what you expected? If not, what did you expect?
My output:
Inbound file processed successfully GL1025pcardBCAXX8595143691007
Inbound file processed successfully GL1025pcardBCAXX8595144691006
Inbound file processed successfully GL1025pcardBCAXX8732024191001
Inbound file processed successfully GL1025transBCAXX8277966711002
File put Succesfully GL1025pcardBCAXX8595143691007
File put Succesfully GL1025pcardBCAXX8595144691006
File put Succesfully GL1025pcardBCAXX8732024191001
File put Succesfully GL1025transBCAXX8277966711002
I mentioned both keywords in the OR condition because some of the message fields don't have "File put Succesfully". That is why I gave both strings in the mvdedup.
As I suggested, it might be your data, because the way you appear to be doing it should work. Can you identify values of field1 which should have joined but don't appear to have joined? Also, bear in mind that sub-searches (as used by the inner search on your join) are limited to 50,000 events, so it could be that the missing inner events have fallen outside the 50k limit. Try reducing the timeframe for your search to see if there is a point at which you get the results you expect.
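To check whether the inner search is anywhere near that limit, you could run it on its own over the same timeframe - a minimal sketch, where index1 and the filters are placeholders for whatever your inner search actually uses:

index=index1 your_inner_filters
| stats count AS inner_events dc(field1) AS distinct_field1

If inner_events is at or above 50,000, the join is almost certainly dropping events.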
Network data can be notorious for sending large volumes of data - where possible, filter at source. It's also worth thinking about how you're sending the network data to Splunk.
The better syslog options are:
- Splunk's free SC4S (containerised syslog under the hood)
- A syslog server (rsyslog or syslog-ng): send the data there, then let a UF pick it up and send it to Splunk (see the inputs.conf sketch below).
Many people set up TCP/UDP ports on a HF or on Splunk indexers, and this can have various implications for large environments. Not saying you can't do this, but it's not ideal for production; for testing or small environments it's OK.
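If you go the syslog server + UF route, a minimal inputs.conf sketch on the UF might look like this (the file path, sourcetype and index are assumptions - adjust them to wherever your syslog daemon writes and however your indexes are set up):

# inputs.conf on the UF running on the syslog server
[monitor:///var/log/network/*/*.log]
sourcetype = syslog
index = network
host_segment = 4
disabled = false

host_segment = 4 picks the host name out of the directory path, which is a common pattern when rsyslog/syslog-ng writes one directory per sending device.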
If the left side is a subset of the right side then the left side will be the result of a Left Join.
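You can see this with a small self-contained sketch (purely synthetic data, just to illustrate the behaviour): the left side has ids 1-3, the right side has ids 1-6, and every left row comes back matched, so the result is simply the left side enriched from the right.

| makeresults count=3
| streamstats count AS id
| fields id
| join type=left id
    [| makeresults count=6
     | streamstats count AS id
     | eval from_right="matched"
     | fields id from_right]
| table id from_right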
match uses regex, so the * at the end of each string is probably superfluous (unless you were matching for "File put Succesfull" or "File put Succesfullyyyyy"). Other than that, it looks like your mvdedup/mvfilter should work. Please can you share some example events for which this is not working?
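As a quick self-contained illustration (the message values here are made up), match does a regex "contains" match without any trailing *:

| makeresults
| eval message=split("File put Succesfully FILE_A;Inbound file processed successfully FILE_B;Some other message", ";")
| eval Result=mvdedup(mvfilter(match(message, "File put Succesfully") OR match(message, "Inbound file processed successfully")))
| table message Result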
Splunk is very security focused and has many apps and various options for data etc. A couple of tips:
When I first started, I had a play with this app; it's a great way to learn about security data models and dashboards. It's not Splunk ES (the SIEM) but it's still very good to have a play with and learn from: https://splunkbase.splunk.com/app/4240
The other one to install and have a play with is Security Essentials - this provides so much security content, again good for learning. (This is not a SIEM or monitoring app.) It's my go-to for security use cases: https://splunkbase.splunk.com/app/3435
Hello, Thanks a lot for your answer. After a few tests, the same bug happens when we import one day of logs (500 MB) into the debug environment, so the problem seems to come from the Cisco logs themselves. We will try to activate / deactivate transformations in the props.conf file (I will start with the lookups) and I will keep the community up to date! Do not hesitate to suggest other actions we should take! Thanks for your help!
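For anyone following along, disabling automatic lookups and transforms for one sourcetype is usually just a matter of commenting out the relevant lines in props.conf - a minimal sketch, where the sourcetype and the class/lookup names are assumptions and will differ in your Cisco add-on:

[cisco:asa]
# LOOKUP-vendor_action = cisco_action_lookup vendor_action OUTPUT action
# TRANSFORMS-noise = cisco_drop_noise

Restart (or reload) the relevant Splunk instance after each change so you can tell which transformation is responsible.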
I'm expecting 0 results because the 1000 events in the left query are a subset of the 2000 events in the right query. But in reality I'm getting 1000 events when using a left join, which seems incorrect.
Assuming there is only one event for each soar_uuid in either of the two searches, i.e. it is unique in its own search but possibly duplicated in the other search, you could do something like this:
| localop
| rest .... ```first search key field```
| eval soar_uuid=id+"_RecordedFuture"
| append [search index=rf-alerts soar_uuid]
| eventstats count by soar_uuid
| table soar_uuid,triggered,rule.name,title,classification,url,count
count would then be 2 if it is duplicated in the appended search.
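If the goal is then to keep only the genuinely new records, one possible follow-on - assuming the appended index rows are the only ones without an id field from the REST call - would be:

| where count=1 AND isnotnull(id)

i.e. drop anything whose soar_uuid was seen twice, and drop the appended index rows themselves.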
So, what are you saying is not working - are you getting no results when you were expecting some, or results when you were expecting none? Either way, it sounds like a problem with your data not being as you expected. Are the fields extracted as you expected? Do you have any "hidden" blank spaces which cause the join to give unexpected results? You could try including the "inner" search in the main search and setting a field based on whether the event can be identified as coming from the inner search or not, e.g.
| eval inner=if(index="index1", "true", "false")
| stats values(inner) as innerouter by field1
If innerouter has both "true" and "false" then the value in field1 appeared in the inner search and in at least one of the outer searches, etc.
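Putting that together as a minimal sketch (index1/index2 and field1 are placeholders for whichever indexes and key field your two searches actually use, with index1 standing in for the inner search):

(index=index1 OR index=index2) your_common_filters
| eval inner=if(index="index1", "true", "false")
| stats values(inner) AS innerouter BY field1
| where mvcount(innerouter) < 2

Anything left after the where is a field1 value that only appears on one side, which is exactly where a join would behave unexpectedly.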
The Carbon Black Response app for SOAR doesn't allow you to quarantine/unquarantine if the device is offline. In the Carbon Black interface/API this is just a flag that is set, so if devices are offline it prevents them from re-connecting. This is the desired behaviour, but it seems from the Carbon Black Response app code that a check to see if the device is online has been added. Can this be removed?
@ITWhisperer By "not expect it to remove any events" I mean that I should be able to see 1000 results after running the query, or no results (0 results), because the events are matching perfectly.
This didn't work...
If you need to preserve the original field then you aren't renaming. Use the eval function to create a new field based on the old one.
| eval NewIDs = case(OriginalIDs="P1D", "Popcorn",
                     OriginalIDs="B4D", "Banana",
                     OriginalIDs="O5D", "Opp",
                     1==1, OriginalIDs)
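To see the resulting side-by-side columns with some throwaway data (the makeresults/split part is just scaffolding to fake three events):

| makeresults
| eval OriginalIDs=split("P1D,B4D,O5D", ",")
| mvexpand OriginalIDs
| eval NewIDs = case(OriginalIDs="P1D", "Popcorn", OriginalIDs="B4D", "Banana", OriginalIDs="O5D", "Opp", 1==1, OriginalIDs)
| table OriginalIDs NewIDs

which gives both the original IDs and their aliases in a single list view.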
Splunk heavy forwarders (and indexers) send data to third-party services in syslog format only.  They do not (and cannot) listen to data on a search head (SHs do not have data). Place heavy forwarders in front of your indexers to route data both to the indexers and to ELK.  See https://docs.splunk.com/Documentation/Splunk/9.2.1/Forwarding/Routeandfilterdatad#Replicate_a_subset_of_data_to_a_third-party_system for more information. Be aware that this is a bit of a Science Project.  Splunk does not make it easy to switch to a competing product.
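A minimal sketch of that routing setup, assuming the heavy forwarder is where parsing happens and that the host names, ports and group/stanza names here are all placeholders for your real indexers and ELK syslog listener:

# outputs.conf on the heavy forwarder
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997

[syslog:elk_out]
server = elk.example.com:514

# props.conf - apply the routing transform broadly (narrow to specific sourcetypes in practice)
[default]
TRANSFORMS-route_copy_to_elk = route_copy_to_elk

# transforms.conf
[route_copy_to_elk]
REGEX = .
DEST_KEY = _SYSLOG_ROUTING
FORMAT = elk_out

Because _SYSLOG_ROUTING adds a syslog destination rather than replacing the tcpout routing, the data should still reach the indexers as well as ELK - that is the "replicate a subset to a third-party system" pattern the linked docs describe.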
There are many factors that could cause performance issues in your prod environment that weren't present in the dev environment; production normally has more data and many other variables that could cause issues. Splunk is a workhorse - it needs CPU/memory/disk resources and other factors to be in place.
Things to consider:
Has the environment been sized appropriately for production?
Is the storage on a fast type of disk (SSD etc.)?
Are there lots of users running the same search, over All Time, at the same time?
Do you have indexer clustering or is it a Splunk all-in-one?
The add-ons (TAs) normally provide parsing and other knowledge objects, and these could potentially impact the environment, regex processing being one example. Splunk apps, on the other hand, have searches and dashboards that could have an impact via long-running searches. But normally it's down to the Splunk sizing or something in the environment; I don't recall a TA ever causing performance issues in a prod environment, though I guess it could happen.
I suggest:
Use the Monitoring Console for the production environment; this is a good place to start checking performance issues.
Check CPU/memory on the SH and indexers first.
Check the searches run and search memory usage using the MC (see the _audit sketch below for one way to pull search runtimes).
If you remove the TA does it improve, and if you re-install it does it get bad again?
If that all fails then perhaps look at logging a support call.
Monitoring Console: https://docs.splunk.com/Documentation/Splunk/9.2.1/DMC/DMCoverview
Splunk sizing guide: https://lantern.splunk.com/Splunk_Platform/Product_Tips/Administration/Sizing_your_Splunk_architecture
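For example, one way (just a sketch - adjust the timeframe and grouping to taste) to see which users' searches are consuming the most runtime is to query the audit index:

index=_audit sourcetype=audittrail action=search info=completed
| stats count AS searches sum(total_run_time) AS total_runtime avg(total_run_time) AS avg_runtime BY user
| sort - total_runtime

If one user or one scheduled search dominates total_runtime, that is usually a better lead than the TA itself.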
I would like to rename the field values that exist in one column and add them into their own separate column, while keeping the original column (with the values before they were renamed) to show how they map to the new values in the new column. The idea is: if I have a list of IDs (original) that I want to map to different names in a separate column that represent those original IDs (basically aliases), but I want to keep both of the columns in a list view, how would I go about doing that? Example display:
Original IDs   NewIDs
P1D            Popcorn
B4D            Banana
O5D            Opp
Hi All, I have a message field containing multiple success messages. I am using stats values(message) as message, and I want to show just one of the success messages in the output. For that I used the below query to restrict the other message values using mvdedup, but it is not filtering.
| eval Result=mvdedup(mvfilter(match(message, "File put Succesfully*")
    OR match(message, "Successfully created file data*")
    OR match(message, "Archive file processed successfully*")
    OR match(message, "Summary of all Batch*")
    OR match(message, "processed successfully for file name*")
    OR match(message, "ISG successful Call*")
    OR match(message, "Inbound file processed successfully")
    OR match(message, "ISG successful Call*")))
I have created a search that contains a field that is unique. I am using this search to populate the index. However, for some reason, when I try to check whether the record is already in the index it doesn't work for me. The closest I have come is this:
| localop
| rest .... ```first search key field```
| eval soar_uuid=id+"_RecordedFuture"
| append [search index=rf-alerts soar_uuid | rename soar_uuid as ExistingKey]
| table soar_uuid,triggered,rule.name,title,classification,url,ExistingKey
The above returns a list of new records with a blank ExistingKey field, and the matching keys for soar_uuid of existing records with a blank soar_uuid field. If I could just populate either field with the other, then I could remove all the duplicates. I want to remove the new records that match the existing records before writing the events to the index. appendsearch instead of append doesn't seem to return the existing records.
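A minimal sketch of that "populate either with the other" idea - assuming the field names above, where the new rows carry soar_uuid and the appended rows carry ExistingKey - would be to coalesce the two keys and drop anything seen more than once:

| eval key=coalesce(soar_uuid, ExistingKey)
| eventstats count BY key
| where count=1 AND isnotnull(soar_uuid)

which, if it works as hoped, would leave only the new records whose key is not already present in index=rf-alerts.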