All Posts

Hi Harisha, There is an add-on on Splunkbase for this: CrowdStrike OAuth API | Splunkbase. This SOAR add-on allows you to download the reports. It might already be installed on your SOAR instance, so feel free to check. The first thing you will need to do is configure an asset with the correct API credentials within the CrowdStrike app. Once you have the app configured, you can implement actions within a playbook to do whatever you need, as in the sketch below. If you have any specific questions along the way then feel free to ask away!
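A minimal classic-playbook sketch of that flow, assuming an asset named "crowdstrike" is configured; the action name "get report" and the report id are illustrative, so verify the exact action and parameter names in the app's documentation:

import phantom.rules as phantom

def on_start(container):
    # Hypothetical parameters: in practice you would collect the report id
    # from an earlier detonation action's results or from an artifact
    params = [{"id": "example-report-id"}]
    # "get report" is an illustrative action name; verify the exact action
    # exposed by the CrowdStrike OAuth API app on your instance
    phantom.act("get report", parameters=params, assets=["crowdstrike"],
                callback=report_downloaded, name="download_sandbox_report")

def report_downloaded(action=None, success=None, container=None, results=None,
                      handle=None, **kwargs):
    if not success:
        phantom.error("Sandbox report download failed")
        return
    # results holds the action output, including the report data
    phantom.debug(results)

def on_finish(container, summary):
    return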
Hello, With these sorts of issues it's best to work your way down to eliminate the possible causes. Take an exemplar broken search from the dashboard and try to run it manually:

eventtype=msad-successful-user-logons

If that doesn't work, try to run the definition manually:

eventtype=wineventlog_index_windows eventtype=wineventlog_security EventCode=4624 user!="*$"

If that works, make sure the msad-successful-user-logons definition is correct and shared properly. If not, try expanding your index eventtype:

(index=msad OR index=main) eventtype=wineventlog_security EventCode=4624 user!="*$"

If that works, make sure your definition is correct and shared properly. If not, try expanding the wineventlog_security eventtype:

(index=msad OR index=main) (source=WinEventLog:Security OR source=WMI:WinEventLog:Security OR source=XmlWinEventLog:Security) EventCode=4624 user!="*$"

If that works, make sure Splunk_TA_windows is installed and the wineventlog_security eventtype is working. If that doesn't work, then your problem is not with the eventtype definitions but rather with the data itself. Things to check:

- Do you have Splunk_TA_windows installed on your indexers/search heads?
- Are the sources renamed correctly per the Splunk_TA_windows ta-windows-fix-xml-source definition and the requirements of the wineventlog_security eventtype?
- Are your indexes correct and populated within the search timeframe?

Finally, if you still can't get results, try stripping the key/value filters from the search to check whether the base search returns anything:

(index=msad OR index=main) (source=WinEventLog:Security OR source=WMI:WinEventLog:Security OR source=XmlWinEventLog:Security)

If you get results, the problem is with the field extractions (EventCode=4624 user!="*$"): check that Splunk_TA_windows is working as expected and that your inputs, props and transforms are all aligned (the btool sketch below can help verify this). Good luck!
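One way to verify that last point is btool, which prints the merged configuration and which file each setting comes from. A small sketch, assuming your Windows sourcetype is XmlWinEventLog (substitute your own):

$SPLUNK_HOME/bin/splunk btool props list XmlWinEventLog --debug
$SPLUNK_HOME/bin/splunk btool eventtypes list wineventlog_security --debug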
So, all the information you need for a "transaction" is in one event? Why are you using the transaction command? What do the other events look like? 
Hi Team, Could you please help me with the logic to download the CrowdStrike sandbox analysis report using Splunk SOAR? Thanks in advance. Regards, Harisha
Then KV_MODE must be defined on the search-head.
No. The first step to answering your question is understanding what a datamodel is. It is a middle layer abstracting the actual data structure from your search. This way, if you want to do a search across your network devices, you don't have to know the specific technical details about the sources or even which indexes the details are stored in (the CIM configuration takes care of that). You're just doing a search on a datamodel. For example:

| tstats sum(All_Traffic.bytes) from datamodel=Network_Traffic where All_Traffic.src_ip=172.16.* by All_Traffic.src_ip

will give you the amount of traffic per source IP from a specific network. It doesn't care where the actual data comes from - this is the beauty of the datamodel. As long as your sources are properly onboarded and CIM-compliant, it doesn't matter if the data comes from Juniper, Palo Alto, Cisco or Fortigate. The datamodel abstracts this from your search.

But in order for this to work properly, as I mentioned, you must have properly onboarded data - you must have the proper add-ons making sure the data is normalized and provides the standardized fields (even if the fields are named differently in the original event). This is done by means of field aliases and calculated fields, as in the sketch below.

So you don't typically use the term "CIM compliance" when talking about searching or dashboards. When searching you're not "compliant"; you simply use the datamodel. It's the underlying data that must be CIM-compliant so that your searches against datamodels work properly.
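A minimal illustration of that normalization, assuming a hypothetical sourcetype vendor:firewall whose events carry a field named ip that the datamodel expects as src_ip, plus two vendor-specific byte counters:

# props.conf (sourcetype and field names are hypothetical)
[vendor:firewall]
FIELDALIAS-cim_src = ip AS src_ip
EVAL-bytes = bytes_in + bytes_out

The alias makes src_ip available at search time without touching the indexed event, and the calculated field derives the CIM bytes field from the vendor's counters.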
@PickleRick no, the installation architecture is a distributed, non-clustered deployment, and I do not use a HF.
No. Inputs is one thing; props for the sourcetype is another. Where to put it depends on your installation architecture. I strongly suspect you have an all-in-one installation, so unless you're using a HF to ingest this data, it should be enough to add a KV_MODE parameter with a value of xml to your sourcetype definition (see the sketch below).
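A minimal props.conf sketch of that change, with my_xml_sourcetype as a placeholder for your actual sourcetype name:

# props.conf on the search head (or on the all-in-one instance)
[my_xml_sourcetype]
KV_MODE = xml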
Hello Splunkers, I have a clustered Splunk 9.2.1 on-prem. I have pushed an app from the CM to the search head cluster and am trying to configure a data input through the search head (the option is not available from the CM). Whenever I add a data input I always face this error: "Current instance is running in SHC mode and is not able to add new inputs". How can I fix this?
Not sure I understand when you say "1. Use EventTypes to apply the tags to the events so they end up in the correct data model. E.g. tag "network" and "communicate" to put it in the NetworkTraffic data model." Imagine that my search is index=main uri="*.php*" OR uri="*.py*" - do you mean that I have to onboard this with a tag called "network"? And if I have a field called "ip" in my apps, does it mean I have to tag it as "dest_ip" following the Network Traffic datamodel?
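For reference, a minimal sketch of what the quoted advice could look like in configuration; the eventtype name here is hypothetical:

# eventtypes.conf
[php_py_web_traffic]
search = index=main uri="*.php*" OR uri="*.py*"

# tags.conf
[eventtype=php_py_web_traffic]
network = enabled
communicate = enabled

Field names such as dest_ip, on the other hand, come from field aliases rather than tags, e.g. FIELDALIAS-dest = ip AS dest_ip in props.conf.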
Please check the code I have shared, as requested. It's the same for the Recovery search as well.
So, all the information you need for a "transaction" is in one event? Why are you using the transaction command? What do the other events look like? Again, it would be useful if you could share them in a code block </> like this.
@PickleRick I am using a single standalone machine and the data comes in through a network directory. That network directory receives files, which I then monitor into Splunk using inputs.conf.
index=imdc_nagios_hadoop sourcetype=icinga host=* "Load_per_CPU_core" "PROBLEM"
| fields host
| transaction host startswith="To:"
| search "To: <Mail-Address>"
| rex field=_raw "Host:(?<src_host_1>.*) - Service:(?<Service_1>.*) State:(?<State_1>.*)"
| rex field=_raw "Subject: (?<Subject>.*)"
| rex field=Subject "PROBLEM - (?<src_host_2>.*) - (?<Service_2>.*) is (?<State_2>.*)"
| rex field=_raw "(?<Additional_Info>.*)\nTo:"
| eval Service=if(isnull(Service_1),Service_2,Service_1), src_host=if(isnull(src_host_1),src_host_2,src_host_1), State=if(isnull(State_1),State_2,State_1)
| fields host, Service, src_host, State, Subject, Additional_Info
| lookup hostdata_lookup.csv host as src_host
| table src_host, Service, State, _time, cluster, isvm
| rename _time as Start_time
| search isvm=N AND cluster=*EDGE*
| eval Start_time=strftime(Start_time, "%m/%d/%Y - %H:%M:%S")
| sort Start_time

For security reasons, the mail address has been removed.
Please share some anonymised representative events so we can better understand what you are dealing with. Please use a code block </> so that they can be used to simulate your situation.
@PickleRick could you share some documentation on this? Or is it enough to add this to the sourcetype configuration in the inputs configuration file, if I'm not mistaken?
The storage will be on a Dell EMC array, and considering the Splunk recommendations and the SAN characteristics, it should work smoothly on paper. I will test with a few indexes and check. Thanks a lot for your help!
Hi guys, forgive my English level first; it is not my native language. I have a distributed search deployment which consists of an indexer instance and a search head instance. Their host specifications are as follows:

indexer - CPU: E5-2682 v4 @ 2.50GHz / 16 cores; Memory: 32GB; Disk: 1.8TB (5000 IOPS)
search head - CPU: E5-2680 v3 @ 2.50GHz / 16 cores; Memory: 32GB; Disk: 200GB (3400 IOPS)

I have 170GB of raw logs ingested into the Splunk indexer every day across 5 indexes, one of which is 1.3TB in size. Its name is tomcat and it stores the logs of the backend application; the index is now full. When I search for events in this index, the search speed is very slow. My search is:

index=tomcat uri="/xxx/xxx/xxx/xxx/xxx" "xxxx"

I'm very sorry that I use xxx to represent certain words because of privacy concerns around the API interface. I am searching for events from 7 days ago, and no results were returned for a long time. I even tried searching the logs for a specific day, but the search speed is still not ideal. If I wait about 5 minutes, I gradually see some events appear on the page.

I checked the job inspector and found that the execution cost of command.search.index, dispatch.finalizeRemoteTimeline, and dispatch.fetch.rcp.phase_0 is high, but that doesn't help me much. I tried bypassing the search head and performing the search on the indexer's web UI, but it was still slow. Does this mean there is no bottleneck in the search head? During the search I observed the host monitoring metrics (screenshot omitted); it seems the indexer server resources are not completely exhausted.

So I tried restarting the indexer's splunkd service and, unexpectedly, the search speed seemed to improve. With the same search query and time range, the events were gradually returned, although the speed still did not seem particularly fast. Just as I was celebrating that I had solved the problem, my colleague told me the next day that the search speed was unsatisfactory again, although results were still gradually returned during the search. So this is not a real solution; it only gives temporary relief.

How do you think I should solve the problem of slow search speed? Should I scale out the indexers horizontally and create an indexer cluster?
I need to compare the Host with Start_time (Icinga Problem) and End_time (Icinga Recovery); if the alert has been recovered within the SLA (i.e., 15 minutes), take action, or else do nothing. Any help is appreciated.
@richgalloway I am trying to extract them using regex. I select the event, choose Action, then Extract Fields, and select regular expression as the extraction method.