All Posts


Hi, if you are facing a specific error, please post it here. Otherwise, if you just need general guidance, I would start with the documentation: Create a new playbook in Splunk SOAR (Cloud) - Splunk Documentation
Hi, this is by design. The problem with running modular inputs on the SHC layer is that if all of the nodes in the cluster attempt to run the input, you get duplicated data and all sorts of problems. Splunk seems to be actively developing a solution for this but does not officially support one at the time of writing. That being said, a handful of apps do have official support (e.g. Splunk DB Connect). These seem to rely on the run_only_one directive in inputs.conf to ensure they only run on the captain node, preventing duplication. Unless your TA has official support for deployment on a SHC, I would recommend using a separate, dedicated instance for input collection, such as a Heavy Forwarder.
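For illustration, such a stanza might look like the sketch below; run_only_one is only honored by apps built to support it, so treat the input name and interval as placeholders:

# inputs.conf (placeholder input name; only apps with official
# SHC support honor run_only_one)
[my_modular_input://example]
interval = 300
run_only_one = true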
Hi, 5 columns and 79 rows.
Below is the search query for Icinga Problem events. Below is the search query for Icinga Recovery events. If you want me to get rid of the transaction command, that's fine. I would like to group multiple events into a single meta-event that represents a single physical event.
Hello, I want to extract an IP field from a log but I get an error. This is part of my log: ",\"SourceIp\":\"10.10.6.0\",\"N I want 10.10.6.0 as a field. Can you help me?
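A minimal sketch of one way to pull that address out with rex, assuming the escaped quotes shown above are literally present in the raw event (the pattern simply skips past them):

| rex "SourceIp\W+(?<SourceIp>\d{1,3}(?:\.\d{1,3}){3})"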
Hello, I am building an app using the Splunk Add-on Builder. Can I use the helper.new_event method to send a metric to a metrics index? If yes, what should be the format of the "event"? Kind regards,
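For what it's worth, a sketch of what that might look like in the Add-on Builder's generated modular-input code; this assumes the target is a metrics index with a log-to-metrics transform (or equivalent schema) in place, and the metric name, index, and sourcetype below are all made up:

import json
import time

# Hypothetical metric-shaped payload; field names are illustrative.
payload = json.dumps({
    "metric_name": "my_app.cpu_percent",
    "_value": 42.0,
})

# helper and ew are provided by the Add-on Builder framework.
event = helper.new_event(
    data=payload,
    time=time.time(),
    index="my_metrics_index",      # hypothetical metrics index
    sourcetype="my_app:metrics",   # hypothetical sourcetype
)
ew.write_event(event)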
@Tom_Lundie Thanks for the response. We have already configured this in Splunk SOAR, but I am not able to download as CSV, JSON, PCAP, or STIX. The requirement is to get all results (including screenshots) as PDF. Please let me know if you have any suggestions on this.
How can I display the summed data of 2 fields for the same date last month (example: 24 June and 24 May)? I have tried the query below and I get the data, but how can I present it in a readable manner?

index=gc source=apps
| eval AMT=if(IND="DR", BASE_AMT*-1, BASE_AMT)
| eval GLBL1=if(FCR="DR", GLBL*-1, GLBL)
| eval DATE="20".substr(REC_DATE,1,2).substr(REC_DATE,3,2).substr(REC_DATE,5,2)
| eval current_pdate_4=strftime(relative_time(now(), "-30d@d"),"%Y%m%d")
| where DATE = current_pdate_4
| stats sum(AMT) as w4AMT, sum(GLBL1) as w4FEE_AMT by DATE id
| append
    [search index=gc source=apps
    | eval AMT=if(IND="DR", BASE_AMT*-1, BASE_AMT)
    | eval GLBL1=if(FCR="DR", GLBL*-1, GLBL)
    | eval DATE="20".substr(REC_DATE,1,2).substr(REC_DATE,3,2).substr(REC_DATE,5,2)
    | eval current_pdate_3=strftime(relative_time(now(), "-@d"),"%Y%m%d")
    | where DATE = current_pdate_3
    | stats sum(AMT) as w3AMT, sum(GLBL1) as w3FEE_AMT by DATE id]
| table DATE, id, w3AMT, w4AMT, w4FEE_AMT, w3FEE_AMT
| rename DATE as currentDATE, w3AMT as currentdata, w3FEE_AMT as currentamt, w4AMT as lastmonthdate, w4FEE_AMT as lastmonthdateamt

Desired output:

currentDATE  id  currentdata  lastmonthdate  currentamt  lastmonthdateamt
20240723     2   2323         2123           23          24
20240723     3   2423         2123           23          24
20240723     4   2223         2123           23          24
20240723     5   2323         2123           23          24
20240723     6   2329         2123           23          24
20240723     7   2323         2123           23          24
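If it helps, one possible way to lay the two periods side by side is sketched below, reusing the field names from the query above (the period labels are illustrative):

index=gc source=apps
| eval AMT=if(IND="DR", BASE_AMT*-1, BASE_AMT)
| eval GLBL1=if(FCR="DR", GLBL*-1, GLBL)
| eval DATE="20".substr(REC_DATE,1,2).substr(REC_DATE,3,2).substr(REC_DATE,5,2)
| eval period=case(DATE=strftime(relative_time(now(), "-@d"),"%Y%m%d"), "current", DATE=strftime(relative_time(now(), "-30d@d"),"%Y%m%d"), "last_month")
| where isnotnull(period)
| chart sum(AMT) as AMT, sum(GLBL1) as FEE_AMT over id by period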
Hi Harisha, there is an add-on on Splunkbase for this: CrowdStrike OAuth API | Splunkbase. This SOAR add-on allows you to download the reports. It might already be installed on your SOAR instance, so feel free to check. The first thing you will need to do is configure an asset with the correct API credentials within the CrowdStrike app. Once you have the app configured, you can implement actions within a playbook to do whatever you need. If you have any specific questions along the way, feel free to ask away!
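As a rough illustration, invoking an action from a classic (Python) playbook looks like the sketch below; the action name and parameters are placeholders, since the real ones depend on the CrowdStrike OAuth API app version (check the app's documentation for its action list):

import phantom.rules as phantom

def on_start(container):
    # Placeholder action and parameter names - look up the real ones
    # in the CrowdStrike OAuth API app's action documentation.
    params = [{"id": "example-report-id"}]
    phantom.act("get report", parameters=params,
                assets=["crowdstrike_oauth_api"], name="get_report")
    return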
Hello, with these sorts of issues it's best to work your way down to eliminate the possible causes.

1. Take an exemplar broken search from the dashboard and try to run it manually:

eventtype=msad-successful-user-logons

2. If that doesn't work, try to run the definition manually:

eventtype=wineventlog_index_windows eventtype=wineventlog_security EventCode=4624 user!="*$"

If that works, make sure the msad-successful-user-logons definition is correct and shared properly.

3. If not, try expanding your index eventtype:

(index=msad OR index=main) eventtype=wineventlog_security EventCode=4624 user!="*$"

If that works, make sure your definition is correct and shared properly.

4. If not, try expanding the wineventlog_security eventtype:

(index=msad OR index=main) (source=WinEventLog:Security OR source=WMI:WinEventLog:Security OR source=XmlWinEventLog:Security) EventCode=4624 user!="*$"

If that works, make sure Splunk_TA_windows is installed and the wineventlog_security eventtype is working.

5. If that doesn't work, your problem is not with the eventtype definitions but with the data itself. Things to check:

- Do you have Splunk_TA_windows installed on your indexers/search heads?
- Are the sources renamed correctly as per the Splunk_TA_windows ta-windows-fix-xml-source definition and the requirements of the wineventlog_security eventtype?
- Are your indexes correct and populated within the search timeframe?

6. Finally, if you still can't get results, try stripping the key-value filters from the search to check whether the base search is working:

(index=msad OR index=main) (source=WinEventLog:Security OR source=WMI:WinEventLog:Security OR source=XmlWinEventLog:Security)

If you get results, the problem is with the field extractions (EventCode=4624 user!="*$"): check that Splunk_TA_windows is working as expected and that your inputs, props, and transforms are all aligned. Good luck!
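As a side note, btool is handy at each of these steps for confirming which definition actually wins on disk, e.g.:

$SPLUNK_HOME/bin/splunk btool eventtypes list msad-successful-user-logons --debug
$SPLUNK_HOME/bin/splunk btool props list --debug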
So, all the information you need for a "transaction" is in one event? Why are you using the transaction command? What do the other events look like? 
Hi Team, could you please help me with the logic to download the CrowdStrike sandbox analysis report using Splunk SOAR? Thanks in advance. Regards, Harisha
Then KV_MODE must be defined on the search-head.
No. The first step to answering your question is understanding what a datamodel is. It is a middle layer abstracting the actual data structure from your search. This way, if you want to do a search across your network devices, you don't have to know specific technical details about the sources or even which indexes the details are stored in (the CIM configuration takes care of that). You're just doing a search on a datamodel. For example:

| tstats sum(All_Traffic.bytes) from datamodel=Network_Traffic where All_Traffic.src_ip=172.16.* by All_Traffic.src_ip

will give you the amount of traffic per source IP from a specific network. It doesn't care where the actual data comes from - this is the beauty of the datamodel. As long as your sources are properly onboarded and CIM-compliant, it doesn't matter if the data comes from Juniper, Palo Alto, Cisco or Fortigate. The datamodel abstracts this from your search. But in order for this to work properly, as I mentioned, you must have properly onboarded data - you must have proper add-ons making sure the data is properly normalized and provides standardized fields (even if the fields are named differently in the original event). This is done by means of field aliases and calculated fields. So you don't typically use the term "CIM-compliance" when talking about searching/dashboards. When searching, you're not "compliant". You simply use the datamodel. It's the underlying data that must be CIM-compliant so your searches against datamodels work properly.
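To make that last point concrete, the kind of normalization an add-on ships looks roughly like this; the stanza and original field names below are made up:

# props.conf (illustrative only)
[vendor:firewall:traffic]
FIELDALIAS-cim_src = source_address AS src_ip
EVAL-bytes = bytes_in + bytes_out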
@PickleRick no, the installation architecture is a distributed, non-clustered deployment, and I do not use a HF.
No. Inputs are one thing; props for the sourcetype are another. Where to put it depends on your installation architecture. I strongly suspect you have an all-in-one installation, so unless you're using a HF to ingest this data, it should be enough to add a KV_MODE parameter with a value of xml to your sourcetype definition.
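For example (the sourcetype name below is just a placeholder):

# props.conf on the search head
[my:xml:sourcetype]
KV_MODE = xml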
Hello Splunkers, I have clustered Splunk 9.2.1 on-prem. I have pushed an app from the CM to the search head cluster and am trying to configure a data input through the search head (the option is not available from the CM). Whenever I add a data input I always face this error: "Current instance is running in SHC mode and is not able to add new inputs". How can I fix this?
Not sure I understand when you say "1. Use EventTypes to apply the tags to the events so they end up in the correct data model. E.g. tag "network" and "communicate" to put it in the NetworkTraffic data model." Imagine that my search is index=main uri="*.php*" OR uri="*.py*". Do you mean that I have to onboard this under a tag called "network"? And if I have a field called "ip" in my apps, does it mean I have to tag it as "dest_ip" following the Network Traffic datamodel?
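For reference, the tagging mechanics being discussed look roughly like this; the eventtype name below is made up:

# eventtypes.conf (illustrative eventtype name)
[my_web_traffic]
search = index=main (uri="*.php*" OR uri="*.py*")

# tags.conf
[eventtype=my_web_traffic]
network = enabled
communicate = enabled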
Please check the code I have shared, as requested. It's the same for the Recovery search as well.
So, all the information you need for a "transaction" is in one event? Why are you using the transaction command? What do the other events look like?  Again, it would be useful if you could share them in a code block </> like this