All Posts


Hi @jagan_jijo

Please could you provide a little more information on your use cases here and what kind of data you are looking to extract from Splunk?

You can download data using the search REST API - check out the following page on how to execute searches using the REST API: https://docs.splunk.com/Documentation/Splunk/9.4.2/RESTTUT/RESTsearches

Regarding pulling data on specific incidents, are you using IT Service Intelligence (ITSI) or Enterprise Security (ES), which has your incidents collated? There are specific endpoints for these premium apps to provide things like incidents/notable events etc. depending on your use case.

Regarding webhooks, the native webhook sending is quite limited (see https://docs.splunk.com/Documentation/Splunk/9.4.0/Alert/Webhooks) - I'd usually recommend looking at Better Webhooks on SplunkBase. Is there a particular problem you're having with that app?

Did this answer help you? If so, please consider: adding karma to show it was useful, marking it as the solution if it resolved your issue, or commenting if you need any clarification. Your feedback encourages the volunteers in this community to continue contributing.
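For reference, here is a minimal Python sketch of the REST search flow mentioned above: create a search job, poll it, then fetch the results as JSON. The hostname, port, token, and search string are placeholder assumptions (not values from this thread), and requests is a third-party library you would need installed.

#!/usr/bin/env python
# Minimal sketch: run a Splunk search over the REST API and download the results.
# Assumptions: splunkd reachable on port 8089, a bearer token with search rights,
# and an example search against index=main -- adjust for your environment.
import time
import requests

BASE = "https://splunk.example.com:8089"       # hypothetical search head
HEADERS = {"Authorization": "Bearer <token>"}  # replace with a real token

# 1. Create the search job
resp = requests.post(
    f"{BASE}/services/search/jobs",
    headers=HEADERS,
    data={"search": "search index=main earliest=-24h | stats count by sourcetype",
          "output_mode": "json"},
    verify=False,  # only while testing without a trusted certificate
)
sid = resp.json()["sid"]

# 2. Poll until the job is done
while True:
    status = requests.get(f"{BASE}/services/search/jobs/{sid}",
                          headers=HEADERS,
                          params={"output_mode": "json"},
                          verify=False).json()
    if status["entry"][0]["content"]["isDone"]:
        break
    time.sleep(2)

# 3. Fetch all results
results = requests.get(f"{BASE}/services/search/jobs/{sid}/results",
                       headers=HEADERS,
                       params={"output_mode": "json", "count": 0},
                       verify=False).json()
for row in results["results"]:
    print(row)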
Greetings,

I have been reading through documentation and responses on here about filtering out specific events at the heavy forwarder (trying to reduce our daily ingest). In the local folder for our Splunk_TA_juniper app I have created a props.conf and a transforms.conf and set owner/permissions to match other .conf files.

props.conf:

# Filter teardown events from Juniper syslogs into the nullqueue
[juniper:junos:firewall:structured]
TRANSFORMS-null = setnull

transforms.conf:

# Filter juniper teardown logs to nullqueue
[setnull]
REGEX = RT_FLOW_SESSION_CLOSE
DEST_KEY = queue
FORMAT = nullQueue

I restarted the Splunk service... but I'm still getting these events. Not sure what I did wrong. I pulled some raw event text and tested the regex in PowerShell (it worked with -match). Any help would be greatly appreciated!
Hi @Harikiranjammul

You could use the following SPL to achieve this:

| makeresults
| eval ip="0.0.0.11,0.0.0.12"
| makemv ip delim=","
| mvexpand ip
``` End of sample data ```
| lookup your_lookup_file ipaddress as ip OUTPUTNEW ipaddress as found_ip
| eval ip=if(isnull(found_ip), "0.0.0.0", ip)

Did this answer help you? If so, please consider: adding karma to show it was useful, marking it as the solution if it resolved your issue, or commenting if you need any clarification. Your feedback encourages the volunteers in this community to continue contributing.
Hello All,

I have a question which I am not able to find an answer for, hence I'm looking for ideas and suggestions from fellow community members.

We use Splunk Enterprise Security in our organization, and I am trying to build a correlation search that generates a finding (or intermediate finding) in Mission Control based on Microsoft Defender incidents. As you probably know, a Microsoft Defender incident is a combination of different alerts and can include multiple entities. I have a search which gives me all the details, but I am struggling to auto-populate the identities data from the Splunk identities lookup. Sample data below.

My questions are:

1. How can I enrich the identities in the incident with Splunk ES identities data? Is creating this search even the right approach? My objective is to have a finding in Splunk ES whenever Defender generates an incident.

2. Assuming this works somehow, how can I create the drill-down searches so that the SOC can see supporting data (such as sign-in logs for a user, say user1), given that this is a multi-value field?

3. Should I use Defender alerts (as opposed to incidents) to create an intermediate finding and then let Splunk run the risk-based rules to trigger a finding from that? Alerts can have multiple entities (users, IPs, devices, etc.) as well, so I might end up with similar issues again.

4. Any other suggestions which others may have implemented?

Sample incident:

incidentId: 123456
incidentUri: https://security.microsoft.com/incidents/123456?tid=XXXXXXX
incidentName: Email reported by user as malware or phish involving multiple users
alertId(s): 1a2b3c4d
alerts_count: 1
category: InitialAccess
createdTime: 2025-05-08T09:43:20.95Z
identities: ip1 user1 user2 user3 mailbox1
identities_count: 6
serviceSource(s): MicrosoftDefenderForOffice

Thanks
The lookup command will return null values when the target value is not present in the lookup table.  It's up to the query to test the returned values and take appropriate action when null is returned.
The field names are different between the two lookup tables. Try the modified command.
I have data that returns an ip field with the values below:

Ip = 0.0.0.11
Ip = 0.0.0.12

There is a lookup that contains a field ipaddress with the values below:

0.0.0.11
0.0.0.13

As you can see, 0.0.0.12 is missing from the lookup. I need a search that returns 0.0.0.11 (which exists in both the query result and the lookup) and 0.0.0.12 (entries which are not in the lookup should be updated to ip=0.0.0.0 in the result).
Thanks, that solved the problem. It seems that for the code to work correctly, I need to yield the original record, or at least some fields of it. I haven't tested thoroughly yet, but today I started testing based on this example.

The following code works as expected for both of these queries:

index=_internal | head 1 | table host | streamingcsc
index=_internal | head 1 | streamingcsc

#!/usr/bin/env python
import os, sys
sys.path.insert(0, os.path.join(os.path.dirname(__file__), "..", "lib"))
from splunklib.searchcommands import dispatch, StreamingCommand, Configuration, Option, validators

@Configuration()
class StreamingCSC(StreamingCommand):
    def stream(self, records):
        for record in records:
            record["event"] = str(type(records))
            yield record

dispatch(StreamingCSC, sys.argv, sys.stdin, sys.stdout, __name__)

But the following code only works with this query:

index=_internal | head 1 | table host | streamingcsc

#!/usr/bin/env python
import os, sys
sys.path.insert(0, os.path.join(os.path.dirname(__file__), "..", "lib"))
from splunklib.searchcommands import dispatch, StreamingCommand, Configuration, Option, validators

@Configuration()
class StreamingCSC(StreamingCommand):
    def stream(self, records):
        for record in records:
            yield {"event": str(type(records))}

dispatch(StreamingCSC, sys.argv, sys.stdin, sys.stdout, __name__)
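For completeness, here is a sketch of the pattern described above — copy the incoming record and only add the computed field — which should behave like the first (working) version for both queries. This is just an illustration of "keep the original fields", not an official fix, and is untested beyond the examples in this thread.

#!/usr/bin/env python
# Sketch only: keep every field of the incoming record and add one computed field,
# instead of yielding a brand-new dict that drops the original fields.
import os, sys
sys.path.insert(0, os.path.join(os.path.dirname(__file__), "..", "lib"))
from splunklib.searchcommands import dispatch, StreamingCommand, Configuration

@Configuration()
class StreamingCSC(StreamingCommand):
    def stream(self, records):
        for record in records:
            out = dict(record)                 # preserve the original fields
            out["event"] = str(type(records))  # add the computed field
            yield out

dispatch(StreamingCSC, sys.argv, sys.stdin, sys.stdout, __name__)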
@Ramachandran

Try curl or Postman to call the server's API endpoint directly and check whether the response includes a valid payload:

curl -X GET https://<server-endpoint> -H "Authorization: Bearer <token>"

Ensure the server's response matches the format Splunk SOAR expects; use a JSON validator to confirm syntax and structure. Also ensure there are no network restrictions (e.g., firewalls, proxies) blocking communication between Splunk SOAR and the server.
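If Python is handier than curl, a rough equivalent of the same check — confirm the response parses as JSON and actually contains the top-level "payload" key the SOAR error mentions — might look like this. The URL and token are placeholders, and requests is a third-party library.

# Rough sketch: call the server endpoint directly and check whether the JSON
# response contains a top-level "payload" key, since that is what the SOAR
# error message says is missing. URL and token are placeholders.
import requests

resp = requests.get(
    "https://<server-endpoint>",
    headers={"Authorization": "Bearer <token>"},
    timeout=30,
    verify=False,  # only while troubleshooting certificate issues
)
print("HTTP status:", resp.status_code)

try:
    body = resp.json()
except ValueError:
    print("Response is not valid JSON; first 500 characters:")
    print(resp.text[:500])
else:
    if "payload" in body:
        print("'payload' key present")
    else:
        print("No 'payload' key; top-level keys are:", list(body.keys()))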
Hi everyone,

I'm working on improving our incident response and monitoring setup using Splunk, and I have a few questions I hope someone can help with:

1. Bulk incident data retrieval during downtime: What's the best way to retrieve a large volume of incident data (via REST API) from Splunk for a specific timeframe, especially during known downtime periods? Are there recommended search queries or techniques to ensure we capture everything that occurred during those windows?

2. Querying individual event data via endpoints: How can we query Splunk endpoints (e.g., via REST API) to retrieve detailed data for individual events or incidents? Any examples or best practices would be greatly appreciated.

3. Customizing webhook notifications: Is it possible to modify the structure or content of webhook notifications sent from Splunk without using third-party apps like Better Webhooks or Alert Managers? If so, how can this be done natively within Splunk?

Thanks in advance for any guidance or examples you can share!

Splunk Enterprise 6.2 Overview
REST Endpoint Examples
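Regarding the first question above (bulk retrieval for a specific timeframe), one common approach is a time-bounded streaming export via /services/search/jobs/export, which returns results without a separate polling step. Below is a hedged sketch; the host, token, index, and time window are placeholder assumptions, not values from this environment.

# Sketch: pull everything from a known downtime window in one streaming call
# via /services/search/jobs/export. Host, token, index, and the time window
# are placeholder assumptions.
import requests

BASE = "https://splunk.example.com:8089"
HEADERS = {"Authorization": "Bearer <token>"}

search = 'search index=main earliest="05/01/2025:00:00:00" latest="05/01/2025:06:00:00"'

with requests.post(
    f"{BASE}/services/search/jobs/export",
    headers=HEADERS,
    data={"search": search, "output_mode": "json"},
    stream=True,
    verify=False,  # only while testing without a trusted certificate
) as resp:
    # Stream the exported events straight to disk instead of holding them in memory.
    with open("downtime_events.json", "wb") as out:
        for chunk in resp.iter_content(chunk_size=65536):
            out.write(chunk)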
Hey everyone,

I'm trying to configure a new server in the SOAR UI, but I'm running into this error:

Error Message: There was an error adding the server configuration. On SOAR: Verify server's 'Allowed IPs' and authorization configuration.
Status: 500
Text: JSON reply had no "payload" value

I've already double-checked the basic config, but still no luck. From what I understand, this might be related to:

- Missing or misconfigured Allowed IPs on the SOAR server
- Improper authorization settings
- Possibly an issue with the server not returning the expected JSON format

Has anyone faced this before, or have any ideas on how to troubleshoot it? Any guidance or checklist would be super helpful.

Thanks in advance!
What I want is: when ES sends an event to SOAR, the playbook should detect the source IP and get information from VirusTotal; if it is malicious, it should write a note "Malicious from VirusTotal" and change the status to "Pending" to make sure the monitoring team double-checks it. I have shared a screenshot of the playbook.

Also, here is the code:

""" """
import phantom.rules as phantom
import json
from datetime import datetime, timedelta


@phantom.playbook_block()
def on_start(container):
    phantom.debug('on_start() called')

    # call 'update_event_1' block
    update_event_1(container=container)

    return


@phantom.playbook_block()
def update_event_1(action=None, success=None, container=None, results=None, handle=None, filtered_artifacts=None, filtered_results=None, custom_function=None, loop_state_json=None, **kwargs):
    phantom.debug("update_event_1() called")

    # phantom.debug('Action: {0} {1}'.format(action['name'], ('SUCCEEDED' if success else 'FAILED')))

    container_artifact_data = phantom.collect2(container=container, datapath=["artifact:*.cef.event_id", "artifact:*.id"])

    parameters = []

    # build parameters list for 'update_event_1' call
    for container_artifact_item in container_artifact_data:
        if container_artifact_item[0] is not None:
            parameters.append({
                "status": "in progress",
                "comment": "tahap analisa via SOAR",  # Indonesian: "analysis stage via SOAR"
                "event_ids": container_artifact_item[0],
                "context": {'artifact_id': container_artifact_item[1]},
            })

    ################################################################################
    ## Custom Code Start
    ################################################################################

    # Write your custom code here...

    ################################################################################
    ## Custom Code End
    ################################################################################

    phantom.act("update event", parameters=parameters, name="update_event_1", assets=["soar_es"], callback=ip_reputation_1)

    return


@phantom.playbook_block()
def ip_reputation_1(action=None, success=None, container=None, results=None, handle=None, filtered_artifacts=None, filtered_results=None, custom_function=None, loop_state_json=None, **kwargs):
    phantom.debug("ip_reputation_1() called")

    # phantom.debug('Action: {0} {1}'.format(action['name'], ('SUCCEEDED' if success else 'FAILED')))

    container_artifact_data = phantom.collect2(container=container, datapath=["artifact:*.cef.src", "artifact:*.id"])

    parameters = []

    # build parameters list for 'ip_reputation_1' call
    for container_artifact_item in container_artifact_data:
        if container_artifact_item[0] is not None:
            parameters.append({
                "ip": container_artifact_item[0],
                "context": {'artifact_id': container_artifact_item[1]},
            })

    ################################################################################
    ## Custom Code Start
    ################################################################################

    # Write your custom code here...

    ################################################################################
    ## Custom Code End
    ################################################################################

    phantom.act("ip reputation", parameters=parameters, name="ip_reputation_1", assets=["virustotalv3"], callback=decision_1)

    return


@phantom.playbook_block()
def decision_1(action=None, success=None, container=None, results=None, handle=None, filtered_artifacts=None, filtered_results=None, custom_function=None, loop_state_json=None, **kwargs):
    phantom.debug("decision_1() called")

    # check for 'if' condition 1
    found_match_1 = phantom.decision(
        container=container,
        conditions=[
            ["ip_reputation_1:action_result.data.*.detected_communicating_samples.*.positives", ">", 0]
        ],
        delimiter=None)

    # call connected blocks if condition 1 matched
    if found_match_1:
        update_event_2(action=action, success=success, container=container, results=results, handle=handle)
        return

    return


@phantom.playbook_block()
def update_event_2(action=None, success=None, container=None, results=None, handle=None, filtered_artifacts=None, filtered_results=None, custom_function=None, loop_state_json=None, **kwargs):
    phantom.debug("update_event_2() called")

    # phantom.debug('Action: {0} {1}'.format(action['name'], ('SUCCEEDED' if success else 'FAILED')))

    update_event_1_result_data = phantom.collect2(container=container, datapath=["update_event_1:action_result.parameter.event_ids", "update_event_1:action_result.parameter.context.artifact_id"], action_results=results)

    parameters = []

    # build parameters list for 'update_event_2' call
    for update_event_1_result_item in update_event_1_result_data:
        if update_event_1_result_item[0] is not None:
            parameters.append({
                "status": "Pending",
                "comment": "Source IP is Malicious from VirusTotal",
                "event_ids": update_event_1_result_item[0],
                "context": {'artifact_id': update_event_1_result_item[1]},
            })

    ################################################################################
    ## Custom Code Start
    ################################################################################

    # Write your custom code here...

    ################################################################################
    ## Custom Code End
    ################################################################################

    phantom.act("update event", parameters=parameters, name="update_event_2", assets=["soar_es"])

    return


@phantom.playbook_block()
def on_finish(container, summary):
    phantom.debug("on_finish() called")

    ################################################################################
    ## Custom Code Start
    ################################################################################

    # Write your custom code here...

    ################################################################################
    ## Custom Code End
    ################################################################################

    return
14 years later, I'm looking for a solution to the same problem. Has anyone published the code/config? Using FIX 4.4/FIX5 in my case.
Hi

This question is asked quite often, and you can find many explanations in the community quite easily. I'll add some posts here which you should read to better understand the problems behind your requirements:

https://community.splunk.com/t5/Splunk-Search/How-can-I-find-the-data-retention-and-indexers-involved/m-p/645374
https://community.splunk.com/t5/Deployment-Architecture/Hot-Warm-Cold-bucket-sizing-How-do-I-set-up-my-index-conf-with/m-p/634691
https://community.splunk.com/t5/Deployment-Architecture/Index-rolling-off-data-before-retention-age/m-p/684799

But shortly, here is what those mean for your request. There are many attributes you need to use to achieve your target, but I'm quite sure you cannot combine them so that you get 100% of what you are requesting. @livehybrid has already given you one example as a starting point.

The first issue is that you cannot force the warm -> cold transition by time; the only options are the number of buckets and the size of homePath (or, if you are using volumes, the total volume size — but usually you also have other indexes on the same volume). None of these depend on time, only on the number of buckets and the size of the hot+warm buckets.

The second issue is that, depending on data volumes and the number of indexers, it is even harder to control the number of buckets. All these settings apply to one indexer; there is no relation to other indexers or to the indexes they hold. Actually, it's not even indexer-dependent — it depends on the number of indexing pipelines. So if you have e.g. 10 indexers, all the parameters @livehybrid presented must be multiplied by 10, and if you have e.g. 2 ingestion pipelines per indexer, you must multiply the previous result by 2. And since each indexer/pipeline normally has 3 open hot buckets, you must multiply the previous result by 3 (or by another value if you have changed that bucket count). This means that when you estimate the number of warm buckets needed to achieve that 12h time in hot, you must divide your data by (3 * #pipelines * #indexers) to estimate what maxWarmDBCount you should use.

To get this working correctly, your source system's events must be spread equally across all your indexers for that value to be calculated correctly. Of course, this also assumes your data volume is flat over time; if it follows e.g. a sine curve, it's quite obvious this cannot work. One more thing: if your events are not continuous in time (e.g. from time to time there are some old logs or events in the future), those trigger the creation of a new bucket and close the old hot one even if it's not full.

I suppose the above are not all the aspects one must take care of to achieve what you are asking. You can try to reach your objective, but don't be surprised if you cannot get it to work.

r. Ismo
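As a rough worked example of the multipliers described above (all figures are made-up assumptions, not measurements from any real environment):

# Rough worked example of the estimate described above.
# All inputs are made-up assumptions -- substitute your own environment's values.
indexers = 10               # indexers in the cluster
pipelines_per_indexer = 2   # parallelIngestionPipelines
open_hot_per_pipeline = 3   # hot buckets normally open per pipeline

# Total number of hot buckets receiving data at the same time:
concurrent_hot = indexers * pipelines_per_indexer * open_hot_per_pipeline
print(concurrent_hot)       # 60

# Suppose the index receives ~120 GB in the 12-hour window you want kept in hot.
# Each hot bucket then only holds roughly this share before it would roll:
volume_12h_gb = 120
share_per_bucket_gb = volume_12h_gb / concurrent_hot
print(round(share_per_bucket_gb, 1))   # 2.0 GB per bucket

# So the per-indexer maxWarmDBCount estimate starts from the data divided by
# (3 * #pipelines * #indexers), as described above -- and it only holds if events
# are spread evenly across indexers and the volume is flat over time.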
Hi @zksvc

Please could you share your code for doing this check? I suspect that you are counting the number of categories returned rather than the counts in each category - e.g. in that specific example you have "malicious" and "malware". Check that what you're counting isn't an array of objects, and/or share your config/code and I'd be happy to look into it further.

Did this answer help you? If so, please consider: adding karma to show it was useful, marking it as the solution if it resolved your issue, or commenting if you need any clarification. Your feedback encourages the volunteers in this community to continue contributing.
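To illustrate the difference between the two ways of counting, here is a small sketch. The dictionary mirrors the shape of VirusTotal v3 "last_analysis_stats"; treat the exact datapath your playbook returns as an assumption to verify against your own action results.

# Illustration only: counting categories vs. summing the per-category counts.
# The dict mirrors the VirusTotal v3 "last_analysis_stats" shape; verify what
# your app/playbook datapath actually returns before relying on this.
stats = {"harmless": 60, "malicious": 3, "suspicious": 1, "undetected": 30, "timeout": 0}

# Counting the non-zero "bad" categories gives 2 -- the kind of number that looks wrong...
nonzero_categories = [k for k, v in stats.items() if k in ("malicious", "suspicious") and v > 0]
print(len(nonzero_categories))   # 2

# ...whereas summing the counts gives the "4/94 security vendors flagged" style figure.
print(stats["malicious"] + stats["suspicious"])   # 4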
Hi,

I have tried this without the process_path= assignment, since that is in the prefix, so just $Get_Process_Path|s$. Here is a snippet:

<input type="text" token="Get_Process_Path">
  <label>Process Name or Path</label>
  <prefix>process_path="*</prefix>
  <suffix>*"</suffix>
</input>

<query>index=windows EventCode=4688 $Get_Process_Path|s$

This will break the search; I believe it's because |s is wrapping additional quotes around what is in the prefix. But I need both of those things to fix the individual issues.
Hi @BRFZ

Configure the index in indexes.conf as follows to enforce your requirements:

- Set frozenTimePeriodInSecs to 86400 (24 hours).
- Set maxWarmDBCount to a low value and maxHotSpanSecs to 43200 (12 hours) so that buckets roll to warm quickly.
- Set maxWarmDBCount, maxDataSize, or other thresholds to force buckets to cold after 12 hours.
- Configure a coldToFrozenDir to archive (not delete) after cold.

Try this as an example indexes.conf:

[test]
homePath = $SPLUNK_DB/test/db
coldPath = $SPLUNK_DB/test/colddb
thawedPath = $SPLUNK_DB/test/thaweddb
# set bucket max age to 12h (hot→warm)
maxHotSpanSecs = 43200
# default size, can reduce for faster bucket rolling
# maxDataSize = auto
# keep small number of warm buckets, moves oldest to cold
# maxWarmDBCount = 1
# total retention 24h
frozenTimePeriodInSecs = 86400
# archive to this path, not delete
coldToFrozenDir = /archive/test

With this setup, data will move from hot→warm after 12h (due to maxHotSpanSecs), and the oldest warm buckets will be rolled to cold (enforced by a low maxWarmDBCount). Data will be kept for 24h in total before being archived.

The number of buckets (maxWarmDBCount, etc.) should be kept low to ensure data moves through the states quickly for such a short retention. Splunk is optimised for longer retention; very short retention and frequent bucket transitions can increase management overhead. It's generally advised not to have small buckets for this reason, but with such a short retention period you shouldn't end up with too many buckets here.

Other things to remember:

- If you use coldToFrozenDir, ensure permissions and disk space are sufficient at the archive destination.
- Test carefully, as a low maxWarmDBCount and short maxHotSpanSecs may result in more buckets than usual and performance impacts.
- If you want to restore archived data, it must be manually thawed.

Did this answer help you? If so, please consider: adding karma to show it was useful, marking it as the solution if it resolved your issue, or commenting if you need any clarification. Your feedback encourages the volunteers in this community to continue contributing.
I've obtained this information from VirusTotal, and I want to create a playbook to check IP reputation and retrieve the results. I want to make a decision where, if the result is greater than 0, it will write a note stating 'It's malicious from VirusTotal'. You can see this in the example: the Community Score, or information like '4/94 security vendors flagged'. I want the playbook's comparison to match what VirusTotal shows. However, when I run it, it only shows 'detected urls: 2'. Can someone explain this?
Hello, I'm looking to set up a log retention policy for a specific index, for example index=test. Here's what I'd like to configure: - Total retention time = 24 hours - First 12 hours in hot+warm, then - Next 12 hours cold. - After that, the data should be archived (not deleted). How exactly should I configure this please? Also does the number of buckets need to be adjusted to support this setup properly on such a short timeframe? Thanks in advance for your help.  
Please can you confirm the field names in your OS lookup? Thanks