All Posts


Hi, I have a problem with a field in a playbook. I'm building a SOAR playbook to check network traffic to Active Directory Web Services, and I'm stuck on one field.

My objective: use a Run Query action in SOAR to pull additional_action, and if additional_action contains "teardown," route the playbook down a specific branch.

tstats summariesonly=true fillnull_value="unknown" values(All_Traffic.src) as src values(All_Traffic.dest) as dest values(All_Traffic.additional_action) as additional_action values(All_Traffic.status_action) as status_action values(All_Traffic.app) as app count from datamodel="Network_Traffic"."All_Traffic" WHERE (All_Traffic.src_ip IN ({0})) AND (All_Traffic.dest_ip IN ({1})) AND (All_Traffic.dest_port="{2}") by All_Traffic.session_id | nomv additional_action

If I run the query there is a "teardown" result, and I have added the field additional_action, but the result from the playbook is:

Parameter: {"comment":"Protocol value None, please analyze this again.

Is there any way to solve this problem?
Hi @kiran_panchavat, I have already followed these steps during my investigation; however, they relate to the NetFlow data model, not the syslog one. As a result, they did not help in mapping the syslog data to the intended data model, Cisco_SDWAN.
I am working on the task: "Send alert notifications to Splunk platform using Splunk Observability Cloud." I have completed the following steps:

- Created an HEC token in Splunk.
- Unchecked the "Enable indexer acknowledgment" option.
- Enabled HEC globally in Splunk Web.
- Enabled SSL (HTTPS).
- Restarted the Splunk instance after configuration.

However, the integration is still not connecting. I'm receiving the following error:
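One way to sanity-check the HEC side independently of Observability Cloud is to POST a test event straight to the collector endpoint. A minimal sketch, assuming placeholder host and token values (replace them with your own):

import requests

# Hypothetical host and token, for illustration only.
HEC_URL = "https://splunk.example.com:8088/services/collector/event"
HEC_TOKEN = "00000000-0000-0000-0000-000000000000"

resp = requests.post(
    HEC_URL,
    headers={"Authorization": f"Splunk {HEC_TOKEN}"},
    json={"event": "hec connectivity test", "sourcetype": "hec_test"},
    verify=False,  # only if the HEC certificate is self-signed
)
print(resp.status_code, resp.text)

A 200 response with {"text":"Success","code":0} suggests the token and SSL setup are fine, which would point the troubleshooting toward the Observability Cloud side of the integration.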
Hi @Ashish0405, good for you, see you next time! Ciao and happy splunking. Giuseppe. P.S.: Karma points are appreciated by all the contributors.
Thank you, that's worked
@mohsplunking  @sainag_splunk already explained this very well. But if your goal is simply: UF (collects; basic sourcetype=WinEventLog:Security or sourcetype=linux_secure set by the DS) -> HF (aggregates, forwards) -> Indexer (parses fields like EventCode, user, sshd_pid, etc.), then you do not need the full Splunk_TA_windows or Splunk_TA_nix on the Heavy Forwarder. The indexers (with the TAs installed) will handle the detailed parsing. It becomes necessary only if you want the HF to perform actions that rely on the knowledge within that TA (like parsing fields to use for routing, or specific sourcetype recognition that isn't happening on the UF). Regards, Prewin Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving a kudos/Karma. Thanks!
@RdomSplunkUser7  You may try to use a "Root Search" dataset. When you create your data model, instead of starting with a "Root Event" dataset, choose to create a "Root Search" dataset. In the "Search String" field for this Root Search dataset, put your base search query followed by the dedup command, e.g.:

index=test_logs sourcetype="test_logs_st" [your base filters] | dedup eventId

This way, the data model might be built only from events with a unique eventId. Regards, Prewin Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving a kudos/Karma. Thanks!
@Amira  Identify the exact index and sourcetype for your data, and make sure your data model Cisco_SDWAN root event constraints use the same index and sourcetype. Are there events returned by the root event constraint search? If not, your syslog data isn't being assigned the sourcetype/index that the app's data model expects.

Also check the Data Model Acceleration status: look at the "Status" or "Acceleration" column. Is it enabled? Is it 100% built? If not, enable acceleration. If acceleration seems stuck, incomplete, or you suspect corruption, try a rebuild.

Are the disk volumes holding the summaries full? Check your indexer disk space via the Monitoring Console (Settings > Monitoring Console > Indexing > Indexes and Volumes). If the volume holding the summaries is full, acceleration will fail.

Regards, Prewin Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving a kudos/Karma. Thanks!
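As a quick check of the first point, you can run a non-accelerated tstats count against the data model and see whether any events map into it at all. A minimal sketch using the Splunk Python SDK, with placeholder connection details (the data model name Cisco_SDWAN comes from the thread):

import splunklib.client as client
import splunklib.results as results

# Placeholder connection details; replace with your own search head.
service = client.connect(host="splunk.example.com", port=8089,
                         username="admin", password="changeme")

# summariesonly=false scans raw events, so this works even if acceleration is broken.
query = "| tstats summariesonly=false count from datamodel=Cisco_SDWAN by index, sourcetype"

stream = service.jobs.oneshot(query, earliest_time="-24h", latest_time="now")
for row in results.ResultsReader(stream):
    print(row)  # no rows at all suggests the constraints do not match your syslog data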
@Raj_Splunk_Ing  This is generally because the API call defaults to UTC, so specify the time zone in the API call. If you are using the Splunk Python SDK, try "tz": "America/Chicago" as a search parameter. By adding the tz parameter with your local time zone ("America/Chicago" for CST), you instruct Splunk to interpret earliest=-1d@d and latest=-0d@d relative to that time zone, making the API search behave identically to your UI search in terms of the time window. This should resolve the discrepancy in event counts. Regards, Prewin Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving a kudos/Karma. Thanks!
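A minimal sketch of what that could look like with the Splunk Python SDK against the export endpoint. The connection details and the search string are placeholders, and whether tz is honored may depend on your Splunk version, so treat this as the suggestion above rather than a guaranteed fix:

import splunklib.client as client
import splunklib.results as results

# Placeholder connection details.
service = client.connect(host="splunk.example.com", port=8089,
                         username="admin", password="changeme")

kwargs = {
    "earliest_time": "-1d@d",
    "latest_time": "-0d@d",
    "tz": "America/Chicago",  # assumption: local time zone so @d snapping happens in CST/CDT
}

# services/search/jobs/export streams results back as they are found.
# Substitute your own SPL for this example search.
stream = service.jobs.export("search index=_internal | head 5", **kwargs)
for result in results.ResultsReader(stream):
    print(result)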
Hello, I am looking for details from anyone who has successfully set up an Enterprise search head cluster behind an AWS ALB using SAML with a PingFederate IdP. It seems this should be doable; however, there does not seem to be a lot of (or really any) detail on this setup.
Hi Rick, same user. I did use the earliest and latest in the search query itself as filters. The API is using services/export.
Ah, there's your problem. You assign the variable "extracted_ip_1", which then works fine within the function, but the following phantom.save_run_data call does not actually dump the value of the "extracted_ip_1" variable into the output; it dumps the "code_3__extracted_ip_1" variable, which was previously set to None.

You should change the phantom.save_run_data command to use the correct variable name in the value parameter:

phantom.save_run_data(key="code_3:extracted_ip_1", value=json.dumps(extracted_ip_1))

Or, if you want to keep all custom code between the "custom code" comment blocks, you can change the variable name instead:

code_3__extracted_ip_1 = regex_extract_ipv4_3_data_extracted_ipv4[0]

Also, you mentioned the data path on the input to the following block is "code_3:customer_function:extraced_ip_1", which has "customer_function" but should have "custom_function". Not sure if this is just a typo in your post, but if it also exists in your SOAR instance, it can cause problems.
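To see why the saved value comes back as None, here is a tiny stand-alone illustration in plain Python (outside SOAR; the variable names just mirror the ones above):

import json

# The SOAR-generated boilerplate pre-initializes the output variable to None.
code_3__extracted_ip_1 = None

# The custom code then assigns a *different* local name, so the output variable never changes.
extracted_ip_1 = "10.0.0.1"

print(json.dumps(code_3__extracted_ip_1))  # -> null  (this is what ends up in save_run_data)
print(json.dumps(extracted_ip_1))          # -> "10.0.0.1"  (what you actually wanted saved)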
Archive/live links for conf files:

2016 talk by David Veuve: https://web.archive.org/web/20161205164708/http://conf.splunk.com/sessions/2016-sessions.html#search=David%20Veuve&
Video recording: https://conf.splunk.com/files/2016/recordings/how-to-scale-from-raw-to-tstats.mp4
Video recording archive: https://web.archive.org/web/20250601131324/https://conf.splunk.com/files/2016/recordings/how-to-scale-from-raw-to-tstats.mp4
Slides: https://conf.splunk.com/files/2016/slides/how-to-scale-from-raw-to-tstats.pdf
Slides archive: https://web.archive.org/web/20250601130416/https://conf.splunk.com/files/2016/slides/how-to-scale-from-raw-to-tstats.pdf

2017 talk, again by David Veuve: https://web.archive.org/web/20171220012042/http://conf.splunk.com/sessions/2017-sessions.html#search=David%20Veuve&
Video recording: https://conf.splunk.com/files/2017/recordings/searching-fast-how-to-start-using-tstats-and-other-acceleration-techniques.mp4
Video recording archive: https://web.archive.org/web/20171220012042/http://conf.splunk.com/files/2017/recordings/searching-fast-how-to-start-using-tstats-and-other-acceleration-techniques.mp4
Slides: https://conf.splunk.com/files/2017/slides/searching-fast-how-to-start-using-tstats-and-other-acceleration-techniques.pdf
Slides archive: https://web.archive.org/web/20211202200036/http://conf.splunk.com/files/2017/slides/searching-fast-how-to-start-using-tstats-and-other-acceleration-techniques.pdf

2017 talk by Satoshi Kawasaki: https://web.archive.org/web/20171220012042/http://conf.splunk.com/sessions/2017-sessions.html#search=speed%20up&
Recording: https://conf.splunk.com/files/2017/recordings/speed-up-your-searches.mp4
Recording archive: https://web.archive.org/web/20240122110515/https://conf.splunk.com/files/2017/recordings/speed-up-your-searches.mp4
Slides: https://conf.splunk.com/files/2017/slides/speed-up-your-searches.pdf
Slides archive: https://web.archive.org/web/20250601130246/https://conf.splunk.com/files/2017/slides/speed-up-your-searches.pdf
@Amira Have you verified this?  https://splunkbase.splunk.com/app/6657 
I'm experiencing an issue with the Cisco SD-WAN application in Splunk where the dashboards are not displaying the expected data. We have followed the official documentation step by step and are successfully receiving both syslog and NetFlow data. However, it seems that the data model "Cisco_SDWAN" associated with the syslog data is not functioning correctly, which is likely causing the dashboards to fail. We've already performed extensive troubleshooting without success. Has anyone encountered a similar issue, or can anyone offer guidance on resolving the data model problem? Environment: Splunk Enterprise Security, Cisco Catalyst SD-WAN App for Splunk, and Cisco Catalyst SD-WAN Add-on for Splunk.
I don't think it is possible to constrain a dataset to "only intake 1 event containing each value of EventId and then exclude the rest of the events with the same EventId value." This would require the dataset to check against a list of already-included EventId values for every new event it intakes. It would be better to do this another way. Ideally you could change the events themselves so that there is only one event per EventID, but there are other tricks you could try, like creating a search that writes summary-indexed events once per EventID while excluding all EventIDs that already exist in the destination index. Then you could point the data model/dataset at the index of summary-indexed events.
If you suspect there's some time range discrepancy between those two searches, check their job logs. After the search is expanded as it's being dispatched to be executed, if I remember correctly it should have the earliest and latest as epoch-based timestamps. Check if they differ. I assume you're spawning the searches from the same user, aren't you?
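As an alternative to digging through search.log, the resolved time window is also exposed as job properties. This is a different but related check: if you re-run the same SPL as a normal (non-export) job via the SDK, you can read earliestTime and latestTime off the job and compare them with what the UI search used. A sketch with placeholder connection details and an example search:

import time
import splunklib.client as client

# Placeholder connection details.
service = client.connect(host="splunk.example.com", port=8089,
                         username="admin", password="changeme")

# Substitute your own SPL and time modifiers here.
job = service.jobs.create("search index=_internal earliest=-1d@d latest=-0d@d | head 1")

while not job.is_done():
    time.sleep(1)  # poll until the job finishes (fine for a quick check)

# The resolved window the search actually ran over.
print(job["earliestTime"], job["latestTime"])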
When I look at the _time values pulled through the API, they look like this:

_time
2025-05-30 10:28:06.234 UTC
2025-05-30 04:48:45.178 UTC
2025-05-30 16:33:09.755 UTC
2025-05-30 14:20:23.054 UTC
When I look at the last row/record, its _time value is 2025-05-30 23:30:28.314; there is no record after this.