All Posts

Are there any licensing concerns to be considered for the integration between Splunk and Databricks using the plugin here: https://splunkbase.splunk.com/app/5416 ? No redistribution or sale - just plain connecting one environment to the other.
Hi @esllorj

In short - you cannot run an integrity check against buckets created before the integrity check was enabled; see the following community post: https://community.splunk.com/t5/Splunk-Enterprise/enable-integrity-control-on-splunk-6-3/m-p/266889#:~:text=Error%20description%20%22journal%20has%20no,Reason%3DJournal%20has%20no%20hashes.

Credit to @dbhagi_splunk for their answer here:

The Data Integrity Control feature and the corresponding settings/commands only apply to data that is indexed after the feature is turned on. It will not generate hashes (or even check integrity) for pre-existing data. So where "./splunk check-integrity -index [index_name]" returned the following error, that bucket was not created/indexed with Data Integrity Control enabled - either it was created before you enabled the feature (assuming you only turned it on for your index now), or you have not enabled the feature for index=index_name at all.

Error description "journal has no hashes": this indicates that the journal was not created with hashes enabled. Integrity check error for bucket with path=/opt/splunk/var/lib/splunk/index_name/db/db_1429532061_1429531988_278, Reason=Journal has no hashes.

The same applies to "./splunk generate-hash-files -index [index_name]". You can only generate hash files (that is, extract the hashes embedded in the journal) for buckets created with Data Integrity Control enabled; the command will not compute or create hashes for normal buckets without the feature. For example, if you enabled the feature and created a few buckets, but then lost the hash files of a particular bucket (someone modified or deleted them on disk), you can run this command to extract the hashes again and write them to the hash files (l1hashes_id_guid.dat, l2hash_id_guid.dat). Hope that answers all your questions.

Did this answer help you? If so, please consider:
Adding karma to show it was useful
Marking it as the solution if it resolved your issue
Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
Hi splunkers,

My client wants to conduct a consistency check on all indexes that they collect, so I added enableDataIntegrityControl=1 to every index setting and created a script that runs SPLUNK_CMD check-integrity -index "$INDEX" for all indexes. But that's where the problem comes from: the check fails for indexes that keep collecting data in real time (e.g. linux_os logs, windows_os logs). The results look like this:

server.conf/[sslConfig]/sslVerifyServerCert is false disabling certificate validation; must be set to "true" for increased security
disableSSLShutdown=0
Setting search process to have long life span: enable_search_process_long_lifespan=1
certificateStatusValidationMethod is not set, defaulting to none.
Splunk is starting with EC-SSC disabled
CMIndexId: New indexName=linux_os inserted, mapping to id=1
Operating on: idx=linux_os bucket='/opt/splunk/var/lib/splunk/linux_os/db/db_1737699472_1737699262_0'
Integrity check error for bucket with path=/opt/splunk/var/lib/splunk/linux_os/db/db_1737699472_1737699262_0, Reason=Journal has no hashes.
Operating on: idx=_audit bucket='/opt/splunk/var/lib/splunk/linux_os/db/hot_v1_1'
Total buckets checked=2, succeeded=1, failed=1
Loaded latency_tracker_log_interval with value=30 from stanza=health_reporter
Loaded aggregate_ingestion_latency_health with value=1 from stanza=health_reporter
aggregate_ingestion_latency_health with value=1 from stanza=health_reporter will enable the aggregation of ingestion latency health reporter.
Loaded ingestion_latency_send_interval_max with value=86400 from stanza=health_reporter
Loaded ingestion_latency_send_interval with value=30 from stanza=health_reporter

Is there a way to solve these problems?
@kamal18sharma  Was it a compatibility issue that made you re-install Splunk Enterprise? I am facing this issue with the "Splunk App for SOAR Export", which I installed on the SOAR app. Can you elaborate on the solution?
Hi @Amira

Have you updated the cisco_sdwan_index macro to index=<yourIndexName> for the index containing the syslog data? Could you please confirm the sourcetypes you have in your Cisco SD-WAN index?

Did this answer help you? If so, please consider:
Adding karma to show it was useful
Marking it as the solution if it resolved your issue
Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
/opt/caspida/bin/Caspida setuphadoop ...............................Failed to run sudo -u hdfs hdfs namenode -format >> /var/vcap/sys/log/caspida/caspida.out 2>&1
Fri Jun 2 17:06:11 +07 2023: [ERROR] Failed to run hadoop_setup [2]. Fix errors and re-run again.
Error in running /opt/caspida/bin/Caspida setuphadoop on 192.168.126.16. Fix errors and re-run again.

I executed the command /opt/caspida/bin/Caspida setup, but it stopped here: the Hadoop setup step cannot run, and I don't know the cause yet. Can someone please help me? I have put some install logs here.
Hi @ASEP

The field value from a "Run Query" action in a Splunk SOAR playbook needs to be accessed from the list of results returned by the action. Simply adding the field name under "Fields to add to output" makes the field available, but you still need to reference the correct result object.

The Run Query action typically returns a list of result objects in the results.data attribute of the action's output; each object in this list corresponds to a row returned by your Splunk query. You need to access the specific result you are interested in (e.g., the first one) and then the field within that result. Assuming your "Run Query" action is named your_action_name, you can access the additional_action field from the first result using templating like this:

{{ your_action_name.results.data[0].additional_action }}

You can then use this value in subsequent playbook logic, such as a decision block that checks whether it contains "teardown". You access elements in the list using square brackets [index] and fields within an object using dot notation .field_name. Check that the results.data list is not empty before accessing elements by index (like [0]) to prevent errors or None values if the query returns no results; I think you should be able to use a {% if your_action_name.results.data %} block for this.

Did this answer help you? If so, please consider:
Adding karma to show it was useful
Marking it as the solution if it resolved your issue
Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
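If you prefer to do the check inside a custom code block rather than purely in templating, here is a minimal sketch, not SOAR's generated code: the contains_teardown name is hypothetical, and run_query_results is assumed to already hold the list of row dicts from results.data (gathered, for example, via the block's inputs or phantom.collect2).

def contains_teardown(run_query_results):
    # Guard against an empty result set so indexing [0] never fails.
    if not run_query_results:
        return False
    for row in run_query_results:
        # nomv collapses the multivalue field, so additional_action is a plain string here.
        action_value = row.get("additional_action") or ""
        if "teardown" in action_value.lower():
            return True
    return False

A downstream decision block can then branch on the returned boolean instead of on a possibly missing field value.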
Hi, I have a problem with a field in a playbook.

I'm building a SOAR playbook to check network traffic to Active Directory Web Services, and I'm stuck on one field.

My objective: use a Run Query action in SOAR to pull additional_action; if additional_action contains "teardown", route the playbook down a specific branch.

tstats summariesonly=true fillnull_value="unknown" values(All_Traffic.src) as src values(All_Traffic.dest) as dest values(All_Traffic.additional_action) as additional_action values(All_Traffic.status_action) as status_action values(All_Traffic.app) as app count from datamodel="Network_Traffic"."All_Traffic" WHERE (All_Traffic.src_ip IN ({0})) AND (All_Traffic.dest_ip IN ({1})) AND (All_Traffic.dest_port="{2}") by All_Traffic.session_id | nomv additional_action

When I run the query there is a "teardown" result, and I have added the field additional_action, but the result from the playbook is:

Parameter: {"comment":"Protocol value None , mohon untuk dilakukan analisa kembali. (Indonesian: "please analyze this again")

Is there any way to solve this problem?
Hi @kiran_panchavat , I have already followed these steps during my investigation; however, they related to the NetFlow data model, not the syslog one. As a result, they did not help in mapping the syslog data to the intended data model, Cisco_SDWAN.
I am working on the task: "Send alert notifications to Splunk platform using Splunk Observability Cloud." I have completed the following steps:

Created an HEC token in Splunk.
Unchecked the "Enable indexer acknowledgment" option.
Enabled HEC globally in Splunk Web.
Enabled SSL (HTTPS).
Restarted the Splunk instance after configuration.

However, the integration is still not connecting. I'm receiving the following error:
Hi @Ashish0405,
good for you, see you next time!
Ciao and happy splunking
Giuseppe
P.S.: Karma Points are appreciated by all the contributors
Thank you, that's worked
@mohsplunking

@sainag_splunk already explained it very well. But if your goal is simply:

UF (collects, basic sourcetype=WinEventLog:Security or sourcetype=linux_secure set by DS) -> HF (aggregates, forwards) -> Indexer (parses fields like EventCode, user, sshd_pid, etc.)

then you do not need the full Splunk_TA_windows or Splunk_TA_nix on the Heavy Forwarder; the indexers (with the TAs installed) will handle the detailed parsing. It becomes necessary only if you want the HF to perform actions that rely on the knowledge within that TA (like parsing fields to use for routing, or specific sourcetype recognition that isn't happening on the UF).

Regards,
Prewin
Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving a kudos/Karma. Thanks!
@RdomSplunkUser7

You may try using a "Root Search" dataset. When you create your data model, instead of starting with a "Root Event" dataset, choose to create a "Root Search" dataset. In the "Search String" field for this Root Search dataset, put your base search query followed by the dedup command, e.g.:

index=test_logs sourcetype="test_logs_st" [your base filters] | dedup eventId

This should build the data model only from events with a unique eventId.

Regards,
Prewin
Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving a kudos/Karma. Thanks!
@Amira

Identify the exact index and sourcetype for your data, and make sure the Cisco_SDWAN data model's root event constraints use the same index and sourcetype. Are there events returned by the root event constraint search? If not, your syslog data isn't being assigned the correct sourcetype/index that the app's data model expects.

Also check the Data Model Acceleration status: look at the "Status" or "Acceleration" column. Is it enabled? Is it 100% built? If not, enable acceleration. If acceleration seems stuck, incomplete, or you suspect corruption, try to rebuild it.

Is the disk space for summaries full? Check your indexer disk space via the Monitoring Console (Settings > Monitoring Console > Indexing > Indexes and Volumes). If the volume holding the summaries is full, acceleration will fail.

Regards,
Prewin
Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving a kudos/Karma. Thanks!
@Raj_Splunk_Ing

This is generally because the API call defaults to UTC, so specify the time zone in the API call. If you are using the Splunk Python SDK, try passing "tz": "America/Chicago" as a search parameter.

By adding the tz parameter with your local time zone ("America/Chicago" for CST), you instruct Splunk to interpret earliest=-1d@d and latest=-0d@d relative to that time zone, making the API search behave identically to your UI search in terms of the time window. This should resolve the discrepancy in event counts.

Regards,
Prewin
Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving a kudos/Karma. Thanks!
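For illustration, a minimal sketch with the Splunk Python SDK (splunklib), assuming the tz search parameter behaves as described above; the host, credentials, and search string are placeholders:

import splunklib.client as client
import splunklib.results as results

service = client.connect(
    host="splunk.example.com",   # placeholder host
    port=8089,
    username="api_user",         # placeholder credentials
    password="changeme",
)

# tz makes the @d snapping of earliest/latest happen in CST,
# matching what the UI does for a user whose profile time zone is CST.
kwargs = {
    "earliest_time": "-1d@d",
    "latest_time": "-0d@d",
    "tz": "America/Chicago",
    "output_mode": "json",
}

stream = service.jobs.export("search index=main | stats count", **kwargs)
for result in results.JSONResultsReader(stream):
    if isinstance(result, dict):   # skip informational messages
        print(result)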
Hello, I am looking for details from anyone who has successfully set up an Enterprise search head cluster behind an AWS ALB using SAML with a PingFederate IdP. It seems this should be doable; however, there does not seem to be a lot of (or really any) detail on this setup.
Hi Rick, same user. I did use earliest and latest in the search query itself as filters. The API is using services/export.
Ah, there's your problem. You assign the variable "extracted_ip_1", which works fine within the function, but the following phantom.save_run_data call does not actually dump the value of the "extracted_ip_1" variable into the output; it dumps the "code_3__extracted_ip_1" variable, which was previously set to None.

You should change the phantom.save_run_data call to use the correct variable name in the value parameter:

phantom.save_run_data(key="code_3:extracted_ip_1", value=json.dumps(extracted_ip_1))

Or, if you want to keep all custom code between the "custom code" comment blocks, you can change the variable name instead:

code_3__extracted_ip_1 = regex_extract_ipv4_3_data_extracted_ipv4[0]

Also, you mentioned that the data path on the input to the following block is "code_3:customer_function:extraced_ip_1", which has "customer_function" but should have "custom_function". Not sure if this is just a typo in your post, but if it also exists in your SOAR instance it can cause problems.
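Put together, a rough sketch of how the corrected block could look; the variable and key names are taken from the post, while the function wrapper is illustrative rather than SOAR's exact generated code:

import json
import phantom.rules as phantom  # standard import inside SOAR playbook code


def code_3(regex_extract_ipv4_3_data_extracted_ipv4=None, **kwargs):
    # Auto-generated output variable; it stays None unless assigned below.
    code_3__extracted_ip_1 = None

    ################################################################################
    ## Custom Code Start
    ################################################################################

    # Assign to the output variable (not a new local one) so the value is
    # what gets persisted after the custom code section ends.
    if regex_extract_ipv4_3_data_extracted_ipv4:
        code_3__extracted_ip_1 = regex_extract_ipv4_3_data_extracted_ipv4[0]

    ################################################################################
    ## Custom Code End
    ################################################################################

    # Persist the value under the key the downstream block reads via the
    # code_3:custom_function:extracted_ip_1 datapath.
    phantom.save_run_data(key="code_3:extracted_ip_1", value=json.dumps(code_3__extracted_ip_1))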