All Posts



Hi @danielbb

Tenable also publishes an app (TenableAppForSplunk) to go with the TA-Tenable add-on. The recommended deployment is to install both the TA and the app on your search head(s).

For more info, check out the app on Splunkbase or the Tenable online docs at https://docs.tenable.com/integrations/Splunk/Content/Welcome.htm

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing
Here are the configs for on-prem customers who want to apply them and avoid additional hardware cost. On 9.4.0 and above, most of the indexing configs are automated, which is why they were dropped from the 9.4.0 suggested list.

Note: This assumes the replication queue is full on most of the indexers and, as a result, the indexing pipeline is also full, while the indexers still have plenty of idle CPU and IO is not an issue.

On-prem Splunk version 9.4.0 and above

indexes.conf
[default]
maxMemMB=100

server.conf
[general]
autoAdjustQueue=true (can be applied on any Splunk instance: UF/HF/SH/IDX)

Splunk version 9.1 to 9.3.x

indexes.conf
[default]
maxMemMB=100
maxConcurrentOptimizes=2
maxRunningProcessGroups=32
processTrackerServiceInterval=0

server.conf
[general]
parallelIngestionPipelines = 4
[queue=indexQueue]
maxSize=500MB
[queue=parsingQueue]
maxSize=500MB
[queue=httpInputQ]
maxSize = 500MB

What each setting does:

- maxMemMB: tries to minimize creation of tsidx files as much as possible, at the cost of higher memory usage by the mothership (main splunkd).
- maxConcurrentOptimizes: on the indexing side it is internally 1 no matter what the setting is set to. On the replication target side, launching more splunk-optimize processes means pausing the receiver until each splunk-optimize process is launched, so reduce it to keep the receiver doing indexing work rather than launching splunk-optimize processes. With 9.4.0, both the source (index processor) and the target (replication-in thread) internally auto-adjust it to 1.
- maxRunningProcessGroups: allows more splunk-optimize processes to run concurrently. With 9.4.0, it's automatic.
- processTrackerServiceInterval: runs splunk-optimize processes as soon as possible. With 9.4.0, you don't have to change it.
- parallelIngestionPipelines: more receivers on the target side. With 9.4.0, you can enable auto-scaling of pipelines.
- maxSize: prevents huge batch ingestion by a HEC client from blocking the queues and returning 503s. With 9.4.0 and autoAdjustQueue=true, it is no longer a fixed-size queue.
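Before applying any of this, it's worth confirming the symptom from the _internal metrics; a minimal sketch of the check, using the standard metrics.log queue instrumentation:

index=_internal source=*metrics.log* group=queue blocked=true
| stats count by host, name
| sort - count

If the indexing-side queues dominate the counts while CPU is idle, this is the case the tuning above targets.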
Hey @danielbb ,

Did you already check out the developer-supported Tenable App for Splunk? It should work with your sourcetypes: https://splunkbase.splunk.com/app/4061

Here are the docs for it: https://docs.tenable.com/integrations/Splunk/Content/Splunk2/TenableAppforSplunk.htm

And there's also a full integration guide PDF that might be helpful: https://docs.tenable.com/integrations/Splunk/Content/PDF/Tenable_and_Splunk_Integration_Guide.pdf

This might give you dashboards and visualizations for your Tenable.io data.

Cheers
If this helps, please upvote
Hey @mohsplunking ,

A couple of things on your setup:

First, just to clarify: the UFs actually pull from the DS, not push to it. The deployment server is more like a config store that the forwarders check in with and grab their apps/configs from.

And yes, you're right about needing the Windows TA on your heavy forwarder. You might see data without it, but you definitely want it installed on whatever is doing the parsing, which is your HF in this case. Otherwise you'll miss out on proper field extractions and parsing. Here's the install doc: https://docs.splunk.com/Documentation/WindowsAddOn/8.1.2/User/Install

So yes: the Windows TA goes on the HF (since that's where parsing happens), and your output app handles forwarding everything along to the indexers.

Cheers
If this helps, please upvote
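For reference, the output app on the HF usually needs nothing more than an outputs.conf along these lines; a minimal sketch, where the group name and indexer addresses are placeholders:

# outputs.conf in the HF's output app (sketch; group name and addresses are placeholders)
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997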
I copied the effective distsearch.conf from production (using btool) to my lab setup under $SPLUNK_HOME/etc/system/local. After restarting Splunk, I verified it again with btool to confirm it matched the production configuration. Replication is still working fine in the lab setup, so it seems there's nothing wrong with distsearch.conf.
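For anyone reproducing the check, it was along these lines; a sketch assuming a default install path:

# show the effective distsearch.conf and which file each setting comes from
$SPLUNK_HOME/bin/splunk btool distsearch list --debug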
Hello Splunkers,

I have a question around Splunk architecture and would greatly appreciate input from architects.

The scenario is UF on log source > Heavy Forwarder > Indexer:

A Universal Forwarder gets installed on a log source with a configuration to connect to the deployment server. Once it connects to the DS, the DS pushes the output app and the corresponding technology add-on (i.e. Windows/Linux) to the Universal Forwarder. The output app on the log source (UF) forwards to the heavy forwarder over the standard port 9997. On the heavy forwarder, an output app under etc/apps forwards to the indexers.

So the question is: do I also need a Windows TA / Linux TA app on the heavy forwarder? Is it necessary? If I don't install a TA, my understanding is that the heavy forwarder should still forward everything it receives over port 9997 (without a TA/inputs.conf) to the next Splunk instance. Is that correct?

Sorry, I know it's a long read, but I hope to receive some responses. Thank you,

regards,
Moh
Thank you for the responses. I copy/pasted some of the SOAR info below, and as for the questions:

- I did define the output variable in the custom code block config.
- I am not using {0} in the sample block because it kept giving an error. I was using {1} because that was grabbing the IP through a utility, and that was working for me.
- The variable from the custom code block (extracted_ip_1) worked fine within the code block but was not set outside of it.

code_3:customer_function:extraced_ip_1

def code_3(action=None, success=None, container=None, results=None, handle=None, filtered_artifacts=None, filtered_results=None, custom_function=None, loop_state_json=None, **kwargs):
    phantom.debug("code_3() called")

    regex_extract_ipv4_3__result = phantom.collect2(container=container, datapath=["regex_extract_ipv4_3:custom_function_result.data.extracted_ipv4","regex_extract_ipv4_3:custom_function_result.data.input_value"])
    container_artifact_data = phantom.collect2(container=container, datapath=["artifact:*.cef.cs1","artifact:*.cef.cs1Label"])

    regex_extract_ipv4_3_data_extracted_ipv4 = [item[0] for item in regex_extract_ipv4_3__result]
    regex_extract_ipv4_3_data_input_value = [item[1] for item in regex_extract_ipv4_3__result]
    container_artifact_cef_item_0 = [item[0] for item in container_artifact_data]
    container_artifact_cef_item_1 = [item[1] for item in container_artifact_data]

    input_parameter_0 = ""
    code_3__extracted_ip_1 = None

    ################################################################################
    ## Custom Code Start
    ################################################################################

    # Write your custom code here...
    extracted_ip_1 = regex_extract_ipv4_3_data_extracted_ipv4[0]

    ################################################################################
    ## Custom Code End
    ################################################################################

    phantom.save_run_data(key="code_3:extracted_ip_1", value=json.dumps(code_3__extracted_ip_1))

    run_query_4(container=container)

    return
We have the following sourcetypes that come through the Tenable Add-On for Splunk:
- tenable:io:assets
- tenable:io:plugin
- tenable:io:audit_logs

Is there any app/dashboard that presents this data?
That URL doesn't work from the DNS server or the Splunk server.

From the DNS server:
https://SPLUNK-SERVERNAME:8000/en-us/custom/splunk_app_stream/ returned a cert warning, and if you click on continue, you get a 404 Not Found error page. Tried https://SPLUNK-SERVERNAME.DOMAINNAME:8000/en-us/custom/splunk_app_stream/ but got the same error.

From the Splunk server:
https://SPLUNK-SERVERNAME:8000/en-us/custom/splunk_app_stream/ returned a cert warning, and if you click on continue, you get a 404 Not Found error page. https://localhost:8000/en-us/custom/splunk_app_stream/ returned a 404 Not Found error page.
Hi @thanh_on ,

let us know if we can help you more, or, please, accept one answer for the other people of the Community.

Ciao and happy splunking

Giuseppe

P.S.: Karma Points are appreciated by all the contributors
Hi @Na_Kang_Lim , I hadn't noticed the problem, but in any case, since it's an add-on maintained by Splunk, open a case with Splunk Support. Ciao. Giuseppe
The Splunk Add-on for Windows is well known, and I am using it to parse my XmlWinEventLog. However, I am getting EventCode as a duplicated multivalue field, like this:

4688
4688

I think I found the reason: in transforms.conf there are two transforms for extracting EventCode:

[EventID_as_EventCode]
SOURCE_KEY = EventID
REGEX = (.+)
FORMAT = EventCode::$1

[EventID2_as_EventCode]
REGEX = <EventID.*?>(.+?)<\/EventID>.*
FORMAT = EventCode::$1

And in props.conf both transforms are called:

REPORT-EventCode_from_xml = EventID_as_EventCode, EventID2_as_EventCode

However, I have never seen anyone mention this issue, so is it because of my log? My log is XML WinEventLog like this:

<Event xmlns='http://schemas.microsoft.com/win/2004/08/events/event'>
  <System>
    <Provider Name='Microsoft-Windows-Security-Auditing' Guid='{68ad733a-0b7e-4010-a246-bad643c2e4c1}' />
    <EventID>4688</EventID>
    <Version>2</Version>
    <Level>0</Level>
    <Task>13312</Task>
    <Opcode>0</Opcode>
    <Keywords>0x8020000000000000</Keywords>
    <TimeCreated SystemTime='2025-05-30T10:55:19.179279400Z' />
    <EventRecordID>25849216</EventRecordID>
    <Correlation />
    <Execution ProcessID='4' ThreadID='7780' />
    <Channel>Security</Channel>
    <Computer>ABCD-DE01.company.domain</Computer>
    <Security />
  </System>
  <EventData>
    <Data Name='SubjectUserSid'>S-1-5-18</Data>
    <Data Name='SubjectUserName'>ABCD-DE01$</Data>
    <Data Name='SubjectDomainName'>COMPANY.DOMAIN</Data>
    <Data Name='SubjectLogonId'>0x3e7</Data>
    <Data Name='NewProcessId'>0x1c48</Data>
    <Data Name='NewProcessName'>C:\Windows\System32\net1.exe</Data>
    <Data Name='TokenElevationType'>%%1936</Data>
    <Data Name='ProcessId'>0x2a2c</Data>
    <Data Name='CommandLine'>C:\Windows\system32\net1 accounts</Data>
    <Data Name='TargetUserSid'>S-1-0-0</Data>
    <Data Name='TargetUserName'>-</Data>
    <Data Name='TargetDomainName'>-</Data>
    <Data Name='TargetLogonId'>0x0</Data>
    <Data Name='ParentProcessName'>C:\Windows\System32\net.exe</Data>
    <Data Name='MandatoryLabel'>S-1-16-16384</Data>
  </EventData>
</Event>

The result is that settings further down that rely on EventCode cannot match it, like this one:

EVAL-process_name = if(EventCode=4688, New_Process_Name, Process_Name)
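One workaround I am considering is overriding the REPORT- setting in a local props.conf so that only the XML-based transform runs; a minimal sketch, assuming the TA applies these settings under an XmlWinEventLog sourcetype stanza (the stanza name is my assumption):

# Splunk_TA_windows/local/props.conf (sketch; stanza name assumed)
[XmlWinEventLog]
# local overrides default, so only one transform extracts EventCode
REPORT-EventCode_from_xml = EventID2_as_EventCode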
Hi @heathramos

You mentioned that you can ping the Splunk server and you are sure port 8000 is open, but could you please confirm you can reach the Splunk server from the DNS server by accessing https://SPLUNK-SERVERNAME:8000/en-us/custom/splunk_app_stream/ from the DNS server?

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing
I want to use Stream to forward DNS to Splunk but I am having trouble with the initial configuration.

Info:
- Running Splunk Enterprise on an on-prem Windows server. DNS servers are Windows DCs.
- Installed the Stream app and add-on on the Splunk Enterprise server; the add-on is installed on the Windows DCs.

Troubleshooting:
- When I go into the Stream app, it runs the setup and I get an error: Unable to establish connection to /en-us/custom/splunk_app_stream/ping/: End of file. Note: I am able to ping the Splunk server from the DNS server, and port 8000 is open on the Splunk server firewall.
- When I go into Configure Streams, DNS is enabled.
- On the DNS server, /etc/apps/Splunk_TA_stream/local/inputs.conf contains:
splunk_stream_app_location = https://SPLUNK-SERVERNAME:8000/en-us/custom/splunk_app_stream/
- On the DNS server, /etc/apps/Splunk_TA_stream/default/streamsfwd.conf contains:
[streamfwd]
port = 8889
ipAddr = 127.0.0.1
I am trying to get a list of all services that are in APM. The APM usage report does not provide the names, only the number of hosts. I need to know the names of all services that are in APM and be able to export them.
In the documentation <https://help.splunk.com/en/splunk-enterprise/manage-knowledge-objects/knowledge-management-manual/9.3/build-a-data-model/about-data-models>, it says:

Dataset constraints determine the first part of the search through:
- Simple search filters (root event datasets and all child datasets).
- Complex search strings (root search datasets).
- Transaction definitions (root transaction datasets).

In my new data model, I am trying to define a dataset constraint that selects only unique values of the field eventId. eventId is a number, e.g. 123456. My goal is to drop duplicated log lines. Is it possible to define this kind of dataset constraint?
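For example, the effect I am after is a root search dataset along these lines; a sketch where index and sourcetype are placeholders:

index=main sourcetype=my:app:logs
| dedup eventId

i.e. keep only the first event seen per eventId value.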
@PrewinThomas , yes, that's the correct way. I was able to figure it out yesterday.

<form version="1.1" theme="dark">
  <label>Health Log Source Analysis</label>
  <fieldset submitButton="false"></fieldset>
  <row>
    <panel>
      <input type="multiselect" token="selected_index" searchWhenChanged="true">
        <label>Select Index(es)</label>
        <choice value="*">All</choice>
        <fieldForLabel>index</fieldForLabel>
        <fieldForValue>index</fieldForValue>
        <search>
          <query>| rest splunk_server=local /services/data/indexes | fields title | rename title as index | sort index</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
      </input>
      <table>
        <title>Index and Sourcetypes</title>
        <search>
          <query>| tstats values(sourcetype) as sourcetypes dc(host) as hosts_count dc(source) as sources_count where index IN($selected_index$) by index</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
        <option name="drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </table>
    </panel>
  </row>
</form>

That's the query I used, and it worked.
Hello!

If I use streamfwd as a light forwarder, is it possible to use outputs.conf? Could you provide me with your config for this scenario? I can't find information in the documentation...

thanks
Thank you for the example, I will take a look. Also, I tried to do the eval bin, but it would not let me use an if or case statement to set the bin size. Do you have an example?
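One pattern that achieves conditional bucket sizes, since bin's span takes a fixed value rather than a per-event expression, is computing the bucket in eval instead; a sketch, where the bytes field name is a placeholder:

... | eval bucket=if(bytes<1000, floor(bytes/100)*100, floor(bytes/1000)*1000)
    | stats count by bucket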