All Posts

You don't define the HEC input on the CM, and the CM will not be ingesting your HEC events. You define the HEC input with the CM in an app which is pushed to the indexers - either just a token, if you already have the general HTTP input enabled, or the whole thing: enable the HTTP input, define TLS parameters, and define a token. For maintainability I'd split it into two separate apps: one with the general HEC input settings, another with the token definition for your particular input needs. So the only thing you should do (as with pretty much everything regarding configuring your indexers) is create an app in the manager-apps directory and push it with a new configuration bundle to your indexers. I'm not sure whether enabling the HTTP input in general (in case you didn't have one already) will require a rolling restart of your indexers. That's why I prefer a dedicated HF layer in front of the indexers, so that changes to the ingestion and parsing process don't cause indexer restarts - but that's a topic for another time.
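A minimal sketch of that two-app split on the CM, assuming hypothetical app names hec_base and hec_token_myinput (the token value is a placeholder you would generate yourself):

# $SPLUNK_HOME/etc/manager-apps/hec_base/local/inputs.conf
# general HEC settings, shared by all tokens
[http]
disabled = 0
enableSSL = 1
port = 8088

# $SPLUNK_HOME/etc/manager-apps/hec_token_myinput/local/inputs.conf
# one token definition per input need
[http://myinput]
disabled = 0
token = <generated-token-guid>
index = main

After pushing the bundle (Settings -> Indexer Clustering -> Distribute Configuration Bundle, or splunk apply cluster-bundle on the CM), each peer ends up with the same token.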
In my case it is Data Manager, or possibly Lambda. There is no inputs.conf in either case.
Hi @PickleRick, thanks for the options. I think we are going with the first option: we are going to create an AWS ELB for load balancing. But I am confused about how to create the HEC token on all indexers with the same token. My approach is this; please correct me if I am wrong. First, configure HEC on the cluster manager. Then take that inputs.conf from etc/apps/http_input/local/inputs.conf and paste it under etc/manager-apps/hec_input/local/inputs.conf. I won't enable HTTP on the CM ([http] stays disabled = 1), but [http://inputname] will be enabled (disabled = 0) - will the cluster manager still index HEC data (which it should not do)? Then push this configuration (the app with inputs.conf) through a configuration bundle so that each indexer receives this inputs.conf. Additionally, do I need to go to each indexer and set [http] disabled = 0 to enable HEC on each one in order to receive data? Please confirm and correct me if I am wrong anywhere.
As usual, you have a relatively simple-sounding problem which might not turn out to be so simple. As @marnall already pointed out, the HEC input is just an input on the component you define it on. So you have multiple options here:
1) Create a HEC input on each indexer using the same HEC token and let the sources load-balance their requests between the receivers. This requires sources which can actively do load balancing, though.
2) Deploy an external HTTP LB which will distribute HTTP requests to HEC inputs defined on the indexers (again, with the same HEC token).
3) Create a HF with a HEC input which will receive the data and distribute it to the indexers using the normal S2S load-balancing mechanisms.
4) Create multiple HFs with HEC inputs and either load-balance between them at the source or set up an HTTP LB in front of them.
Each of those scenarios has its own pros and cons (simplicity, cost, maintainability, resilience).
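For option 3, the HF side is ordinary forwarder output configuration; a sketch, assuming hypothetical indexer hostnames idx1 and idx2 listening on the default S2S port:

# $SPLUNK_HOME/etc/system/local/outputs.conf on the HF
[tcpout]
defaultGroup = my_indexers

[tcpout:my_indexers]
server = idx1.example.com:9997, idx2.example.com:9997

The HEC input itself is then defined only on the HF, and the forwarder's automatic load balancing spreads the events across both indexers.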
@marnall Where should I create the HEC token through the web interface - on the cluster manager or the deployment server? And do we need to copy the inputs.conf which is generated initially to each of the indexers? And once we copy it, do we need to remove the data input that was created initially? Because if we don't remove it, data will be indexed on that component as well, right? Please confirm.
Hi @arusishere, the issue with Splunk DB Connect appears to be related to an authentication mismatch between Splunk and your SQL Server. Can you please confirm: when you created an identity for authentication, did you set up Windows authentication (Domain/User/Password) rather than just SQL authentication (User/Password)? It seems your DB server is set up for Windows authentication only. Based on the docs, you also need the Splunk DBX Add-on for Microsoft SQL Server JDBC, which I presume has been installed? (See the install docs.)
Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
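For reference, a connection that uses Windows authentication with the Microsoft JDBC driver generally carries the integrated-security flag in the JDBC URL; a sketch, where the host, port, and database name are placeholders (DB Connect builds this URL for you when you pick the Windows Authentication connection type, so treat it as illustrative rather than the exact string to paste):

jdbc:sqlserver://dbhost.example.com:1433;databaseName=mydb;integratedSecurity=true

The "driver is not configured to perform integrated authentication" error typically means the driver's native authentication library isn't loadable by the JRE that DB Connect runs, which is exactly what the DBX Add-on for Microsoft SQL Server JDBC is meant to supply.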
Hi @hk_baek, the Community is the right site for questions! Let me know if I can help you more, or please accept one answer for the benefit of other Community members. Ciao and happy splunking, Giuseppe. P.S.: Karma Points are appreciated
Thank you for your response. I understand that using a GPU is not mandatory, but I've heard that it can significantly improve performance when running deep learning frameworks like TensorFlow. That's why I was trying to find recommended GPU server specifications for using DSDL, but I couldn't find any official guidance, so I wanted to ask here.
Hello, we also have the problem of increased swap usage.
OS: RHEL 9.5
RAM: 32GB
SWAP: 16GB
SPLUNK: 9.4.1

# free -m
               total        used        free      shared  buff/cache   available
Mem:           31837        6853         358           0       24953       24984
Swap:          16383       16292          91
@Huckleberry If you cannot access the Splunk SOAR web interface using either IP address shown by ifconfig, the most likely cause is a network configuration issue between your host machine and the Amazon Linux 2 VM. Are you able to SSH into the VM from your host machine? If so, which IP are you using? You should be able to access SOAR on the same IP. The other thing that might be worth checking is any firewall rules on the VM. Run the following to see what rules are set; if it fails, the firewall likely isn't enabled and so shouldn't be the issue (firewalld is the default for Amazon Linux, I believe):

sudo firewall-cmd --list-all

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
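If the VM is on VirtualBox's default NAT network, its guest IPs are not reachable from the host at all; you'd either switch the adapter to Bridged or add a port-forward rule. A sketch, assuming a hypothetical VM name soar-vm and an assumed SOAR HTTPS port of 8443:

# forward host port 8443 to guest port 8443 (run on the host while the VM is powered off)
VBoxManage modifyvm "soar-vm" --natpf1 "soar-https,tcp,,8443,,8443"

# then from the host, browse to https://localhost:8443
# and inside the VM, confirm SOAR is actually listening:
curl -k https://127.0.0.1:8443

The curl from inside the VM separates "SOAR isn't up" from "the network path is broken".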
Hi @hk_baek, to my knowledge the minimum reference hardware is the normal reference for Splunk: 12 CPUs, 12 GB RAM, and 300 GB of disk. Then you should tune your installation to see if these specifications are sufficient for your intended use. Obviously, this reference is for using DSDL without other premium apps such as ES or ITSI. You can find all the information and documentation at https://splunkbase.splunk.com/app/4607 Ciao. Giuseppe
Dear Splunk Community, I need some advice on how to get DB Connect configured. I'm hitting a brick wall trying to get it up and running. I believe I have done the driver installs, database connection settings, JDK install, and environment variables correctly, and I have gotten to the point where we can see login errors in the SQL Server logs - so I know the servers are attempting to communicate.

Here is the system setup:
Splunk OS: Windows
Splunk Version: 9.0.9
JDBC Drivers installed: 12.4
Connection settings: Tried both MS-SQL Generic and Windows Authentication
Database OS: Windows Server 2016 (SQL 2019)

Errors received from different attempts:
- Login failed for user xxx (on Splunk)
- This driver is not configured to perform integrated authentication. (on Splunk)
- Login failed for user <username>. Reason: An attempt to login using SQL authentication failed. Server is configured for Windows authentication only. (on the Windows SQL server)

My resources: https://lantern.splunk.com/Splunk_Platform/Product_Tips/Extending_the_Platform/Configuring_Splunk_DB_Connect and the labyrinth of documentation Splunk provides, which I have gone through (it's overwhelming). Oddly enough, a friend with a very similar environment is having the same issue. Any advice would be much appreciated. And no, I will not install the jTDS drivers that some people recommended - they're open source and 10+ years old, and Splunk has provided procedures and documentation that should work.

Thanks for your help. First time posting!

Kind Regards,
Hello, I'm planning to install and use the Splunk App for Data Science and Deep Learning (DSDL) in a closed network environment. I'm considering use cases involving deep learning and LLM-RAG architectures. Could you please share the minimum server specifications for testing, as well as the recommended specifications for production?
Dear Team, we have obtained the ITSI installation package "splunk-it-service-intelligence-4193.spl" and installed it according to the installation guide on the official website: https://docs.splunk.com/Documentation/ITSI/4.20.0/Install/Install. In the end, the Splunk Enterprise platform only has the ITEM app. What is the reason for this? Please provide technical support. Thank you.
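One quick way to see what the package actually installed (a sketch; the path assumes a default Linux install, and the exact app directory names vary by ITSI version):

ls $SPLUNK_HOME/etc/apps | grep -i -e itsi -e itoa

To my knowledge ITSI unpacks into several apps and add-ons (itsi, SA-ITOA, and various DA-ITSI-* modules), so if only a single app shows up, the .spl may not have fully extracted, or a restart may still be pending.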
Hello, I set up an Amazon Linux 2 virtual machine in VirtualBox and successfully installed Splunk SOAR. I am trying to log into the web interface. The documentation says to go to the IP address that I assigned to the Splunk SOAR using the custom HTTPS port. I know that I am using the correct port. When I run ifconfig, I see two IP addresses. I tried both with the port I chose for Splunk, but neither is working, and my browser says that the site cannot be reached. Any help would be appreciated.
The HTTP Event Collector won't do load balancing itself, so you will need to set up a load balancer in front of the indexers.

One way you could set up the HEC token is to take a Splunk server with a web interface (probably not the indexers), go to Settings -> Data inputs -> HTTP Event Collector, then click the "New Token" button. Go through the menu specifying your desired input name, sourcetype, index, etc. This will generate an inputs.conf stanza for the HTTP input. You can then open the inputs.conf file and copy this stanza to each of your indexers to ensure they have the same token. (The remaining instructions assume your indexers are running Linux.)

For me, the inputs.conf file was generated in /opt/splunk/etc/apps/launcher/local, because I went to the HTTP Event Collector web interface from the main Splunk Enterprise screen. The stanza will look like this (with different values, of course):

[http://inputname]
disabled = 0
host = yourhostname
index = main
indexes = main
source = inputsourcetype
token = fe2cfed6-664a-4d75-a79d-41dc0548b9de

Of course, you should change the host value for each indexer, or remove the host line so that the host value is decided on startup.

Then, create a new file on each indexer at /opt/splunk/etc/apps/splunk_httpinput/local/inputs.conf containing this text:

[http]
disabled = 0

This will enable the HTTP Event Collector on the indexers. You can check that the HTTP event listener is opening the port on the indexer by using netstat:

netstat -apn | grep 8088
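Once the port is open, a quick end-to-end test of the token (a sketch; the hostname and token are placeholders, and -k skips certificate verification for a self-signed cert):

curl -k https://idx1.example.com:8088/services/collector/event -H "Authorization: Splunk fe2cfed6-664a-4d75-a79d-41dc0548b9de" -d '{"event": "hec smoke test", "sourcetype": "manual"}'

A {"text":"Success","code":0} response means the token was accepted and the event was queued for indexing.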
Assuming that you are able to edit the inputs.conf file, and that you have a definite value for env, service, and custom for each input stanza, then you could add meta tags to the input stanzas:

_meta = env::<env value> service::<service value> custom::<custom value>

I don't know if this works the same way with OTel collectors.
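A hedged illustration of a complete stanza (the monitor path and the tag values here are made up):

[monitor:///var/log/myapp/app.log]
index = main
sourcetype = myapp
_meta = env::prod service::checkout custom::team-a

Since _meta values become indexed fields, they can be searched directly, e.g. index=main env::prod.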
When you don't include the UID, are there any differences in the field values? What pattern do you see in how it adds artifacts to containers? E.g. are there specific fields which determine the container that the artifact gets added to, or does it add artifacts to the most recently created container? Depending on how you would like it to behave, you could throttle the creation of new artifacts by using an outputlookup and a NOT [| inputlookup] clause in the saved search you use to forward events to SOAR, then use a time field to make sure the artifacts and containers are distinct.
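A sketch of that dedup pattern in SPL, where the lookup name sent_to_soar.csv and the event_id field are placeholders for whatever uniquely identifies your events:

index=myindex sourcetype=my_alerts NOT [| inputlookup sent_to_soar.csv | fields event_id]
| outputlookup append=true sent_to_soar.csv

Events whose event_id is already in the lookup are excluded by the subsearch, so each one is forwarded to SOAR only once.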
This usually means that something in your playbook is referencing a term that does not exist, like a misnamed block or a nonexistent datapath. If you are certain that the error originates from this Splunk app block, then you could try setting all of the inputs to formatted text (as you did with the query input) so that SOAR does not interpret them as datapaths.
The first thing to check is splunkd.log on the problematic (sending) machine. It should tell you whether the connection is established at all, or whether it's being actively rejected, or anything else.
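A sketch of where to look on a default Linux install (TcpOutputProc is the usual splunkd component for forwarding connection messages):

grep -i "TcpOutputProc" /opt/splunk/var/log/splunk/splunkd.log | tail -20

On a universal forwarder the path is /opt/splunkforwarder/var/log/splunk/splunkd.log instead.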