All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


I want to send custom logs to Splunk Enterprise from Apigee API proxy. I have installed the trial version of Splunk Enterprise. I am following the method with HEC token explained in this article: https://community.splunk.com/t5/Getting-Data-In/How-to-connect-Apigee-Edge-to-Splunk/m-p/546923. However, I am unable to send logs to Splunk. Any help in this regard will be appreciated.
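A quick way to isolate whether the problem is on the Splunk side or the Apigee side is to exercise the HEC endpoint directly. The sketch below only builds the request so its shape can be inspected; the host, port, and token are placeholders (assumptions), not values from the post, and actually sending it requires a reachable HEC endpoint:

```python
# Minimal sketch of the HEC request shape; host and token are placeholders.
import json
import urllib.request

def build_hec_request(host, token, event):
    """Build (but do not send) an HTTP Event Collector request."""
    url = f"{host}/services/collector/event"  # standard HEC event endpoint
    body = json.dumps({"event": event, "sourcetype": "apigee:proxy"}).encode()
    return urllib.request.Request(
        url,
        data=body,
        headers={
            "Authorization": f"Splunk {token}",  # HEC expects this header form
            "Content-Type": "application/json",
        },
    )

req = build_hec_request(
    "https://splunk.example.com:8088",
    "00000000-0000-0000-0000-000000000000",
    "apigee test event",
)
print(req.full_url)
```

Sending the same payload with curl -k against https://&lt;splunk-host&gt;:8088/services/collector/event and checking for a {"text":"Success","code":0} response is a common first test; a 403 usually points at the token, while a connection failure points at the network path or the SSL settings on the HEC input.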
Hi, I have the botsv1 dataset uploaded in a simulated Splunk environment, but when I search "index=botsv1" it returns 0 events. I can see the dataset in the apps folder, and the index also appears under Settings > Indexes, yet nothing can be found using the keyword botsv1. I have tried various search options, but all failed. Please help me. Thanks in advance.
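One common cause worth ruling out (an assumption, since the post doesn't show the time range used): the BOTS v1 data is historical, so the default "Last 24 hours" time picker returns nothing even when the index is populated. Searching over all time shows whether events are actually there:

```spl
index=botsv1 earliest=0
```

If that still returns nothing, `| eventcount summarize=false index=botsv1` reports the raw event count per indexer without any time-range filtering.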
Hello, I just created a new index on the cluster master for a newly integrated log source, but I cannot find this new index on the heavy forwarders when configuring it as a new data input. Any recommendations for this situation?
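A likely explanation (hedged, since the post doesn't show the configs): an index created on the cluster master is pushed only to the indexer peers, so the heavy forwarders never learn about it and their data-input UI can't list it. Specifying the index name directly in the input stanza on the heavy forwarder works even when the index exists only on the indexers; the path and names below are illustrative placeholders:

```conf
# inputs.conf on the heavy forwarder
[monitor:///var/log/newsource.log]
index = new_index_name
sourcetype = newsource
```

Alternatively, defining a matching indexes.conf stanza locally on the heavy forwarder makes the index appear in its UI dropdown; the data is still stored on the indexer peers.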
I have a visualization in Splunk Search > Visualization. I want to add this visualization as a Splunk dashboard panel. How do I do it?
When one of the indexers fails, I have a problem with the growth of buckets on the working indexer. If one of the indexers is not available, should I change the bucket policy? I currently have RF:2 SF:2
Trying to connect an independent stream forwarder to the Stream forwarder Search Head.

2023-12-27 13:52:47 ERROR [140302542051072] (HTTPRequestSender.cpp:1459) stream.SplunkSenderHTTPEventCollector - (#1) Failing over to disk
2023-12-27 13:52:48 WARN [140302542051072] (HTTPRequestSender.cpp:717) stream.SplunkSenderHTTPEventCollector - (#2) Resetting blocked connection
2023-12-27 13:52:48 WARN [140302542051072] (HTTPRequestSender.cpp:717) stream.SplunkSenderHTTPEventCollector - (#3) Resetting blocked connection
2023-12-27 13:52:48 WARN [140302542051072] (HTTPRequestSender.cpp:1485) stream.SplunkSenderHTTPEventCollector - (#2) TCP connection failed: Operation canceled
2023-12-27 13:52:48 WARN [140302542051072] (HTTPRequestSender.cpp:1485) stream.SplunkSenderHTTPEventCollector - (#3) TCP connection failed: Operation canceled

With SSL enabled on the Search Head: same error. With SSL disabled: same error. What could be the problem?

[Independent stream forwarder] inputs.conf:

[streamfwd://streamfwd]
splunk_stream_app_location = http://192.168.0.111:8000/en-us/custom/splunk_app_stream/

streamfwd.conf:

[streamfwd]
port = 8889
ipAddr = 192.168.0.112

[Stream forwarder Search Head] splunk_httpinput/local/input.conf:

[http://streamfwd]
disabled = 0
token = 5aa1c706-2bcd-4f90-857b-636c8afab1f5
index = streamfwd
indexes = streamfwd
sourcetype = stream

[http]
disabled = 0
port = 8088
enableSSL = 1

* Both the Search Head and the independent stream forwarder run Linux (CentOS 7).
I installed Splunk on RHEL 8.9 and set it up to start at boot; however, Splunk does not automatically run after a reboot. I followed the instructions in this document: Configure Splunk Enterprise to start at boot time - Splunk Documentation
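On RHEL 8 a common culprit (an assumption, since the post doesn't say which init scheme was chosen) is enabling the older init.d-style boot-start on a systemd distribution. A commonly used sequence, assuming $SPLUNK_HOME is /opt/splunk and Splunk runs as the "splunk" user, is:

```shell
# Run as root
/opt/splunk/bin/splunk stop
# Register Splunk as a systemd-managed service for the "splunk" user
/opt/splunk/bin/splunk enable boot-start -systemd-managed 1 -user splunk
# Verify the generated unit (named Splunkd by default) is enabled
systemctl status Splunkd
```

If boot-start was previously enabled in init.d mode, disabling it first (`splunk disable boot-start`) avoids ending up with two conflicting startup mechanisms.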
Hi, I'm running a test setup with some live Kubernetes data and I want to do the following on my indexer:

1) Route all data matching a certain field to a specific index called "gsp" on my indexer.

I have already been playing around with the _MetaData:Index key, which seems to work just fine when applied as a single transform for a certain sourcetype. However, I have multiple sourcetypes.

This is my props.conf:

[kube:container*]
TRANSFORMS-routing = AnthosGSP

This is my transforms.conf:

[AnthosGSP]
REGEX = drnt0-retail-sabbnetservices
DEST_KEY = _MetaData:Index
FORMAT = gsp

However, the routing isn't happening as it should. Please help!! PS: I am a newbie to splunking, so pardon my ignorance. Regards, Yaseen.
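One thing to check (an assumption based on the stanza shown): wildcards such as [kube:container*] are not matched in plain sourcetype stanzas in props.conf; pattern matching applies to source:: and host:: stanzas. Listing each sourcetype explicitly, all pointing at the same transform, is a safe sketch (the sourcetype names below are illustrative):

```conf
# props.conf - one stanza per concrete sourcetype
[kube:container:app1]
TRANSFORMS-routing = AnthosGSP

[kube:container:app2]
TRANSFORMS-routing = AnthosGSP

# transforms.conf - unchanged from the post
[AnthosGSP]
REGEX = drnt0-retail-sabbnetservices
DEST_KEY = _MetaData:Index
FORMAT = gsp
```

Also worth confirming: index-time transforms run only on the first full Splunk instance that parses the data (heavy forwarder or indexer), and the REGEX is matched against _raw, so the literal string must actually appear in the events.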
Hi, when I type the command | inputintelligence mitre_attack, the following error message is displayed: "Error in 'inputintelligence' command: Inputintelligence does not support threat intel at this time". Can you help me solve this problem?
I am working on the Linux-based use cases available in Splunk ESCU. Most of the use cases rely on the Endpoint.Processes data model. When I checked the official Splunk Linux add-on, only 3 source types are mapped to Endpoint (fs_notification, netstat, Unix:Service), whereas the "process" sourcetype is not mapped to any data model. Will adding the "process" sourcetype help in executing the Splunk ESCU queries?
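Whether a sourcetype feeds a data model depends on its tags and CIM field extractions rather than the sourcetype name alone, so the events must carry the tags the Endpoint.Processes constraints expect. A quick way to see what is actually reaching the data model (a generic sketch, not ESCU-specific):

```spl
| tstats count from datamodel=Endpoint.Processes by sourcetype
```

If process data shows a zero count here, mapping the sourcetype into CIM (via eventtypes/tags and field aliases) is what makes the ESCU searches that rely on Endpoint.Processes return results.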
Hi, in my current Splunk instance I have built a lot of workflow actions. I want to upload all my workflow actions into my new second Splunk instance. How do I do that? Thank you a lot.
Merry Christmas, everyone!! I wanted to thank everyone in this community first, because within 3 months as a Jr. Splunk Admin this page has brought me a lot of value. In my end-of-year review, my org asked me to focus on the innovation part of Splunk Cloud in 2024. I'm thinking of:
- digging through apps on Splunkbase
- reading articles
I would love suggestions from you all on how I can add value beyond maintenance tasks: what does it really take, and are there any prerequisites? I know it's a generic question, but any answer would help!! Thank you.
index=123 sourcetype=XYZ type IN ("SERVICE_STOP") | table _time host type _raw is the main query, where we search for hosts where a service stop has been observed. In this scenario we need to exclude a host if a SERVICE_START event is seen for the same host within 10 minutes. Kindly help me with the query. Thanks in advance!!
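One possible shape for this (a sketch to validate against your data; the field names follow the post, while the 600-second window and the streamstats approach are assumptions): sort each host's events newest-first so streamstats can see the event that follows each stop in time, then drop stops that are followed by a start within 10 minutes:

```spl
index=123 sourcetype=XYZ type IN ("SERVICE_STOP","SERVICE_START")
| sort 0 host - _time
| streamstats current=f window=1 last(type) as next_type last(_time) as next_time by host
| where type="SERVICE_STOP"
    AND (isnull(next_type) OR next_type!="SERVICE_START" OR next_time - _time > 600)
| table _time host type _raw
```

Note this simple window=1 version only inspects the single event immediately after each stop; if a host can emit several stops in a row before the matching start, a transaction-based approach (startswith=SERVICE_STOP endswith=SERVICE_START maxspan=10m) may fit better.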
Hi Team, in my requirement, if any Splunk servers fail, ServiceNow incidents need to be created automatically. How do we write the query, and how do we configure the ServiceNow incidents? Please help.
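For the detection half, a common pattern (a hedged sketch; the 600-second threshold is an assumption) is to alert on hosts that have stopped reporting to Splunk:

```spl
| metadata type=hosts index=_internal
| where now() - recentTime > 600
| eval last_seen=strftime(recentTime, "%F %T")
| table host last_seen
```

Saving this as a scheduled alert and attaching the incident-creation alert action provided by the Splunk Add-on for ServiceNow (configured with your ServiceNow instance credentials) is the usual way to open incidents automatically.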
Hi Team,

In my project we need to implement High Availability for the servers below.

Z1 --> L4 --> SearchHead+Indexer single instance only (Dev)
              SearchHead+Indexer single instance only (QA)
              DeploymentServer (QA)
              HeavyForwarder (Prod)
              DeploymentServer (Prod)

App-related servers:

Z2 --> HeavyForwarder (Prod, Dev, QA) individual servers
Z3 --> HeavyForwarder (Prod, Dev, QA) individual servers

The app-related servers above are connected to the deployment server and the SearchHead+Indexer. Note: in my project there is no cluster master.

Please help guide how we implement High Availability for the above servers.
How can I integrate Splunk with the Vectra NDR solution? What is the full path to get a complete integration?
<Summary of Inquiry>
I am working on Netskope API integration with Splunk Enterprise on Windows Server 2022 Datacenter, and I have an inquiry about trouble related to the integration.

<Content of inquiry>
In order to create a Netskope account in Splunk Enterprise, I tried to add a Netskope account in the attached Configuration. However, I received the error message "Error Request failed with status code 500" and was unable to create a Netskope account.

<Troubleshooting Situation>
I have troubleshooted and confirmed the settings by browsing the following sites. Please check.
- I tried to enter telnet 127.0.0.1 8089 with cmd, but nothing came back.
- "disableDefaultPort=true" in "C:\Program Files\Splunk\etc\system\local\server.conf" has been deleted.
- "tools.sessions.timeout=60" in "C:\Program Files\Splunk\etc\system\default\web.conf" is already set.
- "mgmtHostPort = 127.0.0.1:8089" in "C:\Program Files\Splunk\etc\system\default\web.conf" is set.
- The result of running netstat -a is attached. (Only the current communication status of the IP address and port number regarding the Syslog server is shown.)

"About 500 Internal Server Error"
https://community.splunk.com/t5/Splunk-Enterprise/500-Internal-Server-Error-%E3%81%AB%E3%81%A4%E3%81%84%E3%81%A6/m-p/434180
"500 Internal Server Error"
https://community.splunk.com/t5/Security/500-Internal-Server-Error/m-p/477677

<What I need help with>
(1) Even though the management port is set and there is no "disableDefaultPort=true", we think the reason is that 127.0.0.1:8089 is not "Established". What are some possible ways to deal with this?
(2) Are there any other possible causes?
Hi, I have a synthetic script that sometimes ends a run as a "broken job". I see in the documentation that this happens because of an unhandled exception. So I added:

try:
    ...
    wait.until(EC.element_to_be_clickable((By.ID, "username"))).click()
except Exception as e:
    print("The script threw an exception.")

But now the script runs, and if the job hits a timeout exception the job status shows as "success", even though I can see in the script output that it printed "The script threw an exception." How do I make it so that if an exception is thrown the script status shows as failed? Thanks, Roberto
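Catching the exception and only printing it swallows the failure, so the runner sees a clean exit and reports success. Re-raising after logging keeps the message in the script output while still failing the run. A minimal sketch (the wrapper function and its arguments are illustrative, not part of the runner's API):

```python
def click_when_ready(wait, condition):
    """Log a failure for the script output, then re-raise so the run is marked failed."""
    try:
        wait.until(condition).click()
    except Exception as e:
        print(f"The script threw an exception: {e}")
        raise  # propagate so the job status is "failed" instead of "success"
```

In the original script this means keeping the except block but ending it with a bare `raise` (or raising a new exception) instead of returning normally.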
Hi All,

There are 50 zip files in a folder. Inside those zip archives there are many other files (log/txt/png), out of which I want to monitor one specific log file.

Below is the stanza I have written, but it fails to monitor that log file. Please suggest.

[monitor:///home/splunk/*.zip:./WalkbackDetails.log]
disabled = false
index = ziptest
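A monitor stanza can't address a single member inside an archive: when Splunk monitors a .zip it decompresses and indexes the whole archive, and the `:./file` suffix isn't valid monitor syntax. One workaround (a sketch; the destination directory name is an assumption) is to extract just the wanted file from each archive and monitor the extracted copies:

```python
# Extract only WalkbackDetails.log from each zip into a flat directory,
# prefixing each copy with the zip name so 50 archives don't collide.
import glob
import os
import zipfile

def extract_target(zip_dir, target_name, dest_dir):
    """Pull every member named target_name out of the zips in zip_dir."""
    os.makedirs(dest_dir, exist_ok=True)
    extracted = []
    for zip_path in glob.glob(os.path.join(zip_dir, "*.zip")):
        with zipfile.ZipFile(zip_path) as zf:
            for member in zf.namelist():
                if os.path.basename(member) == target_name:
                    prefix = os.path.splitext(os.path.basename(zip_path))[0]
                    out = os.path.join(dest_dir, prefix + "_" + target_name)
                    with zf.open(member) as src, open(out, "wb") as dst:
                        dst.write(src.read())
                    extracted.append(out)
    return extracted
```

The extracted files can then be picked up with a normal stanza such as [monitor:///home/splunk/extracted/*_WalkbackDetails.log] with index = ziptest.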
Hi, everyone! Not everyone starts with a vanilla environment. How do you address your customization needs with agent management? Come check out the existing questions here: Smart Agent FAQ | Custom configuration files, monitors, and extensions. What do you think? Our team would love to hear your thoughts, including how we can add to and improve the FAQ. Please share your impressions, considerations, and questions below. Our Smart Agent FAQ has a lot of information about Smart Agent and related features. We thought you might appreciate this quick way to get to the topics that most interest you, paired with a place to ask questions and expand on your take...