All Topics



Hello all, I'm seeing strange behavior from a Python script. The script collects some AWS data. If I run it as-is, it works nicely. If I run it through Splunk's Python (bin/splunk python), it also runs. But if I schedule it, I get errors related to AWS access. Especially with the second try (via Splunk's Python) I had the impression that I could "emulate" how it would run once enabled, but it appears that is not the case. Is there any other way to troubleshoot this?
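One common cause of this symptom is that splunkd launches scheduled scripts with a stripped-down environment, so credential lookups that rely on HOME, AWS_PROFILE, or AWS_SHARED_CREDENTIALS_FILE can fail even though the script works interactively. A minimal sketch for checking what a scheduled run actually inherits (the variable list here is an assumption about which settings your AWS SDK consults):

```python
import os
import sys

def aws_env_report(environ=os.environ):
    """Return the AWS-related variables visible to this process."""
    keys = ("HOME", "USER", "AWS_PROFILE", "AWS_ACCESS_KEY_ID",
            "AWS_SHARED_CREDENTIALS_FILE", "AWS_DEFAULT_REGION")
    return {k: environ.get(k, "<unset>") for k in keys}

if __name__ == "__main__":
    # Writing to stderr from a scripted input surfaces in splunkd.log,
    # so you can compare the scheduled environment with your shell's.
    for key, value in aws_env_report().items():
        print(f"{key}={value}", file=sys.stderr)
```

Running this both interactively and as the scheduled input, then diffing the two reports, usually pinpoints the missing credential source.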
I'm using an AWS setup with an ELB sending constant health-check messages to my Apache server. Splunk is indexing the access logs, and 37% of them are this unwanted health-check data. I want to blacklist this incoming data, so I added the configuration below to splunkd's inputs.conf, but it is not working even after a splunkd restart:

[monitor://]
blacklist1 = Response_code=200 && "healthcheck"

Please provide a solution for how blacklisting of incoming traffic with Response_code=200 and message="healthcheck" should be configured.
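For context, blacklist in an inputs.conf [monitor://] stanza filters file paths, not event content, which would explain why it has no effect here. Discarding events by content is usually done at parse time by routing matches to nullQueue in props.conf and transforms.conf. A sketch, assuming the sourcetype is access_combined (the stanza names and regex are placeholders to adjust, e.g. tighten the regex to also require the 200 status):

```
# props.conf
[access_combined]
TRANSFORMS-drop_healthcheck = drop_healthcheck

# transforms.conf
[drop_healthcheck]
REGEX = healthcheck
DEST_KEY = queue
FORMAT = nullQueue
```

This must go on the indexers (or heavy forwarders), since universal forwarders do not parse events.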
I'm using Splunk Enterprise. I need information on the ways and methods we can use to optimize our Splunk license. For example: in my log files I'm receiving health-check messages frequently, and I don't want to log these messages in my indexes. In order to stop incoming data with "Healthcheck" in the message and a response code of 200, I'm blacklisting the messages in splunkd. Similarly, what other methods are there to optimize Splunk logs and make maximum use of the Splunk license?
We keep finding the frequent error message that the search head cannot connect to the indexer: "Unable to distribute to peer named xx.xx.xx.xx:8089 at uri=xx.xx.xx.xx:8089 using the uri-scheme=https because peer has status=Down". It happens from time to time. Any search head can show that error message, and a SH can have the connection issue with any of the indexers in the cluster (it is not restricted to a particular indexer). In the worst case, the SH reports the error for all the indexers and causes a service outage, but after some time, without doing anything, the service comes back to normal. CPU and memory are normal even while the error message is occurring.
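As a starting point for troubleshooting (a sketch, not a definitive diagnosis), the internal logs can show whether the errors cluster around specific times or hosts, which often points at network blips, indexer restarts, or load spikes:

```
index=_internal sourcetype=splunkd "Unable to distribute to peer"
| timechart span=5m count by host
```

Correlating the spikes with indexer-side events in _internal around the same timestamps usually narrows down the cause.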
I am new to splunk and from construction background. challenging myself to do something new. How can you learn, understand and master the search commands in splunk?
In our on-prem Splunk cluster, I am attempting to follow these steps in "Enable the peer nodes":

Enable the peer
To enable an indexer as a peer node:
1. Click Settings in the upper right corner of Splunk Web.
2. In the Distributed environment group, click Indexer clustering.
3. Select Enable indexer clustering.

This takes me instead to a page called "Clustering: Search Head" where there is no option to "Enable indexer clustering" nor to add a new indexer to a given cluster. I am assuming something is not right with our cluster configuration. Any idea how to fix it?

P.S. Context: the goal is to add a new indexer to the existing cluster from scratch (it's a freshly deployed CentOS 7 VM with no data and installed-but-not-yet-started Splunk Enterprise). However, I am having difficulties following the available documentation, given that its steps lead me to unexpected results, like the one described above. Thanks!
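If the UI keeps landing on the search-head clustering page, the peer role can also be set directly in configuration before starting Splunk. A minimal server.conf sketch for the new indexer, where the manager host, secret, and replication port are placeholders to fill in (newer releases prefer the spellings mode = peer and manager_uri, so check the configuration reference for your version):

```
[clustering]
mode = slave
master_uri = https://<manager_host>:8089
pass4SymmKey = <cluster_secret>

[replication_port://9887]
```

After a restart, the instance should report in on the manager node's Indexer Clustering page.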
While PyDen-manager and PyDen do not work with Splunk 8, I have no issue opening both apps and seeing their launch page with various tabs (the app GUI). On my Splunk 7.3, I can build CPython and virtual environments with no issue and do all the pip installs, but I cannot launch the PyDen GUI. I have had this issue from the beginning. I compared the PyDen dirs/files on both Splunk instance servers (owner/mode etc.), and they look the same. I did multiple upgrades, removed the PyDen dir and did a fresh install, and also restarted splunkd. Nothing worked. The strange thing is that when launching PyDen, it launches another one of the installed apps; if I disable that app, it loads yet another app. I wanted to upload a picture for you to see. I cannot figure out what I did wrong. Please help.
Hi there, we now have a service that provides us with a threat intel list. However, to access that URL, we need to pass an API key. Can someone suggest how I could get this sorted? Has someone done this before? Thanks.
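One way to approach this is a small scripted/modular input that fetches the feed with the key in a request header and emits a CSV Splunk can use as a lookup. This is a hypothetical sketch: the URL, the "X-API-Key" header name, and the one-indicator-per-line response format are assumptions, so check your provider's documentation:

```python
import csv
import io
import urllib.request

FEED_URL = "https://intel.example.com/v1/indicators"  # placeholder URL

def fetch_feed(url, api_key):
    """Download the feed, passing the API key as a request header."""
    req = urllib.request.Request(url, headers={"X-API-Key": api_key})
    with urllib.request.urlopen(req, timeout=30) as resp:
        return resp.read().decode("utf-8")

def to_lookup_csv(raw_text):
    """Turn one-indicator-per-line text into a lookup-style CSV,
    skipping blank lines and # comments."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["indicator"])
    for line in raw_text.splitlines():
        line = line.strip()
        if line and not line.startswith("#"):
            writer.writerow([line])
    return buf.getvalue()

if __name__ == "__main__":
    import os
    # Keep the key out of the script; pull it from the environment or,
    # better, from Splunk's credential storage in a real add-on.
    body = fetch_feed(FEED_URL, os.environ["INTEL_API_KEY"])
    print(to_lookup_csv(body))
```

In a production add-on you would store the key with Splunk's secure credential storage rather than an environment variable; the structure above is just the fetch-and-transform skeleton.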
I created a web app that runs in Splunk. It is installed in $SPLUNK_HOME/etc/apps. The user can enter some data in a form, and the data is then saved in the KV Store. The data is sent to the KV Store using the JavaScript fetch() method. This is what a POST request looks like:

fetch("my_kv_store_url", {
    method: "POST",
    mode: "cors",
    credentials: "include",
    headers: {
        "Content-Type": "application/json",
    },
    body: JSON.stringify(formData),
})

The data is successfully added to the KV Store in Splunk 8+, which has a server.conf that includes these lines:

[httpServer]
crossOriginSharingHeaders = X-Splunk-Form-Key, Authorization, Accept, Content-Type, User-Agent, Referer

However, when I run the app in Splunk 7.x, the POST requests are blocked with this error: "Request header field content-type is not allowed by Access-Control-Allow-Headers in preflight response". Splunk 7 does not support the crossOriginSharingHeaders setting in server.conf. How can I make this work in Splunk 7.x?
I have an eval query. The details object returned looks like this:

{
    status: 404,
    code: ERROR
}

"details.status"=404
| eval detailsStatus=details.status
| table detailsStatus

detailsStatus never has a value in the table, though. What am I doing wrong?
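A likely cause: in eval, a bare dot is the string-concatenation operator, so details.status is parsed as the concatenation of two (nonexistent) fields named details and status, not as the field "details.status". Field names containing dots need single quotes on the right-hand side of eval. A sketch of the corrected query:

```
"details.status"=404
| eval detailsStatus='details.status'
| table detailsStatus
```

Single quotes mean "treat this as a field name"; double quotes would produce the literal string instead.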
We plan to upgrade Splunk to version 8, but we are not sure if this app is compatible with v8 and Python 3.x.
I can't see the custom panels created in the Splunk App for Jenkins. If I create a new one, I see all the old custom panels, but when I log in again, all of the previously created custom panels disappear again. I can only see them all again if I create a new custom panel. Any help here is appreciated.
I am trying to use Splunk's HTTP Event Collector (HEC) to ingest data. I noticed that the HEC tokens' status is disabled. How can I enable this to use HEC? Also, I don't see an option to enable HEC. I am using Splunk Free on Splunk Cloud; I don't have a license and I didn't download anything.
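For reference, on a Splunk Enterprise instance you control, HEC is enabled globally and per token under Settings > Data Inputs > HTTP Event Collector > Global Settings, which corresponds to inputs.conf stanzas like the sketch below (the token name, value, and index are placeholders). On Splunk Cloud, these settings are typically managed through the UI or by Splunk support rather than by editing files:

```
[http]
disabled = 0

[http://my_hec_token]
token = <token-guid>
disabled = 0
index = main
```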
If we are only adding an event hub input using the Microsoft Azure Add-on for Splunk, do we need to include account information on the Configuration tab? We haven't entered any account information there and are only using an event hub input, but we aren't seeing any data come in.
We have a custom Python REST endpoint that uses the OpenSSL module for some crypto functions. It works fine when we run Splunk with the Python version set to Python 2, but fails with a "No module named 'OpenSSL'" error when we set the version to Python 3. So it looks like the OpenSSL module is not included in Splunk's Python 3? Am I missing something? Are there any plans to add it?
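The 'OpenSSL' module comes from the third-party pyOpenSSL package, which is separate from Python's standard library; if Splunk's bundled Python 3 does not ship it, one workaround is to see whether the standard library covers your use case. A sketch, assuming the crypto functions needed are hashing and TLS context setup (for certificate generation or signing you would still need an external package):

```python
import hashlib
import ssl

def sha256_hex(data: bytes) -> str:
    """Stdlib replacement for a pyOpenSSL-based digest."""
    return hashlib.sha256(data).hexdigest()

def default_tls_context() -> ssl.SSLContext:
    """Stdlib client-side TLS context with sane defaults
    (certificate verification and hostname checking enabled)."""
    return ssl.create_default_context()
```

Another option is bundling a pure-Python dependency inside your app's bin/ directory so Splunk's interpreter can import it, since installing into Splunk's own site-packages is generally discouraged.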
I am trying to do something like this:

| stats limit=10 min(Speed) by customer

or

| sort customer, speed
| head(10) by customer
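Neither form is valid SPL, since stats has no per-group limit and head takes no by clause, but a common pattern gets the same "top N rows per group" result: sort, then rank rows within each group using streamstats. A sketch, assuming Speed is the field to rank by in ascending order:

```
| sort 0 customer Speed
| streamstats count as rank by customer
| where rank <= 10
```

The 0 in sort lifts the default 10,000-row limit so every event is ranked before filtering.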
I have a field called CARDFILOGO, and I want to search it for values that start with "JU" and end in numbers. I do not want JU followed by letters, so a wildcard won't work. What's the best way to get these?
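A regular expression handles the "starts with JU, then only digits" requirement where wildcards cannot. A sketch using the regex command (where match(...) works the same way), assuming the whole value is JU followed by digits; loosen the pattern to ^JU.*\d$ if other characters may appear in between:

```
... | regex CARDFILOGO="^JU\d+$"
```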
I am currently working on the Splunk Fundamentals course and got to Module 3. My Mac won't install Splunk 8.0.3 because Apple can't check it for malicious content. How do I get around this? Will Splunk provide an update that works with macOS 10.15? I have until June 21 to complete the course.
Need assistance figuring out why we are receiving multiple email alerts. We are trying to set up email alerts for Office 365 Service Messages.

Search string:

index="o365data" sourcetype="o365:service:message" Id=*
| where Classification == "Incident" AND Severity == "Sev2"
| spath Messages{} output=Messages
| spath WorkloadDisplayName
| spath Id
| spath Status
| stats values(WorkloadDisplayName) as WorkloadDisplayName values(Id) as Id values(Status) as Status by Messages
| spath input=Messages
| eval PublishedTime=strptime(PublishedTime, "%Y-%m-%dT%H:%M:%S.%NZ")
| eval CorrectPublished=PublishedTime+25200
| where MessageText != "A post-incident report has been published."
| stats count by CorrectPublished Id WorkloadDisplayName MessageText Status
| sort - CorrectPublished
| dedup CorrectPublished
| fields - count
| eval CorrectPublished=strftime(CorrectPublished,"%Y/%m/%d %T")
| fields - PublishedTime
| dedup Id
| table CorrectPublished Id WorkloadDisplayName MessageText Status
| rename CorrectPublished as "Published", Id as "ID", WorkloadDisplayName as "Workload", MessageText as "Details"

Alert settings:
Alert type: Real-time
Expires: 48 hour(s)
Trigger alert when: Per-Result
Throttle: yes
Suppress results containing field value: *
Suppress triggering for: 2 minute(s)
When triggered: Send email

Emails Received
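One hedged suggestion: a real-time per-result alert fires once per matching result, and throttling on a wildcard field value for only 2 minutes lets the same incident re-trigger on every update, so duplicates are expected with these settings. Converting to a scheduled alert and throttling on the message Id for a longer window is a common fix. A savedsearches.conf sketch (the stanza name and schedule are placeholders; the same settings are available in the UI's alert throttling options):

```
[O365 Service Message Alerts]
cron_schedule = */15 * * * *
alert.suppress = 1
alert.suppress.fields = Id
alert.suppress.period = 24h
```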
Hi, I have a weird problem: I have a standalone search head, but the Monitoring Console is not available. This standalone SH was earlier part of a SH cluster, which is now broken due to some other issues. I can confirm there is no cluster setting, as I do not see the search head clustering option under Distributed environment. What could be the reason for not seeing the MC on this standalone search head?