All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi Splunkers,

I'm trying to troubleshoot an issue with Splunk. I have a Splunk heavy forwarder (HF) on the customer side; through it, the customer sends the logs from the Splunk universal forwarders and from the network devices (switches, routers, and firewalls), and this HF forwards the logs to 2 indexers (on my side) over a VPN.

I have noticed that the HF's pipelines are 100% full. I tried increasing the number of pipelines, but with no luck. I checked the "list monitor" command in Splunk:

    /opt/splunk/bin/splunk list monitor | wc -l

which returns 690. I then modified limits.conf on the HF:

    [inputproc]
    max_fd = 900

and restarted Splunk, but nothing is fixed.

In the customer environment there are about 15 firewalls, configured to send syslog to a Splunk heavy forwarder that has syslog-ng installed on it. Recently I have noticed that the firewall logs are delayed when I query them on the SH, even though the syslog messages from the firewalls themselves are received in real time on the heavy forwarder.

I ran the following command from the CLI to check for blockage, and as expected there is a lot of it:

    grep blocked=true /opt/splunk/var/log/splunk/metrics.log*

In the GUI, the "Health Status of Splunkd" panel flags "Ingestion Latency", "Large and Archive File Reader", and "Real-time Reader" (screenshots not reproduced here).

What is the issue that I'm facing? I'm running Splunk 9.0.4 on the HF, CPU: 12, RAM: 16.
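A minimal sketch of a search that can show which queues on the HF are filling up, assuming the standard group=queue fields in metrics.log; the host filter is a placeholder to replace with the HF's hostname:

    index=_internal host=<your_hf> source=*metrics.log* group=queue
    | eval pct_full=round(current_size_kb/max_size_kb*100, 1)
    | timechart span=5m max(pct_full) by name

Seeing which queue (parsing, typing, indexing, tcpout) saturates first usually narrows down where the backpressure starts.
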
We have the same problem as described in https://community.appdynamics.com/t5/Controller-SaaS-On-Premises/Machine-Agent-Http-Listener-not-working/td-p/46694. We followed https://docs.appdynamics.com/appd/22.x/22.2/en/infrastructure-visibility/machine-agent/extensions-and-custom-metrics/machine-agent-http-listener, we send metrics, and we get 204. But no data is displayed in the GUI console: only the new metrics are registered, and no values are shown. We send the data from synthetic jobs using a Python script. Log from my script:

    [INFO] Request sent, body=[{"metricName":"Custom Metrics|WebVitals|LCP","aggregatorType":"AVERAGE","value":9695.514}], responseStatus=204

I have two events and I need to calculate the SLA percentage from the queries below.

Start event query:
    index=x sourcetype=xx "saved msg"
Extracted fields: manid, actionid, batchid

End event query:
    index=y sourcetype=y "received msg"
Extracted fields: manid, actionid
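A minimal sketch of one way to pair the start and end events and compute an SLA percentage, assuming manid and actionid identify a transaction and using an illustrative 5-minute (300 second) SLA threshold:

    (index=x sourcetype=xx "saved msg") OR (index=y sourcetype=y "received msg")
    | eval phase=if(searchmatch("saved msg"), "start", "end")
    | stats min(eval(if(phase="start", _time, null()))) as start_time max(eval(if(phase="end", _time, null()))) as end_time by manid actionid
    | eval duration=end_time-start_time
    | eval met_sla=if(duration<=300, 1, 0)
    | stats sum(met_sla) as met count as total
    | eval SLA_percent=round(met/total*100, 2)

The threshold and the pairing keys are assumptions; adjust them to the actual SLA definition.
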
Hi, I know there are many Splunk add-ons available to collect Azure Monitor metrics, which collect the data using an app ID, client ID, directory (tenant) ID and secret key. My question is how these add-ons actually authenticate and pull the Azure metrics, since these metrics can only be retrieved using bearer tokens, and if we create a bearer token in Azure Monitor it is valid for only 24 hours. We actually need to create some custom add-ons to pull Azure metrics, but we are unable to work out how to authenticate. Can someone please guide us?

I have followed the Splunk official documentation: for log monitoring I am using Splunk Cloud, and for this I referred to the GitHub URL https://github.com/signalfx/splunk-otel-collector-chart (Kubernetes cluster monitoring). As per the document I have added the Splunk endpoint, the HEC token, the cluster name, and created an index inside Splunk Cloud, but I am still not able to fetch logs. I need a little help with this: the configuration is almost done, but on the Splunk dashboard I am not able to see the logs of the Kubernetes cluster.

Commands used:

1. helm repo add splunk-otel-collector-chart https://signalfx.github.io/splunk-otel-collector-chart
2. helm install my-splunk-otel-collector --set="splunkPlatform.endpoint=https://127.0.0.1:8088/services/collector,splunkPlatform.token=xxxxxx,splunkPlatform.metricsIndex=k8s-metrics,splunkPlatform.index=main,clusterName=my-cluster" splunk-otel-collector-chart/splunk-otel-collector
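A minimal sketch of a search on the Splunk side that can show whether HEC requests are reaching the indexers and being rejected, assuming the role can search the _internal index; the component name is the one HEC parsing errors are usually logged under, but treat it as indicative:

    index=_internal sourcetype=splunkd component=HttpInputDataHandler (log_level=WARN OR log_level=ERROR)
    | stats count by host component log_level

If this returns nothing at all, the requests may not be reaching Splunk in the first place, which would point back at the endpoint/token configuration on the collector side.
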
I'm trying to get Slack alerts set up on my Splunk Cloud instance, but the test gives me the following output:

    04-14-2023 21:50:28.461 INFO  sendmodalert [11674 phase_1] - action=slack STDERR -  Slack API responded with HTTP status=200
    04-14-2023 21:50:28.461 FATAL sendmodalert [11674 phase_1] - action=slack STDERR -  Alert action failed
    04-14-2023 21:50:28.493 INFO  sendmodalert [11674 phase_1] - action=slack - Alert action script completed in duration=171 ms with exit code=1
    04-14-2023 21:50:28.493 WARN  sendmodalert [11674 phase_1] - action=slack - Alert action script returned error code=1
    04-14-2023 21:50:28.493 ERROR sendmodalert [11674 phase_1] - Error in 'sendalert' command: Alert script returned error code 1.
    04-14-2023 21:50:28.494 INFO  ReducePhaseExecutor [11674 phase_1] - Ending phase_1

I can't find the meaning of error code 1 anywhere, and I can't put this setup in debug mode to get more info from the deployment. Does anyone have an idea of what this code means so I can get this functional?
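A minimal sketch of a search that can pull the full sendmodalert output for the Slack action from the internal logs, assuming the role can search _internal; the script's own error text often appears there even when the UI only shows the exit code:

    index=_internal sourcetype=splunkd component=sendmodalert action=slack
    | table _time log_level _raw
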
Some Splunk customers have encountered the following error message when performing searches:

    The search job with sid=<value> failed to launch successfully after the timeout interval elapsed. If search jobs time out frequently before successfully launching, check whether the server running Splunk software is overloaded. Alternatively, adjust the 'search_launch_timeout_seconds' setting in the limits.conf file.

This error message typically appears when resources, such as network conditions or processors, are at capacity. If you get this error message, contact your administrator or Splunk Support. Alternatively, change the default for the 'search_launch_timeout_seconds' setting in the limits.conf file to a value greater than 180 seconds. Changing this value should give the search process enough time to complete instead of terminating without producing results.

However, even after changing the 'search_launch_timeout_seconds' setting, there might be some unique use cases where search still fails. If this happens, contact Splunk Support.
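As an illustration only, the limits.conf change might look like the following, assuming the setting lives under the [search] stanza; check the limits.conf spec for your version before applying it:

    [search]
    search_launch_timeout_seconds = 300
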
From the query below I want to get an alert when SuccessRate is less than 40: it should trigger an email alert with a customised message like "SuccessRate is less than 40 %, please take action." How can I do this?

    index=app-code host_ip=34.23.234.12
    | search activity=done
    | eval result=if(like(responseHttp, "200"), "Success", "error")
    | stats count(eval(result="Success")) as Total_Success, count(responseHttp) as Total
    | eval Success_Count=(Total_Success/Total)*100.0
    | stats avg(Success_Count) as SuccessRate
    | where SuccessRate<40
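A minimal sketch of one way to attach the custom text to the result so the email can reference it, assuming the alert is saved with a trigger condition of "number of results > 0":

    ... | where SuccessRate<40
    | eval message="SuccessRate is less than 40 %, please take action."

The message field can then be pulled into the email body with a result token such as $result.message$.
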
Hello All, Is Collectord a good option for OpenShift monitoring? How much does it cost? What are the alternatives? Thanks!
Upgraded ITSI to v4.15.1 and now the default service analyzer is displaying all N/A for the health scores and all services are grayed out. KPIs under the services are collecting data and they look okay. What are some good SPL queries to narrow down the issue?

-Archie
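A minimal sketch of a check on whether health scores are still being written to the summary index, assuming the default itsi_summary index and the usual ServiceHealthScore KPI and field names (treat them as indicative for your ITSI version):

    index=itsi_summary kpi=ServiceHealthScore
    | stats latest(alert_value) as latest_health_score latest(_time) as last_written by serviceid

If nothing has been written since the upgrade, the problem is likely on the saved-search/scoring side rather than in the analyzer view itself.
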
Hello! My situation is that I'm doing a new installation of Splunk on a Windows instance with an existing splunk.secret.

Question: Is there a command-line flag to pass the splunk.secret during the initial installation?

My process in the past was to install Splunk without launching it, in order to prevent passwords from being generated, then copy the splunk.secret over and start Splunk. It has been a while, though, and I think something changed, because Splunk now writes a password to server.conf even if it isn't started for the first time. Now I have to remove the password with another command before starting it.

I installed Splunk standalone (9.0.4) with Ansible (https://github.com/splunk/splunk-ansible/) on Ubuntu jammy. That has worked well. Data is ingested on port 9997 and, for now, everything goes to the main index. I want to split things between multiple indexes, e.g. windows, linux and other source types.

I think this would be done through transforms as per https://docs.splunk.com/Documentation/Splunk/9.0.4/Forwarding/Routeandfilterdatad, but this seems to be only valid for the heavy forwarder role, or the cluster master as per https://github.com/splunk/splunk-ansible/blob/develop/roles/splunk_cluster_master/tasks/configure_indexes.yml. In the role variables I only found smartstore with an index array, but I believe that is different.

I tried:

* Forwarding with a transform in /opt/splunk/etc/system/local/props.conf and /opt/splunk/etc/system/local/transforms.conf, but it did not work:

    $ sudo cat /opt/splunk/etc/system/local/props.conf
    # https://docs.splunk.com/Documentation/Splunk/9.0.4/Indexer/Setupmultipleindexes
    [SOURCE1]
    TRANSFORMS-index = SOURCE1Redirect

    $ sudo cat /opt/splunk/etc/system/local/transforms.conf
    [SOURCE1Redirect]
    #REGEX = ,"file":{"path":"\/var\/log\/SOURCE1\/SOURCE1.log"}},"message":
    REGEX = ^{.*SOURCE1.*}$
    DEST_KEY = _MetaData:Index
    FORMAT = SOURCE1

* Getting the TCP data input to work, but losing all the JSON field extractions and getting only raw, unusable data. Similar to https://community.splunk.com/t5/Getting-Data-In/Splunk-is-adding-weird-strings-like-quot-linebreaker-x00-x00/m-p/21598

* Setting a data receiver in the forwarding section and setting the index in inputs.conf, but no data gets ingested even though tcpdump shows data being received. I have not found how to associate a specific receiver port with an index. I tried:

    $ sudo more /opt/splunk/etc/system/local/inputs.conf
    [splunktcp://9997]
    disabled = 0
    [splunktcp://9525]
    disabled = 0
    index = sourcetype1

Any advice?

Thanks
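A minimal sketch of a search for checking where the events are actually landing, assuming SOURCE1 is the sourcetype the props stanza is keyed on; if the transform fires, the events should show up under the SOURCE1 index rather than main:

    index=* sourcetype=SOURCE1
    | stats count by index sourcetype
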
I am running a search:

    basesearch
    | eventstats count values(date) as Date by ID

As a result I get count 2 or 3 or 1. How do I get only count=1 OR count=3? How do I use max(count) and min(count)? I need this because min(count) will be the new data and max(count) will be the old data. Is there any other way to do this?
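A minimal sketch of one way to keep only the events whose per-ID count equals the overall minimum or maximum, assuming that is the intent:

    basesearch
    | eventstats count as count values(date) as Date by ID
    | eventstats min(count) as min_count max(count) as max_count
    | where count=min_count OR count=max_count
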
I'm new to writing apps for Splunk, so I'm trying something simple: a raw payload dump. I have the alert set to log the event and fire off my custom action when CPU usage is >20%, and only once every 15 minutes, so I have a reliable trigger source. However, it never seems to launch my action and I can't for the life of me figure out why. I'm trying to get the code to write a line to one file when it launches, write debug to another, and write both the JSON and the raw payload to separate files so I can decide on parsing later. Any thoughts on what I'm doing wrong here?? I'm not even getting the line in the file to let me know it tried to run.

alert_actions.conf:

    [NCPAServiceAlert]
    is_custom = 1
    label = NCPA Service Alert
    description = Test Alert for NCPA Listener Service
    icon_path = awesomesauce.PNG
    payload_format = json
    python.version = python3

NCPAServiceAlert.py:

import json
import sys
import logging
import time
import datetime

ts = time.time()
sttime = datetime.datetime.fromtimestamp(ts).strftime('%Y%m%d_%H:%M:%S - ')

# Marker file so I can tell whether the script launched at all
didirun = "C:/Users/Public/debug/Did_I_Run.txt"
with open(didirun, "w+") as d:
    d.write(sttime + " I ran. Can't say much about the rest though.\n")

logging.basicConfig(filename='C:/Users/Public/debug/debug.txt', filemode='w',
                    encoding='utf-8', level=logging.DEBUG)

class NCPAServiceAlert:
    def __init__(self):
        logging.debug("init")
        self.params = [
            # "configuration"
            # "text"
        ]

    def send_alert(self):
        logging.debug("send_alert")
        # Read the alert payload from stdin once, then write it out raw and parsed
        raw_payload = sys.stdin.read()
        fileraw = "C:/Users/Public/debug/generic_dump.txt"
        with open(fileraw, "w+") as g:
            g.write(raw_payload)
        filejson = "C:/Users/Public/debug/alertdump.txt"
        with open(filejson, "w+") as f:
            payload = json.loads(raw_payload)
            f.write(json.dumps(payload, indent=2))
        return True

if __name__ == "__main__":
    logging.debug("main")
    if len(sys.argv) < 2 or sys.argv[1] != "--execute":
        sys.stderr.write("FATAL EXCEPTION (expected --execute flag)\n")
        sys.exit(1)
    try:
        if not NCPAServiceAlert().send_alert():
            sys.exit(2)
    except Exception as e:
        sys.stderr.write("ERROR - Unexpected error %s\n" % e)
        sys.exit(3)

Hi splunkers,

Right now I'm getting data from FortiWeb on-premises, and I need to know whether there are any security use cases I can apply to my Enterprise Security, or which Splunk ES "Security Intelligence" and "Security Domains" dashboards I could associate this data with.

I hope my question is clear.

I created an inputs.conf on my deployment server and noticed that my logs were coming in as my sourcetype instead of my host. Once I assigned it to the client, I couldn't find the logs; I noticed the events showed my sourcetype where the host would normally be.

We are getting multiple errors like this:

    Corrupt csv header in CSV file , 2 columns with the same name

However, we have so many CSV files that finding the offending ones will be all but impossible. Can someone provide advice on how to find them?
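A minimal sketch of a search that may surface the files involved, assuming the full splunkd.log message includes the file path; the rex is indicative and may need adjusting to the exact message format (in the quoted error the path appears blank):

    index=_internal sourcetype=splunkd "Corrupt csv header"
    | rex "Corrupt csv header in CSV file (?<csv_file>[^,]+)"
    | stats count by csv_file
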
I am attempting (for the first time) to convert the following regex search to work in transforms.conf, but can't seem to get it to work. What am I missing?

My search, which works:

    index="fileshares" sourcetype="fileshares" source="/mnt/auditlog/*"
    | rex "\"SubjectUserName\">(?<Username>[^\<]+)"

My attempt with transforms.conf:

    [Username]
    SOURCE_KEY = Username
    REGEX = \"SubjectUserName\">(?<Username>[^\<]+)
    MV_ADD = true

props.conf:

    [fileshares]
    REPORT-fields = Username
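For comparison, a minimal search-time extraction sketch that reads from the default SOURCE_KEY of _raw instead of from a Username field that does not exist yet; treat it as indicative only:

    transforms.conf
    [Username]
    REGEX = "SubjectUserName">(?<Username>[^<]+)
    MV_ADD = true

    props.conf
    [fileshares]
    REPORT-fields = Username
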
I have noticed that the event_ids that I cannot find documentation for are associated with two eventtypes together. However, individually, those eventtypes are also associated with other event_ids. How do I exclude the two eventtypes from the search only when they are both associated with an event_id? I tried eventtype!="xxx" AND eventtype!="yyy", but that doesn't group the two eventtypes together, if that makes sense: each event_id associated with "xxx" is excluded from the search, which is not the result I need.
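A minimal sketch of one way to drop an event only when its multivalue eventtype field contains both values, where "xxx" and "yyy" stand in for the two eventtype names:

    ... base search ...
    | where NOT (mvfind(eventtype, "^xxx$") >= 0 AND mvfind(eventtype, "^yyy$") >= 0)

mvfind() returns the index of the matching value (or null when there is no match), so the event is excluded only when both eventtypes are present at the same time.
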
    | inputlookup ip_spywarelist.csv
    | eval ip_range=split(ip,"-")
    | eval start_ip=mvindex(ip_range, 0), end_ip=mvindex(ip_range, 1)
    | eval start_ip_long=tonumber(split(start_ip,"\\.")[3])
    | eval end_ip_long=tonumber(split(end_ip,"\\.")[3])
    | eval ip_list=mvrange(start_ip_long,end_ip_long)
    | mvexpand ip_list
    | eval ip_address=substr(start_ip,1,strlen(start_ip) -length(start_ip_long))
    | table ip_address

Notes: When I run this query, I get "Unknown search command '3'" (please don't mind any typos, as I typed the query manually here). Why does this query NOT work?

The idea is to create a correlation search that would generate an alert if either the src_ip or the dest_ip matches an IP within the IP range (in the ip field). Since ip_spywarelist.csv has a field called "ip" that only contains IP ranges as values, I would like to search among all the IPs in each range, not just the start IP and end IP of the range (e.g. 2.60.13.132-2.60.13.137). I just wanted to verify that the query was working perfectly before I include it in:

    index=* sourcetype=* [ | inputlookup ip_spywarelist.csv | ...

The CSV file is provided by Splunk under "threat intel". The idea is to create a correlation search using that file, which only provides the malicious IPs in IP-range format.
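As a side note on the error itself: eval does not support bracket indexing into a multivalue result, so the trailing [3] is likely being parsed as a subsearch containing a command named "3". A minimal sketch of the equivalent using mvindex() (index 3 being the fourth octet) would be:

    | eval start_ip_long=tonumber(mvindex(split(start_ip, "."), 3))
    | eval end_ip_long=tonumber(mvindex(split(end_ip, "."), 3))

Note also that split() treats its second argument as a literal delimiter, so "." is used here rather than the regex-style "\\.".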