All Topics

We are planning to onboard Akamai platform logs into Splunk, following this guide: SIEM Splunk connector. As part of the process we installed this Akamai add-on: Akamai SIEM Integration | Splunkbase. However, when we go to Settings > Data Inputs as described in the SIEM Splunk connector guide, we cannot find the data input "Akamai Security Incident Event Manager API", and after installing the add-on we see the following error in Splunk (screenshots attached from the deployer and the search head). Can you help us with this? We are stuck at the "data inputs" step. We believe we still need to complete the add-on's prerequisites (it is a modular input), namely installing Java on the Splunk instance and confirming that KVStore is installed and working correctly. Please help us verify these.
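For reference, a rough sketch of the checks we have in mind, run on the search head as the Splunk user (paths assume a default Linux install under /opt/splunk; adjust as needed):

# Is a Java runtime available on the host for the modular input?
java -version

# Is the KV store enabled and healthy on this Splunk instance?
/opt/splunk/bin/splunk show kvstore-status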
Morning, Splunkers! I've been running a dashboard that monitors the performance of the various systems my customer uses, and I recently switched all of my timechart line graphs over to the Downsampled Line Chart because it allows a user to zoom in on a specific time/date range that is already displayed (most of my customer's users aren't Splunk-savvy in the slightest). My customer has users literally all over the country, so our Splunk is set to show all times as UTC by default for every account. The problem is that the Downsampled Line Chart insists on showing everything in local time, regardless of what our account configurations are set to, and I can't find any documentation on how to stop it (I'm not an admin, so I can't just go into settings and start editing configuration files). Does anybody have any idea how to get it to stop? I'd hate to have to give up the functionality of the chart because it won't show the same times for people on opposite sides of the country, but I'm out of options here.
Dear all,

I have the following outputs.conf configuration:

[tcpout]
defaultGroup = my_indexers

[tcpout:my_indexers]
server = mysplunk_indexer1:9997, mysplunk_indexer2:9997

[tcpout-server://mysplunk_indexer1:9997]

Could you please clarify the Universal Forwarder (UF) behavior in the event that mysplunk_indexer1 goes down? Will the UF continue sending data to both indexers despite mysplunk_indexer1 being down, or will the UF detect that mysplunk_indexer1 is unreachable and stop forwarding traffic to it?
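For reference, a sketch of the same target group with the load-balancing and acknowledgement settings written out explicitly; these values are illustrative defaults, not necessarily what we run:

[tcpout:my_indexers]
server = mysplunk_indexer1:9997, mysplunk_indexer2:9997
autoLBFrequency = 30   # seconds before the UF switches to another indexer in the group
useACK = true          # wait for indexer acknowledgement before dropping data from the forwarder queue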
Hello,

I have a dashboard that checks all indexes and displays each index's event count for today and its last write time. This allows users of the dashboard to spot when an index has not been written to in a certain amount of time.

My issue is that the dashboard runs when a user opens it and, as expected, runs the searches with that user's permissions. However, the users do not have access to all indexes, so they cannot see the stats for every index. What is the easiest way to change this so that they can see an event count for all indexes without having to give them access to the indexes themselves?
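For reference, the panel searches are roughly of this shape (a simplified sketch, not the exact search):

| tstats count as events_today latest(_time) as last_write where index=* earliest=@d by index
| eval last_write=strftime(last_write, "%Y-%m-%d %H:%M:%S")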
We have a scenario where we roll logs every day. We want Splunk to index only yesterday's log file; we do not want to ingest today's log files. What specific setting do I need in my inputs.conf to ingest only yesterday's data? ignoreOlderThan = 1d also ingests today's log files, which I do not want.
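For context, the relevant input currently looks roughly like this (the path, index, and sourcetype are made up for illustration):

[monitor:///var/log/myapp/app-*.log]
sourcetype = myapp
index = main
ignoreOlderThan = 1d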
Hello, I need some help adding colour to my dashboard. I've got the block shown below (screenshot) sitting on my high-level dashboard view, but I want it to change colour (red or green) depending on the values in the underlying dashboard that it clicks through to, which I will share below. This is the dashboard it displays when you click on the block above (screenshot). Is there some way that, if any of these five boxes does not display "OK", the top-level block (EazyBI) changes to red? Can anyone help me with that?
Running free -m shows that physical memory usage is only around 3%, but swap is 100% in use. The same thing happens again shortly after restarting Splunk. Does anyone know the cause of this behaviour and how to resolve it? The server environment is as follows: OS: CentOS 7, Splunk Enterprise 9.0.4.
My use case is to retrieve data from Splunk. I have written a Python script that pulls the data using the Splunk REST API, but it takes too long to process and return the response. I tried the oneshot method as well, with no improvement. Please guide me: is there any alternative approach to get the data out of Splunk faster? Kindly suggest.
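For reference, a stripped-down sketch of what the script does today, using the splunk-sdk (splunklib) with a streaming export search; the host, credentials, and search string are placeholders:

import splunklib.client as client
import splunklib.results as results

# Connect to the Splunk management port (placeholder host/credentials)
service = client.connect(host="splunk.example.com", port=8089,
                         username="admin", password="changeme")

# export streams results as they are produced instead of waiting for the job to finish
stream = service.jobs.export(
    "search index=main earliest=-24h | table _time host source",
    output_mode="json")

for item in results.JSONResultsReader(stream):
    if isinstance(item, dict):   # skip diagnostic messages
        print(item)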
Hi, I set up a Splunk lab on my Windows 10 laptop, where both the Splunk Forwarder and the Splunk server are installed on the same host. After installing the Splunk Add-on for Windows, I created an inputs.conf file in the local folder under etc/apps:

###### OS Logs ######
[WinEventLog://Application]
disabled = 0
index = "windows_logs"
start_from = oldest
current_only = 0
checkpointInterval = 5
renderXml = 0

Despite this setup, I don't see any Windows logs in Splunk.
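For comparison, a WinEventLog stanza is normally written with the index name unquoted, and the target index has to exist on the receiving instance before events arrive; a sketch assuming an index named windows_logs has already been created:

[WinEventLog://Application]
disabled = 0
index = windows_logs
start_from = oldest
current_only = 0
checkpointInterval = 5
renderXml = 0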
I have a lookup that contains domain names. I am already using this lookup in one search and it works fine, but now I am trying to construct a new search based on the same lookup and I get the error below:

[indexer1,indexer2,indexer3,indexer4] The lookup table 'lookup.csv' does not exist or is not available.

The lookup is configured to be available to all apps and roles. Both searches run on ES; one gets the error and the other doesn't. Why?
Hi All,

I get this message, but the indexes do exist. It is not permanent; it happens at 01:00 in the morning on some days:

Search peer idx-03 has the following message: Received event for unconfigured/disabled/deleted index=mtrc_os_<XXX> with source="source::Storage Nix Metrics" host="host::splunk-sh-02" sourcetype="sourcetype::mcollect_stash". Dropping them as lastChanceIndex setting in indexes.conf is not configured. So far received events from 18 missing index(es). 3/4/2025, 1:00:34 AM
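For reference, the setting the message refers to is a catch-all index configured on the indexers; a sketch of what that might look like (the index name is only an example):

# indexes.conf on the indexers
[default]
lastChanceIndex = main   # route events addressed to unknown indexes here instead of dropping them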
We upgraded our Splunk Enterprise from 9.2.2 to 9.3.1. After the upgrade, one of the apps stopped working because its related saved search fails to execute. The saved search throws the error below.

command="execute", Exception: Traceback (most recent call last):
  File "/opt/splunk/etc/apps/stormwatch/bin/execute.py", line 16, in <module>
    from utils.print import print, append_print
  File "/opt/splunk/etc/apps/stormwatch/bin/utils/print.py", line 3, in <module>
    import pandas as pd
  File "/opt/splunk/etc/apps/stormwatch/bin/site-packages/pandas/__init__.py", line 138, in <module>
    from pandas import testing # noqa:PDF015
  File "/opt/splunk/etc/apps/stormwatch/bin/site-packages/pandas/testing.py", line 6, in <module>
    from pandas._testing import (
  File "/opt/splunk/etc/apps/stormwatch/bin/site-packages/pandas/_testing/__init__.py", line 69, in <module>
    from pandas._testing.asserters import (
  File "/opt/splunk/etc/apps/stormwatch/bin/site-packages/pandas/_testing/asserters.py", line 17, in <module>
    import pandas._libs.testing as _testing
  File "pandas/_libs/testing.pyx", line 1, in init pandas._libs.testing
ModuleNotFoundError: No module named 'cmath'
I want to integrate SentinelOne Singularity Enterprise data into my security workflows. What critical data (e.g., process/network events, threat detections, behavioral analytics) should I prioritize ingesting? What’s the best method (API, syslog, SIEM connectors) to pull this data, and how does it add value (e.g., improving threat hunting, automating response, or enriching SIEM/SOAR)?
I'm implementing a Canary Honeypot in my company and want to integrate its data with Splunk. What key information should I prioritize ingesting, and how can this data enhance threat detection (e.g., correlating with firewall logs, spotting lateral movement)? Are there recommended Splunk TAs or SPL queries to structure and analyze this data effectively?
Hi,

I need to find Unique Users (count of distinct business users) and Unique Clients (count of distinct system client accounts). I want to get unique users and unique clients based on cid.id and its associated groups, for example:

app         | unique user | unique client | groups
name.id     | 22          | 1             | app.preprod.name
address.id  | 1           | 1             | app.preprod.address, app.preprod.zipcode

Unique users search:

index= AND source="*"
| stats dc(claims.sub) as "Unique Users" ``` dc(claims.sub) as "Unique Users" count(claims.sub) as "Total" ``` ```| addcoltotals labelfield="Grand Total"```

Sample event:

{"name":"","hostname":"1","pid":8,"level":,"claims":{"ver":1,"jti":"h7","iss":"https","aud":"https://p","iat":1,"exp":17,"cid":"name.id","uid":"00","scp":["update:","offline_access","read:","readall:","create:","openid","delete:","execute:","read:"],"auth_time":17,"sub":"name@gmail.com","groups":["App.PreProd.name"]},"msg":" JWT Claims -API","time":"2025","v":0}

Unique clients search:

index=* AND source="*"
| stats dc(claims.cid) as "Unique Clients" ``` dc(claims.sub) as "Unique Users" count(claims.sub) as "Total" ``` ```| addcoltotals labelfield="Grand Total"```

Sample event:

{"name":"","hostname":"1","pid":8,"level":,"claims":{"ver":1,"jti":"h7","iss":"https","aud":"https://p","iat":1,"exp":17,"cid":"address.id","uid":"00","scp":["update:","offline_access","read:","readall:","create:","openid","delete:","execute:","read:"],"auth_time":17,"sub":"name@gmail.com","groups":["App.PreProd.address,app.preprod.zipcode"]},"msg":" JWT Claims -API","time":"2025","v":0}
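A rough sketch of the combined search I am aiming for (untested; it assumes the JSON fields are auto-extracted as claims.sub, claims.cid and claims.groups):

index=* AND source="*"
| stats dc(claims.sub) as "Unique Users" dc(claims.cid) as "Unique Clients" values(claims.groups) as groups by claims.cid
| rename claims.cid as app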
Hi there, I was looking for an equivalent of the "input=button" element known from Classic XML dashboards in the Dashboard Studio docs and stumbled over the page "https://docs.splunk.com/Documentation/Splunk/9.4.1/DashStudio/inputButton". It is not documented any further, the input is not available in the visual editor, and the source code editor gives me an error if I try to set something up like "type=input.button", which I guessed at because there is no further documentation for this input. Is this a future feature of Dashboard Studio, or am I missing something? Thanks in advance, Edgar
Hello, I am new to Splunk and dashboards. First of all, I need something like this: I have about 20-30 different domains, and different data comes from each of them. In my first (main) dashboard, think of a box (single value) for each domainName, with the latest incoming data written just below it. Then, as an example, when I click on a domainName box, I want the last 24 hours line chart for that domainName to come up. I wrote the dashboard below, but I could not get the selected domain into a token. How can I do this?

<dashboard version="1.1" theme="light">
  <label>EPS by Domain Dashboard</label>
  <search id="base_search">
    <query>index=* sourcetype=test metric=epsbyDomain | stats latest(EPS) as EPS, latest(EPSLimit) as EPSLimit by domainName | eval underLabel="EPS / Limit: ".EPSLimit</query>
    <earliest>-7d</earliest>
    <latest>now</latest>
  </search>
  <search id="selected_domain_search" depends="$selected_domain$">
    <query>index=* sourcetype=test metric=epsbyDomain domainName="$selected_domain$" | timechart span=5m avg(EPS) as EPS</query>
    <earliest>-24h</earliest>
    <latest>now</latest>
  </search>
  <row>
    <panel>
      <single>
        <search base="base_search">
          <query>| table domainName, EPS, underLabel</query>
        </search>
        <option name="trellis.enabled">1</option>
        <option name="trellis.scales.shared">1</option>
        <option name="trellis.size">small</option>
        <option name="trellis.splitBy">domainName</option>
        <option name="underLabel">$row.underLabel$</option>
        <option name="useColors">1</option>
        <option name="colorMode">block</option>
        <option name="rangeColors">["0x53a051","0x006d9c","0xf8be34","0xdc4e41"]</option>
        <option name="rangeValues">[0,30,70,100]</option>
        <option name="numberPrecision">0</option>
        <option name="height">500</option>
        <option name="refresh.display">progressbar</option>
        <drilldown>
          <set token="selected_domain">$row.domainName$</set>
          <set token="show_chart">true</set>
        </drilldown>
      </single>
    </panel>
  </row>
  <row depends="$show_chart$">
    <panel>
      <title>Last 24 hours - $selected_domain$</title>
      <chart>
        <search base="selected_domain_search"></search>
        <option name="charting.chart">line</option>
        <option name="charting.axisTitleX.visibility">visible</option>
        <option name="charting.axisTitleY.visibility">visible</option>
        <option name="charting.axisTitleY.text">EPS</option>
        <option name="charting.legend.placement">bottom</option>
      </chart>
    </panel>
  </row>
</dashboard>
Good Day All,

I'm looking for assistance on how to create a triggered alert when a certain percentage in a text .log file is met in real time. For background: on a remote server, a PowerShell script runs locally via Task Scheduler (set to daily) and generates a text .log file containing the used percentage of a drive (the F: drive in this instance). Data Inputs -> Forwarded Inputs -> Files & Directories in Splunk, along with the Universal Forwarder on that remote server, are configured, and the text .log file can be read in Splunk when searched, as shown below:

Search: index="main" source="C:\\Admin\StorageLogs\storage_usage.log"
Result:
Event: 03/03/2025 13:10:40 - F: drive usage 17.989% used (all the text contained in the .log file)
source = C:\\Admin\StorageLogs\storage_usage.log   sourcetype = storage_usage-too_small

What would be the best way to set up a triggered alert that notifies you in real time when that .log file shows the F: drive at or above 75% used? I attempted saving it as an alert by doing the following:

Save As -> Alert:
Title: Storage Monitoring
Description: (will add at the end)
Permissions: Shared in App
Alert Type: Real-time
Expires: 24 Hours
Trigger Conditions: Custom
Trigger alert when: (this is the field where I'm trying to express the "75% used" condition, but I'm unfamiliar with what to put)
In: 1 minute
Trigger: For each result
Throttle: (unsure if this needs to be enabled or not)
Trigger Actions -> When triggered -> Add to Triggered Alerts -> Severity: Medium

Would it be easier to configure the "read/notify at 75% used" part in the trigger conditions above, or by adding it to the main search query and then saving? My apologies if I'm incorrect in any of my interpretations/explanations; I just started with this team and have basically no experience with Splunk. Any information or guidance is greatly appreciated, thanks again.
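For what it's worth, this is the kind of search condition I had in mind, extracting the percentage with rex and filtering on it; the field name used_pct is made up, and the pattern assumes the log line format shown above:

index="main" source="C:\\Admin\StorageLogs\storage_usage.log"
| rex "drive usage (?<used_pct>[\d\.]+)% used"
| where tonumber(used_pct) >= 75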
I have seen many struggle with btool and its somewhat messy output, so I made an updated version that makes it far easier to use. It is written as a shell function, so you can add it to any start-up script on your Linux host. It uses colour and sorts all settings into groups to make them easy to find: the stanza name is shown in green, each setting of the stanza in yellow, and, last, the file that holds the setting in grey.

btools () {
    # Handle input options
    if [[ "$1" == "-sd" || "$1" == "-ds" ]]; then
        opt="etc/system/default" file="$2" stanza="" search="$3"
    elif [[ "$1" == "-d" && "$2" == "-s" || "$1" == "-s" && "$2" == "-d" ]]; then
        opt="etc/system/default" file="$3" stanza="" search="$4"
    elif [[ "$1" == "-d" ]]; then
        opt="etc/system/default" file="$2" stanza="$3" search="$4"
    elif [[ "$1" == "-s" ]]; then
        opt="none" file="$2" stanza="" search="$3"
    else
        opt="none" file="$1" stanza="$2" search="$3"
    fi

    # If no arguments are given, show the usage
    [[ -z "$file" ]] && echo -e "
    btools for Splunk v3.0 Jotne

    Missing arguments!
    usage: btools [OPTION] file [STANZA] [SEARCH]
        -d  Do not show splunk default
        -s  All stanza (only needed if search is added and no stanza)
        file = splunk config file without the .conf
        [stanza] = complete stanza name or just part of it
        [search] = search phrase or part of it
    Example:
        btools server general servername
        btools web
    " && return 1

    # If options are not set, give default values
    [[ -z "$stanza" ]] && stanza=".*" || stanza=".*$stanza.*"
    [[ -z "$search" ]] && search=""

    ~/bin/splunk btool $file list --debug | awk -v reset="\033[m\t" \
        -v yellow="\033[38;5;226m\t" \
        -v green="\033[38;5;46m" '          # set the different ansi colors used
    {sub(/\s+/,"#");split($0,p,"#")}        # split the input: p[1]=filename p[2]=rest of line
    p[2]~/^\[.*?\] *$/ {f=0}                # if this is a stanza name, set flag f=0
    f && tolower(p[2])~tolower(search) {    # if this is not a stanza, test if the text matches the search (or no search given)
        split(p[2],s," ")                   # Store each stanza in its own group
        a[st s[1]]++
        if(p[1]!~opt)print green st yellow p[2] reset p[1]   # Print each block
    }
    p[2]~"^\\["stanza"\\]$" {f=1;st=p[2]}   # Find next stanza
    ' stanza="$stanza" search="$search" opt="$opt"
}

Example:
btools server general servername
btools web

Let's say you would like to see all your custom settings in props.conf for the stanza regarding IP 10.36.30.90, without showing any default settings (-d):

btools -d props 10.36.30.90

Give me the custom settings for the index shb_ad:

btools -d indexes shb_ad

Home path for the shb_ad index:

btools -d indexes shb_ad homepath

Give me all settings for the index shb_ad, including the default settings (note: the output has more lines than the screenshot shows):

btools indexes shb_ad

Any suggestions to make it better are welcome.
I'm trying to extract endpoint data from Cortex XDR, but I don't want to see just alerts in Splunk; I need all of the endpoint data collected by XDR to be replicated in Splunk. Neither Palo Alto nor Splunk support has been able to assist with this. I can't be the first person to ask about it, since this is a fundamental requirement, unless it's simply not possible and everyone else already knows that except me. One approach should be to call the Cortex XDR APIs and send the data to Splunk through HEC; I would need to write a script for that. Has anyone tried this approach, or any other?
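For illustration, a rough sketch of that approach in Python; the Cortex XDR endpoint path, headers, and response fields are placeholders to be checked against the XDR API documentation, while the Splunk side uses the standard HEC /services/collector/event endpoint:

import requests

# Cortex XDR API details (placeholders; verify against the XDR API docs)
XDR_URL = "https://api-yourtenant.xdr.paloaltonetworks.com/public_api/v1/endpoints/get_endpoint"
XDR_HEADERS = {"x-xdr-auth-id": "1", "Authorization": "YOUR_XDR_API_KEY"}

# Splunk HTTP Event Collector details
HEC_URL = "https://splunk.example.com:8088/services/collector/event"
HEC_HEADERS = {"Authorization": "Splunk YOUR_HEC_TOKEN"}

# Pull a page of endpoint data from Cortex XDR (request body shape is an assumption)
resp = requests.post(XDR_URL, headers=XDR_HEADERS,
                     json={"request_data": {"search_from": 0, "search_to": 100}},
                     timeout=30)
resp.raise_for_status()
endpoints = resp.json().get("reply", {}).get("endpoints", [])

# Forward each record to Splunk HEC as its own event
for ep in endpoints:
    requests.post(HEC_URL, headers=HEC_HEADERS,
                  json={"event": ep, "sourcetype": "pan:xdr_endpoint", "index": "main"},
                  timeout=30)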