All Posts

Hi @Snorlax  Do you have the full XML for your dashboard? This might make it easier for someone to update it and send it back as a working example. Do you have an input (text or dropdown, for example) in your dashboard called "selected_domain"? Please let me know how you get on and consider adding karma to this or any other answer if it has helped. Regards, Will
Maybe you will find another way to do it after you have read this post, and especially that .conf presentation: https://community.splunk.com/t5/Splunk-Enterprise/How-do-I-back-up-my-Splunk-knowledge-objects-Is-this-a-big/m-p/568554/highlight/true#M10097 With correctly created REST queries, you could then use e.g. curl to send those generated SPLs back to Splunk via REST. Could that do what you are looking for?
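To illustrate the curl-over-REST idea, here is a minimal sketch that recreates a saved search through Splunk's REST API. The credentials, host, saved-search name, and SPL string are placeholders rather than anything from this thread, so adjust them to your environment.

# Recreate a saved search via the REST API (placeholder credentials, name and SPL)
curl -k -u admin:changeme https://localhost:8089/servicesNS/nobody/search/saved/searches \
     -d name="restored_example_search" \
     --data-urlencode search="index=_internal | stats count by sourcetype"

The same POST pattern applies to other knowledge-object endpoints; the SPL generated by the backup searches simply goes into the search parameter.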
Use the rex command to extract the usage value, then test the value to see if the alert should be triggered. I find it more reliable to put the threshold in the alert rather than in the metadata.

index="main" source="C:\\Admin\StorageLogs\storage_usage.log"
| rex "usage (?<usage>[^%]+)% used"
| where usage > 75
There seem to be a couple of different ways to do this, e.g.:
- https://splunk-guide-for-kafka-monitoring.readthedocs.io/en/latest/index.html
- https://www.splunk.com/en_us/blog/devops/monitoring-kafka-performance-metrics-with-splunk-infrastructure-monitoring.html
- https://www.confluent.io/blog/bring-your-own-monitoring-with-confluent-cloud/
That last one seems to use OTel, but I haven't tried it yet.
Hi, I just found this: https://community.splunk.com/t5/Getting-Data-In/How-to-configure-HTTP-Event-collector-to-log-client-source-IP/m-p/273311#M52449 I haven't checked whether this is also valid on SCP or not. r. Ismo
Hi @stevensk  Sorry I missed this reply - leave it with me and let me see if I can dig out what we did (it was a few years ago!)
Hi @DaveyJones  I think the easiest way to achieve this might be to add the following to your search:

| rex field=_raw "usage (?<diskUsage>[0-9\.]+)% used"
| where diskUsage>75

Adjust the diskUsage>75 to whatever you need. This works by extracting the % value of the disk usage from the raw event and then only returning events where the diskUsage is over the specified value. You would then create the alert to run on a cron schedule as required, such as every hour (real-time is generally not advised, especially as disk usage shouldn't drastically change that quickly), so run it on a suitable interval and adjust the time it looks back over (earliest) accordingly. Set the alert to "Trigger alert when: Number of Results is greater than 0". This will then trigger the alert if there is any result from the search (which has the specified limit on it). Please let me know how you get on and consider adding karma to this or any other answer if it has helped. Regards, Will
Hello, I am new to Splunk and dashboards. First of all, I need something like this: I have about 20-30 different domains, and different data comes from each of them. In my first main dashboard, think of a box (single value) for each domainName; I want to show the last incoming data just below it. Then, as an example, when I click on a domainName box, I want the last 24 hours line chart of that domainName to come up. I wrote the SPL as follows but I could not get it as a variable. How can I do this?

<dashboard version="1.1" theme="light">
  <label>EPS by Domain Dashboard</label>
  <search id="base_search">
    <query>index=* sourcetype=test metric=epsbyDomain | stats latest(EPS) as EPS, latest(EPSLimit) as EPSLimit by domainName | eval underLabel="EPS / Limit: ".EPSLimit</query>
    <earliest>-7d</earliest>
    <latest>now</latest>
  </search>
  <search id="selected_domain_search" depends="$selected_domain$">
    <query>index=* sourcetype=test metric=epsbyDomain domainName="$selected_domain$" | timechart span=5m avg(EPS) as EPS</query>
    <earliest>-24h</earliest>
    <latest>now</latest>
  </search>
  <row>
    <panel>
      <single>
        <search base="base_search">
          <query>| table domainName, EPS, underLabel</query>
        </search>
        <option name="trellis.enabled">1</option>
        <option name="trellis.scales.shared">1</option>
        <option name="trellis.size">small</option>
        <option name="trellis.splitBy">domainName</option>
        <option name="underLabel">$row.underLabel$</option>
        <option name="useColors">1</option>
        <option name="colorMode">block</option>
        <option name="rangeColors">["0x53a051","0x006d9c","0xf8be34","0xdc4e41"]</option>
        <option name="rangeValues">[0,30,70,100]</option>
        <option name="numberPrecision">0</option>
        <option name="height">500</option>
        <option name="refresh.display">progressbar</option>
        <drilldown>
          <set token="selected_domain">$row.domainName$</set>
          <set token="show_chart">true</set>
        </drilldown>
      </single>
    </panel>
  </row>
  <row depends="$show_chart$">
    <panel>
      <title>Last 24 hours - $selected_domain$</title>
      <chart>
        <search base="selected_domain_search"></search>
        <option name="charting.chart">line</option>
        <option name="charting.axisTitleX.visibility">visible</option>
        <option name="charting.axisTitleY.visibility">visible</option>
        <option name="charting.axisTitleY.text">EPS</option>
        <option name="charting.legend.placement">bottom</option>
      </chart>
    </panel>
  </row>
</dashboard>
Good Day All,

I'm looking for assistance on how to create a triggered alert when a certain percentage number in a text .log file is met in real time. For background, on a remote server there's a PowerShell script that runs locally via Task Scheduler, set to daily, which generates a text .log file containing the used percentage of that drive (F: drive in this instance). The Data Inputs -> Forwarded Inputs -> Files & Directories on Splunk, along with the Universal Forwarder on that remote server, are configured, and the text .log file can be read in Splunk when searched as shown below:

Search: index="main" source="C:\\Admin\StorageLogs\storage_usage.log"
Result:
Event: 03/03/2025 13:10:40 - F: drive usage 17.989% used (all the text contained in the .log file)
source = C:\\Admin\StorageLogs\storage_usage.log
sourcetype = storage_usage-too_small

What would be the best way to go about setting up a triggered alert that notifies you in real time when that text .log file meets/exceeds 75% of the F: drive used? I attempted saving it as an alert from there by performing the following:

Save As -> Alert:
Title: Storage Monitoring
Description: (will add at the end)
Permissions: Shared in App
Alert Type: Real-time
Expires: 24 Hours
Trigger Conditions: Custom
Trigger alert when: (this is the field where I'm trying to articulate the reading/notifying of the 75% used part, but I'm unfamiliar with what to put)
In: 1 minute
Trigger: For each result
Throttle: (unsure if this needs to be enabled or not)
Trigger Actions -> When triggered -> Add to Triggered Alerts -> Severity: Medium

Would it be easier to configure the reading/notifying when 75% used part in the trigger conditions above, or by adding the inputs in the main search query and then saving? My apologies if I'm incorrect in any of my interpretations/explanations; I just started with this team and have basically no experience with Splunk. Any information or guidance is greatly appreciated, thanks again.
If you need to use AD users and groups to authenticate into a full Splunk Enterprise server (not a UF), you could/should use the method explained in the previous posts. This works for both GUI and CLI access. But usually you should have separate roles for users other than SH access. Actually, on other nodes nothing other than admins should be allowed, and in most cases those should allow only CLI access, not GUI (except e.g. a DB Connect HF).
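For reference, AD/LDAP authentication is driven by authentication.conf on the node users log in to. The sketch below is a minimal, hedged example only: the host, bind DN, base DNs, and the AD group names are placeholders for your own environment, and the roleMap stanza decides which AD groups land in which Splunk roles.

# authentication.conf (placeholder values - adjust to your AD environment)
[authentication]
authType = LDAP
authSettings = corp_ad

[corp_ad]
host = dc01.example.com
port = 636
SSLEnabled = 1
bindDN = CN=svc_splunk,OU=Service Accounts,DC=example,DC=com
# bindDNpassword is best set via the UI so it is stored encrypted
userBaseDN = OU=Users,DC=example,DC=com
userNameAttribute = sAMAccountName
realNameAttribute = displayName
groupBaseDN = OU=Groups,DC=example,DC=com
groupNameAttribute = cn
groupMemberAttribute = member
groupMappingAttribute = dn

[roleMap_corp_ad]
admin = splunk_admins
user = splunk_users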
Of course it depends on what "unused" means and what kind of role you have. I expect that you have the admin role, which can access all indexes. But as @PickleRick said, if your role doesn't have access to all indexes, or your role hasn't been granted the capability to run remote REST against the indexers, then we have one additional issue. Fortunately we have https://splunkbase.splunk.com/app/6368 which helps you in those cases, but still there will be other challenges.
In short, you can't do it. Even if you succeed in removing those "additional" buckets, when Splunk recognizes that the cluster has lost SF or RF it starts to rebuild the missing buckets. This will happen again and again until the retention time of those buckets has been reached. And even if you could do it in some weird way, you will lose all support from the Splunk side when (I don't say if) you have any issues with your environment. It's much better to configure your storage to avoid that replication, or use some other storage instead.
Again - it depends on what "unused" means here. Just listing defined indexes which haven't received any data - that should be pretty straightforward indeed - check your defined indexes (it might be difficult though if you're on a distributed setup and don't have the capability of spawning rest to the indexers!) and compare them with a summary of your data across all indexes (be aware of the difference between _time and _indextime). Be aware though that if you have shorter retention periods than what you're searching through, you might not get valid data. But that's it. Depending on what you mean by "unused", the rest of the task can be difficult or even impossible. How is Splunk supposed to know what sourcetypes you might have had defined yesterday and haven't searched for? Or something like that... And if you have two or more SH(C)s connecting to the same indexer(s)... That might get ugly quickly.
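To illustrate the first part (comparing defined indexes against indexes that actually received data), here is a hedged SPL sketch. It assumes your role can spawn REST against the indexers and that the 30-day window matches your retention, so treat it as a starting point rather than a definitive answer.

| rest /services/data/indexes splunk_server=*
| dedup title
| rename title AS index
| table index
| join type=left index
    [| tstats count WHERE index=* earliest=-30d BY index]
| fillnull value=0 count
| where count=0

Anything it returns is an index that is defined but has no events with _time in the window; as noted above, check _indextime separately if retention or timestamping is a concern.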
I have seen many struggle with btool and its somewhat messy output, so I made an updated version that makes it far better to use. It is made as a function, so you can add it to any start-up script on your Linux box. It makes use of color and sorts all settings into groups to make it easy to find your settings: in green you see the stanza name, in yellow each setting for the stanza, and last, in grey, the file that holds the setting.

btools () {
    # Handle input options
    if [[ "$1" == "-sd" || "$1" == "-ds" ]]; then
        opt="etc/system/default" file="$2" stanza="" search="$3"
    elif [[ "$1" == "-d" && "$2" == "-s" || "$1" == "-s" && "$2" == "-d" ]]; then
        opt="etc/system/default" file="$3" stanza="" search="$4"
    elif [[ "$1" == "-d" ]]; then
        opt="etc/system/default" file="$2" stanza="$3" search="$4"
    elif [[ "$1" == "-s" ]]; then
        opt="none" file="$2" stanza="" search="$3"
    else
        opt="none" file="$1" stanza="$2" search="$3"
    fi

    # If no options are given, show the options
    [[ -z "$file" ]] && echo -e "
    btools for Splunk v3.0 Jotne

    Missing arguments!

    usage: btools [OPTION] file [STANZA] [SEARCH]
        -d  Do not show splunk default
        -s  All stanza (only needed if search is added and no stanza)
        file = splunk config file without the .conf
        [stanza] = complete stanza name or just part of it
        [search] = search phrase or part of it

    Example:
        btools server general servername
        btools web
    " && return 1

    # If options are not set, give default values
    [[ -z "$stanza" ]] && stanza=".*" || stanza=".*$stanza.*"
    [[ -z "$search" ]] && search=""

    ~/bin/splunk btool $file list --debug | awk \
        -v reset="\033[m\t" \
        -v yellow="\033[38;5;226m\t" \
        -v green="\033[38;5;46m" '        # set the different ansi colors used
    {sub(/\s+/,"#");split($0,p,"#")}      # split the input: p[1]=file name, p[2]=rest of line
    p[2]~/^\[.*?\] *$/ {f=0}              # if this is a stanza name, set flag f=0
    f && tolower(p[2])~tolower(search) {  # if this is not a stanza, test whether the text matches the search (or no search)
        split(p[2],s," ")
        a[st s[1]]++                      # store each stanza in its own group
        if(p[1]!~opt) print green st yellow p[2] reset p[1]   # print each block
    }
    p[2]~"^\\["stanza"\\]$" {f=1;st=p[2]} # find the next stanza
    ' stanza="$stanza" search="$search" opt="$opt"
}

Example:

btools server general servername
btools web

Let's say you would like to see all your custom settings in props.conf for the stanza regarding IP 10.36.30.90, without showing any default settings (-d):

btools -d props 10.36.30.90

Show the custom settings for the index shb_ad:

btools -d indexes shb_ad

Home path for the shb_ad index:

btools -d indexes shb_ad homepath

Show all settings for the index shb_ad, including the defaults (there are more lines than the screenshot shows):

btools indexes shb_ad

Any suggestion to make it better is welcome.
Another proposal: don't use the CLI to add inputs. That installs them under SPLUNK_HOME/etc/system/local, and if/when you take a DS into use you must manually move/update those node by node. You should always use separate apps for those definitions. That way it's really easy to update them later on and also add the same configuration to other nodes, as every log source has its own app.
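As an illustration only, an app-based input could look something like the sketch below; the app name, monitor path, index, and sourcetype are placeholders rather than anything from this thread.

$SPLUNK_HOME/etc/apps/org_inputs_myapp/
    local/inputs.conf

# local/inputs.conf
[monitor:///var/log/myapp/myapp.log]
index = myapp
sourcetype = myapp:log
disabled = 0

Because the whole definition lives in one app directory, you can copy it to another forwarder or push it from a deployment server later without touching etc/system/local.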
Hi. peer-apps is the place where the MN will deploy those apps; it's not used on the MN itself, where you should use manager-apps instead. In your lab, _cluster is OK for testing, but for any real environment you should use separate apps. Are you sure that your REGEX is correct? Can you give us a sample from both log files? Use </> as a code block - that way we can be sure the examples are what you have! Could you also add your inputs.conf so we can see what you have defined there?
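For reference, the usual flow on an indexer cluster is roughly the following; the app name is a placeholder:

# On the manager node: put the configuration bundle here
$SPLUNK_HOME/etc/manager-apps/my_index_app/local/props.conf

# Push it to the peers (run on the manager node)
splunk apply cluster-bundle

# The manager deploys it to each peer under
$SPLUNK_HOME/etc/peer-apps/my_index_app/local/props.conf

Edit only the manager-apps copy; anything changed directly under peer-apps gets overwritten on the next bundle push.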
Hi @robj, thanks for the suggestion! That sounds like a solid option. Do you also have your heavy forwarder deployed in AWS?

We ended up using the Splunk Data Manager app to ingest AWS CloudTrail logs from an AWS S3 bucket using a cross-account IAM role that can be assumed by Splunk Cloud.

Splunk Data Manager documentation: https://docs.splunk.com/Documentation/DM/1.12.0/User/About
Configure AWS for onboarding from a single account: https://docs.splunk.com/Documentation/DM/1.12.0/User/AWSSingleAccount

You can use the above implementation to either ingest CloudTrail logs from a single AWS account or from your centralized logging account in an AWS Organization or Control Tower environment.
This is quite an often-asked question: people want to know whether there are unused indexes etc., and you can find those discussions by searching with Google. The short answer is that you can't get a list like this that is 100% accurate. There are so many ways you can access that data, and there is no requirement that users must use index names or sourcetype names in their queries. Of course you can get some estimates, and you can get a list of indexes and sourcetypes which are used, but there is no way to get a list of unused ones!
Hi @Meett, just wanted to update this old thread: we ended up using the Splunk Data Manager app to ingest AWS CloudTrail logs from an AWS S3 bucket using a cross-account IAM role that can be assumed by Splunk Cloud.

Splunk Data Manager documentation: https://docs.splunk.com/Documentation/DM/1.12.0/User/About
Configure AWS for onboarding from a single account: https://docs.splunk.com/Documentation/DM/1.12.0/User/AWSSingleAccount

You can use the above implementation to either ingest CloudTrail logs from a single AWS account or from your centralized logging account in an AWS Organization or Control Tower environment.
Hi @securepoint  Unfortunately I can't get any of the Cortex docs to load for me at the moment; however, at a previous customer we used Splunk SC4S to receive a syslog feed from Cortex and then sent this to Splunk over HEC. This was the raw data rather than alerts etc. Are you able to configure any outputs, such as syslog, from your Cortex XDR configuration? Please let me know how you get on and consider adding karma to this or any other answer if it has helped. Regards, Will
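If you do go down the SC4S route, the HEC destination is configured through the SC4S env_file. The sketch below is a rough example only, with a placeholder URL and token; double-check the variable names against the SC4S docs for your version.

# /opt/sc4s/env_file (placeholders - point these at your own HEC endpoint)
SC4S_DEST_SPLUNK_HEC_DEFAULT_URL=https://your-splunk-hec.example.com:8088
SC4S_DEST_SPLUNK_HEC_DEFAULT_TOKEN=00000000-0000-0000-0000-000000000000
SC4S_DEST_SPLUNK_HEC_DEFAULT_TLS_VERIFY=no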