All Posts


I’m implementing a Canary honeypot in my company and want to integrate its data with Splunk. What key information should I prioritize ingesting, and how can this data enhance threat detection (e.g., correlating with firewall logs or spotting lateral movement)? Are there recommended Splunk TAs or SPL queries for structuring and analyzing this data effectively?
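To make it more concrete, this is roughly the kind of correlation I have in mind; the index names and fields below (canary, firewall, src_ip, alert_type) are only placeholders on my side, not a known TA schema:

``` hosts that touched a canary, and how widely they then talk through the firewall ```
index=canary sourcetype="canary:alert"
| stats earliest(_time) as first_seen values(alert_type) as alert_types by src_ip
| join type=inner src_ip
    [ search index=firewall
      | stats count as fw_connections dc(dest_ip) as distinct_dests by src_ip ]
| table src_ip first_seen alert_types fw_connections distinct_dests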
Hi @jotne

As you may be aware, the * wildcard breaks due to filename expansion (or globbing) in the calling shell, i.e. what is passed to the btools function is the list of filenames in the directory where the function is called, not the * wildcard itself. This can be turned off in the shell with set -f (set +f re-enables it), but the more useful convention is to escape the wildcard with a backslash or wrap it in single quotes. Standard *nix commands that take the * wildcard on the command line (e.g. find) use this convention, so I think it is a more conventional *nix method than using a ¤ - my US keyboard does not provide easy access to that character.

[splunk ~]$ touch test_dummy
[splunk ~]$ btools indexes test* coldpath.maxDataSizeMB    # shell expands test* to test_dummy, so this does not work unless the * is escaped
[splunk ~]$ btools indexes test\* coldpath.maxDataSizeMB
[test_cust] coldPath.maxDataSizeMB = 0 /opt/splunk/etc/system/default/indexes.conf
[splunk ~]$
[splunk@lrlskt02 ~]$ btools indexes * coldpath.maxDataSizeMB
[splunk@lrlskt02 ~]$
[splunk@lrlskt02 ~]$ btools indexes '*' coldpath.maxDataSizeMB
[_audit] coldPath.maxDataSizeMB = 0 /opt/splunk/etc/system/default/indexes.conf
[_internal] coldPath.maxDataSizeMB = 0 /opt/splunk/etc/system/default/indexes.conf
[_introspection] coldPath.maxDataSizeMB = 0 /opt/splunk/etc/system/default/indexes.conf
[_metrics] coldPath.maxDataSizeMB = 0 /opt/splunk/etc/system/default/indexes.conf
[_metrics_rollup] coldPath.maxDataSizeMB = 0 /opt/splunk/etc/system/default/indexes.conf
[_telemetry] coldPath.maxDataSizeMB = 0 /opt/splunk/etc/system/default/indexes.conf
[_thefishbucket] coldPath.maxDataSizeMB = 0 /opt/splunk/etc/system/default/indexes.conf
[default] coldPath.maxDataSizeMB = 0 /opt/splunk/etc/system/default/indexes.conf
[history] coldPath.maxDataSizeMB = 0 /opt/splunk/etc/system/default/indexes.conf
[main] coldPath.maxDataSizeMB = 0 /opt/splunk/etc/system/default/indexes.conf
[summary] coldPath.maxDataSizeMB = 0 /opt/splunk/etc/system/default/indexes.conf
[test_cust] coldPath.maxDataSizeMB = 0 /opt/splunk/etc/system/default/indexes.conf
[splunk@lrlskt02 ~]$
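For completeness, a minimal sketch of the set -f approach mentioned above, using the same btools call as in the session:

# turn off filename expansion so the * reaches the function untouched
set -f
btools indexes test* coldpath.maxDataSizeMB
# turn globbing back on for normal shell use
set +f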
Thanks. As you said, the key indicator searches are designed to display metrics, so where exactly, and how, can I see these metrics? Thank you very much for your answer.
Hi @nithys

Something like this should work ...

index=dummy
| append
    [ | makeresults count=22
      | eval json=split("{\"name\":\"\",\"hostname\":\"1\",\"pid\":8,\"level\":\"\",\"claims\":{\"ver\":1,\"jti\":\"h7\",\"iss\":\"https\",\"aud\":\"https://p\",\"iat\":1,\"exp\":17,\"cid\":\"name.id\",\"uid\":\"00\",\"scp\":[\"update:\",\"offline_access\",\"read:\",\"readall:\",\"create:\",\"openid\",\"delete:\",\"execute:\",\"read:\"],\"auth_time\":17,\"sub\":\"name@gmail.com\",\"groups\":[\"App.PreProd.name\"]},\"msg\":\" JWT Claims -API\",\"time\":\"2025\",\"v\":0} | {\"name\":\"\",\"hostname\":\"1\",\"pid\":8,\"level\":\"\",\"claims\":{\"ver\":1,\"jti\":\"h7\",\"iss\":\"https\",\"aud\":\"https://p\",\"iat\":1,\"exp\":17,\"cid\":\"address.id\",\"uid\":\"00\",\"scp\":[\"update:\",\"offline_access\",\"read:\",\"readall:\",\"create:\",\"openid\",\"delete:\",\"execute:\",\"read:\"],\"auth_time\":17,\"sub\":\"name@gmail.com\",\"groups\":[\"App.PreProd.address,app.preprod.zipcode\"]},\"msg\":\" JWT Claims -API\",\"time\":\"2025\",\"v\":0}", " | ") ]
| mvexpand json
| eval _raw=json
| spath
| streamstats count
| eval "claims.sub"=if(count%2=0, count."_".'claims.sub', 'claims.sub')
``` ^^^ create dummy events ^^^ ```
| stats dc(claims.sub) as "Unique Users" dc(claims.cid) as "Unique Clients" BY claims.cid claims.groups{}
| rename claims.cid AS app claims.groups{} AS groups
| table app "Unique Users" "Unique Clients" groups

Hope that helps
Hi

I need to find Unique Users (count of distinct business users) and Clients (count of distinct system client accounts). I want to have Unique Users and unique Clients based on cid.id and its associated groups, for example:

app          unique user   unique client   groups
name.id      22            1               app.preprod.name
address.id   1             1               app.preprod.address,app.preprod.zipcode

Unique Users:

index= AND source="*"
| stats dc( claims.sub) as "Unique Users" ``` dc(claims.sub) as "Unique Users" count(claims.sub) as "Total" ```
```| addcoltotals labelfield="Grand Total"`

Sample event:

{"name":"","hostname":"1","pid":8,"level":,"claims":{"ver":1,"jti":"h7","iss":"https","aud":"https://p","iat":1,"exp":17,"cid":"name.id","uid":"00","scp":["update:","offline_access","read:","readall:","create:","openid","delete:","execute:","read:"],"auth_time":17,"sub":"name@gmail.com","groups":["App.PreProd.name"]},"msg":" JWT Claims -API","time":"2025","v":0}

Unique Clients:

index=* AND source="*"
| stats dc( claims.cid) as "Unique Clients" ``` dc(claims.sub) as "Unique Users" count(claims.sub) as "Total" ```
```| addcoltotals labelfield="Grand Total"```

Sample event:

"name":"","hostname":"1","pid":8,"level":,"claims":{"ver":1,"jti":"h7","iss":"https","aud":"https://p","iat":1,"exp":17,"cid":"address.id","uid":"00","scp":["update:","offline_access","read:","readall:","create:","openid","delete:","execute:","read:"],"auth_time":17,"sub":"name@gmail.com","groups":["App.PreProd.address,app.preprod.zipcode"]},"msg":" JWT Claims -API","time":"2025","v":0}
Yes, it worked. Thanks!
Please use the code block </> editor button to add your code. Otherwise it could get mangled by the editor.
Hi there, I just searched the Dashboard Studio docs for an equivalent of the "input=button" known from the Classic (Simple XML) dashboards and stumbled over the page "https://docs.splunk.com/Documentation/Splunk/9.4.1/DashStudio/inputButton". It is not documented any further, the input is not available in the visual editor, and the source code editor gives me an error if I try to set up something like "type=input.button" - which I guessed at, since there is no further documentation of this input. Is this a future feature of Dashboard Studio, or am I missing something? Thanks in advance, Edgar
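For reference, this is roughly the stanza I tried in the source editor, modelled on the structure of the documented input types - the "options" names here are pure guesses on my part, which is presumably why it errors:

"inputs": {
    "input_button_1": {
        "type": "input.button",
        "title": "Apply",
        "options": {
            "token": "apply_clicked",
            "defaultValue": "true"
        }
    }
}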
Hi

Have you read what has changed when Splunk is updated from 8.1 to 9.1? There are "read this first" documents which describe those changes. In particular, there could be removed features which worked on the old version but not on the new one! Also, if/when there are versions with a higher patch level x.y.Z, you should usually select those instead of lower ones. https://docs.splunk.com/Documentation/Splunk/9.1.1/Installation/AboutupgradingREADTHISFIRST

For example, you will find this in it: “Splunk supports a direct upgrade to Splunk Enterprise 9.1 from versions 8.2.x and higher only”! If you have updated directly from 8.1.1 to 9.1.1, this is not supported, and you have now missed some important migration steps which modify the needed components between versions. Currently Splunk supports upgrades over only one minor version, like 8.1 to 9.0 or 8.2 to 9.1. Also, you should always train/test in a test environment first, and only after you see that everything is OK should you do those same steps in production.

Your best and only supported solution is to use your backup and do the upgrade again via a supported path. You must also start Splunk on each version you pass through on the path from the source to the destination version, otherwise it doesn't perform those migration steps.

If you don't have a backup, then probably the best option is to create a support ticket and ask if they have any instructions on how you could try to fix the situation.

r. Ismo
Hi @Snorlax  Do you have the full XML for your dashboard? That might make it easier for someone to update it and send it back as a working example. Do you have an input (text or dropdown, for example) in your dashboard called "selected_domain"? Please let me know how you get on, and consider adding karma to this or any other answer if it has helped. Regards Will
Maybe you will find another way to do it after you have read this post, and especially that .conf presentation: https://community.splunk.com/t5/Splunk-Enterprise/How-do-I-back-up-my-Splunk-knowledge-objects-Is-this-a-big/m-p/568554/highlight/true#M10097 With correctly created REST queries, and then using e.g. curl to send those generated SPLs back to Splunk via REST, you can do what you are looking for.
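For example, something like this with curl (the host, credentials and saved search name are placeholders; it assumes the standard saved/searches REST endpoint):

# export one saved search definition as JSON
curl -k -u admin:changeme "https://splunk.example.com:8089/servicesNS/nobody/search/saved/searches/my_alert?output_mode=json"

# recreate it on another instance from the exported SPL
curl -k -u admin:changeme "https://splunk.example.com:8089/servicesNS/nobody/search/saved/searches" \
     -d name=my_alert \
     --data-urlencode "search=index=main error | stats count"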
Use the rex command to extract the usage value, then test the value to see if the alert should be triggered. I find it more reliable to put the threshold in the alert rather than in the metadata.

index="main" source="C:\\Admin\StorageLogs\storage_usage.log"
| rex "usage (?<usage>[^%]+)% used"
| where usage > 75
There seem to be a couple of different ways to do this, e.g.:
- https://splunk-guide-for-kafka-monitoring.readthedocs.io/en/latest/index.html
- https://www.splunk.com/en_us/blog/devops/monitoring-kafka-performance-metrics-with-splunk-infrastructure-monitoring.html
- https://www.confluent.io/blog/bring-your-own-monitoring-with-confluent-cloud/
That last one seems to use OTel, but I haven't tried it yet.
Hi, I just found this: https://community.splunk.com/t5/Getting-Data-In/How-to-configure-HTTP-Event-collector-to-log-client-source-IP/m-p/273311#M52449 I haven't checked whether this is also valid on SCP or not. r. Ismo
Hi @stevensk  Sorry I missed this reply - leave it with me and let me see if I can dig out what we did (it was a few years ago!)
Hi @DaveyJones

I think the easiest way to achieve this might be to add the following to your search:

| rex field=_raw "usage (?<diskUsage>[0-9\.]+)% used"
| where diskUsage>75

Adjust the diskUsage>75 to whatever you need. This works by extracting the percentage value of the disk usage from the raw event and then only returning events where diskUsage is over the specified value.

You would then create the alert to run on a cron schedule as required, such as every hour. Real-time is generally not advised, especially as disk usage shouldn't change that drastically that quickly! So run it on a suitable interval, and adjust the time it looks back over (earliest) accordingly. Set the alert to "Trigger alert when: Number of Results is greater than 0". This will then trigger the alert if there is any result from the search (which has the specified limit applied).

Please let me know how you get on and consider adding karma to this or any other answer if it has helped.

Regards
Will
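If you prefer to see it as configuration, here is a rough savedsearches.conf sketch of the same alert; the stanza name, schedule and lookback are placeholders to adjust (this assumes the standard scheduled-alert attributes):

[F drive usage over 75 percent]
search = index="main" source="C:\\Admin\StorageLogs\storage_usage.log" | rex field=_raw "usage (?<diskUsage>[0-9\.]+)% used" | where diskUsage>75
enableSched = 1
cron_schedule = 0 * * * *
dispatch.earliest_time = -1h
dispatch.latest_time = now
counttype = number of events
relation = greater than
quantity = 0
alert.track = 1
alert.severity = 3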
Hello, I am new to Splunk and dashboards. First of all, I need something like this: I have about 20-30 different domains, and different data comes from each of them. In my first main dashboard, think of a box (single value) for each domainName; I want to show the last incoming value just below it. Then, as an example, when I click on a domainName box, I want the last 24 hours' line chart for that domainName to come up. I wrote the SPL and XML as follows, but I could not get the selection as a variable. How can I do this?

<dashboard version="1.1" theme="light">
  <label>EPS by Domain Dashboard</label>
  <search id="base_search">
    <query>index=* sourcetype=test metric=epsbyDomain | stats latest(EPS) as EPS, latest(EPSLimit) as EPSLimit by domainName | eval underLabel="EPS / Limit: ".EPSLimit</query>
    <earliest>-7d</earliest>
    <latest>now</latest>
  </search>
  <search id="selected_domain_search" depends="$selected_domain$">
    <query>index=* sourcetype=test metric=epsbyDomain domainName="$selected_domain$" | timechart span=5m avg(EPS) as EPS</query>
    <earliest>-24h</earliest>
    <latest>now</latest>
  </search>
  <row>
    <panel>
      <single>
        <search base="base_search">
          <query>| table domainName, EPS, underLabel</query>
        </search>
        <option name="trellis.enabled">1</option>
        <option name="trellis.scales.shared">1</option>
        <option name="trellis.size">small</option>
        <option name="trellis.splitBy">domainName</option>
        <option name="underLabel">$row.underLabel$</option>
        <option name="useColors">1</option>
        <option name="colorMode">block</option>
        <option name="rangeColors">["0x53a051","0x006d9c","0xf8be34","0xdc4e41"]</option>
        <option name="rangeValues">[0,30,70,100]</option>
        <option name="numberPrecision">0</option>
        <option name="height">500</option>
        <option name="refresh.display">progressbar</option>
        <drilldown>
          <set token="selected_domain">$row.domainName$</set>
          <set token="show_chart">true</set>
        </drilldown>
      </single>
    </panel>
  </row>
  <row depends="$show_chart$">
    <panel>
      <title>Last 24 hours - $selected_domain$</title>
      <chart>
        <search base="selected_domain_search"></search>
        <option name="charting.chart">line</option>
        <option name="charting.axisTitleX.visibility">visible</option>
        <option name="charting.axisTitleY.visibility">visible</option>
        <option name="charting.axisTitleY.text">EPS</option>
        <option name="charting.legend.placement">bottom</option>
      </chart>
    </panel>
  </row>
</dashboard>
Good Day All,

I'm looking for assistance on how to create a triggered alert when a certain percentage value in a text .log file is met, in real time. For background, on a remote server there's a PowerShell script that runs locally via Task Scheduler, set to daily, which generates a text .log file containing the used percentage of a drive (the F: drive in this instance). Data Inputs -> Forwarded Inputs -> Files & Directories on Splunk, along with the Universal Forwarder on that remote server, are configured, and the text .log file can be read in Splunk when searched, as shown below:

Search: index="main" source="C:\\Admin\StorageLogs\storage_usage.log"
Result:
Event: 03/03/2025 13:10:40 - F: drive usage 17.989% used (all the text contained in the .log file)
source = C:\\Admin\StorageLogs\storage_usage.log
sourcetype = storage_usage-too_small

What would be the best way to go about setting up a triggered alert that notifies you in real time when that text .log file shows 75% or more of the F: drive used? I attempted saving it as an alert by performing the following:

Save As -> Alert:
Title: Storage Monitoring
Description: (will add at the end)
Permissions: Shared in App
Alert Type: Real-time
Expires: 24 Hours
Trigger Conditions: Custom
Trigger alert when: (this is the field where I'm trying to express the "read/notify at 75% used" part, but I'm unfamiliar with what to put)
In: 1 minute
Trigger: For each result
Throttle: (unsure if this needs to be enabled or not)
Trigger Actions -> When triggered -> Add to Triggered Alerts -> Severity: Medium

Would it be easier to configure the "notify when 75% used" part in the trigger conditions above, or by adding it to the main search query and then saving? My apologies if I'm incorrect in any of my interpretations/explanations; I just started with this team and have basically no experience with Splunk. Any information or guidance is greatly appreciated, thanks again.
If you need to use AD users and groups to authenticate into a full Splunk Enterprise server (not a UF), you could/should use the method which has been explained in the previous posts. This works for both GUI and CLI access. But usually you should have separate roles for users anywhere other than on the search heads. Actually, other nodes shouldn't allow anyone other than admins, and in most cases they should allow only CLI, not GUI, access (except e.g. a DB Connect HF).
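As a rough illustration of that method, a minimal authentication.conf sketch for AD/LDAP - the host, bind account, DNs and group names are placeholders and need to match your own directory:

[authentication]
authType = LDAP
authSettings = corp_ad

[corp_ad]
host = dc01.example.com
port = 636
SSLEnabled = 1
bindDN = CN=splunk-bind,OU=Service Accounts,DC=example,DC=com
bindDNpassword = <bind-account-password>
userBaseDN = OU=Users,DC=example,DC=com
groupBaseDN = OU=Groups,DC=example,DC=com
userNameAttribute = sAMAccountName
realNameAttribute = displayName
groupMappingAttribute = dn
groupMemberAttribute = member
groupNameAttribute = cn

[roleMap_corp_ad]
admin = Splunk-Admins
user = Splunk-Users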
Of course it depends on what “unused” means and what kind of role you have. I expect that you have an admin role which can access all indexes. But as @PickleRick said, if your role doesn't have access to all indexes, or your role hasn't been granted the capability to use remote REST to the indexers, then we have one additional issue. Fortunately we have https://splunkbase.splunk.com/app/6368, which helps in those cases, but there will still be other challenges.
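As an illustration of the remote REST approach, something like this (it assumes your role may query /services/data/indexes on the indexers; the field names are the standard ones that endpoint returns, and "unused" here is read as "no events at all"):

| rest splunk_server=* /services/data/indexes
``` one row per index per indexer; roll up and keep indexes with no events ```
| stats sum(totalEventCount) as events max(maxTime) as newest_event by title
| where events = 0
| rename title as index
| table index events newest_event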