All Posts

Hi All  I get this message, but the index does exist. It is not permanent; it happens at 01:00 in the morning on some days.

Search peer idx-03 has the following message: Received event for unconfigured/disabled/deleted index=mtrc_os_<XXX> with source="source::Storage Nix Metrics" host="host::splunk-sh-02" sourcetype="sourcetype::mcollect_stash". Dropping them as lastChanceIndex setting in indexes.conf is not configured. So far received events from 18 missing index(es). 3/4/2025, 1:00:34 AM
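The message itself names the relevant knob: events destined for an unconfigured, disabled, or deleted index are dropped unless lastChanceIndex is configured. A minimal indexes.conf sketch for the indexers, assuming a hypothetical catch-all index named last_chance that you create yourself (lastChanceIndex must name an existing, enabled events index):

# indexes.conf on the indexers -- last_chance is a hypothetical index name
lastChanceIndex = last_chance

[last_chance]
homePath   = $SPLUNK_DB/last_chance/db
coldPath   = $SPLUNK_DB/last_chance/colddb
thawedPath = $SPLUNK_DB/last_chance/thaweddb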
Hi @cadrija, are you using some Python script in your app? If yes, review your script, because Python has been permanently upgraded to 3.7. If not, there is another issue, and the best solution is to open a case with Splunk Support. Ciao. Giuseppe
I would also like to know if there is an update to Dashboard Studio to facilitate this.
We upgraded our Splunk Enterprise from 9.2.2 to 9.3.1. After the upgrade, one of the apps is not working because its related saved search is failing to execute. The saved search is throwing the error below.

command="execute", Exception: Traceback (most recent call last):
  File "/opt/splunk/etc/apps/stormwatch/bin/execute.py", line 16, in <module>
    from utils.print import print, append_print
  File "/opt/splunk/etc/apps/stormwatch/bin/utils/print.py", line 3, in <module>
    import pandas as pd
  File "/opt/splunk/etc/apps/stormwatch/bin/site-packages/pandas/__init__.py", line 138, in <module>
    from pandas import testing # noqa:PDF015
  File "/opt/splunk/etc/apps/stormwatch/bin/site-packages/pandas/testing.py", line 6, in <module>
    from pandas._testing import (
  File "/opt/splunk/etc/apps/stormwatch/bin/site-packages/pandas/_testing/__init__.py", line 69, in <module>
    from pandas._testing.asserters import (
  File "/opt/splunk/etc/apps/stormwatch/bin/site-packages/pandas/_testing/asserters.py", line 17, in <module>
    import pandas._libs.testing as _testing
  File "pandas/_libs/testing.pyx", line 1, in init pandas._libs.testing
ModuleNotFoundError: No module named 'cmath'
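cmath is part of Python's standard library, so a failure to import it points at the Python build doing the importing rather than at the app's own code. One quick check, a hedged diagnostic sketch assuming a standard /opt/splunk install and the app paths taken from the traceback above: confirm which Python the upgraded Splunk ships and whether cmath and the bundled pandas import cleanly under it.

# diagnostic sketch; paths come from the traceback above
/opt/splunk/bin/splunk cmd python3 --version
/opt/splunk/bin/splunk cmd python3 -c "import cmath; print('cmath ok')"
/opt/splunk/bin/splunk cmd python3 -c "import sys; sys.path.insert(0, '/opt/splunk/etc/apps/stormwatch/bin/site-packages'); import pandas; print(pandas.__version__)"

If cmath itself fails to import, the issue sits with the 9.3.1 Python runtime rather than with the app's vendored pandas.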
Hello, I decided to let go of the JSON file. Instead I now receive a simple txt file, which works better. Thank you for your help. Harry
Thanks all for the insightful discussion. Upon further research I realized I am not supposed to let my indexers see the other indexers' files either, so that's one more reason why this idea won't work out. Cheers
I want to integrate SentinelOne Singularity Enterprise data into my security workflows. What critical data (e.g., process/network events, threat detections, behavioral analytics) should I prioritize ingesting? What’s the best method (API, syslog, SIEM connectors) to pull this data, and how does it add value (e.g., improving threat hunting, automating response, or enriching SIEM/SOAR)?
I restarted the service and it was still showing "Status (Old)". I left it for a day or so, and it fixed itself and showed the expected "Status".
Thanks for your reply @marnall. I was looking at Health > Status and Health > logs, and both were showing "Status (Old)".
I’m implementing a Canary Honeypot in my company and want to integrate its data with Splunk. What key information should I prioritize ingesting, and how can this data enhance threat detection (e.g., correlating with firewall logs, spotting lateral movement)? Are there recommended Splunk TAs or SPL queries to structure and analyze this data effectively?
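On the correlation point specifically, a minimal SPL sketch of the idea, assuming hypothetical index and field names (index=canary, index=firewall, src_ip, dest_ip); a real TA would map these through its own sourcetypes:

index=canary OR (index=firewall action=allowed)
``` hypothetical names: flag sources that both tripped a canary and touched several other hosts ```
| stats values(index) AS data_sources dc(dest_ip) AS distinct_destinations BY src_ip
| where mvcount(data_sources) > 1 AND distinct_destinations > 5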
Hi @jotne

As you may be aware, the * wildcard breaks due to filename expansion (or globbing) in the calling shell, i.e. what is passed to the btools function is the list of filenames in the directory where the function is called, not the * wildcard itself. This can be turned off in the shell with set -f (set +f to re-enable), but the more useful convention is to escape the wildcard with a backslash or wrap it in single quotes. Standard *nix commands that take the * wildcard on the command line (e.g. find) use this convention, so I think this is a more conventional *nix method than using a ¤, a character my US keyboard does not provide easy access to.

[splunk ~]$ touch test_dummy
[splunk ~]$ btools indexes test* coldpath.maxDataSizeMB   # shell expands to test_dummy so does not work unless the * is escaped
[splunk ~]$ btools indexes test\* coldpath.maxDataSizeMB
[test_cust] coldPath.maxDataSizeMB = 0  /opt/splunk/etc/system/default/indexes.conf
[splunk ~]$
[splunk@lrlskt02 ~]$ btools indexes * coldpath.maxDataSizeMB
[splunk@lrlskt02 ~]$
[splunk@lrlskt02 ~]$ btools indexes '*' coldpath.maxDataSizeMB
[_audit] coldPath.maxDataSizeMB = 0  /opt/splunk/etc/system/default/indexes.conf
[_internal] coldPath.maxDataSizeMB = 0  /opt/splunk/etc/system/default/indexes.conf
[_introspection] coldPath.maxDataSizeMB = 0  /opt/splunk/etc/system/default/indexes.conf
[_metrics] coldPath.maxDataSizeMB = 0  /opt/splunk/etc/system/default/indexes.conf
[_metrics_rollup] coldPath.maxDataSizeMB = 0  /opt/splunk/etc/system/default/indexes.conf
[_telemetry] coldPath.maxDataSizeMB = 0  /opt/splunk/etc/system/default/indexes.conf
[_thefishbucket] coldPath.maxDataSizeMB = 0  /opt/splunk/etc/system/default/indexes.conf
[default] coldPath.maxDataSizeMB = 0  /opt/splunk/etc/system/default/indexes.conf
[history] coldPath.maxDataSizeMB = 0  /opt/splunk/etc/system/default/indexes.conf
[main] coldPath.maxDataSizeMB = 0  /opt/splunk/etc/system/default/indexes.conf
[summary] coldPath.maxDataSizeMB = 0  /opt/splunk/etc/system/default/indexes.conf
[test_cust] coldPath.maxDataSizeMB = 0  /opt/splunk/etc/system/default/indexes.conf
[splunk@lrlskt02 ~]$
Thanks, as you said the key indicator searches are designed to display metrics, so where and how exactly can I see these metrics? Thank you very much for your answer.
Hi @nithys  Something like this should work ...

index=dummy
| append
    [ | makeresults count=22
      | eval json=split("{\"name\":\"\",\"hostname\":\"1\",\"pid\":8,\"level\":\"\",\"claims\":{\"ver\":1,\"jti\":\"h7\",\"iss\":\"https\",\"aud\":\"https://p\",\"iat\":1,\"exp\":17,\"cid\":\"name.id\",\"uid\":\"00\",\"scp\":[\"update:\",\"offline_access\",\"read:\",\"readall:\",\"create:\",\"openid\",\"delete:\",\"execute:\",\"read:\"],\"auth_time\":17,\"sub\":\"name@gmail.com\",\"groups\":[\"App.PreProd.name\"]},\"msg\":\" JWT Claims -API\",\"time\":\"2025\",\"v\":0} | {\"name\":\"\",\"hostname\":\"1\",\"pid\":8,\"level\":\"\",\"claims\":{\"ver\":1,\"jti\":\"h7\",\"iss\":\"https\",\"aud\":\"https://p\",\"iat\":1,\"exp\":17,\"cid\":\"address.id\",\"uid\":\"00\",\"scp\":[\"update:\",\"offline_access\",\"read:\",\"readall:\",\"create:\",\"openid\",\"delete:\",\"execute:\",\"read:\"],\"auth_time\":17,\"sub\":\"name@gmail.com\",\"groups\":[\"App.PreProd.address,app.preprod.zipcode\"]},\"msg\":\" JWT Claims -API\",\"time\":\"2025\",\"v\":0}", " | ") ]
| mvexpand json
| eval _raw=json
| spath
| streamstats count
| eval "claims.sub"=if(count%2=0, count."_".'claims.sub', 'claims.sub')
``` ^^^ create dummy events ^^^ ```
| stats dc(claims.sub) as "Unique Users" dc(claims.cid) as "Unique Clients" BY claims.cid claims.groups{}
| rename claims.cid AS app claims.groups{} AS groups
| table app "Unique Users" "Unique Clients" groups

Hope that helps
Hi  I need to find Unique Users (count of distinct business users) and Clients (count of distinct system client accounts). I want to have unique users and unique clients based on cid.id and its associated groups, for example:

app        | Unique Users | Unique Clients | groups
name.id    | 22           | 1              | app.preprod.name
address.id | 1            | 1              | app.preprod.address,app.preprod.zipcode

Unique users:

index= AND source="*"
| stats dc(claims.sub) as "Unique Users"
``` dc(claims.sub) as "Unique Users" count(claims.sub) as "Total" ```
```| addcoltotals labelfield="Grand Total"```

Sample event:

{"name":"","hostname":"1","pid":8,"level":,"claims":{"ver":1,"jti":"h7","iss":"https","aud":"https://p","iat":1,"exp":17,"cid":"name.id","uid":"00","scp":["update:","offline_access","read:","readall:","create:","openid","delete:","execute:","read:"],"auth_time":17,"sub":"name@gmail.com","groups":["App.PreProd.name"]},"msg":" JWT Claims -API","time":"2025","v":0}

Unique clients:

index=* AND source="*"
| stats dc(claims.cid) as "Unique Clients"
``` dc(claims.sub) as "Unique Users" count(claims.sub) as "Total" ```
```| addcoltotals labelfield="Grand Total"```

Sample event:

{"name":"","hostname":"1","pid":8,"level":,"claims":{"ver":1,"jti":"h7","iss":"https","aud":"https://p","iat":1,"exp":17,"cid":"address.id","uid":"00","scp":["update:","offline_access","read:","readall:","create:","openid","delete:","execute:","read:"],"auth_time":17,"sub":"name@gmail.com","groups":["App.PreProd.address,app.preprod.zipcode"]},"msg":" JWT Claims -API","time":"2025","v":0}
Yes, it worked. Thanks
Please use the code block (</>) editor button to add your code. Otherwise it could get mangled by the editor.
Hi there, I just searched the Dashboard Studio docs for a function similar to the "input=button" known from the Classic XML dashboards and stumbled over the page "https://docs.splunk.com/Documentation/Splunk/9.4.1/DashStudio/inputButton". This is not documented any further: the input is not available in the visual editor, and the source code editor gives me an error if I try to set something up like "type=input.button", which I guessed at due to the lack of further documentation of this input. Is this a future feature of Dashboard Studio, or am I missing something? Thanks in advance, Edgar
Hi  Have you read what has changed when Splunk is updated from 8.1 to 9.1? There are "READ THIS FIRST" documents which describe those changes, especially any removed features which worked in the old version but not in the new one! Also, if/when there are versions with a higher patch level x.y.Z, you should usually select those instead of lower ones. https://docs.splunk.com/Documentation/Splunk/9.1.1/Installation/AboutupgradingREADTHISFIRST

For example, you will find this in it: “Splunk supports a direct upgrade to Splunk Enterprise 9.1 from versions 8.2.x and higher only”! If you have updated directly from 8.1.1 to 9.1.1, this is not supported, and you have now missed some important migration steps which modify the needed components between versions. Currently Splunk supports upgrades over only one minor version, like 8.1 to 9.0 or 8.2 to 9.1. Also, you should always train/test in a test environment first, and only after you see that everything is OK should you do those same steps in production.

Your best and only supported solution is to use your backup and do your upgrade again along a supported path. You must also start Splunk on each version on the path from the source to the destination version; otherwise it does not do those migration steps. If you don't have a backup, then probably the best option is to create a support ticket and ask if they have any instructions for how you could try to fix the situation. r. Ismo
Hi @Snorlax  Do you have the full XML for your dashboard? This might make it easier for someone to update it and send it back as a working example. Do you have an input (text or dropdown, for example) in your dashboard called "selected_domain"? Please let me know how you get on, and consider adding karma to this or any other answer if it has helped. Regards Will
Maybe you will find another way to do it after you have read this post, and especially that .conf presentation: https://community.splunk.com/t5/Splunk-Enterprise/How-do-I-back-up-my-Splunk-knowledge-objects-Is-this-a-big/m-p/568554/highlight/true#M10097 With correctly created REST queries, and then using e.g. curl to send the generated SPL back to Splunk via REST, you could do what you are looking for.
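As a rough sketch of the curl half of that, assuming a hypothetical host, credentials, and saved-search content; the endpoint is the standard saved/searches REST endpoint for creating a saved search:

# hypothetical host, credentials, and search content
curl -k -u admin:changeme https://splunk-sh-01:8089/servicesNS/nobody/search/saved/searches \
    -d name="restored_search" \
    --data-urlencode search="index=_internal | stats count by sourcetype"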