All Posts

I have a lookup that has domain names. I am already using this lookup in one search and it works fine, but when I try to construct a new search based on the same lookup I get the error below:

[indexer1,indexer2,indexer3,indexer4] The lookup table 'lookup.csv' does not exist or is not available.

The lookup is configured to run for all apps and roles, and both searches run on ES. Why does one search throw this error while the other doesn't?
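One way to isolate this kind of error (a sketch; lookup.csv is just the name quoted in the error message) is to call the lookup directly from each search's app context and compare:

```
| inputlookup lookup.csv | head 5
```

If this works in one app context but not the other, sharing is the likely culprit: check Settings > Lookups and confirm that both the lookup table file and the lookup definition are shared globally, since an app-scoped lookup can raise exactly this "does not exist or is not available" error from the search peers when referenced outside its app.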
Thanks for your reply. I tried this, but it still expanded to a file name. (I did see the problem with expansion, which is why I used an unusual symbol on my keyboard.)

test () {
  set -f
  if [[ "$1" == "-q" ]]; then
    opt="etc/system/default"
    file="$2"
    stanza="$3"
    search="$4"
  else
    opt="x**x#x"
    file="$1"
    stanza="$2"
    search="$3"
  fi
  echo "opt=$opt"
  echo "file=$file"
  echo "stanza=$stanza"
  echo "search=$search"
  set +f
}

One way I see to solve it is to add another option, like -a (all Splunk files): btools -a <stanza> <search>
I added all the code.
Hi @nithys, when you use JSON fields (field names containing dots), rename them first or wrap them in single quotes:

index= AND source="*"
| rename claims.sub AS claims_sub
| stats dc(claims_sub) as "Unique Users" ``` count(claims_sub) as "Total" ```
``` | addcoltotals labelfield="Grand Total" ```

or:

index= AND source="*"
| stats dc('claims.sub') as "Unique Users" ``` count('claims.sub') as "Total" ```
``` | addcoltotals labelfield="Grand Total" ```

Ciao. Giuseppe
Hi All, I get this message even though the indexes do exist. It is not permanent; it happens at 01:00 in the morning on some days:

Search peer idx-03 has the following message: Received event for unconfigured/disabled/deleted index=mtrc_os_<XXX> with source="source::Storage Nix Metrics" host="host::splunk-sh-02" sourcetype="sourcetype::mcollect_stash". Dropping them as lastChanceIndex setting in indexes.conf is not configured. So far received events from 18 missing index(es). 3/4/2025, 1:00:34 AM
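The message itself points at a safety net: configuring lastChanceIndex so events aimed at a missing index are routed somewhere instead of being dropped. A minimal indexes.conf sketch (the index name last_chance is an assumption; use any index you actually retain):

```
# indexes.conf on the indexers -- lastChanceIndex is a global (top-level) setting
# Events destined for an unconfigured index land here instead of being dropped
lastChanceIndex = last_chance

[last_chance]
homePath   = $SPLUNK_DB/last_chance/db
coldPath   = $SPLUNK_DB/last_chance/colddb
thawedPath = $SPLUNK_DB/last_chance/thaweddb
```

Note this only catches the events; the root cause here looks more like an mcollect-based scheduled search on splunk-sh-02 writing to metric indexes that exist on some peers but not on idx-03, which is worth checking separately.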
Hi @cadrija, are you using a Python script in your app? If yes, review your script, because the bundled Python was upgraded to 3.7. If not, there is another issue, and the best solution is to open a case with Splunk Support. Ciao. Giuseppe
I would also like to know whether there is an update to Dashboard Studio to facilitate this.
We upgraded Splunk Enterprise from 9.2.2 to 9.3.1. After the upgrade, one of our apps stopped working because its saved search fails to execute with the error below:

command="execute", Exception: Traceback (most recent call last):
  File "/opt/splunk/etc/apps/stormwatch/bin/execute.py", line 16, in <module>
    from utils.print import print, append_print
  File "/opt/splunk/etc/apps/stormwatch/bin/utils/print.py", line 3, in <module>
    import pandas as pd
  File "/opt/splunk/etc/apps/stormwatch/bin/site-packages/pandas/__init__.py", line 138, in <module>
    from pandas import testing # noqa:PDF015
  File "/opt/splunk/etc/apps/stormwatch/bin/site-packages/pandas/testing.py", line 6, in <module>
    from pandas._testing import (
  File "/opt/splunk/etc/apps/stormwatch/bin/site-packages/pandas/_testing/__init__.py", line 69, in <module>
    from pandas._testing.asserters import (
  File "/opt/splunk/etc/apps/stormwatch/bin/site-packages/pandas/_testing/asserters.py", line 17, in <module>
    import pandas._libs.testing as _testing
  File "pandas/_libs/testing.pyx", line 1, in init pandas._libs.testing
ModuleNotFoundError: No module named 'cmath'
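A missing cmath (a built-in C module) in a pandas import chain usually means the app's bundled pandas extensions were compiled against a different Python version than the one now shipped with the upgraded Splunk. A minimal sanity-check sketch, intended to be run with Splunk's own interpreter (the /opt/splunk path comes from the traceback; adjust for your install):

```python
# Run as: /opt/splunk/bin/splunk cmd python3 version_check.py
# A healthy interpreter imports cmath unconditionally; if this succeeds
# here but fails inside the app's pandas, the bundled pandas build was
# made for an older Python and needs re-vendoring for the new version.
import sys
import cmath

print("interpreter:", sys.version_info.major, sys.version_info.minor)
print("cmath ok:", cmath.sqrt(-1) == 1j)
```

If the printed interpreter version differs from the one the app's site-packages were built for, reinstalling pandas into the app's site-packages with the new interpreter is the usual fix.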
Hello, I decided to let go of the JSON file. Instead, I now receive a simple txt file, which works better. Thank you for your help. Harry
Thanks all for the insightful discussion. Upon further research I realized I am not supposed to let my indexers see the other indexers' files, so that's one more reason why this idea won't work out. Cheers
I want to integrate SentinelOne Singularity Enterprise data into my security workflows. What critical data (e.g., process/network events, threat detections, behavioral analytics) should I prioritize ingesting? What’s the best method (API, syslog, SIEM connectors) to pull this data, and how does it add value (e.g., improving threat hunting, automating response, or enriching SIEM/SOAR)?
I restarted the service and it was still showing "Status (Old)". I left it for a day or so, after which it fixed itself and showed the expected "Status".
Thanks for your reply @marnall . I was looking at Health > Status and Health > logs and both were showing "Status (Old)"
I’m implementing a Canary Honeypot in my company and want to integrate its data with Splunk. What key information should I prioritize ingesting? And how can this data enhance threat detection (e.g., correlating with firewall logs, spotting lateral movement)? Are there recommended Splunk TAs or SPL queries to structure and analyze this data effectively?
Hi @jotne

As you may be aware, the * wildcard breaks due to filename expansion (globbing) in the calling shell, i.e. what is passed to the btools function is the list of filenames in the directory where the function is called, not the * wildcard itself. This can be turned off in the shell with set -f (and re-enabled with set +f), but the more useful convention is to escape the wildcard with a backslash or wrap it in single quotes. Standard *nix commands that take * patterns on the command line (e.g. find) use this convention, so I think it is a more conventional *nix approach than using a ¤, which my US keyboard does not provide easy access to.

[splunk ~]$ touch test_dummy
[splunk ~]$ btools indexes test* coldpath.maxDataSizeMB
# shell expands to test_dummy, so this does not work unless the * is escaped
[splunk ~]$ btools indexes test\* coldpath.maxDataSizeMB
[test_cust] coldPath.maxDataSizeMB = 0 /opt/splunk/etc/system/default/indexes.conf
[splunk@lrlskt02 ~]$ btools indexes * coldpath.maxDataSizeMB
[splunk@lrlskt02 ~]$ btools indexes '*' coldpath.maxDataSizeMB
[_audit] coldPath.maxDataSizeMB = 0 /opt/splunk/etc/system/default/indexes.conf
[_internal] coldPath.maxDataSizeMB = 0 /opt/splunk/etc/system/default/indexes.conf
[_introspection] coldPath.maxDataSizeMB = 0 /opt/splunk/etc/system/default/indexes.conf
[_metrics] coldPath.maxDataSizeMB = 0 /opt/splunk/etc/system/default/indexes.conf
[_metrics_rollup] coldPath.maxDataSizeMB = 0 /opt/splunk/etc/system/default/indexes.conf
[_telemetry] coldPath.maxDataSizeMB = 0 /opt/splunk/etc/system/default/indexes.conf
[_thefishbucket] coldPath.maxDataSizeMB = 0 /opt/splunk/etc/system/default/indexes.conf
[default] coldPath.maxDataSizeMB = 0 /opt/splunk/etc/system/default/indexes.conf
[history] coldPath.maxDataSizeMB = 0 /opt/splunk/etc/system/default/indexes.conf
[main] coldPath.maxDataSizeMB = 0 /opt/splunk/etc/system/default/indexes.conf
[summary] coldPath.maxDataSizeMB = 0 /opt/splunk/etc/system/default/indexes.conf
[test_cust] coldPath.maxDataSizeMB = 0 /opt/splunk/etc/system/default/indexes.conf
[splunk@lrlskt02 ~]$
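To make the expansion timing concrete, here is a minimal self-contained sketch (the demo function and temp files are illustrative, not part of btools): set -f only helps when it runs in the calling shell, because globbing happens before the function ever sees its arguments.

```shell
tmpdir=$(mktemp -d)
cd "$tmpdir" || exit 1
touch test_a test_b

# Stand-in for btools: just echoes what actually arrived as arguments
demo() { echo "got: $*"; }

demo test*      # caller expands the glob first: got: test_a test_b
demo 'test*'    # single quotes suppress expansion: got: test*
demo test\*     # a backslash escape works the same way: got: test*
set -f          # disable globbing in the *calling* shell
demo test*      # now the literal pattern reaches the function: got: test*
set +f          # restore globbing
```

This is why adding set -f inside the function cannot work: by the time the function body runs, the arguments are already the expanded filenames.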
Thanks. As you said, the key indicator searches are designed to display metrics; where exactly, and how, can I see these metrics? Thank you very much for your answer.
Hi @nithys

Something like this should work ...

index=dummy
| append [
    | makeresults count=22
    | eval json=split("{\"name\":\"\",\"hostname\":\"1\",\"pid\":8,\"level\":\"\",\"claims\":{\"ver\":1,\"jti\":\"h7\",\"iss\":\"https\",\"aud\":\"https://p\",\"iat\":1,\"exp\":17,\"cid\":\"name.id\",\"uid\":\"00\",\"scp\":[\"update:\",\"offline_access\",\"read:\",\"readall:\",\"create:\",\"openid\",\"delete:\",\"execute:\",\"read:\"],\"auth_time\":17,\"sub\":\"name@gmail.com\",\"groups\":[\"App.PreProd.name\"]},\"msg\":\" JWT Claims -API\",\"time\":\"2025\",\"v\":0} | {\"name\":\"\",\"hostname\":\"1\",\"pid\":8,\"level\":\"\",\"claims\":{\"ver\":1,\"jti\":\"h7\",\"iss\":\"https\",\"aud\":\"https://p\",\"iat\":1,\"exp\":17,\"cid\":\"address.id\",\"uid\":\"00\",\"scp\":[\"update:\",\"offline_access\",\"read:\",\"readall:\",\"create:\",\"openid\",\"delete:\",\"execute:\",\"read:\"],\"auth_time\":17,\"sub\":\"name@gmail.com\",\"groups\":[\"App.PreProd.address,app.preprod.zipcode\"]},\"msg\":\" JWT Claims -API\",\"time\":\"2025\",\"v\":0}", " | ") ]
| mvexpand json
| eval _raw=json
| spath
| streamstats count
| eval "claims.sub"=if(count%2=0, count."_".'claims.sub', 'claims.sub')
``` ^^^ create dummy events ^^^ ```
| stats dc(claims.sub) as "Unique Users" dc(claims.cid) as "Unique Clients" BY claims.cid claims.groups{}
| rename claims.cid AS app claims.groups{} AS groups
| table app "Unique Users" "Unique Clients" groups

Hope that helps
Hi

I need to find Unique Users (count of distinct business users) and Clients (count of distinct system client accounts). I want Unique Users and Unique Clients based on cid.id and its associated groups, for example:

app        | unique users | unique clients | groups
name.id    | 22           | 1              | app.preprod.name
address.id | 1            | 1              | app.preprod.address,app.preprod.zipcode

Unique users:

index= AND source="*"
| stats dc(claims.sub) as "Unique Users" ``` dc(claims.sub) as "Unique Users" count(claims.sub) as "Total" ```
``` | addcoltotals labelfield="Grand Total" ```

Sample event:

{"name":"","hostname":"1","pid":8,"level":,"claims":{"ver":1,"jti":"h7","iss":"https","aud":"https://p","iat":1,"exp":17,"cid":"name.id","uid":"00","scp":["update:","offline_access","read:","readall:","create:","openid","delete:","execute:","read:"],"auth_time":17,"sub":"name@gmail.com","groups":["App.PreProd.name"]},"msg":" JWT Claims -API","time":"2025","v":0}

Unique clients:

index=* AND source="*"
| stats dc(claims.cid) as "Unique Clients" ``` dc(claims.sub) as "Unique Users" count(claims.sub) as "Total" ```
``` | addcoltotals labelfield="Grand Total" ```

Sample event:

"name":"","hostname":"1","pid":8,"level":,"claims":{"ver":1,"jti":"h7","iss":"https","aud":"https://p","iat":1,"exp":17,"cid":"address.id","uid":"00","scp":["update:","offline_access","read:","readall:","create:","openid","delete:","execute:","read:"],"auth_time":17,"sub":"name@gmail.com","groups":["App.PreProd.address,app.preprod.zipcode"]},"msg":" JWT Claims -API","time":"2025","v":0}
Yes, it worked. Thanks!
Please use the code block (</> editor button) to add your code. Otherwise it could be mangled by the editor.