All Posts



Nice trick @livehybrid, but no luck. Here's the field extraction for dest_ip. You can see the preview says '1000 events', and there's a "dest_ip" at the bottom left. Then >Save, >Finish, >Explore the fields I just created in Search: it has changed the time range to last 24h, showing no results; then I change to All Time and get the usual result: no sign of my extracted field on the left, in the 105 more fields, or in All fields, etc.
My use case is to retrieve data from Splunk. I have written a Python script to get the data using the Splunk REST API, but it takes too much time to process and return the response. I tried the oneshot method as well, but with no luck. Please guide me: is there any alternative approach to get the data from Splunk? Kindly suggest.
Hi @KKuser. Are you referring to the Thinkst Canary tool? If so, there are a couple of Thinkst apps for Splunk that will help you out. Check out the app [for visualising the data](https://splunkbase.splunk.com/app/6196) and the Add-On [for bringing the data in](https://splunkbase.splunk.com/app/6195). The priority order of onboarding will depend a lot on your team's ability to respond and on company policy around threat priorities. Do you have any internal processes which dictate what type of security event is higher priority than others? Please let me know how you get on and consider adding karma to this or any other answer if it has helped. Regards, Will
Hi @cadrija. Is the stormwatch app an internally developed app? It looks like the cmath module is not included in the app; this might have previously been included as part of the previous Splunk version. You can try installing the cmath library into the site-packages folder using pip:

/opt/splunk/bin/splunk python3 -m pip install cmath -t /opt/splunk/etc/apps/stormwatch/bin/site-packages

This might temporarily resolve the issue and allow you to determine the best permanent solution; however, there is no guarantee that after adding the missing library there wouldn't be further issues. Please let me know how you get on and consider adding karma to this or any other answer if it has helped. Regards, Will
It sounds like your lookup has not been replicated to the search peers. What does your architecture look like? Try adding local=true to your lookup command; does it work then? That might help us work out what the issue might be. Please let me know how you get on and consider adding karma to this or any other answer if it has helped. Regards, Will
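For reference, a minimal sketch of forcing the lookup to run on the search head with `local=true` (the base search, lookup name, and field names here are hypothetical):

```
index=proxy sourcetype=web
| lookup local=true domain_lookup.csv domain OUTPUT category
```

If this version works while the plain `| lookup domain_lookup.csv ...` fails, the lookup file likely exists on the search head but is not being distributed to the indexers with the search bundle.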
Hi @AL3Z. Just for clarity, did you put the inputs.conf within an app folder in $SPLUNK_HOME/etc/apps (e.g. $SPLUNK_HOME/etc/apps/yourApp/local/inputs.conf), rather than $SPLUNK_HOME/etc/apps/local/inputs.conf (incorrect)? When you refer to "Splunk Forwarder and Splunk Server are installed on the same host", is this two deployments of Splunk on the same instance? If so, have you confirmed that your forwarder deployment is able to send its internal logs to the main instance? Please review the _internal logs to confirm your forwarder is sending logs to your main Splunk instance (if applicable), and also check whether there are any errors relating to the Windows TA. Please let me know how you get on and consider adding karma to this or any other answer if it has helped. Regards, Will
Hi, I set up a Splunk lab on my Windows 10 laptop, where both the Splunk Forwarder and Splunk Server are installed on the same host. After installing the Splunk Add-on for Windows, I created an inputs.conf file in the local folder under etc/apps.

###### OS Logs ######
[WinEventLog://Application]
disabled = 0
index = "windows_logs"
start_from = oldest
current_only = 0
checkpointInterval = 5
renderXml = 0

Despite this setup, I don't see any Windows logs in Splunk.
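One detail worth checking in the stanza above: quoting a value in a .conf file is, as far as I know, taken literally, so `index = "windows_logs"` would send events to an index whose name includes the quote characters, which does not exist. A sketch of the stanza without quotes (this assumes an index named windows_logs has actually been created on the indexer):

```
###### OS Logs ######
[WinEventLog://Application]
disabled = 0
index = windows_logs
start_from = oldest
current_only = 0
checkpointInterval = 5
renderXml = 0
```

If the index itself was never created, the events will be dropped regardless, so it is worth confirming both the spelling and the index's existence.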
I have a lookup that has domain names. I am already using this lookup in a search and it's working fine. Now I am trying to construct a new search based on the same lookup and I get the error below.

[indexer1,indexer2,indexer3,indexer4] The lookup table 'lookup.csv' does not exist or is not available.

The lookup is configured to run for all apps and roles. Both searches are running on ES. One has the error and the other doesn't. Why?
Thanks for your reply.

I tried this, but it still expanded a file name. (I did see the problem with expansion, so I used a symbol on my keyboard.)

test () {
    set -f
    if [[ "$1" == "-q" ]]; then
        opt="etc/system/default"
        file="$2"
        stansa="$3"
        search="$4"
    else
        opt="x**x#x"
        file="$1"
        stansa="$2"
        search="$3"
    fi
    echo "opt=$opt"
    echo "file=$file"
    echo "stansa=$stansa"
    echo "search=$search"
    set +f
}

One way I see to solve it is to add another option, like -a (all splunk files): btools -a <stansa> <search>
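One thing worth noting about why the file name still expanded: the calling shell performs glob expansion on the arguments before the function body ever runs, so `set -f` inside the function comes too late. The argument has to be quoted at the call site (or `set -f` enabled in the caller). A small sketch, with made-up filenames:

```shell
# Glob expansion happens in the caller, before the function runs,
# so `set -f` inside the function cannot prevent it.
cd "$(mktemp -d)"
touch match1 match2

demo () {
    set -f                   # too late: "$@" was already expanded by the caller
    printf '%s\n' "$@"
    set +f
}

demo match*                  # expanded by the calling shell: match1, match2
demo 'match*'                # quoted at the call site: the literal match*
```

This is why a wrapper like btool's would need either quoting from the user or a dedicated "all files" flag like the -a idea above.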
I added all the code.
Hi @nithys,
when you use JSON fields, wrap them in single quotes or rename them:

index= AND source="*"
| rename claims.sub AS claims_sub
| stats dc(claims_sub) as "Unique Users" ``` dc(claims.sub) as "Unique Users" count(claims_sub) as "Total" ```
``` | addcoltotals labelfield="Grand Total" ```

or

index= AND source="*"
| stats dc('claims.sub') as "Unique Users" ``` dc(claims.sub) as "Unique Users" count(claims.sub) as "Total" ```
``` | addcoltotals labelfield="Grand Total" ```

Ciao.
Giuseppe
Hi All,
I get this message but the index does exist. It's not permanent; it happens at 01:00 in the morning on some days.

Search peer idx-03 has the following message: Received event for unconfigured/disabled/deleted index=mtrc_os_<XXX> with source="source::Storage Nix Metrics" host="host::splunk-sh-02" sourcetype="sourcetype::mcollect_stash". Dropping them as lastChanceIndex setting in indexes.conf is not configured. So far received events from 18 missing index(es). 3/4/2025, 1:00:34 AM
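The message itself points at a possible safety net: the `lastChanceIndex` setting in indexes.conf routes events destined for unconfigured indexes into a catch-all index instead of dropping them, which also makes it easier to see which source is writing to the missing index name. A sketch, where the catch-all index name and paths are hypothetical:

```
# indexes.conf on the indexers (sketch; index name is made up)
[default]
lastChanceIndex = last_chance

[last_chance]
homePath   = $SPLUNK_DB/last_chance/db
coldPath   = $SPLUNK_DB/last_chance/colddb
thawedPath = $SPLUNK_DB/last_chance/thaweddb
```

Since the events arrive at 01:00, it is also worth checking for a scheduled mcollect/summary search on splunk-sh-02 that writes to an index name (mtrc_os_<XXX>) not configured on the peers.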
Hi @cadrija,
are you using some Python script in your app? If yes, review your script, because Python was definitely upgraded to 3.7.
If not, there is another issue, and the best solution is to open a case with Splunk Support.
Ciao.
Giuseppe
I'm also wondering whether there is an update to Dashboard Studio to facilitate this.
We upgraded our Splunk Enterprise from 9.2.2 to 9.3.1. After the upgrade, one of the apps is not working, as the related saved search is failing to execute. The saved search is throwing the error below.

command="execute", Exception: Traceback (most recent call last):
  File "/opt/splunk/etc/apps/stormwatch/bin/execute.py", line 16, in <module>
    from utils.print import print, append_print
  File "/opt/splunk/etc/apps/stormwatch/bin/utils/print.py", line 3, in <module>
    import pandas as pd
  File "/opt/splunk/etc/apps/stormwatch/bin/site-packages/pandas/__init__.py", line 138, in <module>
    from pandas import testing  # noqa:PDF015
  File "/opt/splunk/etc/apps/stormwatch/bin/site-packages/pandas/testing.py", line 6, in <module>
    from pandas._testing import (
  File "/opt/splunk/etc/apps/stormwatch/bin/site-packages/pandas/_testing/__init__.py", line 69, in <module>
    from pandas._testing.asserters import (
  File "/opt/splunk/etc/apps/stormwatch/bin/site-packages/pandas/_testing/asserters.py", line 17, in <module>
    import pandas._libs.testing as _testing
  File "pandas/_libs/testing.pyx", line 1, in init pandas._libs.testing
ModuleNotFoundError: No module named 'cmath'
Hello, I decided to let go of the JSON file. Instead, I receive a simple txt file now, which works better. Thank you for your help. Harry
Thanks all for the insightful discussion. Upon further research, I realized I am not supposed to let my indexers see the other indexers' files, so that's one more reason why this idea won't work out. Cheers
I want to integrate SentinelOne Singularity Enterprise data into my security workflows. What critical data (e.g., process/network events, threat detections, behavioral analytics) should I prioritize ingesting? What’s the best method (API, syslog, SIEM connectors) to pull this data, and how does it add value (e.g., improving threat hunting, automating response, or enriching SIEM/SOAR)?
I restarted the service and it was still showing "Status (Old)". I left it for a day or so, and it fixed itself and showed the expected "Status".
Thanks for your reply @marnall. I was looking at Health > Status and Health > Logs, and both were showing "Status (Old)".