All Posts

I'm following the same steps, but I don't see the drilldown appearing.
I don't think that is the case; the drilldowns are not appearing at all.
Hi @livehybrid
> I had placed the inputs.conf file within an app folder, i.e. $SPLUNK_HOME/etc/apps/yourApp/local/inputs.conf, only.
> Splunk Forwarder and Splunk Server are installed on the same host
Yes, the forwarder deployment is sending its internal logs to the main instance.
Hi @khj
Typically your server will use swap if there is not enough RAM available on the system for the processes that are running. Could you let us know how much RAM the server has, and how much is typically being used? It could be that the server is under-specced for the ES role.
Please let me know how you get on, and consider adding karma to this or any other answer if it has helped.
Regards
Will
@livehybrid Thanks for your reply. I couldn't execute the Splunk query via the REST API using the Python script; the job fails with the error message "job has failed." The search is:
| makeresults | eval msg="HelloWorld"
I can execute it in the Splunk UI, where it completes with "This search has completed and has returned 1 results by scanning 0 events in 0.302 seconds".
After running free -m, we found that memory usage is low, at about 3%, but swap is 100% in use. The same thing happens shortly after restarting Splunk. Does anyone know the cause of this behaviour and how to solve it?
The server environment is as follows:
OS: CentOS 7
Splunk Enterprise 9.0.4
Hi @AL3Z,
I don't think you can install both Splunk Enterprise and the Splunk Universal Forwarder on the same VM, because they would have the same IP and hostname, and it is completely pointless. If you want to test Windows log ingestion from the local machine, you don't need the UF: you can use your Splunk instance itself to create the input (you can also do it via the GUI, but it's always better to use Splunk_TA_Windows and enable the inputs you are interested in). If instead you want to test the connection between a UF and an indexer, you have to use two different VMs and, on the UF, install Splunk_TA_Windows and enable the inputs you are interested in.
Ciao.
Giuseppe
Hi @BalajiRaju
Could you try two things to see if this gives us any further information on what might be happening here?
First, please run the same search in both the Splunk UI and via the REST API, compare the runtimes, and post the timing differences. Also, try a very basic search via the API, such as:
| makeresults | eval msg="HelloWorld"
How long does the makeresults command take? Are you using the Splunk Python SDK? If so, which version? Please feel free to post code snippets and searches to help us look into this further.
Please let me know how you get on, and consider adding karma to this or any other answer if it has helped.
Regards
Will
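To illustrate the kind of timing test meant here, below is a minimal sketch using plain HTTP against the REST API; the host, port, and credentials are placeholders, and verify=False assumes a lab instance with a self-signed certificate:

import time
import requests

BASE = "https://localhost:8089"   # placeholder management host/port
AUTH = ("admin", "changeme")      # placeholder credentials

start = time.time()

# Create the job in blocking mode so the POST returns once the search finishes.
resp = requests.post(
    f"{BASE}/services/search/jobs",
    auth=AUTH,
    verify=False,
    data={
        "search": '| makeresults | eval msg="HelloWorld"',
        "exec_mode": "blocking",
        "output_mode": "json",
    },
)
resp.raise_for_status()
sid = resp.json()["sid"]

# Fetch the results of the finished job.
results = requests.get(
    f"{BASE}/services/search/jobs/{sid}/results",
    auth=AUTH,
    verify=False,
    params={"output_mode": "json"},
)
results.raise_for_status()
print(f"elapsed: {time.time() - start:.2f}s")
print(results.json()["results"])

If even this trivial search is slow over the API, the bottleneck is more likely connection or authentication overhead than the search itself.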
Nice trick @livehybrid, but no luck. Here's the field extraction for dest_ip: you can see the preview says '1000 events', and there's a "dest_ip" at bottom left. Then >Save, >Finish, >Explore the fields I just created in Search: it has changed the time range to Last 24 hours, showing no results; when I change to All Time I get the usual result: no sign of my extracted field on the left, in the 105 more fields, or in All fields, etc.
My use case is to retrieve data from Splunk. I have written a Python script that gets the data via the Splunk REST API, but it takes too much time to process and return the response. I tried the oneshot method as well, but it did not help. Please guide me: is there any other approach to get the data from Splunk?
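One alternative sometimes worth trying (again a sketch, with placeholder host, port, and credentials) is the search/jobs/export endpoint, which streams results back as they are produced instead of creating a job and polling it:

import requests

BASE = "https://localhost:8089"   # placeholder management host/port
AUTH = ("admin", "changeme")      # placeholder credentials

# The export endpoint streams newline-delimited JSON as results become
# available, avoiding the create/poll/fetch round trips of the jobs endpoint.
with requests.post(
    f"{BASE}/services/search/jobs/export",
    auth=AUTH,
    verify=False,  # lab instance with a self-signed certificate
    data={
        "search": "search index=_internal | head 100",
        "output_mode": "json",
    },
    stream=True,
) as resp:
    resp.raise_for_status()
    for line in resp.iter_lines():
        if line:
            print(line.decode("utf-8"))

Whether this helps depends on where the time is actually going, which is why comparing UI and API runtimes for the same search (as suggested above) is a sensible first step.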
Hi @KKuser
Are you referring to the Thinkst Canary tool? If so, there are a couple of Thinkst apps for Splunk that will help you out. Check out the app for visualising the data (https://splunkbase.splunk.com/app/6196) and the add-on for bringing the data in (https://splunkbase.splunk.com/app/6195).
The priority order of onboarding will depend a lot on your team's ability to respond and on company policy around threat priorities. Do you have any internal processes which dictate what types of security event are higher priority than others?
Please let me know how you get on, and consider adding karma to this or any other answer if it has helped.
Regards
Will
Hi @cadrija
Is the stormwatch app an internally developed app? It looks like the cmath module is not included in the app; it might previously have been bundled as part of the earlier Splunk version. You can try installing the cmath library into the app's site-packages folder with pip:
/opt/splunk/bin/splunk cmd python3 -m pip install cmath -t /opt/splunk/etc/apps/stormwatch/bin/site-packages
This might temporarily resolve the issue and allow you to determine the best permanent solution; however, there is no guarantee that adding the missing library won't surface further issues.
Please let me know how you get on, and consider adding karma to this or any other answer if it has helped.
Regards
Will
It sounds like your lookup has not been replicated to the search peers. What does your architecture look like?
Try adding local=true to your lookup command; does it work then? That might help us work out what the issue is.
Please let me know how you get on, and consider adding karma to this or any other answer if it has helped.
Regards
Will
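For example, you could try something like | lookup local=true lookup.csv domain OUTPUT domain AS matched_domain (where domain and matched_domain are hypothetical placeholders for whatever fields your lookup actually contains). With local=true the lookup runs only on the search head, so if the error disappears it points at knowledge-bundle replication to the indexers rather than at the lookup itself.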
Hi @AL3Z
Just for clarity, did you put the inputs.conf within an app folder in $SPLUNK_HOME/etc/apps (e.g. $SPLUNK_HOME/etc/apps/yourApp/local/inputs.conf), rather than $SPLUNK_HOME/etc/apps/local/inputs.conf (incorrect)?
When you refer to "Splunk Forwarder and Splunk Server are installed on the same host", is this two deployments of Splunk on the same instance? If so, have you confirmed that your forwarder deployment is able to send its internal logs to the main instance?
Please review the _internal logs to confirm your forwarder is sending logs to your main Splunk instance (if applicable), and also check whether there are any errors relating to the Windows TA. A sample search is below.
Please let me know how you get on, and consider adding karma to this or any other answer if it has helped.
Regards
Will
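As a generic starting point (the <forwarder_host> value is a placeholder for your forwarder's host name), a search like index=_internal host=<forwarder_host> | stats count by sourcetype should list the forwarder's internal sourcetypes (splunkd, metrics, and so on) if the forwarding path is working; if nothing comes back, the internal logs are not reaching the main instance.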
Hi, I set up a Splunk lab on my Windows 10 laptop, where both the Splunk Forwarder and Splunk Server are installed on the same host. After installing the Splunk Add-on for Windows, I created an inputs.conf file in the local folder under etc/apps:

###### OS Logs ######
[WinEventLog://Application]
disabled = 0
index = "windows_logs"
start_from = oldest
current_only = 0
checkpointInterval = 5
renderXml = 0

Despite this setup, I don't see any Windows logs in Splunk.
I have a lookup that has domain names. I am already using this lookup in a search and it is working fine; now I am trying to construct a new search based on the same lookup and I get the error below.

[indexer1,indexer2,indexer3,indexer4] The lookup table 'lookup.csv' does not exist or is not available.

The lookup is configured to be available to all apps and roles. Both searches are running on ES. One has the error and the other doesn't; why?
Thanks for your reply.
I tried this, but it still expanded a file name. (I did see the problem with expansion, so I used a symbol on my keyboard.)

test () {
  set -f
  if [[ "$1" == "-q" ]]; then
    opt="etc/system/default"
    file="$2"
    stansa="$3"
    search="$4"
  else
    opt="x**x#x"
    file="$1"
    stansa="$2"
    search="$3"
  fi
  echo "opt=$opt"
  echo "file=$file"
  echo "stansa=$stansa"
  echo "search=$search"
  set +f
}

One way I see to solve it is to add another option, like -a (all Splunk files): btools -a <stansa> <search>
I added all the code.
Hi @nithys,
when you use JSON field names (which contain dots), quote them or rename them:

index= AND source="*"
| rename claims.sub AS claims_sub
| stats dc(claims_sub) as "Unique Users" ``` dc(claims.sub) as "Unique Users" count(claims_sub) as "Total" ```
``` | addcoltotals labelfield="Grand Total" ```

or

index= AND source="*"
| stats dc('claims.sub') as "Unique Users" ``` dc(claims.sub) as "Unique Users" count(claims.sub) as "Total" ```
``` | addcoltotals labelfield="Grand Total" ```

Ciao.
Giuseppe
Hi All
I get this message even though the indexes do exist. It is not permanent; it happens at 01:00 in the morning on some days.

Search peer idx-03 has the following message: Received event for unconfigured/disabled/deleted index=mtrc_os_<XXX> with source="source::Storage Nix Metrics" host="host::splunk-sh-02" sourcetype="sourcetype::mcollect_stash". Dropping them as lastChanceIndex setting in indexes.conf is not configured. So far received events from 18 missing index(es). (3/4/2025, 1:00:34 AM)