All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi all, I am working on an application that needs to export records to a CSV file using a Splunk job. I checked the SID for that job and ran the loadjob command, and it shows 70,000 records, but when I export I get only 50,000 records. I checked savedsearches.conf for that macro under /opt/splunk/etc/system/default and found dispatch.max_count = 500000, so please advise. Thank you. Frontend: React JS. Backend: Splunk. Splunk version: 8.2.0
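A hedged guess at a possible cause: CSV exports are capped separately from dispatch.max_count, and 50,000 matches the shipped default of maxresultrows in limits.conf. A sketch of the setting that may need raising (the stanza and default shown are from the standard limits.conf spec; verify against your version's documentation before changing it):

```
# limits.conf — place in a local/app context, never edit system/default
[searchresults]
# Shipped default is 50000, which matches the observed export cutoff
maxresultrows = 70000
```

After a restart, re-run the export; note the effective cap can also depend on how the export is requested (Splunk Web export vs. the REST results endpoint with count=0).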
Dear Team, Can you tell us about some possible custom use cases for Imperva WAF?   Regards, Anshul Sharma  
Hello, if you look at the Search Manual, one of the listed restrictions is the inputlookup command. [Search Manual] Restrictions for standard mode federated search: "Standard mode federated search does not support the following: Generating commands other than search, from, loadjob, mstats, or tstats. For example, federated searches cannot include the datamodel or inputlookup commands. You can find a list of generating commands in Command types, in the Search Reference." Is this inputlookup restriction a constraint on the local Splunk deployment, or is it a restriction on the remote side? Thank you
Hello Splunkers, my _internaldb and _introspection indexes are getting bigger, and I am wondering if I can delete some events in order to reduce their size. How would I do that, and is it safe to do? Thanks for your time! GaetanVP
News!? It seems we need an updated version of this app that is compatible with Splunk Cloud, as it has lost its Splunk Cloud compatibility (it is no longer vetted). Is anyone aware of any updates here? Thank you.
Hello all, I am seeing the below behavior in my Splunk deployment. Checking the Splunk internal logs with index="_internal" component=HotBucketRoller caller=clusterslavecontrol_rollhotbucket idx=_internal, hot buckets are being rolled to warm by the HotBucketRoller component via clusterslavecontrol_rollhotbucket: 07-23-2023 12:39:23.522 -0400 INFO HotBucketRoller [35263 indexerPipe] - finished moving hot to warm bid=_internal~18979~0754D341-849D-4A16-9B2B-322415DA7E29 idx=_internal from=hot_v1_18979 to=db_1690130735_1690130086_18979_0754D341-849D-4A16-9B2B-322415DA7E29 size=3182592 caller=clusterslavecontrol_rollhotbucket isinit=false selective=false The buckets rolled by this method are too small in size.
I have a HEC token sending various logs from AWS CloudWatch. The HEC token is set to allow two indexes, paloalto and aws, and all the data is going to the default index, aws. I want all the data from source=syslogng to be sent to index=paloalto instead of index=aws. Should I change inputs.conf or props.conf for the splunk_httpinput app in the deployment apps?
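A sketch of one common approach: index routing by source is done with props.conf plus transforms.conf (not inputs.conf) on the first Splunk instance that parses the data. One caveat to verify first, since it is an assumption about your setup: index-time TRANSFORMS apply in the parsing pipeline, and events sent to the HEC /services/collector/event endpoint may bypass it; the /raw endpoint is parsed normally.

```
# props.conf
[source::syslogng]
TRANSFORMS-route_to_paloalto = route_syslogng_paloalto

# transforms.conf
[route_syslogng_paloalto]
SOURCE_KEY = MetaData:Source
REGEX = syslogng
DEST_KEY = _MetaData:Index
FORMAT = paloalto
```

The paloalto index must already exist on the indexers, and the HEC token must be permitted to write to it (it already is, per the token's allowed-indexes list above).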
Hi team, could you please guide me on how to push/import a dashboard using the client secret and a temporary access token? Thank you
Hi folks, I have a very simple alert set up that triggers if the number of results is greater than 0. I'd like to throttle the alert from triggering again for a specified time period, but the throttle seems to be ignored.
Search: index=sample host=example_host
Schedule: Cron - */5 * * * *
Trigger: Number of Results > 0, Trigger Once
Throttle: Suppress triggering for 10 minutes
Action: Send email
The alert triggers with no problem; however, rather than throttling for 10 minutes, the alert gets triggered again after 5 minutes if the condition is met. It's a simple search where the trigger condition is there being any results at all. What am I doing wrong here? Any help would be greatly appreciated!
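One quick sanity check, offered as a guess: throttle settings are stored in savedsearches.conf, and it is worth confirming the suppression actually made it into the saved config rather than only appearing set in the UI. A sketch of what the stanza should contain for a 10-minute suppression (the stanza name is a placeholder for your alert's name):

```
# savedsearches.conf
[My Sample Alert]
alert.suppress = 1
alert.suppress.period = 10m
```

With "Trigger Once" there should be no alert.suppress.fields entry; if the stanza instead shows alert.suppress = 0, the throttle was never saved and re-saving the alert (or editing the conf directly) should fix it.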
1) I want to list the top 10 usernames that got the most 403 status codes. For example, a username named sigma got 2000 of this code; I want this username to be at the top of the list. 2) I want to list the top 10 usernames that got the most 403 status codes on certain objects. For example, the username sigma got 2000 403 status codes on the secret object. Fields: username, status_code, object_ref
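A sketch of both searches, assuming the fields named above and a placeholder index called web. For (1), top handles the counting and ranking in one step:

```
index=web status_code=403
| top limit=10 username
```

For (2), counting by the username/object pair and sorting descending gives the top combinations:

```
index=web status_code=403
| stats count by username, object_ref
| sort - count
| head 10
```

Swap head 10 for top's showperc=false style output as preferred; both approaches rank by count.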
Hello, I have a distributed Splunk environment and ThreatQ TIP installed in the network. I'm trying to integrate ThreatQ TIP (threat intelligence platform) with my Splunk distributed environment, but I can't find any good resources or walkthroughs. Note: I have already installed the ThreatQ app and add-on on the Splunk search head.
Hi there, I have an <html> <style> block defined for a table in my dashboard; it defines a custom column width. It works when I only have a limited number of columns in my table (see "Table 1" in the example dashboard code below). However, when I add additional columns it stops working (see "Table 2"). I have searched high and low for a solution to this with no luck. Anyone got any ideas? Here is an example dashboard that demonstrates the issue; the cveID column is the focus.

<form version="1.1" theme="dark" hideFilters="true">
  <label>custom table column width not working</label>
  <row>
    <panel depends="$alwaysHideCSSPanel$">
      <html>
        <style>
          #customWidth1 th[data-sort-key=cveID] { width: 50% !important; }
        </style>
      </html>
    </panel>
    <panel>
      <table id="customWidth1">
        <title>Table 1</title>
        <search>
          <query> | makeresults | table _time Hostname instance_id Remediation Description Severity ExploitStatus DaysOpen OldestCVEDays cveID</query>
        </search>
      </table>
    </panel>
  </row>
  <row>
    <panel depends="$alwaysHideCSSPanel$">
      <html>
        <style>
          #customWidth2 th[data-sort-key=cveID] { width: 50% !important; }
        </style>
      </html>
    </panel>
    <panel>
      <table id="customWidth2">
        <title>Table 2</title>
        <search>
          <query> | makeresults | table _time Hostname instance_id Remediation Description Severity ExploitStatus DaysOpen OldestCVEDays OpenVulnCount DetectionLogic VendorRef VendorAdvisory Link cveID OldestCVEDate Application Version OldestCVEDate Application Version FirstSeen InstanceName amiName amiID awsOwner awsApplication Account_id AccountName DNSDomain OS IPAddress Team OU Site Environment Location OSGroup OSType</query>
        </search>
      </table>
    </panel>
  </row>
</form>
I can't see the event logs when I search on the Splunk Enterprise server, but I have already checked that the Splunk server is running, and I created index = linux_universal_forwarder and host = linux_uf_1 in inputs.conf on the forwarder Linux machine. I also created the new receiving port 8889 and the new index linux_universal_forwarder on the Splunk server. Why can't I see the logs? I am able to ping between the indexer and the forwarder. Please help me fix this issue; how do I troubleshoot it step by step? I'm a beginner learning Splunk. Thank you.
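A few standard checks from the forwarder side, sketched under the assumption of a default /opt/splunkforwarder install path (adjust to your installation):

```
# Is the forwarder configured to send to the indexer on the expected port?
/opt/splunkforwarder/bin/splunk list forward-server

# What does the merged inputs configuration actually resolve to?
/opt/splunkforwarder/bin/splunk btool inputs list --debug

# Any connection or blocking errors in the forwarder's own log?
grep -iE "connect|blocked|err" /opt/splunkforwarder/var/log/splunk/splunkd.log | tail -20
```

On the indexer side, searching index=_internal host=linux_uf_1 over the last 15 minutes confirms whether the forwarder is connecting at all, and remember a custom index must exist on the indexer (and your role must be allowed to search it) before its events appear.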
Hi, I am trying to set up an alert that notifies by email when the count for the last 3 hours is greater than the rolling average of the last 7 days, using the below query. The query is working fine, but the alert is not working / not getting triggered. Alert config: the trigger condition on the alert screen is "Trigger alert when, Custom option, search alert==true".
Query:
sourcetype="cloudwatch" index=***** earliest=-6d@d latest=@d
| bucket _time span=1d
| stats count by _time
| stats avg(count) as SevenDayAverage
| appendcols [search sourcetype="cloudwatch" index=***** | stats count as IndividualCount]
| eval alert = if(IndividualCount > SevenDayAverage, "true", "false")
Sample result: SevenDayAverage=5, IndividualCount=1139, alert=true
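One likely culprit, offered as a guess since the eval produces a string field: the custom trigger condition compares alert against an unquoted bareword, while the field holds the literal string "true". Quoting the value in the custom condition is the usual fix:

```
search alert="true"
```

An alternative that avoids the string comparison entirely: filter inside the search so rows only come back when the alert should fire, and use the simpler trigger "Number of results > 0".

```
... | eval alert=if(IndividualCount > SevenDayAverage, "true", "false")
| where alert="true"
```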
Hi, I am new to Splunk; could you please help me with the below SPL? I am trying to use the stats and table commands. We have 4 entries for the same incident, and I need to pick the earliest time.
index="monitoring" sourcetype="tool" incident_id=INC*
| stats earliest(_time) as early
| table "mc_host" "incident_id" "early"
| convert ctime(early)
I am getting an error when I execute it.
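A hedged rewrite, on the assumption the goal is the earliest time per host and incident: stats discards any field that is not in an aggregation or the by clause, which is why mc_host and incident_id are gone by the time table runs. Grouping by them keeps one row per incident:

```
index="monitoring" sourcetype="tool" incident_id=INC*
| stats earliest(_time) as early by mc_host, incident_id
| convert ctime(early)
| table mc_host, incident_id, early
```

Field names in table also do not need quotes unless they contain spaces.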
Hello, when trying to install this from my Splunk Enterprise instance (on my Windows 10 client) I'm getting: Unable to initialize modular input "oci_logging" defined in the app "TA-oci-logging-addon": Introspecting scheme=oci_logging: script running failed (exited with code 1).
Hello Splunkers!! I am getting the below error while executing the backfill summary index command on my Splunk machine. Can anyone suggest what the issue might be and where I need to correct it? The command I executed:
splunk cmd python fill_summary_index.py -app commissioning_reports -name "scada_alarms_start_base" -et -7d -lt now -index si_error -showprogress true -j 8 -dedup true -owner admin -auth admin:!Splunk001
Screenshot for the error:
Where can I find all the Splunk queries, and how do I use them?
Hello everyone, we have Splunk_TA_nix version 8.7.0, and some of the scripts do not work properly, such as lsof.sh, rlog.sh, etc. Also, sometimes after a while the scripts stop working properly and their logging is interrupted. Can anyone help me?
How do I perform a lookup into a CSV file from an index without combining data into one row (and without mvexpand)?

index=vulnerability_index
| table ip, vulnerability, score

ip | vulnerability | score
192.168.1.1 | SQL Injection | 9
192.168.1.1 | OpenSSL | 7
192.168.1.2 | Cross Site-Scripting | 8
192.168.1.2 | DNS | 5

CSV file: company.csv

ip_address | company | location
192.168.1.1 | Comp-A | Loc-A
192.168.1.2 | Comp-B | Loc-B
192.168.1.5 | Comp-E | Loc-E

Lookup in CSV from index:

index=vulnerability_index
| lookup company.csv ip_address as ip OUTPUTNEW ip_address, company, location

The vulnerability and score get merged into one row:

ip_address | company | location | vulnerability | score
192.168.1.1 | Comp-A | Loc-A | SQL Injection, OpenSSL | 9, 7
192.168.1.2 | Comp-B | Loc-B | Cross Site-Scripting, DNS | 8, 5

How do I match the ip_address in a separate row without using mvexpand? mvexpand has a memory limitation and is very slow; it will not work on a large data set and time frame. My expected result is:

ip_address | company | location | vulnerability | score
192.168.1.1 | Comp-A | Loc-A | SQL Injection | 9
192.168.1.1 | Comp-A | Loc-A | OpenSSL | 7
192.168.1.2 | Comp-B | Loc-B | Cross Site-Scripting | 8
192.168.1.2 | Comp-B | Loc-B | DNS | 5

Thank you for your help
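A hedged observation: lookup itself runs per event and does not merge rows, so multivalue output like the above usually means the events were aggregated (e.g. with stats values()) before the lookup ran. Running the lookup directly against the raw events and only then tabulating keeps one row per vulnerability, with no mvexpand needed:

```
index=vulnerability_index
| lookup company.csv ip_address AS ip OUTPUT company, location
| table ip, company, location, vulnerability, score
```

Using OUTPUT company, location (rather than OUTPUTNEW with ip_address in the output list) simply copies the matched fields onto each existing event row, preserving the one-row-per-vulnerability shape in the expected result.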