All Topics


During my health checks I usually get a list of missing forwarders. I have found that these forwarders were on a few decommissioned servers that are no longer around, but Splunk keeps reporting them as missing. How do I tell it to ignore certain forwarders?
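One possible approach, sketched below: the Monitoring Console tracks forwarders in a lookup (commonly `dmc_forwarder_assets`), and removing the decommissioned hosts from that lookup should stop them from appearing as missing. The lookup name and hostnames here are assumptions for illustration — try rebuilding the forwarder asset table first (Settings > Monitoring Console > Forwarders: Instance > Setup > Rebuild forwarder assets).

```
| inputlookup dmc_forwarder_assets
| search NOT hostname IN ("decommissioned-host-1", "decommissioned-host-2")
| outputlookup dmc_forwarder_assets
```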
Hello All, My Goal: I need to create a dashboard with multiple panels.

Panel 1 would be the total number of indexes reporting to Splunk. Command: | tstats count where index=* by index | where count<=0 This is posting the results.

Panel 2 would be the total number of indexes that don't have data. Command: | tstats count where index=* by index | where count>=1 | stats count

Need help on this: Panel 3 would be the difference between the old index names (last 3 months) and the total after any new indexes created in the last 24 hours. This should give me any new index created in the last 24 hours, which I need to report to my security group. Since I am searching over 3 months, I would like to use a fast command such as tstats.

Appreciate your help. @manjunathmeti @to4kawa @woodcock @richgalloway
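For Panel 3, a hedged sketch: take each index's earliest event time over the last 90 days and keep the indexes first seen within the last 24 hours. This assumes the first event time is a reasonable proxy for when the index was created:

```
| tstats min(_time) as first_seen where index=* earliest=-90d by index
| where first_seen >= relative_time(now(), "-24h")
| convert ctime(first_seen)
```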
Hi, I have two servers running CentOS with the Universal Forwarder installed, and I've enabled the following: Using the htop command on the servers, the CPU utilization is almost 100%, but in Splunk it shows 20-30% at most. Below is the query I used to find the CPU utilization for each available host: host=* source="vmstat" | bucket span=300s _time | stats max(memUsedPct) as memUsedPct by _time host | timechart span=300s max(memUsedPct) as "Used Memory Percentage" by host limit=0 Please, is there a way to reconcile this with the htop results?
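Note that the query above charts memUsedPct, which is memory, not CPU. A hedged sketch for CPU, assuming the *nix TA's cpu sourcetype with a pctIdle field (field and sourcetype names may differ in your deployment); also note htop shows near-instantaneous, per-core load, so a 5-minute average will naturally read lower:

```
host=* sourcetype="cpu"
| eval pctCPU = 100 - pctIdle
| timechart span=300s avg(pctCPU) as "CPU %" by host limit=0
```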
Hi Everyone, we're migrating a customer's Splunk Enterprise deployment to Splunk Cloud, and many users have saved private reports and alerts in some apps, so we also need to migrate the content of the $SPLUNK_HOME/etc/users directory to Splunk Cloud. Any suggestions? Thanks a lot for the support!!! Marco
Hi Suppose I have this log source here: index=main sourcetype=pan host=pa3250 It generates a massive amount of logs daily. I know sometime within the last 20 days it stopped sending traffic. What's the best search query to help me identify the day that logs stopped coming in?  
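A minimal sketch: bucket event counts by day with tstats (fast even on large volumes) and look for the last non-zero day:

```
| tstats count where index=main sourcetype=pan host=pa3250 earliest=-30d by _time span=1d
```

or, to get the last event time directly:

```
| tstats latest(_time) as last_seen where index=main sourcetype=pan host=pa3250
| convert ctime(last_seen)
```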
I've tried a lot but haven't gotten anywhere, so I hope someone can help me here. I want to add a Python-script alert action to a correlation search. In this Python script I use the pycurl module to send some data to another API, but the Python installed with Splunk doesn't have the pycurl module and I have found no way to install it. When I call the script in the shell with the locally installed Python, it works fine and I can reach the API. Does anybody have an idea how to install pycurl in Splunk, or which library already exists in Splunk for a cURL-style HTTP request in a Python script? Thanks for any answer.
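One workaround that avoids installing pycurl entirely: Splunk's bundled Python 3 ships the standard-library urllib.request, which covers most cURL-style POSTs. A minimal sketch — the URL, payload, and token below are placeholders, not anything from your environment:

```python
import json
import urllib.request

def build_post_request(url, payload, token=None):
    """Build a JSON POST using only the standard library, so it runs
    under Splunk's bundled Python without any extra modules."""
    data = json.dumps(payload).encode("utf-8")
    headers = {"Content-Type": "application/json"}
    if token:
        headers["Authorization"] = "Bearer " + token
    return urllib.request.Request(url, data=data, headers=headers, method="POST")

def send(req, timeout=10):
    # Raises urllib.error.HTTPError / URLError on failure.
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return resp.status, resp.read().decode("utf-8")

# Placeholder endpoint -- replace with your API:
req = build_post_request("https://api.example.com/ingest", {"event": "test"})
```

If you truly need pycurl-specific features, another option is bundling a pure-Python HTTP client inside your app's bin/ directory, but for a plain POST the sketch above is usually enough.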
I want to get the top 10 destination IPs for each of the top 2 source IPs, where the count for a source IP is more than 1000. Right now I'm using a subsearch to get the top 2 source IPs and then filtering the top 10 destination IPs in the base search, but I want to do it without a subsearch. Can someone please help me get this done? I'm looking for a command something like: top limit=10 destIP by top limit=2 sourceIP where count > 1000
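A subsearch-free sketch using eventstats/streamstats (sourceIP and destIP are assumed field names; adjust to your data). Sorting by the per-source total first lets a running distinct count rank the sources:

```
| stats count by sourceIP destIP
| eventstats sum(count) as src_total by sourceIP
| where src_total > 1000
| sort 0 - src_total - count
| streamstats dc(sourceIP) as src_rank
| where src_rank <= 2
| streamstats count as dest_rank by sourceIP
| where dest_rank <= 10
```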
Does anyone know if a HEC endpoint can be configured directly on the IDM so SC4S traffic can be sent to it? SC4S is tailor-made for Splunk Cloud, but I have not read anything that says this in the documentation.
I'm posting this here as I wasn't seeing many others reporting this issue or its resolution, and I hope it saves someone else a lot of headache in solving it. We set up AWS Inspector for our four AWS accounts to check for scan results once a day, and we were seeing odd Rate Limit Exceeded errors; only a limited number of results were getting pulled in.

2021-03-17 14:17:31,524 level=ERROR pid=32092 tid=Thread-6 logger=splunk_ta_aws.modinputs.inspector.aws_inspector_data_loader pos=aws_inspector_data_loader.py:__call__:307 | | message="Failed to collect inspector findings for region=us-east-1, datainput=Corp - Inspector, error=Traceback (most recent call last):
  File "/data/splunk/etc/apps/Splunk_TA_aws/bin/splunk_ta_aws/modinputs/inspector/aws_inspector_data_loader.py", line 302, in __call__
    self._do_indexing()
  File "/data/splunk/etc/apps/Splunk_TA_aws/bin/splunk_ta_aws/modinputs/inspector/aws_inspector_data_loader.py", line 284, in _do_indexing
    AWSInspectorFindingsDataLoader(self._config, self._cli, account_id).run()
  File "/data/splunk/etc/apps/Splunk_TA_aws/bin/splunk_ta_aws/modinputs/inspector/aws_inspector_data_loader.py", line 176, in run
    self._schedule()
  File "/data/splunk/etc/apps/Splunk_TA_aws/bin/splunk_ta_aws/modinputs/inspector/aws_inspector_data_loader.py", line 191, in _schedule
    arns = self._list_findings_by_time_window(begin, end)
  File "/data/splunk/etc/apps/Splunk_TA_aws/bin/splunk_ta_aws/modinputs/inspector/aws_inspector_data_loader.py", line 249, in _list_findings_by_time_window
    response = self._cli.list_findings(**params)
  File "/data/splunk/etc/apps/Splunk_TA_aws/bin/3rdparty/python3/botocore/client.py", line 276, in _api_call
    return self._make_api_call(operation_name, kwargs)
  File "/data/splunk/etc/apps/Splunk_TA_aws/bin/3rdparty/python3/botocore/client.py", line 586, in _make_api_call
    raise error_class(parsed_response, operation_name)
botocore.exceptions.ClientError: An error occurred (ThrottlingException) when calling the ListFindings operation (reached max retries: 4): Rate exceeded "

At once a day, it seems like we should not be hitting this ThrottlingException.
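The traceback shows the TA's retry budget (4) being exhausted, so the practical fix is usually staggering the four account inputs to different start times or requesting an AWS API rate-limit increase. As a general illustration of the retry pattern the TA is attempting (this is a generic sketch, not the TA's actual code):

```python
import random
import time

def call_with_backoff(fn, max_retries=4, base_delay=1.0,
                      is_throttle=lambda e: "ThrottlingException" in str(e)):
    """Retry a rate-limited call with exponential backoff plus jitter.
    Generic sketch -- not the AWS TA's actual retry logic."""
    for attempt in range(max_retries + 1):
        try:
            return fn()
        except Exception as e:
            # Give up on the last attempt or on non-throttling errors.
            if attempt == max_retries or not is_throttle(e):
                raise
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.1))
```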
Hi, I have a working Python script that, when run as the splunk user (whoami=splunk) on the same box, works just fine and as expected. When the script is enabled in Scripted Inputs with an "every 5 min" schedule, one line in my code does not work and the Python processor logs "Permission Denied" in index=_internal. This is the line that doesn't work (Line 3):

Line 1. temp_filename = sess + '.tmp'
Line 2. wget_result = os.system('wget -O ./' + temp_filename + ' --append-output=' + LOGFILE_DIR_WGET + ' --user ' + svcacct_un + ' --password ' + svcacct_pw + ' --no-check-certificate ' + _url)
Line 3. checksum = hashlib.md5(open('./' + temp_filename, "rb").read()).hexdigest();

The error looks like this:

03-29-2021 15:55:17.507 +0100 ERROR ExecProcessor - message from "python /opt/splunk/etc/apps/***************" ./b97fcd39-1201-4638-8d41-8ae32168cd70.tmp: Permission denied

Anyone?
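A likely cause, sketched below: scripted inputs are launched by splunkd, so the working directory is not your script's directory, and './' may point somewhere the splunk user cannot write or read. Building an absolute path avoids depending on the cwd (tempfile.gettempdir() here is an assumption — any directory writable by the splunk user works):

```python
import hashlib
import os
import tempfile

# Scripted inputs run from splunkd's working directory, not the script's
# own directory, so './' is unreliable. Use an absolute path in a
# known-writable location instead.
def download_path(sess):
    return os.path.join(tempfile.gettempdir(), sess + ".tmp")

def md5_of(path):
    with open(path, "rb") as f:
        return hashlib.md5(f.read()).hexdigest()

temp_filename = download_path("b97fcd39")  # session id is a placeholder
# wget (Line 2) would write to temp_filename here, then:
# checksum = md5_of(temp_filename)
```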
I have a JSON Input Request like below {"liabilityDetailsVOs":[{"processMasterId":null,"transactionMasterId":null,"transactionMasterType":null,"checkDate":"2020-12-31T00:00:00.000-0800","payrollCycleId":null,"payrollId":51113251,"cashCareXfer":null,"generalLedgerCoa99":null,"priorPeriodAdjustmentByCheck":null,"midYear":false,"midQuarter":false,"processDetailId":null,"transactionDetailId":null,"agencyId":1012,"companyAgencyId":51233519,"taxAmount":-72999.16,"amount2":null,"gross":null,"amount3":null,"traceId":null,"userField1":"V<%libh_id>","userField2":null,"userField3":null,"userField4":null,"userField5":null,"userField6":null,"userField7":null,"userField8":null,"tranCode":"2012","memo":null,"newAdjustment":false,"amendmentId":null,"depositCompanyId":51113251,"depositAgencyId":996,"depositCompanyAgencyId":51113260,"liabilityType":"7","liabilityPeriodEndDate":"2020-12-31T00:00:00.000-0800","fedPayAgent":"","entity":null,"liabilityId":null,"liabilityStatus":"0","forcedStatus":"0","processed":null,"depositAdjustedId":null,"payrollRunId":null,"rate":null,"commentCode":"35","previousLiabilityId":null,"totalEmployee":null,"femaleEmployeeCount":null,"maleEmployeeCount":null,"creditId":null,"paidBy":null,"sourceSystemIdentifier":null,"glAccount":null,"source":"VP","includeAmtInPlateau":true,"sundryFlag":"Y","varianceType":"3","fractionType":"NN","varianceDueDate":"2021-02-01T00:00:00.000-0800","formId":994,"id":null,"depositId":null,"holdFlag":null,"holdReasonCode":null,"createDate":"2021-01-03T00:00:00.000-0800","updateCredit":false,"cashStatus":null,"status":null,"depositHistoryStatus":null,"liabilitySource":null,"fullyAbsorbed":false,"insertMtLibDepHistStatus":false,"cartRate":null,"depositDate":null,"depositAmount":null,"deferralDepositAmount":null,"firstDeferralPayment":null,"secondDeferralPayment":null,"forcedDepositDueDate":null,"deferredLiabilityState":null,"deferredLiability":false,"dataType":null,"credit":false,"mmcommunicationIgnore":false,"createCouponCallRequi
red":false,"creditSplit":false,"creditGroupOne":false,"depositProcessed":false,"midQuarterSameAsSUI":false,"prepaidDummy":false,"depositedDummy":false,"notSystemAdjustment":true},{"processMasterId":null,"transactionMasterId":null,"transactionMasterType":null,"checkDate":"2020-12-31T00:00:00.000-0800","payrollCycleId":null,"payrollId":51113251,"cashCareXfer":null,"generalLedgerCoa99":null,"priorPeriodAdjustmentByCheck":null,"midYear":false,"midQuarter":false,"processDetailId":null,"transactionDetailId":null,"agencyId":195162,"companyAgencyId":51966742,"taxAmount":72999.16,"amount2":null,"gross":null,"amount3":null,"traceId":null,"userField1":"V<%libh_id>","userField2":null,"userField3":null,"userField4":null,"userField5":null,"userField6":null,"userField7":null,"userField8":null,"tranCode":"2012","memo":null,"newAdjustment":false,"amendmentId":null,"depositCompanyId":51113251,"depositAgencyId":996,"depositCompanyAgencyId":51113260,"liabilityType":"7","liabilityPeriodEndDate":"2020-12-31T00:00:00.000-0800","fedPayAgent":"","entity":null,"liabilityId":null,"liabilityStatus":"0","forcedStatus":"0","processed":null,"depositAdjustedId":null,"payrollRunId":null,"rate":null,"commentCode":"39","previousLiabilityId":null,"totalEmployee":null,"femaleEmployeeCount":null,"maleEmployeeCount":null,"creditId":null,"paidBy":null,"sourceSystemIdentifier":null,"glAccount":null,"source":"VP","includeAmtInPlateau":true,"sundryFlag":"Y","varianceType":"3","fractionType":"NN","varianceDueDate":"2021-02-01T00:00:00.000-0800","formId":994,"id":null,"depositId":null,"holdFlag":null,"holdReasonCode":null,"createDate":"2021-01-03T00:00:00.000-0800","updateCredit":false,"cashStatus":null,"status":null,"depositHistoryStatus":null,"liabilitySource":null,"fullyAbsorbed":false,"insertMtLibDepHistStatus":false,"cartRate":null,"depositDate":null,"depositAmount":null,"deferralDepositAmount":null,"firstDeferralPayment":null,"secondDeferralPayment":null,"forcedDepositDueDate":null,"deferredLiabilityStat
e":null,"deferredLiability":false,"dataType":null,"credit":false,"mmcommunicationIgnore":false,"createCouponCallRequired":false,"creditSplit":false,"creditGroupOne":false,"depositProcessed":false,"midQuarterSameAsSUI":false,"prepaidDummy":false,"depositedDummy":false,"notSystemAdjustment":true}],"liabilitySource":"VP","processVO":{"ntProcess":null,"background":null,"processType":null,"companyGroupId":null,"marker":null,"ntProcessMaster":{"id":null,"ntProcess":null,"payrId":null,"tmstId":null,"periodEndDate":null,"amount1":null,"amount2":null,"amount3":null,"status":null,"creator":null,"createDate":null,"memo":null,"vschemaId":null,"midYear":null,"midQuarter":null,"description":null,"type":null,"applyToDate":null,"pfleId":null},"ntProcessDetail":{"id":null,"ntProcessMaster":null,"tdtlId":null,"amount1":null,"amount2":null,"amount3":null,"status":null,"tranCode":null,"creator":null,"createDate":null,"memo":null,"periodEndDate":null,"vschemaId":null,"outputFilename":null,"procId":null,"libhId":null,"dephId":null,"cagyId":null,"dueDate":null,"aid":null},"statusMessage":null,"feedback":{"errors":[],"warnings":[],"successes":[],"infos":[],"processId":null,"procMasterId":null,"messageIdMap":{},"userName":null,"errorMessages":[],"warningMessages":[],"infoMessages":[],"successMessages":[],"all":[]}} Year_Quarter _Blank But I am not able to parse the Year & Quarter from the checkDate field. Below is what I am trying |rename liabilityDetailsVOs{}.payrollId AS cpnyId, liabilityDetailsVOs{}.depositAgencyId AS majorAgency, liabilityDetailsVOs{}.taxAmount AS taxAmount, liabilityDetailsVOs{}.sundryFlag AS isSundry, liabilityDetailsVOs{}.checkDate as checkDate |eval chkdate=strptime(checkDate,"%Y-%m-%dT%H:%M:%S.%Q") |eval month=strftime(chkdate,"%m") |eval year_quarter=case(month<=3,"Q1",month<=6,"Q2",month<=9,"Q3",month<=12,"Q4")."-".strftime(chkdate,"%Y") |where cpnyId="51113251" |table cpnyId majorAgency taxAmount isSundry checkDate month year_quarter
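A hedged sketch for the year/quarter problem: since liabilityDetailsVOs is an array, the renamed fields are multivalue, which breaks the strptime/case chain. Expanding each array element into its own event first (via spath and mvexpand) should help; the %Q%z time format below is an assumption matched against the sample timestamps:

```
| spath path=liabilityDetailsVOs{} output=vo
| mvexpand vo
| spath input=vo
| eval chkdate=strptime(checkDate, "%Y-%m-%dT%H:%M:%S.%Q%z")
| eval month=tonumber(strftime(chkdate, "%m"))
| eval year_quarter="Q".ceiling(month/3)."-".strftime(chkdate, "%Y")
| where payrollId="51113251"
| table payrollId depositAgencyId taxAmount sundryFlag checkDate month year_quarter
```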
Hi, how do I install a Splunk forwarder on the Bamboo tool and get the Bamboo logs into Splunk? I don't have a Bamboo add-on for Splunk; I need to do it by going to Settings --> Add Data --> Forward, then checking for the forwarder that is installed and proceeding with the further steps. Can I get the steps to install the forwarder on Bamboo? On Splunk Enterprise I have enabled the receiver. @adonio can you please help? Thanks
Hi, I am trying to run some SQL queries using SQL Explorer in the DB Connect app, but it looks like the queries are not finishing. Every query I execute stays at 20% completion and does not load any results. All of the inputs configured in the DB Connect app are working as expected; the issue happens only when I try configuring new inputs or when I use the SQL Explorer tab. I have already tried running the queries with different users and browsers, and I have already restarted the Splunk service, but the issue still occurs. Has anyone ever faced an issue like this before in DB Connect? Any tips on what could be the cause of this error? Thanks in advance.
Could someone please help me with a Splunk query to configure an alert for when a forwarder, indexer, or search head has restarted? @scelikok @soutamo @saravanan90 @thambisetty @ITWhisperer @gcusello @bowesmana @to4kawa
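One common sketch: splunkd writes a startup message to _internal at every restart, so searching for it per host flags recent restarts (the exact message text can vary by version, so verify it against your _internal data first):

```
index=_internal sourcetype=splunkd "Splunkd starting"
| stats latest(_time) as last_restart by host
| where last_restart >= relative_time(now(), "-15m")
| convert ctime(last_restart)
```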
I have a UF installed on a syslog server, and now I want the data to go to the HF rather than staying on the UF's direct path. I just need the syslog data to be completely redirected from the UF to the HF. What are the config changes that I need to make?
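Assuming the UF should keep reading the syslog files but forward everything through the HF, the usual change is in the UF's outputs.conf; the host and port below are placeholders:

```
# $SPLUNK_HOME/etc/system/local/outputs.conf on the UF
[tcpout]
defaultGroup = hf_group

[tcpout:hf_group]
server = <hf-hostname>:9997
```

The HF must have receiving enabled on the same port (Settings > Forwarding and receiving > Receive data), and should itself forward on to the indexers via its own outputs.conf.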
Hi Everyone, can someone explain to me the meaning of these two commands: cluster showcount=t t=0.3 cluster showcount=t t=0.9 Thanks in advance
Hi All, I'm in this situation:

index a            index b
id  name           id  name
1   simone         1   simone
3   francesco      2   marco
4   luca

I have a scheduled search that extracts data from index a and writes it to index b. As you can see, id=1 is present in both index a and index b. My search currently duplicates the result this way:

index b
id  name
1   simone
2   marco
1   simone
3   francesco
4   luca

Is there a merge-type function like in SQL? The expected result is the following:

index b
id  name
1   simone
2   marco
3   francesco
4   luca

Can you help me? Thanks for any answer. Best Regards, Simone
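One sketch of an SQL-MERGE-like approach for the scheduled search: exclude the ids that already exist in index b before collecting. Index and field names are taken from the example; writing via collect is an assumption about how the scheduled search populates index b (and subsearches have result limits, so this suits modest id counts):

```
index=a NOT [ search index=b | dedup id | fields id ]
| collect index=b
```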
I have installed (several times) the Splunk App for Unix (*nix) version 6.0.1. I have changed the default index in the settings to use index=main by editing the related search macro. I have configured the UFs downstream to send data to the main index, and I can see all the data arriving in the index as expected. Note this is installed on a dedicated Splunk single instance running version 8.1.3 (Enterprise, on-premises). In the settings section of the app, I can see the correct index is specified (main), and clicking the various Preview button options returns valid data. See below for examples: index specification and "Preview" selections, CPU data preview, and DF data preview. Suffice it to say that all the other Preview buttons also return valid data. This would imply that the data is correctly configured and the application should be able to consume it. However, when I try to look at the app's dashboards, they all remain free of any data, as can be seen from the screen captures below. I am kinda out of ideas. Anyone got anything? Cheers Chris
Hi Team, we have recently upgraded our deployment master server from version 7.3.1 to 8.1.2. The upgrade seems to have been successful, but if I run any of the commands below I get an error. Refer below for more information.

Commands:
splunk restart
splunk start
splunk stop
splunk reload deploy-server

Errors:
[splunk@servername bin]$ splunk reload deploy-server
Error processing line 1 of /opt/splunk/lib/python3.7/site-packages/zc.lockfile-2.0-py3.7-nspkg.pth:
Fatal Python error: initsite: Failed to import the site module
Traceback (most recent call last):
  File "/opt/splunk/lib/python3.7/site.py", line 168, in addpackage
    exec(line)
  File "<string>", line 1, in <module>
  File "/opt/splunk/lib/python3.7/importlib/util.py", line 14, in <module>
    from contextlib import contextmanager
  File "/opt/splunk/lib/python3.7/contextlib.py", line 5, in <module>
    from collections import deque
  File "/opt/splunk/lib/python3.7/collections/__init__.py", line 27, in <module>
    from reprlib import recursive_repr as _recursive_repr
  File "/opt/splunk/lib/python2.7/site-packages/reprlib/__init__.py", line 7, in <module>
    raise ImportError('This package should not be accessible on Python 3. '
ImportError: This package should not be accessible on Python 3. Either you are trying to run from the python-future src folder or your installation of python-future is corrupted.

During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "/opt/splunk/lib/python3.7/site.py", line 579, in <module>
    main()
  File "/opt/splunk/lib/python3.7/site.py", line 566, in main
    known_paths = addsitepackages(known_paths)
  File "/opt/splunk/lib/python3.7/site.py", line 349, in addsitepackages
    addsitedir(sitedir, known_paths)
  File "/opt/splunk/lib/python3.7/site.py", line 207, in addsitedir
    addpackage(sitedir, name, known_paths)
  File "/opt/splunk/lib/python3.7/site.py", line 178, in addpackage
    import traceback
  File "/opt/splunk/lib/python3.7/traceback.py", line 3, in <module>
    import collections
  File "/opt/splunk/lib/python3.7/collections/__init__.py", line 27, in <module>
    from reprlib import recursive_repr as _recursive_repr
  File "/opt/splunk/lib/python2.7/site-packages/reprlib/__init__.py", line 7, in <module>
    raise ImportError('This package should not be accessible on Python 3. '
ImportError: This package should not be accessible on Python 3. Either you are trying to run from the python-future src folder or your installation of python-future is corrupted.

So kindly help check this and let me know how to resolve the issue. Our deployment server was upgraded successfully to 8.1.2, but we can't run any commands because of this issue.

However, if I run the commands with ./ as a prefix, then I can execute all of them:
./splunk restart
./splunk start
./splunk stop
./splunk reload deploy-server

So kindly help me fix this issue.
Hi Team, I have a set of 5 hosts that send data to index=xyz with sourcetype=iis. If the logs from index=xyz and sourcetype=iis stop for any of these hosts, we need to get an email notification telling us which server has stopped ingesting logs into Splunk. The timespan is the last 15 minutes. The 5 hosts are: abc, def, ijk, lmn, opq. Can you kindly help build the query?
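A sketch for the alert search: seed all five hosts so that a host with zero events still appears in the results, then flag any host with no events in the last 15 minutes (host names taken from the question; schedule the alert every 15 minutes and trigger when results > 0):

```
| tstats latest(_time) as last_seen where index=xyz sourcetype=iis host IN (abc, def, ijk, lmn, opq) by host
| append [| makeresults | eval host=split("abc,def,ijk,lmn,opq", ",") | mvexpand host | eval last_seen=0 | fields host last_seen]
| stats max(last_seen) as last_seen by host
| where last_seen < relative_time(now(), "-15m")
```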