All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi all, I'm checking out the "merge-buckets" command. I created an index with 1000 events per bucket. In sum, my index has ~5479 buckets:

~/splunk/bin/splunk search "| dbinspect index=testbuckets2 | stats count"
count
-----
5479

~/splunk/bin/splunk merge-buckets --index-name=testbuckets2 --min-size=1 --max-count=1000
Using the following config: --max-count=1000 --min-size=1 --max-size=1000 --max-timespan=7776000
Found (300) buckets to merge.
Starting to merge (300) buckets.
Number of buckets already merged: 0/300 (0.00%).
New Bucket: /Users/andreas/splunk/var/lib/splunk/testbuckets2/db/db_1653310364_1653310268_17359
Number of buckets merged: 300/300 (100.00%).
Number of buckets created: 1.
Time taken: 27 seconds, 21 milliseconds

After the operation I see 299 fewer buckets:

~/splunk/bin/splunk search "| dbinspect index=testbuckets2 | stats count"
count
-----
5180

Running merge-buckets a second time doesn't merge any further buckets. It seems there is a hardcoded limit of 300 buckets?! Is there a good reason for this?

Best regards, Andreas
It says that my eval is malformed, any suggestions?

| inputlookup US.csv
| eval current_date=strftime(time(),"%Y-%m-%dt%H:%M:%S")
| append [ | makeresults | eval 3month="$3month$"]
| eval 3month=30*24*60*60
| eval relative_time = current_date "+3month"
| eval duration = if(current_date >= date, "Expired", "Valid")
| table current_date, user, category, department, description, revisit, duration
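A likely cause: eval field names that start with a digit (3month) must be wrapped in single quotes, and `eval relative_time = current_date "+3month"` has no operator or function between the field and the string. One hedged way to rework it, doing the comparison in epoch time with relative_time() and strptime() instead of formatted strings (a sketch; it assumes the lookup's date field is in %Y-%m-%d format, so adjust to your data):

| inputlookup US.csv
| eval current_epoch=now()
| eval three_months_out=relative_time(current_epoch, "+3mon")
| eval date_epoch=strptime(date, "%Y-%m-%d")
| eval duration=if(current_epoch >= date_epoch, "Expired", "Valid")
| table user, category, department, description, revisit, duration

Note that Splunk's relative-time specifier for month is "mon", not "month", and epoch values compare numerically, which string dates do not.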
Is it possible to apply event sampling to only part of a search instead of the complete search? For example, I have data coming from two datasets and I want event sampling applied to only the search against one of them. Is there a way to do this in Splunk?
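The event sampling control in the UI applies to the whole search, but one hedged workaround is to sample a single dataset yourself with the random() eval function (a sketch; the index names are placeholders for your two datasets):

index=dataset_a
| append
    [ search index=dataset_b
      | where random() % 10 = 0 ]

This keeps roughly 1 in 10 events from dataset_b while leaving dataset_a complete; change the modulus to change the sampling ratio.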
Hi, I have a use case where we are sending in ~28,000 metrics per second and ~3,360 logs per second. We are using approximately 2 CPU cores (the image is from a 6-core machine, so 32% is ~2 cores). Is this expected, or is there something I can do to reduce this usage? Any help would be great - cheers
I have the query below returning null:

<search id="dfLatencyOverallProcessingDelayBaseSearch">
  <query>index="deng03-cis-dev-audit"
| eval serviceName = mvindex(split(index, "-"), 1)."-".mvindex(split(host, "-"), 2)
| search "data.labels.activity_type_name"="ViolationOpenEventv1"
| spath PATH=data.labels.verbose_message output=verbose_message
| where verbose_message like "%overall_processing_delay%Dataflow Job labels%"
| eval error=case(like(verbose_message,"%is above the threshold of 60.000%"), "warning", like(verbose_message,"%is above the threshold of 300.000%"), "failure")
  </query>
  <earliest>$time.earliest$</earliest>
  <latest>$time.latest$</latest>
  <sampleRatio>1</sampleRatio>
  <done>
    <condition>
      <set token="dfLatencyOverallProcessingDelay_sid">$job.sid$</set>
    </condition>
  </done>
</search>

Then SomeQuery.append:

[ loadjob $dfLatencyOverallProcessingDelay_sid$
| eval alertName = "Dataflow-Latency-Overall processing high delay"
| stats values(alertName) as AlertName values(serviceName) as serviceName count(eval(error=="failure")) as failureCount count(eval(error=="warning")) as warningCount]

If the results from dfLatencyOverallProcessingDelay_sid are null, then AlertName also comes out blank. I want it to always be "Dataflow-Latency-Overall processing high delay".
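One hedged way to guarantee a default AlertName even when the loaded job has no results is to append a fallback row after the stats with appendpipe (a sketch, untested against your data):

| loadjob $dfLatencyOverallProcessingDelay_sid$
| eval alertName = "Dataflow-Latency-Overall processing high delay"
| stats values(alertName) as AlertName values(serviceName) as serviceName count(eval(error=="failure")) as failureCount count(eval(error=="warning")) as warningCount
| appendpipe
    [ stats count
      | where count=0
      | eval AlertName="Dataflow-Latency-Overall processing high delay", failureCount=0, warningCount=0
      | fields - count ]

The appendpipe subsearch only emits a row when the preceding stats produced none (count=0), so the default row appears exactly when the base search is empty.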
Hello Splunkers!

I am looking for the Splunk Add-on for Ruckus Wireless app file. I can't find it on Splunkbase. Is there any place that I can find the 'Splunk Add-on for Ruckus Wireless App'?

Thank you in advance.
Error downloading update from https://splunkbase.splunk.com/app/3803/release/2.0.10/download/: Forbidden

Can someone please share the reason why it is throwing this error? I am able to install other add-ons, but the SailPoint Adaptive Response add-on is failing to install in Splunk. I appreciate your answer!!

Thanks, Gopinadh
I'm using Database Connect v3.9.0. I'm trying to use it to connect to an Impala database, which is an external database, but I can't find the external database option to add it (as shown in the attached snapshot). Should I add another app or plugin to get the external database connection feature? Thanks,
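For databases DB Connect does not list out of the box, the usual route is to install the vendor's JDBC driver into the app and describe the connection type in a local db_connection_types.conf. A heavily hedged sketch follows - every value below (driver class, service class, port, URL format) is an assumption to verify against the DB Connect documentation and the Cloudera Impala JDBC driver you install:

# $SPLUNK_HOME/etc/apps/splunk_app_db_connect/local/db_connection_types.conf
[impala]
displayName = Impala
serviceClass = com.splunk.dbx2.DefaultDBX2JDBC
jdbcDriverClass = com.cloudera.impala.jdbc41.Driver
jdbcUrlFormat = jdbc:impala://<host>:<port>/<database>
port = 21050

After dropping the driver jar into the app's drivers directory and restarting, the new connection type should appear in the identity/connection setup pages.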
Experts, our Splunk dashboard was converted from an XML to an HTML file. On the left-hand side of the page there are hyperlinks to various dashboards/views. When we click on those links, the corresponding dashboards/views load on the right-hand side of the page in an iframe. Currently I am getting the message "Splunk has deprecated HTML dashboards. We recommend all HTML dashboards to be built in Dashboard Studio". Our customer uses Splunk Enterprise 8.2.6. To fix the above error, I tried to rebuild the dashboard from the HTML code back to XML. The dashboard loads properly with the hyperlinks on the left side of the page, but the dashboards/views are not getting loaded on the right-hand side in the iframe; Splunk seems to be automatically removing the iframe. Could you please help me fix this issue? Thanks, Ravikumar
Hello Splunk Community! Regarding extracting new fields in Splunk search: what is the lifespan of the newly created fields? Will they be available after re-login, and available to all users? And can they be easily removed later? Thank you in advance!
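For background, fields created with the field extractor persist as EXTRACT- stanzas in props.conf, so they survive re-login; whether other users see them depends on the permissions chosen when saving (private vs. app-shared), and they can be deleted later under Settings > Fields > Field extractions. A minimal sketch of what the UI writes, with a hypothetical sourcetype and regex:

# props.conf, in the app or user context the extraction was saved to
[my_sourcetype]
EXTRACT-status = status=(?<status>\d+)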
Hi geeks, I integrated TheHive and Cortex with Splunk ES to get alerts after a correlation search rule triggers. According to the attached Image-01, please help me fill in the correct values for "Data field name" and "Datatype field name". Also, do I have to specify the exact name according to what is in Cortex to identify the "Analyzers"?

Image-01:

Image-02:

Image-03:

Regards, Amir
Hi everyone, I have limited disk space on the /var/log path, so I am trying to manage Phantom log rotation (following this link: Configure the logging levels for Splunk SOAR (On-premises) daemons - Splunk Documentation), but I found a large file named "app_interface.log" that is not included in phantom_logrotate.conf. Does anyone have suggestions on what kind of records are collected in this file, and what is the best practice for rotating it? Thank you
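If the file turns out to be safe to rotate externally, a hedged starting point is a standard logrotate stanza (the path and retention below are assumptions for your environment; copytruncate is included because the daemon likely keeps the file handle open):

/var/log/phantom/app_interface.log {
    daily
    rotate 7
    compress
    missingok
    notifempty
    copytruncate
}

It may be worth confirming with Splunk support whether SOAR expects to manage this file itself before rotating it out from under the process.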
Hi everyone, I want to handle deltas that are null in the middle of a time series by taking the next delta after the nulls and dividing it by (count of nulls + 1). Here is the data:

time        id   value  delta
01/02/2022  123  12
02/02/2022  123  15     3
03/02/2022  123  20     5
04/02/2022  123
05/02/2022  123
06/02/2022  123
07/02/2022  123  60     40
08/02/2022  123  60     0
09/02/2022  123
10/02/2022  123
01/02/2022  145  20
02/02/2022  145  50     30
03/02/2022  145  70     20
04/02/2022  145  100    30
05/02/2022  145
06/02/2022  145
07/02/2022  145  190    90
08/02/2022  145
09/02/2022  145
10/02/2022  145
01/02/2022  987  50
02/02/2022  987  100    50
03/02/2022  987  160    60
04/02/2022  987  200    40
05/02/2022  987  230    30
06/02/2022  987  280    50
07/02/2022  987  360    80
08/02/2022  987  420    60
09/02/2022  987  500    80
10/02/2022  987  550    50

Here is what I get after untable:

time        123  145  987
01/02/2022  0    0    0
02/02/2022  3    30   50
03/02/2022  5    20   60
04/02/2022  10   30   40
05/02/2022  10   30   30
06/02/2022  10   30   50
07/02/2022  10   30   80
08/02/2022  0    0    60
09/02/2022       0    80
10/02/2022       0    50

As in the table, there is one record on 08/02/2022 where the delta is 0. For other rows with no data it is null -> OK. But for id=145, the delta from 08/02 to 10/02 should stay empty, so that it does not affect avg(delta). So my question is: how do I distinguish the zero and the null in this case? Thanks!
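One hedged approach is to carry the last non-null value forward and count the null rows in between with streamstats, then spread the delta only on rows that actually have a value (a sketch, untested; field names match the table above, and the reset_after behaviour is worth verifying on your data):

| sort 0 id time
| streamstats current=f last(value) as prev_value by id
| streamstats current=f count(eval(isnull(value))) as gap by id reset_after="("isnotnull(value)")"
| eval delta = if(isnotnull(value) AND isnotnull(prev_value), (value - prev_value) / (gap + 1), null())

Because delta is only assigned where value exists, trailing nulls (like id=145 from 08/02 on) keep delta=null, so avg(delta) ignores them, while genuine zero deltas (123 on 08/02) stay 0.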
Greetings!!

I'm getting warning alerts showing that the Splunk forwarder is not active. As shown in the pic below, the Splunk forwarder is running (/opt/splunkforwarder/bin/splunk status), but in Monitoring Console under Forwarders: Forwarder Management it is not active and shows a missing status, as shown in the screenshot above. Even when I try to stop and restart the splunkforwarder service (/opt/splunkforwarder/bin/splunk stop), it can't be stopped, as shown in the screenshot below. Kindly help me fix this error.

Another error while searching (I am running splunk_security_essentials version 3.0.0):

Error 1: Could not load lookup=LOOKUP-splunk_security_essentials

Error 2: What is the root cause of error 2 above, and how do I fix it?

Also, here is the warning/error I got from splunkd.log:

05-22-2022 19:54:02.957 +0200 WARN SSLCommon - Received fatal SSL3 alert. ssl_state='SSLv3 read server certificate B', alert_description='certificate expired'.
05-22-2022 19:54:02.957 +0200 ERROR TcpOutputFd - Connection to host=x.x.x.17:9997 failed. sock_error = 0. SSL Error = error:14090086:SSL routines:ssl3_get_server_certificate:certificate verify failed - please check the output of the `openssl verify` command for the certificates involved; note that if certificate verification is enabled (requireClientCert or sslVerifyServerCert set to "true"), the CA certificate and the server certificate should not have the same Common Name.
05-22-2022 19:54:02.960 +0200 ERROR X509Verify - X509 certificate (O=SplunkUser,CN=SplunkServerDefaultCert) failed validation; error=10, reason="certificate has expired"
05-22-2022 19:54:02.960 +0200 WARN SSLCommon - Received fatal SSL3 alert. ssl_state='SSLv3 read server certificate B', alert_description='certificate expired'.
05-22-2022 19:54:02.960 +0200 ERROR TcpOutputFd - Connection to host=x.x.x.16:9997 failed. sock_error = 0. SSL Error = error:14090086:SSL routines:ssl3_get_server_certificate:certificate verify failed - please check the output of the `openssl verify` command for the certificates involved; note that if certificate verification is enabled (requireClientCert or sslVerifyServerCert set to "true"), the CA certificate and the server certificate should not have the same Common Name.
05-22-2022 19:54:02.964 +0200 ERROR X509Verify - X509 certificate (O=SplunkUser,CN=SplunkServerDefaultCert) failed validation; error=10, reason="certificate has expired"
05-22-2022 19:54:02.964 +0200 WARN SSLCommon - Received fatal SSL3 alert. ssl_state='SSLv3 read server certificate B', alert_description='certificate expired'.
05-22-2022 19:54:02.964 +0200 ERROR TcpOutputFd - Connection to host=x.x.x.14:9997 failed. sock_error = 0. SSL Error = error:14090086:SSL routines:ssl3_get_server_certificate:certificate verify failed - please check the output of the `openssl verify` command for the certificates involved; note that if certificate verification is enabled (requireClientCert or sslVerifyServerCert set to "true"), the CA certificate and the server certificate should not have the same Common Name.

Kindly help and guide me on how to fix this. Thank you in advance.
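The splunkd.log lines point at an expired certificate: the forwarder is rejecting the indexers' SplunkServerDefaultCert, which has a limited validity period. A hedged way to confirm on each host, assuming the default certificate path under the Splunk installation:

openssl x509 -enddate -noout -in $SPLUNK_HOME/etc/auth/server.pem

If the notAfter date has passed on the indexers, regenerating the server certificates there (Splunk recreates the default cert if server.pem is moved aside and the instance is restarted) should let the forwarder connect again, which in turn should clear the "missing" status in the Monitoring Console.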
Hi All, I have installed the Splunk UF on Windows. I have one static log file on the system (JSON) that needs to be monitored. I have configured this in the inputs.conf file. I see only the System/Application and Security logs being sent to the indexer, whereas the static log file is not seen. I ran "splunk list inputstatus" and checked:

C:\Users\Administrator\Downloads\test\test.json
file position = 75256
file size = 75256
percent = 100.00
type = finished reading

So this means the file is being read properly. What can be the issue that I don't see the test.json logs on the Splunk side? I tried checking index=_internal on the indexer but was not able to figure out what is causing the issue; I checked a few blogs on the Internet as well. Can anyone please help with this? Thanks in advance, newbie to Splunk
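Since inputstatus reports the file as fully read, one common culprit is the search time range: events in a static file carry their original timestamps, so they won't appear in a "last 24 hours" search. A hedged check over all time and all indexes (adjust if your inputs.conf routes the file to a specific index):

index=* source="*test.json" earliest=0

If that still returns nothing, it is worth confirming the index named in the inputs.conf stanza actually exists on the indexer, since events sent to a non-existent index are dropped.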
After successfully installing a Splunk Cloud trial I was able to use it. The next day the system doesn't let me in with the proper credentials (sc_admin), giving the following message: Is this expected with the trial version? Will it resolve by itself after some time?
Hi, I have created a React app and symlinked it to Splunk. It now appears as one of the installed apps but I am given no option to launch the app. What can I do? Thanks
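One hedged thing to check: an app only gets a launch link when app.conf marks it as visible. A minimal sketch, assuming a default/app.conf inside your symlinked app directory (the label and description are placeholders):

# default/app.conf
[ui]
is_visible = true
label = My React App

[launcher]
description = React app served from Splunk

A restart of Splunk (or hitting the debug/refresh endpoint) is usually needed after editing it.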
Hi, I have a column timechart with numerical values, and I would like to add strings, or characters, after these values, when displayed on the dashboard. I have tried to append the string to the results themselves, but it seems like timechart is unable to populate non-numerical data. Any help or alternative ideas on how I can achieve the above results visually? Thanks.  
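Chart visualizations need numeric y-values, so appending a string turns the series into text and the chart drops it. Two hedged alternatives: put the unit in the y-axis title instead (via the Format menu, or the charting.axisTitleY.text option in Simple XML), or, if rendering as a table is acceptable, append the suffix after the timechart with foreach (a sketch; the field name and " ms" unit are placeholders):

... | timechart span=1h avg(duration) as duration
| foreach duration [ eval <<FIELD>> = '<<FIELD>>'." ms" ]

The foreach version converts the values to strings, so keep it for tables only, not charts.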
I'm trying to play around with tags from our AWS environments (using the AWS add-on metadata input). The tags come in looking like this:

"TagList": [
  {"Key": "Project", "Value": "project1"},
  {"Key": "ProdState", "Value": "prod"},
  {"Key": "Product", "Value": "product1"},
  {"Key": "Team", "Value": "team1"},
  {"Key": "power_state", "Value": "-1:1800:1900:-1:1800:1900:-1:1800:1900:-1:1800:1900:-1:1800:1900:-1:-1:-1:-1:-1:-1:Australia/Brisbane"},
  {"Key": "CostCentre", "Value": "000000"},
  {"Key": "workload_type", "Value": "production"},
  {"Key": "Name", "Value": "name1"}
]

To make life easier for myself, I am trying to un-nest these tags - ultimately I want this to look something like this:

tags.Project = project1
tags.ProdState = prod
tags.Product = product1
...

I've tried with a foreach like this (below), but it doesn't seem to get all my tags out - for example, it will only extract CostCentre, ProdState, Project and workload_type.

index=testing source="xxxxxxxxxxxx:ap-southeast-2:rds_instances"
| spath TagList{} output=tmp_taglist
| foreach tmp_taglist{}
    [ rex "..Key.:\s\"(?<this_key>[^\"]+)\".\s.Value..\s\"(?<this_value>[^\"]+)\"."
    | eval tags.{this_key} = this_value ]
| table DBInstanceArn tmp_taglist tags.*

Can anyone help me understand what I am doing wrong here?
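Part of the problem is that spath with output= collapses all the tags into one multivalue field, so foreach tmp_taglist{} has no per-tag fields to iterate over, and rex without max_match=0 only keeps the first match. A hedged alternative is to expand the tag objects and build the field names dynamically (a sketch, untested against your events; it assumes one TagList array per event):

index=testing source="xxxxxxxxxxxx:ap-southeast-2:rds_instances"
| spath path=TagList{} output=tag
| mvexpand tag
| eval key="tags.".spath(tag, "Key"), value=spath(tag, "Value")
| eval {key}=value
| fields - tag, key, value
| stats values(tags.*) as tags.* by DBInstanceArn

The eval {key}=value creates one field per tag name, and the final stats folds the mvexpand-ed rows back into one row per instance.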
How can I get Splunk data into my .NET Core application? I need to read the logs either via database or web API calls.
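Splunk is not a database you can query over ODBC, but it exposes a REST API on the management port (8089 by default) that any HTTP client, including .NET's HttpClient, can call. A hedged sketch of the search export endpoint using curl (host, credentials, and the search itself are placeholders):

curl -k -u admin:changeme \
  https://splunk-host:8089/services/search/jobs/export \
  --data-urlencode search="search index=main | head 10" \
  -d output_mode=json

There is also an official Splunk Enterprise SDK for C# that wraps these endpoints, which may be a more comfortable fit for a .NET Core application.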