All Topics


Working with this query, I'm hoping to get only results where one field's value is greater than the other:

index="index*"
| eval MonthNumber=strftime(_time,"%m")
| chart eval(round(avg(durationMs), 0)) AS avg_durationMs by properties.url, MonthNumber
| rename 04 AS "Apr", 05 AS "May"

I want to get only the results where the Apr value is greater than the May value by 10.
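
A minimal sketch of one way to express that filter, appended after the rename above (assuming Apr and May are the only month columns the chart produces):

| where Apr > May + 10

The where command drops any properties.url row whose Apr average does not exceed the May average by more than 10.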
The Splunk documentation for setting up Predictive Analytics alerting has you create a correlation search per service that Predictive Analytics is set up on. Has anyone tried to create a single correlation search to handle all of their Predictive Analytics alerting, and is there any documentation (Jeff Wiedemann's Alerting Blueprint didn't cover this) that would give someone some ideas on the best approach? Below is the relevant Splunk documentation: "You click the bell icon in the Worst Case Service Health Score panel to open the correlation search modal. You want ITSI to generate a notable event in Episode Review the next time the Middleware service's health is expected to degrade." https://docs.splunk.com/Documentation/ITSI/4.12.1/SI/UseCase
Hello, I need some help. I am new to Splunk and have run into an issue. I want a table that will display Computer Name, Physical Address, Device Type, IP Address, and which version of Office they have (2013 or 365). The data is under one index but 3 different source types.

Index=Desktops
SourceType 1 = AssetInfo - It has lots of fields, but the 3 I care about are PysAddress, DevType, ComputerName
SourceType 2 = Network - It has many fields, but I only want two, and they are called IPAddress, Computer
SourceType 3 = Software - It has 3 fields, and I care about all 3, which are Computer, SoftwareName, Software Version

I want to pull info from all 3 source types and make one table. The common field is computer name. The first issue is that in SourceType 1 the field is called ComputerName, and in the other 2 sourcetypes it is Computer. I know I could do a rename command on sourcetype 1 if I had to. I have tried the OR Boolean, the multisearch command, the union command, and join, but I can never seem to get it to work right: the table gets created, but it pulls the IP onto one line and then a separate line for the software; they are never on the same line. The next issue is that I need to filter on software that contains Microsoft Office 2013 or Office 365. Any ideas would be welcomed.
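
A minimal sketch of one way to combine the three sourcetypes onto one row per computer (assuming the field names described above; "Software Version" is written here as SoftwareVersion, so adjust to the actual extracted name):

index=Desktops sourcetype=AssetInfo OR sourcetype=Network OR sourcetype=Software
| eval Computer=coalesce(Computer, ComputerName)
| stats values(PysAddress) AS PhysicalAddress, values(DevType) AS DeviceType, values(IPAddress) AS IPAddress, values(SoftwareName) AS SoftwareName, values(SoftwareVersion) AS SoftwareVersion BY Computer
| search SoftwareName="*Office 2013*" OR SoftwareName="*Office 365*"

The coalesce handles the ComputerName/Computer mismatch without a separate rename, and stats BY Computer is what pulls the IP and the software details onto the same line.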
Hi, I'm not sure if I understand maxVolumeDataSizeMB correctly. Let's say I have a volume stanza like this in an index cluster with 4 peers:

[volume:volume_name]
path = /foo/bar
maxVolumeDataSizeMB = 5242880

How does Splunk handle the total volume capacity? Does every peer have its own 5TB in this volume until buckets roll, or does maxVolumeDataSizeMB count against the total volume size across all peers, e.g. 5TB/4? Thanks in advance!
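
A hedged way to see how the limit is actually being consumed is to total bucket sizes under the volume path per peer; a sketch, assuming the path above:

| dbinspect index=*
| search path="/foo/bar/*"
| stats sum(sizeOnDiskMB) AS usedMB BY splunk_server

If each splunk_server can grow toward 5242880 MB independently, the limit is enforced per peer rather than cluster-wide.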
Hi all, I'm in the process of setting up performance reporting for services provided for a client. The logic in question is very simple: average response of the service and volume over a predetermined time period. I'm having trouble including services with 0 calls during the time period in question. It is important to the end user that they see which services are not being called as well. I've coded up two solutions and neither is giving me what I need. I've also looked at https://community.splunk.com/t5/Splunk-Search/Include-zero-count-items-from-lookup/m-p/177260, but I'm having trouble understanding how this applies to my solution outside of the outputlookup.

The format of the table desired is: Service - Average Response - Volume.

Solution 1 - base search - works but does not include 0 values for obvious reasons:

index=********************************************
| lookup services.csv service AS liveDataField OUTPUT service
| stats avg(ResponseTime) AS AverageResponseTime, count AS Volume BY service
| eval AverageResponseTime=round((1000*AverageResponseTime), 2)
| fillnull value=0 AverageResponseTime

My first thought was to add in a lookup table with all the services, and build off of that:

index=********************************************
| inputlookup append=t services.csv
| lookup services.csv service AS liveDataField OUTPUT service
| stats avg(ResponseTime) AS AverageResponseTime, count AS Volume BY service
| eval AverageResponseTime=round((1000*AverageResponseTime), 2)
| fillnull value=0 AverageResponseTime

This gives me what I need in terms of zero values, but instead of returning a count of 0, it returns Volume=1 for each service with zero hits. It also increments every other service's volume by one erroneously. I suspect this is due to the inputlookup append=t. I've also tried doing an eval count=0 initially.

TL;DR - adding in a lookup to address zero count items caused each service's volume to be incremented by 1. Is there a quick fix for this? Or perhaps a better way of doing it altogether? Cheers
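
One hedged way around the off-by-one: count only the real events, then append the lookup rows with an explicit zero and take the max per service, so lookup rows can never inflate the count. A sketch, assuming the field names above:

index=... (as above)
| lookup services.csv service AS liveDataField OUTPUT service
| stats avg(ResponseTime) AS AverageResponseTime, count AS Volume BY service
| append [| inputlookup services.csv | fields service | eval Volume=0]
| stats max(AverageResponseTime) AS AverageResponseTime, max(Volume) AS Volume BY service
| eval AverageResponseTime=round((1000*AverageResponseTime), 2)
| fillnull value=0 AverageResponseTime

Services with real traffic keep their event-based Volume; services present only in the lookup keep the explicit 0.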
Hello folks, been busting my head here trying to pull data from multiple sourcetypes, which I thought would run like:

Index=test sourcetype=A OR sourcetype=B
| search host=*
| where <appname> ="value" AND
| table Host, IPAddress, Appname

host is a field in both sourcetypes, and the IP-related info is in B. I'm just trying to pull out each host, its IP address, and the app in question. What I get is a really long host list (so that's good) with a few IPs and a few apps, looking a bit like this:

Host  | IPAddress | Appname
host1 | ip        |
host2 | ip        |
host3 |           | appname
host4 |           | appname

and so on and so forth. It seems like any place that shows an IP address refuses to show an appname and vice versa? It still acts the same. I pulled each part separately, so I know the data is good.
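
A minimal sketch of the usual fix: aggregate both sourcetypes onto one row per host with stats, then filter (assuming the field names above, with Appname coming from one sourcetype and IPAddress from the other):

index=test sourcetype=A OR sourcetype=B
| stats values(IPAddress) AS IPAddress, values(Appname) AS Appname BY host
| search Appname="value"

Each raw event carries either the IP or the app name, never both, which is why the plain table shows them on separate lines; stats BY host merges them onto one row.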
For the latest version, version 5.2.4, I have vulnerability data coming in from Tenable.SC. How can I filter the results based on the scan name? I cannot seem to figure it out. I remember in previous versions we could leverage scan_result_info.name, but not in this latest version. Any thoughts are appreciated. Thanks
When a user is added, I need the time to be recorded and displayed in a field called user_added. I created the field name and can see it in the csv table, but there is no time per entry.

<query>
| inputlookup USB.csv
| eval user_added = strftime(_time, "%Y-%d-%m %H:%M:%S")
| append
    [ | makeresults
      | eval user="$user_tok$", description="$description_tok$", revisit="$revisit_tok$", Action="$dropdown_tok$" ]
| fields - _time
| table user_added, user_added, user, category, department, description, revisit, status
| lookup lookup_ims user as user OUTPUT category department
| outputlookup USB.csv
</query>
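
A hedged sketch of one fix: rows coming out of inputlookup carry no _time, so strftime(_time, ...) stays empty; stamping now() on the new row inside the append instead (tokens as in the original) gives each entry its own timestamp:

<query>
| inputlookup USB.csv
| append
    [ | makeresults
      | eval user="$user_tok$", description="$description_tok$", revisit="$revisit_tok$", Action="$dropdown_tok$"
      | eval user_added = strftime(now(), "%Y-%m-%d %H:%M:%S") ]
| lookup lookup_ims user as user OUTPUT category department
| fields - _time
| table user_added, user, category, department, description, revisit, status
| outputlookup USB.csv
</query>

Because user_added is written back to USB.csv by outputlookup, existing entries keep the timestamp they were created with on later runs.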
Not sure if this is a limitation of the Phantom prompt block or if someone has figured this out already. I am using a prompt block to allow a user to build up a config file that will eventually be sent to Splunk to create a saved search. The questions allow the user to select specific values for fields to generate the metadata necessary for the Splunk saved search (splunk query, time fields, eval fields, etc.). The response type for the question is a list of choices. There are two choices:

1. The existing field value (which comes from the config file that was pulled via a prior action call)
2. CHANGE (which would be selected when the value needs to be changed)

When using the response type 'list', is there a way to have #1 be set as the default response? That way you would only have to select CHANGE from the drop-down, rather than having to select the existing field's value every time when it doesn't need to be changed.
I installed Splunk on an EC2 instance. I saw the login page on port 8000 but when I tried to login I got a server error. I am not sure what I should look at.
Hello All, I want to know how I can get Controller UI access. Thanks in advance
I have a project where I want to use a Splunk dashboard to show how some metrics change over time. The metrics come from a device that we log in to via CLI, running a command to show some stats. I'm new to doing this from scratch with Splunk, and I would appreciate any help in understanding the best way to do it. The workflow, using a script, is as follows:

1. Gain access to the target device
2. Run a command that gathers the required data
3. Capture the output
4. Parse/edit it into a compatible format
5. Ingest/import it into Splunk
6. Display it in a dashboard
7. Update the metrics every x minutes (provisionally 15 or 30)

Assuming we have a script that can gather the data, the questions I have are:

1) What's the best format to have the data in for Splunk?
2) What's the best way to get the data into Splunk?
3) How do I automate this process?

Thanks for any help and guidance.
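
For steps 5-7, one common pattern is a scripted input: Splunk runs the script on a schedule and indexes whatever it prints to stdout, so key=value pairs or JSON lines work well as the output format. A minimal inputs.conf sketch, where the app name, script name, sourcetype, and index are all placeholders:

[script://$SPLUNK_HOME/etc/apps/device_stats/bin/collect_stats.sh]
# run every 15 minutes (interval is in seconds)
interval = 900
sourcetype = device_stats
index = main
disabled = false

This covers the automation question as well: the interval setting replaces any external cron job, and the dashboard then just searches the chosen index/sourcetype.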
Hi all, I'm checking out the "merge-buckets" command. I created an index with 1000 events per bucket. In sum, my index has this many buckets:

~/splunk/bin/splunk search "| dbinspect index=testbuckets2 | stats count"
count
-----
5479

~/splunk/bin/splunk merge-buckets --index-name=testbuckets2 --min-size=1 --max-count=1000
Using the following config: --max-count=1000 --min-size=1 --max-size=1000 --max-timespan=7776000
Found (300) buckets to merge.
Starting to merge (300) buckets.
Number of buckets already merged: 0/300 (0.00%).
New Bucket: /Users/andreas/splunk/var/lib/splunk/testbuckets2/db/db_1653310364_1653310268_17359
Number of buckets merged: 300/300 (100.00%).
Number of buckets created: 1.
Time taken: 27 seconds, 21 milliseconds

After the operation I see 299 fewer buckets:

~/splunk/bin/splunk search "| dbinspect index=testbuckets2 | stats count"
count
-----
5180

Running merge-buckets a second time doesn't merge any further buckets. It seems there is a hardcoded limit of 300 buckets?! Any good reason for this? Best regards, Andreas
It says that my eval is malformed, any suggestions?

| inputlookup US.csv
| eval current_date=strftime(time(),"%Y-%m-%dt%H:%M:%S")
| append [ | makeresults | eval 3month="$3month$"]
| eval 3month=30*24*60*60
| eval relative_time = current_date "+3month"
| eval duration = if(current_date >= date, "Expired", "Valid")
| table current_date, user, category, department, description, revisit, duration
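
A hedged sketch of a version that should parse: in eval, a field name starting with a digit must be single-quoted on the left-hand side, `current_date "+3month"` is missing an operator, and comparisons are simpler in epoch time than in formatted strings. Assuming the intent is "expired once current time passes the stored date", and that date is stored as "%Y-%m-%dT%H:%M:%S":

| inputlookup US.csv
| eval current_date=now()
| eval '3month'=relative_time(current_date, "+3mon")
| eval duration=if(current_date >= strptime(date, "%Y-%m-%dT%H:%M:%S"), "Expired", "Valid")
| table current_date, user, category, department, description, revisit, duration

Working in epoch seconds until the final display step avoids string-vs-number comparison surprises; format with strftime only at the end if needed.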
Is it possible to apply event sampling to only a part of a search instead of the complete search? For example, I have data coming from two datasets and I want the event sampling applied to only one dataset's part of the search. Is there a way to do this in Splunk?
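
Built-in event sampling (the sample ratio) applies to the whole search, but one hedged approximation is to thin one dataset yourself with random(); a sketch assuming hypothetical index names and a 1-in-10 ratio:

(index=dataset_a) OR (index=dataset_b)
| where index!="dataset_a" OR random() % 10 == 0

Note this filters after retrieval, so it approximates a 10% sample of dataset_a without reducing the events actually scanned, unlike true search-level sampling.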
Hi, I have a use case where we are sending in 28,000 metrics per second and 3,360 logs per second. We are using approximately 2 CPU cores; the image is from a 6-core machine, so 32% is ~2 cores. Is this expected, or is there something I can do to reduce this usage? Any help would be great - cheers
I have the below query, where the subsearch can return null:

<search id="dfLatencyOverallProcessingDelayBaseSearch">
<query>index="deng03-cis-dev-audit"
| eval serviceName = mvindex(split(index, "-"), 1)."-".mvindex(split(host, "-"), 2)
| search "data.labels.activity_type_name"="ViolationOpenEventv1"
| spath PATH=data.labels.verbose_message output=verbose_message
| where verbose_message like "%overall_processing_delay%Dataflow Job labels%"
| eval error=case(like(verbose_message,"%is above the threshold of 60.000%"), "warning", like(verbose_message,"%is above the threshold of 300.000%"), "failure")
</query>
<earliest>$time.earliest$</earliest>
<latest>$time.latest$</latest>
<sampleRatio>1</sampleRatio>
<done>
<condition>
<set token="dfLatencyOverallProcessingDelay_sid">$job.sid$</set>
</condition>
</done>
</search>

Then another query appends:

| append
    [ loadjob $dfLatencyOverallProcessingDelay_sid$
      | eval alertName = "Dataflow-Latency-Overall processing high delay"
      | stats values(alertName) as AlertName values(serviceName) as serviceName count(eval(error=="failure")) as failureCount count(eval(error=="warning")) as warningCount ]

If the results from dfLatencyOverallProcessingDelay_sid are null, then AlertName also comes out blank. I want it to still be "Dataflow-Latency-Overall processing high delay".
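
A hedged sketch of one fix: stats without a BY clause still returns a single row even over zero results, so setting AlertName after the stats (instead of collecting it with values over rows that may not exist) keeps the label when loadjob comes back empty:

| append
    [ loadjob $dfLatencyOverallProcessingDelay_sid$
      | stats values(serviceName) as serviceName count(eval(error=="failure")) as failureCount count(eval(error=="warning")) as warningCount
      | eval AlertName = "Dataflow-Latency-Overall processing high delay" ]

With no results, serviceName stays empty and both counts come out 0, but AlertName is always populated.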
Hello Splunkers! I am looking for the Splunk Add-on for Ruckus Wireless app file. I can't find it in Splunkbase. Is there any place that I can find the 'Splunk Add-on for Ruckus Wireless' app? Thank you in advance.
Error downloading update from https://splunkbase.splunk.com/app/3803/release/2.0.10/download/: Forbidden

Can someone please share the reason why it is throwing this error? I'm able to add other apps, but the SailPoint adaptive response add-on is failing to install in Splunk. I appreciate your answer!! Thanks, Gopinadh
I'm using Database Connect v3.9.0. I'm trying to use it to connect to an Impala database, which is an external database, but I can't find the external database option to add it (as shown in the attached snapshot). Should I add another app or plugin to get the external database connection feature? Thanks,
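
DB Connect typically lets you register a database type that isn't in the built-in list by adding a stanza to db_connection_types.conf in splunk_app_db_connect/local and dropping the JDBC driver jar into the app's drivers directory. A hedged sketch for Impala; the driver class, URL format, and port here are assumptions to verify against the Cloudera JDBC driver documentation:

[impala]
# all values below are illustrative and must match the installed driver
displayName = Impala
serviceClass = com.splunk.dbx2.DefaultDBX2JDBC
jdbcDriverClass = com.cloudera.impala.jdbc41.Driver
jdbcUrlFormat = jdbc:impala://<host>:<port>/<database>
port = 21050

After restarting Splunk, the new type should appear in the connection type drop-down.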