All Topics



I am trying to develop a visualization showing a Splunk distributed architecture with data flow, using the Flow Viz App. I want to show the architecture as per the diagram below, with tcp_eps as Events/s. To achieve this, I am looking at example 2 in the documentation section of the app. You can also find that example image on the app's Splunkbase page, or, if you have the add-on on your local host, its link is most likely this.

But I am quite confused by the instructions. They say, "Each node should be delimited by three hyphens "---"." But where or how do I set up a query that outputs links in that path syntax format?

Another thing I am unsure about is where it states:

    <existing query> | append [| inputlookup my_table_of_nodes.csv]

What exactly should the node data in the CSV file contain?

Can someone please help me with this? @chrisyounger
Hi, I have a query that returns data in the format below. I am trying to build a table that assigns each module a priority based on the response time (>2s is a violation) and the number of times the violation occurred.

    | foreach *-2020 or *-2021
        [ | eval LastViolatedMonth = if('<<FIELD>>'>2, "<<FIELD>>", LastViolatedMonth),
            LastViolatedMonthNumber = substr(LastViolatedMonth, 0, 2),
            ViolationCount = if('<<FIELD>>'>2, ViolationCount+1, ViolationCount),
            LastViolatedResponse = if('<<FIELD>>'>2, '<<FIELD>>', LastViolatedResponse),
            Deviation = case(LastViolatedResponse>2, round(((LastViolatedResponse-2)/2)*100, 1)),
            Priority = case(
                (Deviation >= 100 AND ViolationCount >= 1), "P1",
                ((Deviation >= 75 AND Deviation < 100) AND ViolationCount >= 3), "P1",
                ((Deviation >= 75 AND Deviation < 100) AND (ViolationCount >= 0 AND ViolationCount < 3)), "P2",
                ((Deviation >= 50 AND Deviation < 75) AND ViolationCount >= 3), "P2")]
    | fields Module, LastViolatedMonth, LastViolatedResponse, ViolationCount, Deviation, Priority

Currently a module is considered a P1 violator when the violation count is >3. I would like to add one more condition that checks the previous month's response, i.e. whether it was a violation or not. If the previous month was not a violation but the latest month is, and the violation count is >=3, I want that module to be marked P2 (not P1). I am not sure how to check the previous column's value (the previous month, to see whether it violated then) against the latest month inside the foreach statement. Could someone please help me out here? @bowesmana can you help me with this? Thanks.
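As a side note, the case() rules in the query above can be restated outside SPL to make them easier to reason about. Below is a hedged Python sketch of the same rule table (assign_priority is an illustrative name, and the new previous-month condition being asked about is deliberately not included):

```python
def assign_priority(deviation, violation_count):
    """Mirror of the SPL case() rules above: the first matching rule wins."""
    if deviation >= 100 and violation_count >= 1:
        return "P1"
    if 75 <= deviation < 100 and violation_count >= 3:
        return "P1"
    if 75 <= deviation < 100 and 0 <= violation_count < 3:
        return "P2"
    if 50 <= deviation < 75 and violation_count >= 3:
        return "P2"
    return None  # no rule matched; SPL case() would leave Priority null
```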
Hello, I'm trying to develop a custom ReportingCommand. Like the built-in command stats, I want only the global result over all my events, not the partial results from the reduce function being used multiple times. I tried starting from the example given in the Splunk SDK:

    import os, sys
    splunkhome = os.environ['SPLUNK_HOME']
    sys.path.append(os.path.join(splunkhome, 'etc', 'apps', 'sum-dev', 'lib'))

    from splunklib.searchcommands import dispatch, ReportingCommand, Configuration, Option, validators
    from splunklib.searchcommands.validators import Fieldname
    import splunk
    import logging, logging.handlers

    def setup_logging():
        logger = logging.getLogger('splunk.sumdev')
        SPLUNK_HOME = os.environ['SPLUNK_HOME']
        LOGGING_DEFAULT_CONFIG_FILE = os.path.join(SPLUNK_HOME, 'etc', 'log.cfg')
        LOGGING_LOCAL_CONFIG_FILE = os.path.join(SPLUNK_HOME, 'etc', 'log-local.cfg')
        LOGGING_STANZA_NAME = 'python'
        LOGGING_FILE_NAME = "sumdev.log"
        BASE_LOG_PATH = os.path.join('var', 'log', 'splunk')
        LOGGING_FORMAT = "%(asctime)s %(levelname)-s\t%(module)s:%(lineno)d - %(message)s"
        splunk_log_handler = logging.handlers.RotatingFileHandler(
            os.path.join(SPLUNK_HOME, BASE_LOG_PATH, LOGGING_FILE_NAME), mode='a')
        splunk_log_handler.setFormatter(logging.Formatter(LOGGING_FORMAT))
        logger.addHandler(splunk_log_handler)
        splunk.setupSplunkLogger(logger, LOGGING_DEFAULT_CONFIG_FILE, LOGGING_LOCAL_CONFIG_FILE, LOGGING_STANZA_NAME)
        return logger

    @Configuration()
    class SumCommand(ReportingCommand):
        total = Option(
            doc='''
            **Syntax:** **total=***<fieldname>*
            **Description:** Name of the field that will hold the computed sum''',
            require=True, validate=validators.Fieldname())

        @Configuration()
        def map(self, records):
            """Computes sum(fieldname, 1, n) and stores the result in 'total'"""
            self.logger.debug('SumCommand.map')
            fieldnames = self.fieldnames
            total = 0.0
            for record in records:
                for fieldname in fieldnames:
                    total += float(record[fieldname])
            yield {self.total: total}

        @Configuration()
        def reduce(self, records):
            """Computes sum(total, 1, N) and stores the result in 'total'"""
            self.logger.debug('SumCommand.reduce')
            fieldname = self.total
            total = 0.0
            for record in records:
                value = record[fieldname]
                try:
                    total += float(value)
                except ValueError:
                    self.logger.debug('  could not convert %s value to float: %s', fieldname, repr(value))
            yield [{self.total: total}]

    dispatch(SumCommand, sys.argv, sys.stdin, sys.stdout, __name__)

With that code, the search

    index=_internal | head 200 | sum total=lines linecount

gives me a field "lines" with multiple values, not one value corresponding to the total count like I want. It's my first time writing a ReportingCommand, so I would really appreciate any help!
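If it helps, the two-phase shape of the aggregation can be checked in plain Python. This is only a hedged sketch outside the SDK (sum_map and sum_reduce are stand-in names, not SDK APIs): each map invocation emits one partial total, and reduce emits exactly one final record. Note that this sketch's reduce yields a single dict, rather than a list wrapped around a dict as in the code above, which is one difference worth checking.

```python
def sum_map(records, fieldname, out_field="total"):
    """Map phase: emit one partial sum per chunk of records."""
    total = 0.0
    for record in records:
        total += float(record[fieldname])
    yield {out_field: total}

def sum_reduce(partials, out_field="total"):
    """Reduce phase: combine the partial sums into exactly one record."""
    total = 0.0
    for record in partials:
        try:
            total += float(record[out_field])
        except (KeyError, ValueError):
            pass
    yield {out_field: total}  # a single dict, not a list wrapped around one

# Two map "chunks", as Splunk might hand them to separate map invocations.
chunks = [[{"linecount": 1}, {"linecount": 3}], [{"linecount": 5}]]
partials = [rec for chunk in chunks for rec in sum_map(chunk, "linecount")]
final = list(sum_reduce(partials))
```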
Hi all, I am trying to create a floating menu that remains visible as you scroll down the dashboard. It would be similar to Google search results, where the search term stays on top as you scroll; I'm looking for the filter menu line to behave the same way. Open to ideas.
My logs are only showing before April 2nd; when I check the previous 7 days, nothing shows up. What may be the issue? Please share a solution with us. There is no error showing. The log in question is batchdog.log, and similar logs are rolled under it as batchdog.log.mmddyy.*log. There are no issues in splunkd.log either. Any help please.
Hi everyone, is it possible to create a popup modal window in Splunk without using JavaScript? Thanks in advance.
Due to some performance issues (lookup/dashboard failures, search failures, and searches taking a long time to execute), we have done some troubleshooting and come up with an exclusion list that needs to be blacklisted. I have a few questions:
1. How do we blacklist the items on this exclusion list? What process and procedure should be followed?
2. Where should we blacklist them? Should we create a global app? Is there a specific app or place to do this?
3. Most of these are .csv, bin, and jar files.
I could see a few Splunk community answers, but I couldn't find any complete process or procedure to follow. Thanks in advance, I appreciate your help!
I'm new to Splunk and I'm trying to build a summary index. I have a KV Store lookup and an index:

    A: | inputlookup spam_ip   (indicators of compromise)
    B: index=main              (event logs)

Both have a field containing the same kind of data: A has a field spam_ip, B has a field source_ip. I want to populate the summary index with every record in set A whose field value is also contained in set B.
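The matching step described here is plain set membership. Below is a hedged Python sketch of just that logic, using the field names from the question (spam_ip, source_ip; match_iocs is an illustrative name); how it maps onto SPL, e.g. a lookup-based filter feeding collect, is the open part:

```python
def match_iocs(events, iocs, event_field="source_ip", ioc_field="spam_ip"):
    """Keep only events whose source_ip appears among the spam_ip lookup rows."""
    bad_ips = {row[ioc_field] for row in iocs}
    return [event for event in events if event.get(event_field) in bad_ips]
```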
Hi, in our organization some teams would like to see the new-index logs; that is, they want to see who created a new index. We create indexes via indexes.conf. Is there any way to see this by searching in Splunk? Thanks a lot for your help.
Hi everyone, below is my sample query:

    index=xyz source=ABC | stats count

If I schedule this search, the results have to be saved to the path C:\demo automatically. I don't have access to the \var directory, so I cannot make changes there. I'm using Windows. How can I get the data written to that path automatically?
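If a scripted workaround is acceptable, the file-writing half of the problem looks roughly like this. A hedged Python sketch that assumes the scheduled search's results have already been fetched (for example via the REST API) as a list of dicts, and that takes the destination path as a parameter rather than hard-coding C:\demo (write_results is an illustrative name):

```python
import csv
import os

def write_results(rows, out_path):
    """Write search results (a list of dicts with identical keys) to a CSV file."""
    if not rows:
        return
    parent = os.path.dirname(out_path)
    if parent:
        os.makedirs(parent, exist_ok=True)  # create the target folder if missing
    with open(out_path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=list(rows[0].keys()))
        writer.writeheader()
        writer.writerows(rows)
```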
Hi, two questions.

One: in our environment we have a multisite cluster with multiple peers. In the bucket status we see many errors with the fixup reason "change masks failed" and the current status "Cannot replicate as bucket hasn't rolled yet". What does "change masks failed" mean, and what should we do about it?

Second question: is there a way to resync buckets with a CLI command, or some other command line, so we can sync multiple buckets (say 1000+) at once? Doing them all by hand is, let's say, time consuming.

Thanks in advance for the answer. Greetz, Jari
I have a dashboard set up like this.

The first panel shows statistics by category, and you can choose a time interval to display the data. Its search is:

    index=kopo sourcetype="kopo primjeri" category1 | top limit=20 category1

The time range is set with the dropdown menu above the table, named time_prev.

The second panel does basically the same thing, and again you can choose the time interval. Its search is identical:

    index=kopo sourcetype="kopo primjeri" category1 | top limit=20 category1

The time range here is set with the dropdown menu above the table, named time_curr.

What I'm trying to do is show, in a third panel, the percentage difference between the second panel's count and the first panel's count.
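The third-panel arithmetic on its own is straightforward. Here is a hedged Python sketch of the intended percentage-difference calculation (pct_difference is an illustrative name), with the wiring of the two panel counts into a single search left as the open part:

```python
def pct_difference(curr_count, prev_count):
    """Percentage difference of the current count relative to the previous one."""
    if prev_count == 0:
        return None  # baseline of zero, the percentage is undefined
    return round((curr_count - prev_count) / prev_count * 100, 2)
```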
I am trying to run a Splunk search using the Splunk REST API that finds a list of triggered alerts:

    | rest /servicesNS/-/search/alerts/fired_alerts/Alert%20Name

The problem is that if I run this search over, say, the last 15 minutes, I want the API to return only the triggered alerts that occurred within those last 15 minutes, but that's not what happens. Instead it returns all the alerts that fired during the course of the day and are listed under Triggered Alerts. Is there a way I can get this to work?
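Since the rest command's output is not bounded by the search time range, one conceivable approach is to filter the returned alert records on their trigger time after the fact. A hedged Python sketch of just that filtering step, assuming each record carries an epoch trigger_time field (the actual field name in the endpoint's output may differ; recent_alerts is an illustrative name):

```python
import time

def recent_alerts(alerts, window_seconds=900, now=None):
    """Keep only alert records whose trigger_time falls within the last window."""
    if now is None:
        now = time.time()
    cutoff = now - window_seconds
    return [a for a in alerts if float(a["trigger_time"]) >= cutoff]
```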
Scenario example. Index=os, ingested data:

    _time, type, id
    08:00, A, 1
    08:10, A, 2
    08:11, A, 3
    08:12, A, 4
    08:13, A, 5
    09:00, B, 1
    09:10, B, 2
    09:11, B, 3
    09:12, B, 4
    10:00, C, 1
    10:10, C, 2
    10:11, C, 3

We want to calculate the number of IDs in type B that also exist in type A. Type B has (1,2,3,4) and type A has (1,2,3,4,5), so the result should be 4/5 = 80%. Since we have a huge amount of data, is there any way to handle this with SPL alone?
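Stated abstractly, this is a set-intersection ratio. A hedged Python sketch of the calculation on the sample data (overlap_percent is an illustrative name; the SPL equivalent over a huge index is the open question):

```python
def overlap_percent(ids_b, ids_a):
    """Share of type-A IDs that also appear among type-B IDs: |A & B| / |A| * 100."""
    a, b = set(ids_a), set(ids_b)
    if not a:
        return 0.0
    return len(a & b) / len(a) * 100
```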
Hi all, I have a dynamic dropdown populated from a search result. I need to change the display label of one choice without affecting its actual value. Does anyone have a solution for this? Thanks in advance.
Hi Splunk Community, how can I add a checkbox to hide and unhide a panel? This panel does not depend on any other panel; I only need to check it to hide the panel and check it again to unhide it. My code is below. Such a simple thing, but I can't get it done. Any help is greatly appreciated.

My attempt at creating the checkbox input:

    <panel depends="$tok_hide$">
      <input type="checkbox" token="tok_hide" searchWhenChanged="true">
        <label></label>
        <choice value="TRUE"></choice>
      </input>

Below is the panel I would like to hide and unhide:

      <table>
        <title>Files To Read</title>
        <search>
          <query>
            Blah blah blah.....
          </query>
Hi everyone, I need to create a field using eval; my field name is JOB_EXEC_TIME. How can I create it with eval? I then need to pass it into the query below:

    | timechart sum(JOB_EXEC_TIME) as TotalExecTime by JOB_NM

Can someone guide me on how to do this in Splunk?
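For reference, here is a hedged Python sketch of what the aggregation is meant to compute, a per-job sum of the execution-time field, with the field names taken from the question (the time bucketing that timechart adds is left out; total_exec_time is an illustrative name):

```python
def total_exec_time(events, time_field="JOB_EXEC_TIME", by="JOB_NM"):
    """Sum the execution-time field per job name, like sum(...) by JOB_NM."""
    totals = {}
    for event in events:
        key = event[by]
        totals[key] = totals.get(key, 0.0) + float(event[time_field])
    return totals
```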
Hi guys, I'd like to calculate the time delta. Here is the sample:

    _time                  _raw
    2021-05-26 00:00:00    port is down
    2021-05-26 00:02:20    port is up
    2021-05-26 00:05:00    port is down
    2021-05-26 00:10:05    port is up

May I know how to calculate each downtime, sorted by _time? Thanks. What I'd like to see:

    _time                  downtime
    2021-05-26 00:00:00    02:20
    2021-05-26 00:05:00    05:05
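The underlying calculation is pairing each "down" event with the next "up" event and taking the difference. A hedged Python sketch of that pairing over the sample, with timestamps as epoch seconds and display formatting left aside (downtimes is an illustrative name):

```python
def downtimes(events):
    """events: a time-sorted list of (epoch_seconds, status) pairs, where
    status is "down" or "up". Returns (down_start, seconds_down) per outage."""
    out = []
    down_at = None
    for ts, status in events:
        if status == "down" and down_at is None:
            down_at = ts  # outage starts
        elif status == "up" and down_at is not None:
            out.append((down_at, ts - down_at))  # outage ends, record delta
            down_at = None
    return out
```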
I have a host whose logs I receive on my heavy forwarder, and that works fine. I now have a new log source on the same host, and the entry in my inputs.conf is not passing the data through:

    [monitor:///mnt/nfs/host/Backup/DHCP/2021-05-*]
    disabled = 0

The wildcard is there to cover a longish string of text that forms the file name, for example:

    /mnt/nfs/host/Backup/DHCP/2021-05-22-192.168.64.88.log.0.ExtractedOption82Data

I am not getting the data from this log file no matter what variation or combination I try. Even if I specify a specific file name, the data does not appear in search. I've tried using the full path and file in the monitor stanza, and I've tried just the path in monitor with the filename in whitelist=(), but with the same result: no data, yet I know the files exist and contain data. This is driving me crazy, as I have done similar things previously with no issue. What am I missing?
Hello friends, I am looking for your help with a rex expression.

    message = [2021-05-26 00:00:33,477] {taskinstance.py:669} INFO - Dependencies all met for <TaskInstance: example_dag_oidc.test_bash 2021-05-25 00:00:00+00:00 [None]>

I would like to split this message field into the fields below:

    logDateTime = 2021-05-26 00:00:33,477
    logLevel = INFO
    logMessage = Dependencies all met for <TaskInstance: example_dag_oidc.test_bash 2021-05-25 00:00:00+00:00 [None]>

Thanks
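One way to sanity-check a pattern before porting it to rex is to try the equivalent regular expression in Python; the (?P<name>...) named-group syntax used below is also accepted by rex. A hedged sketch against the sample message (LOG_RE and parse_log are illustrative names):

```python
import re

# Shape of the line: [timestamp] {source:line} LEVEL - message
LOG_RE = re.compile(
    r"^\[(?P<logDateTime>[^\]]+)\]\s+"  # [2021-05-26 00:00:33,477]
    r"\{[^}]+\}\s+"                     # {taskinstance.py:669}, discarded
    r"(?P<logLevel>\w+)\s+-\s+"         # INFO -
    r"(?P<logMessage>.*)$"              # everything after "LEVEL - "
)

def parse_log(message):
    """Split a log line into logDateTime, logLevel, logMessage (None if no match)."""
    m = LOG_RE.match(message)
    return m.groupdict() if m else None
```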