All Topics



Hi. I have a task to extract all the fields from raw logs that are used by our alerts, and I wonder whether there is an automated way to do it, or whether I have to go through each alert manually to check which fields are used. All help is really appreciated!
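One possible starting point (a sketch, not confirmed by the original post): every alert is a saved search, so all of the alert search strings can be pulled in one place via Splunk's REST endpoint and reviewed there for field names, rather than opening each alert in the UI:

```
| rest /servicesNS/-/-/saved/searches splunk_server=local
| search is_scheduled=1
| table title search
```

The `is_scheduled=1` filter is an assumption that the alerts are scheduled saved searches; fields referenced indirectly through macros or lookups would still need manual review.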
Hi, I would like to check whether there are multiple instances of a job/process running. For example, my Splunk search:

index=abc <jobname> | stats earliest(_time) AS earliest_time, latest(_time) AS latest_time count by source | convert ctime(earliest_time), ctime(latest_time) | sort - count

returns:

source earliest_time latest_time count
logA 06/06/2020 15:24:09 06/06/2020 15:24:59 1
logB 06/06/2020 15:24:24 06/06/2020 15:25:12 2

In the output above, logB started before logA's completion time, which indicates a concurrent run of the process. I would like to generate a list of all such jobs if it is possible; any help is appreciated. Thank you.
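A sketch of one way to list such overlaps automatically, using Splunk's built-in `concurrency` command (the field names are assumptions carried over from the search above, and depending on version you may need the events sorted by time first):

```
index=abc <jobname>
| stats earliest(_time) AS start latest(_time) AS end by source
| eval duration = end - start
| eval _time = start
| sort 0 _time
| concurrency duration=duration
| where concurrency > 1
```

`concurrency` counts, for each run, how many runs overlap it in time (including itself), so any source left after `where concurrency > 1` overlapped at least one other run.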
I created a search query as follows:

sourcetype="websense:proxy" | table src_host policy | dedup src_host policy | search NOT [inputlookup ip_white_list.csv]

The ip_white_list.csv file contains 2 columns (policy, src_host) and 21,435 rows. I found that some src_host values are not filtered out of the search results, so I want to know: is there a limit on the number of search terms or AND/OR conditions?
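One thing worth checking (an assumption, not confirmed in the post): a subsearch such as `search NOT [inputlookup ...]` is silently truncated once it exceeds the subsearch result limit (by default on the order of 10,000 rows, governed by limits.conf), and 21,435 rows is well past that. A sketch that avoids the subsearch entirely by matching against the lookup directly:

```
sourcetype="websense:proxy"
| dedup src_host policy
| lookup ip_white_list.csv policy, src_host OUTPUT policy AS wl_match
| where isnull(wl_match)
| table src_host policy
```

Here `wl_match` is only populated when both policy and src_host match a row in the lookup, so `isnull(wl_match)` keeps the non-whitelisted pairs, and no subsearch row limit applies.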
Hello, I have two standalone Splunk instances, Splunk A and Splunk B. Splunk A has a scripted input that runs on a cron schedule and indexes the results. What I am trying to do is have Splunk A send that same data to Splunk B so that it is indexed again (yes, I know it's redundant and doubles license usage). I have studied the examples here https://docs.splunk.com/Documentation/Splunk/8.0.4/Forwarding/Routeandfilterdatad and have managed to get halfway: Splunk A sends the data to Splunk B, where it is indexed, but Splunk A does not index the data itself. Here are my config files:

props.conf
[splunk_a_sourcetype]
...
TRANSFORMS-defaultRouting=defaultRouting
TRANSFORMS-secondaryRouting=secondaryRouting

transforms.conf
[defaultRouting]
REGEX=.
DEST_KEY=queue
FORMAT=indexQueue

[secondaryRouting]
REGEX=.
DEST_KEY=_TCP_ROUTING
FORMAT=secondaryGroup

outputs.conf
[tcpout:secondaryGroup]
server=dns.for.splunk.b:9997

What am I missing so that Splunk A will index the events as well as forward them to Splunk B? Thanks! Andrew
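A sketch of what is likely missing (hedged; verify against the outputs.conf reference for your version): outputs.conf has an explicit switch that tells a full Splunk instance to keep a local copy of what it forwards:

```
# outputs.conf on Splunk A (sketch)
[indexAndForward]
index = true

[tcpout:secondaryGroup]
server = dns.for.splunk.b:9997
```

With `index = true` under [indexAndForward], Splunk A should index events locally as well as forward them, which may also make the queue-routing transform unnecessary.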
Hi Team, I am trying to get a list of APIs whose average response time is greater than a particular threshold, using chart and timechart in a dashboard panel. Query using the chart command:

index=### sourcetype=### | rex field=_raw "###(?[^ ]+)" | eval fields=split(Application_Name,"-") | eval Service_name=mvindex(fields,1)."-".mvindex(fields,2) | chart span=15m avg(response_time) over _time by Service_name where avg > 5 usenull=f | fields - OTHER

and using the timechart command:

index=### sourcetype=### | rex field=_raw "###(?[^ ]+)" | eval fields=split(Application_Name,"-") | eval Service_name=mvindex(fields,1)."-".mvindex(fields,2) | timechart span=15m avg(response_time) by Service_name where avg > 5 usenull=f | fields - OTHER

Results: with both queries, despite the where condition, I still see APIs whose average time is less than 5 seconds but close to it; e.g., APIs at 3 or 3.5 seconds still show up in the panel.
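One possible explanation (an assumption worth verifying against the docs): the `where` clause of chart/timechart is a series filter with its own aggregation syntax (e.g. `where max in top5`), not a row-level filter, so `where avg > 5` may not behave like `| where`. A sketch that filters explicitly before charting (the rex capture-group name `Application_Name` is assumed, since it was stripped from the post):

```
index=### sourcetype=###
| rex field=_raw "###(?<Application_Name>[^ ]+)"
| eval fields=split(Application_Name,"-")
| eval Service_name=mvindex(fields,1)."-".mvindex(fields,2)
| bin _time span=15m
| stats avg(response_time) AS avg_rt BY _time Service_name
| where avg_rt > 5
| xyseries _time Service_name avg_rt
```

This keeps only the 15-minute buckets whose average actually exceeds 5, then pivots back into a chartable series with xyseries.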
Hi All, I am getting an error message: "[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:727)"

2020-06-10 08:22:04,117 ERROR : unique_id=51e81cec-aaf3-11ea-b2b2-069403cc3d18 aws_s3_bucketname=*********** error_msg="(<class 'ssl.SSLError'>, SSLError(1, u'[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:727)'), <traceback object at 0x7f1c37d70a28>)"

I get this error when running a Python script through a custom alert action; the use case is to upload a CSV file of Splunk search results to an AWS S3 bucket. The script runs fine when executed outside the custom alert.
Hi All, I have a somewhat unusual requirement, and some expert help would be much appreciated. I have a set of 800+ forwarder agents distributed across 3 data centers (1 primary DC and 2 secondary DCs), and the client expects us to use SSL for communication between the data centers. I use a heavy forwarder (HF) at each of the 2 secondary DCs to do some custom monitoring. I have the solution finalized, with the indexer cluster and search head at the primary DC, and I am planning to use a single indexer cluster to receive data from all forwarders. Now I have the following questions: 1) Can my universal forwarders in the primary DC talk without SSL certificates, while communication from UF agents in the other DCs uses SSL? 2) Can the UFs in a secondary DC talk to the HF in that DC without SSL, with data from the HFs forwarded to the indexers over SSL? Basically, can I configure both SSL and non-SSL communication on a single indexer cluster? SKR.
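On the last question (mixed SSL and non-SSL into one indexer cluster): as a sketch, the indexers can listen on two receiving ports at once, one plain and one SSL, and each forwarder is pointed at whichever port applies (the port numbers and certificate path below are placeholders, not recommendations):

```
# inputs.conf on each indexer (sketch)
[splunktcp://9997]
disabled = 0

[splunktcp-ssl:9998]
disabled = 0

[SSL]
serverCert = $SPLUNK_HOME/etc/auth/server.pem
requireClientCert = false
```

Primary-DC UFs would then send to 9997 and the secondary-DC HFs to 9998; the same pattern applies between UFs and an HF inside a secondary DC.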
Hello, I have uploaded only 3 different files into the Search and Reporting app. I went to Pivot and selected sourcetype as 'Split by Rows'. I am getting a lot of sourcetype values I never uploaded, including audittrail, kvstore, mongod, scheduler, splunkversion, splunk, splunkd_conf, etc. Why is that?
Hi All, I am in the process of creating an app for AWS sources, and one of the objectives is to alert when an account stops sending events. As there are a number of sources per account, I have settled on the one source that all accounts provide: CloudTrail. There may be a flaw in my logic, but I will continue for now. I have looked at the TA's data; however, we aggregate into a single account, so this would not be representative. The feeds arrive via either the AWS TA or HEC. The problem I am having is alerting when the source stops. I have created an app similar to the DMC, i.e. a list of accounts fed into a lookup, but unlike the DMC there is no avg__tcp_kbps etc. to tell whether the feed has stopped. I also looked at taking date_second, date_mday etc., but I need a way of updating the lookup CSV with those values. My question is: has anyone done anything similar with AWS feeds, or are there some time-related values I could use? Thanks
Hi Experts, I have data as shown below. Whenever we run the search: if the current time is earlier than the SLA start time, the status should be "Not Started"; if the SLA end time is earlier than the current time, the status should be "Completed"; and if the current time is between the start time and end time, the status should be "Running".

DATA:

job SLA_start_time SLA_end_time
abs 16:00 17:00
abc 20:00 23:00
mlp 23:00 01:00

Expected output, if the current hour is 18:00:

job SLA_start_time SLA_end_time Status
abs 16:00 17:00 Completed
abc 20:00 23:00 Not Started
zxc 18:00 19:00 Running
mlp 23:00 01:00 Not Started

Note: a few jobs start at the end of today and are supposed to complete early the next morning, e.g. at 01:00. Please help me on this; thanks in advance.
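A sketch of the status logic in SPL (field names as in the table above; this compares HH:MM strings lexically, which works for zero-padded 24-hour times, and only roughly approximates the overnight case):

```
| eval now_hm = strftime(now(), "%H:%M")
| eval Status = case(
    SLA_end_time < SLA_start_time AND (now_hm >= SLA_start_time OR now_hm < SLA_end_time), "Running",
    SLA_end_time < SLA_start_time, "Not Started",
    now_hm < SLA_start_time, "Not Started",
    now_hm >= SLA_end_time, "Completed",
    true(), "Running")
```

The first two branches handle jobs like mlp whose window crosses midnight (end time lexically earlier than start time); marking such a job Completed correctly also needs the calendar date, so treat this as a starting point rather than a finished solution.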
Hello, I am trying to create a pivot. I have chosen 'Date_Wday' for the row split and I get the respective values for each action, but the sums do not add up. I have used a column split and set Totals to Yes, and the count of values is different from the values under All. Kindly help me understand the logic.
Hi, I am relatively new to Splunk. I am uploading a CSV file as an input to Splunk and trying to plot charts, but the contents of the CSV file are in a specific format. When I upload this CSV, Splunk creates a table in which the names are not shown for certain rows (even though the user can see from the CSV that the first 5 rows belong to Raja and the next 5 rows belong to Pragya); only some rows show a Name. Is there anything we can do inside Splunk to solve this? Any help would be great, thanks!
The latest version is 20.5.0; you're asking the user to use 4.x.
Hi, I want to remove insecure TLS cipher suites from indexer peer replication. The default setting in server.conf/[sslConfig] is:

cipherSuite = ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDH-ECDSA-AES256-GCM-SHA384:ECDH-ECDSA-AES128-GCM-SHA256:ECDH-ECDSA-AES128-SHA256:AES256-GCM-SHA384:AES128-GCM-SHA256:AES128-SHA256

However, if I remove the insecure ciphers

AES256-GCM-SHA384:AES128-GCM-SHA256:AES128-SHA256

from cipherSuite and deploy that configuration to our indexer peers, peer replication no longer works. splunkd.log of one of our indexer peers after the configuration change:

06-09-2020 13:41:08.732 +0200 WARN SSLCommon - Received fatal SSL3 alert. ssl_state='SSLv2/v3 read server hello A', alert_description='handshake failure'.
06-09-2020 13:41:08.732 +0200 ERROR TcpOutputFd - Connection to host=10.10.10.10:9101 failed. sock_error = 0. SSL Error = error:14077410:SSL routines:SSL23_GET_SERVER_HELLO:sslv3 alert handshake failure
06-09-2020 13:41:08.733 +0200 WARN BucketReplicator - Connection failed

We are using Splunk 8.0.4. Has anyone succeeded in hardening this? Thanks!
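A hedged guess at a sketch worth testing: with only ECDHE/ECDH suites left, the handshake also depends on the elliptic-curve and TLS-version settings, and the change must reach every cluster member before restarting, since a peer that only offers the removed suites cannot negotiate with one that no longer accepts them:

```
# server.conf (sketch; deploy to ALL peers, then rolling restart)
[sslConfig]
sslVersions = tls1.2
ecdhCurves = prime256v1, secp384r1, secp521r1
cipherSuite = <the trimmed ECDHE-only list>
```

The `ecdhCurves` values here are common defaults, not values confirmed for your environment; check the server.conf reference for 8.0.4 before applying.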
I am working with an app that has a Data Model in it, and I use this Data Model in many saved searches with the tstats command. I get the error below whether or not the Data Model is accelerated:

TsidxStats - sid:summarize_1591771322.7666 Failed to contact the server endpoint https://127.0.0.1:8089 from touchSummary()
Hi, I would like to understand how to set up an alert that is sent via email only once. E.g. when an event is returned by a search, an alert is raised and an email gets sent; I only want to be notified once about that event. In other words, if the search runs every 5 minutes, it should send the alert for the initial event it picks up, not every 5 minutes. Is the only way to prevent this to select Throttle and specify a number of minutes/hours/days to suppress the alerts? In my search example I receive the usernames that are locked out, with times etc., in a table format. I am worried that if I set suppression I will not receive new locked-out usernames during that period. So basically I only want an email sent once, and again only when the search result changes from the initial result but still matches the conditions set up in the alert. I hope this makes sense; could someone assist me in understanding what I can do? Thanks
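A sketch of a middle ground (setting names from savedsearches.conf; the field name `user` is an assumption about your results): throttling can be scoped per field value, so each new locked-out username alerts once while repeats of the same username are suppressed for the period:

```
# savedsearches.conf (sketch)
alert.suppress = 1
alert.suppress.fields = user
alert.suppress.period = 24h
```

In the UI this corresponds to enabling Throttle and filling in "Suppress results containing field value"; a username not seen within the suppression window still triggers a fresh email.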
Hello There, I have got a search result as given below (without the highlighted row, i.e. Total):

Analyst Month TotalCount SLA(%) WorkingDays DailyCount (TotalCount / WorkingDays)
ABC May-20 68 97 18 3.77
DEF May-20 45 100 20 2.25
GHI May-20 25 94 15 1.66
JKL May-20 86 98 22 3.91
Total May-20 224 97.25 75 2.97

Data for all these columns (except "Working Days") is available through a DB Connect database live feed; data for the "Working Days" column is stored in a CSV lookup file (created through Lookup Editor). The search which gives me the above output (without the Total values) is something like:

| inputlookup TeamDetails.csv
| addinfo
| eval temp=strftime(info_min_time,"%B-%Y")
| where Month_Year=temp
| table Analyst Month_Year
| join type=left Analyst
    [search index="idx_test" source="src_test" sourcetype="srctype_test"
    | sort -editTime
    | dedup id
    | lookup TeamDetails.csv Analyst OUTPUT WorkingDays
    | stats count as TotalCount by Analyst, WorkingDays
    | eval DailyCount=round(TotalCount/WorkingDays,2)
    | table Analyst DailyCount WorkingDays ]
| join type=left Analyst
    [search index="idx_test" source="src_test" sourcetype="srctype_test"
    | sort -editTime
    | dedup id
    | eval inSLACount=if(SLA_Flag="1",1,0)
    | eval outSLACount=if(SLA_Flag="0",1,0)
    | stats sum(inSLACount) as insideSLA, sum(outSLACount) as outsideSLA, count(id) as TotalCount by Analyst
    | eval SLA=round(insideSLA/(TotalCount)*100,2)
    | table Analyst SLA TotalCount ]
| table Analyst Month_Year TotalCount SLA WorkingDays DailyCount

How can I change this search so that I get the table output given above, including the Total numbers? Thank you. Madhav
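One common pattern for this (a sketch appended to the existing search; untested against your data): appendpipe re-runs a mini-pipeline over the current result set, so a Total row can be computed and appended without a second search:

```
... existing search ...
| appendpipe [
    stats sum(TotalCount) AS TotalCount avg(SLA) AS SLA sum(WorkingDays) AS WorkingDays first(Month_Year) AS Month_Year
    | eval Analyst="Total", SLA=round(SLA,2), DailyCount=round(TotalCount/WorkingDays,2) ]
```

Whether the Total SLA should be the simple average of the per-analyst percentages (97.25 in your example) or recomputed from the raw in/out-of-SLA counts is a judgment call; adjust the aggregation accordingly.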
I have an event, for example:

request="GET /?act=auth&url=auth&email=auth&type=auth&status=auth HTTP/1.1" status=403 reqid="xxxxxxxxxx"

I need status to be 403, not auth. I am executing the query:

index="abc" | eval status = mvindex(status,-1) | stats count by status

I need it to return 403 with count 1, but it is returning auth with count 1. @to4kawa please check.
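A sketch of one workaround: the trouble is likely that automatic key=value extraction picks up the `status=auth` pair inside the URL, so extracting the numeric status into its own field with rex sidesteps it (the regex below assumes the HTTP status always follows the closing quote of the request field):

```
index="abc"
| rex field=_raw "HTTP/[\d.]+\"\s+status=(?<http_status>\d+)"
| stats count by http_status
```

For the sample event this should yield http_status=403, regardless of how many status= pairs appear in the query string.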
There are some presentations (PPTs) on Spark connectivity with Splunk, but they all talk about exporting data from Splunk. I need to ingest data into Splunk from Spark.
Requirement: on clicking the single value of the panel "DownloadCountExceeded ( DailyLimit - 1 )", the "More Details" panel has to be populated, and when clicked again it should be hidden. Issue: the "More Details" panel needs a field value "DownloadLimit" (the value varies per panel: daily, weekly, monthly) which doesn't exist in the final output of the actual panel "DownloadCountExceeded ( DailyLimit - 1 )".

<form>
<fieldset submitButton="false">
<input type="time" token="time">
<label>Select TIME :</label>
<default>
<earliest>-5m@m</earliest>
<latest>now</latest>
</default>
</input>
<input type="dropdown" token="homeOffice" searchWhenChanged="true">
<label>Select HomeOffice :</label>
<choice value="*">ALL</choice>
<default>*</default>
<search>
<query>source="http:datalog" sourcetype="datalog" homeOffice ="*"| search hrId= "$hrId$" email = "$email$" firstname = "$firstname$" lastname = "$lastname$" | dedup homeOffice | stats count by homeOffice</query>
<earliest>$time.earliest$</earliest>
<latest>$time.latest$</latest>
</search>
<fieldForLabel>homeOffice</fieldForLabel>
<fieldForValue>homeOffice</fieldForValue>
</input>
<input type="text" token="hrId" searchWhenChanged="true">
<label>HrID :</label>
<default>*</default>
</input>
<input type="text" token="email" searchWhenChanged="true">
<label>Email :</label>
<default>*</default>
</input>
<input type="text" token="firstname" searchWhenChanged="true">
<label>FirstName :</label>
<default>*</default>
</input>
<input type="text" token="lastname" searchWhenChanged="true">
<label>LastName :</label>
<default>*</default>
</input>
</fieldset>
<row>
<panel>
<title>DownloadCountExceeded ( DailyLimit - 1 )</title>
<single>
<search>
<query>source="http:datalog" sourcetype="datalog" hrId= "$hrId$" email = "$email$" firstname = "$firstname$" lastname = "$lastname$" homeOffice="$homeOffice$" | eval DownloadLimit = 1 | stats count by DownloadLimit | where count &gt; DownloadLimit | rename count as "DownloadCount" | fields 
DownloadCount</query> <earliest>$time.earliest$</earliest> <latest>$time.latest$</latest> <sampleRatio>1</sampleRatio> </search> <option name="colorBy">value</option> <option name="colorMode">none</option> <option name="drilldown">none</option> <option name="numberPrecision">0</option> <option name="rangeColors">["0x53a051", "0x0877a6", "0xf8be34", "0xf1813f", "0xdc4e41"]</option> <option name="rangeValues">[0,30,70,100]</option> <option name="refresh.display">progressbar</option> <option name="showSparkline">1</option> <option name="showTrendIndicator">1</option> <option name="trellis.enabled">0</option> <option name="trellis.scales.shared">1</option> <option name="trellis.size">medium</option> <option name="trendColorInterpretation">standard</option> <option name="trendDisplayMode">absolute</option> <option name="unitPosition">after</option> <option name="useColors">0</option> <option name="useThousandSeparators">1</option> <drilldown> <condition> <set token="ShowDetails">true</set> <set token="selected_value">$row.DownloadLimit$</set> </condition> </drilldown> </single> </panel> </row> <row depends="$ShowDetails$"> <panel> <title>More Details</title> <table> <search> <query>source="http:datalog" sourcetype="datalog" hrId= "$hrId$" email = "$email$" firstname = "$firstname$" lastname = "$lastname$" homeOffice="$homeOffice$" | eval DownloadLimit = $selected_value$ | stats count by hrId email firstname lastname homeOffice | where count &gt; DownloadLimit | rename count as "DownloadCount"</query> <earliest>0</earliest> <latest></latest> <sampleRatio>1</sampleRatio> </search> <option name="count">20</option> <option name="dataOverlayMode">none</option> <option name="drilldown">cell</option> <option name="percentagesRow">false</option> <option name="refresh.display">progressbar</option> <option name="rowNumbers">false</option> <option name="totalsRow">false</option> <option name="wrap">true</option> <drilldown> <unset token="ShowDetails"></unset> </drilldown> </table> 
</panel> </row> </form>