Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Hi all, I have installed Splunk_TA_New_Relic v2.2.0 and configured the NR Insight Input. I did not use an HTTP proxy, and I could see the following error in the TA logs:

AttributeError: module 'socks' has no attribute 'PROXY_TYPE_HTTP_NO_TUNNEL'
  'http_no_tunnel': socks.PROXY_TYPE_HTTP_NO_TUNNEL,
File "/opt/splunk/etc/apps/Splunk_TA_New_Relic/bin/splunk_ta_new_relic/aob_py3/cloudconnectlib/core/http.py", line 26, in <module>
  from .http import HttpClient
File "/opt/splunk/etc/apps/Splunk_TA_New_Relic/bin/splunk_ta_new_relic/aob_py3/cloudconnectlib/core/engine.py", line 6, in <module>
  from .engine import CloudConnectEngine

Hence the data collector failed with:

2020-11-09 22:07:39,251 +0000 log_level=ERROR, pid=27340, tid=Thread-4, file=ta_data_collector.py, func_name=index_data, code_line_no=113 | [stanza_name="NR_Events"] Failed to index data

Has anybody experienced the same issue and knows how to resolve it? Thanks in advance.
I'm new to Splunk and the Microsoft 365 App. On many of the screens it will show logs and include only the raw "UserID" (e.g. 7eaa8557-21cc-4242-baba-123456789abc), which makes visual identification difficult. I just want to confirm that this is the expected behaviour. Is there an easy way to include the UserKey in addition, or instead? Or is it a matter of setting up my own dashboards from a duplicate? Appreciate any thoughts. J
We currently have a Splunk deployment consisting of Splunk Enterprise and Splunk ES. We are thinking of incorporating Splunk Enterprise into our corporate enterprise SSO solution (i.e. ForgeRock), but we're receiving some pushback from our Security Team on also incorporating Splunk ES as part of the enterprise SSO solution. For some context, access to ES is restricted to our Security Team, while Enterprise is open to our Production Operations and DevOps Teams. Access is segregated based on LDAP groups and roles, so the Prod Ops and DevOps Teams cannot access ES, since they are not provisioned a Splunk ES role, nor are they part of an LDAP group associated with a Splunk ES group. I have a couple of questions: 1) Is it generally considered "bad practice" to allow SSO access to your SIEM (i.e. Splunk ES)? 2) Is it possible for Splunk to allow ForgeRock SSO access to Splunk Enterprise while retaining non-SSO access (i.e. entering your Windows AD credentials) for Splunk ES? I'm assuming this should be the case, since a different LDAP strategy would be defined for Enterprise and ES.
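For what it's worth, authentication in Splunk is configured per instance (authentication.conf), so if ES runs on its own search head it is plausible to enable SAML on the general-purpose search head while leaving the ES search head on LDAP. A hedged sketch (the stanza and setting names below follow authentication.conf conventions, but forgerock_saml and corp_ldap are illustrative names, not from your environment):

```ini
# authentication.conf on the general-purpose search head (SSO via SAML)
[authentication]
authType = SAML
authSettings = forgerock_saml

# authentication.conf on the dedicated ES search head (stays on LDAP/AD)
# (this is a separate file on a separate instance, not a second stanza in the same file)
[authentication]
authType = LDAP
authSettings = corp_ldap
```

The key constraint is that a single Splunk instance has one authType, so mixing SSO and non-SSO per app on the same search head is not straightforward.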
I have installed an app on the forwarder with a scripted input. It's throwing an error while starting Splunk using the 'splunk' user:

11-09-2020 21:13:20.631 +0000 ERROR ExecProcessor - message from "/opt/splunkforwarder/etc/apps/AWSMData/bin/awsmeta.sh" /bin/sh: /opt/splunkforwarder/etc/apps/AWSMetaData/bin/awsmetarun.sh: Permission denied
host = host.ip.internal | index = _internal | source = /opt/splunkforwarder/var/log/splunk/splunkd.log | sourcetype = splunkd

I tried to set the permission on this script, located under SPLUNK_HOME/etc/apps/<appname>/bin/<script>, but whenever I restart the Splunk daemon, the permission is revoked.
Hello experts - I'm scratching my head trying to figure out whether something needs to be done on the low-level configuration side to allow lookup matches against a multi-valued field. I have two environments, with relatively the same data, where I'm able to run a lookup and get back data on one but not the other using a similar approach. See the sample search below:

| makeresults | eval uid="1017" | lookup cust_uid.csv po_id as uid OUTPUT region, customer

The contents of the lookup files are relatively the same on both, but when I run the same command I get a match on one Splunk instance but not the other. Below is a snippet of the lookup contents (3 sample entries):

customer,region,po_id
XXX,US - West,"0 19263 129888 locale-39488"
YYY,US - East,"1299 3453"
UUU,BRZ,1017

Again, I'm not quite sure why I can't do a simple lookup against the multi-valued field po_id and get back the 2 fields I call out in the OUTPUT. Thoughts?
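One thing that can differ between environments here is a lookup definition with a non-default match_type: a bare | lookup cust_uid.csv goes straight at the file and does exact matching, so "1017" only matches the row where po_id is exactly 1017, not the rows where it is buried inside a space-separated string. A sketch of a definition that would match inside those values (the stanza name cust_uid_lookup is illustrative):

```ini
# transforms.conf
[cust_uid_lookup]
filename = cust_uid.csv
match_type = WILDCARD(po_id)
```

This assumes the CSV values are written as wildcard patterns (e.g. "*1017*") and the search uses | lookup cust_uid_lookup po_id as uid. It may be worth checking whether one environment has such a lookup definition while the other is hitting the raw file.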
I am able to get results with this:

curl -k -u ${SPLUNK_USER}:${SPLUNK_PASSWORD} -X GET https://{$SPLUNK_URL}/servicesNS/-/-/saved/searches?search="\(eai:acl.owner='username'\)"\&count=0

but I would also like to return only those that are scheduled. I tried to add:

?search="\(eai:acl.owner='username'\) AND \(is_scheduled=1\)"\&count=0

but it returns 0. I also tried with only this:

?search="\(is_scheduled=1\)"\&count=0

and it returns 0. I am not sure how to make is_scheduled work. Thanks!
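As a cross-check, the same filter can be run from SPL via the rest command, which makes it easier to confirm what the field is actually called and what values it holds:

```spl
| rest /servicesNS/-/-/saved/searches count=0
| search eai:acl.owner="username" is_scheduled=1
| table title, eai:acl.owner, is_scheduled, cron_schedule
```

(username is a placeholder.) If this returns rows, the saved searches are there and the curl version likely just needs its quoting adjusted so the shell isn't mangling the search parameter before it reaches the REST endpoint.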
Hello Team, I have a log like the one below, and I want to extract 3 fields and their values as shown, then do a line chart for the top 20 tables with the highest numbers.

Desired fields: TableName: test.table1, Ops: 10, data: 30

Log:
INFO [Service Thread] 2020-11-09 19:22:13,294 StatusLogger.java:98 - Table Memtable ops,data
INFO [Service Thread] 2020-11-09 19:22:13,294 StatusLogger.java:101 - test.table1 10,30
INFO [Service Thread] 2020-11-09 19:22:13,294 StatusLogger.java:101 -test.table2 10000,99999999
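A hedged sketch of one way to extract the fields with rex and chart the top 20 (the regex assumes the StatusLogger.java:101 lines always look like "- <table> <ops>,<data>", with an optional space after the dash; your_index is a placeholder):

```spl
index=your_index "StatusLogger.java:101"
| rex "StatusLogger\.java:101 -\s*(?<TableName>\S+)\s+(?<Ops>\d+),(?<data>\d+)"
| timechart span=5m sum(data) as data by TableName limit=20 useother=false
```

Swap sum(data) for sum(Ops) depending on which number you want to rank the tables by.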
I've seen the documentation, but it doesn't really explain what it is or how it might be used. I'm looking for a lightweight use case to help me understand it. Thanks!
I have a blacklist.csv file that looks like the following:

name,description
*vpn*,VPN was found.
*putty*,Putty was found.

I'm trying this search, and it's showing events match, but not outputting the name/description fields from the lookup:

index=os | lookup blacklist.csv name OUTPUT description | table name description

My goal is to search each event for every value in the name column, so the basic query I'm trying to reproduce with the use of a lookup file is:

index=os | search *vpn* OR *putty*

What lookup query do I need to implement this type of search and display the results?
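For a lookup whose name column holds wildcard patterns to be matched against raw event text, the file generally has to be wrapped in a lookup definition with a WILDCARD match_type, and then matched against _raw rather than an extracted field. A sketch (the definition name blacklist_lookup is illustrative):

```ini
# transforms.conf
[blacklist_lookup]
filename = blacklist.csv
match_type = WILDCARD(name)
max_matches = 1
```

```spl
index=os
| lookup blacklist_lookup name AS _raw OUTPUT name, description
| where isnotnull(description)
| table _raw, name, description
```

Matching the pattern column AS _raw is what makes each row's *vpn*-style pattern get tested against the whole event, which is the behaviour the plain | search *vpn* OR *putty* query has.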
Hi, I have to create a drilldown for a dashboard. I want to provide users with the ability to click on a value in the dashboard, then run a new query on the selected (clicked) value, and have the query result (an external link URL) opened in a new web page. So basically, the following is the query, "drilldown_srch", that should run on the clicked value "ID"; the query result, "answer", which is a URL link, needs to be opened in a new web page.

<condition field="ID">
  <set token="drilldown_srch">|getreport __EXECUTE__ $row.ID$ | table answer</set>
  <link target="_blank">$click.answer|n$</link>
</condition>

I am looking for how to execute the "drilldown_srch" query. Thanks.
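One pattern that can work in Simple XML: let the click set a token, run a token-driven search in the background, and capture its result with a done handler. A sketch under those assumptions (element placement is illustrative, and auto-opening the resulting URL typically still needs a small piece of JS; a plain link can at least render it):

```xml
<drilldown>
  <condition field="ID">
    <set token="clicked_id">$row.ID$</set>
  </condition>
</drilldown>
...
<search>
  <query>| getreport __EXECUTE__ $clicked_id$ | table answer</query>
  <done>
    <set token="answer_url">$result.answer$</set>
  </done>
</search>
```

$answer_url$ is then usable in an html panel, e.g. inside <a href="$answer_url$" target="_blank">.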
We have data such as:

EVENT_TIMESTAMP="2020-11-09 11:12:30.617896 America/New_York",

How can I handle the America/New_York part?

TIME_FORMAT = %Y-%m-%d %H:%M:%S.%6N %Z

The %Z part doesn't seem right. Does this look correct?

TIME_PREFIX = .*EVENT_TIMESTAMP=\"
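For reference, strptime-style %Z matches timezone abbreviations like EST or UTC, not tz database names like America/New_York, so it likely will not consume that token. A sketch that parses up to the fractional seconds and skips the zone name in the event (the sourcetype name is a placeholder):

```ini
# props.conf (sketch)
[your_sourcetype]
TIME_PREFIX = EVENT_TIMESTAMP="
TIME_FORMAT = %Y-%m-%d %H:%M:%S.%6N
MAX_TIMESTAMP_LOOKAHEAD = 30
TZ = America/New_York
```

The TZ setting is only safe if every event really carries America/New_York; if the zone varies per event, this approach doesn't hold. Note also that TIME_PREFIX is a regex, so the leading .* in your attempt is unnecessary.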
Just came across a scenario where a Windows server was reported as writing to 2 different indexes. What parameters do I need to look into here, and how do I get rid of the duplication? Please advise, SMEs. Thanks in advance.
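A quick way to see exactly which inputs on that host are writing where (a sketch; substitute the real host name for the placeholder):

```spl
| tstats count where index=* host=<your_windows_server> by index, sourcetype, source
```

That usually narrows the problem down to the inputs.conf stanzas (or apps) on the forwarder whose index= setting needs to be aligned.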
Hi! First question - I'm having a little trouble with XML that tries to color a field based on its value, including blank values. I have a field called "Status" that has a date in this format, "3/24/2020", and if it doesn't have a date, it's blank. If the field has a date, it should be colored green; if the field is blank, it should be colored grey. My current XML is as follows:

<format type="color" field="Status">
  <colorPalette type="expression">if (like(value,""),"#00b33c","#A9A9A9")</colorPalette>
</format>

The above XML half works: it colors the fields with dates green, but the fields that are blank are left white as normal and don't get the grey color. Any ideas how to solve this with Simple XML?

Second question - Is it possible to color a specific cell using Simple XML?
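One possible fix, assuming the Simple XML colorPalette expression syntax supports len() the way eval does: blank cells are often empty strings rather than values that satisfy like(value,""), so testing the length may behave more predictably:

```xml
<format type="color" field="Status">
  <colorPalette type="expression">if(len(value) > 0, "#00b33c", "#A9A9A9")</colorPalette>
</format>
```

This is a sketch, not a confirmed answer; if len() is not accepted in the expression, match(value, ".") is another candidate test for "non-blank".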
OK, I have been reading most of the morning and I have to just be missing something very simple. To explain what I am trying to do:

1. Let's take the simple query: index=* host=*test* | dedup host | table host. This will obviously give me a unique list of hosts.
2. Second query: index=* host=*test* "Bleebles" | dedup host | table host. This query will give me a unique list of hosts where the string "Bleebles" was found. (Obviously this is just example data.)

What I am trying (and failing) at is marrying these two queries up and returning ONLY hostnames that DO NOT return records with the string "Bleebles". But of course, issue #1 is that when I invert the logic on search #2, I get EVERY record that has been splunked and doesn't match (which is literally all the data). Can anyone help with the logic I am missing here? Using the two very basic queries above, how would I first generate the full host list (that's the easy part), but then print a deduped list of hostnames that did NOT return a result in query #2, thereby giving me an exceptions list?
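One way to marry the two queries is to count matching events per host in a single search and keep only the hosts where that count is zero:

```spl
index=* host=*test*
| stats count(eval(searchmatch("Bleebles"))) as bleeble_count by host
| where bleeble_count=0
| table host
```

searchmatch() performs the string test per event, so every host is counted once over all its events, and the where clause keeps only hosts that never matched, which is exactly the exceptions list. A NOT in the base search can't do this, because it filters events rather than hosts.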
Hi, I want to create a Splunk app that displays custom searches I get from the SplunkJS stack in a React UI component. I got the React UI working, but I still have issues importing the SplunkJS modules. I'm using Babel for source code compilation and Webpack for bundling my assets. The SplunkJS stack modules are initialized in the browser at runtime, and I want to use those already-initialized modules so that I don't have to bother with login, permissions, etc. Adding the SplunkJS SDK as a module is not an option. Basically, I'm trying to use the utils module like this:

require(["splunkjs/mvc/utils"], function(utils) { console.log(utils.getCurrentApp()); });

This code would generate an error in Webpack, because the module 'splunkjs/mvc/utils' is not defined/found, of course. The module only exists on the Splunk appserver, and I want Webpack to ignore the import of this module at build time. So I thought that the externals configuration option in the Webpack config is exactly what I'm looking for:

module.exports = { ... externals: { "splunkjs/mvc/utils": "commonjs2 splunkjs/mvc/utils", }, ... }

Webpack is now able to generate the bundle without build errors, because it completely ignores the SplunkJS module:

[splunkjs/mvc/utils] external "splunkjs/mvc/utils" 42 bytes {sirp-dashboard} [built]

That's totally what I wanted, but I receive the following error when executing the app:

Error: Module name "splunkjs/mvc/utils" has not been loaded yet for context: _. Use require([])

It looks like the splunkjs module is loaded after the bundle JS code is executed. I can execute the same code in the dev console as soon as the browser has finished loading the page. I also tried different externals types in the Webpack config, like 'amd', 'umd', etc. They produce a different error:

ReferenceError: __WEBPACK_EXTERNAL_MODULE_splunkjs_mvc_utils__ is not defined

Is there a way to make Webpack and the SplunkJS stack work together?
Any solution/hint is greatly appreciated   home-search.js (my react component)   import React, { Component } from 'react'; export default class HomeSearch extends Component { render() { require(["splunkjs/mvc/utils"], function(utils) { console.log(utils.getCurrentApp()); }); return ( <div> </div> ) } }     webpack.config.js   const path = require('path'); const fs = require('fs'); const appConfig = require('./appConfig'); const srcDir = path.join(__dirname, 'src'); const pages = fs.readdirSync(path.join(srcDir, 'pages')).reduce((entries, name) => { // eslint-disable-next-line no-param-reassign entries[name] = path.join(srcDir, 'pages', name, 'index.jsx'); return entries; }, {}); const jsBuildDir = path.join(__dirname, 'build', appConfig.id, 'appserver', 'static', 'build'); module.exports = { mode: process.env.NODE_ENV !== 'production' ? 'development' : 'production', entry: pages, externals: { "splunkjs/mvc/utils": "commonjs2 splunkjs/mvc/utils", }, output: { path: jsBuildDir, }, resolve: { extensions: ['.js', '.jsx'], }, module: { rules: [ { test: /\.jsx?$/, exclude: /node_modules/, use: ['babel-loader'], }, { test: /\.css$/i, use: ['style-loader', 'css-loader'], }, { test: /\.txt$/, use: ['raw-loader'], }, { test: /\.(png|jpg|jpeg|gif|svg|woff|woff2)$/, use: [ { loader: 'url-loader', options: { name: '[hash].[ext]', limit: 100000, }, }, ], }, { test: /\.(eot|ttf|wav|mp3)$/, use: [ { loader: 'file-loader', options: { name: '[hash].[ext]', }, }, ], }, ], }, };     ~ Julian
I am querying Nessus imported data and I would like to find old vulnerabilities still present today. More precisely, events where the patch publication date (patchPubDate, in UNIX time) is older than the first day of last month. Example: today is 9/11/2020, so I would like to get events older than 1/10/2020. I tried the query below, but it gives 0 results, which is not possible because, unfortunately, I know I have dozens. So the query must be wrong somehow.

Note: last_found is when the vulnerability was last seen, in UNIX time.

index=nessus sourcetype="tenable:sc:vuln" patchPubDate!=-1 | eval today = round(relative_time(now(), "-0d@d")) | eval month = strftime(patchPubDate,"%m") | eval lastmonth = month - 1 | eval year = strftime(patchPubDate,"%Y") | eval dateLastMonth = "1/".lastmonth."/".year | eval dateLastMonthUnix = strptime(dateLastMonth, "%d/%m/%Y") | where last_found >= today AND pathPubDate<dateLastMonthUnix

Any suggestion? Many thanks.
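One thing that stands out in the query above: the where clause references pathPubDate while the field everywhere else is patchPubDate, so the comparison runs against a null field and drops every event. Also, relative_time can compute "first day of last month" directly, which avoids the string arithmetic (month - 1 also breaks in January, when it becomes 0). A sketch:

```spl
index=nessus sourcetype="tenable:sc:vuln" patchPubDate!=-1
| eval firstOfLastMonth = relative_time(now(), "-1mon@mon")
| where last_found >= relative_time(now(), "@d") AND patchPubDate < firstOfLastMonth
```

"-1mon@mon" snaps now() back one month and then to the start of that month, giving 1/10/2020 when run on 9/11/2020, which matches the example in the question.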
What is the appropriate way to calculate metric rates on counters and sum them, either for a single stat or for a timechart? What does the rate() of a metric mean: rate per sample or rate per second? I am looking for guidance.

I am extracting bind9 stats from our dozen DNS recursive servers every 5 minutes. The stats are counters. I am querying the stats over 10-minute spans so that I can get 2 samples each for rate calculations.

Base search:
| mstats rate(QrySuccess) as QrySuccess rate(QryFailure) as QryFailure rate(QrySERVFAIL) as QrySERVFAIL rate(QryFORMERR) as QryFORMERR rate(QryNXDOMAIN) as QryNXDOMAIN rate(QryRecursion) as QryRecursion prestats=false WHERE index="test_network_metrics" AND host="*" span=10m by host | fields *

SingleStat panel:
| fields QrySuccess | eval Success=QrySuccess/300 | stats sum(Success)

Timechart panel:
| fields QrySuccess host | timechart span=10m latest(QrySuccess) as Success by host

The numbers don't exactly look right, as at peak I am expecting traffic on the order of thousands per second, so I am thinking that I botched the stats. System-wide, I am running about 14M qph, or about 3,900 qps. If I leave off the division by 300 (to convert 5 min to 1 sec), it looks closer to normal, or about 30% of what I am expecting, based on what I get from processing hourly summaries of DNS query transaction logs.

I experimented with summing the latest value of the target field, but the numbers come out about the same:
| fields QrySuccess host | fillnull value=0.0 QrySuccess | stats latest(QrySuccess) as Success by host | addcoltotals labelfield=host fieldname=Success | tail 1 | fields Success
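For reference, mstats rate() computes a per-second rate of change of the counter (with handling for counter resets), so dividing the result by 300 again double-converts; that alone could explain results that look far too small. A sketch of a system-wide qps view built by summing per-host rates:

```spl
| mstats rate(QrySuccess) as qps WHERE index="test_network_metrics" span=10m BY host
| timechart span=10m sum(qps) as total_qps
```

For the single stat, replace the timechart with | stats avg(qps) as per_host_qps by host | stats sum(per_host_qps) as total_qps. If the total is still around 30% low, it is worth checking whether every host reports in every 10m span, since hosts missing from a span silently shrink the sum.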
Hi, I have a field called report_name; it can have a number of status values associated with it, i.e. status=a or status=b or status=c, etc. A complete report run would have (status=a & status=b). A failed report run would have (status=a & status=b & status=c). How do I group results where (status=a & status=b) = Good and (status=a & status=b & status=c) = Failed for a particular report name? The issue I'm running into is that when a report_name has both (status=a & status=b) and, a little while later, (status=a & status=b & status=c), I am unable to group them correctly, and end up naming both the same. Thanks
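One sketch for the grouping: collect the statuses per run, then classify on whether c is present. run_id here is a placeholder for whatever identifies a single run (a transaction/job id, or a time bucket created with bin _time):

```spl
index=your_index report_name=*
| stats values(status) as statuses by report_name, run_id
| eval result = if(mvfind(statuses, "^c$") >= 0, "Failed",
                if(mvfind(statuses, "^b$") >= 0, "Good", "Incomplete"))
```

The key point is that Good/Failed cannot be decided per report_name alone when the same name has both a good run and a later failed run; some per-run grouping field has to come first, and the classification is then done on each run's set of statuses.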
I'm trying to extract multiple fields out of my log. My problem is that I have multiple IP addresses: one for the source, one for the webserver, etc. So, to avoid extracting four IP addresses from every event into the same field, I want to use a positive lookahead to tell the regex "yes, this IP, but only if x and y come after it". This is an example log:

2014-03-27 23:54:58 1 10.5.6.121 304 TCP_HIT 422 501 GET http assets.razerzone.com 80 /eeimages/products/13785/razer-naga-2014-right-03.png - - - - 54.230.18.168 image/png http://imgur.com/gallery/u3o7l "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Iron/31.0.1700.0 Chrome/31.0.1700.0 Safari/537.36" OBSERVED "Technology/Internet;Shopping" - 163.252.254.203 - 54351
2014-03-28 23:54:59 90670 10.62.0.120 200 TCP_NC_MISS 601 693 GET http realtime.services.disqus.com 80 /api/2/thread/2221828111 ?bust=1780 - - - realtime.services.disqus.com application/json http://disqus.com/embed/comments/?base=default&disqus_version=6c05c0ca&f=bootsnipp&t_i=9WgD&t_u=http%3A%2F%2Fbootsnipp.com%2Fsnippets%2Ffeatured%2Fminimal-preview-thumbnails&t_d=Viewing%20snippet%20Minimal%20Preview%20Thumbnails%20%7C%20Bootsnipp.com&t_t=Viewing%20snippet%20Minimal%20Preview%20Thumbnails%20%7C%20Bootsnipp.com&s_o=default "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/33.0.1750.154 Safari/537.36" OBSERVED "Newsgroups/Forums" - 163.252.254.203 184.173.90.195 52401

Here I want, for example, to extract the source_ip (the first IP in the event) like this:

(\d+\.\d+\.\d+\.\d+)[[:blank:]](?=\d*)

Still, this gives me many other IPs... so I think I have to use multiple lookaheads? But I really don't know how to use them in this case, especially when I want to extract the IP and give it a field name. In my head this means I have to encapsulate EVERYTHING, including the lookaheads?

Thanks a lot for your help!
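A sketch of a rex that pins the first IP by anchoring on the fixed columns before it rather than using lookaheads (this assumes the format shown above: date, time, a numeric time-taken column, then the client IP):

```spl
your_base_search
| rex "^\d{4}-\d{2}-\d{2}\s+\d{2}:\d{2}:\d{2}\s+\d+\s+(?<src_ip>\d{1,3}(?:\.\d{1,3}){3})"
```

Anchoring with ^ is usually more robust here, because "first IP on the line" is a positional fact. If a lookahead is preferred, something like (?<src_ip>\d+\.\d+\.\d+\.\d+)(?=\s+\d+\s+TCP_) requires the status code and a TCP_ action after the IP; either way, the field name lives in the named capture group, so nothing outside the group needs to be "encapsulated".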
Hello, I installed Splunk Free a while back on a test laptop and at some point ran into some licensing violations because of the indexing rate; eventually I was unable to run searches. I reinstalled Splunk Free recently, switched from Trial to Free, and when checking the Licensing page, saw this message: "1 pool warning reported by 1 indexer (Correct by midnight to avoid warning)". The problem is, according to "Volume used today", my indexing rate in the Monitoring Console, and the actual size of my indexes, I'm not anywhere remotely close to hitting 500 MB/day. I was able to obtain a Splunk Dev license, and the issue persists after installing that. Now I have pool auto_generated_pool_enterprise, and volume used today (for example): 2 MB / 51,200 MB. I'm seeing the same message on a Linux VM I set up on the same laptop with Splunk Free installed. Is this something I need to worry about, or should I be fine, since my indexing rate is nowhere near 500 MB/day? If I need to address it, what is the best course of action? It's difficult because there's no single index or data source I can narrow down and take action on. Perhaps it's a leftover from the past licensing violations. Thanks!