All Topics


I'm trying to create a workload management rule to prevent users from searching with "All Time". From what I've researched, best practice is to avoid "All Time" searches, since they produce long run times and consume more memory and CPU. Are there any types of searches, users, or other exceptions that should be allowed to use "All Time"?
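For reference, a minimal sketch of such a rule, assuming you manage it in workload_rules.conf (the stanza name is hypothetical; search_time_range and action=filter are documented workload-rule settings, but verify the exact syntax for your version):

[block_all_time]
# match ad-hoc searches whose time range is All Time
predicate = search_time_range=alltime AND search_type=adhoc
# "filter" rejects the search outright instead of re-routing it to another pool
action = filter

Scoping the predicate to search_type=adhoc is one way to carve out exceptions: scheduled searches (and, via role or app predicates, specific admin users) would still be allowed to run over All Time.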
Q: Given a "timechart span=1m sep="-" last(foo) as foo last(bar) as bar by hostname", how would I get a unique value of the bar-* fields? This has to be a standard problem, but I cannot find any writeup of solving it...

Background: I'm processing Apache Impala logs for data specific to a query, server, and pool (i.e., cluster). The data arrives on multiple lines that are easily combined with a transaction and rex-ed out to get the values. Ignoring the per-query values, I end up with:

| fields _time hostname reserved max_mem

The next step is to summarize reserved and max_mem by minute, taking the last value by hostname, summing the reserved values, and extracting a single max_mem value. I can get the data by host using:

| timechart span=1m sep="-" last(reserved) as reserved last(max_mem) as max_mem by hostname

which gives me a set of reserved-* and max_mem-* fields. The reserved values can be summed with:

| addtotals fieldname=reserved reserved-*

Issue: The problem I'm having is getting the single unique value of max_mem back out of it. The syntax "| stats values(max_mem-*) as max_mem" does not work, but it gives the idea of what I'm trying to accomplish. I've tried variations on bin to group the values with stats to post-process them, but have gotten nowhere. I get the funny feeling that there may be a way to "| appendcols [ stats values(max_mem-*) as max_mem ]", but that doesn't get me anywhere either. A slightly different approach would be to leave the individual reserved values as-is, find some way to get the single max_mem value out of the timechart, and plot it as an overlay on an area chart (i.e., the addtotals can be skipped). In either case, I'm still stuck getting the unique value from max_mem-* as a single field to propagate with the reserved values.

Aside: The input to this report is taken from the transaction list, which includes memory estimates and SQL statements per query; I need that much for other purposes. The summary here of last reserved and max_mem per time unit is taken from the per-query events because they are the one place the numbers are available.
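One hedged approach, assuming every max_mem-* column carries the same value on a given row: use foreach to fold the split columns back into one field (field names taken from the question above):

| timechart span=1m sep="-" last(reserved) as reserved last(max_mem) as max_mem by hostname
| addtotals fieldname=reserved reserved-*
| foreach max_mem-* [ eval max_mem = coalesce(max_mem, '<<FIELD>>') ]
| fields _time reserved max_mem

Within foreach, '<<FIELD>>' expands to each matching column in turn, and coalesce keeps the first non-null value it sees. That works precisely because the max_mem-* values are assumed identical per row; if they can genuinely differ, you would need a different reducer than coalesce.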
Hi All,

How can I optimize the query below? Can we convert it to tstats?

index=abc host=def* stalled
| rex field=_raw "symbol (?<symbol>.*) /"
| eval hourofday = strftime(_time, "%H")
| where NOT (hourofday>2 AND hourofday <= 4)
| timechart dc(symbol) span=15m
| eventstats avg("count") as avg stdev("count") as stdev
| eval lowerBound=-1, upperBound=(avg+stdev*exact(4))
| eval isOutlier=if('count' < lowerBound OR 'count' > upperBound, 1, 0)
| fields _time, "count", lowerBound, upperBound, isOutlier, *
| sort -_time
| head 1
| where isOutlier=1
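Partly, with caveats. tstats reads only indexed fields, so dc(symbol) does not translate directly while symbol is extracted at search time with rex. A hedged sketch of the event-scanning half, assuming the raw events actually contain a delimiter-bound token like "symbol=VALUE" that PREFIX() can use (that is an assumption — verify against your data; otherwise you would need to make symbol an indexed field):

| tstats dc(PREFIX(symbol=)) as count where index=abc host=def* TERM(stalled) by _time span=15m
| eval hourofday = strftime(_time, "%H")
| where NOT (hourofday>2 AND hourofday<=4)

The eventstats/outlier logic after the timechart stays the same either way; only the portion that scans raw events benefits from tstats.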
Hello everyone, I am trying to follow this guide https://research.splunk.com/endpoint/ceaed840-56b3-4a70-b8e1-d762b1c5c08c/ and I created the macros that this guide is referencing, but I am unable to create the macro for windows_rdp_connection_successful_filter, because I am unsure how to create an empty macro in Splunk web. The guide says "windows_rdp_connection_successful_filter is a empty macro by default. It allows the user to filter out any results (false positives) without editing the SPL." What does this even mean? We are currently using Splunk Enterprise 9.0.5
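An "empty" macro here just means a pass-through: it expands to something that matches everything, so it filters nothing until you later edit it to exclude known false positives. In Splunk Web, go to Settings > Advanced search > Search macros > Add new, name it windows_rdp_connection_successful_filter, and give it a definition that matches all results. As a sketch, the equivalent macros.conf stanza ("search *" is how these ESCU filter macros commonly ship, but verify against the content pack):

[windows_rdp_connection_successful_filter]
definition = search *

Later, changing the definition to something like "search NOT user=some_known_scanner" (an illustrative example) would drop that false positive from every detection that calls the macro, without editing the detection's SPL.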
I'm using the script interface for custom REST endpoints, and it uses:

from splunk.persistconn.application import PersistentServerConnectionApplication

I understand it's a package inside Splunk Enterprise, but is there a chance it has been uploaded to PyPI?
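As far as I know it is not on PyPI; it ships only inside Splunk Enterprise and is importable by scripts that splunkd runs with its bundled Python, so local development is usually done against a Splunk install (for example by putting $SPLUNK_HOME/lib/python3.x/site-packages on PYTHONPATH). For context, a minimal handler sketch against that interface (the handle(self, in_string) contract follows the persistent-mode restmap.conf documentation, but treat the details as assumptions):

import json
from splunk.persistconn.application import PersistentServerConnectionApplication

class EchoHandler(PersistentServerConnectionApplication):
    def __init__(self, command_line, command_arg):
        PersistentServerConnectionApplication.__init__(self)

    def handle(self, in_string):
        # in_string is a JSON document describing the incoming request
        request = json.loads(in_string)
        # return a payload plus an HTTP status code
        return {"payload": json.dumps({"path": request.get("path_info")}), "status": 200}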
The Cisco Networks Add-on for Splunk Enterprise is licensed under Creative Commons. This license does not allow commercial use... I have been unable to track down a way to "purchase" a license that would allow me to use this add-on legally. Is there any chance someone can point me in the right direction?
I have an issue with adding indexed fields to each of the new (split-out) sourcetypes. With the configuration below, which duplicates the indexed fields for each sourcetype, I now see the fields indexedfield1, indexedfield2, and indexedfield3 at 200%. For example, indexedfield1 values: value1 150%, value2 50%.

props.conf

[MAIN SOURCE]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TIME_PREFIX = {\"time\":\"
MAX_TIMESTAMP_LOOKAHEAD = 25
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%6N%:z
TRUNCATE = 999999
TRANSFORMS-changesourcetype = sourcetype1, sourcetype2
TRANSFORMS-indexedfields = indexedfield1, indexedfield2, indexedfield3

[sourcetype1]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TIME_PREFIX = {\"time\":\"
MAX_TIMESTAMP_LOOKAHEAD = 25
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%6N%:z
TRUNCATE = 999999

[sourcetype2]
(same settings as sourcetype1)

transforms.conf

[indexedfield1]
REGEX =
FORMAT =
WRITE_META =

[indexedfield2]
REGEX =
FORMAT =
WRITE_META =

[indexedfield3]
REGEX =
FORMAT =
WRITE_META =

[sourcetype1]
DEST_KEY = MetaData:Sourcetype
REGEX = some regex
FORMAT = sourcetype::sourcetype1

[sourcetype2]
DEST_KEY = MetaData:Sourcetype
REGEX = some regex
FORMAT = sourcetype::sourcetype2

I then thought to move the indexed fields under each of the new sourcetypes, but with that I see no indexed fields at all (checked with | tstats count):

props.conf (second attempt)

[MAIN SOURCE]
(settings as above, but with only TRANSFORMS-changesourcetype = sourcetype1, sourcetype2)

[sourcetype1]
(settings as above, plus TRANSFORMS-indexedfields = indexedfield1, indexedfield2, indexedfield3)

[sourcetype2]
(settings as above, plus TRANSFORMS-indexedfields = indexedfield1, indexedfield2, indexedfield3)

What configuration do I need to get indexed fields per sourcetype, without the counts showing 200%? Thanks
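A hedged explanation of what is likely happening: index-time TRANSFORMS classes are resolved against the event's original sourcetype before any sourcetype rewrite takes effect, so classes listed only under [sourcetype1]/[sourcetype2] never fire (your second attempt, zero indexed fields), while classes under [MAIN SOURCE] fire for every event regardless of which sourcetype it is rewritten to (your first attempt, 200%). One way out, assuming each sourcetype's events are distinguishable in _raw, is to keep all the transforms under [MAIN SOURCE] but anchor each indexed-field REGEX so it matches only its own sourcetype's events. A sketch with hypothetical patterns:

transforms.conf

[indexedfield1]
# the leading pattern is a placeholder: anchor it on something unique
# to sourcetype1's events so the field is written only for them
REGEX = pattern_unique_to_sourcetype1.*?field1=(\S+)
FORMAT = indexedfield1::$1
WRITE_META = true

With one such stanza per field, each event only receives the indexed fields whose regex matches it, and the per-field percentages in | tstats should drop back to 100%.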
Hi,

We have 3 indexers and 1 search head (replication factor = 3). I need to permanently remove one indexer. What is the correct procedure:

1. Change the replication factor to 2 and then remove the indexer, OR
2. Remove the indexer and after that change the replication factor to 2?

Thanks
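For what it's worth, the usual order (a sketch; confirm in the Managing Indexers and Clusters manual for your version) is to lower the factors first, then decommission the peer with enforce-counts so the cluster re-replicates its buckets before the node disappears:

# on the cluster manager
splunk edit cluster-config -replication_factor 2 -search_factor 2
splunk restart

# on the indexer being removed
splunk offline --enforce-counts

Note that the search factor cannot exceed the replication factor, so check it alongside RF; with RF=3 on only 3 peers, removing a peer before lowering RF would leave the cluster unable to meet its replication factor.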
I'm using Splunk Enterprise 9 on Windows Server 2019 and monitoring a simple log file that has CRLF line endings and is encoded as UTF-8. My inputs stanza is as follows:

[monitor://c:\windows\debug\test.log]
disabled = 0
sourcetype = my_sourcetype
index = test

Consider two consecutive lines in the log file:

Some data 1
Some data 2

When indexed, this creates a single event rather than my expectation of 2 events. Where am I going wrong?
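One hedged thing to check: line breaking is governed by props.conf for the sourcetype on the first full Splunk instance that parses the data (indexer or heavy forwarder), not by inputs.conf, and an unknown custom sourcetype falls back to generic line-merging heuristics. A minimal sketch for single-line events (CHARSET included as a defensive assumption, in case a UTF-8 BOM is confusing the defaults):

# props.conf on the indexer / heavy forwarder
[my_sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
CHARSET = UTF-8

Also worth confirming that the file's last line ends with a newline; the file monitor holds back an unterminated final line while it waits for more data, which can make events appear to arrive merged or late.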
I need help understanding which sourcetype would be ideal for parsing logs of this file type.
Hello All,

I am currently testing an upgrade from Splunk Enterprise version 9.0.4 to 9.2.0.1 but get the below error.

Traceback (most recent call last):
  File "/opt/splunk/lib/python3.7/site-packages/splunk/clilib/cli.py", line 39, in <module>
    from splunk.rcUtils import makeRestCall, CliArgError, NoEndPointError, InvalidStatusCodeError
MemoryError
Error running pre-start tasks.

I will add that there are a few more lines to the error, but this is an air-gapped environment and I'm hoping there is no need to manually type it all out.

TIA
Leon
Hi all,

I have to track Splunk modifications (correlation searches, conf files, etc.). I tried to use the _configtracker index, which is complete and meets all my requirements, but it doesn't record the user who performed an action. How could I do this?

Thank you for your help.
Ciao.
Giuseppe
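A hedged sketch of one workaround: the _audit index does record the acting user for changes made through Splunk Web or the REST API, so it can be correlated with _configtracker by object and time (field names are from the audittrail sourcetype; verify them in your environment):

index=_audit sourcetype=audittrail action=edit*
| table _time user action info object

It won't catch edits made directly to .conf files on the filesystem, since those never pass through splunkd's authenticated layer; for that, OS-level auditing of $SPLUNK_HOME/etc is the usual complement.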
Hi,

I'm receiving the following error message:

Error in 'EvalCommand': Failed to parse the provided arguments. Usage: eval dest_key = expression.

I am trying to create the search via the REST API. Is there something special that I need to know about API calls? Via the UI, the search works. Thanks!
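One hedged guess: eval expressions are full of =, quote, and | characters that get mangled if the search is posted without proper URL encoding, which would explain why the identical SPL works in the UI. A sketch using curl's --data-urlencode (endpoint, credentials, and the search itself are illustrative):

curl -k -u admin:changeme https://localhost:8089/servicesNS/nobody/search/saved/searches \
  -d name=my_saved_search \
  --data-urlencode 'search=index=main | eval dest_key="some value"'

If you are building the request in code instead, make sure the HTTP library form-encodes the body rather than concatenating the SPL into the URL by hand.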
We have installed the "Proofpoint TAP Modular Input" add-on on a Victoria search head and created an input (API call) to fetch the logs. On the first run it fetched one event, and on subsequent runs it throws the error: "pp_tap_input: When trying to retrieve the last poll time, multiple kvstore records were found". We tried creating a new input and observed the same behavior.
Hello! I have a log that shows the locking/unlocking of PCs:

1710320306,u09,unlocked
1710320356,u09,locked
1710320360,u10,unlocked
1710320363,u10,locked
1710320369,u11,unlocked
1710320374,u11,locked
1710320379,u09,unlocked
1710320384,u09,locked
1710320389,u10,unlocked
1710321119,u10,locked
1710321126,u11,unlocked
1710322754,u11,locked
1710322760,u09,unlocked
1710324580,u09,locked
1710326550,u09,unlocked
1710328364,u09,locked

The first field is a Unix timestamp, the second the user, the third the action. I need statistics on how long PCs are unlocked by each user, i.e. the sum of the seconds between each unlocked-locked pair per user. Please help with a search query.
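A hedged sketch, assuming the fields are extracted as user and action (adjust the names to your actual extractions; transaction computes duration as the span from the first to the last event in each group):

index=your_index sourcetype=your_sourcetype
| transaction user startswith=eval(action="unlocked") endswith=eval(action="locked")
| stats sum(duration) as unlocked_seconds by user

transaction is convenient but memory-hungry; on large volumes, a streamstats variant that carries each user's last unlock time forward and computes _time minus that value on the locked events would scale better.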
We are having a problem with maintenance windows in Splunk IT Service Intelligence. We have a common service that two other services depend on, and on top of those two there are further dependent services: "Service a" depends on "Service in maintenance", "Service b" depends on "Service not in maintenance", and both of those depend on the "Common Service".

With the current implementation in ITSI, we are forced to put both "Service in maintenance" and "Common Service" into maintenance mode to avoid wrong health scores in "Service a". This creates a problem for us: if an error occurs in "Common Service" during the maintenance window, it won't be reflected correctly in "Service not in maintenance", so we will not be able to detect failures that affect our users.

We raised a ticket, which correctly stated that this works as designed and documented. We have an idea, ITSIID-I-359, but so far it hasn't been upvoted.

Kind regards
Hi,

I'm trying to follow this tutorial https://splunkui.splunk.com/Create/ComponentTutorial and I have a problem when I start the demo. The steps I'm following are:

Navigate to an empty directory of your choice and invoke Create:

mkdir -p ~/Code/MyTodoList && cd ~/Code/MyTodoList
npx @splunk/create

(I choose "A monorepo with a React Component".) Then run setup and start the component in demo mode:

yarn run setup
cd packages/react-todo-list
yarn run start:demo

This brings back the following error:

ERROR in ../../node_modules/@splunk/splunk-utils/url.js 11:19-41
Module not found: Error: Can't resolve 'querystring' in 'c SPLUNK\Code\MyTodoList\node_modules\@splunk\splunk-utils'
BREAKING CHANGE: webpack < 5 used to include polyfills for node.js core modules by default. This is no longer the case. Verify if you need this module and configure a polyfill for it.
If you want to include a polyfill, you need to:
- add a fallback 'resolve.fallback: { "querystring": require.resolve("querystring-es3") }'
- install 'querystring-es3'
If you don't want to include a polyfill, you can use an empty module like this: resolve.fallback: { "querystring": false }

How can I handle this? Thanks in advance.

node -v: v20.11.1
npm -v: 10.2.4
yarn -v: 1.22.22

I did this:

npm install querystring-es3

And this is the fallback in webpack.config.js:

const path = require('path');
const { merge: webpackMerge } = require('webpack-merge');
const baseComponentConfig = require('@splunk/webpack-configs/component.config').default;

module.exports = webpackMerge(baseComponentConfig, {
    resolve: {
        fallback: { "querystring": require.resolve("querystring-es3") }
    },
    entry: {
        ReactTodoList: path.join(__dirname, 'src/ReactTodoList.jsx'),
    },
    output: {
        path: path.join(__dirname),
    }
});

But the error is the same.
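One hedged thing to check: in these @splunk/create monorepos, start:demo typically compiles with a separate webpack config (often under the package's demo/ directory) rather than the top-level webpack.config.js you edited, which would explain why your fallback has no effect. As a sketch, add the same fallback to whichever config the start:demo script actually loads (the path and surrounding structure below are assumptions; check package.json to see which file the script references):

// demo/webpack.config.js -- hypothetical location, verify via package.json
module.exports = {
    // ...existing demo config kept as-is...
    resolve: {
        // polyfill Node's querystring module for webpack 5
        fallback: { querystring: require.resolve('querystring-es3') },
    },
};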
How can I extract two fields from the message? Given a message like:

API: START: /v1/expense/extract/demand/nagl/demand_con.csv

I need to extract the segment after "API: START: /v1/expense/extract/demand/" (here "nagl") as one field, and the file name ("demand_con.csv") as another field. I am currently extracting with:

| rex field=message max_match=0 "API: START: /v1/expense/extract/odemand/ (?<OnDemandFileName>[^\n]\w+\S+)"
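A hedged sketch, assuming the path is always /v1/expense/extract/demand/<segment>/<file> (note your current rex has "odemand" and a stray space before the capture group, which wouldn't match the sample; the field name demand_dir is hypothetical):

| rex field=message "API: START: /v1/expense/extract/demand/(?<demand_dir>[^/]+)/(?<OnDemandFileName>[^\s/]+)"

With the sample event this yields demand_dir=nagl and OnDemandFileName=demand_con.csv.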
When adding a Time Range Picker in Dashboard Studio, the formatting for the date and time range is month day year. How do I change this formatting to day month year? (The attached screenshots showed the current month-day-year display and the desired day-month-year display.)
Is there a feature that notifies you of new Splunk releases, perhaps via email or a newsletter subscription?