Hi All, I need to build a rule that alerts on specific activity by a specific user outside working hours. For example, I want to alert when user "Dani" logs in to the computer outside 7:00 - 18:00. The problem is that the user is not in the same time zone as me, so a login logged at 19:00 in my time zone can actually be at 14:00 in the user's time zone.
- Does anyone know which time zone the rules go by?
- Is it set by the configuration of the user that the rule runs as?
Thanks!
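One way to sidestep the time-zone ambiguity entirely is to derive the hour from the raw UTC epoch in _time and apply the user's known UTC offset explicitly, so the comparison happens on the user's local clock no matter which account runs the alert. A minimal sketch (the index, event code, and the +2 hour offset are placeholder assumptions; substitute the user's real offset, and note this ignores DST changes):

```spl
index=wineventlog EventCode=4624 user="Dani"
| eval utc_hour = floor((_time % 86400) / 3600)
| eval user_hour = (utc_hour + 2) % 24
| where user_hour < 7 OR user_hour >= 18
```

Since _time is stored as a UTC epoch, the modulo arithmetic is independent of any per-user time-zone preference.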
Hi, looking for help. The splunkforwarder service status shows as not running, but the forwarder is generating logs and we are able to see the internal logs in the CLI as well as in the UI. I need to know what causes it to show that the Splunk service is not running.

splunk@:/opt/splunkforwarder/bin$ ./splunk status
splunkd is not running.
splunk@:/opt/splunkforwarder/bin$ ./splunk version
Splunk Universal Forwarder 6.5.2
splunk@:/opt/splunkforwarder/bin$ ps -ef|grep splunkd
splunk 15913 1 3 06:01 ? 00:11:54 splunkd -p 8089 start
splunk 15914 15913 0 06:01 ? 00:00:00 [splunkd pid=15913] splunkd -p 8089 start [process-runner]
splunk 18627 18439 0 11:13 pts/1 00:00:00 grep splunkd
splunk@:/opt/splunkforwarder/bin$ date
Thu Feb 18 11:14:20 UTC 2021
splunk@:/opt/splunkforwarder/bin$ cd /opt/splunkforwarder/var/log/splunk/
splunk@:/opt/splunkforwarder/var/log/splunk$ tail -5 splunkd.log
02-18-2021 11:14:31.788 +0000 INFO HttpPubSubConnection - Running phone uri=/services/broker/phonehome/connection_XXXXXXX_8089_XXXXXXXXX_0D56BD7C-435F-43C3-9C3D-46732D507E0C
02-18-2021 11:14:40.477 +0000 INFO WatchedFile - Resetting fd to re-extract header.
02-18-2021 11:14:43.716 +0000 INFO TcpOutputProc - Connected to idx=XXXXXX:9998 using ACK.
02-18-2021 11:14:43.892 +0000 INFO TcpOutputProc - Connected to idx=XXXXXX:9999 using ACK.
02-18-2021 11:14:46.292 +0000 INFO TcpOutputProc - Connected to idx=XXXXXX:9999 using ACK.
splunk@:/opt/splunkforwarder/var/log/splunk$
I understand that those files are part of a locking mechanism in cold storage. To de-fluff our directories, I want to get rid of the stale ones. Is it safe to delete them after checking that there is no corresponding directory present, i.e. no directory named like the .rbsentinel file minus the leading dot and the suffix? Thanks in advance!
Someone changed my dashboard code in Splunk Cloud. Please help me find who changed my dashboard code, where it was changed, and what the changed code snippet is. Please help me out!
I have two Splunk Enterprise instances. In one, I have a visualization format (please see the screenshot below). I want to apply that to a dashboard panel in my other Splunk instance, but the only option there is a Bubble chart. I also checked the Splunk version, but it's not that, because some instances are still on 7.2 and have the chart option, while some are on 8.6 and only have the Bubble option. What may be causing the missing chart format option?
We have noticed that during the HFW shutdown procedure (e.g. caused by a parsing app deployment) there is a sequence of events for what seems to be the closing of active incoming TCP connections. An example follows:

09-08-2020 15:12:38.606 +0100 INFO TcpInputProc - Running shutdown level 1. Closing listening ports.
09-08-2020 15:12:38.606 +0100 INFO TcpInputProc - Done setting shutdown in progress signal.
09-08-2020 15:12:38.606 +0100 INFO TcpInputProc - Shutting down listening ports
09-08-2020 15:12:38.606 +0100 INFO TcpInputProc - Stopping IPv4 port 9997
09-08-2020 15:12:38.606 +0100 INFO TcpInputProc - Setting up input quiesce timeout for : 90.000 secs
09-08-2020 15:12:38.942 +0100 INFO TcpInputProc - Waiting for connection from src=172.18.18.185:64536, 172.30.194.1:58219, 172.16.57.76:49451, 172.16.218.6:52112, 172.30.50.1:34143, 172.16.36.20:50702, 172.18.13.28:39612, 172.30.66.2:47563, 172.16.57.79:54330, 172.16.165.70:57168 ... to close before shutting down TcpInputProcessor.
...
09-08-2020 15:14:19.103 +0100 WARN TcpInputProc - Could not process data received from network. Aborting due to shutdown
09-08-2020 15:14:20.123 +0100 WARN TcpInputProc - Could not process data received from network. Aborting due to shutdown
09-08-2020 15:14:21.138 +0100 WARN TcpInputProc - Could not process data received from network. Aborting due to shutdown
09-08-2020 15:14:22.172 +0100 WARN TcpInputProc - Could not process data received from network. Aborting due to shutdown

What worries me is that the number of "TcpInputProc - Could not process data received from network. Aborting due to shutdown" events can vary anywhere from 20 (which in total takes 15-20 seconds) to hundreds (which can take as long as 4 minutes). The more of those events there are, the longer the shutdown procedure takes and the longer the HFW remains inactive (unable to process data). Eventually, the shutdown process is forced after the default 360s. Questions:
1. Why do we see a different number of "TcpInputProc - Could not process data received from network. Aborting due to shutdown" events on different occasions?
2. Is there any way of limiting them and generally speeding up the shutdown procedure? Maybe there is some tuning we can do on the HFW nodes?
Hi Team, I have a requirement. Below is my code for a multiselect:

<input type="multiselect" token="Teams" searchWhenChanged="true">
  <label>Teams</label>
  <choice value="true()">All</choice>
  <choice value="like(parent_chain, &quot;%BLAZE%&quot;)">BLAZE</choice>
  <choice value="like(parent_chain, &quot;%Oneforce%&quot;)">Oneforce</choice>
  <choice value="like(parent_chain, &quot;%MerchantForce%&quot;)">MerchantForce</choice>
  <choice value="like(parent_chain, &quot;%MDT%&quot;)">MDT</choice>
  <choice value="like(parent_chain, &quot;%ConsumerForce%&quot;)">ConsumerForce</choice>
  <choice value="like(parent_chain, &quot;%MK-AUTO%&quot;)">MK-AUTO</choice>
  <choice value="like(parent_chain, &quot;%MarketingForce%&quot;)">MarketingForce</choice>
  <choice value="like(parent_chain, &quot;%PremiumProductPlatform%&quot;)">PremiumProductPlatform</choice>
  <fieldForLabel>Teams</fieldForLabel>
  <prefix>(</prefix>
  <delimiter> OR </delimiter>
  <suffix>)</suffix>
  <initialValue>All</initialValue>
  <default>All</default>
</input>

I want to convert it to a dropdown. How can I do that?
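For comparison, a sketch of the same input converted to a dropdown: change the type and drop the multiselect-only delimiter/prefix/suffix, since a dropdown passes a single choice's value and no OR-joining is needed. Only one team would be selectable at a time:

```xml
<input type="dropdown" token="Teams" searchWhenChanged="true">
  <label>Teams</label>
  <choice value="true()">All</choice>
  <choice value="like(parent_chain, &quot;%BLAZE%&quot;)">BLAZE</choice>
  <choice value="like(parent_chain, &quot;%Oneforce%&quot;)">Oneforce</choice>
  <choice value="like(parent_chain, &quot;%MerchantForce%&quot;)">MerchantForce</choice>
  <choice value="like(parent_chain, &quot;%MDT%&quot;)">MDT</choice>
  <choice value="like(parent_chain, &quot;%ConsumerForce%&quot;)">ConsumerForce</choice>
  <choice value="like(parent_chain, &quot;%MK-AUTO%&quot;)">MK-AUTO</choice>
  <choice value="like(parent_chain, &quot;%MarketingForce%&quot;)">MarketingForce</choice>
  <choice value="like(parent_chain, &quot;%PremiumProductPlatform%&quot;)">PremiumProductPlatform</choice>
  <default>All</default>
</input>
```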
Hi, in my production environment I have two Asterisk servers installed, where one of them caters to 95% of the data while the other caters to only 5%. I successfully installed Splunk Universal Forwarders on my two Asterisk servers and was able to index data from the 5% server. Now I want to index similar data from the 95% server as well, but I'm not sure how much of the 500MB quota has been consumed so far, and indexing the 95% server might exceed the limit. Is there a way to figure out how much of the 500MB is used and how much is left? Any help will be appreciated.
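One way to check (this assumes you can search the _internal index on the Splunk instance that holds the license) is to sum the daily volume recorded in license_usage.log:

```spl
index=_internal source=*license_usage.log* type="Usage"
| eval MB = b/1024/1024
| timechart span=1d sum(MB) AS daily_MB
```

Compare daily_MB against the 500 MB/day quota; note the quota is per day, so what matters is the projected daily total with the 95% server added, not a cumulative figure.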
Hi all, I'm new to Splunk and couldn't find an answer to my problem in the docs. I configured _time extraction from raw data and put the needed config in props.conf on the Heavy Forwarder. After that, I noticed that some of the timestamps on the Splunk server are not recognized correctly, so I need to correct the timestamp extraction. After changing the parameters in props.conf, what are the next steps I have to do in order for the new change to appear on the Splunk server? Thanks
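For reference, a sketch of the settings typically involved (the sourcetype name and format string are placeholders for your data):

```ini
[my_sourcetype]
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S.%3N
MAX_TIMESTAMP_LOOKAHEAD = 30
```

Timestamp extraction happens at parse time, so the props.conf change belongs on the heavy forwarder, the forwarder needs a restart (./splunk restart) to pick it up, and the corrected extraction applies only to data indexed from that point on; already-indexed events keep their old _time.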
I am using a query as below:

| inputlookup lookup_name where (Environment=PROD) AND sourcetype="name"
| join type=inner Environment, host
    [| savedsearch "name"
     | stats count as Total by Environment, App, host ]
| fillnull value=NULL Total
| search Total=NULL

The lookup has only 2 columns, Environment & Host, returning PROD & Server_name; the saved search returns approx. 10,000 rows. This alert runs every 15 minutes but often gets triggered with a NULL result even when the actual result is not NULL. The alert condition is that if Total=NULL comes up, it sends an email, but 90% of them are false alarms. Is it because the saved search is taking time to produce output and Splunk has some threshold, appending just NULL when it is exceeded?
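That is consistent with the join subsearch being truncated: the subsearch side of a join is capped by limits.conf (by default on the order of 50,000 rows and a 60-second timeout), and when the saved search is slow or large, hosts that actually reported get dropped from the joined side and surface as NULL. A join-free sketch that avoids the limit by making the saved search the main search (field names taken from the query above; verify against your data):

```spl
| savedsearch "name"
| stats count AS Total BY Environment, host
| append
    [| inputlookup lookup_name where (Environment=PROD)
     | table Environment, host ]
| stats values(Total) AS Total BY Environment, host
| where isnull(Total)
```

Hosts present only in the lookup (i.e. truly silent hosts) end up with no Total and survive the final where.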
Hi, I have a dataset about transactions; each event is a transaction detail with a response code (success or not), the amount, the channel they pay through, etc. I want to make a report about each individual with information like name, total transactions, amount, and channel. Each individual has an ID to identify them, but I don't want it in the table. I use code like this:

index=xxx source=xxx
| eval SUCCESS = if(RESPONSE_CODE="0",1,0)
| stats count as Total, SUM(SUCCESS) AS Total_suc, SUM(AMMOUNT) AS Total_am BY ID, CHANNEL
| search Total_suc>=20

The last line produces a table listing each individual with total transactions above 20 in each channel, with the details listed above. I want it to look like this:

NAME    | Total_suc | Total_am | Channel
Robert  | 20        | 2000     | MOBAGE
William | 34        | 1200     | RT
Harry   | 23        | 3000     | RT
Harry   | 40        | 4000     | VT

An individual might make transactions through many channels, and even if their overall total is above 20 they won't appear in the result unless a single channel exceeds 20; conversely, an individual can appear many times if they make >20 transactions through each of several channels. Furthermore, the field NAME is customer input; the only thing that identifies them is their ID. Example: Sarah might make 12 transactions in RT and 15 in VT; her total is above 20, but she still won't appear because the listing is per channel. Harry, as above, appears twice because he makes >20 transactions in both RT and VT. I have several questions:
1. Because I use BY to identify individuals (and the channel they pay through), I don't know whether this is correct, because the amount of data is very large.
2. Since stats only keeps the fields that appear in it, I can't show other details related to the individual; if I add more fields after BY I can get them, but I'm afraid adding more constraints could make the result incorrect.
Can someone help me with these problems? Thank you in advance, and sorry if the post is too long.
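One way to bring NAME along without grouping on it is to add it as an aggregation rather than a BY field. A sketch reusing the field names above (latest(NAME) just picks one of the customer-entered spellings per ID; values(NAME) would show all of them):

```spl
index=xxx source=xxx
| eval SUCCESS = if(RESPONSE_CODE="0",1,0)
| stats count AS Total, sum(SUCCESS) AS Total_suc, sum(AMMOUNT) AS Total_am, latest(NAME) AS NAME BY ID, CHANNEL
| search Total_suc>=20
| table NAME Total_suc Total_am CHANNEL
```

Because ID stays in the BY clause, the grouping is still one row per customer per channel; NAME is carried along purely for display, so it cannot change the counts.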
I want to keep a value in Field A (or any other field) only if there is a matching value in Field B, as shown in the figure below. It seems like the "foreach" command could work, but I don't know how to implement it.

No | Field A | Field B
1  | 100     |
2  | 200     |
3  | 300     |
4  |         | 100
5  |         | 4000
6  |         | 5000

Extract only row No. 1:

No | Field A | Field B
1  | 100     |
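A sketch that avoids foreach: collect every Field B value into a multivalue field with eventstats, then keep only the rows whose Field A value appears in that set (this assumes the fields are actually named FieldA and FieldB; adjust the names, and note it does an exact match):

```spl
| eventstats values(FieldB) AS all_B
| where isnotnull(FieldA) AND isnotnull(mvfind(all_B, "^".FieldA."$"))
| fields - all_B
```

mvfind returns the index of the first matching value or null, so isnotnull() acts as a membership test.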
I have a search query that returns a list of transactions and their times. I've used that to create two kinds of visualizations: a timechart showing fluctuations over time and single-value gauges showing summary values for the time period. In my dashboard, I have these as two separate panels, which I assume results in two separate queries over the same dataset, which seems like a waste of resources. Is there a way for two visualizations to reference the same data set? BTW, I assume that something like this is possible using Splunk datasets, but I don't believe my admin has given me permission to create them. So is there any way of doing this just within the dashboard itself, without using datasets?
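Yes; in Simple XML a base search with post-process searches does exactly this, no datasets required. A sketch (the queries are illustrative; post-process works best when the base search already limits fields or is a transforming search):

```xml
<search id="txn_base">
  <query>index=myapp sourcetype=transactions | fields _time duration</query>
  <earliest>-24h</earliest>
  <latest>now</latest>
</search>
<row>
  <panel>
    <chart>
      <search base="txn_base">
        <query>| timechart avg(duration)</query>
      </search>
    </chart>
  </panel>
  <panel>
    <single>
      <search base="txn_base">
        <query>| stats avg(duration) AS avg_duration</query>
      </search>
    </single>
  </panel>
</row>
```

The base search runs once and both panels post-process its result set.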
We have a two-site Splunk environment and two deployment servers. Do they need to be assigned to a specific site under the [general] stanza, as well as having multisite set to true under the [clustering] stanza? Currently they are reporting this error: "The searchhead is unable to update the peer information. Error = 'Master has multisite enabled but the search head is missing the 'multisite' attribute.'"

[general]
site = site1

[clustering]
multisite = true
Just installed Splunk Add-on for ServiceNow 6.4 on Splunk Enterprise 8.x, and the app's UI is not responsive; it just keeps showing "loading" (spinning). In Chrome Developer Tools, I see several 500s and 503s for things like (Failed to load resource): /en-US/splunkd/__raw/servicesNS/nobody/Splunk_TA_snow/splunk_ta_snow_settings/logging?output_mode=json&_=1613605597716 If I try to browse to the URL that failed to load, I get the following error: {"messages":[{"type":"ERROR","text":"Unable to xml-parse the following data: %s"}]}
Hello all, I just updated the REST API Modular Input app to version 1.9.8 to take advantage of the ability to pass a token for multiple requests. I know I need to update tokens.py, but I'm unsure of the syntax for my situation. I'm looking to pass longitude & latitude values for multiple cities: https://api.openweathermap.org/data/2.5/onecall?lat=$lat$&lon=$lon$&exclude={part}&appid={API key} How would I configure tokens.py? Thanks in advance!
While on a mission to eradicate 'join', I was showing someone how to replace a join statement with stats. However, the use case was that the user wanted to know, in the result output, which fields had come from the joined data set and which had come from the parent, which can be done quite efficiently with

index=a key="X"
| join key
    [ search index=b key=X
    | rename * as joined_*, joined_key as key ]

In principle, it's simple to replace the join with a single stats, but the challenge is how to rename the fields that have come from index=b so they can be identified by their field names. I came up with this theoretical example

index=_audit OR index=_internal
| foreach * [ eval audit_<<MATCHSTR>>=if(index="_internal", null(), <<FIELD>>) ]

which will create new fields called audit_* for those fields in the _audit index, but it's not a rename. So I added

<<FIELD>>=if(index="_internal", <<FIELD>>, null())

to the eval, which didn't work as anticipated, i.e. the original _audit fields did not disappear. One solution is to rename the fields at ingestion, but that's not ideal. Aliases don't really solve the problem either, as the original fields are still present. Any other approaches to this?
I'm building a TA using Add-on Builder and trying to set up an API call with a certain interval. It pulls data in once and never again; somehow the interval setting is ignored. Has anyone experienced this problem?
I have a dashboard that has dropdowns to pick a value for a certain field. For example, a dropdown for gender (male or female) and another dropdown for age (old, middle, or young). The search uses those values as pre-filters like this:

index=* gender=$gender_token$ age=$age_token$ | timechart count

If I wanted to accelerate this, how would I do that, since the gender and age fields could take so many different values? I can't just take the current search and accelerate it. I have accelerated searches like this before by using bucket and stats like this:

index=* | bucket _time | stats count by gender, age, _time

and then in the dashboard adding this part to the end:

| where gender=$gender_token$ AND age=$age_token$ | timechart sum(count)

but the search size is quite big compared to the regular timechart search (10x). I was hoping that I could do something like

index=* | timechart count by gender,age

and then have a where clause afterwards, but that isn't an option.
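One route worth checking with your admin is an accelerated data model plus tstats: the acceleration summarizes across all gender/age values, and the token filter stays cheap at search time. A sketch (the data model name Users and its fields are hypothetical; you would model them from your index first):

```spl
| tstats count FROM datamodel=Users WHERE Users.gender=$gender_token$ Users.age=$age_token$ BY _time span=1h
| timechart span=1h sum(count) AS count
```

This behaves like the timechart-with-where version above, but reads from the accelerated summaries instead of raw events.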