All Topics

Hi, we are trying to extract data from Splunk and feed it to SAS / a data warehouse for reporting purposes. What is the best practice to follow? Any ideas would be helpful. Thanks
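One common pattern (a sketch, not the only supported method) is to schedule a search that writes the report data to CSV, which SAS or an ETL job can then pick up. The index, sourcetype, and field names below are placeholders:

```spl
index=my_index sourcetype=my_sourcetype earliest=-1d@d latest=@d
| table _time, account_id, amount, status
| outputcsv daily_export_for_sas.csv
```

outputcsv writes to $SPLUNK_HOME/var/run/splunk/csv on the search head; alternatively, the REST endpoint /services/search/jobs/export can stream results as CSV or JSON for a pull-based integration.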
When I go to create a Cloud Pub/Sub input, after I select the credentials the project list is not populated. I have checked the logs for any errors related to the Google credentials but I am not seeing anything. Any ideas as to what the issue may be? Thanks
Hi, is it possible to hide a line (e.g. splunkd) in this line chart by clicking on it? Thanks in advance, Huong
Hi, I am trying to integrate Dynatrace logs with Splunk. I could find the add-on on Splunkbase, but not the supporting documents. As per my understanding, the add-on should be installed on the HF/indexer and the app on the SH. I am new to Splunk, so if anyone could help with supporting documents or any guidance it would be highly appreciated. Thanks in advance
Hi all, I would like to know one thing: do you think it is possible to send Splunk data to another Splunk instance with HEC? And if so, how? Thanks in advance, Alessandro
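One possibility, assuming both instances run Splunk 8.1 or later, is the built-in HTTP output in outputs.conf on the sender, pointed at an HEC token on the receiver. The token and URI below are placeholders:

```ini
# outputs.conf on the sending instance (assumes Splunk 8.1+)
[httpout]
httpEventCollectorToken = 11111111-2222-3333-4444-555555555555
uri = https://receiver.example.com:8088
```

The receiving instance needs HEC enabled and a token created for this to work.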
I could not find a definitive answer on how to upgrade Splunk DB Connect v3.1.1 to a later v3.x.x version, e.g. v3.5.0. How can I accomplish that? Thanks in advance, Aris
The Message field of wineventlog is being handled by the default configurations of Splunk or of the TA, and I would like to change that, but I can't find out which props/transforms do the current extractions. The Message field spans multiple lines, and the extraction is currently applied to each line, extracting name/value pairs separated by a colon. In the case of Avecto, we see multiple name/value pairs within one line, and the pairs are separated by commas.
Hi team, we are trying to integrate Splunk with Azure Functions, but we have no idea how to implement it. I read that we can't pass custom logs from Functions to Splunk. Can you please help us here with sample code and an approach? Thanks, R.Sagar
I'm looking for a way to numerically sort a multivalue field without expanding the field, sorting, and then recombining. Essentially something like the following, with a dedup. Before I post an idea, I wanted to see if anyone has ideas on how to accomplish this. Essentially I'm trying to find a way to run conversion functions against a multivalue field, ideally in the same eval:

| makeresults
| eval line_list=split("4,3,2,1,7,2,21,9,12,19,7",",")
| eval line_list=mvsort(mvdedup(tonumber(line_list)))

Instead I have to do this:

| makeresults
| eval line_list=split("4,3,2,1,7,2,21,9,12,19,7",",")
| mvexpand line_list
| eval line_list=tonumber(line_list)
| sort line_list
| dedup line_list
| mvcombine line_list
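One workaround sketch, assuming Splunk 8.0+ for mvmap and non-negative integer values: zero-pad each value so that lexicographic mvsort matches numeric order, then strip the padding back off:

```spl
| makeresults
| eval line_list=split("4,3,2,1,7,2,21,9,12,19,7",",")
| eval padded=mvsort(mvdedup(mvmap(line_list, printf("%06d", tonumber(line_list)))))
| eval line_list=mvmap(padded, tostring(tonumber(padded)))
```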
I have this result and would like to just pull out the accountNumber 12345678:

021-05-19_09:36:25.459 ERROR c.r.r.m.m.o.CreateAccountResponseProcessingOrchestratorImpl-c6de96 No Batch ID found in XYZ response. Context:[CommunicationContext{accountKey=AccountKey{accountNumber='12345678', providerCode='BKRBO1'}, correlationId='f1683b92-32ee-4b5e-8b9f-27cb98c6de96', properties={COMMUNICATION_TYPE=CREATE_NEW_ACCOUNT, ENRICH_XYZ_REQUEST=false}}]
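A rex sketch that would pull the value out, assuming the account number always appears as quoted digits after accountNumber= as in the sample (the base search here is a placeholder):

```spl
index=your_index "No Batch ID found"
| rex "accountNumber='(?<accountNumber>\d+)'"
| table accountNumber
```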
Hi, we have 7 SHs in a cluster, and for one of the SHs the KV store replication status is showing as "recovering". I tried:
1. doing a rolling restart
2. resyncing the stale KV store member
but unfortunately the status is still "recovering". If I run this command --> splunk clean kvstore --local, what impact will it have? Am I going to lose any data? Please advise. Thanks
I am trying to fill null values and am using a data model. I want to use tstats and fill null values with "Null" using fillnull. How should I use it with tstats?
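fillnull can simply follow tstats in the pipeline, since tstats emits a normal result table. A sketch, where the data model and field names are placeholders:

```spl
| tstats count from datamodel=My_DataModel by My_DataModel.field1, My_DataModel.field2
| fillnull value="Null"
```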
I am onboarding some data using HTTP tokens. In the source field I can see the source as http:Niam. Is there a way to get the exact source details instead of http:Niam on the Splunk indexer or HF?
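If the data is sent to the HEC event endpoint (/services/collector/event), the source can be set per event in the payload metadata instead of defaulting to http:&lt;token name&gt;. A sketch of such a payload; the values are hypothetical:

```json
{
  "event": "sample event text",
  "source": "/var/log/niam/app.log",
  "sourcetype": "niam:app",
  "index": "main"
}
```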
Hi everyone,

index=xyz source="something"
| stats earliest(_time) as minTime latest(_time) as maxTime values(activityName) as activityName values(accessSeekerId) as accessSeekerId values(businessChannel) as businessChannel values(status) as status values(ttStatus) as ttStatus values(feature) as feature by requestId
| eval duration = maxTime - minTime
| stats avg(duration) AS "AvgResponseTime" perc95(duration) AS "P95ResponseTime"

If I run this search for a day, I get non-zero values for AvgResponseTime and P95ResponseTime, and also different values for minTime and maxTime. But if I run the following search for the same time range, I get the same values for minTime and maxTime:

index=xyz source="something"
| bin _time span=1d
| stats earliest(_time) as minTime latest(_time) as maxTime values(activityName) as activityName values(accessSeekerId) as accessSeekerId values(businessChannel) as businessChannel values(status) as status values(ttStatus) as ttStatus values(feature) as feature by requestId _time
| eval duration = maxTime - minTime
| eval Time=strftime(_time, "%d/%m/%Y %H:%M")
| stats avg(duration) AS "AvgResponseTime" perc95(duration) AS "P95ResponseTime" by Time

In the end I get 0 for AvgResponseTime and P95ResponseTime, which does not match the first query, as duration comes out as 0. I want AvgResponseTime and P95ResponseTime per day. I hope this gives a clear idea of my issue. What should I do to resolve this?
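One likely cause: bin _time span=1d rewrites _time to the start of the day before earliest(_time)/latest(_time) run, so minTime and maxTime collapse to the same binned value and duration becomes 0. A sketch that computes the per-request duration from the raw timestamps first, and only then groups by day:

```spl
index=xyz source="something"
| stats earliest(_time) as minTime latest(_time) as maxTime by requestId
| eval duration = maxTime - minTime
| eval Time = strftime(minTime, "%d/%m/%Y")
| stats avg(duration) AS "AvgResponseTime" perc95(duration) AS "P95ResponseTime" by Time
```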
Hello, I have been searching for hours but I have yet to come across an answer to my question: how does Splunk SE impact the performance of my existing infrastructure, since it will ingest and process a lot of data? (I'm talking about CPU performance of switches, virtual machines etc., and general bandwidth.) If there's a general answer to this question, please let me know. If the answer is specific and a lot more information is needed: which steps can my organization take to get a better view of the performance situation? Thanks!
I recently came across creating dashboards in Splunk that have JSON as the content (not XML), using Dashboard Studio. But I am not able to create these dashboards using the REST endpoint we use to create dashboards with XML as the data, i.e. splunk_server + '/servicesNS/' + app_author + '/Development/data/ui/views/. Please suggest if there is a way to automate creating these kinds of dashboards. Thank you.
I have 3 indexes: A (superset), B (subset 1), C (subset 2), and 2 radio button groups: Group1 and Group2.

Group1 has 2 options, YES and NO. When "YES" is selected it performs A intersection B; "NO" does not perform any search.

Group2 has 2 options, YES and NO. When "YES" is selected it performs A intersection C; "NO" does not perform any search.

I have the following source code for it:

<input type="radio" token="field1" searchWhenChanged="true">
  <label>Present in AB</label>
  <choice value="Yes">Yes</choice>
  <choice value="No">No</choice>
  <change>
    <condition value="Yes">
      <set token="mysearch">index=a ..................... | join <common_column> type=outer [| search index= B] | where check_column_value="BBBBBBBBBBB" | table <list of columns></set>
    </condition>
    <condition value="No">
      <set token="mysearch"></set>
    </condition>
  </change>
</input>
<input type="radio" token="field2" searchWhenChanged="true">
  <label>Present in AC</label>
  <choice value="Yes">Yes</choice>
  <choice value="No">No</choice>
  <change>
    <condition value="Yes">
      <set token="mysearch">index=a ..................... | join <common_column> type=outer [| search index= C] | where check_column_value="CCCCCCCCCCCC" | table <list of columns></set>
    </condition>
    <condition value="No">
      <set token="mysearch"></set>
    </condition>
  </change>
</input>
<table>
  <search>
    <query>$mysearch$</query>
  </search>
</table>

When I select YES for both groups, the result I get is only from the group whose YES option was selected last, i.e. I'm not getting results from both sources even after selecting both "YES" options. I feel there is some issue with the token value in the last part of the code shared above (not sure!). Can anyone help me with this please? Thanks
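One likely cause: both radio groups set the same token, mysearch, so whichever change handler fires last overwrites the other. A sketch of one possible fix, using a distinct token per group and one table per token (abbreviated; the second input would set mysearch2 the same way, and the search strings are placeholders):

```xml
<input type="radio" token="field1" searchWhenChanged="true">
  <label>Present in AB</label>
  <choice value="Yes">Yes</choice>
  <choice value="No">No</choice>
  <change>
    <condition value="Yes">
      <set token="mysearch1">index=a | join common_column type=outer [| search index=B] | table *</set>
    </condition>
    <condition value="No">
      <unset token="mysearch1"></unset>
    </condition>
  </change>
</input>
<table depends="$mysearch1$">
  <search>
    <query>$mysearch1$</query>
  </search>
</table>
<table depends="$mysearch2$">
  <search>
    <query>$mysearch2$</query>
  </search>
</table>
```

The depends attribute hides each table until its token is set, so a "No" selection shows nothing for that group.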
Our Phantom DECIDED process often crashes for performance reasons. We suspect this is caused by a low number of runners. If the Phantom server has 16 cores and 256 GB of memory, how many runners should we set here?
We set up the Splunk cluster in the cloud via Ansible scripts (the cluster is also configured via Ansible). I have two questions. 1) In case we want to upgrade Splunk to a new version: instead of upgrading the existing system, we would like to create a new cluster via Ansible from scratch and deploy the old Splunk apps into the new system. What kind of problems might we encounter in such an upgrade scenario? 2) If we do that, which configuration files need to be carried over from the old setup? Thanks
I've been searching and trying options for a couple of days now with this search and cannot find a solution. I am using DB Connect to interrogate a database to get events that show me the start and end times for a suite of jobs. This works fine: each day I have a single event detailing the start and end time for each job. However, one of the jobs runs twice. I am trying to create a timechart showing the run time for each job. The job that runs twice gives two results for one day, so when the timechart runs the stats are there, but not the visualisation. I'm sure there must be a simple solution, but I can't work it out. Is there a way to get the results to show in a timechart? Thanks in advance.

My search:

index=foo sourcetype=bar JobName="BIDOFF" earliest=-7d@d latest=-0d@d+7h
| eval s=strptime(TimeStarted, "%Y-%m-%d %H:%M:%S.%Q")
| eval e=strptime(TimeCompleted, "%Y-%m-%d %H:%M:%S.%Q")
| eval r=(e - s)
| timechart values(r) by JobName

Results:

_time         BIDOFF
2021-05-12    32.940000 33.000000
2021-05-13    33.013000 33.034000
2021-05-14    32.907000 33.110000
2021-05-15
2021-05-16
2021-05-17    32.936000 33.030000
2021-05-18    33.077000 34.547000
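values(r) returns a multivalue cell for the day with two runs, which the chart cannot plot as a single point. A sketch using a single-valued aggregation instead (avg here; max or latest may fit better depending on what the chart should show):

```spl
index=foo sourcetype=bar JobName="BIDOFF" earliest=-7d@d latest=-0d@d+7h
| eval s=strptime(TimeStarted, "%Y-%m-%d %H:%M:%S.%Q")
| eval e=strptime(TimeCompleted, "%Y-%m-%d %H:%M:%S.%Q")
| eval r=(e - s)
| timechart span=1d avg(r) by JobName
```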