All Topics



I am testing the trial version of AppDynamics, and today I ran `pip install appdynamics`, which tries to install the latest version, `appdynamics (20.9.0.2430)`. The installation fails on the dependency `appdynamics-bindeps-osx-x64`: it looks for version 20.9.0, which is not currently available in the distribution.

```
(sample-django) bash-3.2$ pip search appdynamics
appdynamics (20.9.0.2430)                         - Python Agent for AppDynamics
appdynamics-lambda-tracer (20.9.1.393)            - AppDynamics AWS Python Tracer
appdynamics-proxysupport-linux-x64 (1.8.0.252.1)  - Proxysupport for AppDynamics Python agent
appdynamics-bindeps-linux-x86 (20.9.0)            - Dependencies for AppDynamics Python agent
appdynamics-proxysupport-osx-x64 (1.8.0.212.1)    - Proxysupport for AppDynamics Python agent
appdynamics-bindeps-osx-x64 (11.0)                - Dependencies for AppDynamics Python agent
appdynamics-bindeps-linux-x64 (20.9.0)            - Dependencies for AppDynamics Python agent
appdynamics-proxysupport-linux-x86 (1.8.0.252.1)  - Proxysupport for AppDynamics Python agent
AppDynamicsRESTx (0.4.20)                         - AppDynamics REST API Library
```

It works for me when I force the installation to version 20.3.0, which in turn picks up version 11.0 of appdynamics-bindeps-osx-x64.

I believe the dependency packages for the latest version of the Python agent have not been published to the pip distribution. Or am I missing something here?
Hey, we are currently working on connecting our Tableau environment to Splunk. Does anyone:

1. Know of any considerations or modifications needed for the desktop ODBC driver to work with Tableau Server?
2. Have any guides for making this work with the server?
3. Know of other possible connection methods with Tableau Server?

Thanks
Hello, I use the search below to display the list of HOSTNAME values that have a matching SITE field:

```
| inputlookup lookup_cmdb
| search HOSTNAME=aaa OR HOSTNAME=bbb OR HOSTNAME=ccc OR HOSTNAME=dddd
| stats values(SITE) as SITE by HOSTNAME
| table HOSTNAME
```

Instead of the hosts that have a matching SITE field, I would like to display the list of hosts that have no SITE field. How can I do this, please?
Hello, I am having difficulty configuring my first connection in "Splunk_TA_jmx". Our web administrator says he can use the URL "service:jmx:rmi:///jndi/rmi://<host>:<port>/jmxrmi" in jconsole to connect to a Java machine. So I am trying to configure a server in "Splunk_TA_jmx" with the following parameters:

```
Destination App : Splunk_TA_jmx
Name            : test JVM
Description     : my_test
Connection Type : rmi
Stub Source     : jndi
Host            : <host>
Port            : <port>
Lookup Path     : /jmxrmi
Username        : <user>
Password        : <password>
Server Interval : 30
```

But when I try to add it, I get the message "Invalid parameter found in connection URL. Verify the provided connection configurations." Any idea what I'm doing wrong? Thanks, Christian
Hello, I was wondering if it is possible to rename a Splunk installation directory. I have several Splunk instances with paths like these:

```
/opt/splunk/splunk-search-head/
/opt/splunk/splunk-indexer/
/opt/splunk/splunk-deployer/
```

In order to automate the install process, I would like to rename all of those paths to:

```
/opt/splunk/splunk/
```

Could you please tell me if this is possible, and how to do it without breaking everything? Regards, Matthieu
Hello, I am writing a custom command and using splunklib from the Splunk SDK to interact with Splunk. I am also using the Splunk Platform Upgrade Readiness App, which lets me check whether my code is compatible with Python 3. My issue is that this app tells me that some Python files in splunklib are only compatible with the Python 2 runtime, even though I have updated splunklib to the very latest release. You can see the result of the app's scan below. Is this normal? Is there a way to make it compatible with Python 3?
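(For context, one simple way to reproduce this kind of compatibility check yourself is to try parsing each flagged file with Python 3's `ast` module: Python-2-only syntax, such as a bare `print` statement, fails to parse. This is just an illustrative sketch of the idea, not how the Upgrade Readiness App actually works:)

```python
import ast

def is_python3_compatible(source: str) -> bool:
    """Return True if the source parses under the Python 3 grammar."""
    try:
        ast.parse(source)
        return True
    except SyntaxError:
        return False

# A Python-2-only print statement fails to parse under Python 3.
print(is_python3_compatible("print 'hello'"))   # False
print(is_python3_compatible("print('hello')"))  # True
```

Running this over the flagged splunklib files would show whether they genuinely contain Python-2-only syntax or are being flagged for some other reason.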
I have a text file on my UF. My requirement is to know the word count of the file, but I must not index the data, for business reasons. Is there any way I can get the word count of my file without actually indexing the whole data? Currently I am indexing only the filename. How can I proceed further? Thank you
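(For what it's worth, a minimal sketch of the idea: a scripted input on the UF could compute the count locally and emit only a one-line summary event, so the file's contents never reach the indexer. The path would be whatever file you are monitoring; nothing here is specific to Splunk's APIs:)

```python
import os

def word_count(path: str) -> int:
    """Count whitespace-separated words, streaming line by line."""
    total = 0
    with open(path, "r", encoding="utf-8", errors="replace") as f:
        for line in f:
            total += len(line.split())
    return total

def summary_event(path: str) -> str:
    """Build the single event to index instead of the file's contents."""
    return f'file="{os.path.basename(path)}" word_count={word_count(path)}'
```

A scripted input would then simply `print(summary_event("/path/to/file.txt"))` on its interval, so only the filename and the count are indexed.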
What are the best practices for collecting job statuses in Splunk via an external API? (I am not sure I am asking the right question, or asking it correctly, so please bear with me.)

With a log file, Splunk only ingests what has been appended to the file since the last ingest, not the entire file. With API polling it is a little trickier: even if the last record is unchanged, prior records (job statuses) may still refer to jobs that are in progress, and their statuses need to be ingested into Splunk.

My initial impulse is to write the Python polling script (as part of a scripted input) as follows:

1. Poll the API, capture the states of all jobs, and write them to a file.
2. On the next poll, call the API again, read the "states" file, determine what has changed, and send only the updated records to Splunk.
3. Update the "states" file with the new data.

Is there a simpler way? Thanks!

P.S. Sample data that a Python script collects via an API call:

```
[
  {"id":"1","fileName":"257158727.mpg","scheduledAt":"Jul 31, 2020 6:51:17 AM","status":"Finished","result":"Failure","correct":"Run correction|10058","progress":"0|00000173a5242","startTime":"Jul 31, 2020 6:51:20 AM","completionTime":"Jul 31, 2020 7:07:45 AM"},
  {"id":"2","fileName":"257164625.ts","scheduledAt":"Jul 31, 2020 6:11:50 AM","status":"Finished","result":"Failure","correct":"Correction in Progress||00000173a5000","progress":"86|843|00000173a5000","startTime":"Jul 31, 2020 6:11:53 AM","completionTime":"Jul 31, 2020 6:53:35 AM"},
  {"id":"3","fileName":"257166304.ts","scheduledAt":"Jul 31, 2020 5:03:05 AM","status":"Finished","result":"Failure","correct":"correction completed|00000173a4c11","progress":"100|00000173a4c11","startTime":"Jul 31, 2020 5:03:07 AM","completionTime":"Jul 31, 2020 6:44:23 AM"}
]
```

Note that the "status" and "result" fields are rather meaningless for determining whether the job has finished. Instead, I must extract the first segment of the "correct" field and make the determination based on its value: if it contains "Correction in Progress", the job is in progress; anything else means it is done.

P.P.S. The sample data is from Interra Systems' Baton Content Corrector. The data format (job or task UUID, status, timestamps, other metadata) is very common across most job and session tracking systems (transcoding farms, file transfer platforms, etc.), with the goal of detecting anomalies, issues, and stuck jobs.

P.P.P.S. I am assuming the best practice is to follow the "Example script that polls a database", modified for my purposes; my hope is that there is yet another best practice on top of it, since polling job statuses is conceptually different from "tailing" a database.
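(For concreteness, the diff-against-a-state-file approach I described above could be sketched roughly like this. Field names are taken from the sample data; the state-file path and the `fetch` callable are hypothetical placeholders:)

```python
import json
import os

STATE_FILE = "job_states.json"  # hypothetical location for the saved states

def diff_jobs(previous: dict, current: list) -> list:
    """Return only records that are new or whose fields changed since last poll.

    previous maps job id -> last-seen record; current is the fresh API result.
    """
    changed = []
    for job in current:
        if previous.get(job["id"]) != job:
            changed.append(job)
    return changed

def poll_once(fetch):
    """One polling cycle: fetch, emit only changes, then save the new state."""
    current = fetch()  # fetch() returns the list of job dicts from the API
    previous = {}
    if os.path.exists(STATE_FILE):
        with open(STATE_FILE) as f:
            previous = json.load(f)
    for job in diff_jobs(previous, current):
        print(json.dumps(job))  # stdout of a scripted input is what gets indexed
    with open(STATE_FILE, "w") as f:
        json.dump({j["id"]: j for j in current}, f)
```

Each cycle emits only new or updated records to stdout (which Splunk indexes for a scripted input) and overwrites the state file, matching steps 1–3.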
I am looking to monitor disk I/O errors; is there any way to do this? Currently we filter disk-related hardware error messages into the format below, and the output is redirected into a Splunk-readable log file. We alert whenever that log file contains the "I/O error" string.

Command we use to convert hardware messages into a Splunk-readable format:

```
# dmesg -L -T | grep -iE "I/O error" | tr -d '[' | awk -F']' '{print $1 "," $2}'
Thu Oct  1 00:01:00 2020, blk_update_request: I/O error, dev fd0, sector 0
Fri Oct  2 00:01:00 2020, blk_update_request: I/O error, dev fd0, sector 0
Fri Oct  2 00:01:00 2020, blk_update_request: I/O error, dev fd0, sector 0
```

But this is not a feasible way to monitor, as the command does not work on all Linux versions. Is there a default app available to monitor disk I/O errors?
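(As an aside, the shell pipeline above can be approximated in Python, which at least removes the dependence on `tr`/`awk` quirks across distributions; it still assumes the bracketed human-readable timestamps that `dmesg -T` produces, so this is a sketch, not a drop-in replacement:)

```python
import re

# Matches lines like "[Thu Oct  1 00:01:00 2020] blk_update_request: I/O error, ..."
LINE_RE = re.compile(r"^\[([^\]]+)\]\s*(.*)$")

def io_error_lines(dmesg_output: str):
    """Yield 'timestamp, message' strings for lines mentioning I/O errors."""
    for line in dmesg_output.splitlines():
        if "i/o error" not in line.lower():
            continue
        m = LINE_RE.match(line)
        if m:
            yield f"{m.group(1)}, {m.group(2)}"
```

You would feed it the output of `dmesg -T` (for example via `subprocess.run`) and append the yielded lines to the Splunk-readable log file.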
Hello Splunkers, while checking some use cases I found one I am interested in, "Detect Spike in Network ACL activity". My question is about the formula it uses to detect the suspicious activity. Here is the query it is based on:

```
sourcetype=aws:cloudtrail `network_acl_events`
    [search sourcetype=aws:cloudtrail `network_acl_events`
    | spath output=arn path=userIdentity.arn
    | stats count as apiCalls by arn
    | inputlookup network_acl_activity_baseline append=t
    | fields - latestCount
    | stats values(*) as * by arn
    | rename apiCalls as latestCount
    | eval newAvgApiCalls=avgApiCalls + (latestCount-avgApiCalls)/720
    | eval newStdevApiCalls=sqrt(((pow(stdevApiCalls, 2)*719 + (latestCount-newAvgApiCalls)*(latestCount-avgApiCalls))/720))
    | eval avgApiCalls=coalesce(newAvgApiCalls, avgApiCalls), stdevApiCalls=coalesce(newStdevApiCalls, stdevApiCalls), numDataPoints=if(isnull(latestCount), numDataPoints, numDataPoints+1)
    | table arn, latestCount, numDataPoints, avgApiCalls, stdevApiCalls
    | outputlookup network_acl_activity_baseline
    | eval dataPointThreshold = 15, deviationThreshold = 3
    | eval isSpike=if((latestCount > avgApiCalls+deviationThreshold*stdevApiCalls) AND numDataPoints > dataPointThreshold, 1, 0)
    | where isSpike=1
    | rename arn as userIdentity.arn
    | table userIdentity.arn]
| spath output=user userIdentity.arn
| stats values(eventName) as eventNames, count as numberOfApiCalls, dc(eventName) as uniqueApisCalled by user
```

I understand pretty much all of it, but I do not understand what this part does:

```
| eval newAvgApiCalls=avgApiCalls + (latestCount-avgApiCalls)/720
| eval newStdevApiCalls=sqrt(((pow(stdevApiCalls, 2)*719 + (latestCount-newAvgApiCalls)*(latestCount-avgApiCalls))/720))
```

Specifically, where do the 720 and 719 come from? Has anybody worked with this search, or a similar one? I noticed there are others that use the same formula. I am using Splunk ES version 6.1.1. Thanks so much,
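(For what it's worth, the two eval lines look like a Welford-style running update of the mean and standard deviation with a fixed effective sample size of 720, where 719 is simply 720 − 1; my guess is that 720 represents the window length, e.g. 720 hourly data points ≈ 30 days, but that is an assumption. The same arithmetic in Python:)

```python
from math import sqrt

N = 720  # effective sample size; e.g. 720 hourly points ~ 30 days (assumption)

def update(avg: float, stdev: float, latest: float):
    """One incremental update of the running mean and standard deviation.

    Mirrors the two SPL eval lines term by term:
      newAvg   = avg + (latest - avg) / N
      newStdev = sqrt((stdev^2 * (N-1) + (latest - newAvg)*(latest - avg)) / N)
    """
    new_avg = avg + (latest - avg) / N
    new_var = (stdev ** 2 * (N - 1) + (latest - new_avg) * (latest - avg)) / N
    return new_avg, sqrt(new_var)
```

Each new observation nudges the baseline by 1/720 of its deviation, so a single spike barely moves the average while still being flagged by the 3-sigma test later in the search.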
Hi everyone, I have one requirement. My query is like this:

```
index="_internal" EventLogFiles
| eval DashboardName=if(like(uri, "%EventLogFiles%"), "EventLogFiles", "Unknown Dashboards")
| stats count by DashboardName user
| append
    [search index="_internal" Extract
    | eval DashboardName=if(like(uri, "%Extract%"), "Extract", "Unknown Dashboards")
    | stats count by DashboardName user]
| sort -count
```

I am getting a result like:

```
DashboardName    User    count
Extract          ma      1
```

I want to display the three fields in bar chart form. Can someone guide me on how to do this?
I apologize if the title isn't very descriptive of my question; I was not sure how best to frame it. For a setup with numerous Splunk forwarders forwarding to two indexing servers, and getting inputs/outputs from a deployment server, what is the network flow?

1. splunkforwarder -> splunk-index1/2: is this connection forwarder-initiated?
2. splunk-master (deployment + cluster master) -> splunkforwarder: is the deployment of config and splunkd restarts master-initiated or forwarder-initiated?

I believe I found some information on this at some point, but that was for an older version and is possibly outdated.
My initial log looks something like: "The quick brown fox jumps over the lazy dog, and it jumped in 23092 seconds." I am trying to extract the number value and get an average. I have a query that extracts the 14th value, essentially a time field. The query works, but I am trying to get an average of the times per host:

```
| rex field=_raw "(\S+\s+){13}(?<processTime>\S+)\s"
| stats count by processTime, host
```

```
processTime    host
23092          host123
45098          host088
98987          host238
23092          host123
23092          host123
98656          host088
54545          host238
```

I need an average for host123, host088, and host238. The above query also groups identical times and displays their counts, which is not what I want.
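(To make the expected result concrete, here is the per-host average I am after, computed over the sample rows above in plain Python; this is just to show the target numbers, not part of the Splunk search:)

```python
from collections import defaultdict

# The sample (processTime, host) rows from the table above.
rows = [
    (23092, "host123"), (45098, "host088"), (98987, "host238"),
    (23092, "host123"), (23092, "host123"), (98656, "host088"),
    (54545, "host238"),
]

totals = defaultdict(lambda: [0, 0])  # host -> [sum of times, row count]
for t, host in rows:
    totals[host][0] += t
    totals[host][1] += 1

averages = {host: total / n for host, (total, n) in totals.items()}
print(averages)  # host123: 23092.0, host088: 71877.0, host238: 76766.0
```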
Hi, does anyone else see these errors with Stream 7.3 and custom vocabularies?

```
2020-10-12 03:42:50 FATAL [140491587887936] (main.cpp:1150) stream.main - Failed to start streamfwd, the process will be terminated: Incompatible configuration file (run pupgrade.py): /opt/splunk/etc/apps/Splunk_TA_stream/default/vocabularies/procera.xml
```
I am using the Splunk app for LOGbinder to display AD changes in Splunk. All events are being collected in the Event Viewer correctly, but when I search index=main in Splunk, I see "Message=Microsoft Windows security auditing" on all events. Can you help me with this, please?
Hi @gcusello, I want to check whether, in our environment, Splunk receives data/logs from Azure Firewall. If it does not, is there a way we can ingest Azure Firewall data? Can you please guide us on how to check this? Regards, Rahul
Hi everyone, I have been trying to write a query for this for a long time but have not been able to get a proper answer. I appreciate your help in advance! I have a dashboard in a specific Splunk app that contains multiple scripts as panels. The dashboard has a submit button; when it is clicked, all the panel scripts start executing. I want to know:

1. the total time the dashboard takes to load data each time the submit button is clicked and all the scripts execute
2. the total number of queries run
3. the total time each individual script of the dashboard takes to run over X days
4. how many times the dashboard was loaded
Hi, when I go to log into my web front end, it says "login failed: licence expired".
Hi, I am trying to assign colors to the bars in my chart. I have 3 distinct values in the field "Change Type", and I am using charting.fieldColors, but for some reason all the bars in the chart appear in the default blue color. Am I doing something wrong here?

```
<panel>
  <chart>
    <title>Change Type Analysis</title>
    <search base="Base">
      <query>|stats count by "Change Type"|sort -count</query>
    </search>
    <option name="charting.axisTitleX.text">Change Type</option>
    <option name="charting.axisTitleY.text">Number of Changes</option>
    <option name="charting.chart">bar</option>
    <option name="charting.chart.showDataLabels">all</option>
    <option name="charting.chart.stackMode">stacked</option>
    <option name="charting.drilldown">all</option>
    <option name="charting.fieldColors">{"Standard":0x9BE8AB,"Emergency":0xE87D6F,"Normal":0x5A4575}</option>
    <option name="charting.layout.splitSeries">0</option>
    <option name="charting.legend.labelStyle.overflowMode">ellipsisEnd</option>
    <option name="height">200</option>
    <option name="refresh.display">progressbar</option>
  </chart>
</panel>
```

- Rohan
Hi everyone, I'm having some trouble and really need your help. Currently, I am deploying the ITSI Splunk service and using the Splunk Add-on for Unix and Linux. The problem is that when I send data to ITSI, ITSI does not receive any entities. My configuration of the Add-on for Unix and Linux is shown below. Also, my Splunk Enterprise instance is collecting Linux logs via a Universal Forwarder. I don't know what the problem with my ITSI is. Please help me.