All Topics


Hello Splunk community. As of today we have two queries running.

Count of API calls grouped by apiName and status:

index=aws* api.metaData.pid="myAppName"
| rename api.p as apiName
| chart count BY apiName "api.metaData.status"
| multikv forceheader=1
| table apiName success error NULL

which displays a table like this:

apiName   success   error   NULL
Test1     10        20      0
Test2     10        20      0
Test3     10        20      0
Test4     10        20      0
Test5     10        20      0
Test6     10        20      0

Latency of API calls grouped by apiName:

index=aws* api.metaData.pid="myAppName"
| rename api.p as apiName
| rename api.measures.tt as Response_Time
| chart min(Response_Time) as RT_fastest max(Response_Time) as RT_slowest by apiName
| table apiName RT_fastest RT_slowest

which displays a table like this:

apiName   RT_fastest   RT_slowest
Test1     10           20
Test2     10           20
Test3     10           20
Test4     10           20
Test5     10           20
Test6     10           20

Question: as you can see, both tables are grouped by apiName. Is there a way to combine these queries so that I get a single result like this?

apiName   success   error   NULL   RT_fastest   RT_slowest
Test1     10        20      0      10           20
Test2     10        20      0      10           20
Test3     10        20      0      10           20
Test4     10        20      0      10           20
Test5     10        20      0      10           20

I could not find any documentation on combining multiple chart queries into one. Could someone please help me with this? Thanks
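One possible approach (a sketch only, not tested against this data) is to replace the split-by-status chart with eval-based stats functions, so the status counts and the latency aggregations can be computed in a single stats ... by apiName. Field names below are copied from the two searches above; adjust them if they differ in your events:

index=aws* api.metaData.pid="myAppName"
| rename api.p as apiName, api.metaData.status as status, api.measures.tt as Response_Time
| stats count(eval(status="success")) as success
        count(eval(status="error")) as error
        sum(eval(if(isnull(status),1,0))) as NULL
        min(Response_Time) as RT_fastest
        max(Response_Time) as RT_slowest
        by apiName
| table apiName success error NULL RT_fastest RT_slowest

The sum(eval(if(isnull(status),1,0))) term is meant to reproduce the NULL column that chart produced for events with no status value.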
Hello, for a project I'm working on I need to display (somehow) the outcome of | collect so the user can see whether the command succeeded. The dashboard manipulates some data, and the updated version of the event is then collected via an HTML button and JavaScript. It would be useful for the user to see the result of that action so they know when (and if) the command completed successfully. Do you think this is feasible? Could the $job.messages$ token be used somehow? Thanks in advance for your kind support.
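One possible workaround, sketched here with hypothetical index and source names, is to follow the collect with a verification search against the target summary index and surface its result in a dashboard panel (note that freshly collected events can take a short while to become searchable):

index=my_summary_index source="my_collect_source" earliest=-5m
| stats count as collected_events
| eval collect_status=if(collected_events > 0, "collect wrote events", "no events found yet")
| table collect_status collected_events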
I have an event with multiple levels of nested objects and lists that I need to break down into individual events. For example, a single event can look like:

And I need to convert that event into a table like this:

Group_name   Sub_group   Subsubgroup   Some other info   …
alpha        alpha1      beta
alpha        alpha1      gamma
alpha        alpha2      a
alpha        alpha2      b
alpha        alpha3      uno

I've tried multiple combinations of mvexpand, table, and stats, but I keep getting erroneous results. The flatten command doesn't seem to work, and I fear I might need some crazy regex to parse all the embedded objects and lists of objects. Not to mention this is only one event; in reality I have multiple other groups with their corresponding subgroups and so on.
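Since the example event didn't come through, here is only a generic sketch. It assumes JSON roughly shaped like groups{} containing subgroups{} containing items{}; every path and field name below is hypothetical and would need to match the real event structure:

index=my_index sourcetype=my_json
| spath path=groups{} output=group
| mvexpand group
| spath input=group path=name output=Group_name
| spath input=group path=subgroups{} output=subgroup
| mvexpand subgroup
| spath input=subgroup path=name output=Sub_group
| spath input=subgroup path=items{} output=Subsubgroup
| mvexpand Subsubgroup
| table Group_name Sub_group Subsubgroup

The general pattern is one spath/mvexpand pair per nesting level: spath pulls a list into a multivalue field, mvexpand fans it out into one row per element, and the next spath parses the element it just expanded.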
Does anyone have the Debian installer for version 7.2.x? I need to update an older 6.5 installation, and version 7 is no longer available through the official channels. Thanks in advance.
I need guidance on which interface on a particular Cisco router to monitor in Splunk. The goal is to monitor only the necessary interfaces to cut down on alerts that are not meaningful. Please advise.
Hello Splunk community. I have a query that is currently running as shown below:

index=myIndex* api.metaData.pid="my_plugin_id"
| rename api.p as apiName
| chart count BY apiName "api.metaData.status"
| multikv forceheader=1
| table apiName success error NULL
| eval line=printf("%-85s% 10s% 10s% 7s",apiName, success, error, NULL)
| stats list(line) as line
| eval headers=printf("%-85s% 10s% 10s% 7s","API Name","Success","Error", "NULL")
| eval line=mvappend(headers,line)
| fields - headers

This displays a table with "API Name", "Success", "Error", and "NULL" counts, and it works as expected. Now I want to add new columns that display the latency (tp95 and tp99) for each apiName. The time taken by each API is in the field api.metadata.tt. How can I achieve this? I am new to Splunk and I am stuck at this point. Could someone please help me? Thank you.

Info: just to let you know, my query has this additional formatting logic because of a related question here.
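One sketch of how the percentiles could be folded in: compute the status counts with eval-based stats functions instead of chart, so perc95/perc99 of the latency field can be added in the same stats ... by apiName. Field names are copied from the post; the printf formatting at the end of the original query would also need two extra columns:

index=myIndex* api.metaData.pid="my_plugin_id"
| rename api.p as apiName, api.metaData.status as status, api.metadata.tt as tt
| stats count(eval(status="success")) as success
        count(eval(status="error")) as error
        sum(eval(if(isnull(status),1,0))) as NULL
        perc95(tt) as tp95
        perc99(tt) as tp99
        by apiName
| table apiName success error NULL tp95 tp99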
Hi,

I have installed Splunk 8.1.8 on my Linux host. After logging in to the Splunk UI, I went to Apps and installed Splunk DB Connect by uploading the file splunk-db-connect_380.zip.

When I go to splunk_db_connect and run the setup, I get errors in the UI such as "cant communicate with task server, please check your settings" and "str object have no attribute decode". After restarting Splunk and running btool, I see the following:

Checking: /opt/splunk/etc/apps/splunk_app_db_connect/default/inputs.conf
Invalid key in stanza [server] in /opt/splunk/etc/apps/splunk_app_db_connect/default/inputs.conf, line 2: run_only_one (value: false).
Invalid key in stanza [dbxquery] in /opt/splunk/etc/apps/splunk_app_db_connect/default/inputs.conf, line 5: run_only_one (value: false).

In the logs I see only one file related to DB Connect, splunk_app_db_connect_dbx.log; I can't see the other files. In this log file I see the errors below:

2022-02-18T06:45:18-0600 [ERROR] [settings.py], line 89 : Throwing an exception
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/splunk_app_db_connect/bin/dbx2/rest/settings.py", line 76, in handle_POST
    self.validate_java_home(payload["javaHome"])
  File "/opt/splunk/etc/apps/splunk_app_db_connect/bin/dbx2/rest/settings.py", line 215, in validate_java_home
    is_valid, reason = validateJRE(java_cmd)
  File "/opt/splunk/etc/apps/splunk_app_db_connect/bin/dbx2/jre_validator.py", line 73, in validateJRE
    output = output.decode('utf-8')
AttributeError: 'str' object has no attribute 'decode'

Below is a snapshot of the splunk_db_connect settings in the UI.

Please help me resolve these issues and set up Splunk DB Connect successfully.

Thank you.
We are using SAP Business Technology Platform (Cloud Foundry) as our PaaS, and our Java and Node.js applications are deployed on the Cloud Foundry platform. We currently use the Kibana service on SAP BTP for log monitoring, and we now want to drain syslog and application logs to Splunk Observability Cloud. We need all the necessary steps to set up the integration from SAP Business Technology Platform (Cloud Foundry) to Splunk Observability Cloud.

We want to use the Infrastructure Monitoring, Application Performance Monitoring, Application Log monitoring (Splunk Log Observer), Splunk Synthetic Monitoring, and Splunk Real User Monitoring features of Splunk Observability Cloud. We like these features, but we don't know how to set up the integration with Cloud Foundry platform applications. We are new to Splunk and want to run a simple PoC of the integration, which will help us decide whether to use Splunk Observability Cloud for monitoring across all our products. If an integration between Splunk Observability Cloud and the Cloud Foundry platform is not possible, please suggest alternative ways to do the PoC. Also let us know if someone has already done this integration.
Hi,

We are building a service availability dashboard based on the formula below. Could you please help me implement this as SPL?

The availability calculation for a service is as follows:

Availability = (Total availability hours - [ (end time of first P1 - start time of first P1) + (end time of second P1 - start time of second P1) + ... ]) * 100 / Total availability hours
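A rough sketch of one way to express this in SPL, assuming each P1 incident is an event carrying epoch fields incident_start and incident_end, and that the search time range is the availability window (the index, sourcetype, and field names are hypothetical):

index=incidents sourcetype=p1_tickets priority=P1
| addinfo
| eval downtime_sec = incident_end - incident_start
| stats sum(downtime_sec) as total_downtime_sec, first(info_min_time) as window_start, first(info_max_time) as window_end
| eval window_sec = window_end - window_start
| eval availability_pct = (window_sec - total_downtime_sec) * 100 / window_sec
| table availability_pct

addinfo supplies the search window boundaries (info_min_time/info_max_time); note that if no P1 events fall in the window, this sketch returns no rows, so a real dashboard would need a fallback of 100%.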
Hello, we have an SHC with 3 members. Would it be possible to add one search head dedicated to a special team and give admin rights only on that one? Thanks.
Props.conf:

[mysourcetype]
EVAL-field1 = trim(field1)

I need this to apply to all fields for that sourcetype, not just field1. Is there a way?
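As far as I know, EVAL- in props.conf is defined per field, so a search-time alternative is a sketch like this, using foreach to trim every field of the results (index and sourcetype are placeholders):

index=my_index sourcetype=mysourcetype
| foreach * [ eval <<FIELD>> = trim('<<FIELD>>') ]

The <<FIELD>> token is substituted once per matching field, so every field in the result set gets passed through trim().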
Hello everyone. Please, I am a student and this is my first time using Splunk. I have installed Splunk Enterprise on my Windows 10 machine. I need to monitor my Active Directory (server), but I cannot find the app and add-on I am looking for. I have tried a few add-ons and I can receive data from the server when searching in the web UI, but I need a dashboard like the Splunk App for Windows Infrastructure (end of life) or the MS Windows AD app (I have a problem with it). Please, can anyone help me? I need to finish my project as soon as possible.
Hey guys. I have a search that creates a bar chart:

| rex field=_raw "(.Net Version is)\s+(?<DotNetVersion>.+)"
| stats latest(DotNetVersion) as DotNetVersion by host
| fillnull value="-"
| eval status=case(match(DotNetVersion,"Not!"),"noncompliant",1=1,"Compliant")
| chart count by status

I have tried most options in the XML but I can't get the bars to be green/red. Any ideas? I thought "charting.fieldColors" would do the trick, but I think my series is called "count" instead of Compliant/noncompliant.
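One trick that may help (a sketch, untested here): split the chart by status as well as charting over it, so each status value becomes its own series name that charting.fieldColors can match:

| rex field=_raw "(.Net Version is)\s+(?<DotNetVersion>.+)"
| stats latest(DotNetVersion) as DotNetVersion by host
| fillnull value="-"
| eval status=case(match(DotNetVersion,"Not!"),"noncompliant",1=1,"Compliant")
| chart count over status by status

With series named Compliant and noncompliant, a panel option along the lines of charting.fieldColors = {"Compliant":0x00FF00,"noncompliant":0xFF0000} (example colors) should then apply per bar.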
Hi, is it possible to roll specific buckets to frozen? I have some buckets which the customer wants to be deleted (don't ask why), and I would kindly ask if this is possible without stopping Splunk. br Tom
I pack a Splunk app with the tar command on a Linux host, running as the root user. As a result, the owner and group owner are both 'root'. After installing it on Splunk Enterprise, I found that the extracted directory and its files are all owned by 'root'. However, the directories and files of other installed apps belong to 'splunk'. So, should I su to the splunk user first and then pack the app file?
Honored Splunkodes,

I am trying to keep track of the manpower in each of my legions, so that if any legion loses too many troops at once, I know which one to reinforce. However, I have many legions, and thus I track all of their manpower without knowing which ones will be important each day. I can't leave my myrmidons without reinforcements! I'd like to generate statistical information about them at the time of graph generation.

Currently I'm doing this; it's dirty, but it works. I get my legion manpower by querying that index, dropping any legions that don't fall in the top 50:

index=legions LegionName=*
| timechart span=1d limit=50 count by LegionName
| fields - OTHER
| untable _time LegionName ManPower
| outputlookup append=f mediterranean_legions.csv

Then I load up my lookup:

| inputlookup mediterranean_legions.csv
| convert timeformat="%Y-%m-%dT%H:%M:%S" mktime(_time) as _time
| bucket _time span=1d
| timechart avg(ManPower) by LegionName
| fields - OTHER
| untable _time LegionName ManPower
| streamstats global=f window=10 avg(ManPower) AS avg_value by LegionName
| eval lowerBound=(-avg_value*1.25)
| eval upperBound=(avg_value*1.25)
| eval isOutlier=if('ManPower' < lowerBound OR 'ManPower' > upperBound, "XXX".ManPower, ManPower)
| search isOutlier="XXX*"
| table _time, LegionName, ManPower, *

This gives me a quick idea of which legions have lost (or gained) a lot of manpower each day. Ideally, though, I'd like to generate a standard deviation and flag outliers based on z-score rather than just guessing with the lower and upper bound values. If this worked, I'd get what I want. Is there a way to accomplish this?

| streamstats global=f window=10 avg(ManPower) AS mp_avg by LegionName, stdev(ManPower) as mp_stdev by LegionName, max(ManPower) as mp_max by LegionName, min(ManPower) as mp_min by LegionName
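streamstats takes all of its aggregation terms before a single by clause, so a sketch of the z-score version would look like this (the threshold of 2 standard deviations is an arbitrary example, and the "XXX" marker is kept from the original approach):

| streamstats global=f window=10
    avg(ManPower) as mp_avg
    stdev(ManPower) as mp_stdev
    max(ManPower) as mp_max
    min(ManPower) as mp_min
    by LegionName
| eval zscore = if(mp_stdev > 0, (ManPower - mp_avg) / mp_stdev, 0)
| eval isOutlier = if(abs(zscore) > 2, "XXX".ManPower, ManPower)
| search isOutlier="XXX*"
| table _time, LegionName, ManPower, zscore, *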
How can I display _time in my results when using the stats command? I get this field when I use "table _time". Just like the image above, I want to get the time field using the stats and/or eval command. The image below shows how my time events look.
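Without the screenshots it's hard to be specific, but a common pattern is to carry _time through stats with an aggregation such as latest() or values(), and then format it with strftime if needed (the grouping field host is just a placeholder):

index=my_index
| stats latest(_time) as _time count by host
| eval display_time = strftime(_time, "%Y-%m-%d %H:%M:%S")
| table host display_time count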
Hi,

For our cloud-hosted API monitoring, we've implemented error-based and performance-based (response time) health rules for each of our APIs and mobile app network requests. The reason for this granular level of monitoring is so we can tie the health rules to health status indicators on our dashboards and see at a single glance, in one dashboard view, exactly which APIs are experiencing issues.

For our performance-based health rules, we have two alerting criteria: response time versus an AI-established baseline (set to alert beyond a set number of standard deviations), and a static threshold (a "must not exceed" response time), which is used to catch slow performance degradation over a long period and response time spikes that the baseline might simply treat as normal.

Is this a recommended approach, or does the AppD community/AppD team think that only baseline-based thresholds are recommended for BT/network request performance monitoring? My concern is that static thresholds require more maintenance over time and will become an operational burden.

^ Edited by @Ryan.Paredez for a more searchable title
Hello,

We currently have:
2 clustered indexers across 2 local sites and 1 at a remote site in the same country (mainly for disaster recovery),
3 search heads in a cluster: 2 load-balanced for the 2 local sites and 1 at the remote site (not accessible to users),
RF=3 and SF=3,
and the manager node on the first site.

Would it be worthwhile to switch to multisite clustering and configure it with origin/total settings while keeping the same data safety?

Thanks.
Hey community,

We are using the Universal Forwarder as a sidecar in K8s, following the GitHub introduction, but the document is not clear enough to guide us through integrating with the server.

env:
  - name: SPLUNK_START_ARGS
    value: --accept-license
  - name: SPLUNK_USER
    value: root
  - name: SPLUNK_GROUP
    value: root
  - name: SPLUNK_PASSWORD
    value: helloworld
  - name: SPLUNK_CMD
    value: add monitor /var/log/
  - name: SPLUNK_STANDALONE_URL
    value: splunk.company.internal

Some questions about the configuration:
1. Splunk user and password: where can we get this user and password? Should we allocate an account from the Splunk Enterprise server?
2. SPLUNK_STANDALONE_URL: is this the Splunk Enterprise server URL? Is it possible to get this URL from the Splunk server?