All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hello,

I've just installed the Splunk Add-on for Microsoft Windows, and I will be collecting data from UFs that forward first to a HF and then to an indexing cluster. The app will be deployed to multiple UFs via deployment server, and I only want to collect data from the machines the UFs are installed on.

I don't see a way to specify within inputs.conf which index to send the data to. I've read the documentation but I still don't understand how. I even found a post which discusses the same topic but doesn't really provide me with an answer that I understand (it sends me to documentation for an older version of the add-on). Could somebody please give me a push in the right direction?

Thank you and best regards, Andrew
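For anyone with the same question: as far as I know, `index` is a supported per-stanza setting in inputs.conf, so the deployed copy of the add-on can direct each Windows input to a chosen index. A minimal sketch — the index name `wineventlog` is just an example and must already exist on the indexers:

```ini
# Splunk_TA_windows/local/inputs.conf, pushed to the UFs via deployment server
[WinEventLog://Security]
disabled = 0
index = wineventlog

[WinEventLog://System]
disabled = 0
index = wineventlog
```

Settings in the app's `local/` directory override the add-on's `default/` stanzas, so the add-on itself stays unmodified.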
I created a custom alert action, but btool is flagging it as wrong. The script is in /opt/splunk/etc/apps/<app>/bin  
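For context, a custom alert action normally needs an alert_actions.conf stanza whose name matches the script file in the app's bin/ directory; btool complaints often come from a mismatch there. A minimal sketch (the stanza and label names here are hypothetical):

```ini
# $SPLUNK_HOME/etc/apps/<app>/default/alert_actions.conf
# the stanza name must match the script in bin/, e.g. bin/myaction.py
[myaction]
is_custom = 1
label = My Action
payload_format = json
```

Running `splunk btool alert_actions list myaction --debug` shows which file each setting (or complaint) comes from.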
The "Splunk Add-on for NetApp Data ONTAP" is only fetching performance information for the first 50 volumes on a cluster. Changing the "perf_chunk_size_cluster_mode" value in ta_ontap_collection.conf will vary the number -- if I set it to 53, I'll get performance data for the first 53 alphabetic volumes on the cluster. You can't set this arbitrarily large; if I put it to 10000, I get a failure on the data collection.

The chunking mechanism is part of OntapPerf.py, and normally it would iterate over multiple queries until it collected data for all volumes. This worked for years, but has been broken for several months now. It may or may not align with our upgrade to Splunk Enterprise 8.2.2 and Python 3. The add-on is the latest version (3.0.2). The filers are running ONTAP 9.7. I went back through the install manual and verified all the steps and add-on data. Inventory/capacity information works without issue; it is just performance metrics that are a problem.

OntapPerf.py throws log warnings "No instances found for object type volume"... from line 461 in the script. It seems like the "next_tag" mechanism in the script is failing, but I can't work out how to run OntapPerf.py from the command line, so I don't know how to troubleshoot any further. Splunk_TA_ontap shows as "unsupported" and developed by "Splunk Works"; the last release was June 2021. I could really use some pointers on how to resolve this, or on how I could move forward troubleshooting it myself.
I have a field with the following values. How can I calculate the product, i.e. multiply all the values with each other? There is a sum function but no multiplication. The output should be 0.1*0.03*0.34*0.32. Thanks

0.1
0.03
0.34
0.32
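SPL has no product() aggregator, but a product can be computed as the exponential of the sum of logarithms. A sketch assuming the field is named `value` (note that ln() requires all values to be positive):

```
| stats sum(eval(ln(value))) as logsum
| eval product = exp(logsum)
```

For the four values above this yields 0.1*0.03*0.34*0.32 = 0.0003264.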
Hi All, I'm new to Splunk. Please let me know if it is possible to integrate with a SaaS application hosted on a third-party cloud platform (Tencent Cloud). We have never used Splunk to integrate a SaaS application for logging, so I have no idea how Splunk gathers log files over the internet. Are there any APIs or guidance you can share for reference? Thanks in advance.
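One common pattern for cloud-hosted applications is to push events over HTTPS to Splunk's HTTP Event Collector (HEC). An illustrative request — `<splunk-host>` and `<hec-token>` are placeholders for a real deployment with HEC enabled:

```shell
# send one JSON event to a Splunk HTTP Event Collector endpoint (port 8088 by default)
curl -k "https://<splunk-host>:8088/services/collector/event" \
  -H "Authorization: Splunk <hec-token>" \
  -d '{"event": {"message": "hello from Tencent Cloud"}, "sourcetype": "_json"}'
```

The alternative is pulling: a forwarder monitoring files the SaaS app can export, or a modular input that polls the vendor's API.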
Hey everyone,

I'm currently making a report for my team that requires two X-axis values, based on the Excel sheet shared with me. Below are some screenshots of the desired output, my progress so far, and the search query based on what I have learned so far.

The goal: (screenshot)

What I am familiar with, using chart in the search: (screenshot)

My search string:

| eval DATE=strftime(strptime(DATE,"%d%b%Y"),"%Y-%m-%d")
| eval _time=strptime(DATE." ","%Y-%m-%d")
| where _time >= strptime("$from$", "%m/%d/%Y") AND _time <= strptime("$to$", "%m/%d/%Y")
| eval epochtime=strptime(TIME, "%H:%M:%S")
| eval desired_time=strftime(epochtime, "%I:%M:%S %p")
| chart sum(VIO_PAGING_SEC) as "$lpar$ Sum of VIO_PAGING_SEC" sum(SYSTEM_PAGEFAULTS_SEC) as "$lpar$ SYSTEM_PAGEFAULTS_SEC" sum(SWAP_PAGIN_SEC) as "$lpar$ SWAP_PAGIN_SEC" sum(LOCAL_PAGEFAULTS_SEC) as "$lpar$ LOCAL_PAGEFAULTS_SEC" over desired_time
Hello, I need help understanding how I can install the Node.js API Agent. Thanks.
Hi, I seem to be stuck with something pretty trivial. I have events with users and corresponding hostnames, e.g.:

User    Hostname
user1   hostA
user1   hostB
user2   hostA
user2   hostC
user3   hostD

I want to count unique user-hostname values and show the contributing hostnames for users that have used more than 1 hostname, like this:

User    Hostnames used
user1   hostA hostB
user2   hostA hostC

This seems to take care of the first part of the task:

| stats dc(Hostname) as uh by User
| search uh > 1

How can I add the contributing Hostnames? Formatting is not so important -- it may be one field with all the hostnames like in the example above, or multiple fields, or one field together with the User field.

Thank you.
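A sketch of one way to do this: values() collects the distinct hostnames alongside the distinct count, so the filter can stay as-is:

```
| stats dc(Hostname) as uh, values(Hostname) as "Hostnames used" by User
| where uh > 1
| fields - uh
```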
I have been trying for 16 hours now to get Splunk to send an email to a development mail server, to test mail notifications from a custom Python script. Now that I've put a fair amount of time into it, I am almost there, which means I actually get smtplib to contact the dev mail server, which does not require SSL or TLS. If I hardcode the password, the email gets through.

But since I would like to use the email settings from Splunk, I retrieve the auth_username and the clear_password from the /services/configs/conf-alert_actions/email REST endpoint. Fortunately, Splunk takes security very seriously and handles this information carefully, as it should. But this means that in clear_password I only get the SHA256 hash of the password, which makes my poor little dev mail server go "boohoo! wrong credentials."

My question is: Is there a way to send an email from my custom Python script, just the way the sendemail.py script does it? (The standard alerts from Splunk can already be received by my dev mail server.)

P.S.: I looked up the script, but the credentials seem to be passed as parameters, which again is a very nice and secure way to handle sensitive information, but it leaves me with no other option but to ask.
Hi,

I have a dashboard where one of the panels is generated from a |loadjob command which produces a table. This part works as normal. Now I need to add an extra column to the panel's table, and as a result I need to incorporate another index into the panel's query. So the panel's query now has both the |loadjob and an appendcols piece that searches an index. The extra clause is as follows:

| appendcols [search index=fraud_glassbox sourcetype="gb:sessions" | stats count as email_count by Global_EmailID_CSH | eval score_email = case(email_count<2, 40, 1=1, 100) ]

The issue is that my dashboard SIGNIFICANTLY slows down in how the results are generated. Why would this be? Would I need an alternative to the appendcols command?

Many thanks,
Hello, I have this query:

| mstats avg(_value) as packets WHERE index=metrics_index sourcetype=network_metrics (metric_name=*.out) ((metric_name="InterfaceEthernetA.*" OR metric_name="InterfaceEthernetB.*") AND (host="hostA" OR host="hostB")) span=1m by metric_name,host
| rex field=metric_name ".*InterfaceEthernet(?<mn>\d_\d*)"
| eval kbits=packets*8/1000
| timechart span=30m sum(kbits) by mn

It returns these results: (screenshot)

From those results, I would like to perform operations that generate another column, for example: (ColumnA - ColumnB) / ColumnA * 100. How could I do that?
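After `timechart ... by mn`, each distinct `mn` value becomes a column, and those columns can be combined with eval (single quotes around field names that start with a digit). A sketch assuming the split produced columns named `1_1` and `2_1` — substitute your actual column names:

```
| timechart span=30m sum(kbits) by mn
| eval pct_diff = ('1_1' - '2_1') / '1_1' * 100
```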
Hi, I am trying to find the list of IPs from search1 which are missing in search2, get all the IPs from search1, and calculate the percentage of missing IPs. This will help to identify the number of IPs.

Thanks in advance!
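A sketch of one common pattern: append the two result sets, record which searches each IP appears in, then compute the percentage. The index/sourcetype terms and the field name `ip` are placeholders for the real searches:

```
index=idx_one sourcetype=st1 | stats count by ip | eval src="search1"
| append [ search index=idx_two sourcetype=st2 | stats count by ip | eval src="search2" ]
| stats values(src) as sources by ip
| where isnotnull(mvfind(sources, "search1"))
| eval missing = if(isnull(mvfind(sources, "search2")), 1, 0)
| eventstats sum(missing) as missing_count, count as total_count
| eval pct_missing = round(missing_count / total_count * 100, 2)
```

The `where` keeps only IPs seen in search1, so the percentage is relative to search1's IP list.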
Here is my xml code so far: <form version="1.1" theme="dark"> <init> <set token="none">None</set> <set token="tokTypeInputVisible">Yes</set> <unset token="user_tok"></unset> <unset token="description_tok"></unset> <unset token="revisit_tok"></unset> <unset token="dropdown_tok"></unset> <unset token="add"></unset> <unset token="remove"></unset> <unset token="reauthorize"></unset> </init> <label>USB</label> <fieldset submitButton="false" autoRun="false"> <input type="text" token="user_tok" searchWhenChanged="false"> <label>User</label> <default></default> </input> <input type="text" token="description_tok" searchWhenChanged="false"> <label>Description</label> <default></default> </input> <input type="dropdown" token="revisit_tok" searchWhenChanged="false"> <label>Revisit</label> <choice value="select Month">Select</choice> <choice value="1 month">1 Month</choice> <choice value="2 month">2 Month</choice> <choice value="3 month">3 Month</choice> <choice value="4 month">4 Month</choice> <choice value="5 month">5 Month</choice> <choice value="6 month">6 Month</choice> <change> <condition value="1 month"> <set token="1 month"></set> <unset token="2 month"></unset> <unset token="3 month"></unset> <unset token="4 month"></unset> <unset token="5 month"></unset> <unset token="6 month"></unset> </condition> <condition value="2 month"> <unset token="1 month"></unset> <set token="2 month"></set> <unset token="3 month"></unset> <unset token="4 month"></unset> <unset token="5 month"></unset> <unset token="6 month"></unset> </condition> <condition value="3 month"> <unset token="1 month"></unset> <unset token="2 month"></unset> <set token="3 month"></set> <unset token="4 month"></unset> <unset token="5 month"></unset> <unset token="6 month"></unset> </condition> <condition value="4 month"> <unset token="1 month"></unset> <unset token="2 month"></unset> <unset token="3 month"></unset> <set token="4 month"></set> <unset token="5 month"></unset> <unset token="6 month"></unset> 
</condition> <condition value="5 month"> <unset token="1 month"></unset> <unset token="2 month"></unset> <unset token="3 month"></unset> <unset token="4 month"></unset> <set token="5 month"></set> <unset token="6 month"></unset> </condition> <condition value="6 month"> <unset token="1 month"></unset> <unset token="2 month"></unset> <unset token="3 month"></unset> <unset token="4 month"></unset> <unset token="5 month"></unset> <set token="6 month"></set> </condition> </change> </input> <input type="dropdown" token="dropdown_tok" depends="$tokTypeInputVisible$"> <label>Action</label> <choice value="none">None</choice> <choice value="add">Add</choice> <choice value="remove">Remove</choice> <choice value="reauthorize">Reauthorize</choice> <change> <condition value="none"> <set token="none"></set> <unset token="add"></unset> <unset token="remove"></unset> <unset token="reauthorize"></unset> </condition> <condition value="add"> <set token="add"></set> <unset token="remove"></unset> <unset token="reauthorize"></unset> <unset token="none"></unset> </condition> <condition value="remove"> <unset token="add"></unset> <set token="remove"></set> <unset token="reauthorize"></unset> <unset token="none"></unset> </condition> <condition value="reauthorize"> <unset token="add"></unset> <unset token="none"></unset> <unset token="remove"></unset> </condition> </change> <default>none</default> </input> </fieldset> <row> <panel depends="$none"> <title>USb_BAU</title> <table> <search> <query> | inputlookup USB.csv | table _time, user, category, department, description, revisit, status | lookup lookup user as user OUTPUT category department </query> <earliest>-24h@h</earliest> <latest>now</latest> </search> <option name="drilldown">none</option> <option name="refresh.display">progressbar</option> </table> </panel> <panel depends="$add$"> <title>Add User</title> <table> <search> <query> | inputlookup USB.csv | append [ | makeresults | eval user="$user_tok$", 
description="$description_tok$", revisit="$revisit_tok$", Action="$dropdown_tok$"] | table _time, user, category, department, description, revisit, status | lookup lookup user as user OUTPUT category department | outputlookup USB.csv</query> <earliest>-24h@h</earliest> <latest>now</latest> <done> <unset token="add"></unset> <unset token="remove"></unset> <unset token="reauthorize"></unset> </done> </search> <option name="drilldown">cell</option> <option name="refresh.display">progressbar</option> </table> </panel> <panel depends="$remove$"> <title>Remove User</title> <table> <search> <query>| inputlookup USB.csv | where user != "$user_tok$" | table _time, user, category, department, description, revisit, status | outputlookup USB.csv </query> <earliest>-24h@h</earliest> <latest>now</latest> <done> <unset token="remove"></unset> </done> </search> <option name="drilldown">none</option> <option name="refresh.display">progressbar</option> </table> </panel> <panel depends="$revisit_tok$"> <title>Revisit User</title> <table> <search> <query> | inputlookup USB.csv | eval 1 month="$1 month$", 2 month="$2 month$", 3 month="$3 month$", 4 month="$4 month$", 5 month="$5 month$", 6 month="$6 month$" | eval status = IF((now() &lt; 1 month), "Expired","Valid") | table _time, user, category, department, description, revisit, status | outputlookup USB.csv </query> <earliest>-24h@h</earliest> <latest>now</latest> </search> <option name="drilldown">none</option> <option name="refresh.display">progressbar</option> </table> </panel> </row> </form> basically I am trying to figure out when a user is being added to the lookup table and you click on add, I need to check the date they were added to the month selection and if it is past the month they selected then that user is inactive and there is a reauthorize option to reactivate them on the lookup table.
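For the expiry check itself, one sketch is to parse the number of months out of the stored `revisit` value and compare now() against the added time plus that many months. This assumes the `_time` column in USB.csv holds the epoch time the user was added:

```
| inputlookup USB.csv
| eval months = tonumber(mvindex(split(revisit, " "), 0))
| eval expiry = relative_time(_time, "+" . months . "mon")
| eval status = if(now() > expiry, "Expired", "Valid")
```

This also sidesteps the six separate "N month" tokens, since the selected value is read straight from the lookup field per row.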
Hi. I have a two-panel dashboard. One shows the general status of the clusters and the other shows details for the selected cluster.

--------- Panel 1 --------
`myapp_get_index` sourcetype="myapp:pce:metadata" myapp_type="myapp:pce:health"
|stats values(status) as status by fqdn
|rename fqdn as FQDN
| eval "Cluster Status" = upper(substr(status,0,1)) + lower(substr(status,2))
|fields - status
----------------------- Panel 2 --------------
`myapp_get_index` sourcetype="myapp:pce:metadata" myapp_type="myapp:pce:health" fqdn=$fqdn$
| head 1
|spath path="nodes{}" output=nodes
| mvexpand nodes
|table nodes
|spath input=nodes
|eval "Uptime Day"=round(uptime_seconds/60/60,0)
|table hostname, ip_address, type, cpu.percent, disk{}.location, disk{}.value.percent,memory.percent,services.running{},services.status, "Uptime Day"
-----
This code works. Then I added a transpose command to panel 1:
--------------
`myapp_get_index` sourcetype="myapp:pce:metadata" myapp_type="myapp:pce:health"
|stats values(status) as status by fqdn
|rename fqdn as FQDN
|eval "Cluster Status" = upper(substr(status,0,1)) + lower(substr(status,2))
|fields - status
|transpose 5
|fields - column
| rename column as FQDN,"row 1" as "FQDN 1", "row 2" as "FQDN 2", "row 3" as "FQDN 3", "row 4" as "FQDN 4", "row 5" as "FQDN 5"
------------------
Panel 1 now shows the data as I wanted, but panel 2 shows details only for the first FQDN regardless of what I click on. I could not pinpoint what is missing. Thank you in advance.
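One thing worth checking: after transpose, the FQDNs are cell values rather than a row field, so a default drilldown may no longer populate the token. A sketch of an explicit drilldown for panel 1 (assuming panel 2 consumes `$fqdn$`), where `$click.value2$` is the value of the clicked cell:

```xml
<table>
  <search> ... panel 1 search with transpose ... </search>
  <drilldown>
    <!-- set the token from the clicked cell's value -->
    <set token="fqdn">$click.value2$</set>
  </drilldown>
</table>
```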
How can I create a custom command to delete a particular default entity type from the environment? Also, how do we clone the object to customize it?
A search query in Dashboard Classic, when split by Trellis in the Visualization tab, gives 4 pie charts:

index=log-13120-nonprod-c laas_appId=qbmp.prediction* "jobPredictionAnalysis" prediction lastEndDelta
| eval accuracy_category = case( abs(lastEndDelta) <= 600, 10, (abs(lastEndDelta) > 600 and abs(lastEndDelta) <= 1200), 20, (abs(lastEndDelta) > 1200 and abs(lastEndDelta) <= 1800), 30, 1==1, 40)
| eval timeDistance_category = case(timeDistance < 3600, 1, (timeDistance>3600 and timeDistance<7200), 2, (timeDistance>7200 and timeDistance<10800), 3, 1==1, 4)
| chart count by accuracy_category

But if the same query is embedded in Dashboard Studio, I have to add a where clause (e.g. | where timeDistance_category=1) and produce the query result in 4 parts to get 4 pie charts, because I cannot find a Trellis option. How do I get 4 pie charts (split by ... Trellis) in Dashboard Studio?
Hi all,

I've got this search query which checks the time difference between two events, and it works great, but I would also like to see the milliseconds of that calculation; at the moment it just shows H:MM:SS. "Latency" shows the output from a tostring eval, but I would like it to also show milliseconds. Could anyone help me out on this one?

index="0200-pio_numb3r5_support-app" "HumanResourceImportJob" AND "transitioning from state 'Processing' to 'Succeeded'. Reason:" OR "transitioning from state 'Enqueued' to 'Processing'. Reason:" AND NOT OnStateUnapplied
| where host="AUDIINSA4919" OR host="AUDIINSA4304"
| stats earliest(_time) AS Start_time latest(_time) AS Finished_time by host
| eval Latency=tostring(Finished_time-Start_time, "duration") <----- here
| table Start_time , Finished_time , Latency , host
| fieldformat Finished_time=strftime(Finished_time,"%d/%m/%y %H:%M:%S.%3N")
| fieldformat Start_time=strftime(Start_time,"%d/%m/%y %H:%M:%S.%3N")

Output (Latency should be H:MM:SS.milliseconds):

Start_time               Finished_time            Latency    host
19/05/22 03:30:03.000    19/05/22 03:42:02.000    00:11:59   AUDIINSA4919
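A sketch that keeps the duration string but appends a zero-padded milliseconds part computed from the fractional seconds of the difference (note that earliest/latest(_time) must retain subsecond precision for this to show anything other than .000):

```
| eval diff = Finished_time - Start_time
| eval ms = floor((diff - floor(diff)) * 1000)
| eval Latency = tostring(floor(diff), "duration") . "." . substr("00" . ms, -3)
```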
How do I find the duration in minutes between two events from _time?

index=log-13120-nonprod-c laas_appId=qbmp.prediction* "pushed to greenplum for predictionId"

2022-05-19 03:37:30,108 jobRunStats INFO Current Predictions, total=1659262 pushed to greenplum for predictionId = fe387967-2f11-4358-8b27-c51a45042e79
2022-05-19 03:26:29,085 jobRunStats INFO Current Predictions, total=1659262 pushed to greenplum for predictionId = 473866d5-c7b1-4156-90a0-de978b260e8d

I simply want the diff between the above two events and then a line graph of cycle time length in minutes, so the output would be:

11 mins
14 mins
7 mins

And then I want to plot a line graph that shows the length of my cycle time. I do not want to use transaction.
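A sketch without transaction: sort the matching events by time and let streamstats compute the gap between each event and the previous one, then plot it. The timechart span is an assumption to adjust:

```
index=log-13120-nonprod-c laas_appId=qbmp.prediction* "pushed to greenplum for predictionId"
| sort 0 _time
| streamstats current=t window=2 range(_time) as cycle_seconds
| eval cycle_minutes = round(cycle_seconds / 60, 1)
| timechart span=1h max(cycle_minutes) as "Cycle time (min)"
```

With window=2, range(_time) is the difference between each event's timestamp and the previous one; the first event gets 0.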
I would like to set up an email alert which should run every Monday, Tuesday, Wednesday, Thursday, and Friday at 04:00 AM IST.
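In the alert's schedule settings, "Run on Cron Schedule" accepts a standard five-field cron expression (minute, hour, day of month, month, day of week, where 1-5 means Monday through Friday):

```
0 4 * * 1-5
```

Note that the hour is interpreted in the scheduler's timezone, so confirm the Splunk server (or the owning user's timezone setting) is IST, or shift the hour to compensate.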
Hello Splunkers, Can somebody here tell me what the easiest way is to get MuleSoft data into Splunk if the MuleSoft data is on-prem? For example, would I be able to use a forwarder and monitor the directory, or is an integration via HEC or API possible? I also found an add-on and an app, but it doesn't say whether they're intended for MuleSoft running on-prem or in the cloud. Any help would be greatly appreciated!!