All Posts

Hi Splunkers,

I am working on creating custom alerts using JavaScript in Splunk. I have created the SPL for the alert and configured the definition in macros.conf as follows:

index=XXXX sourcetype=YYYY earliest=-30d latest=now
| eval ds_file_path=file_path."\\".file_name
| stats avg(ms_per_block) as avg_processing_time by machine ds_file_path
| eval avg_processing_time = round(avg_processing_time, 1) . " ms"

From the JavaScript side, I set the alert condition:

| search avg_processing_time >= 2

When I run the SPL in Splunk, it returns about 5 to 6 rows of results:

machine    ds_file_path                           avg_processing_time
Machine01  C:\Program Files\Splunk\etc\db\gdb.ds  5.3 ms
Machine02  C:\Program Files\Splunk\etc\db\rwo.ds  7.6 ms
Machine03  C:\Program Files\Splunk\etc\db\gdb.ds  7.5 ms
Machine04  C:\Program Files\Splunk\etc\db\rwo.ds  8.3 ms

However, when I send the condition's results as an alert email, it includes only one row, like the one below:

machine    ds_file_path                           avg_processing_time
Machine01  C:\Program Files\Splunk\etc\db\gdb.ds  5.3 ms

I need to send all the matching results (avg_processing_time >= 2) in the email. How can I achieve this? I need some guidance. Here are the alert parameters I used:

{
    'name': 'machine_latency_' + index,
    'action.email.subject.alert': 'Machine latency for the ' + index + ' environment has reached an average processing time ' + ms_threshold + ' milliseconds...',
    'action.email.sendresults': 1,
    'action.email.message.alert': 'In the ' + index + ' environment, the machine $result.machine$ has reached the average processing time of $result.avg_processing_time_per_block$ .',
    'action.email.to': email,
    'action.logevent.param.event': '{"session_id": $result.machine$, "user": $result.avg_processing_time_per_block$}',
    'action.logevent.param.index': index,
    'alert.digest_mode': 0,
    'alert.suppress': 1,
    'alert.suppress.fields': 'session_id',
    'alert.suppress.period': '24h',
    'alert_comparator': 'greater than',
    'alert_threshold': 0,
    'alert_type': 'number of events',
    'cron_schedule': cron_expression,
    'dispatch.earliest_time': '-30m',
    'dispatch.latest_time': 'now',
    'description': 'XXXXYYYYYZZZZ',
    'search': '|`sw_latency_above_threshold(the_index=' + index + ')`'
}

I suspect that the email action is not configured to include the entire result set. Could someone help me with how to modify the alert settings or the SPL to ensure that all results are included in the alert email? Thanks in advance for your help!
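A likely culprit here is 'alert.digest_mode': 0, which makes Splunk fire the alert once per result row, so each email only ever sees a single row, and the $result.fieldname$ tokens always refer to the first row of the triggering search. A minimal sketch of the parameters to change, in the same payload style as above (note also that $result.avg_processing_time_per_block$ does not match the avg_processing_time field produced by the stats, so that token would render empty):

{
    // ...keep the other parameters as they are...
    'alert.digest_mode': 1,          // one alert for the whole result set
    'action.email.sendresults': 1,   // include the results with the email
    'action.email.inline': 1,        // embed them in the message body
    'action.email.format': 'table'   // render every matching row as a table
}

These are standard savedsearches.conf keys. With digest mode on, alert.suppress.fields may also need rethinking, since suppression then applies to the whole alert rather than per machine.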
Hello Everyone,

I recently built a Splunk query (shared below) and noticed that when I apply the search condition App_Name IN (*), it actually drops the number of events scanned. If I execute the following query, it returns 2515 events (not to be confused with statistics):

index=msad_hcv NOT ("forwarded")
| spath output=role_name path=auth.metadata.role_name
| mvexpand role_name
| rex field=role_name "(\w+-(?P<App_Name>[^\"]+))"
| search Environment=* type=* request.path=* App_Name IN (*)
| stats count

But if I comment out the App_Name IN (*) condition, it produces 4547 events:

index=msad_hcv NOT ("forwarded")
| spath output=role_name path=auth.metadata.role_name
| mvexpand role_name
| rex field=role_name "(\w+-(?P<App_Name>[^\"]+))"
| search Environment=* type=* request.path=* ```App_Name IN (*)```
| stats count

My question is: how can I keep the log events from being dropped when App_Name IN (*) is in force? Please note that the events being dropped are the ones that don't have "App_Name" in them.
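The drop is expected: | search App_Name IN (*) behaves like App_Name=*, which only matches events where the field exists, so the roughly 2000 events where the rex did not produce an App_Name are discarded. One way to keep them is to give the field a placeholder value before the filter; a minimal sketch, where "unknown" is just an illustrative label:

index=msad_hcv NOT ("forwarded")
| spath output=role_name path=auth.metadata.role_name
| mvexpand role_name
| rex field=role_name "(\w+-(?P<App_Name>[^\"]+))"
| fillnull value="unknown" App_Name
| search Environment=* type=* request.path=* App_Name IN (*)
| stats count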
I would like to add a button in each row of a table. When I click the button, it should create an incident in ServiceNow. How can we achieve this using JavaScript, HTML, CSS, and XML? Thanks in advance.
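One common pattern is a custom cell renderer in a dashboard JavaScript extension. A sketch under several assumptions: the table panel has id my_table and a dummy Action column, and the ServiceNow call goes through a server-side proxy (a browser should not hold ServiceNow credentials, so the endpoint name here is hypothetical):

// appserver/static/snow_button.js -- panel id, column name, and proxy URL are illustrative
require([
    'splunkjs/mvc',
    'splunkjs/mvc/tableview',
    'jquery',
    'splunkjs/mvc/simplexml/ready!'
], function (mvc, TableView, $) {
    // render a button in every cell of the "Action" column
    var ButtonRenderer = TableView.BaseCellRenderer.extend({
        canRender: function (cell) {
            return cell.field === 'Action';
        },
        render: function ($td, cell) {
            $('<button class="btn">Create incident</button>')
                .on('click', function () {
                    // hypothetical backend endpoint that holds the ServiceNow
                    // credentials and forwards to POST /api/now/table/incident
                    $.post('/custom/myapp/snow_proxy/create', {
                        short_description: 'Raised from Splunk row: ' + cell.value
                    });
                })
                .appendTo($td);
        }
    });

    var table = mvc.Components.get('my_table'); // <table id="my_table"> in the XML
    table.getVisualization(function (tableView) {
        tableView.addCellRenderer(new ButtonRenderer());
        tableView.render();
    });
});

Reference the file from the dashboard's root node (e.g. <form script="snow_button.js">) and have the table's SPL emit an Action column for the renderer to draw into.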
Hi,

In my project we are using Synthetic Monitoring. As more and more applications move to MFA (TOTP) via MS Authenticator, we need to be able to capture these TOTP codes and keep Synthetic Monitoring working. Can someone help or guide us on how we can handle MFA in Synthetic Monitoring?
Finally! I just got the license.
Hello Splunk Community,

I'm encountering an issue with ingesting data from a Prometheus remote_write agent into Splunk Enterprise. This solution uses the 'Prometheus Metrics for Splunk' app and is within a test environment.

Problem Summary: Despite ensuring that the inputs.conf file matches the configuration specifications defined in the inputs.conf.spec file, the Prometheus data is not being ingested, and I am receiving errors such as: port: Not found in "btool" output (btool does not list the stanza [prometheusrw]) when viewing the inputs.conf file in the Config Explorer app.

Details:
Splunk Version: Splunk Enterprise 9.2 (Trial License)
Operating System: Ubuntu 22.04
Splunk Application: Prometheus Metrics for Splunk (latest version, 1.0.1)

inputs.conf.spec: /opt/splunk/etc/apps/modinput_prometheus/README/inputs.conf.spec (full spec: https://github.com/lukemonahan/splunk_modinput_prometheus/blob/master/modinput_prometheus/README/inputs.conf.spec). The inputs.conf.spec file defines port and maxClients configuration parameters (screenshot omitted).

I updated /opt/splunk/etc/apps/modinput_prometheus/local/inputs.conf to include those parameters in the required format (screenshot omitted), saved the file, and rebooted the Splunk server. After rebooting, inputs.conf was checked with the Config Explorer app to verify the configuration was accepted. The errors above were returned for port and maxClients, while other parameters such as index, sourcetype, and whitelist returned 'Found in "btool" output. Exists in spec file (Stanza=[prometheusrw])' and were accepted by Splunk.

For some unknown reason, Splunk is not recognising some of the configuration parameters listed in the inputs.conf.spec file, even when they are formatted accordingly.

Other Information:
Prometheus remote-write-exporter details: (screenshot omitted)
Splunk Index: skyline_prometheus_metrics

Any assistance is appreciated. Thank you, Splunk Community.
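In case it helps: per the app's README, the bare [prometheusrw] stanza only holds the global listener settings (port, maxClients), while per-input settings (index, sourcetype, whitelist, bearerToken) belong in a named [prometheusrw://<name>] stanza; btool flags keys that sit under the wrong stanza type. A minimal sketch, with illustrative names and values:

[prometheusrw]
port = 8098
maxClients = 10
disabled = 0

[prometheusrw://skyline]
index = skyline_prometheus_metrics
sourcetype = prometheus:metric
whitelist = .*
disabled = 0

If btool does not list [prometheusrw] at all, it is also worth confirming that the app itself is enabled and that the file really sits in .../modinput_prometheus/local/inputs.conf.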
This is where you need to be extra diligent in your problem statement. Yes, it is doable, but volunteers are not mind readers.

| tstats min(_time) as start max(_time) as end where index=* by index
| fieldformat start = strftime(start, "%F %T")
| fieldformat end = strftime(end, "%F %T")
Hi Hrawat, it's 21st June and I don't see the release on https://hub.docker.com/r/splunk/splunk/. When will this be released? Thanks, DG
I assume that each CSV file is ingested as one source.  In that case, a search by source would retrieve all entries in that file.  All you need is to add back headers.  But keep in mind:

- You cannot get the original order of headers back.  Headers will be in ASCII order.
- You cannot get the original row order back.  Rows will be in ASCII order.
- If your original headers contain patterns incompatible with Splunk header standards, those headers will be changed to Splunk headers.

Here, I will use an ordinary CSV as example:

c, b, a
1,2,3
4,,5
,6,7
8,,

You can use the following:

index=myindex sourcetype=mysourcetype source=mycsv
| tojson
| appendpipe
    [foreach * [eval header = mvappend(header, "<<FIELD>>")]]
| fields header
| where isnotnull(header)
| stats values(header) as header values(_raw) as json
| foreach json mode=multivalue
    [ eval csv = mvappend(csv, "\"" . mvjoin(mvmap(header, if(isnull(spath(<<ITEM>>, header)),"", spath(<<ITEM>>, header))), "\",\"") . "\"")]
| eval csv = mvjoin(header, ",") . "
" . mvjoin(csv, "
")

The mock data will give a csv field with the following value:

a,b,c
"3","2","1"
"5","","4"
"7","6",""
"","","8"

As explained above, much depends on the original CSV.  For example:

- I took the precaution to quote each "cell" even though your original CSV may not use quotation marks.
- I did not quote headers, even though your original CSV may have used quotation marks.
- With certain column names and/or cell values, additional coding is needed.

You can play with the following data emulation and compare with real data.

| makeresults format=csv data="c, b, a
1,2,3
4,,5
,6,7
8,,"
| foreach * [eval _raw = if(isnull(_raw), "", _raw . ",") . if(isnull(<<FIELD>>), "", <<FIELD>>)]
``` the above emulates index=myindex sourcetype=mysourcetype source=mycsv ```
At the very basic level, you are producing a timechart with span=1h but you want a timechart with a daily set of numbers, so that's wrong. Beyond that, it's impossible to say what is going on here; you have so much going on in the search.

You have to go back to basics, which means starting with the basic data and, at EACH step of your SPL, making sure the data is giving you what you expect. So, take a very small sample set of data and run the SPL line by line, checking the output after each line. When you are happy that the data is giving you the correct output from one line of SPL, add in the next line.
@bowesmana On the x-axis I have time.
I want to integrate Commvault with Splunk to create a dashboard using an add-on. I can't find how to proceed further, so please provide the steps.
Thank you for your reply, but I want this information for all indexes at once, with their respective names. Is that possible?
Hello,

I have a dashboard with a radio button + text input field:

<form version="1.1" theme="light">
  <label>mnj1809_radio</label>
  <init>
    <set token="tokradiotext">$tokradio$="$toktext$"</set>
  </init>
  <fieldset submitButton="false">
    <input type="radio" token="tokradio">
      <label>Field</label>
      <choice value="category">Group</choice>
      <choice value="severity">Severity</choice>
      <default>category</default>
      <change>
        <set token="tokradiotext">$value$="$toktext$"</set>
      </change>
    </input>
    <input type="text" token="toktext">
      <label>Value</label>
      <default>*</default>
      <change>
        <set token="tokradiotext">$tokradio$="$value$"</set>
      </change>
    </input>
  </fieldset>
  <row>
    <panel>
      <event>
        <title>tokradiotext=$tokradiotext$</title>
        <search>
          <query>| makeresults</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
      </event>
    </panel>
  </row>
</form>

With a radio input to select field names and a text input to enter field values, this defines and updates a separate token when either input changes. The radio button allows only one choice, but I'd like to use checkboxes that allow multiple selections, so that if I tick multiple checkboxes, the search looks for the results in all of the chosen fields.

Could you please help me?

Thanks in advance!
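One possible approach, sketched below: let the checkbox emit a comma-separated list of the selected fields and rebuild the combined clause with an <eval> change handler (the same eval would also go into the text input's change handler so the clause follows the typed value). This is a sketch, not a tested drop-in; confirm on your Splunk version that $value$ inside <eval> carries the delimiter-joined selection:

<input type="checkbox" token="tokfields">
  <label>Fields</label>
  <choice value="category">Group</choice>
  <choice value="severity">Severity</choice>
  <delimiter>,</delimiter>
  <default>category</default>
  <change>
    <eval token="tokradiotext">replace("$value$", ",", "=\"$toktext$\" OR ") . "=\"$toktext$\""</eval>
  </change>
</input>

Selecting both boxes with the text value * would set tokradiotext to category="*" OR severity="*", which you can drop into a search as ( $tokradiotext$ ).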
Do you really want to know the times in the entire index?  If so, tstats is usually the way to go.

| tstats min(_time) as start max(_time) as end where index=myindex
| fieldformat start = strftime(start, "%F %T")
| fieldformat end = strftime(end, "%F %T")

Something like that.
One of Splunk's biggest taboos is join.  SQL is designed to make join efficient, but Splunk is NoSQL.  If you feel there is a need for a SQL-like join, it is usually because the search strategy is wrong.

It is much better if you describe your dataset and the searches used to obtain those two tables, and describe the desired output.  There is usually a more Splunk-like way to get the result and avoid join.
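For illustration only (the index, sourcetype, and field names here are made up): instead of joining an orders table to a shipments table on order_id, the usual Splunk pattern is to search both datasets at once and group with stats on the shared key:

(index=sales sourcetype=order) OR (index=sales sourcetype=shipment)
| eval order_id=coalesce(order_id, ship_order_id)
| stats values(order_status) as order_status values(carrier) as carrier by order_id

This scales far better than join, which is constrained by subsearch result limits and timeouts.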
First, can you confirm that transaction grouped the correct events?

Second, do you mean to say that even though one of the events in a transaction is

2024-06-13 09:22:49,101 INFO [com.mysite.core.repo.BaseWebScript] [http-nio-8080-exec-43] ****** NEW WEBSCRIPT REQUEST ****** Server Path: http://repo.mysite.com:80 Service Path: /repo/service/company/upload Query String: center=pc&contentType=reqDocExt&location=\\myloc\CoreTmp\app\pc\in\gwpc5799838158526007183.tmp&name=wagnac%20%20slide%20coverage%20b&description=20% rule&contentCreator=JOSEY FALCON&mimeType=application/pdf&accountNum=09693720&policyNum=13068616

Splunk does not give you location with the value \\myloc\CoreTmp\app\pc\in\gwpc5799838158526007183.tmp?  This is nearly impossible, but you can try adding the extract command after the index search.  If you look at the emulation I listed above, I used extract to emulate Splunk's default action.
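If the automatic extraction really does miss it, a fallback sketch (the index search terms here are illustrative) is to pull the value out of the query string with rex and decode it:

index=myindex "NEW WEBSCRIPT REQUEST"
| rex "location=(?<location>[^&\s]+)"
| eval location=urldecode(location)

urldecode would also turn escapes like %20 into the plain path characters.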
Hi

I am getting some events from a CSV in the format below and would like to drop some of them using transforms:

null,null,0,null,null,null,null,null,null,  ---- to be dropped
null,null,0,null,null,null,null,null,null,  ---- to be dropped
null,null,0,null,null,null,null,null,null,  ---- to be dropped
null,null,0,null,null,null,null,null,null,  ---- to be dropped
null,null,0,null,null,null,null,null,null,  ---- to be dropped
52376,null,0,test,87387,2984,22,abc,99  ----- to be kept

Below is what I have done so far, and it is not working.

props.conf
[Reports5min]
TRANSFORMS-null = setnull

transforms.conf
[setnull]
REGEX = ^null,null\,0,null,null,null,null,null,null,$
DEST_KEY = queue
FORMAT = nullQueue
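The regex itself matches the sample lines (a tighter equivalent is shown below), so when this does nothing it is usually a deployment issue: TRANSFORMS runs at parse time, so the props stanza must exactly match the sourcetype the events arrive with, the files must sit on the first full Splunk instance in the path (indexer or heavy forwarder, not a universal forwarder), and that instance must be restarted. A sketch, assuming Reports5min is the sourcetype assigned at input time:

props.conf (on the parsing tier)
[Reports5min]
TRANSFORMS-null = setnull

transforms.conf
[setnull]
REGEX = ^null,null,0(?:,null){6},$
DEST_KEY = queue
FORMAT = nullQueue

Also check whether INDEXED_EXTRACTIONS is set for this sourcetype on a universal forwarder; with indexed extractions the data is parsed at the forwarder, and index-time transforms on the indexer no longer apply to it.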
Hello, how can I know the start time and the latest time of data for all indexes? Meaning: when did data first come into each index, and when did data most recently come into that index?
index=ABC sourcetype="stalogmessage"
| fields _raw
| spath output=statistical_element "StaLogMessage.StatisticalElement"
| spath output=statistical_subject "StaLogMessage.StatisticalElement.StatisticalSubject"
| fields - _raw
| mvexpand statistical_element
| mvexpand statistical_subject
| spath input=statistical_element output=statistical_item "StatisticalItem"
| spath input=statistical_item output=StatisticalId "StatisticalId"
| spath input=statistical_item output=Value "Value"
| spath input=statistical_subject output=SubjectType "SubjectType"
| where SubjectType="ORDER_RECIPE"
| stats count by StatisticalId Value SubjectType _time
| lookup detail_lfl.csv StatisticalID as StatisticalId SubjectType as SubjectType OUTPUTNEW SymbolicName
| mvexpand SymbolicName
| where SymbolicName="UTILISATION"
| strcat "raw" "," SymbolicName group_name
| stats min(Value) AS min_value, max(Value) AS max_value, sum(Value) AS sum_value, count AS count BY SymbolicName group_name StatisticalId _time
| eval min_value=coalesce(min_value,value), max_value=coalesce(max_value,value), sum_value=coalesce(sum_value,value), count=coalesce(count,1)
| fields StatisticalId min_value max_value sum_value count group_name _time
| dedup StatisticalId _time group_name
| fields - _virtual_ _cd_
| fillnull value=""
| timechart span=1h minspan=3600s eval(round(min(min_value),2)) AS "Minimum", eval(round(max(max_value),2)) AS "Maximum", eval(round(sum(sum_value),2)) AS summed, eval(round(sum(count),2)) AS counted
| eval "Average" = round(summed/counted, 2)
| fields - summed counted

I am using the above query to visualize the Maximum, Minimum, and Average in a graph, but my values look different from what I expect (screenshots of the actual and expected charts omitted). @bowesmana, please help me with what I need to fix in the query to achieve the expected results.
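As one of the replies above points out, the query buckets by span=1h while the expected chart is a daily set of numbers. A minimal sketch of that one change, keeping everything before the timechart as-is (span=1d is an assumption; adjust it to whatever bucket size the expected graph actually uses, and reapply the rounding afterwards):

...
| timechart span=1d min(min_value) AS Minimum max(max_value) AS Maximum sum(sum_value) AS summed sum(count) AS counted
| eval Average = round(summed / counted, 2)
| eval Minimum = round(Minimum, 2), Maximum = round(Maximum, 2)
| fields - summed counted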