All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


After installing the Splunk Add-on for Nagios, it shows "page not found". Can anyone tell me why this might be happening?
I have data in the following format.

Data Input 1 (index=abc):

Time (YYYY-MM-DD HH24)  Count1
2020-09-30 00           10
2020-09-30 01           20
2020-09-30 02           40

Data Input 2 (index=xyz):

Time (YYYY-MM-DD HH24)  Count2
2020-09-30 00           30
2020-09-30 01           10
2020-09-30 02           25

I am looking for output like this:

Time (YYYY-MM-DD HH24)  Count1  Count2
2020-09-30 00           10      30
2020-09-30 01           20      10
2020-09-30 02           40      25

...and then to create a timechart of both values.
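For what it's worth, one way to combine the two indexes is a single search over both, letting timechart split the counts — a sketch, assuming Count1 and Count2 are already extracted fields and _time carries the hourly timestamp:

```
index=abc OR index=xyz
| timechart span=1h sum(Count1) as Count1 sum(Count2) as Count2
```

The timechart output can be rendered directly as a line chart with one series per count.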
Have a list of JSONs that needs to be ingested as separate events (a separate event for each "id"): [ {"id":"1","fileName":"267663776.mpg","testPlan":"QC - TS Files (Partner A)","priority":"Normal","scheduledAt":"Sep 26, 2020 12:56:32 PM","status":"Finished","result":"Failure","correct":"correction completed|00000174cbfd0a7ba724bdbd000a006500810058","progress":"100|00000174cbfd0a7ba724bdbd000a006500810058","openInBaton":"https://bvm:443/Baton/@@home.html#Tasks/Report/00000174cbfd0a7ba724bdbd000a006500810058","startTime":"Sep 26, 2020 12:56:33 PM","completionTime":"Sep 26, 2020 1:45:20 PM","checker":"bcc@9000"}, {"id":"2","fileName":"267664759.ts","testPlan":"QC - TS Files (Partner A)","priority":"Normal","scheduledAt":"Sep 26, 2020 12:36:51 PM","status":"Finished","result":"Failure","correct":"correction completed|00000174cbeb047f5ab7565f000a006500810058","progress":"100|00000174cbeb047f5ab7565f000a006500810058","openInBaton":"https://bvm:443/Baton/@@home.html#Tasks/Report/00000174cbeb047f5ab7565f000a006500810058","startTime":"Sep 26, 2020 12:36:52 PM","completionTime":"Sep 26, 2020 1:16:00 PM","checker":"bcc@9000"}, {"id":"3","fileName":"267660544.mpg","testPlan":"QC - TS Files (Partner A)","priority":"Normal","scheduledAt":"Sep 26, 2020 11:52:22 AM","status":"Finished","result":"Failure","correct":"correction completed|00000174cbc24d2c370e7c19000a006500810058","progress":"100|00000174cbc24d2c370e7c19000a006500810058","openInBaton":"https://bvm:443/Baton/@@home.html#Tasks/Report/00000174cbc24d2c370e7c19000a006500810058","startTime":"Sep 26, 2020 11:52:23 AM","completionTime":"Sep 26, 2020 12:16:40 PM","checker":"bcc@9000"}, {"id":"4","fileName":"267703040.ts","testPlan":"QC - TS Files (Partner A)","priority":"Normal","scheduledAt":"Sep 26, 2020 10:58:49 AM","status":"Finished","result":"Failure","correct":"correction 
completed|00000174cb9144a36b0312c5000a006500810058","progress":"100|00000174cb9144a36b0312c5000a006500810058","openInBaton":"https://bvm:443/Baton/@@home.html#Tasks/Report/00000174cb9144a36b0312c5000a006500810058","startTime":"Sep 26, 2020 10:58:52 AM","completionTime":"Sep 26, 2020 11:52:08 AM","checker":"bcc@9000"}, ... {"id":"4999","fileName":"267686238-73abc3c1-359e-4468-8355-d4e8da927661.ts","testPlan":"QC - TS Files (Partner A)","priority":"Normal","scheduledAt":"Sep 26, 2020 10:12:06 AM","status":"Finished","result":"Failure","correct":"correction completed|00000174cb668100c2e5c765000a006500810058","progress":"100|00000174cb668100c2e5c765000a006500810058","openInBaton":"https://bvm:443/Baton/@@home.html#Tasks/Report/00000174cb668100c2e5c765000a006500810058","startTime":"Sep 26, 2020 10:12:08 AM","completionTime":"Sep 26, 2020 10:37:55 AM","checker":"bcc@9000"} ] The list may contain thousands of entries (events); each JSON could be spread over multiple lines and be nested - i.e. the above example isn't the only type of such list of JSONs we have to ingest. What is the best practice to ingest this? P.S. A more general question is, how does one ingest the following file format, with field extractions?   [ {"optional_timestamp": "2020-09-26 15:16", "field1": "value1"}, {"optional_timestamp": "2020-09-26 15:17", "field1": "value2"} ]   ...assuming the file may contain thousands of events? Thanks! P.P.S. Fairly certain I've seen an answered question about this - but now I can't find it... Apologies for the duplicate...
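One common approach is to break the array into one event per object at index time and strip the surrounding brackets. A sketch in props.conf — the sourcetype name is made up, LINE_BREAKER consumes only the comma between objects, and the timestamp settings assume "scheduledAt" is the field you want as _time:

```
[json_array_list]
SHOULD_LINEMERGE = false
LINE_BREAKER = \}(\s*,\s*)\{
SEDCMD-strip_open = s/^\s*\[\s*//
SEDCMD-strip_close = s/\s*\]\s*$//
KV_MODE = json
TIME_PREFIX = "scheduledAt":\s*"
TIME_FORMAT = %b %d, %Y %I:%M:%S %p
```

For nested JSON the same line-breaking idea needs a more careful LINE_BREAKER (inner objects also contain `},{`-like sequences), so treat this as a starting point, not a universal recipe.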
Hi, I have the following LARGE lookup with over 1000 entries:

host  | type
host1 |
host2 |
host3 |

I have an SPL query which returns results in JSON format:

{
  type: big
  tags: {
    address: host1
  }
}

I need to update my lookup table: if host = tags.address, then set type=big. Result of my lookup:

host  | type
host1 | big
host2 |
host3 |

Note: the SPL query is very big and I need to look back at least 1 year. I only care about filling my lookup table. Some hosts' latest entry may be 2 months ago and some 8 months ago.
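A common pattern for this is to run the big search once over the year, extract host and type, merge in the existing lookup, and write it back. A sketch — "mylookup.csv" is a placeholder, and the spath calls assume the JSON is in _raw (drop them if the fields are already extracted):

```
<your big SPL search> earliest=-1y
| spath output=host path=tags.address
| spath output=type path=type
| stats latest(type) as type by host
| inputlookup append=t mylookup.csv
| stats first(type) as type by host
| outputlookup mylookup.csv
```

Because the search results come first, `stats first(type)` prefers the freshly found type over the lookup's empty value, while hosts only present in the lookup are preserved.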
Hi, I have created the dummy sample data below:

|makeresults|eval a="1328,1345"
|append[|makeresults| eval state="added", add_field="1855"]
|append[|makeresults| eval state="added", add_field="1860"]
|append[|makeresults| eval a="1855,1328,1860,1345"]
|append[|makeresults| eval state="removed", remove_field="1855"]
|append[|makeresults| eval a="1855,1328,1860,1345"]

Now, if you look at the data: whenever state is "added", the number should be added to the previous `a` field, and if it has not been added, an error field should be set to 1. Similarly for state="removed": the number should be removed from the previous `a` field, and if it could not be removed, error should be set to 1. In the case above, the add to field `a` succeeded but the remove did not, so the last event should show error=1. Thanks
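A rough, untested sketch of one way to check this: carry the most recent state change forward with streamstats, then flag mismatches on each event that carries an `a` value. It only tracks the latest add/remove before each check, so consecutive state changes (like the two adds in the sample) would need extra handling:

```
... | streamstats current=f last(state) as prev_state
      last(add_field) as prev_add last(remove_field) as prev_remove
| eval error=case(
    isnotnull(a) AND prev_state=="added" AND NOT like(a, "%".prev_add."%"), 1,
    isnotnull(a) AND prev_state=="removed" AND like(a, "%".prev_remove."%"), 1)
| table _time state add_field remove_field a error
```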
I want to run a JavaScript file on every dashboard page, in all the apps in Splunk. The purpose of the JS is to update the KV store. Please let me know if you have any idea how to implement this solution.
Hi, I would like to create a metric that returns the number of browser applications being monitored on the controller. I have tried to do this using a distinct count of the app key, but my query returns 1 instead of the number of browser apps. Here are the queries I have tried:

SELECT distinctcount(appkey) FROM browser_records
SELECT distinctcount(appkey) FROM web_session_records

If anyone knows of a way or can point me in the right direction, I would greatly appreciate it. Thanks, Jesse
Hi Experts,

Please suggest where I can download the Splunk Universal Forwarder for Windows Server 2008 (Windows version 6.0, Build 6003, Service Pack 2).

At https://www.splunk.com/en_us/download/universal-forwarder.html the oldest supported versions are Windows 8.1 and 10, and Windows Server 2008 R2, 2012, 2012 R2, and 2016.

Thanks in advance.
I have created a report and scheduled it, and added my email address to receive the result in CSV format. But when the result has more than 10,000 rows, only the first 10,000 rows are present in the CSV file. I want all the data (more than 10,000 rows) in my CSV file. Is there any way to resolve this?
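The emailed-CSV row cap is typically the email alert action's result limit. A sketch of raising it in a local alert_actions.conf on the search head — the value 50000 is just an example, and larger caps mean larger attachments and more search-head load:

```
[email]
maxresults = 50000
```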
Hi everyone. We have a single Splunk Enterprise instance. We are planning to set frozenTimePeriodInSecs on a legacy index that is not used any more. For example, I have the setup below.

name    size   event count  oldest data   newest data
indexA  420GB  3.06B        6 years ago   3 months ago
indexB  370GB  1.22B        7 months ago  a few seconds ago
main    270GB  above 200M   6 years ago   3 months ago

We are indexing data into indexB now and no longer use indexA and main, so we are planning to shrink their size to save disk space. We don't want to delete them all at once, since we have compliance requirements and must keep data for one year. My question is: if I set frozenTimePeriodInSecs (in the local indexes.conf indexA and main stanzas) to 1 year (31536000), when does Splunk delete the data? Do I have to restart Splunk, or delete it manually with search commands? Someone please help me.
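For reference, a sketch of the setting (stanza names taken from the post). With no coldToFrozenDir/coldToFrozenScript configured, buckets whose newest event is older than the limit are deleted when they roll to frozen; an indexes.conf change generally requires a Splunk restart to take effect:

```
# indexes.conf (local)
[indexA]
frozenTimePeriodInSecs = 31536000

[main]
frozenTimePeriodInSecs = 31536000
```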
Hello all, I have a requirement to forward events from a search result to an API, and to store the response from the API call made by the alert action back into a custom index. How can I achieve this? Please help. Regards, Naresh
I have a web application where each incoming request is given a unique requestID so we can see all the logs for that particular request. This isn't currently a field, but I could (and probably should) make it one.

I am looking for particular events where we log a problem. I want to pull the requestID for all of these events and show the entire request for each of them. So far, looking around, the `map` command seems to be the way to do this. What I haven't seen or figured out is how to do this for multiple requestIDs at once.

index=foo REQUEST_TIME>2000
| rex field=_raw "^\[\((?<REQID>[^\)]*)"
| map search="index=foo $REQID$"

That's the best I've come up with. The regex works, because if I pass it to something else like stats count I can see the value with a count of 1:

index=foo REQUEST_TIME>2000
| rex field=_raw "^\[\((?<REQID>[^\)]*)"
| stats count by REQID

So I am close, I think. I'd like to make it work before I change the log output to make requestID a field.
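An alternative to `map` worth sketching: feed the extracted IDs back as a subsearch, which pulls all matching requests in a single outer search. Renaming the field to `search` makes the IDs match against raw event text, which matters here since REQID is not yet an extracted field:

```
index=foo
    [ search index=foo REQUEST_TIME>2000
      | rex field=_raw "^\[\((?<REQID>[^\)]*)"
      | dedup REQID
      | fields REQID
      | rename REQID as search
      | format ]
```

Subsearches have result and runtime limits, so this works best when the number of problem requestIDs is modest.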
Hi All, I'm trying to create an alert/event when a regex field count is above 30. However, I cannot save it as an event type because the search "includes a pipe operator". The output shows exactly the values I want, but I'm not able to create an alert/event. Are there any alternatives or better ways to create events/alerts for these?

Regex - Public_IP_Test:
(?s)from\s+(?P<Public_IP_Test>\d+\.\d+\.\d+\.\d+)\s+via\s+ssh

Search Query:
host="192.168.68.1" Public_IP_Test="*" failure
| stats count as MyTestCount by Public_IP_Test
| where MyTestCount > 30

Output:
141.98.10.209 32
141.98.10.210 32
141.98.10.211 32
141.98.10.212 32
141.98.10.213 32

Example of the logs:
system,error,critical user: login failure for user pi from 141.98.10.210 via ssh
Public_IP_Test = 141.98.10.210  host = 192.168.68.1  index = main  linecount = 1  source = udp:514  sourcetype = syslog  timestamp = none

Any help would be greatly appreciated.
P
I have installed the Status Matrix Custom Viz on my stand-alone search head, but the radius part is not working as expected. Your help would be highly appreciated.
We've set up the Event Hub input according to the instructions included in the app, but we are not getting data into the index. We are also not getting any errors in the internal logs. Here's what I do see in the internal logs:

index=_internal host=<heavy forwarder> source=*hub*

2020-09-29 16:41:40,799 DEBUG pid=31407 tid=MainThread file=__init__.py:initialize:157 | Initializing platform.
2020-09-29 16:41:40,799 DEBUG pid=31407 tid=MainThread file=client.py:open:234 | Opening client connection.
2020-09-29 16:41:40,798 DEBUG pid=31407 tid=MainThread file=message.py:__init__:109 | Destroying 'AMQPValue'
2020-09-29 16:41:40,797 DEBUG pid=31407 tid=MainThread file=message.py:__init__:109 | Deallocating 'AMQPValue'
2020-09-29 16:41:40,797 INFO pid=31407 tid=MainThread file=client_abstract.py:__init__:161 | u'eventhub.pysdk-843ec71b': Created the Event Hub client
2020-09-29 16:41:40,797 INFO pid=31407 tid=MainThread file=setup_util.py:log_info:114 | Proxy is not enabled!
2020-09-29 16:41:40,797 DEBUG pid=31407 tid=MainThread file=base_modinput.py:log_debug:286 | _Splunk_ Getting proxy server.
2020-09-29 16:41:39,464 INFO pid=31407 tid=MainThread file=connectionpool.py:_new_conn:758 | Starting new HTTPS connection (1): 127.0.0.1
2020-09-29 16:41:38,196 INFO pid=31407 tid=MainThread file=connectionpool.py:_new_conn:758 | Starting new HTTPS connection (1): 127.0.0.1
2020-09-29 16:41:37,448 INFO pid=31407 tid=MainThread file=connectionpool.py:_new_conn:758 | Starting new HTTPS connection (1): 127.0.0.1
2020-09-29 16:41:36,413 INFO pid=31407 tid=MainThread file=connectionpool.py:_new_conn:758 | Starting new HTTPS connection (1): 127.0.0.1
2020-09-29 16:40:40,778 DEBUG pid=28651 tid=MainThread file=__init__.py:initialize:157 | Initializing platform.
2020-09-29 16:40:40,778 DEBUG pid=28651 tid=MainThread file=client.py:open:234 | Opening client connection.
2020-09-29 16:40:40,777 DEBUG pid=28651 tid=MainThread file=message.py:__init__:109 | Destroying 'AMQPValue'
2020-09-29 16:40:40,776 DEBUG pid=28651 tid=MainThread file=message.py:__init__:109 | Deallocating 'AMQPValue'
2020-09-29 16:40:40,776 INFO pid=28651 tid=MainThread file=client_abstract.py:__init__:161 | u'eventhub.pysdk-4adf6449': Created the Event Hub client
2020-09-29 16:40:40,776 INFO pid=28651 tid=MainThread file=setup_util.py:log_info:114 | Proxy is not enabled!
2020-09-29 16:40:40,776 DEBUG pid=28651 tid=MainThread file=base_modinput.py:log_debug:286 | _Splunk_ Getting proxy server.
2020-09-29 16:40:39,481 INFO pid=28651 tid=MainThread file=connectionpool.py:_new_conn:758 | Starting new HTTPS connection (1): 127.0.0.1
2020-09-29 16:40:38,240 INFO pid=28651 tid=MainThread file=connectionpool.py:_new_conn:758 | Starting new HTTPS connection (1): 127.0.0.1

@jconger Any help is greatly appreciated!
Hi, I am trying to write a search which shows which internal client accessed the web, but I have a proxy that accesses the web on their behalf. So I have:

an internal client: X.X.X.X
my proxy's internal IP: IP.IP.IP.IP
my proxy's external IP: EP.EP.EP.EP

So I have a search: index=* 8.8.8.8

The above search shows that my proxy (EP.EP.EP.EP) accessed this IP. Based on this result, I need to search index=proxy, where my IP is IP.IP.IP.IP, to see which internal client accessed 8.8.8.8. Can anyone guide me on how I should write my Splunk search?
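A sketch of the second step — the field names dest_ip and src_ip are assumptions and depend on your proxy sourcetype's extractions. The idea is to search the proxy index directly for the destination and report which internal clients requested it:

```
index=proxy dest_ip="8.8.8.8"
| stats count by src_ip
```

If the proxy logs only record hostnames or URLs rather than destination IPs, the filter would need to match on those fields instead.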
I've been using MLTK 5.x as well as 4.x, with the respective version of Python that works with those versions. The mlspl.conf file has been heavily altered to try to max out the MLTK app's capabilities without crashing the Splunk instance. I work at an MSSP that deals with a lot of customers, and custom modifications of that file were performed for each. Having done that, I've noticed two things across all of those customers' Splunk environments:

1. ML models that become large (but need to be large) will always have bundle replication issues. All ML models are just CSV files, so anything above roughly 200MB doesn't replicate well. It is also extremely slow to query with `apply`. Blacklisting the `__mlspl_*` lookups from replicating helps, but it's still very slow to query.

2. Just using the `fit` command without creating a model always runs faster, even with a large amount of data.

The algos used for this statement: TFIDF, RandomForestClassifier, SVM, DecisionTreeClassifier, LogisticRegression, PCA, DensityFunction.

So, knowing these 2 items, what is the advantage vs. disadvantage of just using the `fit` command every time instead of training a model to use later?
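For readers unfamiliar with the blacklisting mentioned above, it looks roughly like this in distsearch.conf on the search head — a sketch; the key name is arbitrary and the path pattern may need adjusting for where your MLTK models actually live:

```
# distsearch.conf (search head)
[replicationBlacklist]
no_mlspl_models = apps/.../lookups/__mlspl_*
```

Note that blacklisted models are then unavailable to `apply` on the indexers, so searches using them run less distributed.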
Hi Everyone, I am working on an add-on to collect the event results for an alert and send them to an API endpoint. On success, the endpoint returns a success message in JSON format, and I want to store it in a custom index and sourcetype. I tried the code below, but the data is written to the main index instead of my custom index. Is there a way to write the event into a custom index for an alert action built via Splunk Add-on Builder?

helper.addevent("hello", sourcetype="customsource")
helper.addevent("world", sourcetype="customsource")
helper.writeevents(index="mycustomindex", host="localhost", source="localhost")

Regards, Naresh
Hi Everyone, I have one dashboard which consists of two panels: Failure and Failure Trend. I have one date-range input for which I have set the earliest time to -7d@d and the latest time to @d. The issue I am facing: when I load my dashboard for the first time, it takes the correct time range (earliest -7d@d, latest @d) and shows the data accordingly. But when I select another preset, say "Yesterday", and then select Last 7 days again, it picks the time range as earliest -7d@h and latest now, due to which my data decreases. Can someone guide me on that? Below is my XML code:

<input type="time" token="field1" searchWhenChanged="true">
  <label>Date/Time</label>
  <default>
    <earliest>-7d@d</earliest>
    <latest>@d</latest>
  </default>
</input>
<panel>
  <chart>
    <title>FAILURE TREND</title>
    <search>
      <query>index="ABC" sourcetype=XYZ FAILURE $OrgName$ | bin span=1d _time | stats count by _time</query>
      <earliest>$field1.earliest$</earliest>
      <latest>$field1.latest$</latest>
      <sampleRatio>1</sampleRatio>
    </search>
    <option name="charting.axisLabelsX.majorLabelStyle.overflowMode">ellipsisNone</option>
    <option name="charting.axisLabelsX.majorLabelStyle.rotation">0</option>
    <option name="charting.axisTitleX.visibility">visible</option>
    <option name="charting.axisTitleY.visibility">visible</option>
    <option name="charting.axisTitleY2.visibility">visible</option>
    <option name="charting.axisX.abbreviation">none</option>
    <option name="charting.axisX.scale">linear</option>
    <option name="charting.axisY.abbreviation">none</option>
    <option name="charting.axisY.scale">linear</option>
    <option name="charting.axisY2.abbreviation">none</option>
    <option name="charting.axisY2.enabled">0</option>
    <option name="charting.axisY2.scale">inherit</option>
    <option name="charting.chart">line</option>
    <option name="charting.chart.bubbleMaximumSize">50</option>
    <option name="charting.chart.bubbleMinimumSize">10</option>
    <option name="charting.chart.bubbleSizeBy">area</option>
    <option name="charting.chart.nullValueMode">gaps</option>
    <option name="charting.chart.showDataLabels">none</option>
    <option name="charting.chart.sliceCollapsingThreshold">0.01</option>
    <option name="charting.chart.stackMode">default</option>
    <option name="charting.chart.style">shiny</option>
    <option name="charting.drilldown">none</option>
    <option name="charting.layout.splitSeries">0</option>
    <option name="charting.layout.splitSeries.allowIndependentYRanges">0</option>
    <option name="charting.legend.labelStyle.overflowMode">ellipsisMiddle</option>
    <option name="charting.legend.mode">standard</option>
    <option name="charting.legend.placement">right</option>
    <option name="charting.lineWidth">2</option>
    <option name="trellis.enabled">0</option>
    <option name="trellis.scales.shared">1</option>
    <option name="trellis.size">medium</option>
  </chart>
</panel>
<panel>
  <single>
    <title>FAILURE</title>
    <search>
      <query>index="ABC" sourcetype=XYZ FAILURE $OrgName$ | bin span=1d _time | stats count by _time | eventstats first(_time) as firsttime last(_time) as lasttime | where _time = firsttime OR _time = lasttime | fields _time count</query>
      <earliest>$field1.earliest$</earliest>
      <latest>$field1.latest$</latest>
    </search>
    <option name="colorBy">trend</option>
    <option name="drilldown">all</option>
    <option name="height">100</option>
    <option name="numberPrecision">0</option>
    <option name="trendDisplayMode">percent</option>
    <option name="unit"></option>
    <option name="rangeValues">[0,10,25,40]</option>
    <option name="rangeColors">["0xFF0000","0xFF0000","0xFF0000","0xFF0000","0xFF0000"]</option>
    <option name="trendColorInterpretation">inverse</option>
    <option name="useColors">1</option>
    <option name="showSparkline">1</option>
  </single>
</panel>
In Splunk Enterprise, when looking at metrics.log with the searchscheduler group, there is a metric called "eligible", but I can't find out what it indicates.

index=_internal source=*metrics.log group=searchscheduler

For context, this is on Splunk Cloud 8.0.2006 and we have 3 search heads in a cluster. I was able to find this documentation, but in the section that talks about groups there is nothing mentioning the searchscheduler group. Here's an example event:

09-29-2020 19:06:28.281 +0000 INFO Metrics - group=searchscheduler, eligible=9, delayed=0, dispatched=0, skipped=0, total_lag=0, max_lag=0, window_max_lag=0, window_total_lag=0, max_running=3, actions_triggered=0, completed=3, total_runtime=21.251, max_runtime=19.212

The reason I am interested in this value: when looking at all of the other metrics in this group (delegated, delegated_scheduled, delegated_waiting, dispatched, eligible, skipped, delayed, completed, actions_triggered), there was a noticeable dip in only the "eligible" metric during some periods where our alerts were not triggering actions. The dip in this metric affected only 2 of the 3 search heads.
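For anyone investigating the same thing, a quick sketch for comparing the dip across search heads (the 5-minute span is arbitrary):

```
index=_internal source=*metrics.log group=searchscheduler
| timechart span=5m avg(eligible) as avg_eligible by host
```

Charting this alongside actions_triggered by host can help confirm whether the affected search heads line up with the periods of missing alert actions.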