All Posts

Just because events have multiple indexes and source types does not mean you can't use stats to correlate events in the events pipeline. In addition to @richgalloway's request, please also share some sample, representative, anonymised events showing how you would like these events to be correlated.
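A minimal sketch of that kind of correlation, assuming hypothetical index, sourcetype, and field names (session_id as the shared key):

(index=web sourcetype=access_combined) OR (index=app sourcetype=app_logs)
| stats values(sourcetype) AS sourcetypes values(status) AS status earliest(_time) AS first_seen latest(_time) AS last_seen by session_id

Each output row then combines fields from both sourcetypes that share the same session_id value.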
Your first error is deploying Splunk on Windows.   See https://community.splunk.com/t5/Getting-Data-In/What-are-the-pain-points-with-deploying-your-Splunk-architecture/m-p/650011 Please elaborate on "the search head is not working".  What about it is not working?  An error on an indexer does not necessarily mean there's a problem with the SH. One workaround is to rename the TA so it resides in a directory with a shorter name (by at least 8 characters).  Of course, you will have to maintain that forever.
First you need to identify what format the data is supposed to be in (JSON/syslog/CEF etc.). Then you need to install the TA which you have on a HF and on the cloud SH.  Then you need to ensure the TA on the HF has been configured with the correct options; you will most likely also need to ensure the Guardicore system is configured for the format you require, or for the default options - speak to the Guardicore admin.  From Splunkbase there appears to be no detailed documentation, but it does state that the TA uses the REST API and processes events received from the Syslog exporter. So it sounds like the TA app will have the config options to pull the data.
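Once the TA and input are in place, a quick sanity check from the cloud SH - the sourcetype filter here is an assumption, so check the TA's default sourcetypes:

index=* sourcetype=*guardicore* earliest=-24h
| stats count by index, sourcetype, host

If nothing is returned, the input, proxy or credential settings on the HF are the first things to revisit.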
Hey everyone,  I currently have a use case for which I set up a Splunk Enterprise environment in an Ubuntu VM (VMware) and want to build an app with the Add-on Builder, which uses a Python script as the input method to make an API call to get my data into Splunk. That's the goal at least.  The VM communicates with the Internet just fine (even if via proxy) and my Python script gets the data from the API endpoint. However, if I try to enter the proxy credentials from my VM into the configuration of the Add-on Builder, I get the following error: "There was a problem connecting to the App Certification service. The service might not be available at this time, or you might need to verify your proxy settings and try again."  Now, assuming that I did not mess up the proxy credentials, my next best bet would be that I need to give my Splunk environment a certificate to adequately communicate with the proxy. So we finally reach my question: where would I need to place such a certificate file in the directory structure so that the Splunk add-on app can find it?
Thank you for the reply
Yes, it's a bit painful. If you have made lots of /local/-based configs in your apps, back up the /opt/splunk/etc/apps folder at minimum; this way you at least have your app configs backed up and can restore those apps after you re-install Splunk to keep it clean.
@renjith_nair  not exactly Currently,  I am using checkbox type to filter out error log events and those need to be pre-defined already see the whole dashboard <form theme="light"> <label>LDP Apps monitoring</label> <fieldset submitButton="false" autoRun="false"> <input type="dropdown" token="app" searchWhenChanged="true"> <label>Application</label> <choice value="app_1">App 1</choice> <choice value="app_2">App 2</choice> <choice value="app_3">App 3</choice> <default>App 1</default> <initialValue>App 1</initialValue> </input> <input type="dropdown" token="env" searchWhenChanged="true"> <label>Environment</label> <choice value="qa">QA</choice> <choice value="uat">UAT</choice> <choice value="prod">PROD</choice> <default>prod</default> <initialValue>prod</initialValue> </input> <input type="time" token="time_range"> <label>Time Period</label> <default> <earliest>-24h@h</earliest> <latest>now</latest> </default> </input> <input type="text" token="search_input" id="search_input" searchWhenChanged="true"> <label>Search for a certain log message</label> </input> <html> <style> div[id^="search_input"]{ width: 1000px !important; } </style> </html> <input type="checkbox" token="selected" searchWhenChanged="true" id="checkboxes"> <label>Filter out frequent errors:</label> <choice value="AND NOT &quot;Error Log Message 1 to filter out&quot;">Error Log Message 1 to filter out</choice> <choice value="AND NOT &quot;Error Log Message 2 to filter out&quot;">Error Log Message 2 to filter out</choice> <choice value="AND NOT &quot;Error Log Message 3 to filter out&quot;">Error Log Message 3 to filter out</choice> <choice value="AND NOT &quot;Error Log Message 4 to filter out&quot;">Error Log Message 4 to filter out</choice> <delimiter> </delimiter> <default></default> </input> <html> <style> div[id^="checkboxes"]{ width: 1000px !important; } </style> </html> </fieldset> <row> <panel> <title>$app$ Access logs - status code</title> <chart> <title>**hardcoded time period</title> <search> <query>index="$app$-$env$" access_log status_code!="20*" | timechart span=10m count by status_code</query> <earliest>-3d</earliest> <latest>now</latest> </search> <option name="charting.axisTitleX.visibility">visible</option> <option name="charting.axisTitleY.visibility">visible</option> <option name="charting.axisTitleY2.visibility">visible</option> <option name="charting.chart">area</option> <option name="charting.chart.nullValueMode">gaps</option> <option name="charting.chart.showDataLabels">none</option> <option name="charting.chart.stackMode">default</option> <option name="charting.drilldown">none</option> <option name="charting.layout.splitSeries">1</option> <option name="charting.legend.placement">right</option> <option name="refresh.display">progressbar</option> </chart> </panel> <panel> <title>$app$ Error Frequency</title> <chart> <search> <query>index="$app$-$env$" logLevel="ERROR" $selected$ | multikv | eval ReportKey="error rate" | timechart span=30m count by ReportKey</query> <earliest>$time_range.earliest$</earliest> <latest>$time_range.latest$</latest> <sampleRatio>1</sampleRatio> <refresh>1m</refresh> <refreshType>delay</refreshType> </search> <option name="charting.chart">area</option> <option name="charting.chart.nullValueMode">connect</option> <option name="charting.chart.showDataLabels">all</option> <option name="charting.chart.stackMode">default</option> <option name="charting.drilldown">none</option> <option name="charting.layout.splitSeries">1</option> <option name="refresh.display">progressbar</option> 
</chart> </panel> </row> <row> <panel> <title>$app$ Specific Error Logs</title> <table> <search> <query>index="$app$-$env$" logLevel="ERROR" $selected$ | rex mode=sed "s:&lt;1512&gt;:\n:g" | bucket _time span=5m | table _time, logName, logLevel, _raw | sort -_time</query> <earliest>$time_range.earliest$</earliest> <latest>$time_range.latest$</latest> <sampleRatio>1</sampleRatio> <refresh>1m</refresh> <refreshType>delay</refreshType> </search> <option name="count">10</option> <option name="dataOverlayMode">none</option> <option name="drilldown">none</option> <option name="percentagesRow">false</option> <option name="refresh.display">progressbar</option> <option name="rowNumbers">false</option> <option name="totalsRow">false</option> <option name="wrap">true</option> </table> </panel> </row> <row> <panel> <title>$app$ WARN Frequency</title> <chart> <search> <query>index="$app$-$env$" logLevel="WARN" $selected$ | multikv | eval ReportKey="warn rate" | timechart span=30m count by ReportKey</query> <earliest>$time_range.earliest$</earliest> <latest>$time_range.latest$</latest> <sampleRatio>1</sampleRatio> <refresh>1m</refresh> <refreshType>delay</refreshType> </search> <option name="charting.chart">area</option> <option name="charting.chart.showDataLabels">all</option> <option name="charting.drilldown">none</option> <option name="refresh.display">progressbar</option> </chart> </panel> </row> <row> <panel> <title>$app$ Warn Messages</title> <table> <search> <query>index="$app$-$env$" logLevel="WARN" $selected$ | rex mode=sed "s:&lt;1512&gt;:\n:g" | bucket _time span=5m | table _time, logName, logLevel, _raw | sort -_time</query> <earliest>$time_range.earliest$</earliest> <latest>$time_range.latest$</latest> <refresh>1m</refresh> <refreshType>delay</refreshType> </search> <option name="count">10</option> <option name="drilldown">none</option> <option name="refresh.display">progressbar</option> <format type="color" field="logLevel"> <colorPalette type="list">[#53A051,#006D9C,#F8BE34,#F1813F,#DC4E41]</colorPalette> <scale type="threshold">0,30,70,100</scale> </format> </table> </panel> </row> <row> <panel> <title>Specific log event search</title> <chart> <title>**Copy a log message to search for an error log history, hardcoded time period</title> <search> <query>index="$app$-$env$" "$search_input$" | eval search_input="$search_input$" | where isnotnull(search_input) AND search_input!="" | multikv | eval ReportKey="searched_event" | timechart span=30m count by ReportKey</query> <earliest>$time_range.earliest$</earliest> <latest>$time_range.latest$</latest> <sampleRatio>1</sampleRatio> </search> <option name="charting.chart">column</option> <option name="charting.chart.showDataLabels">all</option> <option name="charting.drilldown">all</option> <option name="refresh.display">progressbar</option> </chart> </panel> </row> </form> I want to use text box input type to add a specific error message string in to multiselect and that multiselect will be placed to each query provided above  so can filter out a certain events without having them specified in checkboxes  Also I want to use an empty multiselect as default each time a dashboard is loaded
Have you carefully installed and deployed this add-on within your Splunk deployment architecture? Follow the instructions at https://splunkbase.splunk.com/app/4564 - click on the link and look for the "where to install this add-on" section first. You would typically install this onto a heavy forwarder if you are using one and set up the inputs; this would forward the data to the indexers and the data will be parsed. The add-on is also required on the Search Heads for the knowledge objects (search-time parsing), so it needs to be installed there, into the correct path. So install everything as required, configure it and then look at the logs. If you have already configured it as required, then this log message indicates something else. It states "The system cannot find the path specified" - have you installed it correctly?
@vananhnguyen , Here is a run anywhere example. Are you looking for something like this ? You can change the value in the dropdown and the colors will be reset { "visualizations": { "viz_NJsTjQl4": { "type": "splunk.singlevalue", "options": { "majorColor": "> majorValue | matchValue(majorColorEditorConfig)" }, "dataSources": { "primary": "ds_275I8YNY" }, "context": { "majorColorEditorConfig": [ { "match": "Running", "value": "#118832" }, { "match": "Stopped", "value": "#d41f1f" } ] } } }, "dataSources": { "ds_275I8YNY": { "type": "ds.search", "options": { "query": "| makeresults\n| eval value=\"$status$\"" }, "name": "Search_1" } }, "defaults": { "dataSources": { "ds.search": { "options": { "queryParameters": { "latest": "$global_time.latest$", "earliest": "$global_time.earliest$" } } } } }, "inputs": { "input_global_trp": { "type": "input.timerange", "options": { "token": "global_time", "defaultValue": "-24h@h,now" }, "title": "Global Time Range" }, "input_BHJAbWl2": { "options": { "items": [ { "label": "Running", "value": "Running" }, { "label": "Stopped", "value": "Stopped" } ], "token": "status", "selectFirstSearchResult": true }, "title": "Status", "type": "input.dropdown" } }, "layout": { "type": "grid", "options": { "width": 1440, "height": 960 }, "structure": [ { "item": "viz_NJsTjQl4", "type": "block", "position": { "x": 0, "y": 0, "w": 1440, "h": 400 } } ], "globalInputs": [ "input_global_trp", "input_BHJAbWl2" ] }, "description": "", "title": "single_panel_studio" }   Reference : https://docs.splunk.com/Documentation/Splunk/9.2.1/DashStudio/visualEditDynamic 
Hi, thank you for your suggestion. When I use a scheduled report and look at the recent search, Splunk appends "| summaryindex" at the end of the search, not the "collect" command.  So I thought "collect" always refers to a "manual" push, versus "summary index" in a scheduled report.  My understanding is that you have 2 scheduled reports. Is the following accurate? 1) roll forward existing data (from and to the same summary index - say index A), and 2) push new data (from a different index to the summary index - say from index X to index A). In my case, the data from dbxquery always gets re-written, so I only need the latest data, but I may use your method in the future. Thanks for this. Based on the link that you sent and the following post, it looks like I still need the CSV file (see below: it does inputlookup from the CSV first, then outputlookup to the KV store). Is this correct?  My goal is to avoid having a CSV file, since there's a limit on its size.  https://community.splunk.com/t5/Getting-Data-In/How-to-transfer-existing-CSV-data-to-kvstore/m-p/144641 | inputlookup filename.csv | outputlookup lookup_name Thank you again.
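If the only goal is to skip the intermediate CSV, one hedged option - assuming a KV-store-backed lookup definition already exists (called my_kvstore_lookup here, with my_connection and my_table as placeholder names) - is to write the dbxquery results straight to it:

| dbxquery connection="my_connection" query="SELECT * FROM my_table"
| outputlookup my_kvstore_lookup

The inputlookup-from-CSV step in the linked post is only needed as a one-time migration for data that already lives in a CSV file.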
Huh.  Guess I was just assuming that it needed both, and that's the way I've always done it.  Now I'll have to play around in the lab and see what happens when I remove it.   Thanks!
@Splunkerninja , Can we add another row only for the html panel and hide the result row/panel with a "depends" token? Something like <row> <panel depends="$hide_this_always$"> <table> <search> <done> <eval token="date">strftime(now(), "%d-%m-%Y")</eval> <set token="sid">$job.sid$</set> </done> <query>index=test</query> <earliest>-24h@h</earliest> <latest>now</latest> <sampleRatio>1</sampleRatio> </search> <option name="count">20</option> <option name="dataOverlayMode">none</option> <option name="drilldown">none</option> <option name="percentagesRow">false</option> <option name="refresh.display">progressbar</option> <option name="rowNumbers">false</option> <option name="totalsRow">false</option> <option name="wrap">true</option> </table> </panel> </row> <row> <html> <a href="/api/search/jobs/$sid$/results?isDownload=true&amp;timeFormat=%25FT%25T.%25Q%25%3Az&amp;maxLines=0&amp;count=0&amp;filename=test_$date$.csv&amp;outputMode=csv" class="button js-button">Download</a> <style> .button { background-color: steelblue; border-radius: 5px; color: white; padding: .5em; text-decoration: none; } .button:focus, .button:hover { background-color: #2A4E6C; color: White; } </style> </html> </row>
You can make use of the KV store to store and retrieve information: https://dev.splunk.com/enterprise/docs/developapps/manageknowledge/kvstore/  https://dev.splunk.com/enterprise/docs/developapps/manageknowledge/kvstore/usetherestapitomanagekv/ https://dev.splunk.com/enterprise/docs/developapps/manageknowledge/kvstore/uselookupswithkvstore/ Splunk JS can then be used to control the button actions and process the record. There were some dev materials available explaining the overall process, but it looks like they have been removed. There is some third-party documentation publicly available for Splunk KV store CRUD.
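For simple read/write cases the KV store can also be driven from SPL once a lookup definition points at the collection; a sketch with a hypothetical lookup name and fields:

| inputlookup my_kvstore_lookup where status="open"

| makeresults | eval status="closed", updated_by="button_click" | outputlookup my_kvstore_lookup append=true

For true per-record CRUD (update or delete by _key) from the button's Splunk JS handler, the REST endpoints in the links above are the way to go.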
Hi, so far I haven't seen any missing data, so it's not causing a problem, apart from the error message. I am not sure why Splunk has to throw the error message.  Thank you for your help.
If the step has neither Success nor Failure, the counts for these columns will be zero
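For example, a sketch of how that falls out of a typical stats, with hypothetical field names result and step:

... | stats count(eval(result="Success")) AS Success count(eval(result="Failure")) AS Failure by step

A step whose events carry neither value still gets a row, just with 0 in both the Success and Failure columns.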
@pevniacik , Are you looking for something like this ? Test by selecting few projects and add a text "Error" to the text box to filter <form version="1.1" theme="light"> <label>MultiSelect_Text</label> <fieldset submitButton="false"> <input type="multiselect" token="Project"> <label>Project</label> <valuePrefix>"</valuePrefix> <valueSuffix>"</valueSuffix> <delimiter>,</delimiter> <fieldForLabel>Project</fieldForLabel> <fieldForValue>Project</fieldForValue> <search> <query>|makeresults count=5|streamstats count |eval Project="Project".count|eval Record="Some records "|eval Record=if(count%2==0,Record,Record."Error")</query> <earliest>-24h@h</earliest> <latest>now</latest> </search> </input> <input type="text" token="text_filter" searchWhenChanged="true"> <label>Text to Filter</label> <default>*</default> </input> </fieldset> <row> <panel> <table> <search> <query>|makeresults count=5|streamstats count |eval Project="Project".count|eval Record="Some records "|eval Record=if(count%2==0,Record,Record."Error") |where Project in ($Project$) AND NOT like (Record,"%$text_filter$%")</query> <earliest>-24h@h</earliest> <latest>now</latest> </search> <option name="drilldown">none</option> <option name="refresh.display">progressbar</option> </table> </panel> </row> </form>    
Please share the query. Please tell us what "not working" means.  What results do you get and how do those results not meet expectations?
In that case you could rework your search so that it has either zero or 1 row depending on whether the condition is met, and set your token based on the number of results returned.
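A hedged sketch of that pattern, with a made-up threshold condition:

index=my_index sourcetype=my_sourcetype
| stats count
| where count > 100

This returns exactly one row when the condition is met and zero rows otherwise, so the dashboard's done handler can set or unset the token based on $job.resultCount$.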
Your choices are to work on the F5 LB: speak to your network team about the VIP/pool/failover/iRules config and test it out to see what works best. (It's not my area of expertise, I'm just concept-aware.) Note: the Splunk UF is not a load balancer in the networking sense (it contains an auto load-balancing function to spray the data across multiple indexers if you have multiples of them and to even the data out; it is not designed to fail over to another UF based on load). The UF is an agent to collect data and send it to Splunk.