All Topics


When monitoring Windows systems, which logs do you find give the best information for identifying security events and then tracking an event down from start to finish?
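
For anyone comparing notes, a minimal hedged starting point is the Windows Security event log — the index and sourcetype names below are assumptions and vary per deployment (4624 logon, 4625 failed logon, 4688 process creation, 4720 account created):

  index=wineventlog sourcetype="WinEventLog:Security" EventCode IN (4624, 4625, 4688, 4720)
  | stats count by EventCode, user, host
  | sort - count
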
I am trying to convert a dashboard from Simple XML to Dashboard Studio. In the original dashboard there is a token that uses "$click.name2$", which links to the corresponding field name in another dashboard. To my understanding, the equivalent of "$click.name2$" from Simple XML should be "$name$" in Dashboard Studio; however, when I use "$name$" the correct value is not returned. What would be the equivalent of "$click.name2$" in Dashboard Studio? This is for a single value.
I need to look for an incoming email, and if the email matches a certain subject, I need to check another sourcetype to see whether, within an hour of that email coming through, there was a hit in that sourcetype.
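
One hedged way to express this correlation in SPL — the index, sourcetype, and subject values below are placeholders — is to pull both event types into one search, carry the last matching email time forward with streamstats, and keep only the hits that fall within the hour:

  (index=mail sourcetype=email subject="Target subject") OR (index=app sourcetype=other_sourcetype)
  | eval type=if(sourcetype=="email", "email", "hit")
  | sort 0 _time
  | streamstats last(eval(if(type=="email", _time, null()))) as last_email_time
  | where type=="hit" AND _time - last_email_time <= 3600
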
Hi All, I am almost a beginner in Splunk, but my org uses this tool as a log management utility. I need help getting a direction on how to filter data from the logs of a distributed async logging product. Problem statement: there are multiple log files on multiple Linux boxes being generated every second. I need to:
1. Search for the IDs created, their creation timestamps, and the batches under which these IDs exist.
2. Filter the IDs based on passed batches (this is another line in the same log file).
3. Calculate the E2E time for each ID's processing by finding the processed ID from step 1 and subtracting the timestamp of step 1 from the timestamp of step 3 (this is again printed in the log files).
I have been doing this using Oracle external tables and Linux shell scripts, but I need a better way using Splunk; opinions are highly appreciated.
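
A rough SPL sketch of the E2E calculation, assuming hypothetical field extractions (id, batch) and marker strings — the rex patterns would have to be adapted to the real log format:

  index=app_logs ("id created" OR "id processed")
  | rex "id=(?<id>\w+)"
  | rex "batch=(?<batch>\w+)"
  | eval stage=if(searchmatch("id created"), "created", "processed")
  | stats earliest(eval(if(stage=="created", _time, null()))) as created_time
          latest(eval(if(stage=="processed", _time, null()))) as processed_time
          values(batch) as batch
          by id
  | where isnotnull(created_time) AND isnotnull(processed_time)
  | eval e2e_seconds=processed_time - created_time
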
Hi, I have the below SPL, which returns today's count vs. yesterday's count and the difference between them. I want that, if I run this search on a Monday, "yesterday" is last Friday's data instead of the weekend. Could you please help? SPL:

  base search earliest=@d latest=now
  | append [ base search earliest=-1d@d latest=-1d ]
  | eval Day=if(_time<relative_time(now(),"@d"),"Yesterday","Today")
  | chart count by Name, Day
  | eval percentage_variance=abs(round(((Yesterday-Today)/Yesterday)*100,2))
  | table Name Today Yesterday percentage_variance
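
One hedged approach (a sketch, not tested against the poster's data): compute the comparison window in a subsearch so that on Mondays it points at Friday, then hand it to the appended search via | return. strftime "%w" yields 0 for Sunday, so "1" is Monday and -3d reaches back to Friday:

  base search earliest=@d latest=now
  | append
      [ base search
        [ | makeresults
          | eval dow=strftime(now(), "%w")
          | eval earliest=if(dow=="1", relative_time(now(), "-3d@d"), relative_time(now(), "-1d@d"))
          | eval latest=if(dow=="1", relative_time(now(), "-3d"), relative_time(now(), "-1d"))
          | return earliest latest ] ]
  | eval Day=if(_time<relative_time(now(), "@d"), "Yesterday", "Today")
  | chart count by Name, Day
  | eval percentage_variance=abs(round(((Yesterday-Today)/Yesterday)*100, 2))
  | table Name Today Yesterday percentage_variance
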
I have a use-case where a Splunk end-user should only be allowed to search a subset of events in an index; for example, restrict the end-user to searching only the customers' data that they are authorised to see. Is there a smart way of doing this in Splunk? I looked into different solutions such as Splunk apps, external lookups, and custom parameters in OAuth. Building a new front-end app that uses the Splunk search API is one way; however, that is probably not the smartest way of doing it. I guess I'm not the first one with this use-case.
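
If the subsets can be expressed as a search-time filter, one hedged sketch is a role-based search filter in authorize.conf (the role name, index, and customer_id field below are hypothetical):

  [role_customer_acme]
  importRoles = user
  srchIndexesAllowed = customer_data
  srchFilter = customer_id=acme

Splunk silently ANDs srchFilter onto every search the role runs, so users cannot opt out of it; per-customer roles do multiply, though, so this fits a modest number of customers better than thousands.
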
I have a dashboard with a time input; dropdown 1 (DD1) depends on the time input, and multiselect dropdown 2 (DD2) depends on DD1. Once all the input is provided, the user hits the "Submit" button and the resulting chart is displayed. All of this works well as long as separate search queries are used in each of them: once the time changes, DD1's search runs and its values are displayed; once DD1 is selected, DD2's search starts and the corresponding values are displayed. All goes well. Here's a working example:

<form version="1.1" theme="light">
  <label>Technical - HTTP Metrics</label>
  <fieldset submitButton="true" autoRun="false">
    <input type="time" token="time_duration_token" searchWhenChanged="false">
      <label>Select a time range</label>
      <default>
        <earliest>-24h@h</earliest>
        <latest>now</latest>
      </default>
    </input>
    <input type="dropdown" token="service_name_token" searchWhenChanged="false">
      <label>Select microservice</label>
      <fieldForLabel>source</fieldForLabel>
      <fieldForValue>source</fieldForValue>
      <search>
        <query>
          index="cloud_world"
          | spath source
          | search event.logger="*CustomLoggingMeterRegistry*"
          | rex field=event.message "(?&lt;metric_name&gt;[a-z.]+)"
          | search metric_name="http.server.requests"
          | dedup source
        </query>
        <earliest>$time_duration_token.earliest$</earliest>
        <latest>$time_duration_token.latest$</latest>
      </search>
    </input>
    <input type="multiselect" token="http_uri_multiselect_token" searchWhenChanged="false">
      <label>Select URI</label>
      <fieldForLabel>http_uri</fieldForLabel>
      <fieldForValue>http_uri</fieldForValue>
      <search>
        <query>
          index="cloud_world"
          | spath source
          | search source=$service_name_token|s$ event.logger="*CustomLoggingMeterRegistry*"
          | rex field=event.message "(?&lt;metric_name&gt;.*){.*,status=(?&lt;http_status&gt;[\d]{3}),uri=(?&lt;http_uri&gt;.*)}.*mean=(?&lt;mean_time&gt;[\d.]+)s\smax=(?&lt;max_time&gt;[\d.]+)"
          | search metric_name="http.server.requests"
          | top http_uri
        </query>
        <earliest>$time_duration_token.earliest$</earliest>
        <latest>$time_duration_token.latest$</latest>
      </search>
      <delimiter>,</delimiter>
      <valueSuffix>"</valueSuffix>
      <valuePrefix>"</valuePrefix>
    </input>
    <input type="checkbox" token="http_status_token">
      <label>Select HTTP status</label>
      <choice value="&quot;200&quot;, &quot;201&quot;">2xx</choice>
      <choice value="&quot;400&quot;, &quot;401&quot;">4xx</choice>
      <delimiter> </delimiter>
    </input>
  </fieldset>
  <row>
    <panel>
      <title>Mean time by URI</title>
      <chart>
        <title>Mean time</title>
        <search>
          <query>
            index="cloud_world"
            | spath source
            | search source=$service_name_token|s$ event.logger="*CustomLoggingMeterRegistry*"
            | rex field=event.message "(?&lt;metric_name&gt;.*){.*,status=(?&lt;http_status&gt;[\d]{3}),uri=(?&lt;http_uri&gt;.*)}.*mean=(?&lt;mean_time&gt;[\d.]+)s\smax=(?&lt;max_time&gt;[\d.]+)"
            | search metric_name="http.server.requests"
            | where http_uri in($http_uri_multiselect_token$) AND http_status in($http_status_token$)
            | chart max(mean_time) over _time by http_uri usenull=f useother=false
          </query>
        </search>
        <option name="charting.axisTitleX.text">Time</option>
        <option name="charting.axisTitleX.visibility">collapsed</option>
        <option name="charting.axisTitleY.text">Time (in ms)</option>
        <option name="charting.axisTitleY.visibility">collapsed</option>
        <option name="charting.axisTitleY2.visibility">collapsed</option>
        <option name="charting.chart">line</option>
        <option name="charting.chart.nullValueMode">connect</option>
        <option name="charting.chart.showDataLabels">minmax</option>
        <option name="charting.drilldown">none</option>
        <option name="charting.layout.splitSeries">0</option>
        <option name="charting.legend.placement">none</option>
        <option name="refresh.display">progressbar</option>
        <option name="trellis.enabled">1</option>
        <option name="trellis.scales.shared">1</option>
        <option name="trellis.size">medium</option>
      </chart>
    </panel>
  </row>
</form>

As you can see, the same searches are used everywhere, hence I decided to use a base search for the input dropdowns, as below:

<form version="1.1" theme="light">
  <label>Technical - HTTP Metrics</label>
  <search id="httpMetricsBaseSearch">
    <query>
      index="cloud_world"
      | spath source
      | search event.logger="*CustomLoggingMeterRegistry*"
      | rex field=event.message "(?&lt;metric_name&gt;[a-z.]+){(?&lt;metric_dimensions&gt;.*)}\s(?&lt;metric_measurements&gt;.*)"
      | search metric_name="http.server.requests"
      | rex field=metric_dimensions "status=(?&lt;http_status&gt;[\d]{3}),uri=(?&lt;http_uri&gt;.*)"
      | rex field=metric_measurements "mean=(?&lt;mean_time&gt;[\d.]+)s\smax=(?&lt;max_time&gt;[\d.]+)"
      | table source, http_uri, http_status, max_time, mean_time, _time
    </query>
    <earliest>$time_duration_token.earliest$</earliest>
    <latest>$time_duration_token.latest$</latest>
  </search>
  <fieldset submitButton="true" autoRun="false">
    <input type="time" token="time_duration_token" searchWhenChanged="false">
      <label>Select a time range</label>
      <default>
        <earliest>-24h@h</earliest>
        <latest>now</latest>
      </default>
    </input>
    <input type="dropdown" token="service_name_token" searchWhenChanged="false">
      <label>Select microservice</label>
      <fieldForLabel>source</fieldForLabel>
      <fieldForValue>source</fieldForValue>
      <search base="httpMetricsBaseSearch">
        <query>
          | dedup source
        </query>
      </search>
    </input>
    <input type="multiselect" token="http_uri_multiselect_token" searchWhenChanged="false">
      <label>Select URI</label>
      <fieldForLabel>http_uri</fieldForLabel>
      <fieldForValue>http_uri</fieldForValue>
      <search base="httpMetricsBaseSearch">
        <query>
          | where source=$service_name_token|s$
          | dedup http_uri
        </query>
      </search>
      <delimiter>,</delimiter>
      <valueSuffix>"</valueSuffix>
      <valuePrefix>"</valuePrefix>
    </input>
    <input type="checkbox" token="http_status_token">
      <label>Select HTTP status</label>
      <choice value="&quot;200&quot;, &quot;201&quot;">2xx</choice>
      <choice value="&quot;400&quot;, &quot;401&quot;">4xx</choice>
      <delimiter> </delimiter>
    </input>
  </fieldset>
</form>

In this case, if I change the time from the time picker, the "Select microservice" dropdown is not re-searched. This used to happen when the searches were separate; after switching to the base search, it just doesn't work. It actually starts to search once the "Submit" button is clicked, but I want to reserve that for the "final" submit, i.e. when the user has provided all inputs. Is there a way to fix this, or are base searches not supposed to be used in input searches when submitButton="true"?
Why does oneidentity override the dnslookup transform, changing the parameter names from clientip to ip and from clienthost to host?
Hello, I have a few services that today send data to a certain index via code. We are going to remove this index and create a new one, but we cannot change the code, so I want to redirect the events with transforms.conf + props.conf, using regexes that extract the service name from the source field and the environment from _raw. This is my transforms.conf file:

  [service_extraction]
  SOURCE_KEY = source
  REGEX = \/var\/log\/pods\/(.+?)_
  FORMAT = complaince_int_front::@service_$environment
  DEST_KEY = _MetaData:Index
  LOOKAHEAD = 40000

  [environment_extraction]
  SOURCE_KEY = sourcetype::kube:container:mockapiservice
  REGEX = "Region":"(.+?)"
  FORMAT = complaince_int_front::@service_$1
  DEST_KEY = _MetaData:Index
  LOOKAHEAD = 40000

I guess I did something wrong since it's not working.
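
For comparison, a hedged sketch of a single working index rewrite (sourcetype and index names are placeholders): FORMAT can only reference the same transform's own capture groups such as $1, not variables like $environment set elsewhere, and with DEST_KEY = _MetaData:Index the FORMAT must evaluate to the literal target index name:

  # props.conf
  [kube:container:mockapiservice]
  TRANSFORMS-route_index = route_by_region

  # transforms.conf
  [route_by_region]
  # _raw is the default SOURCE_KEY, so the Region value can be captured directly
  REGEX = "Region":"(.+?)"
  FORMAT = complaince_int_front_$1
  DEST_KEY = _MetaData:Index
  LOOKAHEAD = 40000

Also note that SOURCE_KEY expects a field name such as source or _raw; scoping a transform to one sourcetype is done from the props.conf stanza, not via SOURCE_KEY.
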
Good day, First I want to say that this add-on is an absolute lifesaver when it comes to getting structured data into Splunk, and if you ever put it up on GitHub please let me know - I'd be happy to contribute. I have found a few minor issues.  I'll be using the following json in my examples:     {"total":52145,"rows": [ {"discoverable_guid":"94937859-A157-4C43-94AC-290172D50C4D","component_cpe":{"cpe23":"cpe:2.3:a:oracle:java_runtime_environment:1.8.0_381"},"cve":[]}, {"discoverable_guid":"2B933591-6192-4E42-9DFC-32C361D32208","component_cpe":{"cpe23":"cpe:2.3:a:oracle:jdk\\/sdk:1.8.0_201"},"cve":[]}, {"discoverable_guid":"DD854B8C-5900-518C-B8B6-096285936816","component_cpe":{"cpe23":"cpe:2.3:o:microsoft:windows_defender:4.18.1909.6"},"cve":[{"name":"CVE-2006-5270"},{"name":"CVE-2018-0986"},{"name":"CVE-2021-24092"},{"name":"CVE-2021-1647"},{"name":"CVE-2020-1170"},{"name":"CVE-2020-1163"},{"name":"CVE-2020-0835"},{"name":"CVE-2017-8558"},{"name":"CVE-2017-8541"},{"name":"CVE-2017-8540"},{"name":"CVE-2017-8538"},{"name":"CVE-2017-0290"},{"name":"CVE-2019-1255"},{"name":"CVE-2013-0078"},{"name":"CVE-2011-0037"},{"name":"CVE-2020-1461"},{"name":"CVE-2020-1002"},{"name":"CVE-2019-1161"},{"name":"CVE-2017-8542"},{"name":"CVE-2017-8539"},{"name":"CVE-2017-8537"},{"name":"CVE-2017-8536"},{"name":"CVE-2017-8535"},{"name":"CVE-2008-1438"},{"name":"CVE-2008-1437"}]}, {"discoverable_guid":"ADF7E72A-4A72-4D92-B278-F644E27EA88F","component_cpe":{"cpe23":"cpe:2.3:a:microsoft:.net_framework:4.8.04084"},"cve":[{"name":"CVE-2020-0646"},{"name":"CVE-2020-0606"},{"name":"CVE-2020-0605"},{"name":"CVE-2020-1147"},{"name":"CVE-2022-26832"},{"name":"CVE-2021-24111"},{"name":"CVE-2020-1108"},{"name":"CVE-2019-1083"},{"name":"CVE-2019-1006"},{"name":"CVE-2019-0981"},{"name":"CVE-2019-0980"},{"name":"CVE-2019-0820"},{"name":"CVE-2023-36873"},{"name":"CVE-2022-41064"},{"name":"CVE-2020-16937"},{"name":"CVE-2020-1476"},{"name":"CVE-2019-0864"},{"name":"CVE-2022-30130"}]}, {"discoverable_guid":"2B933591-6192-4E42-9DFC-32C361D32208","component_cpe":{"cpe23":"cpe:2.3:a:oracle:jdk\\/sdk:1.8.0_261"},"cve":[]} ]}     1. There are certain cases where nested json is rendered in splunk with  single quotes (') instead of double-quotes("): which makes me have to use a      | rex mode=sed field=<field_with_nested_json> "s/\'/\"/g"     to make it compatible with spath. 2. The "autoextract=0" option when pulling down json does not put the contents into a _raw field (as stated in your docs), but instead seems to do first-level extraction -  So a page that contains the following json:    EDIT - covered in #3 below Renders looking like this when I use getwatchlist json <url> autoextract=0 3.  the "dataKey" parameter All of the parameters seem to be case-sensitive - "dataKey=rows" produces correct content (below) vs "datakey=rows", which seems to ignore the parameter entirely 4. your docs don't seem to match the feature set or version in all places - Splunkbase "details" tab still refers to 1.3.2 Add-on "About" tab (after install) refers to 1.3.3, but does not include details of the url parsing features that can only be found in your release notes on Splunkbase 5. The flattenJson parameter does not seem to be working at all.  I find references to it in the code, but if I put it into the search as a parameter Splunk does not recognize it as such, but it also does not treat it as a custom field either. 
As I said above, this add-on is great work, and literally the only things I could ask for as "extras" are maybe XML parsing and perhaps being able to pass URL parameters as an array. EDIT: A little more testing made me realize that a lot of my problems are specific to the capitalization of the command parameters. I've edited #3 above.
Hello everyone. Two-parter. First of all, am I correct in assuming that /appname/metadata/local.meta takes precedence over /appname/metadata/default.meta? The reason for this question is that while applying changes to a SH cluster from a deployer, changes made to default.meta in an app have no effect, and changes in local.meta are retained. Second, what is the best practice for editing the local.meta file? I think I, at least, need to completely remove the read/write permissions so it falls back on the default.meta file:

  []
  access = read : [ ], write : [ ]
  export = none
  version = x.x.x.x
  modtime = tttttttttt.tttttttt

Otherwise any future edits made and rolled out will not take effect because there is a local.meta file. Though I was hoping that I could just delete the entire local.meta file? To be clear, the actual question: can I
    a) delete the entire local.meta file, or do I have to
    b) edit out the desired section in the local.meta file?
I know I can edit access to alerts/dashboards etc. via the GUI, though I'd like to edit everything in the app in one single move from the CLI. All the best // f
Hi, can someone please assist me in setting up assets and identities from scratch, and what prerequisites are necessary for this? Thanks in advance.
I am working in a Classic dashboard. I have a gateway address (URL: abc23.com) and I want to check its value after every dashboard refresh, then either display the results of the URL and/or a single value visual in green or red: green when the URL status is "OK", red otherwise. Any ideas on how I can accomplish this task? I created a Python script that extracts the value into a log, and the dashboard checks the log, but this doesn't seem like the best approach and isn't really what I want.
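
Sticking with the script-plus-log approach for a moment, a hedged sketch (index, sourcetype, and the status field are placeholders) that feeds a single value panel:

  index=ops sourcetype=gateway_status
  | head 1
  | eval ok=if(status=="OK", 1, 0)
  | fields ok

and in the panel's Simple XML, color by the numeric value:

  <option name="colorBy">value</option>
  <option name="useColors">1</option>
  <option name="rangeValues">[0]</option>
  <option name="rangeColors">["0xdc4e41","0x53a051"]</option>
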
I'm trying to create an admission rule in workload management with the following behavior: any search with "=*" in the index term returns a predefined message. My intention is to block any search that contains "=*" anywhere in the index name, such as "index=splun*", "index=spl*", "index=_internal*", etc. I didn't find anything about this in the documentation. Is there any way to create a general rule for this case?
Hello All, we recently migrated all our indexes to Splunk SmartStore, with Azure Blob as the remote storage. After that we noticed several problems in our environment:
- Buckets stuck in the fixup state more often.
- Indexing queues full (with no major spike in indexed data).
- A huge increase in the number of buckets.
And the list goes on. We are considering reverting to persistent disk for data storage; however, according to the Splunk documentation it is not possible to revert an index configured for SmartStore back to persistent disk. I'm still looking for a way to do it, because with the above issues the search performance is abysmal. We have around 6 indexers, each with around 800k buckets, and the current data on remote storage (SmartStore) is 50 TB. Are there any ways to migrate back to persistent disk? Looking forward to any gray-area methods to try out as well. Thanks
Please advise on the optimal solution for this business task. I have a set of events with the following fields:

  city: Almaty
  country: KZ
  latitude: 43.2433
  longitude: 76.8646
  region: Almaty

What would be the best approach to derive a field indicating the local time of these events using the provided information?
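
Splunk has no built-in coordinate-to-timezone conversion, so one hedged sketch is a small lookup — a hypothetical tz_by_city lookup mapping city/country to a UTC offset — plus an eval; note that fixed offsets ignore DST, which happens to be safe for Kazakhstan:

  index=geo_events
  | lookup tz_by_city city country OUTPUT utc_offset_hours
  | eval local_time=strftime(_time + utc_offset_hours*3600, "%Y-%m-%d %H:%M:%S")
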
Hello. I have a question about the captain selection process. Let me ask using the example below.
1. In a cluster of four search heads, the captain goes down.
2. Among the remaining three, the search head whose election timer expires earliest asks the other two to vote for it.
3. The remaining two members vote for that search head.
4. Although two votes were received, the captain election fails, because three votes are required under the majority rule.
That is the captain selection process as I understand it. However, when I tried it myself, even with one of the four members down, a new captain was automatically elected from the three remaining members. How is this possible when there are not enough votes?
We are using an /api base URL; is that correct for .splunkrc? It asks for a host, and in our environment we use a URL. Thanks for your help!

.splunkrc:

  # Splunk host (default: localhost)
  host=splunkurl/api
  # Splunk admin port (default: 8089)
  port=443
  # Splunk username
  username=
  # Splunk password
  password=
  # Access scheme (default: https)
  scheme=https
  # Your version of Splunk (default: 6.3)
  version=9.0.4
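
For reference, a hedged sketch of a conventional .splunkrc: the SDK clients assemble scheme://host:port themselves, so host is typically a bare hostname with no path segment (the hostname below is a placeholder); whether an /api gateway prefix can be expressed here at all depends on the client.

  # .splunkrc sketch; splunk.example.com is a placeholder hostname
  host=splunk.example.com
  port=8089
  scheme=https
  username=admin
  password=
  version=9.0.4
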
Hi, we have a Cloud instance, and we would like predictive storage analysis for future requirements. As part of that, I was trying to find the most accurate options available. While looking at it, I noticed that our daily ingestion is around 450-500 GB, but the searchable storage (DDAS) grew by around 60 GB compared to the previous day. Could you please let me know whether I'm missing anything in this calculation? Secondly, is there a way to do predictive SVC & storage analysis (DDAS & DDAA) for future requirements?
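
For the trend side, one hedged sketch — assuming the license usage log is searchable in your stack — is to chart daily ingest and extrapolate with predict:

  index=_internal source=*license_usage.log* type=Usage
  | timechart span=1d sum(b) as bytes
  | eval ingest_gb=round(bytes/1024/1024/1024, 2)
  | fields _time ingest_gb
  | predict ingest_gb future_timespan=90
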
Hello! Tell me, is there a ready-made solution in Splunk that makes it possible to save data from dashboards into Excel? This functionality is needed for all existing dashboards. It would be nice if an xls item appeared on the export button. Thank you!