All Posts

And just to make sure: there is no problem with simply removing the file? I assume you first check for any local changes you want to keep, but otherwise you can just delete the file and move on?
You are correct; local.meta trumps default.meta for a given app.  To get the default.meta to take effect again, the local.meta stanza or file must be removed.
Yes, definitely the parsing server! And this is what I found out: the server was NOT set to UTC. I had them change it and I'm good to go. However, I don't see how I can explicitly set the TZ at index time, since the timezone is not part of the string being sent to us, and I don't see where we can set the TZ on the sourcetype. As always, I appreciate the education, everyone. Thank you!
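For reference, the timezone for a given sourcetype can be forced in props.conf on whichever instance does the parsing; a minimal sketch, where the sourcetype name is a placeholder to replace with your own:

```ini
# props.conf on the parsing tier (heavy forwarder or indexer)
[my:custom:sourcetype]   # placeholder sourcetype name
TZ = UTC                 # interpret timestamps that carry no timezone as UTC
```

TZ only applies when the event's timestamp string itself contains no timezone information, which matches the situation described above.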
I have a use case where a Splunk end user should only be allowed to search a subset of events in an index. For example, restrict the end user to searching only the customers' data that they are authorised to see. Is there a smart way of doing this in Splunk? I looked into different solutions like Splunk Apps, External Lookup, custom parameters in OAuth... Building a new front-end app that uses the Splunk search API is one way; however, that is probably not the smartest way of doing it. I guess I'm not the first one with this use case.
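One built-in mechanism worth considering here: a role-level search filter in authorize.conf is appended automatically to every search run by members of that role. A minimal sketch, where the role name, index, and field/value are placeholders for your own data model:

```ini
# authorize.conf
[role_customer_a_analyst]        # placeholder role name
importRoles = user
srchIndexesAllowed = my_index    # placeholder index
srchFilter = customer_id="A"     # placeholder field/value; appended to every search this role runs
```

Users would then be assigned the role matching the customer data they are authorised to see.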
No. "Assuming local time if the server is UTC" (I assume you're talking about the parsing server, not the source) is OK only if the source sends the data in UTC.
Thanks for the answer, at least the indexers are in the same environment as the smartstore and not on-prem.
Have a dashboard where I have a time input; drop-down 1 (DD1) depends on the time input, and multi-select drop-down 2 (DD2) depends on DD1. Once all the input is provided, the user hits the "Submit" button and the resulting chart should be displayed. All of this works well as long as separate search queries are used in each of them. Once the time changes, DD1 is searched and its values are displayed. Once DD1 is selected, the DD2 search starts, and the corresponding values are displayed. All goes well. Here's a working example:

<form version="1.1" theme="light">
  <label>Technical - HTTP Metrics</label>
  <fieldset submitButton="true" autoRun="false">
    <input type="time" token="time_duration_token" searchWhenChanged="false">
      <label>Select a time range</label>
      <default>
        <earliest>-24h@h</earliest>
        <latest>now</latest>
      </default>
    </input>
    <input type="dropdown" token="service_name_token" searchWhenChanged="false">
      <label>Select microservice</label>
      <fieldForLabel>source</fieldForLabel>
      <fieldForValue>source</fieldForValue>
      <search>
        <query>
          index="cloud_world"
          | spath source
          | search event.logger="*CustomLoggingMeterRegistry*"
          | rex field=event.message "(?&lt;metric_name&gt;[a-z.]+)"
          | search metric_name="http.server.requests"
          | dedup source
        </query>
        <earliest>$time_duration_token.earliest$</earliest>
        <latest>$time_duration_token.latest$</latest>
      </search>
    </input>
    <input type="multiselect" token="http_uri_multiselect_token" searchWhenChanged="false">
      <label>Select URI</label>
      <fieldForLabel>http_uri</fieldForLabel>
      <fieldForValue>http_uri</fieldForValue>
      <search>
        <query>
          index="cloud_world"
          | spath source
          | search source=$service_name_token|s$ event.logger="*CustomLoggingMeterRegistry*"
          | rex field=event.message "(?&lt;metric_name&gt;.*){.*,status=(?&lt;http_status&gt;[\d]{3}),uri=(?&lt;http_uri&gt;.*)}.*mean=(?&lt;mean_time&gt;[\d.]+)s\smax=(?&lt;max_time&gt;[\d.]+)"
          | search metric_name="http.server.requests"
          | top http_uri
        </query>
        <earliest>$time_duration_token.earliest$</earliest>
        <latest>$time_duration_token.latest$</latest>
      </search>
      <delimiter>,</delimiter>
      <valueSuffix>"</valueSuffix>
      <valuePrefix>"</valuePrefix>
    </input>
    <input type="checkbox" token="http_status_token">
      <label>Select HTTP status</label>
      <choice value="&quot;200&quot;, &quot;201&quot;">2xx</choice>
      <choice value="&quot;400&quot;, &quot;401&quot;">4xx</choice>
      <delimiter> </delimiter>
    </input>
  </fieldset>
  <row>
    <panel>
      <title>Mean time by URI</title>
      <chart>
        <title>Mean time</title>
        <search>
          <query>
            index="cloud_world"
            | spath source
            | search source=$service_name_token|s$ event.logger="*CustomLoggingMeterRegistry*"
            | rex field=event.message "(?&lt;metric_name&gt;.*){.*,status=(?&lt;http_status&gt;[\d]{3}),uri=(?&lt;http_uri&gt;.*)}.*mean=(?&lt;mean_time&gt;[\d.]+)s\smax=(?&lt;max_time&gt;[\d.]+)"
            | search metric_name="http.server.requests"
            | where http_uri in($http_uri_multiselect_token$) AND http_status in($http_status_token$)
            | chart max(mean_time) over _time by http_uri usenull=f useother=false
          </query>
        </search>
        <option name="charting.axisTitleX.text">Time</option>
        <option name="charting.axisTitleX.visibility">collapsed</option>
        <option name="charting.axisTitleY.text">Time (in ms)</option>
        <option name="charting.axisTitleY.visibility">collapsed</option>
        <option name="charting.axisTitleY2.visibility">collapsed</option>
        <option name="charting.chart">line</option>
        <option name="charting.chart.nullValueMode">connect</option>
        <option name="charting.chart.showDataLabels">minmax</option>
        <option name="charting.drilldown">none</option>
        <option name="charting.layout.splitSeries">0</option>
        <option name="charting.legend.placement">none</option>
        <option name="refresh.display">progressbar</option>
        <option name="trellis.enabled">1</option>
        <option name="trellis.scales.shared">1</option>
        <option name="trellis.size">medium</option>
      </chart>
    </panel>
  </row>
</form>

As you can see, the same searches are used everywhere, hence I decided to use a base search for the input drop-downs, as below:
<form version="1.1" theme="light">
  <label>Technical - HTTP Metrics</label>
  <search id="httpMetricsBaseSearch">
    <query>
      index="cloud_world"
      | spath source
      | search event.logger="*CustomLoggingMeterRegistry*"
      | rex field=event.message "(?&lt;metric_name&gt;[a-z.]+){(?&lt;metric_dimensions&gt;.*)}\s(?&lt;metric_measurements&gt;.*)"
      | search metric_name="http.server.requests"
      | rex field=metric_dimensions "status=(?&lt;http_status&gt;[\d]{3}),uri=(?&lt;http_uri&gt;.*)"
      | rex field=metric_measurements "mean=(?&lt;mean_time&gt;[\d.]+)s\smax=(?&lt;max_time&gt;[\d.]+)"
      | table source, http_uri, http_status, max_time, mean_time, _time
    </query>
    <earliest>$time_duration_token.earliest$</earliest>
    <latest>$time_duration_token.latest$</latest>
  </search>
  <fieldset submitButton="true" autoRun="false">
    <input type="time" token="time_duration_token" searchWhenChanged="false">
      <label>Select a time range</label>
      <default>
        <earliest>-24h@h</earliest>
        <latest>now</latest>
      </default>
    </input>
    <input type="dropdown" token="service_name_token" searchWhenChanged="false">
      <label>Select microservice</label>
      <fieldForLabel>source</fieldForLabel>
      <fieldForValue>source</fieldForValue>
      <search base="httpMetricsBaseSearch">
        <query>
          | dedup source
        </query>
      </search>
    </input>
    <input type="multiselect" token="http_uri_multiselect_token" searchWhenChanged="false">
      <label>Select URI</label>
      <fieldForLabel>http_uri</fieldForLabel>
      <fieldForValue>http_uri</fieldForValue>
      <search base="httpMetricsBaseSearch">
        <query>
          | where source=$service_name_token|s$
          | dedup http_uri
        </query>
      </search>
      <delimiter>,</delimiter>
      <valueSuffix>"</valueSuffix>
      <valuePrefix>"</valuePrefix>
    </input>
    <input type="checkbox" token="http_status_token">
      <label>Select HTTP status</label>
      <choice value="&quot;200&quot;, &quot;201&quot;">2xx</choice>
      <choice value="&quot;400&quot;, &quot;401&quot;">4xx</choice>
      <delimiter> </delimiter>
    </input>
  </fieldset>
</form>

In this case, if I change the time from the time picker, the "Select microservice" dropdown is not re-searched. This used to happen when the searches were separate, but after switching to a base search it just doesn't work. It actually starts to search once the "Submit" button is clicked, but I want to reserve that for the "final" submit, i.e. when the user has provided all their inputs. Is there a way to fix this, or is it that base searches are not supposed to be used in input searches when submitButton="true"?
The documentation is correct. Once you go to SmartStore you can't go back; anything else would be a science experiment. Switching to SmartStore (S2) should not have caused the problems you listed. Search performance can be affected if the S2 cache is too small or users tend to search over more than 30 days. Is SmartStore in the same environment as your indexers? Using a cloud S2 with on-prem indexers is likely to cause problems and be expensive.
Why does OneIdentity override the dnslookup transform, changing the parameter names from clientip to ip and from clienthost to host?
Hello, I have a few services that today send data to some index via code. We are going to remove this index and create a new one, but we cannot change the code, so I want to change the routing with transforms.conf + props.conf, using regexes that extract the service name from the source field and the environment from _raw. This is my transforms.conf file:

[service_extraction]
SOURCE_KEY = source
REGEX = \/var\/log\/pods\/(.+?)_
FORMAT = complaince_int_front::@service_$environment
DEST_KEY = _MetaData:Index
LOOKAHEAD = 40000

[environment_extraction]
SOURCE_KEY = sourcetype::kube:container:mockapiservice
REGEX = "Region":"(.+?)"
FORMAT = complaince_int_front::@service_$1
DEST_KEY = _MetaData:Index
LOOKAHEAD = 40000

I guess I did something wrong since it's not working.
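For comparison, a minimal index-routing transform usually looks like the sketch below. The sourcetype, regex, and index name are placeholders; note that when DEST_KEY is _MetaData:Index, FORMAT must resolve to a literal index name (numbered capture groups like $1 work, but named variables such as $environment do not), and the transform must be wired up from props.conf:

```ini
# props.conf (placeholder sourcetype)
[kube:container:mockapiservice]
TRANSFORMS-route_index = service_index_routing

# transforms.conf
[service_index_routing]
SOURCE_KEY = MetaData:Source          # match against the source metadata field
REGEX = /var/log/pods/([^_]+)_        # capture the service name from the pod path
FORMAT = my_new_index_$1              # placeholder; must yield an existing index name
DEST_KEY = _MetaData:Index
```

This only takes effect at parse time (on an indexer or heavy forwarder), and the resulting index must already exist.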
Thanks for the reply @PickleRick. Sure, there will be a third column containing only assets that are not seen in both sources simultaneously, and in addition, at the end of the list there should be totals of these assets. Would you be able to develop a sample solution for this, please? Thank you.
You can either use the "splunk _internal call" command on the cmdline or use:

| rest /services/cluster/manager/buckets
| where multisite_bucket=0 AND standalone=0

(or "false" instead of 0, I'm not sure here)
That use case is not supported by WLM admission rules.  Go to https://ideas.splunk.com to make a case for it.
Hi @tscroggins, I was using the search app to run:

| ldapsearch search="(&(objectClass=user))" attrs=name, accountExpires

accountExpires is the attribute causing the aforementioned error. I know the property exists because I am able to retrieve it via Get-ADUser.
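As a side note, accountExpires is stored in Active Directory as a Windows FILETIME (100-nanosecond intervals since 1601-01-01), so even once it is returned it usually needs converting before it is readable. A hedged sketch, assuming the attribute comes back as the raw integer:

```spl
| ldapsearch search="(&(objectClass=user))" attrs="name,accountExpires"
| eval accountExpires_epoch = accountExpires/10000000 - 11644473600
| eval accountExpires_readable = strftime(accountExpires_epoch, "%Y-%m-%d %H:%M:%S")
```

Values of 0 or 9223372036854775807 mean "never expires" and would need special-casing before the conversion.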
Good day, First I want to say that this add-on is an absolute lifesaver when it comes to getting structured data into Splunk, and if you ever put it up on GitHub please let me know - I'd be happy to contribute. I have found a few minor issues.  I'll be using the following json in my examples:     {"total":52145,"rows": [ {"discoverable_guid":"94937859-A157-4C43-94AC-290172D50C4D","component_cpe":{"cpe23":"cpe:2.3:a:oracle:java_runtime_environment:1.8.0_381"},"cve":[]}, {"discoverable_guid":"2B933591-6192-4E42-9DFC-32C361D32208","component_cpe":{"cpe23":"cpe:2.3:a:oracle:jdk\\/sdk:1.8.0_201"},"cve":[]}, {"discoverable_guid":"DD854B8C-5900-518C-B8B6-096285936816","component_cpe":{"cpe23":"cpe:2.3:o:microsoft:windows_defender:4.18.1909.6"},"cve":[{"name":"CVE-2006-5270"},{"name":"CVE-2018-0986"},{"name":"CVE-2021-24092"},{"name":"CVE-2021-1647"},{"name":"CVE-2020-1170"},{"name":"CVE-2020-1163"},{"name":"CVE-2020-0835"},{"name":"CVE-2017-8558"},{"name":"CVE-2017-8541"},{"name":"CVE-2017-8540"},{"name":"CVE-2017-8538"},{"name":"CVE-2017-0290"},{"name":"CVE-2019-1255"},{"name":"CVE-2013-0078"},{"name":"CVE-2011-0037"},{"name":"CVE-2020-1461"},{"name":"CVE-2020-1002"},{"name":"CVE-2019-1161"},{"name":"CVE-2017-8542"},{"name":"CVE-2017-8539"},{"name":"CVE-2017-8537"},{"name":"CVE-2017-8536"},{"name":"CVE-2017-8535"},{"name":"CVE-2008-1438"},{"name":"CVE-2008-1437"}]}, {"discoverable_guid":"ADF7E72A-4A72-4D92-B278-F644E27EA88F","component_cpe":{"cpe23":"cpe:2.3:a:microsoft:.net_framework:4.8.04084"},"cve":[{"name":"CVE-2020-0646"},{"name":"CVE-2020-0606"},{"name":"CVE-2020-0605"},{"name":"CVE-2020-1147"},{"name":"CVE-2022-26832"},{"name":"CVE-2021-24111"},{"name":"CVE-2020-1108"},{"name":"CVE-2019-1083"},{"name":"CVE-2019-1006"},{"name":"CVE-2019-0981"},{"name":"CVE-2019-0980"},{"name":"CVE-2019-0820"},{"name":"CVE-2023-36873"},{"name":"CVE-2022-41064"},{"name":"CVE-2020-16937"},{"name":"CVE-2020-1476"},{"name":"CVE-2019-0864"},{"name":"CVE-2022-30130"}]}, 
{"discoverable_guid":"2B933591-6192-4E42-9DFC-32C361D32208","component_cpe":{"cpe23":"cpe:2.3:a:oracle:jdk\\/sdk:1.8.0_261"},"cve":[]} ]}

1. There are certain cases where nested JSON is rendered in Splunk with single quotes (') instead of double quotes ("), which makes me have to use

| rex mode=sed field=<field_with_nested_json> "s/\'/\"/g"

to make it compatible with spath.

2. The "autoextract=0" option when pulling down JSON does not put the contents into a _raw field (as stated in your docs), but instead seems to do first-level extraction. EDIT - covered in #3 below.

3. The "dataKey" parameter: all of the parameters seem to be case-sensitive - "dataKey=rows" produces correct content, vs "datakey=rows", which seems to ignore the parameter entirely.

4. Your docs don't seem to match the feature set or version in all places - the Splunkbase "details" tab still refers to 1.3.2, and the add-on "About" tab (after install) refers to 1.3.3 but does not include details of the URL parsing features that can only be found in your release notes on Splunkbase.

5. The flattenJson parameter does not seem to be working at all. I find references to it in the code, but if I put it into the search as a parameter, Splunk does not recognize it as such, and it does not treat it as a custom field either.

As I said above, this add-on is great work, and literally the only things I could ask for "extra" are maybe XML parsing, and perhaps being able to pass URL parameters as an array.

EDIT: A little more testing made me realize that a lot of my problems are specific to the capitalization of the command parameters. I've edited #3 above.
Hi @phanTom, Thanks for your reply. On my connector, there are some actions that are repeated a lot, and logging them could flood the logs. I was hoping to add those log entries only if the customer chose to enable them. What is the reason to have different levels of logging if we cannot decide whether to print them or not?
With which part of the task do you need help? What have you tried so far? Have you seen the Website Monitoring app (https://splunkbase.splunk.com/app/1493)?
The absolute best practice is to always explicitly define your timestamp along with its timezone. You can get away with not doing so most of the time, but when it doesn't work, always revert to a proper definition.
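Such an explicit definition lives in props.conf on the parsing tier; a minimal sketch, where the sourcetype name and format string are placeholders to adapt to your actual data:

```ini
# props.conf (placeholder sourcetype and format)
[my:custom:sourcetype]
TIME_PREFIX = ^                        # timestamp sits at the start of the event
TIME_FORMAT = %Y-%m-%d %H:%M:%S        # strptime pattern matching the data
MAX_TIMESTAMP_LOOKAHEAD = 19           # only scan this many characters for it
TZ = UTC                               # timezone to assume when none is present
```

With all four set, Splunk never has to guess where the timestamp is, how to parse it, or which timezone it is in.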
@jamie1 - Here is generic guidance that will boost your Splunk journey. For data collection, you should look for add-ons on Splunkbase. For dashboards, you should look for apps on Splunkbase. Generally, these apps and add-ons work the same on-prem and in the Cloud (and that's why you don't see much difference in the documentation: there isn't much difference). Regarding dashboards, this is something you can try: https://docs.splunk.com/Documentation/CPWindowsDash/latest/CP/About. But if this is just about CPU performance, you can also create your own based on your requirements. I hope this helps!!! Kindly upvote if it does!!!
Hello everyone, Two-parter. First of all, am I correct in assuming that /appname/metadata/local.meta takes precedence over /appname/metadata/default.meta? The reason for this question is that, while applying changes in an SH cluster from a deployer, changes made to default.meta in an app have no effect, and changes in local.meta are retained. Second, what is the best practice for editing the local.meta file? I think I, at least, need to completely remove the read/write permissions so it falls back on the default.meta file:

[]
access = read : [ ], write : [ ]
export = none
version = x.x.x.x
modtime = tttttttttt.tttttttt

Otherwise, any future edits made and rolled out will not take effect, as there is a local.meta file. Though I was hoping that I could just delete the entire local.meta file? To be clear, the actual question: can I
    a) delete the entire local.meta file, or do I have to
    b) edit out the desired section in the local.meta file?
I know I can edit access to alerts/dashboards etc. via the GUI, though I'd like to edit everything in the app in one single move from the CLI.
All the best // f