rjthibod's Topics

On Splunkbase, the Markdown interpreter/compiler that is used to convert Apps' Details pages to HTML doesn't properly render Markdown code blocks. Specifically, it converts HTML-reserved characters into their HTML entity formats. For example, this text

    cd <ROOT_DIRECTORY>

is rendered as the following on App Details pages:

    cd &lt;ROOT_DIRECTORY&gt;

It doesn't appear to matter whether you use single backticks, triple backticks, or four-space indentation to delimit the code block; the same thing happens in all cases. This is really annoying if you try to include segments of HTML or XML in app documentation. Here is a link to an example page where this is happening: https://splunkbase.splunk.com/app/3672/#/details
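For reference, these are the three delimiter styles I tried; all of them produce the same escaped output:

    Inline code: `cd <ROOT_DIRECTORY>`

    Fenced block:
    ```
    cd <ROOT_DIRECTORY>
    ```

    Indented block (four spaces):
        cd <ROOT_DIRECTORY>
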
Does Splunk have any guidelines or limitations on the number of dimensions (i.e., cardinality) that the new Metrics Index supports? Are there specific limitations in terms of the number of dimensions, the number of unique values of a single dimension, or the number of unique combinations of dimensions for a single measurement?

I understand that Splunk's search and indexing performance is always contingent on the hardware/platform. I just want to know whether there are any hard limits built into the design of the Metrics Index or a configuration threshold, or better yet, whether Splunk can provide some benchmarks about data sets they have tested. I have seen other metric stores / time-series databases enforce these kinds of limits (in configuration settings), hence the question.
Splunk 7.0 introduced the Metrics Index feature and a whole new naming scheme. Is Splunk planning to use or offer something similar to the CIM for metrics index measurements and dimensions? Will data in the Metrics Index be targeted for integration with ITSI and ES based on the naming conventions? App developers are in the process of migrating things into the Metrics Index, so it would be good to know what to plan for.
In Splunk 7.0.0, when sending data to a metrics index, it looks like one can send duplicate metric measurement events (e.g., the same tuple of time, metric name, and dimensions) and the metrics index will store all duplicates, thereby affecting the statistics that come out. Is that the intended behavior for the metrics index?

Other time-series metric stores/indexes/DBs I have played with use overwrite/last-in logic that only preserves the most recently indexed value for a given metric tuple. Using similar logic here would seem to make more sense for the use cases I would see for the metric store, but I freely admit to making assumptions. Please clarify how allowing duplicate metric events is intended to be used / handled.

Note, my understanding of a distinct metric tuple is the timestamp (to milliseconds), metric name, and dimension fields. So, assuming the following two metric tuples arrive at the indexer at different times (the first column), only the later one (the top row) would be saved in the index. Right now (as of Splunk 7.0.0), both are saved in the metrics index/store.

    | indexing timestamp | metric timestamp | metric name     | metric value | server      |
    | 1506708015.390     | 1506708000.000   | server.power.kw | 126.06       | na-server-1 |
    | 1506708010.242     | 1506708000.000   | server.power.kw | 104.56       | na-server-1 |

Additional comments after posting

The example data I provided above is simply made up in order to simplify the discussion. Don't interpret it as relevant to the question - it's just an example. Some points to consider:

- At least two other time-series databases for metrics don't allow duplicate events: InfluxDB and OpenTSDB. I haven't fully evaluated others (e.g., DataDog); these are just examples that I know of.
- Splunk's documentation openly says individual events are not really relevant in metrics indexes; you cannot filter or search on the metric value field in mstats.
- The delete command doesn't work for metrics (i.e., you can't delete duplicates if they happen).
- By allowing duplicate tuples {timestamp, metric name, dimensions} in metrics indexes, backfilling via saved searches or resending of metrics becomes very, very difficult. Backfilling using metrics distilled from event indexes is very easy if you use write-last / last-in / overwrite logic.
- Running aggregate metric sources (like my example above - total power consumed in an hour) becomes very challenging with the current duplicate-metric logic.
- Clustered environments raise the risk of getting duplicate events in the face of delays / blocked queues and resent events.

So, maybe it is best to sum this up as one question: is the Splunk metrics index feature intended to work like other time-series databases with strict write-logic limitations, or is it an optimized / pared-down version of the standard Splunk event index?
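To make the skew concrete, consider an aggregation like the following (a sketch; the index name my_metrics is hypothetical). With both rows above stored, the hourly sum includes both 126.06 and 104.56 instead of only the last-written value:

    | mstats sum(_value) WHERE index=my_metrics AND metric_name="server.power.kw" span=1h BY server
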
I am playing with a custom format for data going into Splunk on Splunk 7.0, and I am trying to extract fields at index time. I cannot use search-time extraction, so please don't ask.

When doing indexed extractions in transforms.conf, I am trying to extract the host field along with many other values in a single transformation step. There are no other transformation steps being applied besides this one. If I try to consolidate all of the extractions, my data appears with a field called extracted_host instead of host. The transform has the following form (I left out details of REGEX and other fields because they are not important - all of them work as expected and none are metadata/reserved fields):

    [my-custom-metrics]
    KEEP_EMPTY_VALS = true
    REGEX = ^...
    FORMAT = ... host::$3 ...
    WRITE_META = true

Everything works fine if I use a second extraction for host with DEST_KEY = MetaData:Host. This writes the correct value to the host field and does not generate an extracted_host field.

    [my-custom-metrics-host]
    REGEX = ^...
    FORMAT = host::$1
    DEST_KEY = MetaData:Host

Is there some explanation for why this would be the case? Is this documented anywhere? Does this prefixing of reserved/metadata fields hold true whenever WRITE_META = true is used?
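For completeness, here is roughly how the transforms are wired up in props.conf (a sketch; the sourcetype name is hypothetical, and the second transform is only listed in the working variant):

    [my-custom-sourcetype]
    TRANSFORMS-metrics = my-custom-metrics, my-custom-metrics-host
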
This is somewhat related to this question: https://answers.splunk.com/answers/551786/governance-and-licensing-for-add-on-builder-develo.html. Pinging @rpille_splunk since they gave the answer there.

Many of the Splunk-built apps (Splunk 6.x Dashboard Examples, Splunk custom viz apps, etc.) have the scary Splunk Software License Agreement (https://www.splunk.com/en_us/legal/splunk-software-license-agreement.html) listed on Splunkbase. If you look at the internals of these apps, some have no license or copyright listed, but others will actually say that they are licensed under MIT (e.g., the Timeline and Parallel Coordinates custom viz apps).

Following from the question linked above, is the intention that widely distributed free apps like the Splunk 6.x Dashboard Examples app should be under MIT, the new Splunk App EULA (https://www.splunk.com/en_us/legal/splunk-app-end-user-license-agreement.html), or something else? Many of the JS files in the Splunk 6.x Dashboard Examples app have been used or referenced in other non-Splunk apps, and it would be great to have some clarity on what kind of open-source rights apply to these common, free apps.
For some time (since at least Splunk 6.3), SimpleXML has supported using the <style> element to include CSS settings directly in a SimpleXML dashboard. It commonly appears in a hidden element like the following:

    <row depends="hiddenCSS">
      <html>
        <style>
          /* insert CSS here */
        </style>
      </html>
    </row>

This does not seem to be documented in the SimpleXML reference: http://docs.splunk.com/Documentation/Splunk/latest/Viz/PanelreferenceforSimplifiedXML#html

From my own testing, I know that using CSS attribute selectors, e.g., [id^="panel_custom"], is not supported with this method. However, directly specifying CSS classes and IDs seems to work fine.

Questions:
- Does Splunk consider this an officially supported use of SimpleXML?
- If so, are there any other side effects or limitations that users should be aware of?
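To make the selector behavior concrete, this is the kind of CSS I placed inside the <style> element above (the panel ID is hypothetical):

    /* Works in my testing: direct ID and class selectors */
    #panel_custom_1 { background-color: #f5f5f5; }

    /* Silently ignored in my testing: attribute selectors */
    [id^="panel_custom"] { background-color: #f5f5f5; }
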
This is a request to make token behaviors consistent for form inputs (applies to versions 6.2 through 6.6.1 as of posting). Here is a link to the docs page about the types of tokens I am referring to: http://docs.splunk.com/Documentation/Splunk/latest/Viz/tokens#Define_tokens_for_conditional_operations_with_form_inputs

Basically, the request is to make the searchWhenChanged setting on the input element apply to the tokens that can be set under the change and condition elements. The problem is that if you have an input that sets searchWhenChanged="false" so that the input token is only applied when tokens are submitted, any tokens you set using set, eval, or unset inside the change and condition elements of the input are immediately applied in the dashboard. This happens because those "child" tokens of the input are immediately applied to both the default and submitted token models, regardless of the searchWhenChanged setting.

A simple scenario where this is a problem is any time you use a dropdown or radio button to set a field used in the "by" clause of a chart. If you have any custom operations or axis-label changes that are field dependent and set under change or condition of the dropdown, the chart is going to be immediately updated whenever the value changes, even if the top-level token value is supposed to require a submit event. Here is XML to demonstrate the use case:

    <input id="target_field" searchWhenChanged="false" token="target_field" type="radio">
      <label>Split-by Field</label>
      <choice value="process">Executables</choice>
      <choice value="url_domain">URL Domains</choice>
      <default>process</default>
      <change>
        <condition value="url_domain">
          <set token="target_field_filter">valid_url=1</set>
          <set token="target_field_axes_label">$label$</set>
        </condition>
        <condition value="*">
          <set token="target_field_filter">&#32;</set>
          <set token="target_field_axes_label">$label$</set>
        </condition>
      </change>
    </input>

In this use case, any chart or search that is based on the tokens target_field_filter or target_field_axes_label will be automatically rerun or updated when the radio button changes, even though the searchWhenChanged="false" setting would indicate that the top-level token requires a submit event.
I recently saw a reference in slides from .conf 2016 (https://conf.splunk.com/files/2016/slides/dashboard-wizardry.pdf, slide 16) suggesting that one can reference a specific token namespace in SimpleXML, e.g., $submitted:mytoken$ or $default:mytoken$. Initial searches for documentation came up empty, and simple tests in 6.4 seem to fail. Can anyone clarify whether this is supported, which versions support it, and whether there is any documentation?
Splunk 6.5 added global environment tokens that are accessible in SimpleXML (http://docs.splunk.com/Documentation/Splunk/6.5.0/Viz/tokens#Use_global_tokens_to_access_environment_information), one of which reports the version of the Splunk instance: $env:version$.

How does one obtain the equivalent version number from an app in Splunk 6.4 or older? I am most interested in obtaining this value using SplunkJS or SimpleXML, e.g., a function call in SplunkJS or a rest search. I am aware that some of the global environment tokens are accessible via the REST API, but I cannot seem to find anything that reports the version. A SplunkJS function call would be the most ideal.

Here is a related question for the splunk_server value: https://answers.splunk.com/answers/506296/is-there-a-javascript-token-with-the-hostname-of-t.html
Splunk 6.5 added global environment tokens that are accessible in SimpleXML (http://docs.splunk.com/Documentation/Splunk/6.5.0/Viz/tokens#Use_global_tokens_to_access_environment_information).

My question is: how does one obtain these token values from SplunkJS, such as in a JavaScript extension to a SimpleXML dashboard? Trying to obtain the values from defaultTokenModel or submittedTokenModel returns no result, and exploring those objects in the Chrome debugger does not indicate the values would be defined in those objects.

I understand some of these values are available via REST API calls. I am specifically interested in getting them via SplunkJS.
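For reference, here is a sketch of the kind of attempt described above; the token key "env:version" is my guess at how a global token would be named in the models:

    require(['splunkjs/mvc', 'splunkjs/mvc/simplexml/ready!'], function (mvc) {
        // The default and submitted token models are registered as components
        var defaultTokenModel = mvc.Components.get('default');
        var submittedTokenModel = mvc.Components.get('submitted');

        // Both return undefined in my testing
        console.log(defaultTokenModel.get('env:version'));
        console.log(submittedTokenModel.get('env:version'));
    });
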
Is there a Splunk documentation page that details the full specification/schema for the app navigation file, default.xml, for recent versions of Splunk? There are specification pages on Splunk docs for so many other files, but I can't find anything that details the full set of options in default.xml for Splunk versions 6.3 or newer.
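For context, here is a sketch of the kind of default.xml I mean, pieced together from examples rather than any spec (the view names are hypothetical); an exhaustive list of supported elements and attributes is exactly what I am after:

    <nav search_view="search" color="#65A637">
      <view name="search" default="true" />
      <collection label="Dashboards">
        <view name="my_dashboard" />
        <a href="http://example.com/docs">External docs</a>
      </collection>
    </nav>
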
In the inputs.conf spec for collecting perfmon data (https://docs.splunk.com/Documentation/Splunk/6.5.1/Admin/Inputsconf#Performance_Monitor), there is an option called "instances". The description of the option suggests that it allows one to specify string patterns that filter the reported perfmon data based on whether the instance field from the host matches the string specified in the stanza.

For example, if one wanted to capture perfmon data for all instances of svchost, I would assume this could be done by specifying a stanza like the following:

    [perfmon://Process]
    counters = Working Set;Virtual Bytes;% Processor Time;Handle Count;Thread Count;Elapsed Time;Creating Process ID;ID Process;
    disabled = 0
    index = perfmon
    instances = svchost*
    interval = 30
    object = Process
    mode = multikv
    showZeroValue = 1

Setting up the stanza in this way does not result in all instances of svchost being reported with the prescribed configuration. Instead, the only thing reported back is the perfmon data for the top-level, parent svchost process, and its value for the "instance" field is set to the pattern in the stanza, e.g., "svchost*". None of the child svchost processes (whose instances should be svchost#1, svchost#2, etc.) are reported.

Is this the expected behavior? I tested this with Splunk Forwarder 6.4.4 and Splunk Add-on for Windows version 4.8.0 on Windows 10 64-bit. Another user (@Yorokobi) reported seeing this on Windows Server 2012 R2.
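For anyone trying to reproduce this, the check boils down to counting events per reported instance value (a sketch; it assumes the stanza above is the only perfmon input writing to that index):

    index=perfmon | stats count BY instance

With the stanza above, the only instance value that comes back is the literal pattern "svchost*" rather than svchost, svchost#1, svchost#2, and so on.
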
Is there any built-in mechanism (e.g., settings in limits.conf or server.conf) that would throttle the execution of the Splunk Universal Forwarder in such a way that it stops collecting perfmon data (via the Splunk Add-on for Microsoft Windows) when the host is under distress with high CPU or high memory utilization?

I am not talking about throttling data output or throughput. This is not a matter of limiting outgoing data. Instead, I am talking about the collection scripts not running when the system is heavily loaded.

I cannot reproduce it on my Windows 7 system, but I seem to recall seeing it a long time ago, and someone else is reporting that they have observed this behavior. The perfmon collection resumes once the system is no longer under distress, which is why I suspect there might be some configuration option that tells the forwarder to stop collecting if a certain CPU threshold is reached.
I have a deployment server app that makes changes on the target client. Part of the process requires closing another application. I would like to present a Windows form box that allows the user to acknowledge that the application needs to be shut down. The deployment server app uses PowerShell to perform everything. All pieces of the script work fine except for showing the user form.

If I run the script locally on the machine (as admin, or even as the system account using PsExec), everything works fine and I see the Windows form. However, running the script in the deployment server app results in the call to show the Windows form returning immediately with the response "OK".

In PowerShell, the simple way of doing this is with a sequence like the following. My issue is that the second line always returns immediately if I try to do this in the deployment app. Running it any other way seems to work fine.

    [System.Reflection.Assembly]::LoadWithPartialName("System.Windows.Forms") | Out-Null;
    $response = [System.Windows.Forms.MessageBox]::Show("User Input Text", "Window Title", ...);
    Write-Host "User selected $response"

I have also tried many other variations, including adding "$this" as the first argument to [System.Windows.Forms.MessageBox]::Show and putting the call in a background job. Like I said, it always works outside of running as the deployment app, but the deployment app execution never shows the form because the call always returns immediately.
I currently use various macros to store default values (thresholds, static filter strings, etc.) in an app. These default-value macros (especially the numerical ones) can be changed by a customer/organization to suit their use cases/environment. I have other search macros that use those default values in their pipeline. For example, assume there is a default alert threshold for a metric, and I have a search macro that uses that threshold to trigger alerts, write to a summary index, etc.

My question is: what is the appropriate way to pass those default threshold/alerting macro values into search-related macros? I have devised a solution that I will share, but I wanted to see if anyone else has a suggestion. Note: assume lookups are not really an option. Besides, I am pretty sure my solution would work the same for lookups.

Here is more detail about an example set of macros:

    [default_metric_5min_threshold]
    description = Default threshold (in seconds) to trigger an alert for my custom sourcetype
    iseval = 0
    definition = 100

    ...

    [alerts_default_metric_search_5min_span]
    description = Search for calculating alert using default alert threshold value. Assume 5-min sliding window to aggregate data.
    iseval = 0
    definition = <INSERT_MACRO_MAGIC>
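For illustration, the direction I have been experimenting with is nesting the default-value macro directly in the search macro's definition (a sketch; the index and field names are hypothetical):

    [alerts_default_metric_search_5min_span]
    description = Search for calculating alert using default alert threshold value.
    iseval = 0
    definition = index=my_metrics earliest=-5m | stats avg(response_time) AS avg_response_time | where avg_response_time > `default_metric_5min_threshold`
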
I noticed that timewrap came up as a suggested SPL command in a Splunk 6.5 search box (see attachment). The command does seem to work. I do not have the timewrap app installed on this system.

Is timewrap officially part of the SPL lexicon in 6.5? If so, are people going to encounter significant problems if they have the timewrap app installed on a Splunk 6.5 system?
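For reference, this is the shape of search where I saw the command work (a sketch; any timechart output should do):

    index=_internal source=*metrics.log | timechart span=1h count | timewrap 1day
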
If I have key-value pair events and fields that are automatically extracted with KV_MODE=auto in props.conf, can I apply a field transformation to an extracted field?

For example, I have a field UserName that appears in the raw events (e.g., ... UserName="ryan" ...). I want a field user to appear at search time, but I don't want to use an EVAL- or a FIELDALIAS- clause in props.conf, because I don't want to overload the server and how it looks for fields (see https://splunkbase.splunk.com/app/2871/ and the explanation of how litsearch works).

I have tried using this in props.conf:

    [my_src_type]
    KV_MODE = auto
    REPORT-extractions = RenameUser,ExtractSessionType

And the following in transforms.conf:

    [RenameUser]
    SOURCE_KEY = UserName
    REGEX = (.+)
    FORMAT = user::"$1"

    [ExtractSessionType]
    REGEX = SessionName="(?<SessionType>\w+(-\w+)*)\S*"

The "SessionType" field extractions from the "SessionName" field are successful, but the "UserName" field is never renamed to "user". Is this possible with the key-value extractions being applied first? I have looked in the job inspector and found no mention of errors or issues.
The background color of the Splunk Web timechart tooltip is all black. Depending on the data series color, it can be very hard to read the data series' name in the tooltip.

Is there some way in CSS or SimpleXML to control the background color of the tooltip so that I can lighten it up? No AdvancedXML. JS extensions are fine, but CSS and SimpleXML are preferred.

Assume Splunk Enterprise 6.2 or newer. Assume that the default data series colors are going to be used, i.e., I am not going to generate a custom set of series colors for all of my dashboards.
The documentation for Pivot has included a "LIMIT" operator for quite some time, yet there is no explanation of what it does or how to use it. I have not yet seen any examples that use it either. Can someone shed some light on this?

Here is the snippet of documentation from the Pivot knowledge base page. This is all there is - nothing else is mentioned:

    Descriptions for limit elements

    Limit <limit expression>
    Syntax: LIMIT <fieldname> BY <limittype> <number> <stats-function>(<fieldname>)
    Description:

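Based purely on that syntax line, my best guess at how a LIMIT element composes into a pivot search is something like the following (untested; the data model, object, and field names are placeholders borrowed from other pivot examples):

    | pivot Tutorial HTTP_requests count(HTTP_requests) AS "Count" SPLITROW host LIMIT host BY top 5 count(HTTP_requests)

If anyone can confirm the accepted <limittype> values or the intended semantics, that would answer the question.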