All Posts


Hello Community, any assistance would be appreciated. I'm trying to figure out why my table is not populating.

<form version="1.1" theme="dark">
  <label>ATM Analyzer</label>
  <fieldset submitButton="false" autoRun="true">
    <input type="dropdown" token="status_token">
      <label>Status</label>
      <fieldForLabel>eventstatus</fieldForLabel>
      <fieldForValue>eventstatus</fieldForValue>
      <selectFirstChoice>true</selectFirstChoice>
      <search>
        <query>index=atm source="D:\\Program Files\\file.dat" | dedup eventstatus | table eventstatus</query>
      </search>
      <default>INFO</default>
      <initialValue>INFO</initialValue>
    </input>
    <input type="dropdown" token="atm_token" searchWhenChanged="false">
      <label>ATM</label>
      <selectFirstChoice>true</selectFirstChoice>
      <search>
        <query>index=atm source="D:\\Program Files\\file.dat" | search eventstatus=$status_token$ | dedup atmnumber | table atmnumber</query>
      </search>
      <fieldForLabel>atmnumber</fieldForLabel>
      <fieldForValue>atmnumber</fieldForValue>
      <choice value="*">All</choice>
      <default>*</default>
      <initialValue>*</initialValue>
    </input>
    <input type="dropdown" token="event_token">
      <label>Event</label>
      <selectFirstChoice>true</selectFirstChoice>
      <search>
        <query>index=atm source="D:\\Program Files\\file.dat" | search (eventstatus=$status_token$ AND atmnumber=$atm_token$) | dedup eventtype | table eventtype</query>
      </search>
      <fieldForLabel>eventtype</fieldForLabel>
      <fieldForValue>eventtype</fieldForValue>
      <choice value="*">All</choice>
      <default>*</default>
      <initialValue>*</initialValue>
    </input>
    <input type="time" token="timerange">
      <label></label>
      <default>
        <earliest>-24h@h</earliest>
        <latest>now</latest>
      </default>
    </input>
  </fieldset>
  <row>
    <panel>
      <table>
        <search>
          <query>index=atm source="D:\\Program Files\\file.dat" where (eventstatus=$status_token$ AND atmnumber="atm_token" AND eventtype=$event_token$) | rename eventtime as Time, eventstatus as Status, atmnumner as ATM, eventtype as Fault, eventdescription as Description | table Time Status ATM Fault Description</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="count">20</option>
        <option name="dataOverlayMode">none</option>
        <option name="drilldown">none</option>
        <option name="percentagesRow">false</option>
        <option name="rowNumbers">false</option>
        <option name="totalsRow">false</option>
        <option name="wrap">true</option>
      </table>
    </panel>
  </row>
</form>
I renamed and replaced the directory with a fresh download. The error has gone away, but I'm wondering if I broke a Security app that may have altered the files. Splunk is so massively large that it's daunting for newbies. Glad I seem to have gotten rid of the error, though.
Hi @tonishantsms,

The functionality is deprecated, but the single value visualization still supports automatic color-coding using rangemap and the range values severe (red), high (orange), elevated (yellow), guarded (blue), and low (green). You can take advantage of this functionality by combining the rangemap and chart commands with a trellised single value visualization:

| makeresults format="csv" data="cf_app_name,error_rate
foo,0
bar,6
baz,51"
| rangemap field=error_rate UP=0-5 ISSUE=6-50 default=DOWN
| rename range as status
| rangemap field=error_rate low=0-5 elevated=6-50 default=severe
| chart values(status) as status values(range) as range over cf_app_name

You can technically use any method to generate a field named range with the correct values. To use trellis, though, you must use chart, timechart, xyseries, etc. to add the hidden field metadata required by the visualization code.

<dashboard version="1.1" theme="light">
  <label>tonishantsms_single</label>
  <row>
    <panel>
      <single>
        <search>
          <query>| makeresults format="csv" data="cf_app_name,error_rate
foo,0
bar,6
baz,51"
| rangemap field=error_rate UP=0-5 ISSUE=6-50 default=DOWN
| rename range as status
| rangemap field=error_rate low=0-5 elevated=6-50 default=severe
| chart values(status) as status values(range) as range over cf_app_name</query>
        </search>
        <option name="colorBy">value</option>
        <option name="colorMode">block</option>
        <option name="drilldown">none</option>
        <option name="trellis.enabled">1</option>
        <option name="trellis.size">medium</option>
      </single>
    </panel>
  </row>
</dashboard>

Older documentation is still available through archive.org, e.g. https://web.archive.org/web/20150831233457/http://docs.splunk.com/Documentation/Splunk/latest/SearchReference/Rangemap, but Splunk may remove the functionality in a future release.
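The bucketing that rangemap performs is simple threshold logic; as a rough illustration (my own Python sketch of the behavior, not Splunk code), using the same thresholds as `rangemap field=error_rate UP=0-5 ISSUE=6-50 default=DOWN`:

```python
def rangemap(value, ranges, default):
    """Rough Python equivalent of SPL's rangemap: return the first
    label whose inclusive low-high range contains the value,
    otherwise return the default label."""
    for label, (low, high) in ranges.items():
        if low <= value <= high:
            return label
    return default

# Thresholds mirroring UP=0-5 ISSUE=6-50 default=DOWN
ranges = {"UP": (0, 5), "ISSUE": (6, 50)}
print([rangemap(v, ranges, "DOWN") for v in (0, 6, 51)])  # ['UP', 'ISSUE', 'DOWN']
```

The range boundaries are inclusive on both ends, which is why 6 falls into ISSUE rather than UP.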
New Windows install and getting this error. Wish there were a simple fix, as it's polluting our POC for a large purchase to get away from other product(s).
Hi @EPitch,

You could try creating a lookup (called e.g. "conditions.csv") containing your three conditions in three columns (use the fields of your search as the column names: name, ip, id). Then you can use the lookup in a subsearch, running a simple search like the following:

index=blah sourcetype=blah [ | inputlookup conditions.csv | fields name ip id ]
| stats count by name ip id

Remember also to create the lookup definition.

Ciao.
Giuseppe
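Conceptually, the `[ | inputlookup ... ]` subsearch expands the lookup rows into an OR-of-conditions filter on the main search. A small Python sketch of that filtering effect (illustrative only; the data and the conditions.csv contents are made up):

```python
import csv
import io

# Stand-in for the contents of conditions.csv
conditions_csv = "name,ip,id\nalice,10.0.0.1,42\nbob,10.0.0.2,7\n"
allowed = {tuple(row[k] for k in ("name", "ip", "id"))
           for row in csv.DictReader(io.StringIO(conditions_csv))}

events = [
    {"name": "alice", "ip": "10.0.0.1", "id": "42"},
    {"name": "eve",   "ip": "10.9.9.9", "id": "1"},
]

# Keep only events whose (name, ip, id) tuple appears in the lookup,
# which is effectively what the subsearch filter does.
matched = [e for e in events if (e["name"], e["ip"], e["id"]) in allowed]
print(matched)
```

Each lookup row becomes one AND-ed condition, and the rows are OR-ed together, so an event must match some full row to survive.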
Each ingested event is a separate entity and is processed independently, so if you make the same data available to the input twice (for example by sending the same syslog event to a network-listening input), it will be ingested, processed, and indexed twice. It's up to the input - if applicable - to make sure the same data is not ingested twice. That's why file monitoring inputs implement logic that keeps track of which files have been read and how far into them reading has progressed, database inputs can have checkpoints storing the point in time at which you stopped reading from the DB, and so on. But that happens at the input level. After the event is read by the input, it is processed regardless of whether another "copy" of it has ever been indexed.
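The file-monitoring logic described above can be sketched in a few lines of Python (a toy simplification of the idea, not Splunk's actual implementation): remember a byte offset per file and only emit the unread tail on each pass.

```python
import os

class FileCheckpointer:
    """Toy version of a monitoring input's checkpoint logic: track
    the byte offset already consumed per file, so re-reading the
    same file does not re-ingest previously seen events."""
    def __init__(self):
        self.offsets = {}  # path -> bytes already consumed

    def read_new(self, path):
        offset = self.offsets.get(path, 0)
        with open(path, "rb") as f:
            f.seek(offset)          # skip everything already read
            data = f.read()
        self.offsets[path] = offset + len(data)
        return data.decode()
```

Calling `read_new` twice in a row on an unchanged file returns the new events once and then an empty string; only data appended after the checkpoint is returned on later passes.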
The two HFs have no way to know what the other has done, so the new HF will probably reingest the same data. I say "probably" because I'm not familiar with the mechanism the add-on uses to fetch data from Azure. If the checkpoint is stored on the HF, then data will be reingested by a different HF; if the checkpoint is stored in Azure, then data may not be reingested.
Perhaps https://docs.splunk.com/Documentation/Splunk/latest/Indexer/Setaretirementandarchivingpolicy will help.
I am trying to achieve the below requirement:

1- Calculate the error rate label for multiple applications: if Error Rate is greater than 50%, mark "DOWN" in red; if Error Rate > 5% and < 50%, mark "ISSUE" in orange; else "UP" in green.
2- After the label column is done, create a new widget with a single value and check all the labels (DOWN, ISSUE, UP): if any (at least one) API's Error Rate is "DOWN", show "DOWN" in red; if any API's Error Rate is "ISSUE", show "ISSUE" in orange; else "UP" in green.

Note - I need a single text value result.

This is the code I have written so far, but it still does not fulfill my requirement:

<panel>
  <single>
    <title>Error Rate</title>
    <search>
      <query>app_name=abc OR app_name=xyz
| rex field=msg "\"[^\"]*\"\s(?&lt;status&gt;\d+)"
| stats count(eval(status&gt;=200 AND status&lt;300)) as pass_count, count(eval(status&gt;=400)) as fail_count by cf_app_name
| eval error_rate=(fail_count / (pass_count + fail_count)) * 100
| eval label=if(error_rate &gt; 50, "DOWN", if(error_rate &gt; 5, "ISSUE", "UP"))
| eval error_rate=round(error_rate, 2)
| rename error_rate AS "Error_rate(percent)"
| stats count(eval(label="DOWN")) as down_count, count(eval(label="ISSUE")) as issue_count, count(eval(label="UP")) as up_count
| rangemap field=issue_count low=0-0 high=2-99 default=low
| eval Status=case(down_count &gt;= 1, "DOWN", down_count=0 AND issue_count &gt;= 1, "ISSUE", 1=1, "UP")</query>
      <sampleRatio>1</sampleRatio>
    </search>
    <option name="drilldown">none</option>
    <option name="field">Status</option>
    <option name="rangeValues">ISSUE, UP</option>
    <option name="rangeColors">orange, green</option>
  </single>
</panel>
Try something like this

<dashboard version="1.1" theme="light">
  <label>Gender</label>
  <row>
    <panel>
      <table>
        <search>
          <query>| makeresults format=csv data="name,gender,age
Alice,female,18
Bob,male,22"</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
        <option name="drilldown">cell</option>
        <option name="refresh.display">progressbar</option>
        <drilldown>
          <condition match="$click.name2$==&quot;gender&quot; AND $click.value2$==&quot;female&quot;">
            <set token="female">true</set>
            <unset token="male"></unset>
          </condition>
          <condition match="$click.name2$==&quot;gender&quot; AND $click.value2$==&quot;male&quot;">
            <set token="male">true</set>
            <unset token="female"></unset>
          </condition>
        </drilldown>
      </table>
    </panel>
  </row>
  <row depends="$female$">
    <panel>
      <table>
        <title>Female</title>
        <search>
          <query>| makeresults format=csv data="Name
Alice"</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
        <option name="drilldown">none</option>
      </table>
    </panel>
  </row>
  <row depends="$male$">
    <panel>
      <table>
        <title>Male</title>
        <search>
          <query>| makeresults format=csv data="Name
Bob"</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
        <option name="drilldown">none</option>
      </table>
    </panel>
  </row>
</dashboard>
Excellent. Thanks
How many of them had "failed request" in them? And from how many of those were you able to extract a request id?
267 events only
Hi @ITWhisperer, @PickleRick

I was playing around further; I ran the below query:

search sourcetype="my_source" "failed request, request id="
| rex "failed request, request id=(?<request_id>[\w-]+)"
| stats values(request_id) as request_ids
| format

This query gave me output like:

( ( ( request_ids="10a-b-0m" OR request_ids="10a-b-0m" OR request_ids="10a-bn-10m" OR request_ids="10a-b-8m" OR request_ids="10a-b-6m" OR request_ids="10a-b-3c" OR request_ids="10a-b-3cw" OR request_ids="10a-bv" OR request_ids="10a-b-0m" OR request_ids="10a-b-09m" OR request_ids="10a-b-m9" OR request_ids="10a-bb-4c" OR request_ids="10a-b" OR request_ids="10a-e" OR request_ids="101v-n" OR request_ids="10a-c" OR request_ids="10a-b" ) ) )

But again the same thing happened; if I use this as a subquery it is not working. I think my main query is searching like below:

request_ids="10a-b-0m" OR request_ids="10a-b-0m"

instead of:

"10a-b-0m" OR "10a-b-0m"

What could be the solution?
As @isoutamo said - there is no official way to do that. The typical migration is the "other way" - from on-prem to Cloud. The approach you would use to migrate an on-prem environment between different locations (add new peers, let them replicate data, remove old peers) is not available if one of those locations is Splunk Cloud, because customers simply don't have access to the underlying infrastructure.

So there are three things you'd need to take into account when trying to migrate "back" from Cloud to on-prem:

1. The "infrastructure configuration" - this is the part you have to create from scratch. You need to spin up your own machines and create all the "technical" configs for indexers, search heads and so on, right for your deployment. Here it doesn't differ from setting up a completely new environment.

2. The knowledge migration - you have to deploy the same apps (which might be relatively easy) and migrate user configs (I'm not sure how hard it is to export these from the Cloud - if it's not possible using native Cloud mechanisms, you can always ask support for help here).

3. Data migration. Here's where the "fun" part begins. As I said before, you don't have access to the indexers, and I seriously doubt you can get your buckets straight from the indexers. I see two options:
- Export your data using searches and reingest it into your new environment (this can raise some issues with timestamps, parsing and so on, and of course will count against your license usage).
- Configure DDSS and set a very short retention period so that all your data moves to frozen buckets in your DDSS. Then you can pull those buckets from there to your on-prem installation and thaw them.

This is not something nice and easy, so I'd suggest you engage your local friendly Splunk Partner in this process.
If everything else works OK (other logs are ingested properly), it seems to be a local permissions problem. You can try to check the _internal events from this forwarder but I don't remember if the eventlog access problems show up in the logs if you don't raise debugging levels.
This is one of the approaches. Another one would be to list all data and categorize it, then summarize and pick only the matching entries.

So in your case you can probably do something like

<your_search> earliest=-30d

to list all events, and

| eval state=if(_time<now()-86400,"old","new")

to categorize them. But this approach works only because you have a single "type of search" and only the time differs, so the events are easily distinguishable. In a more complicated case you can use another approach:

<your search> earliest=-30d latest=-24h
| eval state="old"
| append
    [ <your search> earliest=-24h
    | eval state="new" ]

Of course this one has the limitations of the append command, so you might use multisearch instead.

Anyway, as you now have your search results, you can stats them:

| stats values(state) by answer

so you know whether each answer is included in the old set, the new set, or both. Now all that's left is to filter the result to only see those you want. For example, if you want only those that are in the "new" period but not in the "old" one, you simply do

| where state="new" AND NOT state="old"

One caveat - matching multivalued fields can be a bit unintuitive, since a condition is matched against each value of the mvfield separately, so

| where state="new" AND state!="old"

is a completely different condition (and I'll leave it as an exercise for the reader to find out what it matches).
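The multivalue caveat can be illustrated outside SPL. A rough Python sketch (my own illustration of the usual semantics, not Splunk code): `NOT state="old"` requires that no value equals "old", while `state!="old"` is satisfied as soon as any single value differs from "old".

```python
def not_equals_old(state_values):
    # NOT state="old": no value may equal "old"
    return ("new" in state_values) and not ("old" in state_values)

def bang_equals_old(state_values):
    # state!="old": true as soon as ANY value differs from "old"
    return ("new" in state_values) and any(v != "old" for v in state_values)

# An answer seen in both periods carries a multivalued state field:
both = ["new", "old"]
print(not_equals_old(both))   # False - excluded, as intended
print(bang_equals_old(both))  # True  - "new" != "old", so it slips through
```

This is why the two `where` clauses above give different result sets whenever an answer appears in both periods.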
Subsearches are limited to 50,000 events. How many events are returned by  sourcetype="my_source" "failed request, request id="
Again, as Rich said - all data is searchable as long as it is hot, warm or cold. When it's rolled to frozen, it's either deleted (by default) or moved "out of" your Splunk installation; it can be treated as "archived" because it can't be used immediately and needs to be thawed in order to be searchable again. See https://docs.splunk.com/Documentation/Splunk/9.2.0/Indexer/HowSplunkstoresindexes As soon as a bucket is frozen (assuming it's not deleted, but copied out to the frozen path or handled by your own script), it's no longer managed by Splunk, so it's up to you to manage that frozen data, keep it for another year, and delete it after that period.
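For the "your own script" case, Splunk's coldToFrozenScript setting invokes a script with the bucket directory as its first argument before removing the bucket from the cold path. A minimal hedged sketch of such a script (the archive destination and error handling are my own assumptions; check the indexes.conf documentation before relying on this):

```python
#!/usr/bin/env python3
"""Hypothetical coldToFrozenScript sketch: copy a bucket out of
Splunk's control before the cold copy is removed. Splunk passes the
bucket directory path as the first argument; ARCHIVE_ROOT is an
assumed destination, not anything Splunk defines."""
import os
import shutil
import sys

ARCHIVE_ROOT = "/data/frozen"  # assumption: where you keep frozen buckets

def archive_bucket(bucket_path, archive_root=ARCHIVE_ROOT):
    # Keep the whole bucket directory so it can be thawed later.
    dest = os.path.join(archive_root, os.path.basename(bucket_path))
    shutil.copytree(bucket_path, dest)
    return dest

if __name__ == "__main__" and len(sys.argv) > 1:
    archive_bucket(sys.argv[1])
```

From that point on, enforcing the extra year of retention (and the eventual deletion) on the archived copies is a cron job or similar process that you own, not Splunk.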
The "\\" sequence is a double escape. It is used because the regular expression is provided here as a string parameter to a command. In SPL, strings can contain special characters which are escaped with a backslash, and since the backslash is itself the escape character, it needs to be escaped too. So if you type "\\", it effectively becomes a string consisting of a single backslash.

Therefore you have to be careful when testing regexes with the rex command and later moving them into config files such as props/transforms, since in props/transforms you usually don't have to escape the regexes (unless you put them as string arguments to functions called via INGEST_EVAL).

So - to sum up - your (?<char>(?=\\S)\\X), passed as a string argument, will effectively be the regex (?<char>(?=\S)\X) once unescaped and applied. And that one you can of course test on regex101.com.
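The same two-layer escaping exists in Python, where an ordinary (non-raw) string literal consumes one layer of backslashes before the regex engine ever sees the pattern, so it can serve as a quick demonstration of the effect:

```python
import re

# The 8-character literal "\\\\" is consumed by the string parser
# into the 2-character string \\ , which the regex engine then reads
# as a single escaped (i.e. literal) backslash.
pattern = "\\\\"
print(len(pattern))  # 2 - two characters reach the regex engine

# The subject string below contains exactly one real backslash.
subject = "C:\\Program Files"
print(re.findall(pattern, subject))  # one match: the backslash itself
```

Raw strings (`r"\\"`) skip the string-parser layer, which is the Python analogue of moving a regex from a rex argument into props/transforms, where only one level of escaping applies.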