<search> <query>index=atm source="D:\\Program Files\\file.dat" where (eventstatus=$status_token$ AND atmnumber="atm_token" AND eventtype=$event_token$)
| rename eventtime as Time, eventstatus as Status, atmnumner as ATM, eventtype as Fault, eventdescription as Description
| table Time Status ATM Fault Description</query>
Thank you for replying. I am totally new to this, so I don't have the domain knowledge. I believe it is indexed extractions. As for the extraction configuration, it shows _time and _raw. Again, I am a total noob; I appreciate any assistance.
It's being used, highlighted in bold:
<search> <query>index=atm source="D:\\Program Files\\file.dat" where (eventstatus=$status_token$ AND atmnumber="atm_token" AND eventtype=$event_token$)
| rename eventtime as Time, eventstatus as Status, atmnumner as ATM, eventtype as Fault, eventdescription as Description
| table Time Status ATM Fault Description</query>
Hi everyone. Is there any way to resolve a GPO GUID or SID within Windows Security logs? For instance, when we change any GPO in the domain, it is logged under EventCode 5136. There is a CN name inside the log that can be used to get the display name of the GPO. Thanks
It looks like atm_token is not being used as a token in your table search; it appears as the literal string "atm_token" rather than $atm_token$.
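A sketch of the corrected table query, assuming the field names from the posted dashboard: the token is wrapped in $...$ delimiters, the bare where is dropped in favor of plain search terms (in SPL, where is a pipeline command and would need a leading pipe), and the atmnumner typo in the rename is fixed.

```
index=atm source="D:\\Program Files\\file.dat" eventstatus=$status_token$ atmnumber=$atm_token$ eventtype=$event_token$
| rename eventtime as Time, eventstatus as Status, atmnumber as ATM, eventtype as Fault, eventdescription as Description
| table Time Status ATM Fault Description
```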
Hello Community, Any assistance given will be appreciated. Trying to figure out why my table is not populating.
<form version="1.1" theme="dark">
  <label>ATM Analyzer</label>
  <fieldset submitButton="false" autoRun="true">
    <input type="dropdown" token="status_token">
      <label>Status</label>
      <fieldForLabel>eventstatus</fieldForLabel>
      <fieldForValue>eventstatus</fieldForValue>
      <selectFirstChoice>true</selectFirstChoice>
      <search>
        <query>index=atm source="D:\\Program Files\\file.dat" | dedup eventstatus | table eventstatus</query>
      </search>
      <default>INFO</default>
      <initialValue>INFO</initialValue>
    </input>
    <input type="dropdown" token="atm_token" searchWhenChanged="false">
      <label>ATM</label>
      <selectFirstChoice>true</selectFirstChoice>
      <search>
        <query>index=atm source="D:\\Program Files\\file.dat" | search eventstatus=$status_token$ | dedup atmnumber | table atmnumber</query>
      </search>
      <fieldForLabel>atmnumber</fieldForLabel>
      <fieldForValue>atmnumber</fieldForValue>
      <choice value="*">All</choice>
      <default>*</default>
      <initialValue>*</initialValue>
    </input>
    <input type="dropdown" token="event_token">
      <label>Event</label>
      <selectFirstChoice>true</selectFirstChoice>
      <search>
        <query>index=atm source="D:\\Program Files\\file.dat" | search (eventstatus=$status_token$ AND atmnumber=$atm_token$) | dedup eventtype | table eventtype</query>
      </search>
      <fieldForLabel>eventtype</fieldForLabel>
      <fieldForValue>eventtype</fieldForValue>
      <choice value="*">All</choice>
      <default>*</default>
      <initialValue>*</initialValue>
    </input>
    <input type="time" token="timerange">
      <label></label>
      <default>
        <earliest>-24h@h</earliest>
        <latest>now</latest>
      </default>
    </input>
  </fieldset>
  <row>
    <panel>
      <table>
        <search>
          <query>index=atm source="D:\\Program Files\\file.dat" where (eventstatus=$status_token$ AND atmnumber="atm_token" AND eventtype=$event_token$) | rename eventtime as Time, eventstatus as Status, atmnumner as ATM, eventtype as Fault, eventdescription as Description | table Time Status ATM Fault Description</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="count">20</option>
        <option name="dataOverlayMode">none</option>
        <option name="drilldown">none</option>
        <option name="percentagesRow">false</option>
        <option name="rowNumbers">false</option>
        <option name="totalsRow">false</option>
        <option name="wrap">true</option>
      </table>
    </panel>
  </row>
</form>
I renamed and replaced the directory with a fresh download. The error has gone away, but I'm wondering if I broke a Security app that may have altered the files. Splunk is so massively large that it's daunting for newbs. Glad I seem to have gotten rid of the error, though.
Hi @tonishantsms, The functionality is deprecated, but the single value visualization still supports automatic color-coding using rangemap and the range values severe (red), high (orange), elevated (yellow), guarded (blue), and low (green). You can take advantage of this functionality by combining the rangemap and chart commands with a trellised single value visualization:

| makeresults format="csv" data="cf_app_name,error_rate
foo,0
bar,6
baz,51"
| rangemap field=error_rate UP=0-5 ISSUE=6-50 default=DOWN
| rename range as status
| rangemap field=error_rate low=0-5 elevated=6-50 default=severe
| chart values(status) as status values(range) as range over cf_app_name

You can technically use any method to generate a field named range with the correct values. To use trellis, though, you must use chart, timechart, xyseries, etc. to add hidden field metadata required by the visualization code.

<dashboard version="1.1" theme="light">
  <label>tonishantsms_single</label>
  <row>
    <panel>
      <single>
        <search>
          <query>| makeresults format="csv" data="cf_app_name,error_rate
foo,0
bar,6
baz,51"
| rangemap field=error_rate UP=0-5 ISSUE=6-50 default=DOWN
| rename range as status
| rangemap field=error_rate low=0-5 elevated=6-50 default=severe
| chart values(status) as status values(range) as range over cf_app_name</query>
        </search>
        <option name="colorBy">value</option>
        <option name="colorMode">block</option>
        <option name="drilldown">none</option>
        <option name="trellis.enabled">1</option>
        <option name="trellis.size">medium</option>
      </single>
    </panel>
  </row>
</dashboard>

Older documentation is still available through archive.org, e.g. https://web.archive.org/web/20150831233457/http://docs.splunk.com/Documentation/Splunk/latest/SearchReference/Rangemap, but Splunk may remove the functionality in a future release.
New Windows install and getting this error. Wish there was a simple fix, as it's polluting our POC for a large purchase to get away from other product(s).
Hi @EPitch, you could try to create a lookup (called e.g. "conditions.csv") containing your three conditions in three columns (use the fields of your search as column names: name, ip, id). Then you can use the lookup in a subsearch, running a simple search like the following:

index=blah sourcetype=blah [ | inputlookup conditions.csv | fields name ip id ]
| stats count by name ip id

Remember to also create the lookup definition. Ciao. Giuseppe
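For illustration, conditions.csv might look like the sketch below; the values are hypothetical placeholders, and only the column names need to match the fields in your events so the subsearch expands to (name=... AND ip=... AND id=...) OR ... conditions.

```
name,ip,id
host01,10.0.0.1,1001
host02,10.0.0.2,1002
host03,10.0.0.3,1003
```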
Each ingested event is a separate entity and is processed independently, so if you make the same data available to the input twice (for example by sending the same syslog event to a network-listening input) it's gonna get ingested, processed and indexed twice. It's up to the input - if applicable - to make sure the same data is not ingested twice. That's why file monitoring inputs have some logic implemented which keeps track of which files have been read, and how far, while database inputs can have checkpoints storing information about the point in time at which you stopped reading from the DB, and so on. But that happens at the input level. After the event is read by the input, it gets processed regardless of whether another "copy" of it has ever been indexed or not.
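Since no deduplication happens at index time, one way to spot duplicates after the fact is a search along these lines (a sketch; the index name is a placeholder, and you may want to group by narrower fields than _raw for large data volumes):

```
index=your_index
| stats count by _time, _raw
| where count > 1
```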
The two HFs have no way to know what the other has done so the new HF probably will reingest the same data.  I say "probably" because I'm not familiar with the mechanism the add-on uses to fetch data from Azure.  If the checkpoint is stored on the HF then data will be reingested by a different HF; if the checkpoint is stored on Azure then data may not be reingested.
Perhaps https://docs.splunk.com/Documentation/Splunk/latest/Indexer/Setaretirementandarchivingpolicy will help.
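Retention is typically controlled per index in indexes.conf; a minimal sketch (the index name, period, and path are placeholders):

```
[your_index]
# Buckets whose newest event is older than this many seconds
# are frozen (deleted by default). 7776000 s = 90 days.
frozenTimePeriodInSecs = 7776000
# Optionally archive frozen buckets instead of deleting them.
coldToFrozenDir = /opt/splunk/frozen/your_index
```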
I am trying to achieve the below requirement:
1- Calculate the error rate label for multiple applications: if Error Rate is greater than 50%, mark "DOWN" in red; if Error Rate > 5% and < 50%, mark "ISSUE" in orange; else "UP" in green.
2- After the label column is done, create a new widget with a single value and check all the labels (DOWN, ISSUE, UP): if any (at least one) API's Error Rate is "DOWN", show "DOWN" in red; if any API's Error Rate is "ISSUE", show "ISSUE" in orange; else "UP" in green.
Note- I need a single text value result.
This is the code I have written so far, but it still does not fulfill my requirement:

<panel>
  <single>
    <title>Error Rate</title>
    <search>
      <query>app_name=abc OR app_name=xyz
| rex field=msg "\"[^\"]*\"\s(?<status>\d+)"
| stats count(eval(status>=200 AND status<300)) as pass_count, count(eval(status>=400)) as fail_count by cf_app_name
| eval error_rate=(fail_count/(pass_count + fail_count))*100
| eval label=if(error_rate > 50, "DOWN", if(error_rate > 5, "ISSUE", "UP"))
| eval error_rate=round(error_rate, 2)."%"
| rename error_rate AS "Error_rate(percent)"
| stats count(eval(label="DOWN")) as down_count, count(eval(label="ISSUE")) as issue_count, count(eval(label="UP")) as up_count
| rangemap field=issue_count low=0-0 high=2-99 default=low
| eval Status=case(down_count>=1, "DOWN", down_count=0 AND issue_count>=1, "ISSUE", 1==1, "UP")</query>
      <sampleRatio>1</sampleRatio>
    </search>
    <option name="drilldown">none</option>
    <option name="field">Status</option>
    <option name="rangeValues">ISSUE, UP</option>
    <option name="rangeColors">orange, green</option>
    <option name="drilldown">none</option>
    <option name="field">Status</option>
    <option name="drilldown">none</option>
  </single>
</panel>
Try something like this

<dashboard version="1.1" theme="light">
  <label>Gender</label>
  <row>
    <panel>
      <table>
        <search>
          <query>| makeresults format=csv data="name,gender,age
Alice,female,18
Bob,male,22"</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
        <option name="drilldown">cell</option>
        <option name="refresh.display">progressbar</option>
        <drilldown>
          <condition match="$click.name2$==&quot;gender&quot; AND $click.value2$==&quot;female&quot;">
            <set token="female">true</set>
            <unset token="male"></unset>
          </condition>
          <condition match="$click.name2$==&quot;gender&quot; AND $click.value2$==&quot;male&quot;">
            <set token="male">true</set>
            <unset token="female"></unset>
          </condition>
        </drilldown>
      </table>
    </panel>
  </row>
  <row depends="$female$">
    <panel>
      <table>
        <title>Female</title>
        <search>
          <query>| makeresults format=csv data="Name
Alice"</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
        <option name="drilldown">none</option>
      </table>
    </panel>
  </row>
  <row depends="$male$">
    <panel>
      <table>
        <title>Male</title>
        <search>
          <query>| makeresults format=csv data="Name
Bob"</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
        <option name="drilldown">none</option>
      </table>
    </panel>
  </row>
</dashboard>
Excellent. Thanks
How many of them had "failed request" in them? How many of those were able to extract a request id?
267 events only
Hi @ITWhisperer , @PickleRick  I was playing around further; I ran the below query:

search sourcetype="my_source" "failed request, request id="
| rex "failed request, request id=(?<request_id>[\w-]+)"
| stats values(request_id) as request_ids
| format

This query gave me output like:

( ( ( request_ids="10a-b-0m" OR request_ids="10a-b-0m" OR request_ids="10a-bn-10m" OR request_ids="10a-b-8m" OR request_ids="10a-b-6m" OR request_ids="10a-b-3c" OR request_ids="10a-b-3cw" OR request_ids="10a-bv" OR request_ids="10a-b-0m" OR request_ids="10a-b-09m" OR request_ids="10a-b-m9" OR request_ids="10a-bb-4c" OR request_ids="10a-b" OR request_ids="10a-e" OR request_ids="101v-n" OR request_ids="10a-c" OR request_ids="10a-b" ) ) )

But again the same thing happened; if I use this as a subsearch it is not working. I think my main query is searching like below:

request_ids="10a-b-0m" OR request_ids="10a-b-0m"

instead of:

"10a-b-0m" OR "10a-b-0m"

What could be the solution?
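One common approach (a sketch, assuming the sourcetype and extraction above) is to rename the extracted field to search inside the subsearch; Splunk treats a subsearch field named search specially and expands it as bare quoted values rather than field=value pairs:

```
sourcetype="my_source" "failed request, request id="
    [ search sourcetype="my_source" "failed request, request id="
      | rex "failed request, request id=(?<request_id>[\w-]+)"
      | fields request_id
      | rename request_id as search ]
```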
As @isoutamo said - there is no official way to do that. The typical migration is the "other way", from on-prem to Cloud. If you wanted to migrate your on-prem environment between different locations you could add new peers, let them replicate data, and remove the old peers, but there is no way to do that when one of those locations is Splunk Cloud; customers simply don't have access to the underlying infrastructure.

So there are three things you'd need to take into account when trying to migrate "back" from Cloud to on-prem:

1. The "infrastructure configuration" - this is the part you have to create from scratch. You need to spin up your own machines and create all the "technical" configs for indexers, search heads and so on for your deployment. Here it doesn't differ from setting up a completely new environment.

2. The knowledge migration - you have to deploy the same apps (which might be relatively easy) and migrate user configs. I'm not sure how hard it is to export them from the Cloud; if it's not possible using native Cloud mechanisms, you can always ask support for help here.

3. Data migration. Here's where the "fun" part begins. As I said before, you don't have access to the indexers, and I seriously doubt you can get your buckets right from the indexers. I see two options:
- Export your data using searches and reingest it into your new environment (this can raise some issues with timestamps, parsing and so on, and of course will count against your license usage).
- Configure DDSS and set a very short retention period so that all your data moves to frozen buckets in your DDSS. Then you can pull those buckets from there to your on-prem installation and thaw them.

This is not something nice and easy, so I'd suggest you engage your local friendly Splunk Partner in this process.
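As a sketch of the first option, an export search might look like this (the index name, time range, and file name are placeholders; outputcsv writes to $SPLUNK_HOME/var/run/splunk/csv on the search head, so you'd run this in chunks for large volumes):

```
index=your_index earliest=-30d@d latest=@d
| table _time, host, source, sourcetype, _raw
| outputcsv export_your_index.csv
```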