All Posts

I see a column with name search and value (""field1"") Do we need to have field1 inside parentheses and two double quotes?

Field label "search" in a subsearch is a pseudo keyword for "use as is literal" in a search command. No, the values should NOT have two quotation marks on each side. Maybe your lookup values insert one additional set of double quotes? If so, we can get rid of one set. Here is my emulation:

| makeresults format=csv data="id,Messages
,a
,b
,c
,d"
``` the above emulates | inputlookup messages.csv ```
| fields Messages
| rename Messages as search
| format "(" "\"" "" "\"" "," ")"
| rex field=search mode=sed "s/ *\" */\"/g"

The output only contains one set of double quotes:

search
("a","b","c","d")
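For anyone who wants to sanity-check the quote-stripping step outside Splunk, here is a minimal Python sketch of the same transformation. The sample values are invented; the regex mirrors the sed-mode rex above.

```python
import re

# Hypothetical lookup values, with the stray trailing spaces
# that the CSV emulation above introduces
values = ["a ", "b ", "c ", "d "]

# Emulate `| format "(" "\"" "" "\"" "," ")"`: quote each value,
# join with commas, wrap in parentheses
formatted = "(" + ",".join('"%s"' % v for v in values) + ")"

# Emulate `| rex mode=sed "s/ *\" */\"/g"`: strip spaces around quotes
cleaned = re.sub(r' *" *', '"', formatted)
print(cleaned)  # ("a","b","c","d")
```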
Hello, I have this type of data, and I'd like to extract the following fields with a rex command:

Two words: Don't. The data you show is clearly a fragment from a JSON object. Do not treat structured data such as JSON as text, because the developer can change the format at any time without changing syntax and render your rex useless. Splunk has robust, QA-tested commands like spath. Follow @ITWhisperer's advice to share valid, raw JSON data. (Anonymize as needed.) If your raw data is a mix of free text and JSON, show examples of how they are mixed so we can extract the valid JSON, then handle the JSON with spath or fromjson (9.0+).

Specific questions: I have a strong suspicion that your data illustration is not a faithful representation of the raw data, because it contains lots of parentheses "(" and ")" instead of curly brackets "{" and "}" as in compliant JSON. It is almost impossible for a developer to mix parentheses and curly brackets randomly like this. Can you verify and clarify? If your raw event is pure JSON, your highlighted snippets should have already been extracted by Splunk as multivalued data{}.from, data{}.to, and data{}.intensity.forecast. Do you not get those? Alternatively, is that illustrated data from a field that is already extracted (but misrepresented with mixed parentheses and curly brackets)?

Lastly, a common logging practice is to append JSON data at the end, following some other informational strings that do not contain an opening curly bracket. If this is the case, you can easily extract that JSON part with the following and handle it robustly with spath:

| rex "^[^{]*(?<json_data>.+)"
| spath input=json_data path=data{}
| mvexpand data{}
| spath input=data{}

After this, your highlighted values would be in fields from, to, and intensity.forecast, respectively.
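The "skip everything before the first curly bracket, then parse" idea can be sketched outside Splunk as well. This is a minimal Python illustration; the sample event and its field names are invented, not the asker's actual data.

```python
import json
import re

# Hypothetical raw event: a free-text prefix followed by a JSON payload
raw = ('2024-01-01 00:00:01 INFO forecast response: '
       '{"data": [{"from": "00:00", "to": "00:30", '
       '"intensity": {"forecast": 120}}]}')

# Same idea as `| rex "^[^{]*(?<json_data>.+)"`: consume everything up
# to the first opening curly bracket, capture the rest
m = re.match(r'^[^{]*(?P<json_data>.+)$', raw)
payload = json.loads(m.group('json_data'))

# Rough equivalent of the spath/mvexpand steps: one row per array element
for entry in payload["data"]:
    print(entry["from"], entry["to"], entry["intensity"]["forecast"])
```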
@bowesmana  Thank you for confirming.  That was how I understood this as well.  I was curious if there were options I wasn't aware of.     Thank you once again.
You are correct.
@gcusello , thank you so much, it’s working as expected. thank you once again
That's a great idea! I'll give it a shot.
Aren't you overcomplicating it a bit? Just render the date to a field:

| eval day=strftime(_time,"%F")

and you're ready to go:

| stats min(_time) as earliest max(_time) as latest by day
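The per-day min/max logic can be sketched in plain Python to show what the two SPL lines compute (the epoch timestamps below are made up; %F in Splunk is the same as %Y-%m-%d):

```python
from datetime import datetime, timezone

# Hypothetical epoch timestamps spanning two days (UTC)
times = [1700000000, 1700003600, 1700090000, 1700091000]

# `| eval day=strftime(_time,"%F")` followed by
# `| stats min(_time) as earliest max(_time) as latest by day`
by_day = {}
for t in times:
    day = datetime.fromtimestamp(t, tz=timezone.utc).strftime("%Y-%m-%d")
    lo, hi = by_day.get(day, (t, t))
    by_day[day] = (min(lo, t), max(hi, t))

for day, (earliest, latest) in sorted(by_day.items()):
    print(day, earliest, latest)
```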
Yes, I understand it's the copy part you want; unless the system logs it, it's going to be tricky. This may be of help, not sure. I remember many years ago I helped a company monitor for when their important/sensitive files on a Windows Server were being accessed, copied, or deleted. We got the Windows administrator to enable security auditing for the important files (read/write/delete attributes). We then tested for files being copied/deleted/accessed; the event IDs were something like the ones below, and we sent these to Splunk and were able to monitor them for insider threats. (This was only done for the most important files, as they could generate lots of events.) If you enable these they may contain more info as to the device as well; I can't remember, but it's worth a go.

Event ID 4656: A handle to an object was requested.
Event ID 4663: An attempt was made to access an object.
Event ID 4660: An object was deleted.
Thank you, this one is working for me.
Lastly I will figure out how to organize this by Day in desc order; right now it is sorting the results by another column...

Yes, Splunk has some weird obsession with alphabetic/ASCII ordering unless you tell it otherwise. (It kind of surprises me that the "natural" sorting order is already set in the groupby (numeric on the original value of "day") but Splunk changes it after we add the character string "day -".) All you need to do is insert sort after stats and before that string conversion. When you say "in desc order", I imagine that you want the reverse numeric order. Is this correct?

index=*app_pcf cf_app_name="mddr-batch-integration-flow" "posbatch04" earliest=-14d@d latest=-0d@d
| eval dayback = mvrange(0, 14)
| eval day = mvmap(dayback, if(_time < relative_time(now(), "-" . dayback . "d@day") AND relative_time(now(), "-" . tostring(dayback + 1) . "d@day") < _time, dayback, null()))
| stats min(_time) as Earliest max(_time) as Latest by day
| sort - day
| fieldformat Earliest = strftime(Earliest, "%F %T")
| fieldformat Latest = strftime(Latest, "%F %T")
| eval day = "day -" . tostring(day + 1)
| eval Elapsed_Time=Latest-Earliest, Start_Time_Std=strftime(Earliest,"%H:%M:%S:%Y-%m-%d"), End_Time_Std=strftime(Latest,"%H:%M:%S:%Y-%m-%d")
| eval Elapsed_Time=Elapsed_Time/60

If you want the last day first, just use | sort day.
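The mvrange/relative_time trick above assigns each event to a "days back" bucket snapped to day boundaries. Here is a small Python sketch of that bucketing, with a fixed, invented "now" so the example is deterministic:

```python
from datetime import datetime, timedelta, timezone

# Fixed "now" so the example is deterministic (an assumption for the demo)
NOW = datetime(2024, 1, 15, 12, 0, tzinfo=timezone.utc)

def day_bucket(ts, days=14):
    """Mirror the mvrange/relative_time logic: bucket d holds events
    between (now - (d+1) days)@day and (now - d days)@day, so bucket 0
    is yesterday, bucket 1 the day before, and so on."""
    for d in range(days):
        # relative_time(now(), "-<d>d@day"): go back d days, snap to midnight
        upper = (NOW - timedelta(days=d)).replace(hour=0, minute=0,
                                                  second=0, microsecond=0)
        lower = upper - timedelta(days=1)
        if lower < ts < upper:
            return d
    return None  # outside the 14-day window, like null() in the SPL

# Yesterday morning lands in bucket 0, i.e. "day -1" after the +1 shift
print(day_bucket(datetime(2024, 1, 14, 8, 0, tzinfo=timezone.utc)))   # 0
print(day_bucket(datetime(2024, 1, 13, 20, 0, tzinfo=timezone.utc)))  # 1
```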
Hi @cbiraris,
I suppose that these three values are in a field (e.g. type), so you can run a search like the following:

index=abc sourcetype=xyz type IN ("warning", "Error", "Critical")
| stats count(eval(type=="warning")) AS warning_count count(eval(type=="Error")) AS Error_count count(eval(type=="Critical")) AS Critical_count
| where warning_count>=5 OR Error_count>=5 OR Critical_count>=5

You can also set up a different threshold for each type of message. If you don't have the three values in a field, you have to use a similar search:

index=abc sourcetype=xyz ("warning" OR "Error" OR "Critical")
| stats count(eval(searchmatch("warning"))) AS warning_count count(eval(searchmatch("Error"))) AS Error_count count(eval(searchmatch("Critical"))) AS Critical_count
| where warning_count>=5 OR Error_count>=5 OR Critical_count>=5

Ciao.
Giuseppe
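The conditional counting plus threshold check can be sketched in Python to show what the stats/where pair evaluates (the event list and threshold are invented for the demo):

```python
from collections import Counter

# Hypothetical event types seen in the last 15 minutes
events = ["warning", "Error", "warning", "Critical", "warning",
          "Error", "warning", "warning"]

# Like `count(eval(type=="warning"))` etc., then the `where` clause
counts = Counter(events)
THRESHOLD = 5
alert = any(counts[t] >= THRESHOLD
            for t in ("warning", "Error", "Critical"))
print(counts["warning"], counts["Error"], counts["Critical"], alert)
```

Note the alert fires because a single type ("warning") meets the threshold on its own, matching the "individually occur 5 times" requirement clarified below.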
Yes, only if any of them individually occur 5 times e.g. 5 warnings or 5 errors or 5 criticals
Thanks, yes I'm aware of those. We have media connected to our clients for various reasons. I'm specifically looking for an alert that data is being burned onto removable media.
Please clarify your requirement - do you want the alert to trigger if any of the values occurs 5 times e.g. 2 warnings, 2 errors and 1 critical, or only if any of them individually occur 5 times e.g. 5 warnings or 5 errors or 5 criticals?
Hi team, I need help creating a query with 3 different thresholds for 3 different events in a single Splunk alert. For example:

index=abc sourcetype=xyz "warning" OR "Error" OR Critical

If any of these ("warning" OR "Error" OR Critical) occurred 5 times in events in the last 15 minutes, an alert should be triggered.
Hi @Harish2,
the most efficient solution is using a lookup; if you aren't enabled to do this, you have to insert all the conditions in the input, something like this:

<form version="1.1" theme="light">
  <label>Dashboard</label>
  <fieldset submitButton="false">
    <input type="time" token="timepicker">
      <label>TimeRange</label>
      <default>
        <earliest>-15m@m</earliest>
        <latest>now</latest>
      </default>
    </input>
    <input type="dropdown" token="env">
      <label>Environment</label>
      <choice value="*">All</choice>
      <prefix>env="</prefix>
      <suffix>"</suffix>
      <default>*</default>
      <fieldForLabel>env</fieldForLabel>
      <fieldForValue>env</fieldForValue>
      <search>
        <query>| makeresults | eval env="DEV" | fields env
| append [ | makeresults | eval env="SIT" | fields env ]
| append [ | makeresults | eval env="UAT" | fields env ]
| append [ | makeresults | eval env="SYS" | fields env ]
| sort env
| table env</query>
      </search>
    </input>
    <input type="dropdown" token="server">
      <label>Server</label>
      <choice value="*">All</choice>
      <prefix>server="</prefix>
      <suffix>"</suffix>
      <default>*</default>
      <fieldForLabel>server</fieldForLabel>
      <fieldForValue>server</fieldForValue>
      <search>
        <query>| makeresults | eval env="DEV", server="amptams.dev.com" | fields env server
| append [ | makeresults | eval env="DEV", server="ampvitss.dev.com" | fields env server ]
| append [ | makeresults | eval env="DEV", server="ampdoctrc.dev.com" | fields env server ]
| append [ | makeresults | eval env="SIT", server="ampastdmsg.dev.com" | fields env server ]
| append [ | makeresults | eval env="SIT", server="ampmorce.dev.com" | fields env server ]
| append [ | makeresults | eval env="SIT", server="ampsmls.dev.com" | fields env server ]
| append [ | makeresults | eval env="UAT", server="ampserv.dev.com" | fields env server ]
| append [ | makeresults | eval env="UAT", server="ampasoomsg.dev.com" | fields env server ]
| append [ | makeresults | eval env="SYS", server="ampmsdser.dev.com" | fields env server ]
| append [ | makeresults | eval env="SYS", server="ampastcol.dev.com" | fields env server ]
| search $env$
| dedup server
| sort server
| table server</query>
      </search>
    </input>
  </fieldset>
  <row>
    <panel>
      <table>
        <title>Incoming Count &amp; Total Count</title>
        <search>
          <query>index=app-index source=application.logs $env$ $server$ ( "Initial message received with below details" OR "Letter published correctley to ATM subject" OR "Letter published correctley to DMM subject" OR "Letter rejected due to: DOUBLE_KEY" OR "Letter rejected due to: UNVALID_LOG" OR "Letter rejected due to: UNVALID_DATA_APP" )
| rex field=_raw "application :\s(?&lt;Application&gt;\w+)"
| rex field=_raw "(?&lt;Msgs&gt;Initial message received with below details|Letter published correctley to ATM subject|Letter published correctley to DMM subject|Letter rejected due to: DOUBLE_KEY|Letter rejected due to: UNVALID_LOG|Letter rejected due to: UNVALID_DATA_APP)"
| chart count over Application by Msgs
| rename "Initial message received with below details" AS Income, "Letter published correctley to ATM subject" AS ATM, "Letter published correctley to DMM subject" AS DMM, "Letter rejected due to: DOUBLE_KEY" AS Reject, "Letter rejected due to: UNVALID_LOG" AS Rej_log, "Letter rejected due to: UNVALID_DATA_APP" AS Rej_app
| table Income ATM DMM Reject Rej_log Rej_app</query>
          <earliest>$timepicker.earliest$</earliest>
          <latest>$timepicker.latest$</latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="count">20</option>
        <option name="dataOverlayMode">none</option>
        <option name="drilldown">none</option>
        <option name="percentageRow">false</option>
        <option name="refresh.display">progressbar</option>
        <option name="rowNumbers">false</option>
        <option name="totalsRow">false</option>
        <option name="wrap">true</option>
      </table>
    </panel>
  </row>
</form>

Ciao.
Giuseppe
It's from Hack The Box Academy; it's one of their questions, and the hint is to use the range() function.
Maybe you need to have these enabled; work with your Windows admin to get them enabled, or have them check for you. Or simply insert a drive and look for the logs. Here are some common event IDs that I was able to find using Google! Some may be logged and others not when drives are added, so you will have to work with your Windows admin to find out if they are enabled and logging, or use Sysmon, which is a tool that can also monitor and create events. I have listed the TAs that you will need once you have all the logs in place.

Event ID 12: This event is logged when a removable media is inserted into the computer.
Event ID 106: This event is logged when a new external storage device is connected to the system.
Event ID 20001: This event is logged by Microsoft-Windows-DriverFrameworks-UserMode when a new device is connected to the system.
Event ID 20003: This event is logged by Microsoft-Windows-DriverFrameworks-UserMode when a device is removed from the system.
Event ID 1001: This event is logged when a device is enumerated by the Plug and Play manager.
Event ID 1003: This event is logged when a device is started.
Event ID 1004: This event is logged when a device is stopped.

Splunk TAs:
Windows logs: https://splunkbase.splunk.com/app/742
Sysmon logs / info: https://splunkbase.splunk.com/app/5709
@gcusello, sorry if I have confused you. We have 2 hosts for each env, as shown below. Apart from a lookup file, do we have other solutions to create the dropdown env-wise?

dev: 2 hosts
sit: 2 hosts
sys: 2 hosts
uat: 2 hosts