All Posts


The firewall request from the Splunk server (e.g. a heavy forwarder) should cover both the actual database physical hosts/IPs and the SCAN host/IP (if you are using that in the TNS listener files). In a RAC setup there may be multiple IPs/hosts, so you would need access to all of them from the Splunk server. Another test would be to run an nslookup on the SCAN IP/hostname/FQDN; if it returns multiple IPs/hosts, submit firewall requests for them as well.
Since you have given your problem statement in generic terms, I will answer in the same manner. You could use the eventstats command to add/copy the exception indicator to all events with the corresponding request id. Then you can filter the events by whether the exception indicator is present.
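A minimal sketch of that idea; the index, sourcetype, and the exact exception-matching condition are assumptions you would adjust to your data, and request_id is the field mentioned in the question:

index=your_index sourcetype=your_sourcetype
``` flag events that carry the exception indicator; the match condition below is an assumption ```
| eval is_exception=if(like(_raw, "%Exception%"), 1, 0)
``` copy the flag to every event that shares the same request_id ```
| eventstats max(is_exception) AS has_exception BY request_id
``` keep only the requests that raised an exception ```
| where has_exception=1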
Try changing the other search commands to their corresponding where commands.
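For instance, a sketch of that conversion using the LocationQualifiedName filters posted elsewhere in this thread:

``` replace the later search filters with where/like(), mirroring the earlier where clause ```
| where like(LocationQualifiedName, "%/Aisle%Entry%") OR like(LocationQualifiedName, "%/Aisle%Exit%")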
Hi, I need to find errors/exceptions which have been raised within a time range and, based on the request_id field mentioned in the logs (with every row), fetch the relevant logs in Splunk for that request_id and send a link to a Slack channel. I am able to fetch all the errors/exceptions within the time range and send them to Slack, but I am not able to generate the relevant logs for the request_id mentioned with the error/exception, as it is dynamic in nature. I am new to Splunk, so I would like to understand: is this possible? If yes, could you please share relevant documentation so that I can understand it better? Thank you so much.
Thanks Rich - logical when you think about it. Works a treat - thank you
Support is correct that documented behavior is not a bug. That explains your original observation about tonumber. Your observation about typeof is also normal. Imagine you are the interpreter. In typeof("NaN"), you are given a string literal, so of course you say it is of type String. In typeof(num), you are given a variable whose value is documented as a number, so you say it is of type Number.
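If the goal is simply to detect values that are not usable numbers, one possible workaround is to test the literal string explicitly rather than trusting tonumber() or typeof() alone; a sketch:

| makeresults
| eval num="NaN"
``` treat the literal string "NaN" as "not a usable number", whatever tonumber()/typeof() report for it ```
| eval is_usable_number=if(num=="NaN" OR isnull(tonumber(num)), "no", "yes")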
Hi! I am filtering data from a number of hosts looking for downtime durations. I get a "forensic" view with this search string:

index=myindex host=*
| rex "to state\s(?<mystate>.*)"
| search mystate="DOWN " OR mystate="*UP "
| transaction host startswith=mystate="DOWN " endswith=mystate="*UP "
| table host,duration,_time
| sort by duration
| reverse

...where I rex for the specific pattern "to state " (host transition into another state, in this example "DOWN" or "UP"). I had to do another "search" to get only the specific ones, as there are more states than DOWN/UP (due to my anonymization of the data). I can then retrieve the duration between transitions using "duration" and sort it as I please. My question: I'd like to look into ongoing, "at-this-moment-active" hosts in state "DOWN", i.e. replace "endswith" with a nominal time value ("NOW") where there has not yet been any "endswith" match, just counting the duration from "startswith" to the present moment. Any tips on how I can formulate that properly?
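One possible formulation, assuming the same rex and filtering as above: keepevicted=true makes transaction keep groups that never saw an UP event, and closed_txn=0 marks those still-open groups, whose duration can then be measured up to now():

index=myindex host=*
| rex "to state\s(?<mystate>.*)"
| search mystate="DOWN " OR mystate="*UP "
| transaction host startswith=eval(mystate=="DOWN ") endswith=eval(like(mystate, "%UP ")) keepevicted=true
``` closed_txn=0 means no UP has been seen yet; count from the start of the transaction to the present moment ```
| eval duration=if(closed_txn=0, now()-_time, duration)
| table host, duration, _time, closed_txn
| sort - duration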
So, I've been talking to Splunk support, which directed me to the documentation at SearchReference/Eval that kind of mentions that NaN is special, and also pointed to typeof() as an alternative. Initially, this seemed like a good idea, but unfortunately typeof() is even more interesting:

| makeresults
| eval t=typeof("NaN")
| eval num="NaN"
| eval tnum=typeof(num)

...returns

t = String
tnum = Number

Oh well....?
This is a task for spath and JSON functions, not for rex. Important: Do not treat structured data as strings. But most importantly, if there is a worse data design in JSON than this, I haven't seen it before. (Trust me, I've seen many bad JSON designs.) Your developers really go to lengths to show the world how lazy they can be. After lengthy reverse engineering, I cannot decide whether this is the poorest transliteration of an existing data table into JSON, or the worst imagined construction of a data table in JSON. Using two separate arrays to describe rows and columns saves a little bit of space, but unlike in a SQL database, this makes search less efficient (unless the application reads the whole thing and reconstructs the data into an in-memory SQL database). It also wastes text describing the data type of each column when the JSON format already has enough datatypes to cover strings and numbers.

Enough ranting. If you have any influence over your developers, make them change the event format to something like this instead:

[{"id":"BM0077","event":35602782,"delays":3043.01,"drops":0},{"id":"BM0267","event":86497692,"delays":1804.55,"drops":44},{"id":"BM059","event":1630092,"delays":5684.5,"drops":0},{"id":"BM1604","event":2920978,"delays":4959.1,"drops":2},{"id":"BM1612","event":2141607,"delays":5623.3,"drops":6},{"id":"BM1834","event":74963092,"delays":2409,"drops":8},{"id":"BM2870","event":41825122,"delays":2545.34,"drops":7}]

With properly designed JSON data, extraction can be as simple as

``` extract from well-designed JSON ```
| spath path={}
| mvexpand {}
| spath input={}
| fields - _* {}

Output with your sample data (see emulation below) will be

delays    drops  event     id
3043.01   0      35602782  BM0077
1804.55   44     86497692  BM0267
5684.5    0      1630092   BM059
4959.1    2      2920978   BM1604
5623.3    6      2141607   BM1612
2409      8      74963092  BM1834
2545.34   7      41825122  BM2870

This is how to reach the good JSON structure from the sample's bad structure:

``` transform from bad JSON to well-designed JSON ```
| spath path={}
| mvexpand {}
| fromjson {}
| mvexpand rows
| eval idx = mvrange(0, mvcount(columns))
| eval data = json_object()
| foreach idx mode=multivalue
    [eval row = mvindex(json_array_to_mv(rows), <<ITEM>>),
     data = json_set(data, json_extract(mvindex(columns, <<ITEM>>), "text"), row)]
| stats values(data) as _raw
| eval _raw = mv_to_json_array(_raw, true())

But until your developers change their minds, you can do one of the following.

1. String together the bad-JSON-to-good-JSON transformation and the normal extraction:

``` transform, then extract ```
``` transform from bad JSON to well-designed JSON ```
| spath path={}
| mvexpand {}
| fromjson {}
| mvexpand rows
| eval idx = mvrange(0, mvcount(columns))
| eval data = json_object()
| foreach idx mode=multivalue
    [eval row = mvindex(json_array_to_mv(rows), <<ITEM>>),
     data = json_set(data, json_extract(mvindex(columns, <<ITEM>>), "text"), row)]
| stats values(data) as _raw
| eval _raw = mv_to_json_array(_raw, true())
``` extract from well-designed JSON ```
| spath path={}
| mvexpand {}
| spath input={}
| fields - _* {}

2. Transform the poorly designed JSON into table format with a little help from kv (aka extract), like this:

``` directly handle bad design ```
| spath path={}
| mvexpand {}
| fromjson {}
| fields - _* {} type
| mvexpand rows
| eval idx = mvrange(0, mvcount(columns))
``` the above is the same as the transformation ```
| foreach idx mode=multivalue
    [eval _raw = mvappend(_raw, json_extract(mvindex(columns, <<ITEM>>), "text") . "=" . mvindex(json_array_to_mv(rows), <<ITEM>>))]
| fields - rows columns idx
| extract

This will give you the same output as above. Here is a data emulation you can use to play with the above methods and compare with real data:

| makeresults
| eval _raw = "[{\"columns\":[{\"text\":\"id\",\"type\":\"string\"},{\"text\":\"event\",\"type\":\"number\"},{\"text\":\"delays\",\"type\":\"number\"},{\"text\":\"drops\",\"type\":\"number\"}],\"rows\":[[\"BM0077\",35602782,3043.01,0],[\"BM1604\",2920978,4959.1,2],[\"BM1612\",2141607,5623.3,6],[\"BM2870\",41825122,2545.34,7],[\"BM1834\",74963092,2409.0,8],[\"BM0267\",86497692,1804.55,44],[\"BM059\",1630092,5684.5,0]],\"type\":\"table\"}]"
``` data emulation above ```

A final note: spath has some difficulty handling an array of arrays, so I used fromjson (available since 9.0) in one filter.
Thank you @marnall. However, in my log (which has some lines between an event's end marker and the next event's start), something is wrong.

Some info extra info
##start_string
##time = 1711292017
##Field2 = 12
##Field3 = field_value
##Field4 = somethingelse
##Field8 = 1
##Field7 = 12
##Field6 = 1
##Field5 =
##end_string
Some info more info extra info
##start_string
##time = 1711291017
##Field2 = 12
##Field3 = field_value2
##Field4 = somethingelse3
##Field8 = 14
##Field7 = 12
##Field6 = 15
##Field5 =
##end_string
Some info more info info extra info
##start_string
##time = 1711282017
##Field2 = 12
##Field3 = asrsar
##Field4 = somethingelsec
##Field8 = 1
##Field7 = 12
##end_string
Some info extra info

Any idea how to delimit events between the markers ##start_string and ##end_string?

BR
JAR
It's such a mystery! I think I have tried everything already; each time the query has the eval command, no results come back.
We have Splunk Enterprise installed in almost 6 different regions (APAC/AUS/LATAM/EMEA/NA/LA) worldwide, and we are now looking for a feasibility check on implementing:
a. A single triage dashboard which can be deployed in one region and can access data coming from all 6 regions.
b. I understand that with the current setup we can't access Splunk data from one region in another region. However, is there a possibility, through API calls or any other method, to access another region's Splunk data?
Kindly assist us on this topic if anybody can help.
Hi @slearntrain, yes, sorry: I forgot the second part of the if statements, but now it's correct. What format would you like for the results? With this search you have in the same row: appid flowname endNBflow endPayload diff. If you want endNBflow and endPayload in different rows, where do you want to put the difference? Could you indicate how you would like the results? Ciao. Giuseppe
@ITWhisperer I have modified the changes as per your suggestion in the macros, but the issue with the data persists. When I select 7 days, data is visible in the dashboard. The query and dashboard screenshot are attached below.

index="tput_summary" sourcetype="tput_summary_1h"
| bin _time span=h
| table + _time LocationQualifiedName location date_hour date_mday date_minute date_month date_month date_second date_wday date_year count
| where like(LocationQualifiedName, "%/Aisle%Entry%")
| strcat "raw" "," location group_name
| search LocationQualifiedName="*/Aisle*Entry*" OR LocationQualifiedName="*/Aisle*Exit*"
| strcat "raw" "," location group_name
| timechart sum(count) as cnt by location

When I select 30 days, there is no data visible in the dashboard. You can see the query below as well.

index="tput_summary" sourcetype="tput_summary_1d"
| bin _time span="h"
| table + _time LocationQualifiedName location date_hour date_mday date_minute date_month date_month date_second date_wday date_year count
| where like(LocationQualifiedName, "%/Aisle%Entry%")
| strcat "raw" "," location group_name
| search LocationQualifiedName="*/Aisle*Entry*" OR LocationQualifiedName="*/Aisle*Exit*"
| strcat "raw" "," location group_name
| timechart sum(count) as cnt by location
Have there been any updates on methodologies for extracting multiple metrics in a single mstats call? I can do the work with a join across _time and the dimensions held in common, but after about 2 metrics the method gets a bit tedious.
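For what it's worth, newer Splunk versions accept more than one aggregation in a single mstats call; a minimal sketch, where the index name my_metrics and the metric names cpu.usage and mem.used are assumptions:

``` two metrics aggregated in one mstats call, split by host on a 5-minute span ```
| mstats avg(cpu.usage) AS avg_cpu max(mem.used) AS max_mem WHERE index=my_metrics span=5m BY host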
Hi Splunk Experts, I have some data coming into Splunk which has the following format:

[{"columns":[{"text":"id","type":"string"},{"text":"event","type":"number"},{"text":"delays","type":"number"},{"text":"drops","type":"number"}],"rows":[["BM0077",35602782,3043.01,0],["BM1604",2920978,4959.1,2],["BM1612",2141607,5623.3,6],["BM2870",41825122,2545.34,7],["BM1834",74963092,2409.0,8],["BM0267",86497692,1804.55,44],["BM059",1630092,5684.5,0]],"type":"table"}]

I tried to extract each field so that each value corresponds to id, event, delays and drops as a table, using the command below.

index=result
| rex field=_raw max_match=0 "\[\"(?<id>[^\"]+)\",\s*(?<event>\d+),\s*(?<delays>\d+\.\d+),\s*(?<drops>\d+)"
| table id event delays drops

I get the result in table format; however, it spits everything out as one whole table rather than individual entries, and I cannot manipulate the result. I have tried using mvexpand; however, it can only expand one field at a time, so that has not been helpful either. Does anyone know how we can properly get the table in Splunk?
Hi gcusello, thank you for responding. Unfortunately, I didn't quite get what I was looking for. Initially, when I ran your query, I got an error that the if function did not meet the requirements, so I had to add the true/false parameters. As I am looking for infotime for each of the steptypes, I added it in the true section: "latest(eval(if(steptype="endNBflow", infotime,0)))". Now, I need to find the "diff" (responseTime) in the table corresponding to the appid, as that is the response time for the message. I have modified your query based on the time formats that I want:

index="xyz" sourcetype=openshift_logs openshift_namespace="qaenv" "a9ecdae5-45t6-abcd-35tr-6s9i4ewlp6h3"
| rex field=_raw "\"APPID\"\:\s\"(?<appid>.*?)\""
| rex field=_raw "\"stepType\"\:\s\"(?<steptype>.*?)\""
| rex field=_raw "\"flowname\"\:\s\"(?<flowname>.*?)\""
| rex field=_raw "INFO ((?<infotime>\d{4}-\d{2}-\d{2}\s\d{2}:\d{2}:\d{2},\d{3}))"
| stats latest(eval(if(steptype="endNBflow", infotime,0))) AS endNBflow latest(eval(if(steptype="end payload",max(infotime),0))) AS endPayload BY appid flowname
| eval endNBflowtime=strptime(endNBflow,"%Y-%m-%d %H:%M:%S,%3N")
| eval endPayLoadtime=strptime(endPayLoad,"%Y-%m-%d %H:%M:%S,%3N")
| eval diff=endpayloadtime-endNBflowtime
| eval responseTime=strftime(diff,"%Y-%m-%d %H:%M:%S,%3N")

But now, how do I bring the response time into the table format corresponding to the appid?
Try using where and like() instead of search:

index="tput_summary" sourcetype="tput_summary_1d"
| bin _time span="h"
| table + _time LocationQualifiedName location date_hour date_mday date_minute date_month date_month date_second date_wday date_year count
| where like(LocationQualifiedName, "%/Aisle%Entry%")
| strcat "raw" "," location group_name
+1 on the "forget the KVstore" part. Unless you have one of those strange inputs which insist on keeping state in the KV store, just disable the KV store altogether and don't worry about it.
@ITWhisperer The search below is returning results as shown in the screenshot below.