All Posts


We have Splunk Enterprise installed in 6 different regions (APAC/AUS/LATAM/EMEA/NA/LA) worldwide, and we are now looking for a feasibility check on implementing: a. A single triage dashboard which can be deployed in one region and access data coming from all 6 regions. b. I understand that with the current setup we can't access Splunk data from one region in another region. However, is there a possibility, through API calls or any other method, that we can access another region's Splunk data? Kindly assist us on this topic if anybody can help.
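One possible avenue, sketched here under the assumption that the remote region's search head is network-reachable on its management port, is Splunk's REST search API; the hostname and credentials below are placeholders:

curl -k -u admin:changeme \
  "https://emea-splunk.example.com:8089/services/search/jobs/export" \
  --data-urlencode search="search index=main earliest=-15m | stats count by host" \
  -d output_mode=json

Splunk's Federated Search feature may also be worth evaluating for the single-dashboard use case, if the regions can be connected as federated providers.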
Hi @slearntrain, yes, sorry: I forgot the second part of the if statements, but now it's correct. What format would you like for the results? With this search you have in the same row: appid flowname endNBflow endPayload diff. If you want endNBflow and endPayload in different rows, where do you want to put the difference? Could you indicate how you would like the results laid out? Ciao. Giuseppe
@ITWhisperer I have modified the macros as per your suggestion, but I am still seeing the issue with the data. When I select 7 days, data is visible in the dashboard. The query and a dashboard screenshot are attached below.

index="tput_summary" sourcetype="tput_summary_1h"
| bin _time span=h
| table _time LocationQualifiedName location date_hour date_mday date_minute date_month date_second date_wday date_year count
| where like(LocationQualifiedName, "%/Aisle%Entry%")
| strcat "raw" "," location group_name
| search LocationQualifiedName="*/Aisle*Entry*" OR LocationQualifiedName="*/Aisle*Exit*"
| strcat "raw" "," location group_name
| timechart sum(count) as cnt by location

When I select 30 days, there is no data visible in the dashboard. You can see the query below.

index="tput_summary" sourcetype="tput_summary_1d"
| bin _time span="h"
| table _time LocationQualifiedName location date_hour date_mday date_minute date_month date_second date_wday date_year count
| where like(LocationQualifiedName, "%/Aisle%Entry%")
| strcat "raw" "," location group_name
| search LocationQualifiedName="*/Aisle*Entry*" OR LocationQualifiedName="*/Aisle*Exit*"
| strcat "raw" "," location group_name
| timechart sum(count) as cnt by location
Have there been any updates on methodologies for extracting multiple metrics in a single mstats call? I can do the work with a join across _time and the dimensions held in common, but after about 2 metrics the method gets a bit tedious.
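For what it's worth, recent Splunk versions accept several aggregations in a single mstats call, along these lines (the metric names, index, and dimension below are invented for illustration):

| mstats avg(cpu.user_percent) AS avg_cpu max(mem.used_mb) AS max_mem sum(net.bytes_out) AS total_bytes WHERE index=my_metrics span=5m BY host

This returns all three series keyed by the same _time and host, which avoids the join entirely.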
Hi Splunk Experts, I have some data coming into Splunk which has the following format:

[{"columns":[{"text":"id","type":"string"},{"text":"event","type":"number"},{"text":"delays","type":"number"},{"text":"drops","type":"number"}],"rows":[["BM0077",35602782,3043.01,0],["BM1604",2920978,4959.1,2],["BM1612",2141607,5623.3,6],["BM2870",41825122,2545.34,7],["BM1834",74963092,2409.0,8],["BM0267",86497692,1804.55,44],["BM059",1630092,5684.5,0]],"type":"table"}]

I tried to extract each field so that each value corresponds to id, event, delays and drops in a table, using the command below.

index=result
| rex field=_raw max_match=0 "\[\"(?<id>[^\"]+)\",\s*(?<event>\d+),\s*(?<delays>\d+\.\d+),\s*(?<drops>\d+)"
| table id event delays drops

I get the result in table format; however, it comes out as one whole table of multivalue fields rather than individual rows, and I cannot manipulate the result. I have tried using mvexpand, but it can only expand one field at a time, so that has not been helpful either. Does anyone know how we can properly get this table in Splunk?
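One commonly suggested pattern for this, sketched here on top of the rex extraction above (and assuming the four multivalue fields line up one-to-one), is to zip the values together with mvzip, expand once, and split each row back out:

index=result
| rex field=_raw max_match=0 "\[\"(?<id>[^\"]+)\",\s*(?<event>\d+),\s*(?<delays>\d+\.\d+),\s*(?<drops>\d+)"
| eval zipped=mvzip(mvzip(mvzip(id, event), delays), drops)
| mvexpand zipped
| eval fields=split(zipped, ",")
| eval id=mvindex(fields, 0), event=mvindex(fields, 1), delays=mvindex(fields, 2), drops=mvindex(fields, 3)
| table id event delays drops

mvzip joins the paired values with commas by default, so mvexpand only has to run once and each expanded row can be split back into its four fields.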
Hi gcusello, Thank you for responding. Unfortunately, I didn't quite get what I was looking for. Initially, when I ran your query, I got an error that the if function did not meet the requirements, so I had to add the true/false parameters. As I am looking for infotime for each of the steptypes, I added it in the true section: latest(eval(if(steptype="endNBflow", infotime, 0))). Now I need to find the "diff" (responseTime) in the table against each appid, as that is the response time for the message. I have modified your query based on the time formats that I want:

index="xyz" sourcetype=openshift_logs openshift_namespace="qaenv" "a9ecdae5-45t6-abcd-35tr-6s9i4ewlp6h3"
| rex field=_raw "\"APPID\"\:\s\"(?<appid>.*?)\""
| rex field=_raw "\"stepType\"\:\s\"(?<steptype>.*?)\""
| rex field=_raw "\"flowname\"\:\s\"(?<flowname>.*?)\""
| rex field=_raw "INFO (?<infotime>\d{4}-\d{2}-\d{2}\s\d{2}:\d{2}:\d{2},\d{3})"
| stats latest(eval(if(steptype="endNBflow", infotime, 0))) AS endNBflow latest(eval(if(steptype="end payload", infotime, 0))) AS endPayload BY appid flowname
| eval endNBflowtime=strptime(endNBflow, "%Y-%m-%d %H:%M:%S,%3N")
| eval endPayloadtime=strptime(endPayload, "%Y-%m-%d %H:%M:%S,%3N")
| eval diff=endPayloadtime-endNBflowtime
| eval responseTime=strftime(diff, "%Y-%m-%d %H:%M:%S,%3N")

But now how do I bring the response time into the table corresponding to the appid?
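A hedged sketch of that last step: since diff is a difference in seconds rather than a wall-clock timestamp, formatting it with tostring(diff, "duration") instead of strftime, and then tabling it alongside appid, may be what you want:

| eval responseTime=tostring(diff, "duration")
| table appid flowname endNBflow endPayload responseTime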
Try using where and like() instead of search

index="tput_summary" sourcetype="tput_summary_1d"
| bin _time span="h"
| table _time LocationQualifiedName location date_hour date_mday date_minute date_month date_second date_wday date_year count
| where like(LocationQualifiedName, "%/Aisle%Entry%")
| strcat "raw" "," location group_name
+1 on the "forget the KVstore" part. Unless you have one of those strange inputs which insist on keeping state in the KV store, just disable the KV store altogether and don't worry about it.
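For reference, a minimal sketch of disabling it, assuming you manage the HF's server.conf directly:

# server.conf on the heavy forwarder
[kvstore]
disabled = true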
@ITWhisperer The below search is returning results as in the screenshot below.
How about this search

index="tput_summary" sourcetype="tput_summary_1d"
| bin _time span="h"
| table _time LocationQualifiedName location date_hour date_mday date_minute date_month date_second date_wday date_year count
@ITWhisperer The below search is returning 0 results.
What does this search return?

index="tput_summary" sourcetype="tput_summary_1d"
| bin _time span="h"
| table _time LocationQualifiedName location date_hour date_mday date_minute date_month date_second date_wday date_year count
| search LocationQualifiedName="*/Aisle*Entry*"
| strcat "raw" "," location group_name
@ITWhisperer Events are present in sourcetype="tput_summary_1d" for 30 days. Events are present in sourcetype="tput_summary_1h" for 30 days. Please guide me on this.
Your previous search returned events from tput_summary_1h, whereas this latest search uses tput_summary_1d - check that there are events in your summary index for the *_1d sourcetype.
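A quick way to check that, over the same 30-day time range, would be a hedged sketch like:

index="tput_summary" | stats count by sourcetype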
It should work. Here is how I have it set up:

log sample: (at /tmp/hashlogs)

##start_string
##time = 1711292017
##Field2 = 12
##Field3 = field_value
##Field4 = somethingelse
##Field8 = 1
##Field7 = 12
##Field6 = 1
##Field5 =
##end_string
##start_string
##time = 1711291017
##Field2 = 12
##Field3 = field_value2
##Field4 = somethingelse3
##Field8 = 14
##Field7 = 12
##Field6 = 15
##Field5 =
##end_string
##start_string
##time = 1711282017
##Field2 = 12
##Field3 = asrsar
##Field4 = somethingelsec
##Field8 = 1
##Field7 = 12
##end_string

inputs.conf (on forwarder machine):

[monitor:///tmp/hashlogs]
index=main
sourcetype=hashlogs

props.conf (on indexer machine):

[hashlogs]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\n\r]+)##start_string

Result: (search is index=* sourcetype=hashlogs)
@ITWhisperer While expanding the macros I am getting the below search:

index="tput_summary" sourcetype="tput_summary_1d"
| bin _time span="h"
| table _time LocationQualifiedName location date_hour date_mday date_minute date_month date_second date_wday date_year count
| search LocationQualifiedName="*/Aisle*Entry*"
| strcat "raw" "," location group_name
| timechart sum(count) as cnt by location

The above search is not producing any results.
It seems like this is an issue with the scheduled task settings. To make the process run in the background, you could set it to "Run whether user is logged in or not", or you could set it to run as the SYSTEM user. You could also try using PowerShell to run a bat file containing your command(s):

powershell "start <executable path> -WindowStyle Hidden"
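As one hedged illustration of the SYSTEM-user route, the task could be created from an elevated prompt with schtasks (the task name and executable path below are placeholders):

schtasks /Create /TN "SplunkUFRestart" /TR "\"C:\Program Files\SplunkUniversalForwarder\bin\splunk.exe\" restart" /SC ONLOGON /RU SYSTEM /RL HIGHEST

Tasks run as SYSTEM are non-interactive, so no window is shown to the logged-on user.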
Hi, why do you want to keep the KV store running on your HFs? Usually it's not needed there! The only reason you would need it is some modular input which keeps its checkpoint there. Usually this means that you have written it yourself. If you really need it, then add the HF as an indexer on your MC and use it for monitoring. r. Ismo
For non-persistent instant clones using VMware's clone-prep, I have installed the UF with launchsplunk=0 onto the master/gold image. Then I run "splunk clone-prep-clear-config", set the service to manual so it doesn't start automatically on the master/gold image, and publish the desktops. I then have a scheduled task that runs a few minutes after the user logs on and calls an elevated "splunk.exe restart" command to erase the GUID and generate a new one before the splunkd service starts. Is there a way for the process that this invokes to run silently, i.e. no pop-up screen?
Try expanding the macros in the search to see what they are actually doing