All Posts

Hi @Muthu_Vinith, you first have to create the lookup and the lookup definition (don't forget the definition!). Then define the field list of the new lookup from the fields in the index, and finally create a search ending with the outputlookup command (https://docs.splunk.com/Documentation/Splunk/9.2.0/SearchReference/Outputlookup). So you can run something like this:

index=abc
| dedup field1 field2 field3
| sort field1 field2 field3
| table field1 field2 field3
| outputlookup your_lookup.csv

Analyze the options of the outputlookup command to find the ones that you require. Ciao. Giuseppe
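If the lookup already exists and you want to add rows rather than overwrite it, outputlookup also takes an append option; a minimal sketch, reusing the placeholder fields above:

index=abc
| table field1 field2 field3
| outputlookup append=true your_lookup.csv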
Looking for some advice please! I have pushed the Splunk UF via MS Intune to all domain laptops. All looks well with the config file and the settings for the reporting server and ports. On an example machine, if I go to Services, SplunkForwarder is running. These logs are meant to be pushed to our CyberDefence 3rd party. However, it seems Splunk has no rights to send logs (possibly due to the 'Log on as' settings in the SplunkForwarder service). Has anyone ever encountered and resolved this before, or completed a Splunk UF install via Intune?
Your expected output doesn't have a time element, so why are you using timechart, or indeed bin _time?
Hi @New_splunkie, It's great to see your interest in Data Manager! You're absolutely correct – Data Manager is a native app on the Splunk Cloud Platform, which means there's no separate installation required. However, there are a few requirements that your deployment must meet in order to have it on your Splunk Cloud Platform.

You can use Data Manager if your Splunk Cloud Platform deployment meets the following requirements:
- Runs Splunk Cloud Platform version 8.2.2104.1 or higher on the Victoria experience.
- Is provisioned in a region that supports Data Manager. See Available regions and region differences in the Splunk Cloud Platform Service Description.

Please let me know if that helps! Antoni
As I said, it depends on your data. For example, Apache HTTPD logs (and other HTTPD servers) log transactions using the timestamp for when the request was received, but the event is added to the log when the response is sent back. This means that the event time could be minutes out from the index time even if the log was indexed instantaneously (which it isn't, as there will always be a lag between when the log is written and when it reaches the indexers). However, in this instance, the time the response was sent could be inferred from the request time and the duration, so this could be used to compare against the index time to give you a better idea about the lag.

Perhaps what might be more useful to you is the difference between successive index times? This might show you when either there was a pause in logging or a breakdown in transmission of the log events to the indexers. However, this would need to be compared with the actual rate at which the events were written to the log, so, again, it depends on your data.
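As a sketch of both measurements (the index, sourcetype, and field names here are placeholders, not from this thread): the first search charts the lag between event time and index time, the second the gap between successive index times.

index=web sourcetype=access_combined
| eval lag_seconds=_indextime - _time
| timechart avg(lag_seconds) as avg_lag max(lag_seconds) as max_lag

index=web sourcetype=access_combined
| eval it=_indextime
| sort 0 it
| streamstats current=f last(it) as prev_it
| eval index_gap_seconds=it - prev_it
| timechart max(index_gap_seconds) as max_index_gap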
Brackets in the wrong place, and it looks like the else part of the first if should start with another if:

| eval Test=if(like('thrown.extendedStackTrace',"%403%"),"403", if(like('thrown.extendedStackTrace',"%404%"),"404","###ERROR####"))
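If more status codes need handling, the nesting gets unwieldy; an equivalent sketch using case() with a true() default:

| eval Test=case(like('thrown.extendedStackTrace',"%403%"),"403", like('thrown.extendedStackTrace',"%404%"),"404", true(),"###ERROR####")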
Hello @ITWhisperer, Thank you for your response. If I take the last 10 days, for some hours I get more index data points than event data points, and over time it changes to more event data points than index data points. Additionally, there is no clear cyclic pattern during the day for when this change happens: for some periods of time the former is observed, and for others the latter.

The problem I am trying to solve is to identify potential data ingestion issues that may exist for a given data source. The anticipated pattern is that event data points and indexed data points closely follow the same pattern, with the event data point volume slightly greater than or equal to the indexed data point volume. When the event data point pattern and the indexed data point pattern differ over the same period of time, or when there are more indexed data points than event data points, that is when an issue may be occurring. This is what I am trying to identify at run time, or close to run time, and later address for each data source.

Please share if the above helps to answer your questions; I am seeking more guidance on the topic. Thank you
Create a search to find the data you want from your index, then use outputlookup to send it to a lookup source.
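For example, a minimal sketch (index and field names are placeholders):

index=abc
| table field1 field2
| outputlookup my_new_lookup.csv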
Set this alert to run every 30 minutes, looking back 1 hour. Note the bin and by _time, so you get one average per 30-minute bucket (without them, stats returns a single row and the weighting does nothing):

index=my_index source="/var/log/nginx/access.log"
| bin _time span=30m
| stats avg(request_time) as Average_Request_Time by _time
| streamstats count as weight
| eval alert=if(Average_Request_Time>1,weight,0)
| stats sum(alert) as alert
| where alert==1

alert==1 can only happen when the first 30-minute bucket (weight 1) was over the threshold and the second (weight 2) was under it, i.e. the alert has recovered.
@ITWhisperer thanks for the solution, I made a few small changes to get my desired results.
Hey Experts, I'm new to Splunk and I'm trying to create a new lookup from data in index=abc. Can someone please guide me on how to achieve this? Any help or example queries would be greatly appreciated. Thank You!
Hello @ITWhisperer, I am trying to get the details of "the volume of data ingestion, broken down by index group". I tried this SPL but am unable to get the results in the table:

index=summary source="splunk-ingestion"
| dedup keepempty=t _time idx
| stats sum(ingestion_gb) as ingestion_gb by _time idx
| bin _time span=1h
| eval ingestion_gb=round(ingestion_gb,3)
| eval group_field=if(searchmatch("idx=.*micro.*group1"), "group1", searchmatch("idx=.*soft.*"), "group2", true(), "other")
| timechart limit=0 span=1d sum(ingestion_gb) as GB by group_field

We have a list of indexes like: AZ_micro, micro, AD_micro, Az_soft, soft, AZ_soft. From the above, the 'micro' indexes are grouped under the name 'microgroup', while the 'soft' indexes are grouped under 'softgroup', and so on. So, in the table I want to show the volume of the "groups" like:

group name  | volume
------------|--------
microgroup  | <0000>
softgroup   | <0000>
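One likely culprit: if() takes exactly three arguments, so the multi-branch eval above is malformed. A sketch of the grouping using case() and match() instead (the exact index-name patterns are assumptions):

index=summary source="splunk-ingestion"
| eval group_field=case(match(idx,"micro"),"microgroup", match(idx,"soft"),"softgroup", true(),"other")
| stats sum(ingestion_gb) as volume by group_field
| eval volume=round(volume,3)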
index=my_index source="/var/log/nginx/access.log"
| stats avg(request_time) as Average_Request_Time
| where Average_Request_Time > 1

I have this query set up as an alert for when my web app request duration goes over 1 second, and it searches back over a 30-minute window. I want to know when this alert has recovered. So I guess this effectively means running this query twice, against the 1st 30 minutes of an hour and then the 2nd 30 minutes, and giving me a result I can alert on when it gets returned. The result would be an indication that the 1st 30 minutes was over 1 second average duration and the 2nd 30 minutes was under 1 second average duration and thus, it recovered. I have no idea where to start with this! But I do want to keep the alert query above for my main alert of an issue and have a 2nd alert query for this recovery element. Hope this is possible.
Are you saying that for every hour there are more index data points than event data points, or that it happens only sometimes? Even then, let's say you have a lag between the event time and the index time, and that indexing happens at 5 minutes past the hour, but the events picked up are timestamped from 5 minutes before to 5 minutes past. The count for that index time will include events which are not in that hour. Index time and event time are two different scales running independently of each other. Depending on your source data, events may be indexed before or after their event time. What problem is it that you are trying to solve?
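One way to compare the two scales is to run the same count twice, once binned by event time and once binned by index time (the index name is a placeholder), then look for hours where the two series diverge:

index=your_index
| bin _time span=1h
| stats count as event_time_count by _time

index=your_index
| eval _time=_indextime
| bin _time span=1h
| stats count as index_time_count by _time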
Did you get any solution?
Works perfectly, thanks!
I found that using the following match condition is enough to get the job done:

<condition match="$row.gender$==&quot;female&quot;">

Thanks for your answer. It let me find out that there is such a thing as a conditional drilldown!
Having a similar issue:

| eval Test= if( (like('thrown.extendedStackTrace',"%403%"),"403"),(like('thrown.extendedStackTrace',"%404%"),"404"),"###ERROR####")

But getting this error: Error in 'EvalCommand': The expression is malformed. Expected ).
Excellent, that worked. Thank you!!
Single quotes around field names with dots in:

| eval Test1=substr('thrown.extendedStackTrace', 1, 3)