All Posts

    index=myindex source="/var/log/nginx/access.log"
    | eval status_group=case(status!=200, "fail", status=200, "success")
    | stats count by status_group
    | eventstats sum(count) as total
    | eval percent=round(count*100/total, 2)
    | where status_group="fail"

    Looking at nginx access logs for a web application. This query tells me the number of failures (non-200 responses), the total number of calls (all messages in the log), and the percentage of failures versus the total, like so:

    status_group  count  percent  total
    fail          20976  2.00     1046605

    What I'd like to do next is timechart these every 30 minutes, to see what percentage of failures I get in each 30-minute window. The only attempt where I got close computed each window's failures as a percentage of the total calls in the whole log, skewing the result completely. Basically, I want a row like the one above for every 30 minutes of my search period. Feel free to rewrite the entire query, as I cobbled this together anyway.
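    One way to get per-window percentages (a sketch, untested, assuming `status` is already extracted as a field) is to bin events into 30-minute buckets and compute the failure ratio inside each bucket, rather than against the whole search with eventstats:

    ```
    index=myindex source="/var/log/nginx/access.log"
    | bin _time span=30m
    | eval is_fail=if(status!=200, 1, 0)
    | stats count as total sum(is_fail) as fail by _time
    | eval percent=round(fail*100/total, 2)
    | table _time fail total percent
    ```

    Because `total` is computed per `_time` bucket, each row's percentage is relative to that window's own call volume instead of the full search period.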
Haha why did I even split it up on two rows?! Thanks!  
Hi @nithin204 , good for you, see next time! Ciao and happy splunking Giuseppe P.S.: Karma Points are appreciated
I'm having the same exact error on Server 2022, except with UF v9.1.3.
Thank you for the reply. The errors I'm getting are all under splunkd errors and include [tcpoutautolb-0, file monitor input, ingestion latency, real-time reader-0, and more]. Sadly, I did not save my errors before I decided to delete the Splunk instances and try to reinstall. I believe I was receiving logs in the _internal index. My deployment looks like this: (Splunk all-in-one (Red Hat Linux)) ------------------- (Splunk UF (Rocky Linux))
You're close. Combine the case functions to produce a single cost field:

    index=idxCheap OR index=idxExpensive
    | stats count by index
    | eval idxCost = case(index="idxCheap", count*10, index="idxExpensive", count*20)
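    If a grand total across both indexes is also wanted, one option (a sketch building on the combined query above) is to append a summary row:

    ```
    index=idxCheap OR index=idxExpensive
    | stats count by index
    | eval idxCost = case(index="idxCheap", count*10, index="idxExpensive", count*20)
    | addcoltotals labelfield=index label="Total"
    ```

    `addcoltotals` adds a final row containing the sum of each numeric column, labelled "Total" in the index column.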
Hi @nithin204, the way to pass a parameter to a drilldown is the one I described. Please try this: <link>search?q=$drilldown_srch|u$$&amp;earliest=$field1.earliest$&amp;latest=$field1.latest$</link> Anyway, a drilldown search usually takes the same time variables as the original. Ciao. Giuseppe
I have a "cost" for two different indexes that I want to calculate in one and the same SPL. As the "price" differs by index, I can't just use a "by" clause in my count/sum, since I don't know how to apply the separate costs that way. Let's say idxCheap costs $10 per event and idxExpensive costs $20 per event. I've written this SPL that works, although the "cost" data ends up in a separate column for each index. The count is still in the same column.

    index=idxCheap OR index=idxExpensive
    | stats count by index
    | eval idxCheapCost = case(index="idxCheap", count*10)
    | eval idxExpensiveCost = case(index="idxExpensive", count*20)

The results look like this:

    count  idxCheapCost  idxExpensiveCost  index
    44892  448920                          idxCheap
    155                  3100              idxExpensive

Any pointers on how to most efficiently and dynamically achieve this?
No, exactly. I have configured two tcpout groups, one for the production servers and the other for the development server. The data is not sent to both tcpout groups, only to one. However, the internal agent logs are indeed sent to both the development and production tcpout groups. Is it possible that the internal agent logs have the behavior you mention?
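    That behavior typically comes from outputs.conf on the forwarder: data with no explicit routing is cloned to every group listed in defaultGroup, and that includes the forwarder's own _internal logs, while inputs that set _TCP_ROUTING go only to the named group. A sketch with hypothetical group and host names:

    ```
    # outputs.conf on the forwarder (hypothetical names)
    [tcpout]
    # anything without explicit routing, including internal logs, is cloned to both
    defaultGroup = prod_group, dev_group

    [tcpout:prod_group]
    server = prod-indexer:9997

    [tcpout:dev_group]
    server = dev-indexer:9997

    # inputs.conf -- an input with _TCP_ROUTING overrides defaultGroup
    [monitor:///var/log/app.log]
    _TCP_ROUTING = prod_group
    ```

    Under that configuration the monitored app data goes only to prod_group, but the forwarder's internal logs still follow defaultGroup and reach both groups.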
Hi @gcusello,  I have to use the second $ as well after drilldown_srch, as that is a token.  <link>search?q=$drilldown_srch$?earliest=$field1.earliest$&amp;latest=$field1.latest$|u$</link> If I skip the second "$" after drilldown_srch and click the value, the new search opens as $drilldown_srch in the search bar in the new window. If I use $drilldown_srch$, the search works correctly but does not take the time variables. It always defaults to 15 minutes.  Thanks 
I already figured out how to get it to work.
I want to calculate the average count per day and the maximum count per month. Like all the Mondays and Tuesdays of a given month combined and averaged.
Applying the stanza you referenced below worked for us as well:

    [indexAndForward]
    index = true
    selectiveIndexing = true

Thanks!
Hi @adamsobczykhsbc , good for you, see next time! Ciao and happy splunking Giuseppe P.S.: Karma Points are appreciated
Hi @Strangertinz, sorry, but what do you want to calculate with avg(count) and max(count)? count isn't a field you can take an average or maximum of until you create it. You can get the count of events by period:

    index=_internal
    | bucket _time span=1d
    | eval date=strftime(_time,"%a-%b")
    | stats count by date

Ciao. Giuseppe
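    To get "all the Mondays of a given month combined and averaged", one option (a sketch, untested) is to count per calendar day first, then aggregate those daily counts by weekday and month:

    ```
    index=_internal
    | bin _time span=1d
    | stats count as daily_count by _time
    | eval weekday=strftime(_time, "%A"), month=strftime(_time, "%Y-%m")
    | stats avg(daily_count) as avg_daily max(daily_count) as max_daily by month weekday
    ```

    The first stats produces one row per day; the second averages those daily counts across, e.g., all the Mondays of each month, and max_daily keeps the busiest single day in that group.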
Hi @MorgenHepton, could you share your error? Are you receiving Splunk internal logs? You can check by running a simple search:

    index=_internal host=<your_host>

What is your architecture? I understood that you have a stand-alone Splunk server (an all-in-one installation) and a Universal Forwarder (on a different system) that should send logs to the stand-alone system; is that correct? Running a telnet from the UF to ports 9997 and 8089, can you connect to the stand-alone server? Ciao. Giuseppe
You would have to share some sample events (anonymised, of course) in order for us to determine why your rex command is not extracting the code for you.
I have used Splunk to threat hunt many times and have aspirations to build a distributed Splunk instance in the future. I decided to start learning the installation, configuration, and deployment process of Splunk by building a standalone instance. I got to a point where I thought I had completed all the steps necessary for a functioning Splunk setup (connections are established on 8089 and 9997) and my web page is good. As soon as my apps are pushed to my client, Splunk starts throwing an error stating the indexers and queues are full. It also appears I am getting no logs from my applications. Any help is greatly appreciated.
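    When queues are reported full, a common first check (a sketch, untested) is the queue metrics the forwarder and indexer write to _internal:

    ```
    index=_internal source=*metrics.log* sourcetype=splunkd group=queue
    | stats max(current_size_kb) as current_kb max(max_size_kb) as max_kb by host, name
    ```

    Queues whose current_kb sits at max_kb are blocked; working backwards from the indexing queue toward the parsing and tcpin queues usually points at the actual bottleneck.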
I am using the query below to merge two queries using append. However, I am unable to get the value of the field named "Code" from the first query (under | search "Some Logger") printed in the Statistics section:

    index=* sourcetype=* host=*
    | search "Some Logger"
    | rex "LoggerName\|(?<time>\w+)\|(?<Service>\w+)\|(?<Type>\w+)\|(?<brand>\w+)\|(?<template>\w+)\|(?<hashId>[\w-]+)\|(?<Code>\w+)"
    | table Code
    | append
        [ search host=*
        | search "LoggerName2*"
        | rex field=_raw "field1=(?<field1>)\}"
        | rex field=_raw "field2=(?<field2>),"
        | rex field=_raw "field3=(?<field3>[a-zA-Z-_0-9\s]*)"
        | rex field=_raw "(?<field4>[\w-]+)$"
        | rex field=_raw "field5=(?<field5>),"
        | rex field=_raw "field6=(?<field6>),"
        | table field1,field2 ]

The result from the second/child query, i.e. | search "LoggerName2*", prints just fine in tabular format. The value of the Code field is an API response code, i.e. 2XX, 3XX, 4XX, or 5XX. Could someone please help? Thanks!
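    For reference, append simply concatenates the child's rows after the parent's rows, and the final table shows the union of columns: parent rows have values only in Code, child rows only in field1/field2. A minimal illustration with makeresults (hypothetical data, just to show the shape):

    ```
    | makeresults | eval Code="200"
    | table Code
    | append
        [ | makeresults | eval field1="a", field2="b"
        | table field1, field2 ]
    ```

    So if Code is empty even in the parent's own rows, the rex in the first search is probably not matching the events; running the parent search alone (everything before | append) is the quickest way to confirm.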
Thanks for your quick response, but the query is not working.