Hi Team,
Need some help: I want to schedule the search below to run every hour, capture the important fields such as responseStatus, requestMethod, requestURL, servicePath, total requests, hour, day, etc., and write them to a CSV output file so that I can use the report for my dashboards.
The idea behind this is that our application logs millions of events per day, and when we want to look at historical data for reports, it takes a long time to run the search and load the dashboard.
I want to run this every hour and append the data to the existing CSV file.
I tried a lookup but it didn't work for me. Any solutions welcome.
time=2021-04-14T17:57:07+00:00 requestId=751411798490203 traceId=751411798490203 servicePath="/ecp/" remoteAddr=71.74.45.8 clientIp=24.161.128.196 clientAppVersion=NOT_AVAILABLE app_version=- apiKey=72c07648-ea14-34f2-abed-e38263580b5c oauth_leg=2-legged authMethod=oauth apiAuth=true apiAuthPath=/ecp/ oauth_version=1.0 target_bg=default requestHost=api.spectrum.net requestPort=8080 requestMethod=GET requestURL="/ecp/entitlements/v2/entitlements?divisionId=NEW.004&accountNumber=28290420" requestSize=560 responseStatus=200 responseSize=8422 responseTime=0.025 userAgent="IPVS" mapTEnabled="F" charterClientIp="V-1|IP-24.161.128.196|SourcePort-|TrafficOriginID-24.161.128.196" sourcePort="" oauth_consumer_key="72c27648-ea14-44f2-abed-e38263580b5c" x_pi_auth_failure="-" pi_log="pi_ngxgw_access"
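For illustration, this is a rough sketch of the kind of hourly summary search I have in mind; the base search, the field list, and the lookup name ecp_hourly_stats.csv are only placeholders:
"my search"
| dedup requestId
| bin _time span=1h
| eval hour=strftime(_time, "%H"), day=strftime(_time, "%a")
| stats count as total_requests by _time, hour, day, servicePath, requestMethod, requestURL, responseStatus
| outputlookup ecp_hourly_stats.csv append=true
Saved as a report scheduled to run every hour over the previous hour, each run would append one hour of summarized rows to the CSV, and the dashboards could then read that file instead of searching the raw events.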
Thanks @aasabatini.
I actually tried another way too, using timechart, and the statistics give the data in the format I need. But can this data be saved to an outputlookup file? I want to use that outputlookup file to plot charts for reporting on multiple services.
"my search" | dedup requestId | timechart span=1h count by responseStatus limit=0
Your query:
"my search" | dedup requestId | stats count by responseStatus limit=0 | transpose header_field=responseStatus
With outputlookup:
"mysearch" | | dedup requestId | stats count by responseStatus limit=0 | transpose header_field=responseStatus | outputlookup "ecpstats.csv" append=true
Thanks for the inputs.
But I want to see the counts in a horizontal format, like this (a rough example search follows the table):
200 | 201 | 202 | 203 | 204 | Total | Application | Date | Day | Time | RequestURL |
1 | 2 | 4 | 5 | 8 | 20 | ECP | 04/14/21 | Wed | 14:00 | Entitlements |
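A rough sketch of one way to produce that layout, assuming the responseStatus field from the sample event; the Application and RequestURL values are hard-coded placeholder labels and the base search is whatever is being filtered on:
"my search"
| dedup requestId
| bin _time span=1h
| chart count over _time by responseStatus limit=0
| eval Application="ECP", RequestURL="Entitlements", Date=strftime(_time, "%m/%d/%y"), Day=strftime(_time, "%a"), Time=strftime(_time, "%H:%M")
| fields - _time
| addtotals fieldname=Total
Here addtotals sums the numeric status-code columns into a Total column, and an outputlookup ... append=true can be added at the end, as in the other examples, if these rows should accumulate in a lookup file.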
Hi @vijaysubramania
In this case you need the transpose command.
https://docs.splunk.com/Documentation/Splunk/latest/SearchReference/Transpose
Now, I don't know your dataset, but let me show you an example.
A normal search gives these results:
index=_internal sourcetype=splunk_web_access | stats count by status
Use the transpose command at the end of the search and specify the header_field:
index=_internal sourcetype=splunk_web_access | stats count by status | transpose header_field=status
If you want to filter out the first column that transpose adds and keep only specific status columns, use table after the search:
index=_internal sourcetype=splunk_web_access | stats count by status | transpose header_field=status | table 200 303 304 404
Hope this helps.
Hi @vijaysubramania
you can use the outputlookup command:
"your search" | table "the list of your fields" | outputlookup "lookup name"
You can check the documentation here: https://docs.splunk.com/Documentation/Splunk/8.1.3/SearchReference/Outputlookup
P.S. If you want to save your data in append mode, you need to add the append option to your search:
"your search" | table "the list of your fields" | outputlookup "lookup name" append=true
Hi Team,
I have created a lookup file containing the count of each status code, the application name, and the day. I want to calculate the success % from the saved data without performing any transpose, since I am capturing all of the HTTP status codes each day. The purpose is that, because the volume of data is so high, I use the lookup table to reduce the load and get a faster response.
| inputlookup MC_V2_DAILY.csv
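A rough sketch of the kind of calculation intended, assuming the lookup stores one row per day, application, and status code with fields named status and count (adjust to the real column names; if the file is in the wide layout shown earlier, the 2xx columns would be summed directly instead):
| inputlookup MC_V2_DAILY.csv
| eval successCount=if(tonumber(status)>=200 AND tonumber(status)<300, count, 0)
| stats sum(successCount) as success, sum(count) as total by Application, day
| eval successPct=round(success/total*100, 2)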