All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Try something like this:

index=myindex source="/var/log/nginx/access.log"
| bin _time span=30m
| stats count as total count(eval(status!=200)) as fail by _time
| eval percent=round(fail*100/total,2)
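For anyone who wants to see the arithmetic outside Splunk, here is the same 30-minute bucketing and failure-rate calculation sketched in Python (the sample events and bucket size are made up for illustration):

```python
from collections import defaultdict

SPAN = 30 * 60  # 30-minute buckets, like `bin _time span=30m`

# (epoch_seconds, http_status) pairs standing in for nginx access-log events
events = [(0, 200), (10, 500), (1900, 200), (1905, 200), (1910, 404)]

totals = defaultdict(int)
fails = defaultdict(int)
for ts, status in events:
    bucket = ts - ts % SPAN          # floor each timestamp to its bucket start
    totals[bucket] += 1
    if status != 200:                # same test as count(eval(status!=200))
        fails[bucket] += 1

# percent of failures per bucket, not per whole search period
percent = {b: round(fails[b] * 100 / totals[b], 2) for b in totals}
print(percent)  # {0: 50.0, 1800: 33.33}
```

The key point is that both the failure count and the total are computed per bucket, so the percentage is relative to each 30-minute window rather than to the whole log.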
| rex field=line "(?<ID>\w+)_ETC_RFG:.*/(?<NUM>\d+)" | eval ETC="ETC", RFG="RFG"
Your regexes do not match your sample events. For example, the logger1 regex could look like this (assuming your example event is accurate):

| rex "\<LoggerName\d\>\|(?<time>\w+)\|(?<Service>\w+)\|(?<Type>\w+)\|(?<brand>\w+)\|(?<template>\w+)\|(?<hashId>[\w-]+)\|(?<Code>\w+)"
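As a quick sanity check outside Splunk, the same pattern can be run against the Logger 1 sample event with Python's re module (Python writes named groups as (?P<name>...); the rest is PCRE-compatible):

```python
import re

# Logger 1 sample event from the question
event = ("2024-02-08 16:46:00.353 INFO 54208 ---[XX_XX:XXX-XX-XXX]"
         "[cutor-thread-22] XXXXXXXXX : "
         "<LoggerName1>|17327025|field|field|field|field|field|200")

# Same pattern as the rex above, with Python-style named groups
pattern = (r"<LoggerName\d>\|(?P<time>\w+)\|(?P<Service>\w+)\|(?P<Type>\w+)"
           r"\|(?P<brand>\w+)\|(?P<template>\w+)\|(?P<hashId>[\w-]+)\|(?P<Code>\w+)")

m = re.search(pattern, event)
print(m.groupdict())
# {'time': '17327025', 'Service': 'field', 'Type': 'field', 'brand': 'field',
#  'template': 'field', 'hashId': 'field', 'Code': '200'}
```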
Thank you @gcusello, solution accepted.

A side question, hopefully an easy one: I was googling how to increase the width of an input field, but I mostly see HTML examples, and I only have <form> in mine. How can I change the width of the below input?

<input type="multiselect" token="field1" searchWhenChanged="true">
  <label>Select Hostname</label>
  <fieldForLabel>Hostname</fieldForLabel>
  <fieldForValue>Hostname</fieldForValue>
  <search>
    <query>index = 1234 </query>
    <earliest>-7d@h</earliest>
    <latest>now</latest>
  </search>
  <delimiter> OR </delimiter>
</input>
Forgive me, I'm new to this. I've updated the config file and restarted the service. Is there anything else I should be doing to be able to search logs on my instance, or do I need to do some sort of registration over on the Splunk instance? We're using cloud.

Many thanks!
Exactly what have you tried?
Hi @thaghost99, good for you, see you next time! Let me know if I can help you more, or, please, accept one answer for the other people of the Community. Ciao and happy splunking. Giuseppe P.S.: Karma Points are appreciated
Sorry if I confused you. Yes, ETC and RFG are fixed and need to be added to the table.

Log line:
05:02:05.213 Txt 46000 008a456b37de5982_ETC_RFG: (Q056) play this message id:announcement/4637825, duration:58

I tried to get the result with the below query, but it is not working properly:

index=dg_hdgf_yrgt "(Q056) play this message"
| rex field=_raw "Txt\s+46000\s+(?<IDvalue>\w+)"
| rex field=_raw "announcement/(?<messagefile>\d+)"
| where NOT isnull(messagefile)
| mvexpand IDvalue
| makemv IDvalue delim="_"
| eval IDvalue=mvindex(IDvalue,0)
| eval ENV=mvindex(IDvalue,1)
| eval LOB=mvindex(IDvalue,2)
| search LOB=RFG
| table IDvalue,ENV,LOB,messagefile
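Outside Splunk, the intended extraction and split can be sanity-checked with Python's re module; this is a sketch against the sample log line above, with variable names mirroring the SPL fields:

```python
import re

line = ("05:02:05.213 Txt 46000 008a456b37de5982_ETC_RFG: "
        "(Q056) play this message id:announcement/4637825, duration:58")

# Same patterns as the two rex commands (Python uses (?P<name>...) for named groups)
idvalue = re.search(r"Txt\s+46000\s+(?P<IDvalue>\w+)", line).group("IDvalue")
messagefile = re.search(r"announcement/(?P<messagefile>\d+)", line).group("messagefile")

# Splitting on "_" (what makemv delim="_" does) yields the three parts
parts = idvalue.split("_")
print(parts)        # ['008a456b37de5982', 'ETC', 'RFG']
print(messagefile)  # '4637825'
```

One thing worth checking in the SPL: `eval IDvalue=mvindex(IDvalue,0)` overwrites the multivalue field with its first element before ENV and LOB are read from it, so those two evals may come back null; reading all three indexes from the multivalue field before overwriting it may be why the table comes out wrong.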
oh man. you are super fast @gcusello,   and it works just how i want it to.. you are amazing. thank you very very much.
Logger 1 events:
2024-02-08 16:46:00.353 INFO 54208 ---[XX_XX:XXX-XX-XXX][cutor-thread-22] XXXXXXXXX : <LoggerName1>|17327025|field|field|field|field|field|200

Logger 2 events:
2024-02-13 13:58:24.174 INFO 54208 ---[XX_XX:xx-xxx-xxx][utor-thread-XXX] c.l.s.d.a.XXXXXXX : XXX-XXX-20000: XXX: true, XXX: XXXXXXXXXXX0305a8a3f369f518, XXXX: 6shfsgj7601f909<LoggerName2> {notificationDetails={key=XXX, key=hXXXXXn@XX.com , key=XXXXX, key=XXX, key=donotreply@XXX.co.uk, key=XXX63801, key=XXX_20240213, key=XXX_BATCH}, templateVariables={key=XXXX7757, key=9NE, key=Mr, key=Mau}} , 583d6bc3-5e7d-4af8-a626-22db8bb50cb9
Hi @thaghost99, please try this regex: (?<my_field>.*ethernet[^\n]+(\n.*){5}) that you can test at https://regex101.com/r/6MlmNV/1 Ciao. Giuseppe
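Besides regex101, the pattern can also be sanity-checked with Python's re module; this is a sketch against a trimmed version of the sample output from the question (Python writes named groups as (?P<name>...)):

```python
import re

# Trimmed sample of the device output from the question
text = """Redundant-ethernet Information:
Name Status Redundancy-group
reth0 Down Not configured
reth1 Up 1
reth2 Up 1
reth3 Up 1
reth4 Down Not configured"""

# The keyword line plus the next 5 lines: '.' does not cross '\n' by default,
# so '.*ethernet[^\n]+' captures the full line containing the keyword,
# and each '(\n.*)' repetition consumes exactly one following line.
m = re.search(r"(?P<my_field>.*ethernet[^\n]+(\n.*){5})", text)
print(m.group("my_field"))
# Redundant-ethernet Information:
# Name Status Redundancy-group
# reth0 Down Not configured
# reth1 Up 1
# reth2 Up 1
# reth3 Up 1
```

To make it generic for other keywords, swap out "ethernet" and adjust the {5} repetition count.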
Hi, I would like some help on how to extract the next 5 lines after a keyword, where it also extracts the full line the keyword is part of. Example below, where the keyword is 'ethernet':

**********************************************
Redundant-ethernet Information:
Name Status Redundancy-group
reth0 Down Not configured
reth1 Up 1
reth2 Up 1
reth3 Up 1
reth4 Down Not configured
reth5 Down Not configured
reth6 Down Not configured
reth7 Down Not configured
reth8 Down Not configured
reth9 Up 2

Redundant-pseudo-interface Information:
Name Status Redundancy-group
lo0 Up 0
*****************************************

An example value of the field would now be:

Redundant-ethernet Information:
Name Status Redundancy-group
reth0 Down Not configured
reth1 Up 1
reth2 Up 1
reth3 Up 1

Thanks. If it can be made generic enough, I can reuse it for other rex searches on similar data.
I tried many ways but I am not getting the expected output.
Hi @MorgenHepton, did you configure your UF to send logs to the Indexer? For more info see https://docs.splunk.com/Documentation/Forwarder/9.2.0/Forwarder/Configuretheuniversalforwarder Ciao. Giuseppe
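For reference, forwarding from a UF is configured in outputs.conf on the forwarder. A minimal sketch, where the indexer address 10.0.0.5 and the default receiving port 9997 are placeholders to replace with your own:

```ini
# $SPLUNK_HOME/etc/system/local/outputs.conf on the Universal Forwarder
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
# host:port of the indexer; 9997 is the conventional receiving port
server = 10.0.0.5:9997
```

The indexer side must also have receiving enabled on that port, and the UF needs a restart after the change.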
index=myindex source="/var/log/nginx/access.log"
| eval status_group=case(status!=200, "fail", status=200, "success")
| stats count by status_group
| eventstats sum(count) as total
| eval percent=round(count*100/total,2)
| where status_group="fail"

Looking at nginx access logs for a web application. This query tells me the number of failures (non-200), the total number of calls (all messages in the log), and the % of failures vs. total, as follows:

status_group  count  percent  total
fail          20976  2.00     1046605

What I'd like to do next is timechart these every 30m to see what % of failures I get in 30-minute windows, but in the only attempt where I got close, the percentage was computed against the total calls in the whole log, skewing the result completely. Basically a row like the above, but for every 30 minutes of my search period. Feel free to rewrite the entire query as I cobbled this together anyway.
Haha why did I even split it up on two rows?! Thanks!  
Hi @nithin204, good for you, see you next time! Ciao and happy splunking. Giuseppe P.S.: Karma Points are appreciated
I'm having the same exact error on Server 2022, except with UF v9.1.3.
Thank you for the reply. The errors I'm getting are all under splunkd; they include [tcpoutautolb-0, file monitor input, ingestion latency, real-time reader-0, and more].

Sadly, I did not save my errors before I decided to delete the Splunk instances and try to reinstall. I believe I was receiving logs in the index _internal.

My deployment looks like this: (Splunk all-in-one (Red Hat Linux))-------------------(S.U.F. (Rocky Linux))