All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


How many index-sourcetype pairs do you have, approximately? Just those few, or e.g. tens or hundreds?
Perhaps it is this line? | eval hour=strftime(_time, "%H") The _time value here will be the start of the day on which the summary index was updated, i.e. the hour will always be 00.
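To illustrate the point above outside of Splunk, here is a minimal Python sketch (the timestamp is made up): once _time has been binned to the start of the day, deriving the hour from it always yields 00.

```python
from datetime import datetime

# A raw event timestamp, and the same timestamp after day-level binning
# (the analogue of | bin _time span=1d in SPL)
raw = datetime(2023, 10, 16, 14, 37, 5)
binned = raw.replace(hour=0, minute=0, second=0)

print(raw.strftime("%H"))     # hour of the original event: "14"
print(binned.strftime("%H"))  # always "00" once binned to the day
```

This is why re-deriving hour from _time in the retrieval search loses the original hour: the summary rows carry the day-binned timestamp, so the stored hour field should be used instead.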
I have already shared this before; the events are in HTML.   Disposition: inline Subject: INFO - Services are in Maintenance Mode over 2 hours -- AtWork-CIW-E1 Content-Type: text/html <font size=3 color=black>Hi Team,</br></br>Please find below servers which are in maintenance mode for more than 2 hours; </br></br></font> <table border=2> <TR bgcolor=#D6EAF8><TH colspan=2>Cluster Name: AtWork-CIW-E1</TH></TR> <TR bgcolor=#D6EAF8><TH colspan=1>Service</TH><TH colspan=1>Maintenance Start Time in MST</TH></TR><TR bgcolor=#FFB6C1><TH colspan=1>oozie</TH><TH colspan=1>Mon Oct 16 07:29:46 MST 2023</TH></TR> </table> <font size=3 color=black></br>  Please check the bold characters. I want this in table format.
Hi @bmanikya, could you share more sample logs? Because, as you can see on regex101.com, my regex works on the shared sample. Ciao. Giuseppe
Hi @smanojkumar, please try this regex: (10\d)|201|205|(30[1-3]) which you can test at https://regex101.com/r/ujeYQM/1 Ciao. Giuseppe
Hi there!    In inputs.conf whitelist, how do I create a regex expression for whitelisting files which contain a certain number (101-109, 201, 205, 301-303)?
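As a quick sanity check of the pattern suggested above (a Python sketch; the filenames are made up): note that 10\d also matches 100, so if you need exactly 101-109 you can tighten it to 10[1-9]. Also keep in mind that an unanchored regex matches substrings, so e.g. 1013 would also contain a match for 101.

```python
import re

# Tightened variant of the suggested pattern: 101-109, 201, 205, 301-303
pattern = re.compile(r"10[1-9]|201|205|30[1-3]")

files = ["report_101.log", "report_100.log", "report_205.log", "report_304.log"]
matches = [f for f in files if pattern.search(f)]
print(matches)  # 101 and 205 match; 100 and 304 do not
```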
Hi @ITWhisperer, It is fine, but the prefix "form.office_filter=" is affecting the token used in the search, and the link is not as expected. Here is the link: &form.office_filter%3DBack%20Office%26form.office_filter%3DFront%20Office=& If I use this instead, it works: &form.office_filter=Back%20Office&form.office_filter=Front%20Office=& In the first link, = is replaced by %3D and & is replaced by %26. Can you please help me with this? Thanks!
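The difference between the two links is that the first percent-encodes the query-string delimiters themselves. A small Python sketch of what a query-string parser sees in each case (the parameter values are taken from the post above):

```python
from urllib.parse import parse_qs

# Delimiters encoded: %3D and %26 instead of literal = and &
broken = "form.office_filter%3DBack%20Office%26form.office_filter%3DFront%20Office"
# Delimiters literal: the parser can split into name/value pairs
working = "form.office_filter=Back%20Office&form.office_filter=Front%20Office"

# No literal '=' is present, so no name/value pair is recognised at all
print(parse_qs(broken))
# Two values are collected for the same parameter name
print(parse_qs(working))
```

In other words, only the data (spaces, etc.) should be percent-encoded; the = and & that separate parameters must stay literal, which is why the second link works.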
Hi Team, I'm using a summary index for the requirement below:
1. Store daily counts of HTTP_Status_Code per hour for each application (app_name) in a daily summary index.
2. Once a week, calculate the average for each app_name by hour and HTTP_STATUS_CODE over the values stored in the daily summary index.
3. These average values will be shown in a dashboard widget.
But when I try to calculate the average over the stored values, it isn't working. These are the steps I'm following:
1. Push HTTP_STATUS_CODE, _time, hour, day, app_name and count, along with value="Summary_Test" (for ease of filtering), to the daily index named "summary_index_1d". Note: app_name is an extracted field with 25+ different values.

index="index" | fields HTTP_STATUS_CODE,app_name | eval HTTP_STATUS_CODE=case(like(HTTP_STATUS_CODE, "2__"),"2xx",like(HTTP_STATUS_CODE, "4__"),"4xx",like(HTTP_STATUS_CODE, "5__"),"5xx") | eval hour=strftime(_time, "%H") | eval day=strftime(_time, "%A") | bin _time span=1d | stats count by HTTP_STATUS_CODE,_time,hour,day,app_name | eval value="Summary_Test" | collect index=summary_index_1d

2. Retrieve data from the summary index. It shows the data that was pushed:

index=summary_index_1d "value=Summary_Test"

3. Now I want to calculate the average over the previous 2 or 4 weekdays of data stored in the summary index. I'm using this as a reference: https://community.splunk.com/t5/Splunk-Enterprise/How-to-Build-Average-of-Last-4-Monday-Current-day-vs-Today-in-a/m-p/657868/highlight/true#M17385

I am trying to perform avg on the values stored in the summary index.
But this fails:

index=summary_index_1d "value=Summary_Test" app_name=abc HTTP_STATUS_CODE=2xx | eval current_day = strftime(now(), "%A") | eval log_day = strftime(_time, "%A") | eval hour=strftime(_time, "%H") | eval day=strftime(_time, "%d") | eval dayOfWeek = strftime(_time, "%u") | where dayOfWeek >= 1 AND dayOfWeek <= 5 | stats count as value by hour log_day day | sort log_day, hour | stats avg(value) as average by log_day,hour

I guess the "hour" in the query is creating a conflict. I tried without it, and also by changing the values, but it doesn't return the expected result. When the same query is used on the main index, it works perfectly for my requirement, but when used on the summary index it's not able to calculate the average.

This works fine for the requirement, but fails when applied to the summary index:

index=index app_name=abc | eval HTTP_STATUS_CODE=case(like(status, "2__"),"2xx") | eval current_day = strftime(now(), "%A") | eval log_day = strftime(_time, "%A") | eval hour=strftime(_time, "%H") | eval day=strftime(_time, "%d") | eval dayOfWeek = strftime(_time, "%u") | where dayOfWeek >= 1 AND dayOfWeek <= 5 | stats count as value by hour log_day day | sort log_day, hour | stats avg(value) as average by log_day,hour

Can you please help me understand what's wrong with the query used on the summary index? @ITWhisperer @yuanliu @smurf
Hello @isoutamo, The data in the lookup file is in the form below; I need to read each row and compute the results:

index sourcetype
-----------------------
idx1 s1
idx2 s2
idx3 s3
idx1 s4

Now I need to compute and display the results for each row by running the predict command on each of them. This is the base query I have built for running the predict command, which fetches the forecast values for one row:

index=custom_index orig_index=idx1 orig_sourcetype=s1 earliest=-4w@w latest=-2d@d | timechart span=1d avg(event_count) AS avg_event_count | predict avg_event_count | tail 1 | fields prediction(avg_event_count)

Please let me know if you need any more details from my end. I hope to get your input on solving the problem. Thank you
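In SPL, this kind of row-driven search is typically done with the map command, which substitutes each lookup row's fields into a search template. Purely to illustrate the substitution involved, here is a Python sketch (the lookup rows shown are the ones from the post; the templating itself is just an analogy, not how Splunk runs it):

```python
# Hypothetical lookup rows: one search would be generated per row
lookup_rows = [
    {"index": "idx1", "sourcetype": "s1"},
    {"index": "idx2", "sourcetype": "s2"},
]

# Template mirroring the base query, with per-row placeholders
template = (
    "index=custom_index orig_index={index} orig_sourcetype={sourcetype} "
    "earliest=-4w@w latest=-2d@d "
    "| timechart span=1d avg(event_count) AS avg_event_count "
    "| predict avg_event_count | tail 1"
)

queries = [template.format(**row) for row in lookup_rows]
for q in queries:
    print(q)
```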
Task failed: Store encrypted MySQL credentials on disk on host: ESS-LT-68Q9FS3 as user: appd-team-9 with message: Command failed with exit code 1 and stdout Checking if db credential is valid... Mysql returned with error code [127]: /home/appd-team-9/appdynamics/platform/product/controller/db/bin/mysql: error while loading shared libraries: libtinfo.so.5: cannot open shared object file: No such file or directory and stderr .
No
As it solved your problem, please accept it as a Solution so others can find it later. Happy Splunking!
Excellent, I see it now. Works perfectly. Thanks!
If you have extracted that whole value into some field (e.g. ldap_query) then use it. If that value is still in _raw then you can leave that field=xxxx part out. See https://docs.splunk.com/Documentation/Splunk/latest/SearchReference/Rex
I just checked: we are using 2 x c5.large for the IHF and also HEC, and there are TA-AWS and TA-gcp running too. Daily ingestion is something like 150GB. You should remember that if this is your first full Splunk instance after the UF, then you must add those props and transforms there, not on the indexers, to take them into use!
Thanks for the info. When you say "your existing field", do you mean the actual field that contains that format? Also, is there a way to save the result so I could run a stats to show the output with only the cn value?
Thanks for that. We are currently running c5n.9xl hosts (which are enormous). Using those certainly made a difference, but looking at their logs in the AWS Console they are clearly under-utilised. I guess we are going to have to start experimenting with fleets of more, but smaller, hosts to see how things go. It's a pity Splunk doesn't have a recommended machine size if literally all you are doing is forwarding: we need to run the HTTP collector and the AWS Add-on to pull some S3 info, but even those are basically just acquiring and forwarding. No explicit indexing, no props and transforms, etc.
Hi, you could use this: ... | rex field=<your existing field> "cn=(?<cn>[^,]+)" r. Ismo PS. regex101.com is an excellent place to test these!
Hey everyone, I have this format - cn=<name>,ou=<>,ou=people,dc=<>,dc=<>,dc=<> - that I'm pulling, and I need to use only the cn= field. How can I do it with the regex command? Is that possible? Thanks!!
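The rex pattern suggested above can be checked outside Splunk too. A Python sketch with a made-up DN value:

```python
import re

dn = "cn=jsmith,ou=eng,ou=people,dc=example,dc=com"

# Same idea as | rex "cn=(?<cn>[^,]+)": capture everything after
# cn= up to (but not including) the first comma
m = re.search(r"cn=(?P<cn>[^,]+)", dn)
print(m.group("cn"))  # jsmith
```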
Try something like this | eval range=coalesce(range, id)
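For reference, coalesce returns the first of its arguments that is not null, so the eval above fills range from id only when range is missing. A Python sketch of the same behaviour (the field values are made up):

```python
def coalesce(*values):
    """Return the first value that is not None, mirroring SPL's coalesce()."""
    for v in values:
        if v is not None:
            return v
    return None

print(coalesce(None, "id-42"))     # range is null, so id is used
print(coalesce("0-100", "id-42"))  # range is present, so it wins
```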