All Posts


How do you know it is 5 hours? Is it always about 5 hours from all hosts? Or for all sourcetypes? Can you isolate a common attribute for all the events which are "delayed"? Which time zone or zones are you operating in?
Thank you for your assistance.
1) Since I am using DS, do you think it's doable to just display the two numbers in two separate "single value" boxes?
2) Is this the alternative solution? Can you please help translate it to the current case (plus percentile_Inc)? https://community.splunk.com/t5/Splunk-Search/Is-there-a-way-to-calculate-the-percentile-of-a-value-within-a/m-p/269874
| stats count by value
| sort + value
| streamstats current=f sum(count) as rank
| fillnull rank
| eventstats sum(count) as total
| eval percentile_rank = rank / total * 100
3) Can I use perc<percentage>(<value>) or upperperc(<value>,<percentile>) to solve this? https://docs.splunk.com/Documentation/SCS/current/SearchReference/Aggregatefunctions
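For option 3, a minimal sketch of the built-in percentile aggregations (assuming the measure lives in a field named value, as in the linked thread):

```spl
| stats perc95(value) as p95, upperperc95(value) as upper_p95, exactperc95(value) as exact_p95
```

Note these return the value at a given percentile of the distribution, not the percentile rank of a specific value; for the rank itself you would still need a streamstats/eventstats approach like the one quoted above.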
Hi, We have enabled all the default JMX metric collection in the configuration (Kafka, Tomcat, WebLogic, PMI, Cassandra, etc.), but only very limited metrics are available under the Metric Browser. Only JVM --> classes, garbage collection, memory, threads are visible; none of the above appear. Why is that? We are mostly interested in the Tomcat-related JMX metrics. Your inputs are much appreciated. Thanks, Viji
@Cansel.OZCAN  Do you have any comments on my previous message?
We see a delay of over five hours in indexing. Is there a way to find out where these events "got stuck"? Also, please suggest a query to measure how long the logs are delayed.
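A common sketch for measuring indexing lag compares each event's index time with its extracted timestamp (assuming timestamps are parsed correctly; adjust the index filter and time range to your environment):

```spl
index=* earliest=-4h
| eval lag_seconds = _indextime - _time
| stats avg(lag_seconds) as avg_lag, max(lag_seconds) as max_lag by host, sourcetype
| sort - max_lag
```

A roughly constant lag of about five hours across all hosts often points at a time zone misconfiguration in timestamp parsing rather than a delivery problem.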
I have an index that provides a Date and a row count to populate a line chart on a dashboard using DBConnect. The data looks like this:

Date        Submissions
2023-11-13  7
2023-11-14  35
2023-11-15  19

When the line chart displays the data, the dates show up like this: 2023-11-12T19:00:00-05:00, 2023-11-13T19:00:00-05:00, 2023-11-14T19:00:00-05:00. Is there some setting/configuration that needs to be updated?
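The -05:00 offset suggests the date strings are being interpreted as midnight in one time zone and rendered in another. One hedged sketch of a common workaround is to parse the Date field into _time explicitly before charting (field names taken from the post above):

```spl
| eval _time = strptime(Date, "%Y-%m-%d")
| timechart span=1d sum(Submissions) as Submissions
```

Whether this applies depends on how the DBConnect input maps the Date column; treat it as a starting point rather than a confirmed fix.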
How are you measuring lag/delay?
Trying to get our CrowdStrike FDR set up with the Splunk TA. Tried resetting the CrowdStrike FDR API twice, with the same error: error response received from server: unexpected error <class splunklib.reset_handler.error.resterror> from python handler: rest error [400]: bad request -- an error occurred (accessdenied) when calling the listbuckets operation: access denied. see splunkd.log/python.log for more details. Any thoughts?
Hi, I need to add a filter to the error query inside the total-transaction query, so that I get the filtered error counts as well as the total transactions in two columns by service name.

This is the query I am using to get total transactions and total errors:

index="iss" Environment=PROD
| where Appid IN ("APP-61", "APP-85", "APP-69", "APP-41", "APP-57", "APP-71", "APP-50", "APP-87")
| rex field=_raw " (?<service_name>\w+)-prod"
| eval err_flag = if(level="ERROR", 1, 0)
| eval success_flag = if(level!="ERROR", 1, 0)
| stats sum(err_flag) as Total_Errors, sum(success_flag) as Total_Successes by service_name
| eval Total_Transaction = (Total_Successes + Total_Errors)
| fields service_name, Total_Transaction, Total_Errors, Total_Successes

I need to add a search filter to the errors so that only the filtered errors are counted, not all errors, by merging this query below into the err_flag line above:

index="iss" Environment=PROD "Invalid JS format" OR ":[down and unable to retrieve response" OR "[Unexpected error occurred" OR ": [An unknown error has occurred" OR "exception" OR "IN THE SERVICE" OR "emplateErrorHandler : handleError :" OR "j.SocketException: Connection reset]" OR "Power Error Code" OR "[Couldn't kickstart handshaking]" OR "[Remote host terminated the handshake]" OR "Caused by:[JNObject" OR "processor during S call" OR javx OR "Error while calling" OR level="ERROR" NOT "NOT MATCH THE CTRACT" NOT "prea_too_large" NOT g-500 NOT G-400 NOT "re-submit the request" NOT "yuu is null" NOT "igests data" NOT "characters" NOT "Asset type" NOT "Inputs U" NOT "[null" NOT "Invalid gii"

Please help me, it would be wonderful. Thank you
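One hedged sketch of how to fold such a filter into the err_flag line is searchmatch(), which evaluates a search string against each event. The term list here is abbreviated for readability; the full filter from the post would go inside the quotes, with embedded double quotes escaped:

```spl
index="iss" Environment=PROD
| where Appid IN ("APP-61", "APP-85")
| rex field=_raw " (?<service_name>\w+)-prod"
| eval err_flag = if(level="ERROR" AND searchmatch("exception OR \"Error while calling\" NOT \"re-submit the request\""), 1, 0)
| eval success_flag = if(err_flag=0, 1, 0)
| stats sum(err_flag) as Total_Errors, sum(success_flag) as Total_Successes by service_name
| eval Total_Transaction = Total_Successes + Total_Errors
```

Note this changes the meaning of Total_Successes slightly: ERROR-level events that do not match the filter now count as successes, so you may prefer a third bucket for unfiltered errors.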
Can you please suggest a query to pull the indexing lag/delay for all indexes and sourcetypes over the last 30 days?
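An event-level sketch (assuming timestamps are parsed correctly; over 30 days this can be expensive, so consider restricting it to indexes of interest rather than index=*):

```spl
index=* earliest=-30d@d
| eval lag_seconds = _indextime - _time
| bin _time span=1d
| stats avg(lag_seconds) as avg_lag, max(lag_seconds) as max_lag by _time, index, sourcetype
```

The lag is the difference between when an event was indexed (_indextime) and the timestamp extracted from the event itself (_time).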
Hi. I am a new Splunk user with a question: when Splunk is ingesting data, we get a monitoring system warning about 10% FS availability. Then the FS space returns to a value > 10% availability. Is there a file/location where temporary data is written while ingestion is happening? Thanks
1. UTF-8 includes the normal ASCII range. I don't think that's what you meant by "remove UTF-8 characters"; UTF-8 is just an encoding.
2. What you're presenting are so-called ANSI escape sequences.
3. Are you sure they are literally in your logs, or have they been rendered and filtered already?
4. Ugh. Where are you getting those events from? It looks like something is capturing terminal output instead of sending the events as such. (BTW, you could try setting some dumb terminal type before starting your process so the service doesn't produce such ugly codes.)
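If the escape sequences really are in the raw events, a hedged sketch for stripping the SGR (color) codes at search time could be:

```spl
... | rex mode=sed field=_raw "s/\x1b\[[0-9;]*m//g"
```

The same sed expression could be applied at index time via a SEDCMD in props.conf for the relevant sourcetype. This pattern only covers color sequences; other escape sequences (cursor movement, etc.) would need additional patterns.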
I think that would depend on how the syslog data is received, but I believe it's still possible.
I have Admin rights, and when I click on any tag permission (Settings --> tags), I get the following error: "The requested URL was rejected. Please consult with your administrator." Any idea why this is happening?
Fantastic, many thanks. This is nice and simple, and with the output otherwise unchanged I now get the latest date at the top of the table. Thank you.
Super, this works, although the column is shifting. But thank you, I now have a solution.
I don't think you can access a bucket without having any accounts (and subsequently being given access to that bucket). But I might be wrong, I'm not an AWS expert.
Not sure if this will work, because the Add-on requires us to have an AWS account. We don't have or manage any AWS accounts.
Hi @ravir_jbp... For the data already logged into Splunk, do you want to use a Splunk search query and get some results (and maybe create a dashboard/alert/report)? Or do you want to onboard/ingest some CSV files, but the field extraction is not working as expected? Please advise, thanks.
It would be best if you provided us with some mockup data and the expected result.

Selecting based on values from the lookup does indeed require a subsearch, similarly to what you already did (but you don't need to specify append=t with a simple inputlookup; you only need it if you use that command later in the pipeline to append the results from the lookup to the earlier results).

Again - you can't use two separate aggregations in a single timechart command. So you can't do, for example:

timechart span=1h sum(A) avg(A)

You need to do two separate timechart commands. Or - as I said, do

| bin _time span=1h
| stats sum(A) as sum avg(A) as avg by _time

If you want to combine the two separate result sets afterwards into a single time-based table, you'd need to do something like

| stats values(sum) as sum values(avg) as avg by _time

It gets tricky if you try to split that by an additional field. Depending on your desired outcome you might want to either dynamically create fields or use some xyseries/untable tricks.
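For the tricky split-by case, one hedged sketch of the xyseries trick (assuming a hypothetical split field named group) is to compute both aggregations with stats, combine them into one value, and then pivot:

```spl
| bin _time span=1h
| stats sum(A) as sum, avg(A) as avg by _time, group
| eval sum_avg = sum . " / " . avg
| xyseries _time group sum_avg
```

This yields one column per group value, each cell holding both numbers; splitting back into separate sum and avg columns per group would require the dynamic field creation mentioned above.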