All Posts


Thanks for the response. It does show info, but it seems to look for all errors, not just 10001 and 69, and it does not seem to respect that it should only show when the percentage is greater than 10. Regards
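For reference, restricting a search to specific codes and applying a percentage threshold usually follows the pattern sketched below; the field names error_code and percentage are assumptions, so adjust them to the actual data:

index=your_index error_code IN (10001, 69)
| stats count by error_code
| eventstats sum(count) as total
| eval percentage=round(count/total*100, 2)
| where percentage > 10

The IN clause keeps only the two codes of interest, and the final where drops any row at or below the 10% threshold.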
Running 9.2 and getting the same error
Ok. So I'd approach this from a different angle. Let's start with some initial search:

index=data

Then for each user we find their first-ever occurrence:

| stats min(_time) as _time by user

After this we have a list of first logins spread across time. So now all we need is to count those logins across each day:

| timechart span=1d count

And that's it. If you also wanted a list of those users for each day, instead of doing the timechart you should rather group the users by day manually:

| bin _time span=1d

So now you can aggregate the values over time:

| stats count as "Overall number of logins" values(user) as Users by _time
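Putting those pieces together, a minimal end-to-end sketch (assuming an index named data with a user field, as above) would be:

index=data
| stats min(_time) as _time by user
| bin _time span=1d
| stats count as "Overall number of logins" values(user) as Users by _time

If you only need the daily counts and not the user lists, replace the last two lines with | timechart span=1d count.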
@danroberts Contrary to the popular saying, here a snippet of (properly formatted) text is often worth a thousand pictures. A data sample in text form is definitely easier to deal with than a screenshot. @deepakc Your general idea is relatively OK, but it's best to avoid line-merging whenever possible (it's relatively "heavy" performance-wise). So instead of enabling line merging it would be better to find some static part which can always be matched as the event boundary. Also, the TRUNCATE setting might be too low. So the question to @danroberts is where exactly the event starts/ends and how "flexible" the format is, especially regarding the timestamp position. Also remember that any additional "cleaning" (removing the lines of dashes, which might or might not be desirable; in some cases we want to preserve the event in its original form for compliance reasons, regardless of extra license usage) happens after line breaking and timestamp recognition. Edit: oh, and KV_MODE should rather not be set to auto (even if the data is kv-parseable, it should be set statically to something instead of auto; as a rule of thumb you should not make Splunk guess).
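To illustrate the no-line-merging approach, a props.conf sketch might look like the following; the [jlogs] sourcetype and the "Job Completed at:" boundary are taken from the earlier reply, and the exact regex is an assumption to be tuned against real events:

[jlogs]
SHOULD_LINEMERGE = false
# Start a new event before each "Job Completed at:" marker; the capture
# group holds the line break consumed at the boundary.
LINE_BREAKER = ([\r\n]+)Job\sCompleted\sat:
TIME_PREFIX = Job\sCompleted\sat:\s
TIME_FORMAT = %d/%m/%Y %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 25
# Set KV_MODE explicitly instead of auto so Splunk does not guess.
KV_MODE = none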
Hi @nsiva Please try this:

| makeresults
| eval _raw = "123 IP Address is 1.2.3.4"
| rex field=_raw "is\s(?P<ip>.*)"
| table _raw ip

Once the rex is working fine, you can then do "| stats count by ip". Let us know what happens, thanks.
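Against real events this would then look something like the sketch below; the index name is a placeholder, and the tighter regex only matches dotted-quad addresses rather than everything after "is":

index=your_index "IP Address is"
| rex field=_raw "is\s(?P<ip>\d{1,3}(?:\.\d{1,3}){3})"
| stats count by ip

Note that stats count by ip already collapses duplicate addresses while keeping a count per address.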
Depending on your method of collection, please see here: https://docs.splunk.com/Documentation/AddOns/released/AWS/ConfigureInputs Note this portion in case this scenario applies to you: Note: It is a best practice to collect VPC flow logs and CloudWatch logs through Kinesis streams. However, the AWS Kinesis input has the following limitations: Multiple inputs collecting data from a single stream cause duplicate events in the Splunk platform.
Thanks, this has worked perfectly.
I did look at that but couldn't adapt it to my need. Hence, I posted this.
Please see this previous post: https://community.splunk.com/t5/Splunk-Search/How-to-extract-ip-address-using-regex/m-p/379717
My output in Splunk is as below:

<error code #> IP Address is x.y.z.a

I want to extract only the x.y.z.a and its count, ignoring duplicates. Can someone please assist?
Hello, I have created a dashboard; it is public within my group. I want the end users to be able to open the main Splunk link and see all the team's dashboards. We have most of the dashboards linked to the app, but I don't know how to add the one I just created. Added a picture.
Yeah, I was afraid of that. I was hoping someone would have a magic work-around I hadn't thought of, as I do tend to find some winners around here. No worries, thanks for replying.
Thank you @Richfez, that worked for me. I really appreciate your quick response and love this community; it always gives me answers.
This one is a bit tricky, but the below should get you started. Splunk is going into auto mode to determine how it thinks the log should be split into events, as this log is not like normal logs with a date followed by a line of information (logs come in all shapes and sizes, and you normally want well-formatted logs). So you have to create custom props.conf and transforms.conf files. Create the below props and transforms for the sourcetype; this should get you started at least, and you will have to make tweaks. It looks like you have redacted some of the lines with XXX..., so you may need to tweak the regex in transforms for those words, as they look like extra header-type information that you don't want. The main thing with this kind of log is that it is multi-line, so we need to merge it.

props.conf

[jlogs]
TIME_PREFIX = Job\sCompleted\sat:
TIME_FORMAT = %d/%m/%Y %H:%M:%S
BREAK_ONLY_BEFORE = Job\sCompleted\sat:
MUST_BREAK_AFTER = local\stime([\r\n]+)
MAX_TIMESTAMP_LOOKAHEAD = 25
SHOULD_LINEMERGE = true
TRUNCATE = 5000
NO_BINARY_CHECK = 1
KV_MODE = auto
# Remove unwanted headers or data
TRANSFORMS-null = remove_unwanted_data_from_jlog

transforms.conf

[remove_unwanted_data_from_jlog]
REGEX = ^(?:X*|-+)\s
DEST_KEY = queue
FORMAT = nullQueue

There's a whole load of settings to help you understand this config: https://docs.splunk.com/Documentation/SplunkCloud/latest/Data/Configureeventlinebreaking
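Once the files are in place, you can sanity-check what Splunk actually loads for the sourcetype with btool, for example:

$SPLUNK_HOME/bin/splunk btool props list jlogs --debug

The --debug flag shows which file each setting comes from, which helps when several apps define overlapping props.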
Thanks for your query! I have applied the logic along with the query, and it is working as expected. Please let me know the earliest and latest logic for 12:00 AM to 11:59 PM.
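For reference, a full calendar day (12:00 AM through 11:59:59 PM) is usually expressed with snap-to-day time modifiers rather than explicit clock times; for example, for yesterday:

index=your_index earliest=-1d@d latest=@d

and for today, earliest=@d latest=+1d@d. The index name here is a placeholder.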
Based on the priority field and the tracepoint field, I am deriving the status field. If priority is ERROR and tracepoint is EXCEPTION, then I set status as per the keyword. But in some cases it's showing both ERROR and SUCCESS.

Message | priority | tracepoint
After Common SFTP Get File List Response | INFO | AFTER_REQUEST
After Common SFTP Get File List Response | INFO | AFTER_REQUEST
Before Common SFTP Get File Data Request | INFO | BEFORE_REQUEST
Before Common SFTP Get File List Request | INFO | BEFORE_REQUEST
Before Common SFTP Archive File Request | INFO | BEFORE_REQUEST
File Upload Request for BEFORE_REQUEST | INFO | BEFORE_REQUEST
File Upload to in SFTP mode. >>> END | INFO | END
END File Upload Request for f | ERROR | EXCEPTION
Error while trying to upload file to GCP from Common SFTP | ERROR | EXCEPTION
DEV(ERROR): Error while processing System request | INFO | BEFORE_REQUEST
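For what it's worth, this kind of status derivation is typically done with eval case(); the field names below match the table, but the exact keywords mapped to each status are assumptions:

| eval status=case(priority="ERROR" AND tracepoint="EXCEPTION", "ERROR", tracepoint="END", "SUCCESS", true(), "IN_PROGRESS")

Because case() returns the first matching clause per event, a single event can never get both ERROR and SUCCESS; seeing both usually means they come from different events that are aggregated together later in the search.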
The series have been timewrap'ed so that they line up on the chart, which is done by using the x-axis values. You can't have multiple x-axes (unlike the y-axis, where an overlay series can have a different axis).
Please provide some anonymised representative events which demonstrate the issue you are facing, what results you are getting, and your expected results.
Hi, all. So, I'm using a timechart visualization (line graph) to display the number of events, by hour, over six weeks, and using timewrap to overlay the weeks on top of each other, then showing the last two weeks along with a six-week average in order to be able to spot anomalies at a glance. The problem I'm having is that if I mouse over a data point from the current week it shows the appropriate date, but it still shows the same date if I mouse over the previous week's data point, too, or the week before that. For example, if I mouse over 12:00 on Wednesday for "latest_week," the tooltip will show "May 8th, 2024 12:00 PM." If I mouse over 12:00 on Wednesday for "1week_before," the tooltip still shows "May 8th, 2024 12:00 PM." Is there any way to get the tooltip to show the proper date on the mouse-over? I know that's not going to work on the six-week average, but it'd be nice with the current and previous weeks. It's a minor inconvenience, granted, but this is going into a dashboard for not-so-tech-savvy customers, and if I don't have to make them do math in their head we'll all be a lot better off. Here's my query, in case it'll help (and feel free to direct me toward something more efficient if I'm doing something stupid, you aren't going to hurt my feelings any):

| tstats count where <my_index> <data_field1> <data_field2> by _time span=1h prestats=t
| timechart span=1h count by <data_field2>
| rename <data_field2> as tot
| timewrap 1w
| addtotals
| eval avg=round((Total/6),0)
| table _time tot_1week_before tot_latest_week avg
| rename avg as "6 Week Average" tot_latest_week as "Current Week" tot_1week_before as "Previous Week"
I would change the code, knowing that I'd have to maintain any future updates to that file myself and that it might break how other reports display in a PDF. I would also check out the "betterpdf" app on Splunkbase (https://splunkbase.splunk.com/app/7171).