All Posts

Thank you @Richfez, that worked for me. I really appreciate your quick response and love this community; it always gives me answers.
This one is a bit tricky, but the below should get you started. Splunk is going into auto mode to determine how it thinks the log should be split into events, because this log is not like a normal log with a date and a line of information (logs come in all shapes and sizes, and you normally want well-formatted logs). You will have to create custom props.conf and transforms.conf files. Create the props and transforms below for the sourcetype; this should at least get you started, and you will have to make tweaks. It looks like you have redacted some of the lines with XXX..., so you may need to tweak the regex in transforms.conf, as those lines look like extra header-type information that you don't want. The main thing with this kind of log is that it is multi-line, so we need to merge it.

props.conf

[jlogs]
TIME_PREFIX = Job\sCompleted\sat:
TIME_FORMAT = %d/%m/%Y %H:%M:%S
BREAK_ONLY_BEFORE = Job\sCompleted\sat:
MUST_BREAK_AFTER = local\stime([\r\n]+)
MAX_TIMESTAMP_LOOKAHEAD = 25
SHOULD_LINEMERGE = true
TRUNCATE = 5000
NO_BINARY_CHECK = 1
KV_MODE = auto
# Remove unwanted headers or data
TRANSFORMS-null = remove_unwanted_data_from_jlog

transforms.conf

[remove_unwanted_data_from_jlog]
REGEX = ^(?:X*|-+)\s
DEST_KEY = queue
FORMAT = nullQueue

There's a whole load of settings to help you understand this config: https://docs.splunk.com/Documentation/SplunkCloud/latest/Data/Configureeventlinebreaking
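Once these files are in place, it can help to confirm which settings Splunk actually resolves for the sourcetype and transform. A quick check with btool (a sketch, assuming a standard $SPLUNK_HOME install):

$SPLUNK_HOME/bin/splunk btool props list jlogs --debug
$SPLUNK_HOME/bin/splunk btool transforms list remove_unwanted_data_from_jlog --debug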
Thanks for your query! I have applied the logic along with the query, and it is working as expected. Please let me know the earliest and latest logic for 12:00 AM to 11:59 PM.
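For reference, 12:00 AM to 11:59 PM is effectively one full day, so snapping to day boundaries covers it. A minimal sketch using relative time modifiers (assuming you want the current day; shift the offsets for other days):

earliest=@d latest=+1d@d

For yesterday, the equivalent would be: earliest=-1d@d latest=@d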
Based on the priority field and the tracePoint field I am deriving the status field. If priority is ERROR and tracePoint is EXCEPTION, then I set the status per the keyword. But in some cases it is showing both ERROR and SUCCESS.

Message | priority | tracePoint
After Common SFTP Get File List Response | INFO | AFTER_REQUEST
After Common SFTP Get File List Response | INFO | AFTER_REQUEST
Before Common SFTP Get File Data Request | INFO | BEFORE_REQUEST
Before Common SFTP Get File List Request | INFO | BEFORE_REQUEST
Before Common SFTP Archive File Request | INFO | BEFORE_REQUEST
File Upload Request for BEFORE_REQUEST | INFO | BEFORE_REQUEST
File Upload to in SFTP mode. >>> END | INFO | END
END File Upload Request for f | ERROR | EXCEPTION
Error while trying to upload file to GCP from Common SFTP | ERROR | EXCEPTION
DEV(ERROR): Error while processing System request | INFO | BEFORE_REQUEST
The series have been timewrap'ed so that they line up on the chart, and that alignment is done by using the x-axis values, so every overlaid week shares the same x-axis timestamps. You can't have multiple x-axes (unlike the y-axis, where an overlay series can have a different axis).
Please provide some anonymised representative events which demonstrate the issue you are facing, what results you are getting, and your expected results.
Hi, all. So, I'm using a timechart visualization (line graph) to display the number of events, by hour, over six weeks, and using timewrap to overlay the weeks on top of each other, then showing the last two weeks along with a six-week average in order to be able to spot anomalies at a glance. The problem I'm having is that if I mouse over a data point from the current week it shows the appropriate date, but it still shows the same date if I mouse over the previous week's data point, too, or the week before that. For example, if I mouse over 12:00 on Wednesday for "latest_week," the tooltip will show "May 8th, 2024 12:00 PM." If I mouse over 12:00 on Wednesday for "1week_before," the tooltip still shows "May 8th, 2024 12:00 PM." Is there any way to get the tooltip to show the proper date on the mouse-over? I know that's not going to work on the six-week average, but it'd be nice with the current and previous weeks. It's a minor inconvenience, granted, but this is going into a dashboard for not-so-tech-savvy customers, and if I don't have to make them do math in their heads we'll all be a lot better off. Here's my query, in case it'll help (and feel free to direct me toward something more efficient if I'm doing something stupid; you aren't going to hurt my feelings any):

| tstats count where <my_index> <data_field1> <data_field2> by _time span=1h prestats=t
| timechart span=1h count by <data_field2>
| rename <data_field2> as tot
| timewrap 1w
| addtotals
| eval avg=round((Total/6),0)
| table _time tot_1week_before tot_latest_week avg
| rename avg as "6 Week Average" tot_latest_week as "Current Week" tot_1week_before as "Previous Week"
I would not change the code, since I know I would have to maintain any future updates to that file myself and it might break how other reports display in a PDF. I would also check out the "betterpdf" app on Splunkbase (https://splunkbase.splunk.com/app/7171).
Some of the transactionIds show ERROR, but some of them are showing as SUCCESS. Even when priority=ERROR and the exception indicates an error, the status is SUCCESS. I don't know why.
This is because the transaction ids have events with both sorts of status. If you just want the latest, you could try something like this:

| stats latest(Status) as Status by transactionId
Stopping splunkd is taking up to 6 minutes to complete. We have a process that snapshots the instance, and we stop splunkd prior to taking that snapshot. Previously with v9.0.1 we did not experience this; now we are on v9.2.1. While shutting down I am monitoring splunkd.log, and the only errors I am seeing have to do with the HFs: 'TcpInputProc [65700 tcp] - Waiting for all connections to close before shutting down TcpInputProcessor'. Has anyone else experienced something similar post-upgrade?
That did the trick, thank you again for the excellent help. Have a good week. Thanks, Tom
Hi All, this is the query I am using to try to get the status, but in the table it shows both ERROR and SUCCESS. PFA screenshot.

| eval Status=case(priority="ERROR" AND tracePoint="EXCEPTION" OR message="*Error while processing*","ERROR", priority="WARN","WARN", priority!="ERROR" AND tracePoint!="EXCEPTION" OR message!="*(ERROR):*","SUCCESS")
| stats values(Status) as Status by transactionId
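Two things worth noting about that case(): in eval, AND binds more tightly than OR, and wildcards in a plain = comparison are matched literally rather than as patterns, so match() is needed for the substring tests. A minimal sketch with explicit parentheses, assuming the intent is "ERROR when (priority ERROR and tracePoint EXCEPTION) or the message mentions the error text, otherwise WARN or SUCCESS" (an assumption you would adapt):

| eval Status=case(
    (priority="ERROR" AND tracePoint="EXCEPTION") OR match(message, "Error while processing"), "ERROR",
    priority="WARN", "WARN",
    true(), "SUCCESS")
| stats values(Status) as Status by transactionId

Even then, values(Status) will still show both values when a transaction has a mix of error and success events; latest(Status), as suggested above, collapses that to one.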
How can I resolve this error: "Couldn't complete HTTP request: error:14077410:SSL routines:SSL23_GET_SERVER_HELLO:sslv3 alert handshake failure"? I keep getting this error on the Splunk forwarder when I run $SPLUNK_HOME/bin/splunk list monitor or $SPLUNK_HOME/bin/splunk list inputstatus.
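This handshake failure generally means the CLI and splunkd's management port disagree on SSL/TLS settings. One thing worth checking, as a sketch rather than a definitive fix, is the [sslConfig] stanza in server.conf on the forwarder (the values below are illustrative assumptions, not required settings):

[sslConfig]
sslVersions = tls1.2
sslVersionsForClient = tls1.2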
I am trying to compute the R-squared value of a set of measured values, to verify the performance or accuracy of a predictive model. But I can't figure out how to go about this, or whether Splunk has a function or command for it. Thanks
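R-squared can be computed from its definition, R² = 1 - SS_res/SS_tot, with ordinary stats commands. A minimal SPL sketch, assuming your events carry fields named actual and predicted (both hypothetical names for the measured and model values):

| eventstats avg(actual) as mean_actual
| eval resid_sq=pow(actual-predicted,2), total_sq=pow(actual-mean_actual,2)
| stats sum(resid_sq) as ss_res, sum(total_sq) as ss_tot
| eval r_squared=round(1-(ss_res/ss_tot),4)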
I have a dashboard that I use when checking if a server is compliant. It looks normal in the dashboard, but when I export it as a PDF the last column gets moved to a new page. I found this in ./etc/system/bin/pdfgen_endpoint.py:

DEFAULT_PAPER_ORIENTATION = 'portrait'

What I can't find is a way of overriding the default to change it to landscape. Does such a file exist? If not, beyond changing the code, any ideas on how to get a landscape report so the final column will be on the same page? TIA Joe
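If the PDF comes from a scheduled delivery (the email alert action) rather than the ad-hoc export, orientation can be set in alert_actions.conf; a sketch, assuming the override lives in $SPLUNK_HOME/etc/system/local (this governs scheduled PDF delivery and may not affect the dashboard's Export to PDF button):

[email]
reportPaperOrientation = landscape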
For "uptime", just subtract your downtime from 100%. Something like this at the end: | eval percent_uptime = 100 - percent_downtime Hope that works too!
I encountered a similar issue. My scenario involved comparing two alerts and wanting to write the results of the test alert to an index while maintaining the same configurations (like throttling) for both. Using collect wouldn't work, because it was writing duplicate entries to the index due to the alert configuration. I managed to achieve this by directing all the results to:

| tojson output_field="foo"

Then in the event field you can just enter: $result.foo$
Hi @adrifesa95, it shouldn't be a problem: on your HF, you can receive logs on port 9997 and send logs to Splunk Cloud on port 9997. Check whether you can reach the HF from the UFs (using e.g. telnet). Ciao. Giuseppe
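For completeness, a quick connectivity check from a UF host, assuming the HF listens on 9997 (hf.example.com is a placeholder hostname):

telnet hf.example.com 9997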
With the information from both and some research, I found the answer that I was looking for:

| stats values(host) as host
| eval host="(".mvjoin(host,",").")"
| nomv host
| eval description=host." host have failed"

The results gave me what I was looking for: (host1,host2,host3....) host have failed. The stats command made host a multivalue field, mvjoin inserted the commas between values, and nomv took away the multivalue and made it a normal field. Thanks for the ideas. I appreciate the time from your busy schedules.