All Posts

Hi, as @richgalloway said, most of those parameters are still in use. Just look at the version 7.0.0 docs to get the nearest documentation. You will find those settings at least in server.conf and inputs.conf. If/when you are talking about intermediate forwarders, you should ensure that you are also using persistent disk queues, not only memory-based queues. Otherwise you will lose events if nodes or services go down before the indexers are back up. r. Ismo
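A persistent queue is configured per input stanza on the intermediate forwarder. As a minimal sketch (the port and sizes below are illustrative, not from the thread):

```ini
# inputs.conf on the intermediate forwarder -- illustrative values
[splunktcp://9997]
# in-memory queue size before events spill to disk
queueSize = 10MB
# a non-zero value enables the persistent (disk-backed) queue
persistentQueueSize = 5GB
```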
Awesome! Glad it's working out so far. Feel free to leave a reply if you run into any issues and we can try to resolve them.
From what I can tell from testing this over the last few hours, this solution works really well. I'm still testing it out and validating accuracy, but so far it's great. I was actually working on adding duration, but you definitely beat me to it. Thanks!
Hi @indeed_2000, I'm not sure, but this app isn't downloadable from Splunkbase, so I suppose it is available only on Splunk Cloud. You should ask your reference Splunk Partner. Ciao. Giuseppe
You may be able to use streamstats, assuming that there is some degree of distribution of _time between each event.

<base_search>
| rex field=_raw "Processing\s+(?<process>[^\-]+)\-"
| rex field=_raw "Person\s+Name\:\s+(?<person_name>[^\,]+)\,"
| sort 0 +_time
| streamstats reset_before="("isnotnull(process)")" values(process) as current_process
| streamstats window=2 first(_raw) as previous_log
| eval checked_person_name=if(match(previous_log, "\-Check\s+for\s+Person\-"), 'person_name', null())
| stats min(_time) as _time by current_process, checked_person_name
| fields + _time, current_process, checked_person_name

The final output should look something like this. The table before the final stats aggregation shows more context around what the streamstats are doing here. Note: for this method to work properly, the _time stamps of each process event shouldn't be exactly the same; there needs to be some sort of step up in time to the next event (even if only milliseconds). This is because we need the events in the correct sequence for the streamstats to work as expected.
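For anyone who wants to see the mechanics outside of Splunk, here is a rough Python sketch of what the two streamstats calls are doing: carry the most recent process value forward, and look one event back before accepting a person name. The event sequence is invented for illustration.

```python
import re

# Events in _time order (a made-up sequence mirroring the question's log)
events = [
    "-----Start  Processing XXX----------",
    "-----Check for Person------",
    "----Person Name: abcd, address:yzgj---------",
]

current_process = None   # like: streamstats reset_before=isnotnull(process) values(process)
previous_log = None      # like: streamstats window=2 first(_raw)
rows = []
for raw in events:
    m = re.search(r"Processing\s+([^\-]+)\-", raw)
    if m:
        # a new process starts: reset the carried value
        current_process = m.group(1).strip()
    p = re.search(r"Person\s+Name:\s+([^,]+),", raw)
    person_name = p.group(1) if p else None
    # like: eval checked_person_name=if(match(previous_log, "-Check for Person-"), ...)
    if person_name and previous_log and "-Check for Person-" in previous_log:
        rows.append((current_process, person_name))
    previous_log = raw

print(rows)  # [('XXX', 'abcd')]
```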
The main question is: is there any on-premises Splunk APM available, or is it available only in the cloud version?
Hi @t_splunk_d, let me understand: you have each row in a different event and you're sure that the events are in this sequence. I suppose that you have already extracted the Process and Person_Name fields; in this case you could run something like this:

<your_search>
| transaction startswith="Start Processing" maxevents=2
| table Process Person_Name

Ciao. Giuseppe
Hi @indeed_2000, exactly the same, because the field extraction is performed at search time. Ciao. Giuseppe
Hi @indeed_2000, to my knowledge, Application Performance Monitoring is a premium app that you can download from Splunkbase if you buy a license. If you want to see it, you can get a free trial on the page you shared, I suppose on Splunk Cloud. Ciao. Giuseppe
@gcusello How about performance?
Hi @indeed_2000, if you extract a field using the rex command, you have that extraction only in that search. If you have a field extraction (even if done with the same regex) in a conf file (that means saving the regex as a field extraction), you can use the extraction in all searches (subject to the permissions of the knowledge object). Ciao. Giuseppe
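For reference, saving the same regex as a search-time extraction would look something like this in props.conf (the sourcetype name, field name, and pattern here are hypothetical examples, not from the thread):

```ini
# props.conf -- hypothetical example of a saved search-time field extraction
[your_sourcetype]
EXTRACT-person_name = Person\s+Name:\s+(?<person_name>[^,]+)
```

With this in place, person_name is available in every search against that sourcetype, without repeating the rex.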
Hi, what is the difference between extracting fields in a query with rex and in a config file? What are the pros and cons? How about performance?   Thanks,
Splunk does not have a feature to link reports so that one will not start before another completes.  The best we can do is schedule them such that the first report is sure to be finished before the next starts. Go to https://ideas.splunk.com to make a case for linking reports.
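As a sketch of that scheduling workaround, you might stagger cron schedules in savedsearches.conf; the report names and timings below are illustrative, and the offset must leave the first report enough headroom to finish.

```ini
# savedsearches.conf -- stagger the schedules so report_one can finish first
[report_one]
cron_schedule = 0 * * * *

[report_two]
cron_schedule = 30 * * * *
```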
@gcusello Currently I have Splunk Enterprise installed; now I need to install the APM product or app on it. Here is the link: https://www.splunk.com/en_us/products/apm-application-performance-monitoring.html
Hi @sekhar463, usually JSON events arrive as a single event; if you want to separate them, you have to define the LINE_BREAKER, TIME_FORMAT, and TIME_PREFIX for your sourcetype (note that LINE_BREAKER needs a capture group for the text to discard between events):

[your_sourcetype]
LINE_BREAKER = ([\r\n]*)\{
TIME_FORMAT = %m/%d/%Y %I:%M:%S %p
TIME_PREFIX = \"WhenCreated\": \"

Ciao. Giuseppe
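That TIME_FORMAT string follows standard strptime conventions, so you can sanity-check it outside Splunk before deploying. For example, in Python (the sample "WhenCreated" value is invented):

```python
from datetime import datetime

# Parse a sample timestamp with the same format string as TIME_FORMAT above
ts = datetime.strptime("6/18/2021 2:30:05 PM", "%m/%d/%Y %I:%M:%S %p")
print(ts.hour)  # 14 -- %I combined with %p yields the 24-hour time
```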
Maybe something like this?

| multisearch
    [ | search index=messages* MSG_src="AAAAA" MSG_DOMAIN="BBBBBB" MSG_TYPE="CC *"
      | rename MSGID as MSGID1 ]
    [ | search index=messages* MSG_src="CCCCCC" MSG_DOMAIN="DDDDDDD" MSG_TYPE="Workflow Start"
      | rex field=_raw "<pmt>(?<pmt>.*)<\/pmt>"
      | rex field=_raw "<EventId>(?<MSGID1>.*)<\/EventId>"
      | search pmt="EEEEEEE" ]
| stats
    ``` first occurrence timestamp of msg_id in search_1 ```
    earliest(eval(case(match(MSG_TYPE, "^C{2}\s+"), _time))) as first_event_epoch,
    ``` first occurrence timestamp of msg_id in search_2 ```
    earliest(eval(case('MSG_TYPE'=="Workflow Start", _time))) as second_event_epoch
    by MSGID1
``` calculate the time difference between the msg_id showing up in each search ```
| eval diff_seconds=if(
    ``` if the msg_id didn't show up in the second search but did show up in the first ```
    isnull(second_event_epoch) AND isnotnull(first_event_epoch),
    ``` calculate how long ago from now the msg_id was seen in search_1 ```
    now()-'first_event_epoch',
    ``` msg_id exists in both searches, calculate the time difference between them in seconds ```
    'second_event_epoch'-'first_event_epoch'
  ),
  ``` convert the time difference to hours ```
  diff_hours='diff_seconds'/(60*60),
  ``` human-readable format ```
  duration_seconds=tostring(diff_seconds, "duration")
``` filter out everything that has less than a 1 hour difference ```
| where 'diff_hours'>1
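The diff_seconds/diff_hours arithmetic can be sketched in plain Python to show the two branches of the if(): fall back to "time since now" when the msg_id never reached the second search. The epoch values below are made up.

```python
import time

def diff_hours(first_event_epoch, second_event_epoch, now=None):
    """Mirror of the SPL eval: use now() when the second event is missing."""
    if now is None:
        now = time.time()
    if second_event_epoch is None and first_event_epoch is not None:
        # msg_id only seen in search_1: how long ago was it seen?
        diff_seconds = now - first_event_epoch
    else:
        # msg_id seen in both searches: difference between the two timestamps
        diff_seconds = second_event_epoch - first_event_epoch
    return diff_seconds / (60 * 60)

print(diff_hours(1000, 1000 + 7200))             # 2.0 -- msg_id in both searches
print(diff_hours(1000, None, now=1000 + 10800))  # 3.0 -- msg_id only in search_1
```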
That setting probably has not changed over the years since 6.6.0. The docs don't specify a maximum value, so it may be limited only by the amount of memory available on your server.
I want to get the result of the next line of the log message when I encounter a keyword. Example log:

----error in checking status--------
----Person Name: abcd, Status=active---------
-----Check for Status------
------success : true--------
-----Start  Processing XXX----------
----Person Name: abcd, Status=active---------
-----Check for Status------
------success : true--------
-----Start  Processing XXX----------
----Person Name: abcd, address:yzgj---------
-----Check for Person------
------success : true--------
-----Start  Processing XXX----------

In the above log I want to capture the person name after the "Check for Person". The log is indexed by _time. I want to display the following result:

_time             Process             Person Name
                  XXX                 abcd

I don't want to use map or transaction, as those are expensive since there are a lot of events. Thank you for the help.
It sounds like your timestamps "created" and "last_login" have the format "%Y-%m-%d" in the events. Trying to convert them to epoch using a different format will not work.

If you have a situation where your events have these fields in a mixture of both formats, maybe you could adjust your eval to be something more like this:

| eval dormancy=if(
    last_login="(never)",
    round((now()-case(
        match(created, "^\d{4}\-\d{2}\-\d{2}"), strptime(created,"%Y-%m-%d"),
        match(created, "^\d{4}\/\d{2}\/\d{2}"), strptime(created,"%Y/%m/%d")))/86400),
    round((now()-case(
        match(last_login, "^\d{4}\-\d{2}\-\d{2}"), strptime(last_login,"%Y-%m-%d"),
        match(last_login, "^\d{4}\/\d{2}\/\d{2}"), strptime(last_login,"%Y/%m/%d")))/86400)
)

This seems to handle both formats properly.
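The case() in that eval is essentially a try-each-format fallback. The same idea in Python, since its strptime uses the same format codes (the field values below are invented):

```python
from datetime import datetime

def parse_date(value):
    """Try each known layout; return None when nothing matches,
    much like SPL strptime yielding null on a format mismatch."""
    for fmt in ("%Y-%m-%d", "%Y/%m/%d"):
        try:
            return datetime.strptime(value, fmt)
        except ValueError:
            continue
    return None

print(parse_date("2021-05-01").date())  # 2021-05-01
print(parse_date("2021/05/01").date())  # 2021-05-01
print(parse_date("(never)"))            # None
```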
Hi communities, I am doing a calculation with the eval command.

| eval dormancy=if(last_login="(never)", round((now()-strptime(created,"%Y-%m-%d"))/86400), round((now()-strptime(last_login,"%Y-%m-%d"))/86400))

The above calculates the dormancy number correctly, but as soon as I change the code to the following:

| eval dormancy=if(last_login="(never)", round((now()-strptime(created,"%Y/%m/%d"))/86400), round((now()-strptime(last_login,"%Y/%m/%d"))/86400))

changing "-" to "/", strptime no longer calculates the dormancy days. Is this a limit of strptime, or am I doing something wrong?