All Posts

Main question: is Splunk APM available on-premises, or is it only available in the cloud version?
Hi @t_splunk_d, let me understand: you have each row in a different event and you're sure that the events are in this sequence. I suppose that you have already extracted the Process and Person_Name fields; in this case you could run something like this:

<your_search>
| transaction startswith="Start Processing" maxevents=2
| table Process Person_Name

Ciao. Giuseppe
Hi @indeed_2000, exactly the same, because the field extraction is performed at search time. Ciao. Giuseppe
Hi @indeed_2000, to my knowledge Application Performance Monitoring is a premium app that you can download from Splunkbase if you buy a license. If you want to see it, you can have a free trial on the page you shared, I suppose on Splunk Cloud. Ciao. Giuseppe
@gcusello How about performance?
Hi @indeed_2000, if you extract a field using the rex command, you have this extraction only in that search. If instead you define the field extraction (even with the same regex) in a conf file (that means saving the regex as a field extraction), you can use it in all searches (subject to the permissions of the knowledge object). Ciao. Giuseppe
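As a sketch of the conf-file option described above, an inline rex can be saved as a search-time extraction in props.conf. The stanza name and regex below are placeholders for illustration, not taken from this thread:

```
# props.conf -- hypothetical sourcetype and regex, shown only as an example
[my_sourcetype]
EXTRACT-person = Person\sName:\s(?<Person_Name>\w+)
```

Once deployed and shared with appropriate permissions, Person_Name is available in any search over that sourcetype without repeating the rex; since both mechanisms run at search time, performance should be comparable.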
Hi, what is the difference between extracting fields in a query with rex versus in a config file? Pros and cons? How about performance?   Thanks,
Splunk does not have a feature to link reports so that one will not start before another completes.  The best we can do is schedule them such that the first report is sure to be finished before the next starts. Go to https://ideas.splunk.com to make a case for linking reports.
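A minimal sketch of the staggering approach in savedsearches.conf, with hypothetical report names and an assumed worst-case runtime under 30 minutes for the first report:

```
# savedsearches.conf -- placeholder names; the 30-minute offset is an
# assumption, tune it to your reports' real runtimes
[first_report]
cron_schedule = 0 * * * *

[second_report]
cron_schedule = 30 * * * *
```

The offset only works if the first report reliably finishes within the gap; there is no hard ordering guarantee.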
@gcusello Currently I have Splunk Enterprise installed; now I need to install the APM product or app on it. Here is the link: https://www.splunk.com/en_us/products/apm-application-performance-monitoring.html
Hi @sekhar463, usually JSON events are a single event; if you want to separate them, you have to define the LINE_BREAKER, TIME_FORMAT, and TIME_PREFIX for your sourcetype:

[your_sourcetype]
LINE_BREAKER = \{
TIME_FORMAT = %m/%d/%Y %I:%M:%S %p
TIME_PREFIX = \"WhenCreated\": \"

Ciao. Giuseppe
Maybe something like this?

| multisearch
    [ | search index=messages* MSG_src="AAAAA" MSG_DOMAIN="BBBBBB" MSG_TYPE="CC *"
      | rename MSGID as MSGID1 ]
    [ | search index=messages* MSG_src="CCCCCC", MSG_DOMAIN="DDDDDDD", MSG_TYPE="Workflow Start"
      | rex field=_raw "<pmt>(?<pmt>.*)<\/pmt>"
      | rex field=_raw "<EventId>(?<MSGID1>.*)<\/EventId>"
      | search pmt="EEEEEEE" ]
| stats
    ``` first occurrence timestamp of msg_id in search_1 ```
    earliest(eval(case(match(MSG_TYPE, "^C{2}\s+"), _time))) as first_event_epoch,
    ``` first occurrence timestamp of msg_id in search_2 ```
    earliest(eval(case('MSG_TYPE'=="Workflow Start", _time))) as second_event_epoch
    by MSGID1
``` calculate the time difference between the msg_id showing up in each search ```
| eval diff_seconds=if(
    ``` if the msg_id didn't show up in the second search but did show up in the first ```
    isnull(second_event_epoch) AND isnotnull(first_event_epoch),
    ``` calculate how long ago from now the msg_id was seen in search_1 ```
    now()-'first_event_epoch',
    ``` msg_id exists in both searches, calculate the time difference between them in seconds ```
    'second_event_epoch'-'first_event_epoch'
    ),
    ``` convert time difference to hours ```
    diff_hours='diff_seconds'/(60*60),
    ``` human readable format ```
    duration_seconds=tostring(diff_seconds, "duration")
``` filter off everything that has less than a 1 hour difference ```
| where 'diff_hours'>1
That setting probably has not changed over the years since 6.6.0.  The docs don't specify a maximum value so it may be limited only by the amount of memory available on your server.
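For reference, the setting usually being discussed for forwarder output buffering is maxQueueSize in outputs.conf; the value below is only an example, not a recommendation:

```
# outputs.conf on the forwarder -- example value only; size it to the
# memory actually available on the host
[tcpout]
maxQueueSize = 512MB
```

A larger in-memory queue buys more tolerance for indexer outages at the cost of forwarder memory; disk-backed persistent queues are a separate mechanism if memory alone is not enough.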
I want to get the result of the next line of the log message when I encounter a key word. Example log:

----error in checking status--------
----Person Name: abcd, Status=active---------
-----Check for Status------
------success : true--------
-----Start Processing XXX----------
----Person Name: abcd, Status=active---------
-----Check for Status------
------success : true--------
-----Start Processing XXX----------
----Person Name: abcd, address:yzgj---------
-----Check for Person------
------success : true--------
-----Start Processing XXX----------

In the above log I want to capture the person name after the "Check for Person" line. The log is indexed by _time. I want to display the following result:

_time    Process    Person Name
         XXX        abcd

I don't want to use map or transaction as those are expensive and there are a lot of events. Thank you for the help.
It sounds like your timestamps "created" and "last_login" have the format "%Y-%m-%d" in the events. Trying to convert them to epoch using a different format will not work. If you have a situation where your events have these fields in a mixture of both formats, maybe you could adjust your eval to be something more like this:

| eval dormancy=if(
    last_login="(never)",
    round((now()-case(
        match(created, "^\d{4}\-\d{2}\-\d{2}"), strptime(created,"%Y-%m-%d"),
        match(created, "^\d{4}\/\d{2}\/\d{2}"), strptime(created,"%Y/%m/%d")))/86400),
    round((now()-case(
        match(last_login, "^\d{4}\-\d{2}\-\d{2}"), strptime(last_login,"%Y-%m-%d"),
        match(last_login, "^\d{4}\/\d{2}\/\d{2}"), strptime(last_login,"%Y/%m/%d")))/86400)
)

This seems to extract both formats properly.
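The underlying behavior can be checked with a quick run-anywhere search; strptime() returns null when the format string does not match the data (the field value here is made up):

```
| makeresults
| eval created="2024-05-01"
| eval ok=strptime(created, "%Y-%m-%d")
| eval bad=strptime(created, "%Y/%m/%d")
```

ok gets an epoch value while bad stays null, which is why swapping "-" for "/" in the format string makes the dormancy calculation stop working.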
Hi communities, I am doing a calculation with the eval command.

| eval dormancy=if(last_login="(never)",round((now()-strptime(created,"%Y-%m-%d"))/86400),round((now()-strptime(last_login,"%Y-%m-%d"))/86400))

The above calculates the dormancy number correctly, but as soon as I change the code to the following:

| eval dormancy=if(last_login="(never)",round((now()-strptime(created,"%Y/%m/%d"))/86400),round((now()-strptime(last_login,"%Y/%m/%d"))/86400))

changing "-" to "/", strptime doesn't calculate the dormancy days. Is this a limit of strptime or am I doing something wrong?
The records are linked via an ID. In the first search it's MSGID; in the second search it's extracted with | rex field=_raw "<EventId>(?<MSGID1>.*)</EventId>"
I'm migrating my Splunk instance from an outdated OS. I want to increase the buffer size for my Splunk forwarder so that it can hold all the logs when the receiver/indexer is down. We are using Splunk version 6.6.0, and I'm unable to find relevant documentation describing the configuration file changes.
Hello Members and richgallowy,   Thanks for the tip. It has been a while since I have needed to apply my limited "Splunk" skills. I appreciate this suggestion and will try it out.   Regards, EWHolz
@letsgopats39, @PhoebeOh - Unfortunately, there is no option for this right now with Dashboard Studio. There is an option with a Simple XML dashboard called "hideSplunkBar". For Dashboard Studio, an idea has been submitted to add this in the future - https://ideas.splunk.com/ideas/EID-I-1063 (the good news is that the feature is "In Development").   I hope this helps! Kindly upvote and accept the answer if it is helpful!
I was thinking something like this would work, but it's probably not the best way?

index=messages* earliest=-2h MSG_src="AAAAA" MSG_DOMAIN="BBBBBB" MSG_TYPE="CC *"
| rename MSGID AS MSGID1
| append [search index=messages* MSG_src="CCCCCC", MSG_DOMAIN="DDDDDDD", MSG_TYPE="Workflow Start"
    | rex field=_raw "<pmt>(?<pmt>.*)</pmt>"
    | rex field=_raw "<EventId>(?<MSGID1>.*)</EventId>"
    | search pmt=EEEEEEE]
| stats count by MSGID1
| search count<2

The problem I see in testing is that this triggers on new IDs that have come in but are still within the hour timeframe.