All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Thanks @inventsekar - that solved it!
Hi @Labuser43 , at first: Splunk isn't a database, so you don't need an inner join to extract data like the ones you want. Forget all you know about databases and reset your mind (I did it 13 years ago!). You have to correlate different events, e.g. extract the timestamps of logins and logouts and find the duration of a transaction. So please, see my approach and adapt it to your requirements:

index=allsessions ("*login*" OR "*logout*")
| stats earliest(eval(if(searchmatch("*login*"), _time, null()))) AS earliest latest(eval(if(searchmatch("*logout*"), _time, null()))) AS latest BY SESSION_ID
| eval earliest=strftime(earliest, "%Y-%m-%d %H:%M:%S"), latest=strftime(latest, "%Y-%m-%d %H:%M:%S")

Ciao. Giuseppe
Hi Team, I am trying to integrate Jenkins/CloudBees with Splunk using the Splunk plugin, but I do not want to store the HEC_TOKEN as a plain-text or hard-coded value in the Splunk configuration under Manage Jenkins --> System --> Splunk for Jenkins Configuration. I am trying to store it as a credential or environment variable and then use it in the Jenkinsfile, but that does not work. Is there any workaround for this? Please let me know. Thanks.
Hi @Bracha Pls check this:

| makeresults
| eval end_time="2024-09-24 08:17:13.014337+00:00"
| eval end_timeepoch = strptime(end_time, "%Y-%m-%d %H:%M:%S.%6Q+00:00")
| eval _time = now()
| eval diff = (end_timeepoch-_time)/60
| table end_time end_timeepoch _time diff
I'm trying to calculate the minute difference between two times but get an empty field:   .........base search here......... |end_time = 2024-09-24 08:17:13.014337+00:00 |eval end_time = strptime(end_time_epoch, "%Y:%m:%d %H:%M:%S") |eval _time = now() |eval time_epoch = strptime(time_epoch, "%Y:%m:%d %H:%M:%S") |eval diff = (time_epoch-end_time)/60  
@ITWhisperer After executing the suggested command I am getting the results below. The count should be 2 only: 1 for the storage and 1 for the retrieval.
Sure @Labuser43 , got it now.. pls try this inner join (or, if you want to test the other two join types, "type=left" or "type=outer"):

index=allsessions "*login*"
| join type=inner SESSION_ID [search index=allsessions "*logout*"]

Still, I feel the join can be avoided. Maybe pls check (EDIT - included the OR portion):

index=allsessions "*login*" OR "*logout*" | stats list(OPERATION) by SESSION_ID

or

index=allsessions "*login*" OR "*logout*" | stats values(OPERATION) by SESSION_ID
@inventsekar my requirement is to get SESSION_IDs where both login AND logout occur in that session. To explain more, let's use an example of a session that would fit this criteria:

OPERATION      SESSION_ID
login          1234
add_to_cart    1234
checkout       1234
logout         1234

If I use OR, I think it may return a session that only has login or logout.
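For that "both login AND logout in the same session" requirement, a join-free sketch (this assumes OPERATION contains the exact values login and logout; if they are substrings of longer strings, switch to the searchmatch() approach shown elsewhere in this thread):

```
index=allsessions OPERATION IN ("login","logout")
| stats values(OPERATION) AS ops BY SESSION_ID
| where mvcount(ops)=2
| table SESSION_ID
```

Because the base search keeps only login/logout events and values() deduplicates, mvcount(ops)=2 can only be true when both operations occurred in that session.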
Greetings, Please help!! I need to extract the ID value from the two events below, and I'm kinda banging my head here. I just need to list Q123456789 and each ID in my dashboard, but I can't get past all of the special characters. I've tried using different combinations like this:

| eval msg="the event" | rex "msg =(?<policyId>\w+)" | table policyId

But what I would really like to have is something like this in my dashboard:

Starting Controller Q123456789
CallStatus=Success Q123456789
Starting Controller Q123456788
CallStatus=Success Q123456788
Starting Controller Q123456787
CallStatus=Success Q123456787

And so on. Is this possible? Your help is always appreciated. Thanks

Starting Controller=Fall Action=GetFallReportAssessment data={"policyId":"Q123456789","inceptionDate":"20250501","postDate":"1900-01-01T12:00:00"}

API=/api/Fall/reportAssessment/ CallStatus=Success Controller=Fall Action=GetFallReportAssessment Duration=27 data={"policyId":"Q123456789","inceptionDate":"20250501","postDate":"1900-01-01T12:00:00"}
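Since the ID sits inside the JSON-ish data field, one hedged sketch is to anchor rex on the "policyId" key itself rather than on msg, so none of the braces or commas need escaping (the base search here is a placeholder - adjust it to your own index/sourcetype):

```
index=your_index "policyId"
| rex "\"policyId\":\"(?<policyId>[^\"]+)\""
| table policyId
```

The [^\"]+ pattern matches everything up to the closing quote of the value, which is why the surrounding special characters stop being a problem.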
Hi Team, I hope this email finds you well. I am currently working on a task to monitor long-running Apex classes in Salesforce and would like to write a query to help track these. Could you please suggest the best approach or share a sample query that would assist in identifying and monitoring these classes effectively? Your guidance on this matter would be greatly appreciated. Thank you for your support. Regards
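One possible starting point, as a sketch only: it assumes you ingest Salesforce EventLogFile data via the Splunk Add-on for Salesforce, and the index, sourcetype, and field names (EVENT_TYPE, RUN_TIME in ms, ENTRY_POINT) are assumptions to verify against your own data, as is the 5000 ms threshold:

```
index=salesforce sourcetype="sfdc:logfile" EVENT_TYPE=ApexExecution
| stats count avg(RUN_TIME) AS avg_runtime_ms max(RUN_TIME) AS max_runtime_ms BY ENTRY_POINT
| where max_runtime_ms > 5000
| sort - max_runtime_ms
```

This surfaces the Apex entry points whose worst-case run time exceeds the threshold; the same stats output also works as a scheduled alert condition.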
Hi @Labuser43  if I understand your requirement correctly, you may not need the join at all. Simply try the OR option: index=allsessions "*login*" OR "*logout*"  
Hello, I'm just trying to learn SPL and am currently trying to find all sessions with login and logout requests, identified by the SESSION_ID field. So basically I'm trying to find all SESSION_ID values where within the session the user performs a login and logout operation. Coming from the relational database world, my first step was to write some sort of join operation but I quickly found out that joins are not the best thing to do in Splunk.  This is what I tried:   index=allsessions "*login*" | join type=inner left=L right=R where L.SESSION_ID=R.SESSION_ID [search index=allsessions "*logout*"]   Can someone help me write a better query for the above problem? Thanks!
Hi @LearningGuy  the sendemail command reference:  https://docs.splunk.com/Documentation/Splunk/9.3.1/SearchReference/Sendemail#Examples updated your command with "to=" and message. thanks.  | search1 | join [search2 | stats count | where count > 50000 | eval this = "search 2"] | sendemail to="test@testemail.com" message="50k reached"  
1) >>> I am trying to find out Server Up time & Downtime or offline
The logs have a field "Uptime" - may I know if the unit is seconds?
2) How do you want to find out the downtime or offline state?
3) This command will give you the number of hours before the log line was received:
| eval Uptime = round((now() - _time) / (60 * 60), 1)
Pls suggest how you would like to use this value.
4) May I know why you use | search Uptime="4.0" ?
Sorry, what information are you looking for?
Hi @jaibalaraman  You have a field "Uptime" and then, with the eval, you are recalculating that same value. Could you pls give us more details, thanks.

| mstats max(System.System_Up_Time) AS "Uptime" WHERE index="permon_metrics" host=system1* BY host span=1m
| dedup host
| rex field=host "\w{6}(?<function_abbr>\w{4})"
| search function_abbr=ADDS
| sort Uptime asc
| eval UptimeNew = round((now() - _time) / (60 * 60), 1)
| table Uptime UptimeNew function_abbr host
Hi All  I am trying to find out Server Up time & Downtime or offline. However, the command below is not giving me what I want:

| mstats max(System.System_Up_Time) AS "Uptime" WHERE index="permon_metrics" host=system1* BY host span=1m
| dedup host
| rex field=host "\w{6}(?<function_abbr>\w{4})"
| search function_abbr=ADDS
| sort Uptime asc
| eval Uptime = round((now() - _time) / (60 * 60), 1)
| search Uptime="4.0"

I would like to see the output in a single tile like HH:MM:SS
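If System_Up_Time is in seconds (worth verifying for your perfmon source - that unit is an assumption here), a sketch for rendering it as HH:MM:SS in a single-value tile is tostring(X, "duration"), which formats a number of seconds as [D+]HH:MM:SS:

```
| mstats max(System.System_Up_Time) AS Uptime WHERE index="permon_metrics" host=system1* BY host span=1m
| stats latest(Uptime) AS Uptime BY host
| eval Uptime_HMS = tostring(Uptime, "duration")
| table host Uptime_HMS
```

A single-value visualization over Uptime_HMS then shows the formatted duration directly in the tile.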
Hi @PickleRick  The data is actually also available in Splunk in index=contact, but it is time based and combined with other data, which makes it even larger. It is derived from the original DB, so it's better to obtain the data directly from the DB. Either way, both cases (pulling data with dbxquery and from the index) face the same problem (see below). We are aware that the permanent solution is to join the data in the backend, but for now, as a workaround, I need to pull the data using an SPL join subsearch. I only need a way to alert me if it exceeds 50k. Thanks

Same 50k problem:
| search1
| join [search index=contact ip="10.0.0.0/16" | eval source=search2]
| join [search index=contact ip="10.1.0.0/16" | eval source=search3]
| join [search index=contact ip="10.2.0.0/16" | eval source=search4]
| join [search index=contact ip="10.3.0.0/16" | eval source=search5]
Hi @yuanliu
1) a)  I got this when using sendemail. I think the reason is I am not an admin:
command="sendemail", 'rootCAPath' while sending mail to:
b)  This is the search, correct?
| search1 | join [search2 | stats count | where count > 50000 | eval this = "search 2"] | sendemail test@testemail.com
2)  I found another option is to use "alerts". I did some tests, but it didn't work. I have about 40k total counts. Under "Trigger Conditions", I set: Trigger alert when number of results is greater than 30,000.
Please suggest. Thanks
One approach is to have a separate panel for each search then have the selected token make the appropriate panel appear.
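A minimal Simple XML sketch of that idea (token names, labels, and queries are placeholders): the dropdown's change handler sets one token and unsets the other, and each panel's depends attribute makes it appear only while its token is set:

```
<form>
  <fieldset>
    <input type="dropdown" token="selected">
      <label>Search</label>
      <choice value="a">Search A</choice>
      <choice value="b">Search B</choice>
      <change>
        <condition value="a"><set token="show_a">true</set><unset token="show_b"></unset></condition>
        <condition value="b"><set token="show_b">true</set><unset token="show_a"></unset></condition>
      </change>
    </input>
  </fieldset>
  <row>
    <panel depends="$show_a$">
      <table><search><query>index=_internal | stats count</query></search></table>
    </panel>
    <panel depends="$show_b$">
      <table><search><query>index=_audit | stats count</query></search></table>
    </panel>
  </row>
</form>
```

Only the visible panel's search runs, so this also avoids paying for both searches on every load.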