All Posts

Find the difference between two timestamps by converting each into epoch (integer) format using the strptime function and then subtracting one from the other:

| eval eStartTime=strptime('Start-Time', "%Y-%m-%dT%H:%M:%S.%6N%Z")
| eval eEndTime=strptime('End-Time', "%Y-%m-%dT%H:%M:%S.%6N%Z")

P.S. Avoid using hyphens in field names, as they can be misinterpreted as the subtraction operator.
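A minimal end-to-end sketch against the search below, assuming the renamed "Start-Time"/"End-Time" fields; the duration_sec field name is illustrative:

| eval eStartTime=strptime('Start-Time', "%Y-%m-%dT%H:%M:%S.%6N%Z")
| eval eEndTime=strptime('End-Time', "%Y-%m-%dT%H:%M:%S.%6N%Z")
| eval duration_sec=eEndTime-eStartTime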
Hi, did you ever manage to get to the bottom of this?
Hi,
Can someone please let me know how I can find the difference between the two fields Start-Time and End-Time in the search below?
The time format extracted by the query is:
Start-Time = 2024-01-23T11:38:59.0000000Z
End-Time = 2024-01-23T11:39:03.0000000Z
Query:
`macro_events_prod_srt_shareholders_esa` eocEnv = PRO * "MICROSOFT.DATAFACTORY" activityName = Merge_Disclosure_Request 741b5db8-da47-468b-b883-a06ef137519a
| eval Dreqid=case('category'="PipelineRuns",'properties.Parameters.DisclosureRequestId','category'="ActivityRuns",'properties.Input.storedProcedureParameters.DisclosureRequestId.value',1=1,"")
| eval end_time=case('end'="1601-01-01T00:00:00.0000000Z","Still-Running",1=1,'end')
| table eocEnv, start, end_time, pipelineName, activityName, pipelineRunId, level, status, category, Type, Dreqid, properties.Error.errorCode, properties.Error.message
| rename Dreqid as "Disclosure request id", eocEnv as "Environment", EOC_ResourceGroup as "Resource_Group", activityName as "Activity Name", pipelineName as "Pipeline Name", operationName as "Operation Name", pipelineRunId as "Run_Id", level as "Level", status as "STATUS", category as "Category", start as "Start-Time", end_time as "End-Time", properties.Error.errorCode as "Error-Code", properties.Error.message as "Error-Message"
| sort -"Start-Time"
No. Remember that Splunk is "just" a data processing solution. In order to process the data it must have that data, and the logon events only contain so much. If you don't have any external source of information that you could correlate with them, you simply don't have that data. But if you know you have a closed list of accounts you want to check (for example userA, userB and Administrator), you can explicitly look for only those logins, as in the sketch below.
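A minimal sketch of searching for just such a closed list, assuming the TA_windows windows_logon_sucessful eventtype mentioned elsewhere in this thread and the standard Account_Name field; the index list is a placeholder:

index IN (your,windows,events,index(es)) eventtype=windows_logon_sucessful Account_Name IN ("userA","userB","Administrator")
| stats count by Account_Name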
Close. You don't need to restart the DS. Just reload the deployment classes (that's if you're doing it via the CLI; if I remember correctly, the GUI takes care of that automatically).
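For reference, a minimal sketch of that reload from the CLI, run on the deployment server itself (the install path assumes a default Linux install):

/opt/splunk/bin/splunk reload deploy-server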
So basically we can do the following:
1. Create a dummy app and assign the systems/UFs we wish to restart
2. Flag that app to restart with the checkbox
3. Restart the deployment server
4. Dummy app gets pushed to the UF
5. UF reads it and restarts/reloads itself?
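A minimal serverclass.conf sketch of that setup on the deployment server; the server class, whitelist and app name are placeholders (the GUI checkbox sets the same restartSplunkd attribute):

[serverClass:restart_uf]
whitelist.0 = uf-host-*

[serverClass:restart_uf:app:dummy_restart_app]
restartSplunkd = true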
Hello everyone. Is it possible to configure a BT to capture snapshots from just a single tier? For example: I want my BT called instance/instance{id} to capture transaction snapshots from a single tier called portal-api. Can I exclude the other tiers, or can I select specific tiers for the BT? Thank you!
My goal is to ensure the daily driver is used and admin accounts are only logged into for admin purposes. I have the query to trigger on the successful logon, but I was hoping I could get around having to list out every admin account and user account individually to iterate through. The people I'm interested in have two accounts, and I want to know how often they're using each.
Have a look at the timewrap command: timewrap command overview - Splunk Documentation
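A minimal sketch of that approach, assuming the same base search as in the question below; timewrap lays the days side by side so each 15-minute bucket from today lines up against the same bucket from yesterday:

basesearch earliest=-1d@d latest=now
| timechart span=15m count
| timewrap 1d

The weekend-skipping logic from the original search would still need to be applied on top of this.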
This is more of a Windows question than a Splunk one. But if you're using TA_windows you have a nice predefined eventtype to easily find successful logins:

index IN (your,windows,events,index(es)) eventtype=windows_logon_sucessful

If you can easily distinguish your admin users because, for example, by convention they have a "-adm" suffix, you can easily add an additional condition to match only those values of Account_Name (see the sketch below). I'm not sure if Windows reports the group membership or privilege level of the user logging in, so if your account schema is more complicated you might need to correlate it with an external source of inventory (for example, keep a list of admin accounts to perform a lookup against).
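A minimal sketch of that extra condition, assuming the "-adm" naming convention and the standard Account_Name field; the index list is a placeholder:

index IN (your,windows,events,index(es)) eventtype=windows_logon_sucessful Account_Name="*-adm"
| stats count by Account_Name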
I'm using Splunk Enterprise 9.x with Universal Forwarders 9.x on Windows 2019. All my forwarders are connected to a deployment server. I notice the following, for example:
1. I update a deployment server app (say, update inputs.conf with a new input stanza)
2. I restart the deployment server
3. I view the inputs at the forwarder using btool and see that my changes have propagated
However, even though the updated inputs.conf file seems to have landed at the forwarder, I do not see the events defined by my new inputs.conf hitting the indexer until I restart the forwarder. Perhaps this is expected, based on When to restart Splunk Enterprise after a configuration file change - Splunk Documentation? Is this expected, and if so, is there any way to restart the forwarder remotely using Splunk itself?
I upgraded Splunk Enterprise to 9.1.2. After doing the upgrade I see high CPU utilization. Has anyone encountered a similar issue after upgrading? Splunk is running on Windows Server.
We are using Splunk 9 and are seeing a situation where a file gets re-ingested entirely each time the vendor product trims the older lines from the top of the file. The customer does not have any control over how the vendor product does the file trimming. Splunk seems to lose track of its pointer and processes each line again even though the lines have been read previously. This is happening on a Windows client. Any ideas on how to handle this issue?
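For what it's worth, a hedged inputs.conf sketch of settings commonly experimented with in this situation; the monitor path is a placeholder, and whether either setting helps depends on how the vendor trims the file, since Splunk fingerprints a file by a CRC of its head:

[monitor://C:\vendor\logs\app.log]
initCrcLength = 1024
crcSalt = <SOURCE>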
Hi, I have been trying to deploy the OpenTelemetry Collector in my AWS EKS cluster to send logs to Splunk Enterprise. I deployed it using the SignalFx OpenTelemetry Collector Helm chart: https://github.com/signalfx/splunk-otel-collector. But there seems to be an issue: it is not sending logs with timestamps older than the OTel pods themselves. It only starts sending once new logs are written to a log file; old logs are never ingested. This seems to be a configuration issue, and I would like some help with it.
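One possibly relevant knob, offered as an assumption to verify: the collector's filelog receiver has a start_at option that controls whether pre-existing file contents are read. A minimal sketch of the receiver config (how, and whether, the Helm chart exposes this must be checked against its values file):

receivers:
  filelog:
    include: [ /var/log/pods/*/*/*.log ]
    start_at: beginning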
Add the props.conf file to an app and upload the app to Splunk Cloud.
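A minimal sketch of such an app's layout, with a placeholder name (Splunk Cloud expects the app uploaded as a packaged .spl/.tgz):

my_props_app/
    default/
        props.conf
    metadata/
        default.meta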
Hi, I have the SPL below and I would like to get a comparison over 15-minute time spans, i.e. if we run it today at 5 am, we should get a table of the count for every 15 minutes today vs. the count for the same time yesterday. Please could you help?
Current SPL:
basesearch earliest=-3d@d latest=now
| eval date_wday=strftime(_time,"%A")
| search NOT (date_wday=Saturday OR date_wday=Sunday)
| eval last_weekday=strftime(now(),"%A")
| eval previous_working_day=case(match(last_weekday,"Monday"),"Friday",match(last_weekday,"Tuesday"),"Monday",match(last_weekday,"Wednesday"),"Tuesday",match(last_weekday,"Thursday"),"Wednesday",match(last_weekday,"Friday"),"Thursday")
| where date_wday=last_weekday OR date_wday=previous_working_day
| eval DAY=if(date_wday=last_weekday,"TODAY","YESTERDAY")
| chart count by Name,DAY
| eval percentage_variance=abs(round(((YESTERDAY-TODAY)/YESTERDAY)*100,2))
| table Name TODAY YESTERDAY percentage_variance
I want to create an alert that notifies when Windows admins log in and which accounts they are using. I want to ensure they are not using admin accounts as daily drivers. I want the search to produce a count of the logins and which account they are using. Can someone give me some direction on this, please?
Hi @_pravin,
if your logs contain the external IP used to connect to the VPN, you can use the iplocation command, which finds the country of that IP (see the sketch below).
You need a custom lookup if you have internal VLANs to use for the mapping; for external IPs you can use the lookup used by the iplocation command.
Ciao.
Giuseppe
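A minimal sketch of that, assuming a src_ip field holding the external VPN address and a user field; both names and the index are placeholders:

index=vpn sourcetype=vpn_logs
| iplocation src_ip
| stats count by user, Country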
Honestly, I have no idea what the point of this thread was. You asked a vague question (apparently with some assumptions known only to you). People tried to help you by asking more precise questions (because often there is more than one way to do something in Splunk, and depending on the data and requirements some solutions may be far more effective than others), and you kept throwing out more and more remarks leading nowhere. Even if you asked it purely as a theoretical exercise, to find out what approaches you can take to solve this kind of problem, you should have said so clearly; a descriptive answer about general possible approaches is something completely different from a solution to a specific problem. And no, the set operators are not very effective and are very rarely used, since there are usually much better solutions: either filtering out by a subsearch as @yuanliu showed, or by classifying, statsing and filtering (or, even faster, tstatsing if you can use indexed fields).
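For illustration, a minimal sketch of that classify-stats-filter pattern, with invented index, sourcetype and field names: users seen in source A but never in source B, without any set operator:

(index=idx sourcetype=srcA) OR (index=idx sourcetype=srcB)
| eval class=if(sourcetype="srcA","A","B")
| stats values(class) as classes by user
| where mvcount(classes)=1 AND classes="A"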
Hi @gcusello,
I agree with you. But I still don't know how accurate this can be, as it uses a lookup, and what would happen when a person logs in from another country not mentioned in the lookup. Since we use SAML, I was hoping to get the information from the internal team to check if they have some sort of logs that capture such details. If they do, I might have an accurate technique to track the details. If not, then the lookup is the solution.
Thanks,
Pravin