All Posts

I am using the Sideview app to try to monitor usage by users. There is a "Pain" field in the User Activity report. Does anyone know what this field is trying to show?
| tstats count where index=<your-index-here> earliest=-3d@d latest=now() by _time span=15m
| eval date_wday=strftime(_time,"%A"), date_hourmin=strftime(_time,"%H:%M")
| search date_wday!=Saturday date_wday!=Sunday
| eval current_weekday=strftime(now(),"%A")
| eval previous_working_day=case(match(current_weekday,"Monday"),"Friday",match(current_weekday,"Tuesday"),"Monday",match(current_weekday,"Wednesday"),"Tuesday",match(current_weekday,"Thursday"),"Wednesday",match(current_weekday,"Friday"),"Thursday")
| table _time count date_wday date_hourmin current_weekday previous_working_day
| where date_wday=current_weekday OR date_wday=previous_working_day
| chart sum(count) as count by date_hourmin, date_wday

OK, that is the closest I could get to what you originally tried. However, there are some flaws with this solution you may want to consider. Specifically, partial time bins cannot be filtered out without using the timechart command, so it could look like the count for the most recent time span has dangerously dropped when in reality you only have 2 or 3 minutes of the 15-minute window to measure.

Working with the timewrap command is a more correct way to do this, as you can leverage timechart, which allows you to disable partial windows. You will find, though, that filtering out weekends and the -3d@d makes for odd visualizations.

index=<your-index-here> date_wday!=saturday date_wday!=sunday earliest=-3d@d latest=+1d@d
| timechart span=15m partial=f count
| timewrap 1day align=end

Splunk extracts the date_* fields for you already. The +1d@d is only important if you want your graph to go midnight to midnight; replace it with now() if you are OK with the visualization start and end moving as the day progresses.
Hi @Derson, I wouldn't call this behavior a bug. The eval command works with both strings and numbers: if the values are strings it concatenates them, and if the secondUID values are numbers it performs arithmetic. I think some of your secondUID values are being processed as numbers, which is why the lookup cannot match. Your solution is wise and ensures that the values are processed as strings.
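A minimal sketch of that approach, assuming the field sometimes arrives as a number (the lookup name and output field are placeholders, not from the original thread):

| eval secondUID=tostring(secondUID)
| lookup my_lookup secondUID OUTPUT description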
Hi @pranay03, By default, the filelog receiver doesn't read logs from a file that is not actively being written to, because start_at defaults to end. This setting is ignored if previously read file offsets are retrieved from a persistence mechanism, so this behavior is not a problem during normal running; only on the first installation will otel start reading new files. If you want to read old files too, you can set the start_at parameter to beginning. Here is the related documentation: https://docs.splunk.com/observability/en/gdi/opentelemetry/components/filelog-receiver.html#settings
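A minimal sketch of what that looks like in the collector configuration, assuming the filelog receiver; the file path is a placeholder and only the start_at setting is the point being illustrated:

receivers:
  filelog:
    include:
      - /var/log/myapp/*.log   # placeholder path
    start_at: beginning        # read existing files on first start; default is "end"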
Use the strptime and strftime functions to convert time formats.

| eval timeField=strftime(strptime(timeField,"%H:%M:%S.%6Q"), "%H:%M:%S")

You can also use string manipulation to cut off the last 7 characters.
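A sketch of the string-manipulation alternative, assuming the value always looks like HH:MM:SS.ssssss (so the last 7 characters are the dot plus six sub-second digits):

| eval timeField=substr(timeField, 1, len(timeField)-7)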
What is it you expect the Deployment Server to do? A DS has no use for props.conf, transforms.conf, or inputs.conf.  It uses outputs.conf to send its logs to the indexer(s).
I was facing the same issue. I used the following condition and it is working fine:

search result_of_search > 10
I ended up just adding the hex code of my preferred colors to the options section of the visualization:

"viz_AbCd12if": {
    "type": "splunk.table",
    "dataSources": {
        "primary": "ds_xNY7uyLU"
    },
    "title": "Title of Table",
    "options": {
        "columnFormat": {
            "sparkline": {
                "data": "> table | seriesByName(\"sparkline\") | formatByType(sparklineColumnFormatEditorConfig)",
                "sparklineColors": [
                    "#66aaf9",
                    "#66aaf9",
                    "#66aaf9",
                    "#66aaf9",
                    "#66aaf9",
                    "#66aaf9",
                    "#66aaf9",
                    "#66aaf9",
                    "#66aaf9",
                    "#66aaf9"
                ]
            }
        }
    }
}
Hi, on the UF I placed the inputs, outputs, props, and transforms configuration files in etc/apps/remo/local, and when I search the data on the indexer + search head servers the events are received successfully.

[monitor://E:\KS Application GBR (GR)\sbxLogs\]
index = ks_dev
sourcetype = ks_logs
crcSalt = <SOURCE>

[tcpout:bprserver]
server = 1.2.3.4:9997
useACK = true

[ks_logs]
TRANSFORMS--null = EXCLUDE_INFO_WARN_events

[EXCLUDE_INFO_WARN_events]
REGEX = ^[\d|-]*\s[\d|:|,]*\s(INFO|WARN).*$
DEST_KEY = queue
FORMAT = nullQueue

The same configuration was added on the deployment server under etc\deploymentapps\ksapp\local:

[monitor://E:\KS Application GBR (GR)\sbxLogs\]
index = ks_dev
sourcetype = ks_logs
crcSalt = <SOURCE>

[tcpout:bprserver]
server = 1.2.3.4:9997
useACK = true

[ks_logs]
TRANSFORMS--null = EXCLUDE_INFO_WARN_events

[EXCLUDE_INFO_WARN_events]
REGEX = ^[\d|-]*\s[\d|:|,]*\s(INFO|WARN).*$
DEST_KEY = queue
FORMAT = nullQueue

Events are being received on the SH + indexer servers. Note: in my account there is no Heavy Forwarder instance. Please help with how to do the configuration on the deployment server.
Hi, can someone please let me know how to convert the time from the format hh:mm:ss.6Q to hh:mm:ss?
Hi @nnkreddy, if you're confident that you received an event in the last 24 hours, you could run something like this:

index = index1 earliest=-24h latest=now source IN (dev-*api.log) ("testapi" AND "HEARTBEAT")
| stats latest(_time) AS latest BY APIName, JVM
| where latest>now()-300

If you're not sure that you received at least one event in the last 24 hours, you have to create a lookup (called e.g. perimeter.csv) containing all the APIName and JVM values to monitor, then you can run something like this:

index = index1 earliest=-5m latest=now source IN (dev-*api.log) ("testapi" AND "HEARTBEAT")
| stats count BY APIName, JVM
| append [ | inputlookup perimeter.csv | eval count=0 | fields APIName JVM count ]
| stats sum(count) AS total BY APIName, JVM
| where total=0

The second search is lighter and faster to execute and gives more control, but it requires you to manage the lookup. Ciao. Giuseppe
Splunk will check the first 256 (configurable) bytes of a monitored file to see if the entire file has been changed rather than new lines added to the end.  If it sees the beginning of the file is different then it assumes the entire file is new and re-ingests it. I see no workaround for this.  Splunk has no way to know how many older lines were trimmed and so has to treat the whole file as new.
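For reference, the "configurable" part is the initCrcLength setting in inputs.conf; a minimal sketch (the monitor path is a placeholder), noting this only changes how much of the file head is checksummed and does not change the trimming behavior described above:

[monitor:///var/log/myapp/rolling.log]
initCrcLength = 1024   # default is 256 bytes; raise it if files share a long common header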
Hello, I've a simple requirement but am new to Splunk, so I'm facing some challenges and hoping for some luck! My application writes HEARTBEAT messages every 2 min to log files at multiple sources. I'm just trying to create an alert and send an email if heartbeat messages aren't written in the last 5 min. It may look simple, but I also need to know which sources don't have heartbeat messages. I've tried the query below, which works but sometimes gives me incorrect results, so I'm looking for a better and simpler solution.

index = index1 earliest=-5m latest=now source IN (dev-*api.log) ("testapi" AND "HEARTBEAT")
| fields source
| append [ search index = index1 earliest=-2w@w0 latest=now source IN (dev-*api.log) ("testapi" AND "HEARTBEAT")
    | stats dc(source) as source_list by source
    | fields source ]
| rex field=_raw "HEARTBEAT for (?<APIName>.*).jar (?<Version>.*)"
| stats count as #heartbeats, latest(Version) as Versions by APIName, JVM
| eval Status=case(('#heartbeats' <= 1 OR isnull('#heartbeats')), "NOT RUNNING", '#heartbeats' > 1, "RUNNING")
| table APIName, Versions, Status

Appreciate the help! Thanks.
Any app containing inputs.conf should have the "Restart splunkd" option enabled.  Do that in the Forwarder Management section of the Deployment Server.  That will tell the UF to restart itself each time it gets an updated copy of the app.
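If you manage server classes directly in serverclass.conf rather than through the Forwarder Management UI, the equivalent setting is restartSplunkd; a minimal sketch with placeholder server class and app names:

[serverClass:ks_forwarders:app:ksapp]
restartSplunkd = true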
Hello there, I was having the same issue, and it turned out to be a problem with the installation. So I just did a yum remove splunk* and removed the /opt/splunkforwarder home directory completely. After uninstalling and removing the Splunk home directory, I started Splunk just fine and was able to run the add monitor command without any issues. I'm running RHEL 8.x and issuing all of these commands via the Linux CLI. Splunk version is 9.1.2. I hope this helps. Respectfully, Guillermo, Washington, DC
Find the difference between two timestamps by converting each into epoch (integer) format using the strptime function and then subtracting them.

| eval eStartTime=strptime('Start-Time', "%Y-%m-%dT%H:%M:%S.%6N%Z")
| eval eEndTime=strptime('End-Time', "%Y-%m-%dT%H:%M:%S.%6N%Z")

P.S. Avoid using hyphens in field names as they can be misinterpreted as the subtraction operator.
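To complete the picture, the subtraction itself is one more eval; Duration here is just an illustrative field name and the result is in seconds:

| eval Duration=eEndTime-eStartTime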
Hi, did you ever manage to get to the bottom of this?
Hi, can someone please let me know how I can find the difference between the two fields Start-Time and End-Time in the search below.

Format of the time extracted by the query is:
Start-Time = 2024-01-23T11:38:59.0000000Z
End-Time = 2024-01-23T11:39:03.0000000Z

Query:
`macro_events_prod_srt_shareholders_esa` eocEnv = PRO * "MICROSOFT.DATAFACTORY" activityName = Merge_Disclosure_Request 741b5db8-da47-468b-b883-a06ef137519a
| eval Dreqid=case('category'="PipelineRuns",'properties.Parameters.DisclosureRequestId','category'="ActivityRuns",'properties.Input.storedProcedureParameters.DisclosureRequestId.value',1=1,"")
| eval end_time=case('end'="1601-01-01T00:00:00.0000000Z", "Still-Running",1=1,'end')
| table eocEnv , start , end_time , pipelineName , activityName, pipelineRunId, level , status , category , Type , Dreqid, properties.Error.errorCode , properties.Error.message
| rename Dreqid as "Disclosure request id" , eocEnv as "Environment" , EOC_ResourceGroup as "Resource_Group" , activityName as "Activity Name" , pipelineName as "Pipeline Name" , operationName as "Operation Name" , pipelineRunId as "Run_Id" , level as "Level" , status as "STATUS" , category as "Category" , start as "Start-Time" , end_time as "End-Time" , properties.Error.errorCode as "Error-Code" , properties.Error.message as "Error-Message"
| sort -"Start-Time"
No. Remember that Splunk is "just" a data processing solution. In order to process the data it must have that data. The logon events only contain so much data. If you don't have any external source of information that you could correlate with it, you simply don't have that data. But if you know you have a closed list of accounts you want to check (for example userA, userB and Administrator), you can explicitly look for only those logins.
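A rough sketch of that last idea, assuming Windows Security logon events (EventCode 4624) and typical field names from the Windows add-on; the index name and account list are placeholders:

index=wineventlog EventCode=4624 user IN ("userA", "userB", "Administrator")
| stats count latest(_time) AS last_logon BY user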
Close. You don't need to restart the DS. Just reload the deployment server classes (that's if you're doing it via the CLI; if I remember correctly, the GUI takes care of that automatically).
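For reference, the CLI reload looks something like this, run on the deployment server; the -class form only reloads one server class, and the class name shown is a placeholder:

splunk reload deploy-server
splunk reload deploy-server -class ksapp_serverclass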