All Topics

Hello Splunk Community, Can anyone help me build a query based on the below? I have a batch job that has multiple steps logged as separate events. How can I calculate the total duration of the batch job (Step 1 Start - Step 5 End)? Example of my output format (dummy data used):

Step   Start_Time            End_Time              Duration (Hours)
1      2021-09-11 22:45:00   2021-09-11 22:45:01   00:00:01
2      2021-09-11 22:45:01   2021-09-11 22:45:20   00:00:19
3      2021-09-11 22:45:20   2021-09-11 22:58:15   00:12:55
4      2021-09-11 22:58:15   2021-09-11 22:58:39   00:00:24
5      2021-09-11 22:58:39   2021-09-11 24:20:31   01:21:52

THANK YOU!
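A sketch of one possible approach, assuming each step is an event carrying Start_Time and End_Time fields in the format shown (min/max work here because the timestamp format sorts lexically):

```spl
| stats min(Start_Time) as job_start max(End_Time) as job_end
| eval start_epoch=strptime(job_start, "%Y-%m-%d %H:%M:%S")
| eval end_epoch=strptime(job_end, "%Y-%m-%d %H:%M:%S")
| eval total_duration=tostring(end_epoch - start_epoch, "duration")
```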
Hello Splunk Community, Can anyone help me build a query based on the below? I want to convert a field (Fri Oct 8 23:15:05 AEDT 2021) into a time format and then calculate the duration by subtracting the start time from the end time. Appreciate your help
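A sketch using strptime. Note that %Z parsing of timezone abbreviations like AEDT is unreliable, so one workaround is to strip the zone first; the field names start_time and end_time are assumptions:

```spl
| eval start_epoch=strptime(replace(start_time, " AEDT", ""), "%a %b %d %H:%M:%S %Y")
| eval end_epoch=strptime(replace(end_time, " AEDT", ""), "%a %b %d %H:%M:%S %Y")
| eval duration=tostring(end_epoch - start_epoch, "duration")
```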
Hi, I have another request similar to my previous post but with a variation. Here is the multi-valued field ColY. ColY has only two values, ON or OFF. I need to find all rows which changed values from ON to OFF or vice-versa, in any order. Below is the example:

ColX      ColY
A123456   ON ON ON
A123457   ON OFF ON OFF
A123458   ON ON OFF ON ON ON OFF
A123459   OFF OFF OFF
A123460   ON ON ON OFF OFF OFF

Required output:

ColX      ColY                     totalChanges
A123457   ON OFF ON OFF            3
A123458   ON ON OFF ON ON ON OFF   3
A123460   ON ON ON OFF OFF OFF     1
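One possible sketch: join the multivalue field into a string, collapse consecutive runs of the same value with a backreference regex, and count the remaining transitions. The run-collapsing trick is an assumption about how your data is laid out (single space between values):

```spl
| eval s=mvjoin(ColY, " ")
| eval runs=replace(s, "(ON|OFF)( \1)+", "\1")
| eval totalChanges=mvcount(split(runs, " ")) - 1
| where totalChanges > 0
| table ColX ColY totalChanges
```

For example, "ON ON OFF ON ON ON OFF" collapses to "ON OFF ON OFF", giving 4 runs and therefore 3 changes.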
Hi All, Can someone help me with the following? ColY represents a multi-value field. I want to search all rows which have null, 0, and some other values in ColY. Based on the below example, output rows should be for A123456 and A123461.

ColX      ColY
A123456   null 0 56789 987654
A123457   4332
A123458   54322 0
A123459   null 0
A123460   2345667 7665443
A123461   null 788765 0
A123462   876543 null
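A sketch, assuming "null" is a literal string value in the multivalue field (not a missing value): require a "null" entry, a "0" entry, and at least one more value beyond those two:

```spl
| where isnotnull(mvfind(ColY, "^null$"))
    AND isnotnull(mvfind(ColY, "^0$"))
    AND mvcount(ColY) > 2
```

In the example this keeps A123456 and A123461 and drops A123459, which has only null and 0.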
I want to add a dropdown selection menu inside a dashboard panel using HTML, and then, based on the selected item from the dropdown, change the search query accordingly so the results are updated in the chart visualization. Example: Select Month: All, Jan, Feb, ...Dec. When the user selects All, the query will display results for the entire year (Jan-Dec), and when Jan is selected, the chart displays only the January data. Similarly for the others. I have other panels within the dashboard under different rows, and I need to insert the dropdown menu only for a specific panel. Also, my base search remains the same irrespective of the dropdown selection; only the panel search string changes with the dropdown selection. Any suggestion or guide would be a great help. It would be nice if it can be done with no or minimal use of JavaScript. TIA, thanks
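This can usually be done in plain Simple XML with no JavaScript: an <input> placed inside a <panel> scopes it to that panel only, and a token drives the panel's post-process search on top of the shared base search. A sketch; the base query, the Month field, and the token name are all assumptions, and the remaining months would be added as further <choice> elements:

```xml
<form>
  <search id="base_search">
    <query>index=my_index sourcetype=my_sourcetype</query>
  </search>
  <row>
    <panel>
      <input type="dropdown" token="month_tok" searchWhenChanged="true">
        <label>Select Month</label>
        <choice value="*">All</choice>
        <choice value="Jan">Jan</choice>
        <choice value="Feb">Feb</choice>
        <default>*</default>
      </input>
      <chart>
        <search base="base_search">
          <query>search Month="$month_tok$" | timechart count</query>
        </search>
      </chart>
    </panel>
  </row>
</form>
```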
We know the amount of data ingested daily from the Splunk internal logs and the License dashboard, but we're trying to find out if there's a way to measure the amount of data purged daily based on our data retention policy. Appreciate any help on this.
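A hedged starting point: when retention expires, the BucketMover component logs bucket freezes (deletion, unless a coldToFrozenDir is configured) to _internal, so you can at least count frozen buckets per day. The exact message text varies by version and the logs do not reliably carry byte sizes, so this is a proxy rather than an exact purged-bytes figure:

```spl
index=_internal sourcetype=splunkd component=BucketMover "freeze"
| timechart span=1d count as buckets_frozen
```

Combining this with typical bucket sizes from | dbinspect index=your_index could give a rough daily estimate.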
Hi All, As part of one of my SRE objectives I was trying to find out the following in Splunk: the high (max) count of ERRORs within a given time period (1hr / 24hr / 144hr) compared to the monthly 99th percentile. I was starting off with baby steps, assuming that the count is obviously zooming in on anything 'ERROR':

index=myIndex ERROR source="/test.log"
| timechart count by status
| addtotals
| addtotals fieldname=ERROR
| eval ErrorRate=round(Errors/Total*100,2)
| fields _time 5* ErrorRate

But that doesn't seem to even work. Help would be really appreciated. Thanks in advance, team!
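A sketch of one working variant (index and source taken from the post; everything else is an assumption). Counting all events and the ERROR subset in one timechart avoids the double addtotals, and the searchmatch idiom avoids relying on a status field:

```spl
index=myIndex source="/test.log"
| timechart span=1h count as Total count(eval(searchmatch("ERROR"))) as Errors
| eval ErrorRate=round(Errors/Total*100, 2)
| eventstats perc99(ErrorRate) as p99_ErrorRate
| eval above_p99=if(ErrorRate > p99_ErrorRate, "yes", "no")
```

Run over a month, adjusting span for the 1hr / 24hr / 144hr windows.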
I created an Authentication data model that has Default, Insecure, and Privileged Authentication datasets. It also uses action=success and action=failure. Please see the screenshot below: I can see the data coming in from different sources, but the issue is that we have so many Windows authentication failures. How can I fix these configuration issues? Has anybody come across such issues?
Hi, I created a Splunk Cloud free trial account yesterday. When I logged in with the emailed credentials, I was prompted to change the password, which I did. But today, trying to log in to the cloud service using the same credentials I created yesterday, it shows me "access denied", with no option anywhere for retrieving the login details. I need help on how to retrieve my cloud login details, please.
This is really a log4net question but I'm hoping the folks here can help; I have been unsuccessful at searching online for a solution.

We have a custom application which generates local logs in JSON format via the log4net module. We then have a Splunk UF installed to collect said logs. In general that all works fine. The problem is that some log messages include a nested JSON 'message' field -- but log4net is misformatting it as a string and so Splunk doesn't parse the nested part. You can see the issue (below) where log4net is unnecessarily adding quote marks around the nested part:

CURRENT/INVALID
"message":"{"command":"Transform271ToBenefitResponse","ms":1}"

PROPER
"message":{"command":"Transform271ToBenefitResponse","ms":1}

I'm not entirely sure of the log4net configuration but here's what I was told by one of our developers:

ORIGINAL LOG4NET CONFIG
<conversionPattern value="%utcdate [%property{CorrelationId}] [%property{companyId}] [%property{userId}] [%thread] [%level] %logger - %message%newline" />

UPDATED CONFIG; STILL FAILS
<conversionPattern value="{&quot;date&quot;:&quot;%date{ISO8601}&quot;, &quot;correlationId&quot;:&quot;%property{CorrelationId}&quot;, &quot;companyId&quot;:&quot;%property{companyId}&quot;, &quot;userId&quot;:&quot;%property{userId}&quot;, &quot;thread&quot;:&quot;%thread&quot;, &quot;level&quot;:&quot;%level&quot;, &quot;logger&quot;:&quot;%logger&quot;, &quot;message&quot;:&quot;%message&quot;}%newline" />
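One possible direction (a sketch, not a verified fix): since %message already expands to serialized JSON for these events, dropping the &quot; marks surrounding %message in the pattern stops log4net from quoting it. The caveat is that any message that is not itself JSON would then produce invalid JSON, so this only works if every message is pre-serialized; otherwise a custom pattern converter that quotes conditionally would be needed:

```xml
<conversionPattern value="{&quot;date&quot;:&quot;%date{ISO8601}&quot;, &quot;correlationId&quot;:&quot;%property{CorrelationId}&quot;, &quot;companyId&quot;:&quot;%property{companyId}&quot;, &quot;userId&quot;:&quot;%property{userId}&quot;, &quot;thread&quot;:&quot;%thread&quot;, &quot;level&quot;:&quot;%level&quot;, &quot;logger&quot;:&quot;%logger&quot;, &quot;message&quot;:%message}%newline" />
```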
I have a field called serial_id which has the value ABC2022100845001. I need a count of values that contain 45 in the last 5th & 6th bytes.
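A sketch using substr with a negative start offset, which counts from the end of the string. In the example value ABC2022100845001 the "45" starts 5 characters from the end; adjust the -5 if your byte counting differs:

```spl
| eval marker=substr(serial_id, -5, 2)
| where marker="45"
| stats count
```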
I have 2 major questions: 1) I have 2 sourcetypes, A and B, with 2 important fields, Category and Environment. I want to compare all of the Category and Environment values from sourcetype A to sourcetype B and then return results that are common to both sourcetypes. 2) Same setup: I want to compare all of the Category and Environment values from sourcetype A to sourcetype B and then return results that do not show up in both sourcetypes.
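A sketch, assuming Category and Environment are extracted in both sourcetypes: count distinct sourcetypes per Category/Environment pair, then filter on that count.

```spl
sourcetype=A OR sourcetype=B
| stats dc(sourcetype) as st_count values(sourcetype) as seen_in by Category Environment
| where st_count=2
```

For question 2, change the last line to | where st_count=1 to keep pairs that appear in only one sourcetype.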
I am pulling data from multiple locations, and a new field, threshold, has been introduced. The issue is that threshold is common but has different values depending on whether it is cpuPerc or memoryCons etc. There are 4 of them.

| mstats min("mx.process.cpu.utilization") as cpuPerc min("mx.process.threads") as nbOfThreads min("mx.process.memory.usage") as memoryCons min("mx.process.file_descriptors") as nbOfOpenFiles min("mx.process.up.time") as upTime avg("mx.process.creation.time") as creationTime WHERE "index"="metrics_test" AND mx.env=http://mx20267vm:15000 span=10s BY pid cmd service.type host.name service.name replica.name component.name threshold

One option I have, if I want to display all the thresholds differently, is to write 3 joins - but this is heavy on CPU.

| mstats min("mx.process.cpu.utilization") as cpuPerc WHERE "index"="metrics_test" AND mx.env=http://mx20267vm:15000 span=10s BY pid cmd service.type host.name service.name replica.name component.name threshold
| rename service.name as service_name
| rename threshold as T_cpuPerc
| join _time service.name replica.name
    [| mstats min("mx.process.threads") as nbOfThreads WHERE "index"="metrics_test" AND mx.env=http://mx20267vm:15000 span=10s BY pid cmd service.type host.name service.name replica.name component.name threshold
    | rename service.name as service_name
    | rename threshold as T_nbOfThreads ]
| join _time service.name replica.name
    [| mstats min("mx.process.memory.usage") as memoryCons WHERE "index"="metrics_test" AND mx.env=http://mx20267vm:15000 span=10s BY pid cmd service.type host.name service.name replica.name component.name threshold
    | rename service.name as service_name
    | rename threshold as T_memoryCons ]
| join _time service.name replica.name
    [| mstats min("mx.process.file_descriptors") as nbOfOpenFiles WHERE "index"="metrics_test" AND mx.env=http://mx20267vm:15000 span=10s BY pid cmd service.type host.name service.name replica.name component.name threshold
    | rename service.name as service_name
    | rename threshold as T_nbOfOpenFiles ]

I am trying this way, but I am not sure how to merge the rows by time in the end - any ideas?

| mstats min("mx.process.cpu.utilization") as cpuPerc min("mx.process.threads") as nbOfThreads min("mx.process.memory.usage") as memoryCons min("mx.process.file_descriptors") as nbOfOpenFiles min("mx.process.up.time") as upTime avg("mx.process.creation.time") as creationTime WHERE "index"="metrics_test" AND mx.env=http://mx20267vm:15000 span=10s BY pid cmd service.type host.name service.name replica.name component.name threshold
| rename service.name as service_name
| search service_name IN (*cache*)
| eval threshold_nbOfThreads=if(isnull(nbOfThreads),"",$threshold$)
| eval threshold_memoryCons=if(isnull(memoryCons),"",$threshold$)
| eval threshold_nbOfOpenFiles=if(isnull(nbOfOpenFiles),"",$threshold$)
| table _time threshold threshold_nbOfOpenFiles threshold_memoryCons threshold_nbOfThreads

The issue is that the data is now on different rows - where I need them to be on the same row by _time. So in the image below, we have 3 lines for each time. How can I get them to merge per timestamp?
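A sketch of one way to merge the rows without joins, assuming each mstats row carries the threshold for exactly one metric: copy threshold into per-metric columns, then aggregate by _time (the T_* column names are illustrative):

```spl
| rename service.name as service_name
| eval T_cpuPerc=if(isnotnull(cpuPerc), threshold, null())
| eval T_memoryCons=if(isnotnull(memoryCons), threshold, null())
| eval T_nbOfThreads=if(isnotnull(nbOfThreads), threshold, null())
| eval T_nbOfOpenFiles=if(isnotnull(nbOfOpenFiles), threshold, null())
| stats values(T_cpuPerc) as T_cpuPerc values(T_memoryCons) as T_memoryCons
        values(T_nbOfThreads) as T_nbOfThreads values(T_nbOfOpenFiles) as T_nbOfOpenFiles
        by _time service_name
```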
I am new to Splunk Cloud and I would like to install the Enterprise Security app (below screenshot) on my Splunk. After opening the app, it should look like the below, and finally I should be able to see the below screen. Can anyone please help me with this? If you have any doubts about my question, please let me know. Thanks in advance.
Hi All, I am onboarding data from a heavy forwarder using Splunk TA.  Is it possible to 1) index all logs into one index and route to group A indexers  2) index subset of logs into another index and route to group B indexers? Thanks.
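A sketch of the usual conf-based approach on the heavy forwarder (all stanza names, server names, index names, and the subset regex below are placeholders): define two tcpout groups, send everything to group A by default, and use transforms to steer the subset's _TCP_ROUTING and index. Note that if the subset events must ALSO remain in the first index on the group A indexers, simple routing is not enough and event cloning (e.g. CLONE_SOURCETYPE) would be needed:

```ini
# outputs.conf
[tcpout]
defaultGroup = groupA

[tcpout:groupA]
server = idxA1:9997,idxA2:9997

[tcpout:groupB]
server = idxB1:9997,idxB2:9997

# props.conf
[my_sourcetype]
TRANSFORMS-route = route_subset_groupB, set_subset_index

# transforms.conf
[route_subset_groupB]
REGEX = pattern_for_subset
DEST_KEY = _TCP_ROUTING
FORMAT = groupB

[set_subset_index]
REGEX = pattern_for_subset
DEST_KEY = _MetaData:Index
FORMAT = index_b
```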
Hi, here is the log:

23:50:26.698 app module1: CHKIN: Total:[100000] from table Total:[C000003123456] from PC1
23:33:39.389 app module2: CHKOUT: Total:[10] from table Total:[C000000000000] from PC2
23:50:26.698 app module1: CHKIN: Total:[100000] from table Total:[C000000000030] from PC1
23:33:39.389 app module2: CHKOUT: Total:[10] from table Total:[C000000000000] from PC2

I need to sum the values in brackets. Expected output:

items    total1   total2    from
CHKIN    200000   3123486   PC1
CHKOUT   20       0         PC2

Thanks
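A sketch, assuming the events follow the sample lines exactly (the field names op/total1/total2/src are made up for the extraction):

```spl
| rex "(?<op>CHKIN|CHKOUT): Total:\[(?<total1>\d+)\] from table Total:\[C(?<total2>\d+)\] from (?<src>\S+)"
| stats sum(total1) as total1 sum(total2) as total2 values(src) as from by op
| rename op as items
```

Stripping the leading C in the second bracket leaves a numeric string that sums cleanly (e.g. 000003123456 + 000000000030 = 3123486).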
Hi, I have an index, sensitive_data, that contains sensitive data. I want to ensure that ONLY one particular user with roles power, user has access to this index, but other users with the same roles should not have access to this particular index. How do I do this reliably? What I have done is create an LDAP group, map a role to this group, and allow the index access to that particular role. Can someone please confirm whether the approach is correct?

[role_power]
cumulativeRTSrchJobsQuota = 10
cumulativeSrchJobsQuota = 200
list_storage_passwords = enabled
schedule_search = disabled
srchDiskQuota = 1000
srchMaxTime = 8640000
rtsearch = disabled
srchIndexesAllowed = *
srchIndexesDisallowed = sensitive_data

[role_user]
schedule_search = enabled
srchMaxTime = 8640000
srchDiskQuota = 500
srchJobsQuota = 8
srchIndexesAllowed = *
srchIndexesDisallowed = sensitive_data

[role_sensitive-data-power]
cumulativeRTSrchJobsQuota = 0
cumulativeSrchJobsQuota = 0
importRoles = power
srchIndexesAllowed = sensitive_data
srchMaxTime = 8640000

[role_sensitive-data-user]
cumulativeRTSrchJobsQuota = 0
cumulativeSrchJobsQuota = 0
importRoles = user
srchIndexesAllowed = sensitive_data
srchMaxTime = 8640000

Thanks
How can I group the start and end time of a station like the attachment shows? I want to skip the start times marked with X.
Hi Splunkers,

Hopefully I am posting in the correct place; apologies if not! I have the following code/SPL from inside the XML form. It looks inside a lookup, and then gives information about a specific field (field name taken from the variable "FieldName") which matches the value of SearchString (value taken from the variable "SearchString").

| inputlookup $lookup_name$ | search $FieldName$=$SearchString$

Those of you with experience will see that it doesn't work this way. I am assuming that to make this XML code work and give me the search result I expect, I need to expand the variables? If so, any idea how to do that?

Regards, vagnet
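For what it's worth, token expansion inside a form's <query> generally works when the tokens are set by form inputs; a common gotcha is quoting the value token so values with spaces survive. A sketch, assuming $lookup_name$, $FieldName$, and $SearchString$ are defined by inputs elsewhere in the form:

```xml
<search>
  <query>| inputlookup $lookup_name$ | search $FieldName$="$SearchString$"</query>
</search>
```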
Hey All, I get no results found for a tag that looks for fields created by a rex. So, with

sourcetype=DataServices | rex "JOB: Job (?<BIMEJob><(?<=<).*(?=>)>)"

I get the BIMEJob field with results. Now I want to bunch some field values together, so I create a tag containing the field values I care about. Added to my search, I get no results found:

sourcetype=DataServices tag=GB1_BIME | rex "JOB: Job (?<BIMEJob><(?<=<).*(?=>)>)"

Greatly appreciated if someone could help!
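One likely cause (an assumption, not a confirmed diagnosis): tag=GB1_BIME is resolved in the base search, before the | rex command runs, so the tag can only match fields that already exist at search time. Moving the extraction into props.conf as an automatic extraction would make BIMEJob, and therefore the tag, available in the base search. A sketch, assuming the stanza name matches the sourcetype:

```ini
# props.conf on the search head
[DataServices]
EXTRACT-bimejob = JOB: Job (?<BIMEJob><.*>)
```

with the tag mapped to the BIMEJob field values in tags.conf as before.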