All Topics



I am trying to get my query to work correctly and display the results in a table for easy analysis. The fields I am using are: host, device_active, device_enabled, _time.

I am trying to track changes from device_active being enabled ("2") to becoming disabled ("1"). I want to display a table showing which hostnames, within the last 2-4 hours, have changed from enabled to disabled. If possible, add traceability.

device_active="1" -----> disabled
device_active="2" -----> enabled

I tried following some tutorials but could not get it to work correctly:
https://splunkonbigdata.com/find-out-the-errors-occurring-2-or-more-times-consecutively/
https://community.splunk.com/t5/Splunk-Search/How-to-count-how-many-times-a-field-value-has-changed-from-one/td-p/202299

Currently, I have the following query:

    index="log-main" sourcetype=monitoring device_active earliest=-4h latest=-2h
    | table host, device_active, device_enabled, _time
    | dedup host
    | streamstats current=f window=1 max(device_active) as prev_status
    | eval isConsecutive = if(device_active == Previous_error, 1, 0)
    | streamstats count as count by device_active reset_before=(isConsecutive==0)
    | streamstats count(eval(isConsecutive==0)) as #ofdisconnects

which is producing a table with the columns: host, device_enabled, device_active, _time, #ofdisconnects, count, isConsecutive, prev_status.

This is currently showing all hostnames rather than filtering down to just the ones that changed status. I'd like to display the same information, but filtered to just the hosts that have device_active disabled but were recently enabled.
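A minimal sketch of one way to surface only the hosts that transitioned (assuming the index and sourcetype names from the question, and that each host emits periodic status events): sort per host, carry the previous status with streamstats, and keep only enabled-to-disabled transitions.

```
index="log-main" sourcetype=monitoring device_active earliest=-4h latest=-2h
| sort 0 host _time
| streamstats current=f window=1 last(device_active) as prev_status by host
| where prev_status="2" AND device_active="1"
| table _time host prev_status device_active
```

The key differences from the original attempt: streamstats needs `by host` so the previous value being compared comes from the same host, and the eval compares against the field streamstats actually created (prev_status) rather than a different name.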
Hi, I have JSON coming from CI with this template:

    {"source":"1","sourcetype":"json","event":{"type":"build","id":"061","durartion":"48","run_id":"1","paths":["value1",".value2","value3"]}}

The fields are listed in Splunk as: id, duration, sourcetype, paths{}, and I can list all the values. My issue is that I want to count paths{} (more than 11k values). I tried using mvcount:

    | eval totalpaths = mvcount(paths)      <- returns nothing
    | eval totalpaths = mvcount(paths{})    <- returns 1

Is there a way to return the total number of paths? How can I list all paths? I tried:

    | stats values(paths{}) as paths
    | stats count(eval(paths)) AS totalbazelpaths

which returns 378, while the actual value is above 11k. When expanding the paths{} field I can see all 11k paths. What am I doing wrong here? Thanks.
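For what it's worth, field names containing special characters such as {} must be wrapped in single quotes inside eval expressions, and stats values() deduplicates, which would explain 378 (distinct values) versus 11k (total values). A sketch, assuming the auto-extracted field name paths{} from the question:

```
| eval totalpaths = mvcount('paths{}')
| table totalpaths
```

If the count still comes up short, it may also be worth checking whether the large JSON events are being truncated or only partially extracted (e.g. TRUNCATE and the KV extraction limits in props.conf/limits.conf), since mvcount only counts what was actually extracted into the field.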
I am trying to extract _time from this log:

    Jul 28 12:00:49 104.128.100.1 420391: Jul 28 06:30:25.023: %Sample: Sample: cp :  QFP:0.0

but Splunk is extracting _time as 2022-07-28T12:00:49.000+05:30. I want it to extract the second timestamp in the log, i.e. Jul 28 06:30:25.023. I tried the following in props.conf:

    [sourcetype]
    TIME_PREFIX = ^\S{3}\s\d{1,2}\s[^\s]+\s[^\s]+\s[^\s]+\s
    TIME_FORMAT = %b %d %H:%M:%S.%3Q
    MAX_TIMESTAMP_LOOKAHEAD = 30
    TZ = UTC

but it is still not extracting the second timestamp. Can someone please help?
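Two things worth checking: the stanza name must be the event's actual sourcetype (not the literal word "sourcetype"), and the settings must live on the first full Splunk instance that parses the data (indexer or heavy forwarder) and only affect newly indexed events. For comparison, a hedged props.conf sketch that anchors TIME_PREFIX on the "420391: " sequence number immediately before the second timestamp, rather than counting whitespace-separated tokens (your_sourcetype is a placeholder):

```
[your_sourcetype]
TIME_PREFIX = \d+:\s
TIME_FORMAT = %b %d %H:%M:%S.%3Q
MAX_TIMESTAMP_LOOKAHEAD = 25
TZ = UTC
```

The regex \d+:\s first matches on "420391: " in the sample event (the earlier "12:00:49" never has a digit run followed by colon and whitespace), so timestamp recognition starts right at the second timestamp.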
I have newly installed the latest version of the Splunk Supporting Add-on for Microsoft Active Directory on Splunk ES. Firstly, no matter how many times I click the "+" sign to add a new domain, nothing happens. Even if I add details in the default section and click "Test connection", literally nothing happens. The last time I worked with this add-on was 4-5 years ago; even then its UI behavior was odd, but at least the connection test used to work, and now it seems to have become even worse. Despite so many versions, I don't know what improvements have actually been made. Can anyone please help me fix this?
Hi all, I know Splunk ODBC can be used on Windows or macOS, and I have tried it successfully. But on Linux, Splunk ODBC is not supported. If Tableau is installed on Linux, how can I use Splunk ODBC?
Hi, I have a multivalue field, numbers, with each of its values in the format of two numbers separated by a comma (for example 52,29). For all of these values, I want an if statement that does a comparison on both the first and second number and returns either "true" or "false".

Currently I have been using the foreach loop in multivalue mode. However, while debugging the error I am receiving, I found that the default template value <<ITEM>> appears to always return null instead of the values of numbers (isnotnull('<<ITEM>>') returns false). Shown below is how I am trying to extract the leftmost number using regex with replace and then check whether it is greater than 5. Is there something wrong with this search?

    | foreach mode=multivalue numbers
        [| eval results=if(tonumber(replace('<<ITEM>>'),  ",\d+",  "")) > 5, "true", "false")]

The search above produces an error. Thanks in advance.
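One possibility is that '<<ITEM>>' in single quotes is being treated as a field-name reference rather than the item's literal value; the pasted eval also has mismatched parentheses (replace closes before its regex arguments). If the goal is just a per-value true/false, mvmap (available since Splunk 8.0) may be simpler than foreach, since its expression can reference the field directly. A sketch assuming the numbers field from the question, testing only the first number:

```
| eval results = mvmap(numbers,
      if(tonumber(replace(numbers, ",\d+$", "")) > 5, "true", "false"))
```

Here replace strips the comma and everything after it, so "52,29" becomes 52, and the comparison runs once per value, producing a multivalue results field aligned with numbers.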
I run this query to extract all IP addresses from the events; there can be multiple IPs in one event:

    index=*
    | rex max_match=0 field=_raw "(?<ipaddr>\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})"
    | dedup ipaddr
    | table _time, ipaddr

My question is: how do I exclude private IPs from the result? Thanks!
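A sketch of one way: expand the multivalue field, then drop the RFC 1918 ranges (10/8, 172.16/12, 192.168/16) with the regex command. Loopback (127/8) is included here as an assumption and can be removed if not wanted.

```
index=*
| rex max_match=0 field=_raw "(?<ipaddr>\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})"
| mvexpand ipaddr
| regex ipaddr!="^(10\.|127\.|192\.168\.|172\.(1[6-9]|2\d|3[01])\.)"
| dedup ipaddr
| table _time, ipaddr
```

The mvexpand matters: without it, regex would drop an entire event if any one of its IPs matched the private ranges, instead of filtering IPs individually.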
Hello, does anyone have any idea why a dropdown filter would only show results for one-word field values? I need the dropdown to work with field values of more than one word.
Are there any best practices with respect to sending OCI GovCloud logs over to Splunk? We're primarily planning to get the Oracle API Gateway logs sent to Splunk. According to this documentation (https://docs.oracle.com/en/solutions/logs-stream-splunk/index.html#GUID-8D87CAA4-CD41-4E90-A333-5B04E23DBFAA), there appears to be a good solution, however...the Splunk add-on/plugin referenced in this document has been archived/deprecated. I'm wondering if the API Gateway API could be used in some manner to send the logs over to Splunk?
Hi. I have a classic dashboard using a bar chart driven by:

    | timechart span=15m count

I can pass the clicked bar to the drilldown dashboard with $click.value$, but any thoughts on how to use it there? Unlike 'earliest' and 'latest' it's just a single value, and the data does not have a 15-minute epoch value, so I think, at a high level, I'd need to 1. eval the _time column into 15-minute buckets and then search for those, and probably also 2. pass the global params as well to filter my results. I'm used to traditional SQL, where I could say 'WHERE time BETWEEN this AND that', but here I have to figure out how to match events whose _time falls within that 15-minute window, so I'm lost. Thank you for your thoughts!
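A sketch of one way to consume the clicked value in the drilldown search, assuming the drilldown sets a token named start_time from $click.value$ and that the value arrives as epoch seconds (if it arrives as a formatted time string instead, convert it with strptime first):

```
index=your_index
| where _time >= $start_time$ AND _time < $start_time$ + 900
```

Since each bar spans 15 minutes, the window is the clicked bucket's start plus 900 seconds. An alternative is to feed $click.value$ straight into the drilldown's earliest time token and compute latest from it with relative_time.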
I have a query for a dashboard dropdown which is not working. The populating search is:

    index=main reqUser="*" source="logs" | dedup reqUser

and the token is userReq. From the dropdown I select one value, which is then used in the search, e.g. reqUser="local \abc":

    index=main reqUser={userReq} | stats avg(reqUser) by source

but it's not showing any logs. I noticed the problem is the value local \abc, which the search only accepts as local \\abc. Is there any way I can rewrite this query?
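One common workaround (a sketch, assuming Simple XML; the token name userReqEsc is made up here, and the exact number of backslashes sometimes needs adjusting) is a <change> handler on the dropdown that doubles each backslash before the value reaches the search:

```xml
<input type="dropdown" token="userReq">
  <!-- populating search omitted -->
  <change>
    <!-- double each backslash so the search-language string parses correctly -->
    <eval token="userReqEsc">replace($value|s$, "\\\\", "\\\\\\\\")</eval>
  </change>
</input>
```

The panel search would then reference reqUser=$userReqEsc$ instead of the raw token.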
Hi all, I have created a panel showing ERRORS for the app in bar chart format. Below is my query; I have also made the drilldown value clickable:

    <query>index=eabc ns=blazepsfpublish CASE(ERROR) |rex field=_raw "(?&lt;!LogLevel=)ERROR(?&lt;Error_Message&gt;.*)" | eval Date=strftime(_time, "%Y-%m-%d") |dedup Error_Message |timechart span=1d count(Error_Message) by app_name</query>
    <option name="charting.axisLabelsX.majorLabelStyle.overflowMode">ellipsisNone</option>
    <option name="charting.axisLabelsX.majorLabelStyle.rotation">0</option>
    <option name="charting.axisTitleX.visibility">visible</option>
    <option name="charting.drilldown">all</option>
    <drilldown>
      <set token="show">true</set>
      <set token="selected_value">$click.value$</set>
    </drilldown>

Currently I only get the number of errors in bar chart format. I want it so that when I click on the bar chart, it shows the raw logs for that error. I have created a dependent panel with the query below, but it does not show raw logs:

    <panel depends="$show$">
      <table>
        <title>Errors</title>
        <search>
          <query>index=abc ns=blazepsfpublish CASE(ERROR)|rex field=_raw "(?&lt;!LogLevel=)ERROR(?&lt;Error_Message&gt;.*)" | eval Date=strftime(_time, "%Y-%m-%d") |dedup Error_Message |timechart span=1d count(Error_Message) by app_name $selected_value$</query>
          <earliest>$field1.earliest$</earliest>
          <latest>$field1.latest$</latest>
        </search>
        <option name="count">100</option>
      </table>
    </panel>

Can someone guide me here?
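A sketch of how the dependent panel could return raw events instead (assuming the drilldown sets selected_value to the clicked series name, e.g. via $click.name2$ for a chart click). The key point: to see raw logs, drop the timechart, which aggregates the events away, and use the clicked value as a filter rather than appending it after the aggregation.

```xml
<panel depends="$show$">
  <table>
    <title>Error raw events</title>
    <search>
      <query>index=eabc ns=blazepsfpublish CASE(ERROR) app_name="$selected_value$"
| rex field=_raw "(?&lt;!LogLevel=)ERROR(?&lt;Error_Message&gt;.*)"
| table _time host _raw</query>
      <earliest>$field1.earliest$</earliest>
      <latest>$field1.latest$</latest>
    </search>
    <option name="count">100</option>
  </table>
</panel>
```

Note the two pasted panels use different index names (eabc vs abc); whichever is correct, both searches should agree.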
Looking to create a chart that can separate results into groups by how often they appear in a time range. We're trying to measure attendance in our office and find out which employees are showing up 1, 2, or 3 times per week, to get a better understanding. Is this something that can be done?
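This is doable with two levels of aggregation. A sketch, assuming each event represents one appearance and carries an employee field (the index name badge_data is a placeholder): first count distinct days present per employee per week, then count how many employees fall into each bucket.

```
index=badge_data
| eval day=strftime(_time, "%Y-%m-%d")
| bin _time span=1w
| stats dc(day) as days_in_office by employee, _time
| chart count as employees over days_in_office
```

The result is one row per attendance frequency (1 day/week, 2 days/week, ...) with the number of employees in each group, which charts directly as a bar chart.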
Solved: How to seperate different Sourcetype logs from sin... - Splunk Community
Configure Unified Access Gateway System Settings (vmware.com)
Syslog Formats and Events (vmware.com)

I am trying to override the syslog sourcetype and created props.conf and transforms.conf, but it is not working. What am I doing wrong? Initially I got this error:

    Undocumented key used in transforms.conf; stanza='vmware:uag:admin' setting='DEST_KEY' key='MetaData:SourceType'

but I found a link here that helped solve that. Still, it is not working; I am not seeing the new sourcetypes when searching. I applied the settings on the heavy forwarder.

props.conf:

    [syslog::/var/log/%hostname%/syslog]
    TRANSFORMS-sourcetype = vmware:uag:admin, vmware:uag:audit, vmware:uag:esmanager

transforms.conf:

    [vmware:uag:admin]
    REGEX = :\d\d\s+\w{5}\w{4}\suag-admin\:(.+)\n
    FORMAT = sourcetype::vmware:uag:admin
    DEST_KEY = MetaData:Sourcetype

    [vmware:uag:audit]
    REGEX = :\d\d\s+\w{5}\w{4}\suag-audit\:(.+)\n
    FORMAT = sourcetype::vmware:uag:admin
    DEST_KEY = MetaData:Sourcetype

    [vmware:uag:esmanager]
    REGEX = :\d\d\s+\w{5}\w{4}\suag-esmanager\:(.+)\n
    FORMAT = sourcetype::vmware:uag:esmanager
    DEST_KEY = MetaData:Sourcetype
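For reference, a hedged sketch of how this kind of override is commonly written (assuming the incoming data arrives with sourcetype syslog; the simplified regexes are guesses to adapt). Two things stand out in the original: the props stanza must name the incoming sourcetype (or use a source:: prefix for a source path, without %hostname%-style variables), and the audit transform's FORMAT points at vmware:uag:admin instead of vmware:uag:audit.

```
# props.conf
[syslog]
TRANSFORMS-uag_sourcetype = vmware_uag_admin, vmware_uag_audit, vmware_uag_esmanager

# transforms.conf
[vmware_uag_admin]
REGEX = \suag-admin:
FORMAT = sourcetype::vmware:uag:admin
DEST_KEY = MetaData:Sourcetype

[vmware_uag_audit]
REGEX = \suag-audit:
FORMAT = sourcetype::vmware:uag:audit
DEST_KEY = MetaData:Sourcetype

[vmware_uag_esmanager]
REGEX = \suag-esmanager:
FORMAT = sourcetype::vmware:uag:esmanager
DEST_KEY = MetaData:Sourcetype
```

These are index-time transforms, so they only apply to data arriving after a restart of the parsing instance, not to events already indexed.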
Hey everyone, I am pretty unfamiliar with all of the functionality Splunk has to offer and am wondering whether Splunk has the ability to look for specific events and, if found, send them via an API call to a third-party ticketing system in JSON format. I see Splunk has its own APIs for retrieving data it has ingested, but I am looking to send that data elsewhere. Any help would be appreciated. Thanks.
For the Windows events/logs that end up in the storage buckets on Splunk Enterprise servers, is Splunk copying the original Windows event log files to its own storage, or is it just ingesting events from the forwarders? The reason I ask is that our CISO wants our security team to retain the original log files for fidelity in case we are ever audited or sued, and I feel like we can just set up Splunk to alert us if log files have been cleared. Additionally, if an insider threat were to pull a workstation offline and then clear the logs, we wouldn't have the original logs anyway. Does anyone know if there are regulations in any industries that require retaining the original log files?
I am working to develop a search that groups events whose source IPs match a single event's destination IP. Additionally, the group of events should occur within a span of, say, 5 seconds after the event of interest occurs (inclusive).

I have tried using transaction, but it only seems to group based on the same field. Coalesce does not work either: since the sourceip field is present for most or all events, the destip info never makes it into the coalesced field.

    "Initial search"
    | eval mySingleField=coalesce(sourceip, destip)
    | transaction mySingleField maxspan=5s
    | where eventcount > 1

I have also tried localize and map, and I am having trouble implementing them here too. As I understand localize, it only creates "an event region", a period of time in which consecutive events are separated. I'm having trouble understanding whether it actually passes the events, or only the event region/time, to the map command. I was hoping localize would limit the search space similarly to how maxspan does for transaction, as there are millions of events.

    "Initial search defining event region"
    | localize timeafter=5s
    | map search="search srcip starttimeu=$starttime$ endtimeu=$endtime$"
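One pattern worth trying (a sketch; note it duplicates each event once per IP, which can be expensive at this volume): give every event a join key that carries both its source and destination IPs, then transact on that key.

```
"Initial search"
| eval joinip=mvappend(sourceip, destip)
| mvexpand joinip
| transaction joinip maxspan=5s
| where eventcount > 1
```

Because the event of interest contributes its destip as a joinip value while later events contribute their sourceip, the two sides can land in the same transaction even though they match on different fields, which is exactly what coalesce could not achieve when sourceip was always present.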
I want to temporarily disable alerts on servers while they are being patched or put into maintenance mode. Is it possible to do this via PowerShell/REST?
We have a Splunk environment using indexer clustering (9.0.0), and we have one instance acting as the DS, CM, and LM. We are planning to remove the LM role from this instance and add it to a search peer. Is this possible?