Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Topics

Hello, does anyone have any idea why a dropdown filter would only show results for single-word field values? I need the dropdown to work with field values that contain more than one word.
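This usually happens because an unquoted token splits on the space. A minimal Simple XML sketch of one common fix: wrap the token value in quotes via valuePrefix/valueSuffix (the index, field, and token names here are placeholders, not from the original post):

    <input type="dropdown" token="my_filter">
      <label>Filter</label>
      <search>
        <query>index=my_index | stats count by my_field</query>
      </search>
      <fieldForLabel>my_field</fieldForLabel>
      <fieldForValue>my_field</fieldForValue>
      <!-- quote the value so multi-word results survive token substitution -->
      <valuePrefix>"</valuePrefix>
      <valueSuffix>"</valueSuffix>
    </input>

The panel search can then use my_field=$my_filter$ without extra quoting.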
Are there any best practices with respect to sending OCI GovCloud logs over to Splunk? We're primarily planning to get the Oracle API Gateway logs sent to Splunk. According to this documentation (https://docs.oracle.com/en/solutions/logs-stream-splunk/index.html#GUID-8D87CAA4-CD41-4E90-A333-5B04E23DBFAA), there appears to be a good solution; however, the Splunk add-on/plugin referenced in that document has been archived/deprecated. I'm wondering if the API Gateway API could be used in some manner to send the logs over to Splunk?
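Whatever forwards the logs, the usual landing point on the Splunk side is the HTTP Event Collector (HEC). A minimal inputs.conf sketch for receiving such logs (the token name, token value, index, and sourcetype are assumptions for illustration):

    # inputs.conf on the receiving Splunk instance
    [http://oci_gateway_logs]
    token = 00000000-0000-0000-0000-000000000000
    index = oci_logs
    sourcetype = oci:apigateway
    disabled = 0

An OCI Service Connector (or any client that can POST JSON) can then send events to https://<splunk-host>:8088/services/collector using that token.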
Hi. I have a classic dashboard using a bar chart with

    | timechart span=15m count

I can pass the clicked value to the drilldown dashboard with $click.value$, but any thoughts on how to use it? Unlike 'earliest' and 'latest' it's just a single value, and the data does not have a 15m epoch value, so I think (at a high level) I'd need to 1. eval the _time column to create 15m buckets and then search for those, but probably also 2. pass the global params as well to filter my results. I'm used to traditional SQL, where I could say 'WHERE time BETWEEN this AND that', but here I have to figure out how to match events whose _time falls inside the clicked 15m bucket, so I'm lost. Thank you for your thoughts!
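On a timechart, $click.value$ carries the start of the clicked bucket, so the bucket covers [click.value, click.value + 900s). A Simple XML sketch of one way to turn that into a time range; the target dashboard name, its time token, and the strptime format are assumptions (the exact string format of $click.value$ can vary):

    <drilldown>
      <!-- convert the clicked bucket start to epoch; adjust the format if needed -->
      <eval token="d_earliest">strptime($click.value|s$, "%Y-%m-%dT%H:%M:%S.%3N%z")</eval>
      <eval token="d_latest">strptime($click.value|s$, "%Y-%m-%dT%H:%M:%S.%3N%z") + 900</eval>
      <link target="_blank">/app/search/my_drilldown_dashboard?form.time_tok.earliest=$d_earliest$&amp;form.time_tok.latest=$d_latest$</link>
    </drilldown>

In the target dashboard, wire these two values into the search's <earliest> and <latest> tags; epoch values are accepted there, which gives the SQL-style BETWEEN behavior.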
I have a query for a dashboard dropdown which is not working. The dropdown is populated by

    index=main reqUser="*" source="logs" | dedup reqUser

and the token is userReq. From this I select one value, which is then used in the panel search, e.g. reqUser="local \abc":

    index=main reqUser=$userReq$ | stats avg(reqUser) by source

but it's not showing any logs. I noticed the problem is the value local \abc, which is only accepted as local \\abc. Is there any way I can rewrite this query?
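One common workaround is to populate the dropdown with a pre-escaped copy of the value, so the backslash survives token substitution. A sketch reusing the index and field above (the escUser field is introduced here for illustration, and backslash escaping in SPL is context-sensitive, so the replace may need tuning):

    <input type="dropdown" token="userReq">
      <search>
        <query>index=main source="logs" reqUser="*"
    | dedup reqUser
    | eval escUser=replace(reqUser, "\\\\", "\\\\\\\\")</query>
      </search>
      <fieldForLabel>reqUser</fieldForLabel>
      <!-- use the escaped copy as the token value -->
      <fieldForValue>escUser</fieldForValue>
    </input>

The panel search can then keep using reqUser="$userReq$" (note the quotes). Also, avg() on a string field like reqUser returns nothing useful; stats count by source is likely closer to the intent.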
Hi All, I have created one panel where I am showing ERRORS for the app in bar chart format. Below is my query; I have also made the drilldown value clickable:

    <query>index=eabc ns=blazepsfpublish CASE(ERROR) | rex field=_raw "(?&lt;!LogLevel=)ERROR(?&lt;Error_Message&gt;.*)" | eval Date=strftime(_time, "%Y-%m-%d") | dedup Error_Message | timechart span=1d count(Error_Message) by app_name</query>
    <option name="charting.axisLabelsX.majorLabelStyle.overflowMode">ellipsisNone</option>
    <option name="charting.axisLabelsX.majorLabelStyle.rotation">0</option>
    <option name="charting.axisTitleX.visibility">visible</option>
    <option name="charting.drilldown">all</option>
    <drilldown>
      <set token="show">true</set>
      <set token="selected_value">$click.value$</set>
    </drilldown>

Currently I am only getting the number of errors in bar chart format. I want clicking a bar to show the raw logs for that error. I have created a dependent panel with the query below, but it's not showing raw logs:

    <panel depends="$show$">
      <table>
        <title>Errors</title>
        <search>
          <query>index=abc ns=blazepsfpublish CASE(ERROR) | rex field=_raw "(?&lt;!LogLevel=)ERROR(?&lt;Error_Message&gt;.*)" | eval Date=strftime(_time, "%Y-%m-%d") | dedup Error_Message | timechart span=1d count(Error_Message) by app_name $selected_value$</query>
          <earliest>$field1.earliest$</earliest>
          <latest>$field1.latest$</latest>
        </search>
        <option name="count">100</option>
      </table>
    </panel>

Can someone guide me here?
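The dependent panel re-runs the timechart, so it can only ever return counts. To show raw events, the drilldown panel's search should drop the timechart and filter on the clicked series instead. Note also that on a "by app_name" timechart, the clicked series name arrives in $click.name2$, while $click.value$ carries the x-axis time. A sketch reusing the question's own index and rex:

    <drilldown>
      <set token="show">true</set>
      <!-- name2 holds the app_name of the clicked bar segment -->
      <set token="selected_value">$click.name2$</set>
    </drilldown>

    <panel depends="$show$">
      <table>
        <title>Raw errors for $selected_value$</title>
        <search>
          <query>index=eabc ns=blazepsfpublish CASE(ERROR) app_name="$selected_value$"
    | rex field=_raw "(?&lt;!LogLevel=)ERROR(?&lt;Error_Message&gt;.*)"
    | table _time app_name Error_Message _raw</query>
          <earliest>$field1.earliest$</earliest>
          <latest>$field1.latest$</latest>
        </search>
        <option name="count">100</option>
      </table>
    </panel>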
Looking to create a chart that can separate results into groups by how often they appear in a time range. We're trying to figure out attendance in our office and find out which employees are showing up 1, 2, or 3 times per week, to get a better understanding. Is this something that can be done?
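A sketch of one way to do this, assuming badge-swipe style events with an employee field (the index and field names are placeholders): count distinct days per employee per week, then group employees by that count.

    index=badge_swipes
    | eval day=strftime(_time, "%Y-%m-%d")
    | bin span=1w _time as week
    | stats dc(day) as days_in_office by week, employee
    | stats count as employees by week, days_in_office

The final rows say, for each week, how many employees came in 1 day, 2 days, and so on; drop the last stats to see the per-employee breakdown instead.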
Solved: How to seperate different Sourcetype logs from sin... - Splunk Community
Configure Unified Access Gateway System Settings (vmware.com)
Syslog Formats and Events (vmware.com)

Trying to override the syslog sourcetype, so I created props.conf & transforms.conf. It is not working. What am I doing wrong? Initially I was getting an error:

    Undocumented key used in transforms.conf; stanza='vmware:uag:admin' setting='DEST_KEY' key='MetaData:SourceType'

but I found a link here that helped solve that. Still not working, though. I am not searching on the HF; I set the settings on the HF.

props.conf:

    [syslog::/var/log/%hostname%/syslog]
    TRANSFORMS-sourcetype = vmware:uag:admin, vmware:uag:audit, vmware:uag:esmanager

transforms.conf:

    [vmware:uag:admin]
    REGEX = :\d\d\s+\w{5}\w{4}\suag-admin\:(.+)\n
    FORMAT = sourcetype::vmware:uag:admin
    DEST_KEY = MetaData:Sourcetype

    [vmware:uag:audit]
    REGEX = :\d\d\s+\w{5}\w{4}\suag-audit\:(.+)\n
    FORMAT = sourcetype::vmware:uag:admin
    DEST_KEY = MetaData:Sourcetype

    [vmware:uag:esmanager]
    REGEX = :\d\d\s+\w{5}\w{4}\suag-esmanager\:(.+)\n
    FORMAT = sourcetype::vmware:uag:esmanager
    DEST_KEY = MetaData:Sourcetype
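Two things stand out in the config above, so here is a corrected sketch. First, props.conf has no syslog:: stanza prefix; use a [source::...] path pattern (with a wildcard, since %hostname% is not expanded there) or a plain [syslog] sourcetype stanza. Second, the [vmware:uag:audit] transform's FORMAT points at vmware:uag:admin. The wildcarded path below is an assumption about the actual file layout:

    # props.conf
    [source::/var/log/*/syslog]
    TRANSFORMS-sourcetype = vmware:uag:admin, vmware:uag:audit, vmware:uag:esmanager

    # transforms.conf (admin and esmanager stanzas unchanged)
    [vmware:uag:audit]
    REGEX = :\d\d\s+\w{5}\w{4}\suag-audit\:(.+)\n
    FORMAT = sourcetype::vmware:uag:audit
    DEST_KEY = MetaData:Sourcetype

Also remember these settings only take effect at the first full parsing layer the data passes through (the HF here), and only for newly indexed events.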
Hey everyone, I am pretty unfamiliar with all of the functionality Splunk has to offer and am wondering if Splunk has the ability to look for specific events and, if found, send them via API to a third-party ticketing system in JSON format. I see Splunk has its own APIs for querying the data it has ingested, but I'm looking to send that data elsewhere. Any help would be appreciated. Thanks.
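Yes; the usual pattern is a scheduled alert with a webhook action (or a product-specific alert action add-on). A minimal savedsearches.conf sketch; the search, schedule, and ticketing URL are placeholders:

    # savedsearches.conf
    [Open ticket on app errors]
    search = index=app_logs log_level=ERROR
    enableSched = 1
    cron_schedule = */5 * * * *
    dispatch.earliest_time = -5m
    dispatch.latest_time = now
    counttype = number of events
    relation = greater than
    quantity = 0
    action.webhook = 1
    action.webhook.param.url = https://ticketing.example.com/api/tickets

The built-in webhook action POSTs the alert details as JSON; for a specific ticketing product (ServiceNow, Jira, etc.), a dedicated add-on usually gives more control over the payload.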
For the Windows events/logs that end up in the storage buckets on Splunk Enterprise servers, is Splunk copying the original Windows event log files to its own storage, or is it just ingesting events from the forwarders? The reason I ask is that our CISO wants our security team to retain the original log files for fidelity in case we ever get audited or sued, and I feel like we can just set up Splunk to alert us if log files have been cleared. Additionally, if an insider threat were to pull a workstation offline and then clear the logs, we wouldn't have the original logs anyway... Does anyone know if there are regulations in any industries that require the original log files?
I am working to develop a search that groups multiple events whose source IPs match a single event's destination IP. Additionally, the group of events would occur within a span of, say, 5s after the event of interest occurs (inclusive). I have tried using transaction, but it only seems to group on the same field. Coalesce does not work either, as the sourceip field is present for most or all events, so the destip info will not be included.

    "Initial search"
    | eval mySingleField=coalesce(sourceip, destip)
    | transaction mySingleField maxspan=5s
    | where eventcount > 1

I have tried using localize and map, and I am having trouble implementing them here too. As I understand localize, it only creates "an event region", which is a period of time in which consecutive events are separated. I'm having trouble understanding whether it actually passes the events, or only the event region/time, to the map function. I was hoping that localize would limit the search space similarly to how maxspan does for the transaction command, as there are millions of searches.

    "Initial Search defining event region"
    | localize timeafter=5s
    | map search="search srcip starttimeu=$starttime$ endtimeu=$endtime$"
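localize only emits time regions (starttime/endtime), not the original events, so the destip never reaches the mapped search. One workaround is to run map directly over the events of interest, carrying both the destip and the 5s window as fields. A sketch; the index and field names are assumptions, and since map runs one search per input row it should be capped with maxsearches:

    "Initial search for the events of interest"
    | eval st=floor(_time), et=floor(_time) + 5
    | map maxsearches=1000 search="search index=netfw sourceip=$destip$ earliest=$st$ latest=$et$"

Each spawned search returns the events in the 5 seconds after one event of interest whose sourceip equals that event's destip.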
I want to temporarily disable alerts on servers while they are being patched or put into maintenance mode. Is it possible to do this via PowerShell/REST?
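One way to do this is to flip the disabled flag on the saved search over the REST API; a PowerShell sketch, with the host, app, and alert names as placeholders (re-enable by posting disabled=0):

    # requires PowerShell 7+ for -SkipCertificateCheck; use credentials
    # with write access to the saved search
    $cred = Get-Credential
    Invoke-RestMethod -Method Post `
        -Uri "https://splunk.example.com:8089/servicesNS/nobody/search/saved/searches/My%20Alert" `
        -Body @{ disabled = 1 } `
        -Credential $cred `
        -SkipCertificateCheck

Looping this over a list of alert names handles a whole maintenance window; if the 401 challenge handshake gives trouble, build the basic Authorization header explicitly instead of using -Credential.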
We have a Splunk environment using indexer clustering (9.0.0), and one instance is acting as the DS, CM, and LM combined. We are planning to remove the LM role from that instance and move it onto a search peer instead. Is that possible?
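Mechanically, moving the license manager means installing the license files on the new instance and repointing every other instance at it. A server.conf sketch; the hostname is a placeholder, and in 9.x the newer manager_uri name is available alongside the legacy master_uri:

    # server.conf on every instance that should use the new license manager
    [license]
    manager_uri = https://new-lm.example.com:8089

That said, colocating the LM on a busy indexer is generally not the recommended topology, so it is worth checking the distributed deployment docs before committing to a search peer as the host.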
I have two hosts. I need to compare the field values between them. The field names are the same on both hosts.
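A sketch of one way to compare, with the host names, index, and field as placeholders: group by the field value and check which hosts it appears on.

    index=main host IN ("hostA", "hostB")
    | stats dc(host) as host_count, values(host) as hosts by my_field
    | eval status=if(host_count == 2, "on both hosts", "only on " . mvjoin(hosts, ","))

Values present on only one host then stand out immediately; swap stats for latest(my_field) by host if the goal is instead to compare the current value of the same field on each host.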
Hi, I need to add a role restriction search filter on a field which is only available in one index. My problem is that I am not sure of the proper way to force this restriction on only that index. If I add a restriction like this:

    "field_name"="field_value"

it works fine for the index containing the value, but the other indexes return nothing. If I add a restriction like this:

    ((NOT "field_name"=*) OR ("field_name"="field_value"))

the result seems wrong. Do you have an idea of the correct way to restrict on this field? Thanks, Regards, David
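A common pattern is to make the index itself part of the condition, so the filter only bites where the field exists (the index name below is a placeholder):

    (index!=restricted_index) OR (index=restricted_index "field_name"="field_value")

One caveat: role search filters are applied against indexed data, so this behaves reliably when field_name is an indexed field or indexed-time extraction; purely search-time fields may not filter as expected.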
How do I access the Splunk service on Ubuntu? Yes, I already enabled boot-start when I first installed it. Please help!
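A sketch of the usual checks, assuming a default install under /opt/splunk (the systemd unit name depends on how boot-start was enabled; Splunkd is common):

    # check/start the service
    sudo systemctl status Splunkd
    sudo systemctl start Splunkd

    # or use the Splunk CLI directly
    sudo /opt/splunk/bin/splunk status
    sudo /opt/splunk/bin/splunk start

Once running, Splunk Web defaults to http://<server-ip>:8000 and the management API to port 8089.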
Architecture: Distributed - the deployment server pushes changes to forwarders (1 heavy forwarder, 2 universal); clustered indexers with a cluster master. I made a change in an inputs.conf stanza to edit the IP address and hostname for one of the log sources. I tried to push that change to the forwarders by restarting the server class in the deployment server GUI. However, I am unable to see the logs come in from that IP and source name. Is there a step that I am missing?
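Two things worth checking: that the deployment server actually reloaded its server classes, and that the app is marked to restart splunkd on the clients, since inputs.conf changes only take effect after a restart or reload on the forwarder. A sketch; the class and app names are placeholders:

    # on the deployment server
    $SPLUNK_HOME/bin/splunk reload deploy-server

    # serverclass.conf
    [serverClass:my_forwarders:app:my_inputs_app]
    restartSplunkd = true

The Forwarder Management view (phone-home timestamps) will confirm whether the clients picked up the new app version.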
Hi all, I'm currently using the Modular REST API to pull data from a REST API which allows me to specify the earliest time for events through an argument in the URL. I'm using the dynamic token functionality to put a Unix timestamp into the URL, and all works well. My Python code in tokens.py just gets the current Unix time and subtracts 80 seconds from it. My interval is then set to 60 seconds, so in theory I shouldn't lose any data from the API. However, the REST API add-on seems to always issue the same request to the API. If I restart Splunk, it updates and the API call uses the correct time, but then it just keeps reusing that same time, even though the Python code should be updating it. Here's the Python code:

    import time

    def eightySecondsAgo():
        # return the Unix timestamp from 80 seconds ago, as a string
        unixEpochTimeNow = time.time()
        timeEightySecondsAgo = int(unixEpochTimeNow) - 80
        return str(timeEightySecondsAgo)

And my inputs.conf:

    [rest://Intercom_admin_events]
    activation_key = <redacted>
    endpoint = https://api.intercom.io/admins/activity_logs?created_at_after=$eightySecondsAgo$
    http_header_propertys = authorization=Bearer <redacted>,accept=application/json,content-type=application/json
    http_method = GET
    auth_type = none
    response_type = json
    streaming_request = 0
    verify = 0
    sourcetype = intercom.admin.events
    polling_interval = 60

It's like the dynamic token response is being cached or something? Really strange. Any ideas?
Hello, I am currently testing Splunk for our Cisco backbone network and I would like to detect two scenarios:

1.) When a power outage occurs, several power supply failures occur at the same time.
2.) When there is a fiber cut, multiple pseudowires go down at the same time.

How can I correlate these reasonably so that such a failure is detected immediately? Thanks a lot
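The common pattern for both cases is to bucket events into short windows and alert when the number of distinct affected devices crosses a threshold. A sketch for the power supply case; the index, sourcetype, message filter, and threshold are assumptions to adapt to your data:

    index=network sourcetype=cisco:syslog ("PWR" OR "Power supply failure")
    | bin span=1m _time
    | stats dc(host) as affected_devices, values(host) as hosts by _time
    | where affected_devices >= 3

The pseudowire scenario is the same search with the message filter swapped for the pseudowire-down syslog text; schedule each as an alert over the last few minutes for near-immediate detection.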
Hi, we're looking into building use cases for HashiCorp Vault. We've had a brief look at their app that is mentioned here: https://www.hashicorp.com/blog/splunk-app-for-monitoring-hashicorp-vault. The app seems to offer quite some insight into operational data from Vault. We're interested in use cases that could show potential abuse of Vault. Does anyone have any experience or hints, so we don't have to start from scratch? Thanks Chris
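Abuse-oriented use cases usually build on Vault's audit device log rather than its operational telemetry. A sketch of one such detection, an unusual volume of secret reads per identity; the index, sourcetype, field names, and threshold are assumptions that depend on how the audit log is ingested:

    index=vault sourcetype="hashicorp:vault:audit" type=response request.operation=read
    | bin span=15m _time
    | stats count as reads by _time, auth.display_name
    | where reads > 100

Variations on the same shape can flag root token usage, bursts of authentication failures, or a principal touching secret paths it has never accessed before.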
Hi, I have about 100 rules and I want to count the number of logs related to each rule. When I used "stats count" it counted the rules that have 1 or more logs, but didn't show the rules with zero hits. I tried importing a CSV file that contains all the rules and removing the rows containing rules with 1 or more hits. Moreover, I tried the suggestion here with no luck: Solved: Using Splunk to Find Unused Firewall Policies - Splunk Community. Any suggestion? Thanks
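A sketch of the usual zero-fill pattern, assuming a lookup file listing every rule in a field named rule (the index, lookup, and field names are placeholders): append the full rule list, treat missing counts as zero, and take the maximum per rule.

    index=firewall
    | stats count by rule
    | inputlookup append=true all_rules.csv
    | fillnull value=0 count
    | stats max(count) as count by rule
    | sort count

Rules with zero hits then show up with count=0 at the top of the list.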