I have installed the required apps to get the Splunk App for Windows Infrastructure to work, and inputs.conf is configured as follows:

###### OS Logs ######
[WinEventLog://Application]
disabled = 0
index = wineventlog
start_from = oldest
current_only = 0
checkpointInterval = 5
renderXml=true

[WinEventLog://Security]
disabled = 0
index = wineventlog
start_from = oldest
current_only = 0
evt_resolve_ad_obj = 1
checkpointInterval = 5
blacklist1 = EventCode="4662" Message="Object Type:(?!\s*groupPolicyContainer)"
blacklist2 = EventCode="566" Message="Object Type:(?!\s*groupPolicyContainer)"
renderXml=true

[WinEventLog://System]
disabled = 0
index = wineventlog
start_from = oldest
current_only = 0
checkpointInterval = 5
renderXml=true

If I search the wineventlog index I see events indexed from all desktops, but for some reason I can't get anything to show up in the Splunk App for Windows Infrastructure under Windows > Event Monitoring; all I get is "No results found". Any ideas why? I have re-run the "Build lookups" step and it makes no difference. Thanks
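One thing worth checking (an educated guess, not confirmed from the post): the Splunk App for Windows Infrastructure's extractions and lookups were built around the classic multi-line event format, and rendering events as XML can leave its dashboards empty even though the raw data is indexed. A sketch of the Application stanza with XML rendering turned off:

```
[WinEventLog://Application]
disabled = 0
index = wineventlog
start_from = oldest
current_only = 0
checkpointInterval = 5
# XML-rendered events may not match the app's classic-format extractions
renderXml = false
```

After changing inputs.conf on the forwarders and restarting them, newly indexed events should start populating Windows > Event Monitoring; events already indexed as XML will remain in that form.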
I want to run stats using two lookups: srcip.csv (fields: src_ip, subnetmaks) and dest.csv (fields: dest_ip, subnetmaks). The src_ip and dest_ip fields are intended to be used in the stats, e.g.:

index="myindex" [| inputlookup destip.csv] [| inputlookup srcip.csv] | stats values(src_ip) AS src_ip by dest_ip

Is there another way to do this? And what should I do if the CSV field names differ from my index fields, e.g. csv field src_ip vs. event field srcip, and csv field dest_ip vs. event field destip?
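If the goal is to keep only events whose addresses appear in both CSVs and then aggregate, one sketch (assuming the CSV field names match the event field names) is:

```
index="myindex"
    [| inputlookup srcip.csv  | fields src_ip  ]
    [| inputlookup destip.csv | fields dest_ip ]
| stats values(src_ip) AS src_ip BY dest_ip
```

When a CSV field name differs from the event field (src_ip in the file, srcip in the index), rename it inside the subsearch so that format emits the event's field name, e.g. [| inputlookup srcip.csv | rename src_ip AS srcip | fields srcip].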
I'm trying to parse some URLs to obtain the base path, but I can't get it to work. From the text between GET/POST and HTTP, I want to return the base URL as in the examples below:

GET /gw/api/aaa/v1/ HTTP - should return /gw/api/aaa/v1
GET /gw/api/abc/v3 HTTP - should return /gw/api/abc/v3
POST /gw/api/cba/ HTTP - should return /gw/api/cba
POST /gw/transactions/swaggers/v2 HTTP - should return /gw/transactions/swaggers/v2
POST /gw/api/swaggers/v1/asd?dssa HTTP - should return /gw/api/swaggers/v1/asd
POST /api/swaggers/ HTTP - should return /api/swaggers
GET /api/cashAccountOpenings/v3/sadsa-123312-1312 HTTP - should return /api/cashAccountOpenings/v3

I added these examples to regex101.com to make it easier to find a solution: https://regex101.com/r/oLXtw8/1/
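One rex sketch that covers all but the last example (the last one also drops a trailing identifier segment, which implies a different rule): capture the path non-greedily after the method, then allow an optional trailing slash and query string before HTTP:

```
... | rex "(?:GET|POST)\s+(?<baseurl>/\S+?)/?(?:\?\S*)?\s+HTTP"
```

The non-greedy capture stops as early as possible, so the optional /? absorbs a trailing slash and (?:\?\S*)? absorbs ?dssa-style query strings before they reach baseurl.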
Hi, one of our clients is interested in ingesting OpenTelemetry data into Splunk so we can monitor application performance, and I can see that Splunk APM is normally used for this task. While going through the Splunk Distribution of OpenTelemetry Collector documentation, a few questions came up:

1) Do I need to buy any additional modules in order to ingest OpenTelemetry data into Splunk? We have a licensed Splunk Enterprise deployment.
2) Is it possible to ingest OpenTelemetry data without using Splunk Observability Cloud?
3) If it is possible, could you please provide a link to the relevant documentation?

Thanks in advance.
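On question 2: the Splunk Distribution of the OpenTelemetry Collector can send logs and metrics straight to Splunk Enterprise over HTTP Event Collector via its splunk_hec exporter, with no Observability Cloud involved; traces, however, have no APM UI in Splunk Enterprise. A minimal collector config sketch (endpoint, token, and index are placeholders for your environment):

```yaml
receivers:
  otlp:
    protocols:
      grpc:

exporters:
  splunk_hec:
    token: "00000000-0000-0000-0000-000000000000"
    endpoint: "https://your-splunk-server:8088/services/collector"
    index: "otel"

service:
  pipelines:
    logs:
      receivers: [otlp]
      exporters: [splunk_hec]
```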
Hi Team, my current table is:

column    row1       row2
status    failure    success

My requirement (all cases in a single query):
1. If row1 is "failure" and row2 does not exist, row1 should be renamed to failure.
2. If row1 is "success" and row2 does not exist, row1 should be renamed to success.
3. If row1 is "success" and row2 is "failure", row1 should be renamed to success and row2 to failure.
4. If row1 is "failure" and row2 is "success", row1 should be renamed to failure and row2 to success.

Can anyone help with this? Regards, Madhu R
We're currently using Splunk ES and would like to grab a notable event's drilldown link from the ES Incident Review page without having to manually copy it. The closest solution I've come across is building the URL automatically from a `notable` search, piecing together the earliest/latest times and the drilldown search, but I feel like there might be a more elegant solution out there.
From a SOC perspective, which health checks are important to perform? I understand the basic checks in the Splunk Monitoring Console, but are there better or more relevant checks a SOC should be focusing on?
Hello Splunk Community, I have a query that lists accountNumber, targetAccountNumber, eventType, and eventTime. The query works fine, but it displays rows where eventTime is present and eventType is empty:

accountNumber   targetAccountNumber   eventType     eventTime
123456          789123                apple         09/02/2020:12:00
                                      banana        09/02/2020:13:00
111111          763333                mango         09/03/2020:15:00
                                      watermelon    09/03/2020:16:00
212121          999999                Texas         09/04/2020:18:00
                                                    09/04/2020:19:00
                                                    09/04/2020:20:00
                                                    09/04/2020:21:00

I do not want the empty eventTime values displayed; an eventTime should only appear when there is an eventType for that time. How do I exclude the empty eventTime values? My query is:

index=newyork data | stats values(eventType) as eventTypes, values(eventTime) as "Event Time" by accountNumber, targetAccountNumber

Please help.
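values() aggregates each field independently, which is why eventTime values without a matching eventType show up on their own. If the stray rows come from events that carry an eventTime but no eventType, one sketch is to drop those events before the stats (field names as in the post):

```
index=newyork data
| where isnotnull(eventType) AND eventType!=""
| stats values(eventType) AS eventTypes values(eventTime) AS "Event Time" BY accountNumber targetAccountNumber
```

Note that values() still won't pair each type with its time; if that pairing matters, build it first with | eval pair=eventType." ".eventTime and aggregate values(pair) instead.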
Trying to install the Tripwire app and getting the following errors:

error:335 - Traceback (most recent call last):
  File "/opt/splunk/lib/python3.7/site-packages/cherrypy/_cprequest.py", line 628, in respond
    self._do_respond(path_info)
  File "/opt/splunk/lib/python3.7/site-packages/cherrypy/_cprequest.py", line 687, in _do_respond
    response.body = self.handler()
  File "/opt/splunk/lib/python3.7/site-packages/cherrypy/lib/encoding.py", line 219, in __call__
    self.body = self.oldhandler(*args, **kwargs)
  File "/opt/splunk/lib/python3.7/site-packages/splunk/appserver/mrsparkle/lib/htmlinjectiontoolfactory.py", line 75, in wrapper
    resp = handler(*args, **kwargs)
  File "/opt/splunk/lib/python3.7/site-packages/cherrypy/_cpdispatch.py", line 54, in __call__
    return self.callable(*self.args, **self.kwargs)
  File "/opt/splunk/lib/python3.7/site-packages/splunk/appserver/mrsparkle/lib/routes.py", line 383, in default
    return route.target(self, **kw)
  File "</opt/splunk/lib/python3.7/site-packages/decorator.py:decorator-gen-486>", line 2, in listEntities
  File "/opt/splunk/lib/python3.7/site-packages/splunk/appserver/mrsparkle/lib/decorators.py", line 40, in rundecs
    return fn(*a, **kw)
  File "</opt/splunk/lib/python3.7/site-packages/decorator.py:decorator-gen-484>", line 2, in listEntities
  File "/opt/splunk/lib/python3.7/site-packages/splunk/appserver/mrsparkle/lib/decorators.py", line 118, in check
    return fn(self, *a, **kw)
  File "</opt/splunk/lib/python3.7/site-packages/decorator.py:decorator-gen-483>", line 2, in listEntities
  File "/opt/splunk/lib/python3.7/site-packages/splunk/appserver/mrsparkle/lib/decorators.py", line 166, in validate_ip
    return fn(self, *a, **kw)
  File "</opt/splunk/lib/python3.7/site-packages/decorator.py:decorator-gen-482>", line 2, in listEntities
  File "/opt/splunk/lib/python3.7/site-packages/splunk/appserver/mrsparkle/lib/decorators.py", line 245, in preform_sso_check
    return fn(self, *a, **kw)
  File "</opt/splunk/lib/python3.7/site-packages/decorator.py:decorator-gen-481>", line 2, in listEntities
  File "/opt/splunk/lib/python3.7/site-packages/splunk/appserver/mrsparkle/lib/decorators.py", line 284, in check_login
    return fn(self, *a, **kw)
  File "</opt/splunk/lib/python3.7/site-packages/decorator.py:decorator-gen-480>", line 2, in listEntities
  File "/opt/splunk/lib/python3.7/site-packages/splunk/appserver/mrsparkle/lib/decorators.py", line 304, in handle_exceptions
    return fn(self, *a, **kw)
  File "</opt/splunk/lib/python3.7/site-packages/decorator.py:decorator-gen-475>", line 2, in listEntities
  File "/opt/splunk/lib/python3.7/site-packages/splunk/appserver/mrsparkle/lib/decorators.py", line 359, in apply_cache_headers
    response = fn(self, *a, **kw)
  File "/opt/splunk/lib/python3.7/site-packages/splunk/appserver/mrsparkle/controllers/admin.py", line 1738, in listEntities
    app_name = eai_acl.get('app')
AttributeError: 'NoneType' object has no attribute 'get'
Hello, I am trying to create an Analytics query that shows the top 5 business transactions (BTs) by error count, e.g.:

TransactionName  3

I'm not great at building SQL or Analytics queries. :) Thanks for any help, much appreciated. Tom
Please share an SPL query that shows whether a certain Windows event code from the Security logs is being ingested into Splunk. Thanks in advance.
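A sketch (the index, sourcetype, and event code 4625 are placeholders; adjust them to your deployment, e.g. XmlWinEventLog if the inputs use renderXml):

```
index=wineventlog sourcetype="WinEventLog:Security" EventCode=4625
| stats count earliest(_time) AS first_seen latest(_time) AS last_seen BY host
```

Zero results over a recent time window would indicate that the event code is not being ingested from any host.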
Hello All, I have a field like the one below in a JSON file (the paste of the last element's job_names array looks garbled; the intended structure appears to be a single array of job names):

{"results_appcodes": [
  {"count": 2, "app_code": "XYZ", "group": "", "instance": "PQ1", "job_names": ["XYZ#cmd#johntest1", "XYZ#cmd#remetest"]},
  {"count": 2, "app_code": "ZZZ", "group": "ABC1234", "instance": "PQ1", "job_names": ["ZZZ#ADM#cmd#pac", "ZZZ#cmd#GET_APP_CODE"]},
  {"count": 1, "app_code": "ZZZ", "group": "", "instance": "PQ1", "job_names": ["ZZZ#cmd#mila3098"]},
  {"count": 192, "app_code": "GKU", "group": "CAD45678", "instance": "PQ1", "job_names": ["ZZZ#cmd#test123", "ZZZ#cmd#test890", "ZZZ#cmd#gola456", "ZZZ#cmd#test9990"]}
]}

I'm using the query below to break down the JSON. The fields count, app_code, group, and instance come out as expected, but I'm unable to break down job_names, which holds a list of jobs. I'm looking for a query that also extracts the job names:

<<mysearch>> | spath input=results | rename unique_appcodes{}.* as * | eval x = mvzip(count,mvzip(app_code,mvzip(group,mvzip(instance,mvzip(instance,job_names))))) | mvexpand x | eval x = split(x, ",") | eval job_count=mvindex(x,0), app_code=mvindex(x,1), group=mvindex(x,2), instance=mvindex(x,3), job_names=mvindex(x,4) | table app_code job_count group instance job_names

Expected output:

app_code  count  group    instance  job_names
XYZ       2               PQ1       XYZ#cmd#johntest1,XYZ#cmd#remetest
ZZZ       2      ABC1234  PQ2       ZZZ#ADM#cmd#pac,ZZZ#cmd#GET_APP_CODE
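Assuming each element really holds one job_names array (the paste looks garbled around the last element), one sketch avoids the mvzip chain entirely: expand each element of results_appcodes{} as its own JSON blob, re-parse it, then join the job names:

```
<<mysearch>>
| spath path=results_appcodes{} output=entry
| mvexpand entry
| spath input=entry
| rename job_names{} AS job_names
| eval job_names=mvjoin(job_names, ",")
| table app_code count group instance job_names
```

Because each row is re-parsed after mvexpand, count, group, and instance stay correctly paired with their own job_names, which the mvzip/mvindex approach struggles with once job_names is multivalued.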
When I move to the latest version of the Okta App (2.25.x), knowledge bundle replication breaks with the following errors:

"Failed to untar the bundle="D:\Program Files\Splunk\var\run\searchpeers\<name>.bundle."
Unable to distribute to peer named <indexer> at uri https://<indexer>:8089 because replication was unsuccessful. ReplicationStatus: Failed - Failure info: failed_because_BUNDLE_DATA_TRANSMIT_FAILURE

I've noticed that prior to the installation knowledge bundle replication happened rather infrequently, but after the install it's constant. My search bundles are all around 500MB. Here are a few things I've tried:

- Increasing the bundle timeouts.
- Checking for large lookup files in the Okta App -- they don't exist.
- Removing all knowledge bundle files on the indexers and the search head and letting them regenerate.
- Completely removing the app from the search head and starting fresh.

When I remove the Okta app the error goes away. Okta and Splunk support have been little help so far. Any suggestions on what could be causing this would be appreciated. Thanks.
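If the app ships files that inflate the bundle past what the peers will accept, one workaround (a sketch only; the app directory name and file pattern are assumptions, so first inspect the .bundle tarball under var/run to see what the Okta app actually contributes) is to exclude those files from bundle replication in distsearch.conf on the search head:

```
[replicationBlacklist]
okta_big_files = apps[/\\]Splunk_TA_okta[/\\]lookups[/\\]*.csv
```

Blacklisted files stay out of the bundle, so any searches depending on them at the indexer tier would no longer work; that trade-off needs checking against how the app is used.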
I want to run a multisearch across various systems, looking for src_ip and comparing the src_ip from those systems against a threat intel feed lookup (KV store). When I run the SPL, I get the following error: "Multisearch subsearches might only contain purely streaming operations (subsearch 1 contains a non-streaming command)." This is the lookup: lookup TF-ip-add indicator as src_ip OUTPUTNEW
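lookup itself is a streaming command, so the offending non-streaming command is most likely something else inside subsearch 1 (stats, sort, dedup, transaction, and similar are not streaming). One way around the restriction: keep each multisearch branch purely streaming and apply the lookup once, after the merge (the index and sourcetype names here are placeholders):

```
| multisearch
    [ search index=firewall sourcetype=fw_traffic ]
    [ search index=proxy sourcetype=web_proxy ]
| lookup TF-ip-add indicator AS src_ip OUTPUTNEW indicator
| where isnotnull(indicator)
```

The final where keeps only events whose src_ip matched an indicator in the KV store.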
1. How do I make my search string more readable? It only works if it's all on one line; I tried escaping newlines but Splunk complains.
2. My query creates tables OK. I want an alert to fire when one of the table entries is zero, so I added:

blah blah | sort + "Appointments Processed" | where 'Appointments Processed' = 0

and saved it as an alert, but when the value is zero it doesn't send me an email alert.
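Two hedged suggestions. On 1: the search bar and dashboard query elements accept literal newlines before each pipe, so no escaping is needed; line continuation is only a problem in contexts like .conf files, where a saved search must stay on one line. On 2: if "Appointments Processed" was turned into a formatted string earlier in the pipeline (e.g. by a lookup or string concatenation), comparing it to the number 0 can silently fail, so coerce it first:

```
... | where tonumber('Appointments Processed') == 0
```

Also worth checking that the alert's trigger condition is set to "Number of Results is greater than 0", since a where that matches produces the rows the alert counts.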
I am using the timewrap function to compare data for a particular day of the week with the same day over the last 4 weeks, i.e. comparing the current Wednesday to the last 4 Wednesdays. The chart uses the current week's Wednesday as the x-axis timescale, and when I hover over the series for previous Wednesdays, the tooltip shows the current Wednesday's date labelled "1 week before", "2 weeks before", and so on. Is this a Splunk limitation, or is there a way to make each previous Wednesday's series show its exact date?
Greetings to all, I'm having an issue with the Microsoft Teams TA. After setting up the subscription, I'm getting this error:

message from "/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/TA_MS_Teams/bin/teams_subscription.py" subscription = create_subscription(helper, access_token, webhook_url, graph_base_url)
ERROR ExecProcessor - message from "/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/TA_MS_Teams/bin/teams_subscription.py" ERROR 400 Client Error: Bad Request for url: https://graph.microsoft.com/v1.0/subscriptions

If I test the subscription with curl on the HF where the TA is installed, I get this answer once I have the token:

{"@odata.context":"https://graph.microsoft.com/v1.0/$metadata#subscriptions","value":[]}
Need help with KV store status. Why do I get "This health check item is not applicable" in the Monitoring Console on my ES search head, even though I have many KV store collections?
Hello, I have a dashboard with this search (the if $host$="yes" part is what I'm trying to achieve, not valid SPL):

sourcetype="Perfmon:Windows Time Service" counter="Computed Time Offset" if $host$="yes" [| inputlookup windows_hosts_srv_2016.csv | fields host | format] | timechart max(Value) by host span=5m

I want to add a checkbox so that the default search is:

sourcetype="Perfmon:Windows Time Service" counter="Computed Time Offset" host=* | timechart max(Value) by host span=5m

and, if the user ticks the checkbox, it searches with the first (lookup-filtered) query instead. How can this be done? Thanks.
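One common Simple XML pattern (a sketch; the token names are made up) is to let the checkbox set a token that holds either host=* or the lookup subsearch, and reference that token in a single query:

```xml
<form>
  <init>
    <set token="host_filter">host=*</set>
  </init>
  <fieldset>
    <input type="checkbox" token="use_lookup">
      <label>Host filter</label>
      <choice value="yes">Only 2016 hosts</choice>
      <change>
        <condition value="yes">
          <set token="host_filter">[| inputlookup windows_hosts_srv_2016.csv | fields host | format]</set>
        </condition>
        <condition>
          <set token="host_filter">host=*</set>
        </condition>
      </change>
    </input>
  </fieldset>
  <row>
    <panel>
      <chart>
        <search>
          <query>sourcetype="Perfmon:Windows Time Service" counter="Computed Time Offset" $host_filter$ | timechart max(Value) by host span=5m</query>
        </search>
      </chart>
    </panel>
  </row>
</form>
```

The second, value-less condition handles unticking the box, and the init block sets the default before the first change event fires.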
Hello, I'm developing a Splunk app on a DEV platform. In this app I do field extractions and log-file parsing, so the props.conf and transforms.conf files are modified regularly. For the app to work correctly in production, do I also have to update props.conf and transforms.conf on my forwarder, on my indexer, or on both? Thanks a lot.
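As a rule of thumb: index-time settings (line breaking, timestamping, TRANSFORMS-) must live on the first full Splunk instance that parses the data — the indexer, or a heavy forwarder if one sits in front of it — while search-time settings (EXTRACT-, REPORT-, lookups) belong on the search head. Universal forwarders do not parse, so they normally need neither. A sketch (the sourcetype name and settings are illustrative only):

```
# props.conf on the indexer (or heavy forwarder, if one parses the data first)
# -- index-time settings
[my:sourcetype]
LINE_BREAKER = ([\r\n]+)
TIME_PREFIX = ^
TRANSFORMS-drop_debug = drop_debug_events

# props.conf on the search head
# -- search-time settings
[my:sourcetype]
EXTRACT-user = user=(?<user>\S+)
```

Deploying the whole app to both tiers is a safe default, since each tier only applies the settings relevant to it.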