All Topics



Hi everyone, I have a requirement: I am creating incidents from Splunk. Below is my search query:

index=abc ns=blazepsfpublish ("NullPointerException" OR "IllegalStateException" OR "RuntimeException" OR "NumberFormatException" OR "NoSuchMethodException" OR "ClassCastException" OR "ParseException" OR "InvocationTargetException" OR "OutOfMemoryError")
| rex "message=(?<ExceptionMessage>[^\n]+)"
| eval _time = strftime(_time, "%Y-%m-%d %H:%M:%S.%3N")
| cluster showcount=t t=0.9
| table app_name, ExceptionMessage, cluster_count, _time, environment, pod_name, ns
| dedup ExceptionMessage, pod_name
| rename app_name as APP_NAME, _time as Time, environment as Environment, pod_name as Pod_Name, cluster_count as Count

While creating the incident through Sahara I am putting this in my $result.table$, but I am getting an incident like this:

actual_reporter_group: GL
origin_source: splunk
monitor_source: Splunk COE
pipeline_source: Sahara
packet_id: a72f20ac-51b6-42b2-96de-e99b00a0daa4
time_stamp: 2021-03-31T06:30:36.0316552Z
sahara_severity: Minor
enriched_workgroups: GL
incident_key: item.source=Splunk COE;item.ticketingKey=Splunk-SAHARA-Forwarder-Alert-Action :: Incident Testing Alert :: E3 :: splunk ::

Can someone guide me on why I am not getting the proper data? What should I enter in uniqueID to get the proper data? Thanks in advance.
I want to index data with the 1st line as the header, and index the data from the second row onward as separate events:

CONTAINER ID,IMAGE,COMMAND,CREATED,STATUS,PORTS,NAMES
8c0e092b6815,thomsch98/kafdrop:latest,"/usr/local/bin/mvn-…",3 days ago,Up 3 days,PAServices_kafdrop.1,yen4hgju18kkfgq9bvud7e1w8
35fe9af4efa0,thomsch98/kafdrop:latest,"/usr/local/bin/mvn-…",3 days ago,Exited (137) 3 days ago,PAServices_kafdrop.1,x2r2tuosozdd9uzfnq7ejdi70
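Outside Splunk, the intended parse (first line as header, every later row as its own event) can be sketched with Python's standard csv module; the sample rows are copied from the post, and the variable names are illustrative:

```python
import csv
import io

# Docker-ps-style CSV from the post: first line is the header,
# each following line is one event keyed by those header fields.
raw = """CONTAINER ID,IMAGE,COMMAND,CREATED,STATUS,PORTS,NAMES
8c0e092b6815,thomsch98/kafdrop:latest,"/usr/local/bin/mvn-…",3 days ago,Up 3 days,PAServices_kafdrop.1,yen4hgju18kkfgq9bvud7e1w8
35fe9af4efa0,thomsch98/kafdrop:latest,"/usr/local/bin/mvn-…",3 days ago,Exited (137) 3 days ago,PAServices_kafdrop.1,x2r2tuosozdd9uzfnq7ejdi70"""

# DictReader consumes the header line and maps each data row to it.
events = list(csv.DictReader(io.StringIO(raw)))
print(len(events))                    # 2
print(events[0]["CONTAINER ID"])      # 8c0e092b6815
```

In Splunk itself, this shape is usually handled by a csv-style sourcetype with INDEXED_EXTRACTIONS = csv in props.conf, which treats the first line as the header; worth verifying against the props.conf docs for your version.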
Hi, guys. I have a big problem here. I'm using rex to get IP addresses:

| rex max_match=0 "(?P<ip0>((?:[0-9]{1,3}\.){3}[0-9]{1,3}.[0-9]{1,9}))"

My data looks like this:

field1: 255.255.255.255/1 255.255.255.255/2 255.255.255.255/3 255.255.255.255/4

I want to get something like this:

field1: 255.255.255.255/1
field2: 255.255.255.255/2
field3: 255.255.255.255/3
field4: 255.255.255.255/4

How can I do this? Please help.
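The fan-out the poster describes can be sketched in Python: find all matches (the equivalent of rex max_match=0), then spread them into numbered fields. Note the sketch escapes the CIDR separator as a literal slash, since the unescaped dot in the original pattern matches any character; the field names are the poster's own:

```python
import re

# All IP/CIDR tokens from the multivalue source field.
text = "255.255.255.255/1 255.255.255.255/2 255.255.255.255/3 255.255.255.255/4"
matches = re.findall(r"(?:[0-9]{1,3}\.){3}[0-9]{1,3}/[0-9]{1,9}", text)

# Fan the matches out into field1, field2, ... one value each.
fields = {f"field{i}": m for i, m in enumerate(matches, start=1)}
print(fields["field4"])   # 255.255.255.255/4
```

In SPL, a comparable move would be keeping the multivalue ip0 and pulling individual values with eval plus mvindex(ip0, 0), mvindex(ip0, 1), and so on; treat that as a direction to test, not a confirmed answer.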
Hi, I am ingesting WatchGuard Firebox events into my Splunk Enterprise instance, but I only get the firewall traffic logs; I need its internal logs to check its health, etc. Does anyone have an idea and can advise how I can ingest the WatchGuard Firebox internal logs into my Splunk instance? Thank you in advance.
Hello all. I am trying to find the average by closed_month, but I want the average duration to include events from previous months. So the average for Feb should include Jan + Feb, and the average for March should include Jan + Feb + Mar. I figured out how to get the average for each month, but I don't know how to include the previous months' durations along with the current month's.

Sample table of data:

Case opened closed closed_month duration
aaa  Jan-01 Jan-31 Jan 30
bbb  Feb-10 Feb-26 Feb 16
ccc  Feb-13 Feb-28 Feb 15
ddd  Feb-14 Feb-28 Feb 14
eee  Feb-17 Mar-01 Mar 11
fff  Feb-24 Mar-13 Mar 17
ggg  Mar-03 Mar-11 Mar 8
hhh  Mar-11 Mar-16 Mar 5
iii  Mar-22 Mar-24 Mar 2

Avg Jan = (30) = 30
Avg Feb = (30+16+15+14)/4 = 18.8
Avg Mar = (30+16+15+14+11+17+8+5+2)/9 = 13.1

The desired result is a column chart with 3 columns, one for each closed month, with the values 30, 18.8, and 13.1 respectively.
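The running average the poster wants can be sketched in Python using the sample table above: each month's figure averages every duration closed in that month or any earlier month.

```python
# (case, closed_month, duration) rows from the post's sample table.
rows = [
    ("aaa", "Jan", 30), ("bbb", "Feb", 16), ("ccc", "Feb", 15), ("ddd", "Feb", 14),
    ("eee", "Mar", 11), ("fff", "Mar", 17), ("ggg", "Mar", 8),
    ("hhh", "Mar", 5), ("iii", "Mar", 2),
]
order = ["Jan", "Feb", "Mar"]

seen, result = [], {}
for month in order:
    # Accumulate this month's durations on top of all earlier months'.
    seen += [d for _, m, d in rows if m == month]
    result[month] = round(sum(seen) / len(seen), 1)

print(result)  # {'Jan': 30.0, 'Feb': 18.8, 'Mar': 13.1}
```

In SPL, one direction worth testing for this cumulative shape is streamstats (e.g. streamstats avg(duration) over events sorted by close time, then keeping the last value per closed_month); that is an approach to verify rather than a drop-in answer.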
Using the extract function, I can arrive at the columns below. I need to compare the values and come up with new fields like r1, r2, r3 that say whether each pair is the same or not. I'm thinking of using the eval function with if statements to compare the two values, but I'm not sure how to do it in a way that applies to all columns whose titles begin with "q" and "a". I was thinking of using a foreach loop, but it seems that foreach has very specific use cases that don't apply to mine. The dilemma is that I need to do this dynamically, because it's possible that other rows will have data reaching up to q5... q10... etc. Is there a specific command for what I want to do?
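The dynamic pairwise comparison can be sketched in Python: for every qN column, find its aN partner and emit rN as "same" or "different". The field names (q1, a1, r1) follow the post; the row dict is invented sample data standing in for one Splunk result row:

```python
# One result row with dynamically numbered q/a pairs.
row = {"q1": "x", "a1": "x", "q2": "y", "a2": "z", "q3": "7", "a3": "7"}

# Compare each qN against its matching aN, however many pairs exist.
for name in sorted(k for k in row if k.startswith("q")):
    n = name[1:]  # numeric suffix shared by the pair
    row[f"r{n}"] = "same" if row.get(f"a{n}") == row[name] else "different"

print(row["r1"], row["r2"], row["r3"])  # same different same
```

In SPL, foreach does support exactly this via its wildcard token, roughly: foreach q* [eval r<<MATCHSTR>> = if('q<<MATCHSTR>>' == 'a<<MATCHSTR>>', "same", "different")] — check the foreach documentation for your version before relying on it.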
Dear users, I need some insight here on consolidating logs into one instance. I have multiple Splunk instances hosted on different servers, and our distributed application logs its data to those servers. Now I would like to get all these logs from the different Splunk instances into one single instance, so that I can establish end-to-end monitoring and generate reports/dashboards.
In transforms.conf I can use DELIMS to extract fields in a fixed format. My question is: if one of the fields is variable, how can we resolve that? Thanks, Michael
Hi, my current query for my Splunk dashboard is:

........| eval ErrorMsg=_raw | stats count by Application, ErrorMsg | sort -count | table count, Application, ErrorMsg

My table looks like this:

count Application ErrorMsg
5   abc  {"severity" : "ERROR", "exception" : "xyz abc asd......."........"time" : "12:00:00" <there are multiple key-value pairs with data in multiple lines>........}
10  abc  {"severity" : "ERROR", "exception" : "xyz abc asd......."........."time" : "12:01:00" <there are multiple key-value pairs with data in multiple lines>........}

How can I get a table like this:

15  abc  "exception" : "xyz abc asd
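The roll-up the poster is after can be sketched in Python: parse each ErrorMsg as JSON, key on the "exception" value alone (ignoring the per-event "time"), and sum the counts. The rows are abbreviated stand-ins for the table in the post:

```python
import json
from collections import defaultdict

# (count, Application, ErrorMsg) rows, abbreviated from the post's table.
rows = [
    (5,  "abc", json.dumps({"severity": "ERROR", "exception": "xyz abc asd", "time": "12:00:00"})),
    (10, "abc", json.dumps({"severity": "ERROR", "exception": "xyz abc asd", "time": "12:01:00"})),
]

# Group by (Application, exception) only, summing the counts.
totals = defaultdict(int)
for count, app, msg in rows:
    exc = json.loads(msg)["exception"]
    totals[(app, exc)] += count

print(dict(totals))  # {('abc', 'xyz abc asd'): 15}
```

In SPL the equivalent direction would likely be extracting the exception key with spath (e.g. spath input=ErrorMsg path=exception) and then stats sum(count) by Application, exception — a sketch to verify, not a confirmed query.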
Hello guys, below is my initial event, and I want to break each event at the start of this line. I tried various attributes in props.conf but had no luck breaking the event there. I am using:

LINE_BREAKER = ^\*{22}\n\w+\s\w+\s\w+\sstart\n\Start\stime\:\s\d{14}
TIME_PREFIX = ^\*{22}\n\w+\s\w+\s\w+\sstart\n\Start\stime\:\s
TIME_FORMAT = %Y%m%d%H%M%S

The event begins:

**********************
Windows PowerShell transcript start
Start time: 20210223060505

Please suggest what I did wrong in the props above.
Hi, I have a data source that lists phone calls. Each call record lists a set of values in defined fields. The key information I'm interested in is a field called Phone_Number and a field called Result. There are about 6 valid values for Result, which I wish to remap as follows:

A,B = Good_Result
C,D,E = Bad_Result

I want to list the Phone_Numbers with the percentage of bad calls for each:

Phone_Number  % Bad_Result
800123455     80
800444666     77
800781711     23
800372728     4
800312711     2
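The remap-and-percentage step can be sketched in Python: Result codes A/B count as Good_Result, C/D/E as Bad_Result, and each number gets its percentage of bad calls. The call list below is invented sample data:

```python
from collections import Counter

# (Phone_Number, Result) pairs; invented sample calls.
calls = [("800123455", "C"), ("800123455", "D"), ("800123455", "C"),
         ("800123455", "A"), ("800444666", "B"), ("800444666", "E")]

total, bad = Counter(), Counter()
for number, result in calls:
    total[number] += 1
    bad[number] += result in ("C", "D", "E")   # bool adds as 0/1

# Percentage of bad calls per number.
pct_bad = {n: round(100 * bad[n] / total[n]) for n in total}
print(pct_bad)  # {'800123455': 75, '800444666': 50}
```

The SPL equivalent would likely pair eval with case() for the remap, then stats with a percentage eval, along the lines of stats count(eval(Result_Class="Bad_Result")) as bad, count as total by Phone_Number — again, a direction to test.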
Hi all, I recently upgraded Splunk_TA_New_Relic to v2.2.0 on Splunk 8.0.7. Version 2.1.0 worked fine, but after the upgrade to 2.2.0 I started getting a lot of Python errors. Has anyone seen this before or know a solution? I'm going to try setting the Python version to 2.7, but I'm wondering if there is something else we need to do to get the new version up and running. Here's a sample of the Python error:

2021-03-30 13:18:42,644 ERROR The script at path=/opt/splunk/etc/apps/Splunk_TA_New_Relic/bin/Splunk_TA_New_Relic_rh_new_relic_insights.py has thrown an exception=Traceback (most recent call last):
  File "/opt/splunk/bin/runScript.py", line 82, in <module>
    exec(open(REAL_SCRIPT_NAME).read())
  File "<string>", line 4, in <module>
  File "/opt/splunk/etc/apps/Splunk_TA_New_Relic/bin/splunk_ta_new_relic/aob_py3/splunktaucclib/rest_handler/endpoint/validator.py", line 8, in <module>
    from past.builtins import basestring
  File "/opt/splunk/etc/apps/Splunk_TA_New_Relic/bin/splunk_ta_new_relic/aob_py3/past/__init__.py", line 88, in <module>
    from past.translation import install_hooks as autotranslate
  File "/opt/splunk/etc/apps/Splunk_TA_New_Relic/bin/splunk_ta_new_relic/aob_py3/past/translation/__init__.py", line 42, in <module>
    from lib2to3.refactor import RefactoringTool
  File "/opt/splunk/etc/apps/Splunk_TA_New_Relic/bin/splunk_ta_new_relic/aob_py3/lib2to3/refactor.py", line 25, in <module>
    from .fixer_util import find_root
  File "/opt/splunk/etc/apps/Splunk_TA_New_Relic/bin/splunk_ta_new_relic/aob_py3/lib2to3/fixer_util.py", line 7, in <module>
    from .pygram import python_symbols as syms
  File "/opt/splunk/etc/apps/Splunk_TA_New_Relic/bin/splunk_ta_new_relic/aob_py3/lib2to3/pygram.py", line 32, in <module>
    python_grammar = driver.load_packaged_grammar("lib2to3", _GRAMMAR_FILE)
  File "/opt/splunk/etc/apps/Splunk_TA_New_Relic/bin/splunk_ta_new_relic/aob_py3/lib2to3/pgen2/driver.py", line 156, in load_packaged_grammar
    return load_grammar(grammar_source)
  File "/opt/splunk/etc/apps/Splunk_TA_New_Relic/bin/splunk_ta_new_relic/aob_py3/lib2to3/pgen2/driver.py", line 131, in load_grammar
    g.load(gp)
  File "/opt/splunk/etc/apps/Splunk_TA_New_Relic/bin/splunk_ta_new_relic/aob_py3/lib2to3/pgen2/grammar.py", line 108, in load
    d = pickle.load(f)
AttributeError: 'collections.OrderedDict' object has no attribute 'append'

Thanks in advance.
Hi, I want to use the predict command against my login logs to see if there's any anomalous behaviour, user by user. How can I run predict so that it produces a prediction line per user? Here's the search so far:

| from datamodel:"Authentication"."Successful_Authentication"
| search sourcetype=mysourcetype
| timechart span=2h count(action) by user

I want to adjust it to fit the MLTK Numeric Outliers search:

| inputlookup logins.csv
| predict logins as prediction algorithm=LLP future_timespan=150 holdback=0
| where prediction!="" AND logins!=""
| eval residual = prediction - logins
| streamstats window=72 current=true median("residual") as median
| eval absDev=(abs('residual'-median))
| streamstats window=72 current=true median(absDev) as medianAbsDev
| eval lowerBound=(median-medianAbsDev*exact(9)), upperBound=(median+medianAbsDev*exact(9))
| eval isOutlier=if('residual' < lowerBound OR 'residual' > upperBound, 1, 0)
Hi, I'm trying to remove the 100s from the y-axis labels. Using Chrome and inspecting the element, the CSS noted below works; when I use it in the Simple XML, it doesn't.

<html>
<style>
#trellis g.highcharts-yaxis-labels &gt; text {
  visibility: hidden !important;
  display: none !important;
}
</style>
</html>
I am currently testing the Cisco Security Suite dashboards using data collected through the Splunk Add-on for Cisco WSA, and I have noticed that some of the searches are not using any of the configuration in props.conf/transforms.conf; when I run the same search under Search & Reporting, however, all the fields are extracted/evaluated. I have tried changing the scope of all the knowledge objects to global through Splunk Web, as well as in the corresponding metadata/local.meta files (using export = system, and making sure that all users have read or write permissions), but with no success. To narrow down the issue, I would like to make every knowledge object and configuration available to all apps. Why are the permissions not working as intended? I am using Splunk Enterprise version 7.3.6 hosted on a RHEL server.
Hi, we have an API that we are starting to use to send data to HEC. When I place

[httpServer]
crossOriginSharingHeaders = "http://<FQDN>:<PORT>"

in server.conf under /etc/system/local, I get an error upon restarting Splunk:

Checking prerequisites...
  Checking http port [8000]: open
  Checking mgmt port [8089]: open
  Checking appserver port [127.0.0.1:8065]: open
  Checking kvstore port [8191]: open
Traceback (most recent call last):
  File "/opt/splunk/lib/python3.7/site-packages/splunk/clilib/cli.py", line 39, in <module>
    from splunk.rcUtils import makeRestCall, CliArgError, NoEndpointError, InvalidStatusCodeError
  File "/opt/splunk/lib/python3.7/site-packages/splunk/rcUtils.py", line 17, in <module>
    from splunk.search import dispatch, getJob, listJobs
  File "/opt/splunk/lib/python3.7/site-packages/splunk/search/__init__.py", line 2238, in <module>
    TEST_NAMESPACE = splunk.getDefault('namespace')
  File "/opt/splunk/lib/python3.7/site-packages/splunk/__init__.py", line 79, in getDefault
    setDefault()
  File "/opt/splunk/lib/python3.7/site-packages/splunk/__init__.py", line 66, in setDefault
    getLocalServerInfo()
  File "/opt/splunk/lib/python3.7/site-packages/splunk/__init__.py", line 36, in getLocalServerInfo
    mergeHostPath(hostpath, True)
  File "/opt/splunk/lib/python3.7/site-packages/splunk/__init__.py", line 136, in mergeHostPath
    setDefault('port', int(port))
ValueError: invalid literal for int() with base 10: '4200""\nhttps://127.0.0.1:8089'

What would prevent Splunk from starting? Thanks in advance, ~John
Is there a limit to the number of conditions we can use in a case() statement? I've reached a point where my ORs and ANDs are no longer being highlighted syntactically, and neither are my parentheses highlighted with their corresponding opening/closing counterparts when I move the cursor next to one. Thank you!
Hello! I have multiple events that have the same field values, but not necessarily in the same order. I want to grab the earliest time for the most recent field value in consecutive order. For instance, my events might look like this for user 1:

2021-03-30 13:23:42  User: 1 Chooses To Go To Room #4
2021-03-30 13:23:22  User: 1 Chooses To Go To Room #4
2021-03-30 13:23:05  User: 1 Chooses To Go To Room #4
2021-03-30 13:22:47  User: 1 Chooses To Go To Room #4
2021-03-30 13:22:33  User: 1 Leaves Room #12
2021-03-30 13:22:19  User: 1 Chooses To Go To Room #12
2021-03-30 13:22:09  User: 1 Chooses To Go To Room #12
2021-03-30 13:21:58  User: 1 Leaves Room #4
2021-03-30 13:21:43  User: 1 Chooses To Go To Room #4

In this case, I am trying to grab the values from the fourth event (timestamp 2021-03-30 13:22:47), since it is the last consecutive event with the most recent field value (room number). Currently, my results grab the last event, even though it is not consecutive. My query looks like this:

index=INDEX host=HOSTNAME sourcetype=SOURCE
| rex field=_raw "User:\s(?<user_id>\d+)\sLeaves\sRoom\s\#(?<room_id>\d+)"
| rex field=_raw "User:\s(?<user_id>\d+)\sChooses\sTo\sGo\sTo\sRoom\s\#(?<room_id>\d+)"
| eval action=if(like(_raw, "%Chooses%"), "Choose", null())
| where isnotnull(action)
| eventstats latest(room_id) as latest_room by user_id
| streamstats count as count_value by room_id reset_on_change=true
| where room_id=latest_room
| stats earliest(room_id) as room_id earliest(_time) as chosen_time by user_id

How might I rewrite this to only get the last consecutive event with the most recent field value?
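The "earliest time of the trailing run" logic can be sketched in Python: walk the user's "Chooses" events newest-first, stop at the first room change, and keep the oldest event of that unbroken run. Times and rooms mirror the post's example:

```python
# (time, room) pairs, newest first, "Chooses" events only, from the post.
events = [
    ("13:23:42", 4), ("13:23:22", 4), ("13:23:05", 4), ("13:22:47", 4),
    ("13:22:19", 12), ("13:22:09", 12), ("13:21:43", 4),
]

latest_room = events[0][1]   # most recent room value
run_start = events[0]
for ev in events:
    if ev[1] != latest_room:
        break                # run of consecutive latest-room events ends here
    run_start = ev           # oldest event seen so far within the run

print(run_start)  # ('13:22:47', 4)
```

In SPL terms, this suggests filtering to the streamstats run that contains the newest event (e.g. sorting by -_time and keeping rows until reset_on_change fires) before the final stats; treat that as a direction to experiment with rather than a verified rewrite.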
What are some recommended approaches to Splunk Enterprise / ES user provisioning in the corporate world? How do you assign user accounts and roles for Splunk / ES?
Hello, can someone tell me how to get the last Sunday of each month? For example, the 31st is the last Sunday of Jan 2021, the 28th is the last Sunday of Feb 2021, and the 28th is the last Sunday of Mar 2021. Thank you.
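The calculation itself can be sketched in Python with the standard calendar module: monthcalendar returns the month as weeks (Monday through Sunday), and the last week whose Sunday slot is non-zero holds the last Sunday of the month.

```python
import calendar

def last_sunday(year: int, month: int) -> int:
    """Day of month of the last Sunday (e.g. 31 for Jan 2021)."""
    weeks = calendar.monthcalendar(year, month)
    # Weeks run Mon..Sun by default; 0 marks days outside the month.
    return next(w[calendar.SUNDAY] for w in reversed(weeks) if w[calendar.SUNDAY])

print([last_sunday(2021, m) for m in (1, 2, 3)])  # [31, 28, 28]
```

In SPL, one approach worth testing is snapping the last day of the month back to Sunday with relative_time, e.g. taking @mon+1mon-1d for the month's end and then applying a "@w0" snap; verify the snap behaviour against the time-modifier docs for your version.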