I have a multi-site indexer cluster with 3 sites and 2 indexers in each site. RF and SF are set to 3:

RF = origin:1, total:3
SF = origin:1, total:3

However, I am getting numerous errors like these:

missing enough suitable candidates to create replicated copy in order to meet replication policy. Missing={site3:1}
missing enough suitable candidates to create replicated copy in order to meet replication policy. Missing={site2:1}
missing enough suitable candidates to create replicated copy in order to meet replication policy. Missing={site1:1}

I suspect this might be due to an incorrect RF/SF. Can anyone please help confirm?
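For reference, a minimal sketch of how such a multisite policy is usually declared in server.conf on the cluster master (the site name and available_sites list here are illustrative assumptions, not the poster's actual config):

# server.conf on the cluster master (sketch; values are assumptions)
[general]
site = site1

[clustering]
mode = master
multisite = true
available_sites = site1,site2,site3
# origin:1 = at least one copy in the bucket's origin site
# total:3  = three copies overall across all sites
site_replication_factor = origin:1,total:3
site_search_factor = origin:1,total:3

With origin:1,total:3 and only two peers per site, at least one copy of every bucket must land outside its origin site, so a Missing={siteN:1} message typically points at a site that currently has no peer able to accept that copy.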
I have commonly seen deployments with a separate partition for hot/warm data. However, I am keen to know: if I am using SmartStore and the Splunk homePath (that is, the hot/warm bucket directory) is on the same file system as the Splunk software installation, would that cause any issue, or would it be against Splunk's recommendations?
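For context, a minimal sketch of the kind of SmartStore layout being asked about (the volume name, S3 bucket, and index name are hypothetical placeholders):

# indexes.conf (sketch; names and paths are assumptions)
[volume:remote_store]
storageType = remote
path = s3://my-smartstore-bucket

[main]
# hot/warm cache under $SPLUNK_DB, i.e. on the same file system
# as $SPLUNK_HOME in a default single-disk install
homePath = $SPLUNK_DB/main/db
coldPath = $SPLUNK_DB/main/colddb
thawedPath = $SPLUNK_DB/main/thaweddb
remotePath = volume:remote_store/$_index_name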
Hi All, I'm wondering whether Automated Root Cause Analysis is supported in an on-premises environment. Where can I find a list of features that are supported on-prem? Thanks
Has anyone created or written a regex for it yet? Thanks
I'm having an issue with a search of mine. I've been trying to organize the matrix so that it will be ready for my pivot and then eventually a dashboard visual, but there are three columns that seem to be troublesome. It seems as though my eval command is only working with one of the start_DateNo values and returning results for only one instance (see the screenshot below). Is there an order of operations that I'm missing in my formula, or is there a better command to get the data to what I want? In addition, it seems like my "slaName" isn't being reflected accurately either. Below I have a snippet of the error, and then a row/column matrix of the goal I'm ultimately trying to get the data to.

Error: (screenshot not reproduced)

Goal:

key | team_name | start_DateNo | start_weekNo | start_yearNo | slaName | UNIQUE_SLA_Count
ADVANA-104 | ADVANA | 2020-6-11 | 24 | 20 | DSDE Pending Approval SLA | ADVANA-104 / 24 / 20 / DSDE Pending Approval SLA
ADVANA-104 | ADVANA | 2020-6-11 | 24 | 20 | DSDE Ready to Start SLA | ADVANA-104 / 24 / 20 / DSDE Ready to Start SLA
ADVANA-104 | ADVANA | 2021-5-14 | 19 | 21 | DSDE In Progress SLA | ADVANA-104 / 19 / 21 / DSDE In Progress SLA

Any help would be much appreciated; I've been going back and forth for a few hours now trying to get this to where I need it.

For editing purposes, here is the SPL from the picture above:

index=jira sourcetype="jira:sla:json" OR sourcetype="jira:issues:json"
| rex field=startDate "(?P<start_DateNo>\d+-\d+-\d+)"
| rex field=startDate "(?P<start_TimeNo>\d+:\d+:\d+)"
| eval start_weekNo=strftime(strptime(start_DateNo,"%Y-%m-%d"),"%V")
| eval start_yearNo=strftime(strptime(start_DateNo,"%Y-%m-%d"),"%y")
| eval key=coalesce(key,issueKey)
| stats values(team_name) as team_name values(start_DateNo) as start_DateNo values(start_weekNo) as start_weekNo values(start_yearNo) as start_yearNo values(slaName) as slaName values(fields.status.name) as fields.status.name by key
| mvexpand slaName
| mvexpand start_DateNo
| mvexpand start_weekNo
| mvexpand start_yearNo
| where team_name="ADVANA"
| where key="ADVANA-104"
| strcat key " / " start_weekNo " / " start_yearNo " / " slaName UNIQUE_SLA_Count
| search UNIQUE_SLA_Count="ADVANA-104 / 19 / 20 / DSDE Pending Approval SLA "

Thank you!
I need to use federated search, which does not support search-time lookups at this time (Splunk 8.2.2.1). I came across the Splunk doc on adding fields at ingest (index) time based on an ingest-time lookup: https://docs.splunk.com/Documentation/Splunk/8.2.3/Data/IngestLookups

What I am trying to do: during event ingestion, look up the value of the field "application", match it against the CSV file shown below, and add the fields APP and COMP based on the application value. E.g., if an incoming event has application=Linux, add an APP field with value 9001 and a COMP field with value 8001. But it does not work. Please help. Here are the files I created, following the documentation:

more /opt/splunk/etc/system/lookups/APP_COMP.csv
application,APP,COMP
Linux,9001,8001
Console,9002,8002
Windows,9003,8003

more /opt/splunk/etc/system/local/props.conf
[access_combine_wcookie]
TRANSFORMS = Active_Events

more /opt/splunk/etc/system/local/transforms.conf
[Active_Events]
INGEST_EVAL = APPCOMP=lookup("APP_COMP.csv", json_object("application", application), json_array("APP", "COMP"))

more /opt/splunk/etc/system/local/fields.conf
[APP]
INDEXED = True

[COMP]
INDEXED = True
I have had a SplunkBase app for a few years and noticed the install count has decreased (maybe it was reset at one time?); however, my download count has continued increasing. What explains this? For what it's worth, my app has more than one version (i.e., v1, v2).
Does anyone know where I can download the v21.1.1.31776 Java agent?
I have nested events that look like this in Splunk:

container_id: 13243d84e63d8d5b56c5
container_name: /ecs-stg-compute-instances-226-ur-2-c499f4ac
log: {"module": "ur.uhg", "functions": ["unlock_user_processing"], "session-id": "XUHWnDAAkR3AwrsXxtL339z9rEf-l", "email": "xxx@gmail.com", "user-id": 3, "user-account-id": 3, "start-time": "2021-11-08T19:59:36.711483", "end-time": null, "callback-function": "calculate_metrics", "emails-processed": 316, "emails-left-to-process": 0, "images-processed": 316, "iterations": 5, "iteration-times": [56.61728, 162.878587, 43.512794, 24.918005, 0.954233], "event": "chained_functions() called.", "level": "debug", "timestamp": "2021-11-08T20:04:25.905376Z"}
source: stdout

The 'log' value is treated as a string even though it is a JSON object. How can I parse the "log" value into key/value pairs?
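One common approach, sketched here with a placeholder index and sourcetype (assumptions, since the original search is not shown): spath can be pointed at a field containing JSON and will extract its keys as fields.

index=my_index sourcetype=my_sourcetype
| spath input=log

After this, keys such as module, session-id, level, and timestamp from the JSON should be available as search-time fields.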
Hello. Has anyone done this yet: delivering IBM z/OS RACF, ACF2, & Top Secret user and Db2 access data? I know these are logged as SMF records, and that SMF records are identified by type numbers. Has anyone been able to move these logs into Splunk? I am looking for the easiest way to get this data into Splunk Enterprise. If you have any ideas, or even a little knowledge, it's welcome; any tips and hints are gratefully received. Thanks
Has anyone sent logs from BMC AMI Defender to Splunk? I would like to know. Thanks
I have been using the Microsoft Azure add-on for Splunk to ingest Azure sign-in logs for over a year, and today I see the following error:

2021-11-08 16:31:12,318 ERROR pid=4614 tid=MainThread file=base_modinput.py:log_error:309 | Traceback (most recent call last):
  File "/opt/splunk/etc/apps/TA-MS-AAD/bin/ta_ms_aad/aob_py3/splunklib/binding.py", line 1262, in request
    raise HTTPError(response)
splunklib.binding.HTTPError: HTTP 500 Internal Server Error -- b'{"messages":[{"type":"ERROR","text":"Unexpected error \\"<class \'splunktaucclib.rest_handler.error.RestError\'>\\" from python handler: \\"REST Error [400]: Bad Request -- HTTP 400 Bad Request -- int() argument must be a string, a bytes-like object or a number, not \'NoneType\'\\". See splunkd.log for more details."}]}'

Is anyone else experiencing this issue?
We have a relatively small set of devices that each emit in the vicinity of a million events daily. Each device has a unique ID (serial #) which is included in its events. What would be an efficient method of collecting a list of unique IDs?

index=abc | stats count by ID

index=abc | stats values(ID) as IDs | mvexpand IDs

index=abc | fields ID | dedup ID

Anything else?
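One more candidate, offered as a hedged sketch: if ID happens to be an indexed field (an assumption; fields extracted only at search time will not work here), tstats can enumerate it without reading raw events, which is typically much faster at this volume:

| tstats count where index=abc by ID

If ID is only extracted at search time, this returns nothing, and "stats count by ID" is usually the cheapest of the raw-event options above.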
I am trying to find out if anyone has done this by now: getting the following SMF record types into Splunk:

SMF 110
SMF 100-102
SMF 120
SMF 115-116
SMF 92
SMF 70-79
SMF 80
SMF 30
SMF 118-119

and others.
My event returns the following:

1@test.com/test/2_0" xmlns:d4p1="http://www.w3.org/1999/xlink"> <eb:Description xml:lang="en">test document ref 000000000000.rtf

but I want to strip out the following string and replace it with a colon and a space:

@test.com/test/2_0" xmlns:d4p1="http://www.w3.org/1999/xlink"> <eb:Description xml:lang="en">

Ideal result:

1: test document ref 000000000000.rtf

Is there a way of doing this?
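A minimal SPL sketch of one way to do this with the replace() eval function (the field name _raw is an assumption; dots are escaped for the regex, embedded quotes use \", and the pattern assumes the string occurs once per event):

| eval cleaned=replace(_raw, "@test\.com/test/2_0\" xmlns:d4p1=\"http://www\.w3\.org/1999/xlink\"> <eb:Description xml:lang=\"en\">", ": ")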
Hi,

I have the below search, which works out the successes, failures, success_rate, failure_rate, and total. However, I would like to add a field to work out the number of minutes the failure rate is above a certain threshold, for example a 20% failure rate, but I am unsure how to do that:

index="main" source="C:\\inetpub\\logs\\LogFiles\\*"
| eval Time = (time_taken/1000)
| eval status=case(Time>20,"TimeOut",(sc_status!=200),"HTTP_Error",true(),"Success")
| stats sum(Time) as sum_sec,max(Time) as max_sec,count by status,sc_status,host,_time
| chart sum(count) by host,status
| addcoltotals labelfield=host label="(TOTAL)"
| addtotals fieldname=total
| eval successes=(total-(TimeOut+HTTP_Error))
| eval failures=(TimeOut+HTTP_Error)
| eval success_rate=round((successes/total)*100,2)
| eval failure_rate=round((failures/total)*100,2)
| table successes failures success_rate failure_rate total

Any help would be greatly appreciated.

Thanks,

Joe
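One possible approach, sketched under assumptions (a 20% threshold, one-minute buckets, and the same status logic as the search above): bucket events per minute, compute a per-minute failure rate, then count the minutes over the threshold.

index="main" source="C:\\inetpub\\logs\\LogFiles\\*"
| eval Time=(time_taken/1000)
| eval status=case(Time>20,"TimeOut",sc_status!=200,"HTTP_Error",true(),"Success")
| bin _time span=1m
| stats count as total count(eval(status!="Success")) as failures by host,_time
| eval failure_rate=round((failures/total)*100,2)
| stats count(eval(failure_rate>20)) as minutes_above_threshold by host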
Hello! I have a lookup table that looks like the following:

host | timestamp
host1 | 10:33
host2 | 4:24

What I would like to do is "iterate" through the lookup table, using the host field for the host and the timestamp field in the search. Does anyone have any opinions/thoughts?
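A hedged sketch of one way this is often attempted, using inputlookup to read the table and map to run an inner search per row (the lookup file name mylookup.csv and the inner search are illustrative assumptions; map substitutes each row's field values into the $...$ tokens):

| inputlookup mylookup.csv
| map maxsearches=10 search="search index=my_index host=$host$ \"$timestamp$\""

Note that map launches one search per row, so it can get expensive; for plain filtering by host, enriching the events with the lookup command is usually cheaper.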
What happened to the ES Sandbox? I can no longer find it to sign up for it.
Hi, I am using the Splunk REST API (Splunk 8.1.1) with a script to update an existing dashboard. When I try to send the characters ; & > < I get a 400 Bad Request error saying "...is not supported by this handler".

curl --location --request POST 'https://localhost:8089/servicesNS/nobody/myapp/data/ui/views/mydashboard' \
--header 'Authorization: Basic YWRtaW46cGFzc3dvcmQ=' \
--header 'Content-Type: text/html' \
--data-raw 'eai:data=<dashboard><label>My Report</label><description>My characters are: ; & > < and more...</description></dashboard>'

How can I send these characters so I can display them on my dashboard? Thanks in advance!
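A hedged sketch of one likely fix, assuming the error comes from the unencoded payload: let curl URL-encode the form value with --data-urlencode, and XML-escape the markup-significant characters (& < >) inside the XML itself, since eai:data must still be well-formed XML. The Content-Type header is omitted so curl sends its default form encoding:

curl --location --request POST 'https://localhost:8089/servicesNS/nobody/myapp/data/ui/views/mydashboard' \
--header 'Authorization: Basic YWRtaW46cGFzc3dvcmQ=' \
--data-urlencode 'eai:data=<dashboard><label>My Report</label><description>My characters are: ; &amp; &gt; &lt; and more...</description></dashboard>'

With --data-urlencode, everything after the first '=' is percent-encoded by curl before sending, so characters like ; and & no longer break the form body.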