All Topics
index=epaas_epaas2_idx ns=xyz365 (app_name="abc" OR app_name="xyz" OR app_name="lmn" OR app_name="deg") method!=GET (process=start OR (process=end AND (status="500" OR status="429" OR status="506"))) NOT("C360-GraphiQL-Postman") NOT("C360-GraphiQL-UI") NOT(MATCHBOX) NOT(TEST)
| bucket span=h _time
| eval app_name = replace(app_name, "-a", "")
| eval app_name = replace(app_name, "-b", "")
| stats count(eval(process="start")) as total count(eval(process="end")) as error by _time app_name
| eval rate=round((1-(error/total))*100, 4)
| xyseries _time app_name rate error
| sort _time app_name error

Question: I want to generate a chart in which error and rate overlap each other. When I apply Trellis, they do not overlap; instead, a separate chart is generated for each measure. I am looking for something like what is shown in the image above.
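A hedged suggestion, since the dashboard XML is not shown here: Trellis splits a result set into one panel per series by design, so overlapping two measures is normally done with a chart overlay instead of Trellis. A sketch in Simple XML, assuming the search already yields error and rate columns; the `...` stands for the search above:

```xml
<chart>
  <search>
    <query>... | xyseries _time app_name rate error</query>
  </search>
  <!-- draw "rate" as a line overlaid on the "error" columns, on a second y-axis -->
  <option name="charting.chart">column</option>
  <option name="charting.chart.overlayFields">rate</option>
  <option name="charting.axisY2.enabled">1</option>
</chart>
```

With overlayFields set, the named series is drawn as a line on top of the base chart rather than as its own panel.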
Hello awesome community! I got help from here once before, so I will try again. I have two indexes, Index A and Index B.
Fields:
Index A: id
Index B: pid, address
I want to retrieve the top 10 ids by count from Index A and then join these 10 with Index B to retrieve the address info. This is my search (simplified) to retrieve the top 10 ids from Index A:
index=A | stats count by id | sort -count | head 10
Index B is huge, so I only want to search it for the top 10 ids found by the Index A search above. How can I use the results of that search in a search against Index B? Many thanks
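A common pattern for this is a subsearch: the inner search runs first and its results become a filter for the outer search. A sketch, assuming id in Index A corresponds to pid in Index B (the rename maps one field name onto the other):

```
index=B
    [ search index=A
      | stats count by id
      | sort -count
      | head 10
      | rename id as pid
      | fields pid ]
| stats values(address) as address by pid
```

The subsearch expands to `(pid="..." OR pid="..." ...)`, so only events for those 10 ids are read from Index B. Keep the default subsearch limits in mind (roughly 10,000 results / 60 seconds) if the inner search ever grows beyond a head 10.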
I'm looking for another way to run the search below and expand the computer field. This search pulls systems belonging to a specific group in AD and then cleans up the name from the member_dn field. It then puts it into a lookup table for use in ES. mvexpand is running into memory limitations and I cannot raise the limit high enough to extract all of the values.

| ldapsearch domain=default search="(&(objectclass=group)(cn=Eng_Computers))"
| table cn, distinguishedName
| ldapgroup
| table cn, member_dn, member_type
| rex field=member_dn "CN\=(?P<computer>[\w\-\_]+)(?=\,\w{2}\=)"
| mvexpand computer
| table computer
| sort computer
| outputlookup eng_systems.csv

Suggestions are appreciated.
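One hedged alternative: `stats ... by <field>` splits a multivalue field into one row per value without the in-memory buffering that mvexpand does, so the mvexpand step can often be dropped in favor of a stats. A sketch that keeps the rest of the pipeline from the search above:

```
| ldapsearch domain=default search="(&(objectclass=group)(cn=Eng_Computers))"
| ldapgroup
| rex field=member_dn "CN\=(?P<computer>[\w\-\_]+)(?=\,\w{2}\=)"
| stats count by computer
| fields computer
| sort computer
| outputlookup eng_systems.csv
```

If computer is multivalued on a single row, `stats count by computer` expands it one value per row and deduplicates as a side effect, which is usually what a lookup table wants anyway.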
Hi All,
We need to write a Python script to pull data for the query below. We are using the script below, but no output is produced. Please advise how we can do this with a Python script, as the AND operation seems not to be working.

index="ti-p_plasma" sourcetype="plasma: ops-gateway" earliest=-1h source="/home/zsvg9ky/deployments/ops-gateway/ops-gateway/logs/access*"
| search ogw_uri!=.js AND ogw_uri!=.css AND ogw_uri!=.gif AND ogw_uri!=.jpeg AND ogw_uri!=.png AND ogw_uri!=.jpg AND ogw_uri!=.fonts AND ogw_uri!=.assets/
| rex field=ogw_uri "^/(?<end_point_services>[A-Za-z0-9_-]+)[/|?].*$"
| chart count by end_point_services, ogw_status_code
| fields - "201","405","206"

I am using the Python script below, but it is producing no output:

from __future__ import print_function
from future import standard_library
standard_library.install_aliases()
import urllib.request, urllib.parse, urllib.error
import httplib2
from xml.dom import minidom

baseurl = 'https://3.131.162.26:8089'
userName = 'admin'
password = 'India@nic'
searchQuery = 'index=main host="splunk1" source="/var/log/secure" | stats count'

# Authenticate with the server.
# Disable SSL cert validation -- Splunk certs are self-signed.
serverContent = httplib2.Http(disable_ssl_certificate_validation=True).request(
    baseurl + '/services/auth/login', 'POST', headers={},
    body=urllib.parse.urlencode({'username': userName, 'password': password}))[1]
sessionKey = minidom.parseString(serverContent).getElementsByTagName('sessionKey')[0].childNodes[0].nodeValue

# Remove leading and trailing whitespace from the search.
searchQuery = searchQuery.strip()

# If the query doesn't already start with the 'search' operator or another
# generating command (e.g. "| inputcsv"), prepend "search " to it.
if not (searchQuery.startswith('search') or searchQuery.startswith('|')):
    searchQuery = 'search ' + searchQuery
print(searchQuery)

# Run the search. Again, disable SSL cert validation.
print(httplib2.Http(disable_ssl_certificate_validation=True).request(
    baseurl + '/services/search/jobs', 'POST',
    headers={'Authorization': 'Splunk %s' % sessionKey},
    body=urllib.parse.urlencode({'search': searchQuery}))[1])

This generates a job sid; I then use the command below to show the output:

curl -k -H "Authorization: Splunk $token" https://3.131.35.127:8089/services/search/jobs/$jobid/results_preview --get -d output_mode=csv
Hi Splunk Ninjas, I wanted to know why we installed the app on the Splunk SH. Can you suggest a few test cases for this? Regards ~A
Dear community, I was following along with the Lab 8 exercise of Splunk Fundamentals 2. However, the field extraction failed and an unexpected result was observed. Please see the screenshots below:
The IP has been highlighted as 'src'. One row selects the wrong result. Adding a sample event and highlighting fails the extraction.
I am still able to continue with the training, but this might be something you want to look into.
Kind regards, Olaf
My search query returns a list of _time values for multiple dates; below are the start and end times for each date:
2021-02-23 12:27:13.173
2021-02-23 16:18:20.129
2021-02-24 09:18:06.191
2021-02-24 13:22:48.285
2021-02-25 09:02:38.042
2021-02-25 13:04:52.313
From the list above I need a display like the one below. I have tried multiple ways but am unable to get the output in this format. Is there any way I can extract it like this?
Date       Start_time       End_time         Difference in minutes
2/23/2021  2/23/21 12:27    2/23/21 16:18    231.11593
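A hedged sketch of one way to get this, assuming the events themselves carry the timestamps listed above: bin the events by day, then take the earliest and latest timestamp per day with stats, and compute the difference before formatting:

```
... | bin span=1d _time as Date
| stats earliest(_time) as Start_time latest(_time) as End_time by Date
| eval diff_minutes=round((End_time-Start_time)/60, 5)
| eval Date=strftime(Date, "%m/%d/%Y"),
       Start_time=strftime(Start_time, "%m/%d/%y %H:%M"),
       End_time=strftime(End_time, "%m/%d/%y %H:%M")
| table Date Start_time End_time diff_minutes
```

The subtraction happens while the times are still epoch values; strftime is applied only at the end, since formatted strings can no longer be subtracted.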
Problem statement: Monitor the event sequence and trigger an alert when any transaction fails due to an error code (http_5xx) as a technical failure within a 10m interval.

Use-case 1: 10 transactions --> 5 consecutive successes and 5 consecutive technical failures with error code (http_5xx): considered for alert
Use-case 2: 5 transactions --> 3 consecutive technical failures with errorCode (http_5xx) --> 1 validation failure with errorCode (http_4xx) --> technical failure with errorCode (http_5xx): should be considered for alert
Use-case 3: 10 transactions --> 2 consecutive successes --> 4 consecutive technical failures --> 1 success --> 2 technical failures --> considered for alert
Use-case 4: Technical failure or No Response

========================================================
Application event flow sequences for the different scenarios, for a particular Transactionid: 517923784

Success sequence:
MOSRequest --> Validation Passed --> ITAM Request --> ITAM Response(code202) --> SLF Request --> SLF Response(202) --> SIPN Request --> SIPN Response(202) --> MOSResponse(202)

Validation failure sequence for SLF system & technical failure for MOS response sequence:
MOSRequest --> Validation Passed --> ITAM Request --> ITAM Response(code202) --> SLF Request --> SLF Error(404) --> SLF Request --> SLF Error(400) --> MOSError(500)

Validation failure sequence for SIPN system & MOS system:
MOSRequest --> Validation Passed --> ITAM Request --> ITAM Response(code202) --> SLF Request --> SLF Response(202) --> SIPN Request --> SIPN Error(400) --> SIPN Request --> SIPN Error(400) --> SIPN Request --> SIPN Error(400) --> MOSError(400)

Validation failure sequence for MOS system:
MOS Request --> MOS Error(400)

Technical failure sequence for SIPN system:
MOSRequest --> Validation Passed --> ITAM Request --> ITAM Response(code202) --> SLF Request --> SLF Response(202) --> SIPN Request --> SIPN Error(500) --> SIPN Request --> SIPN Error(500)

Technical failure sequence for SLF system:
MOSRequest --> Validation Passed --> ITAM Request --> ITAM Response(code202) --> SLF Request --> SLF Error(500) --> SLF Request --> SLF Error(500)

Technical failure for MOS response sequence:
MOSRequest --> Validation Passed --> ITAM Request --> ITAM Response(code202) --> SLF Request --> SLF Error(404) --> SLF Request --> SLF Error(400) --> MOSError(500)

======================================================================
Use case: when there is no response from MOS/SLF/SIPN, the transaction should be considered a technical failure.

No-response sequence for the MOS system:
MOSRequest --> Validation Passed --> ITAM Request --> ITAM Response(code202) --> SLF Request --> SLF Response(202) --> SIPN Request --> SIPN Response(202) --> MOSResponse(202) --> a success event has MOS Response as the last transaction.
MOSRequest --> Validation Passed --> ITAM Request --> ITAM Response(code202) --> SLF Request --> SLF Error(404) --> SLF Request --> SLF Error(400) --> MOSError(500) --> a failure event has MOSError(500) as the last transaction.
But when a network glitch happens between transactions, there is a high chance that no response or error event is captured in Splunk; that should be treated as a technical error. Similarly for SIPN and SLF: after an SLF Request, when there is no subsequent SLF Response or SLF Error event, this qualifies as a technical failure (and the same for SIPN events), and it should alert for 5 or more consecutive events.

Query details:

index=X sourcetype=x source="mos_api"                       ---> filter MOS-specific data
| rex field=_raw "^(?:[^ \n]* ){3}\[(?P<EventSeq>[^\]]+)"   ---> field extraction to understand the flow of the data sequence
| rex field=_raw "\[(?P<errorCode1>[\d]+)\]"                ---> extract the errorCode details
| eval EventSeq_Code=EventSeq."_".errorCode1                ---> concatenate EventSeq and errorCode to understand the event flow and its error code
| eval time=strftime(_time, "%d-%m-%Y %H:%M:%S")            ---> convert epoch time to a human-readable format
| stats values(time) as time values(EventSeq) as EventSeq values(errorCode1) as errorCode values(EventSeq_Code) as EventSeq_Code values(API) as API by transactionid   ---> stats to get the unique values for the fields used in the final result
| eval alert_mos=case(EventSeq="MOSRequest" AND EventSeq="Error" AND like(EventSeq_Code,"Error_500%"),"MOSTechnicalFailure", EventSeq="MOSRequest" AND EventSeq="Error" AND like(EventSeq_Code,"%Error_%400%"),"MOSValidationFailure", EventSeq="MOSRequest" AND EventSeq!="MOSResponse" AND EventSeq!="MOSError","No Response from MOS")
| eval alert_ITAM=case(EventSeq="ITAMRequest" AND EventSeq="ITAMResponse" AND like(EventSeq_Code,"%200%"),"NA")
| eval alert_SLF=case(EventSeq="SLFRequest" AND EventSeq="Error" AND like(EventSeq_Code,"Error_500%"),"SLFTechnicalFailure", EventSeq="SLFRequest" AND EventSeq="Error" AND like(EventSeq_Code,"%Error_%400%"),"SLFValidationFailure", EventSeq="SLFRequest" AND EventSeq!="SLFResponse" AND EventSeq!="SLFError","No Response from SLF")
| eval alert_SIPN=case(EventSeq="SIPNRequest" AND EventSeq="Error" AND like(EventSeq_Code,"Error_500%"),"SIPNTechnicalFailure", EventSeq="SIPNRequest" AND EventSeq="Error" AND like(EventSeq_Code,"%Error_%400%"),"SIPNValidationFailure", EventSeq="SIPNRequest" AND EventSeq!="SIPNResponse" AND EventSeq!="MOSError","No Response from SIPN")
| sort _time
| streamstats reset_on_change=true time_window=10m count by alert_mos, alert_ITAM, alert_SIPN, alert_SLF

The above query covers most of the scenarios, and I used streamstats, but I am unable to get the output. Can anyone please guide me on whether the approach is right, or whether the query should be completely changed to get the result?
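Not a full solution, but the consecutive-failure part of the requirement can be sketched as below. This is a hedged sketch that reuses the EventSeq_Code extraction from the query above; the like() pattern is an assumption about how 5xx codes appear in EventSeq_Code, and one event per transaction step is assumed. streamstats with reset_on_change restarts the count each time the failure flag flips, so the count only grows across an unbroken run of technical failures:

```
... earlier extraction steps ...
| sort 0 _time
| eval is_tech_failure=if(like(EventSeq_Code, "%_5%"), 1, 0)
| streamstats time_window=10m reset_on_change=true count as consecutive_failures by is_tech_failure
| where is_tech_failure=1 AND consecutive_failures>=5
```

Any row surviving the final where marks the fifth (or later) consecutive technical failure inside a 10-minute window, which is a natural trigger condition for an alert.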
Hi, I am a beginner in Splunk and need help resolving a dashboard-related issue. Scenario: I have a table whose data comes from multiple .csv files; now I want to add two more columns which come from an index. I am finding it difficult to add index data into the existing table and am unable to write the SPL to fix this. Please help me understand how I can achieve my output.
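A hedged sketch of one common way to combine lookup data with index data: start from the CSV and join the extra columns in from the index. All of the names below (my_lookup.csv, key, extra_col1, extra_col2, my_index) are placeholders for illustration, not from the question:

```
| inputlookup my_lookup.csv
| join type=left key
    [ search index=my_index
      | stats latest(extra_col1) as extra_col1 latest(extra_col2) as extra_col2 by key ]
| table key * extra_col1 extra_col2
```

An alternative that avoids join's subsearch limits is to append both result sets and merge with `stats values(*) as * by key`; which fits better depends on the data volumes.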
Hi,
We have 3 search heads in a cluster and 3 indexers in a non-clustered environment. Whenever we do a rolling restart of the SHs, the distsearch.conf in etc/system/local and some lookup CSVs in some of the apps change. It does not happen always, but very often. Can anyone help in figuring out why this happens and what needs to be corrected? There is no other distsearch.conf anywhere on the SHs.
Thanks for your help.
Hi, I need an alert to be created which should trigger only if we receive continuous failures 5 times within a span of 10 minutes.
1) Trigger alert:
failure
failure
failure
failure
failure
2) Do not trigger alert:
failure
failure
failure
failure
success
failure
failure
failure
failure
success
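A hedged sketch of the usual pattern for "N consecutive events" alerts, assuming each event carries a status field whose value is "failure" or "success" (adapt the index, field, and values to your data): sort the events in time order, count the length of the current run with streamstats, and match when an unbroken run of failures reaches 5:

```
index=my_index earliest=-10m
| sort 0 _time
| streamstats reset_on_change=true count as consecutive by status
| where status="failure" AND consecutive>=5
```

Saved as an alert over the last 10 minutes with a "number of results > 0" trigger, this fires for case 1 above but not case 2, because the success events reset the streamstats counter.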
I have the final result which looks like below:

Host   Date         Total_1   Total_2   To_be_removed
Prod   02-26-2021   456       784       [X,Y,Z]

I want something like below:

Host   Date         Summary
Prod   02-26-2021   Total_1:456 Total_2:784 To_be_removed:[X,Y,Z]

How can I achieve this in a Splunk search query?
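A hedged sketch: the three columns can be folded into one Summary field with eval. mvappend puts each label:value pair on its own line in the table cell; plain string concatenation with "." would give a single line instead:

```
... | eval Summary=mvappend("Total_1:".Total_1, "Total_2:".Total_2, "To_be_removed:".To_be_removed)
| table Host Date Summary
```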
Hello, I have 3 queries as below, and all 3 return starid. I need to check whether a starid from query 1 exists among the starids from (query 2 + query 3), and return those starids that do not exist in query 2 + query 3. I wrote a join query, but it returns incorrect results. How can this query be written to correctly return the starids that exist in query 1 only and not in query 2 + query 3?

Query 1:
[search earliest=-3d latest=-2d index=_* OR index=* sourcetype=OPENAPI_STARS_LOGS "Processing AwardStarsRequest complete" *prmen* | rename Star_TranID as starid]

Query 2:
[search earliest=-4d environment=test8 sourcetype=FEI_Utility Level=Information SessionM | search messagetype=transaction | eval MessageTemplate=replace(MessageTemplate,"\\\\\"","\"") | spath MessageTemplate | mvexpand MessageTemplate | rex field=MessageTemplate "message\":\"(?<msg>.*)" | search msg="*" | eval _raw=replace(msg,"\\\\\"","\"") | spath | table transaction.sourceTransactionId | rename transaction.sourceTransactionId as starid]

Query 3:
[search earliest=-4d environment=test8 sourcetype=FEI_Utility Level=Information *prmen* | search messagetype=accrual | spath MessageTemplate | mvexpand MessageTemplate | rex field=field3 "message\":\"(?<msg>.*)" | search msg="*" | eval _raw=replace(msg,"\\\\\"","\"") | spath | table "accrual.sourceTransactionId" | rename "accrual.sourceTransactionId" as starid]
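A hedged sketch of a join-free way to compute "in query 1 but not in queries 2/3": tag each result set with a source marker, combine with append, and keep the starids whose only marker is q1. The `...query 2...` / `...query 3...` placeholders stand for the full query-2 and query-3 pipelines above:

```
search earliest=-3d latest=-2d index=_* OR index=* sourcetype=OPENAPI_STARS_LOGS "Processing AwardStarsRequest complete" *prmen*
| rename Star_TranID as starid
| eval src="q1"
| append [ search ...query 2... | eval src="q23" ]
| append [ search ...query 3... | eval src="q23" ]
| stats values(src) as src by starid
| where src="q1"
```

The final where keeps only starids that never appeared in queries 2 or 3; this sidesteps the row-matching behavior of join, which is a common source of incorrect results in set-difference searches.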
Hi Everyone, I have the below query:
index="abc*" OR index="xyz*" | eval raw_len=len(_raw) | stats sum(raw_len) as total_bytes by sourcetype | eval MB=total_bytes/pow(1024,2) | stats sum(MB)
I want to convert it into a trend line for today. How can I change my query? Can someone guide me on that? As of now it's showing the sum.
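A hedged sketch: replacing the two stats calls with a single timechart yields one data point per interval, which renders as a trend line. The 1-hour span is an assumption; adjust it to taste:

```
index="abc*" OR index="xyz*" earliest=@d latest=now
| eval raw_len=len(_raw)
| timechart span=1h sum(eval(raw_len/pow(1024,2))) as MB
```

earliest=@d restricts the search to today (midnight onward); visualize the result as a line chart to get the trend.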
Hi All, I recently installed the Recorded Future App version 1.0.13 on a Splunk search head running version 8.0.3, following the instructions in the Recorded Future integration guide: https://go.recordedfuture.com/hubfs/splunk-integration-guide.pdf
Initially I did not enable Enterprise Security; however, after some time I enabled the check and restarted the search head.
I've been receiving a warning on the search head: "Health Check: One or more apps ("TA-recordedfuture") that had previously been imported are not exporting configurations globally to system. Configuration objects not exported to System will be unavailable in Enterprise Security."
When I checked the messages which Splunk shows in the console after restart, one of them was: Invalid key in stanza [proxy] in /opt/splunk/etc/apps/TA-recordedfuture/local/recordedfuture_settings.conf, and when I checked that line the content was proxy_rdns = 0.
I've enabled the proxy settings and have verified that they are working alright, so I'm not clear why this warning message is shown on the search head web UI and why I get the "Invalid key in stanza" message on restart. I'd appreciate it if anyone could help me understand this situation. Thanks.
While integrating JIRA with Splunk, I can see the JIRA logs in Splunk, but with an error: "Event Writer got tear down signal". Can someone suggest a fix?
Hi Everyone, can anyone tell me how we can convert bytes into gigabytes? I am using the below to convert into megabytes; is that correct?
eval MB=total_bytes/pow(1024,2)
Can someone guide me on how we can convert bytes to gigabytes?
Thanks
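Yes, pow(1024,2) is the bytes-to-megabytes divisor; gigabytes just use the third power of 1024 (1 GB = 1024^3 = 1,073,741,824 bytes). A sketch, assuming total_bytes already exists as in the question:

```
| eval MB=total_bytes/pow(1024,2)
| eval GB=round(total_bytes/pow(1024,3), 2)
```

The round() is optional; it just keeps the GB figure readable.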
Hi, I am trying to understand a bit how searches impact CPU usage on indexers. Does one search use one CPU core by default, or does it depend on the indexes being searched? Sometimes I have seen high CPU usage when a large index is being searched, or when users have multiple indexes as their default and do not specify one, so multiple indexes are searched. Note: it is a single query, with no subqueries.
Hi All, I want to filter out a few of the lines from the events for different sourcetypes in the same index, so that I can save some license. index=abc, and "x" denotes numbers.

Case 1: From sourcetype=def I want to filter out the lines from the event if they come in a sequence like this:
SourceType = def
(xx:xx:xxx): Version: x.x.x.x, Inside thread x. . MessageQueueException Timeout for the requested operation has expired.
(xx:xx:xxx): Version: x.x.x.x, Inside thread x. . Timeout for the requested operation has expired.
--------------------------------------------------------------------------------------------------------------------------------------------------------
Case 2: Similarly, for sourcetype=ghi I want to filter out the lines from the event if they come in a sequence like this:
SourceType = ghi
(xx:xx:xxx): Version: x.x.xx.x, Thread x, CmdID na, Timeout for the requested operation has expired.
(xx:xx:xxx): Version: x.x.xx.x, Thread x, CmdID na, Finished execution.
----------------------------------------------------------------------------------------------------------------------------------------------------------
Case 3: Similarly, for sourcetype=jkl I want to filter out the lines from the event if they come in a sequence like this:
SourceType = jkl
12/08/2020-12:00:00.2246074| Version: x.x.x.xxxxx| Information: exception type: System.Exception| message: System.Exception: Testingmaterialin::TestinExecutionThread() - Running - Begin| thread: 8
12/08/2020-12:00:01.2896317| Version: x.x.x.xxxxx| Information: exception type: System.Exception| message: System.Exception: Testingmaterialin::TestinExecutionThread() - Message queue has no messages, will try again.| thread: 8
12/08/2020-12:00:01.2896317| Version: x.x.x.xxxxx| Information: exception type: System.Exception| message: System.Exception: Testingmaterialin::TestinExecutionThread() - Running - End| thread: 8

So kindly help with the props and transforms so that I can filter those logs before ingestion. Thanks.
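A hedged sketch of the usual mechanism, to be adapted per case: removing individual lines from inside a multi-line event is done with SEDCMD in props.conf (a TRANSFORMS routing to nullQueue drops whole events, not single lines). The stanza names follow the sourcetypes above, but the regexes are illustrative guesses at the patterns described and would need tuning against real samples:

```
# props.conf (on the indexer or heavy forwarder, not the UF)
[def]
# Case 1: strip lines ending in the timeout message
SEDCMD-drop_timeout = s/[^\r\n]*Timeout for the requested operation has expired\.//g

[jkl]
# Case 3: strip the TestinExecutionThread chatter lines
SEDCMD-drop_thread_chatter = s/[^\r\n]*TestinExecutionThread\(\)[^\r\n]*//g
```

SEDCMD rewrites _raw before indexing, so the removed text should not count against license; if entire events (rather than lines) can be dropped, the TRANSFORMS/nullQueue route is the alternative.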
Hi All, I have a log which has the below lines in it:

"Results":{"Elapsed":"0","Message":"No of Application to Obsolete in Teradata : 4","TraceLevel":"INFO"},"Security":{"Vendor":"CRAB"}}
"Results":{"Elapsed":"0","Message":"Total Application Asset in Teradata : 1696","TraceLevel":"INFO"},"Security":{"Vendor":"CRAB"}}
"Results":{"Elapsed":"0","Message":"Total Application count from SPAM : 1694","TraceLevel":"INFO"},"Security":{"Vendor":"CRAB"}}
"Results":{"Elapsed":"0","Message":" Application/s to Obsolete in Teradata : [PA00007618, PA00007617, PA00007619, PA00007620]","TraceLevel":"INFO"},"Security":{"Vendor":"CRAB"}}

I want the output to have the below fields:
No of Application to Obsolete in Teradata : 4
Total Application Asset in Teradata : 1696
Total Application count from SPAM : 1694
Application/s to Obsolete in Teradata : [PA00007618, PA00007617, PA00007619, PA00007620]

I have built the below query, but it's only giving me one record (ExecutionDate, Host, "Total Application count from SPAM : 1694"):

index=hdt sourcetype=Teradata_SPAM_logs
| fields - _raw
| where match(_raw, "Host_cdc") and (match(_raw,"Total\sApplication\scount\sfrom\sSPAM\s*") OR match(_raw,"Total\sApplication\sAsset\sin\sTeradata\s*") OR match(_raw,"No\sof\sApplication\sto\sObsolete\sin\sTeradata\s*") OR match(_raw,"List\sof\sApplications\sin\sTeradata\sto\sbe\smarked*"))
| rex "(?<Summary>\"Message\":(.*\w+)\s:.*)"
| rex "(?<Host>\"Host\":(.*\",))"
| rex "(?<ExecutionDate>\d{4}\-\d{2}\-\d{2})"
| rex field=Summary mode=sed "s/\"Message\":\"/ /"
| rex field=Summary mode=sed "s/\"TraceLevel.*/ /"
| rex field=Summary mode=sed "s/\".*$//"
| rex field=Host mode=sed "s/\"Channel.*/ /"
| rex field=Host mode=sed "s/\"Host\":\"/ /"
| rex field=Host mode=sed "s/\/.*/ /"
| eval Host = replace(Host,"Host_cdc.cdc.CRAB.com", "PRODUCTION")
| eval Host = replace(Host,"Host_DEV.cdc.CRAB.com", "PROFILING")
| eval Host = replace(Host,"Host_PP.cdc.CRAB.com", "VALIDATION")
| stats values(Summary) as Summary by ExecutionDate, Host
| where isnotnull(Summary)

Can anyone tell me where the problem is here?
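Two hedged observations on the query above, since the raw events are only partially shown: `| fields -_raw` removes _raw before the `where match(_raw, ...)` clauses run, which would silently drop events; and the pattern "List of Applications in Teradata to be marked" does not appear in the fourth sample message (" Application/s to Obsolete in Teradata"), so that row may never match. A simplified sketch without those steps, with the message-type filtering moved into the base search:

```
index=hdt sourcetype=Teradata_SPAM_logs "Host_cdc"
    ("Total Application count from SPAM" OR "Total Application Asset in Teradata"
     OR "No of Application to Obsolete in Teradata" OR "to Obsolete in Teradata")
| rex "\"Message\":\"\s*(?<Summary>[^\"]+?)\",\"TraceLevel\""
| rex "\"Host\":\"(?<Host>[^\"\/]+)"
| rex "(?<ExecutionDate>\d{4}-\d{2}-\d{2})"
| eval Host=case(like(Host,"Host_cdc%"),"PRODUCTION", like(Host,"Host_DEV%"),"PROFILING", like(Host,"Host_PP%"),"VALIDATION", true(), Host)
| stats values(Summary) as Summary by ExecutionDate, Host
```

With all four message variants matched, `values(Summary)` should then collect all four lines per ExecutionDate/Host pair instead of just one.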