All Topics

I updated to 8.2.2.1 and suddenly all of our unit test output is polluted with hundreds of Authorization Failed messages, each coming from various calls to splunk.rest.simpleRequest. The authorization failures themselves are perfectly normal: many of our tests actually assert that ownership and permissions are set the right way, and testing that involves trying to do things as the wrong user and asserting that the operation fails. What's problematic is that formerly clean unit test output to the console or to stdout is now polluted with messages about these expected failures. For example, picture dozens or hundreds of these: Authorization Failed: b'{"success":false,"messages":[{"text":"It looks like your Splunk Enterprise\\nuser account does not have the correct capabilities to be able to post licenses.\\nReach out to your local Splunk admin(s) for help, and/or contact Sideview support\\nfor more detail."}]}' Curious if anyone has run into this or knows where the messages might be coming from.
Hi team, I am new to Splunk. I am running a Splunk query with an ID name to get the file associated with it from the logs. The event logs look like the sample below.
Logs:
RequestId: abcd
File uploaded to: s3://test/sample/file.json
source: source name
host: host name
etc.
How can I run a query that searches the Splunk event logs, looks for the .json file, and returns the full JSON file name? Any help is appreciated. Thanks.
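A minimal sketch of one way to pull the file name out with rex, assuming the literal prefix "File uploaded to:" always precedes the path (the index name here is a placeholder):
index=YOUR_INDEX "File uploaded to"
| rex "RequestId:\s+(?<reqid>\S+)"
| rex "File uploaded to:\s+(?<json_file>\S+\.json)"
| table reqid json_file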
Hi Team, if I have to write CIM data model use cases for Malware / Authentication, etc., what rules / logic do I have to follow initially? If there is any documentation / source, can you please provide me the link? Thank you.
Hi, I am running a basic search query directly in Splunk search, such as: sourcetype=aws*-cloudwatch-logs file.txt | rex "RequestId: (?<reqid>\S+)\s" | table reqid | dedup reqid It returns the output reqid as "72830b96-c6g3-21fg-0063-ae728c68c1fc". But if I run the same command through a Splunk dashboard, Splunk returns the complete raw data, which I don't need. Why does the dashboard return a raw result rather than the single output? Any help is appreciated. Thanks.
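One thing worth checking is whether the dashboard panel is an events panel rather than a table panel; events panels show raw data regardless of the table command. A minimal Simple XML sketch of a table panel carrying the same search (note the XML-escaped angle brackets in the rex; time range values are placeholders):
<panel>
  <table>
    <search>
      <query>sourcetype=aws*-cloudwatch-logs file.txt | rex "RequestId: (?&lt;reqid&gt;\S+)\s" | table reqid | dedup reqid</query>
      <earliest>-24h@h</earliest>
      <latest>now</latest>
    </search>
  </table>
</panel>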
Our scrum team used to have a single Splunk dashboard, and a link to it on our Jira board, so that the product manager could easily jump to it when he was checking on our progress.  Now I've added 6 more dashboards, but would rather not add 6 more links if I can help it.  Is there a way to link to all of them at once?  The dashboard names share a common prefix for easy filtering, but filtering doesn't affect the URL so it doesn't seem like that will help.  Can I create a dashboard that contains links to other dashboards?  Or some sort of home page that's specific to our team?
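A dashboard that is just a list of links to the other dashboards is one way to get a single URL. A minimal Simple XML sketch, where the app name ("search") and the dashboard IDs are hypothetical:
<dashboard>
  <label>Team Landing Page</label>
  <row>
    <panel>
      <html>
        <ul>
          <li><a href="/app/search/team_dashboard_1">Team Dashboard 1</a></li>
          <li><a href="/app/search/team_dashboard_2">Team Dashboard 2</a></li>
        </ul>
      </html>
    </panel>
  </row>
</dashboard>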
Hi Team, could someone help me with field extraction for the complex data below (1,000 lines of data condensed to 10)? The columns to be extracted are statement_text, cnt, total_reads, total_writes, db_name. statement_text="insert into #pt_queryhistory_time ( [sample_time],command_id,cnt,total_time,[db_name],sqlhandle,hash_char) select top 500 [sample_time] = convert(smalldatetime,'2021-09-27 18:55:00'), total_time = qs.total_elapsed_time/1000, avg_cpu = case when qs.execution_count = 0 then 0 else qs.total_worker_time/qs.execution_count/1000 end, db_name = case convert(int, pa.value) when null then '--unknown--' when 0 then '--unknown--' when 32767 then 'Resource' else db_name(convert(int, pa.value)) end, [db_id] = coalesce(convert(int, pa.value),0), hash_char = '' from sys.db_stats (nolock) as qs cross apply sys.dm_exec_plan_attributes(qs.plan_handle)as pa where pa.attribute = N'dbid' and isnull(convert(int,pa.value),0) = 8 order by qs.total_elapsed_time desc", cnt="1", total_reads="1888", total_writes="29", avg_writes="29",db_name="db1" I am not able to extract the statement_text column completely; the remaining columns are working fine: index="index" source="source1" | rex field=_raw "statement_text\=\"(?<statement_text>[@ ( ) $ . , \"A-Z ! ^ | \" - _ : { } A-Z a-z _ 0-9]+]+)\"" | rex field=_raw "cnt\=\"(?<cnt>[0-9]+)\"" | rex field=_raw "diff_reads\=\"(?<diff_reads>[0-9]+)\"" | rex field=_raw "total_writes\=\"(?<total_writes>[0-9]+)\"" | rex field=_raw "db_name\=\"(?<db_name>[A-Z a-z _ 0-9]+)\"" Please provide me a rex for the statement_text column where the data is extracted up to the 2nd column, "cnt".
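A minimal sketch of a non-greedy rex that captures everything between statement_text=" and the closing ", cnt=" delimiter, assuming that delimiter string never occurs inside the SQL text itself ((?s) lets the match span newlines, and \s* tolerates variable spacing before cnt):
| rex field=_raw "(?s)statement_text=\"(?<statement_text>.*?)\",\s*cnt=\""
The remaining rex extractions from the original search can stay as they are.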
Hi Team, could someone help me with field extraction for the complex data below (1,000 lines of data condensed to 10)? The columns to be extracted are statement_text, cnt, total_reads, total_writes, db_name. statement_text="SELECT ISNULL("Interface Entry Header"."timestamp",@3) AS "timestamp",ISNULL("Interface Entry Header"."ImageURL",@8) AS "ImageURL",ISNULL("Interface Entry Header"."Pick Date Time",@7) AS "Pick Date Time",ISNULL("Interface Entry Header"."Start Execution",@7) AS "Start Execution",ISNULL("Interface Entry Header"."End Execution",@7) AS "End Execution",ISNULL("Interface Entry Header"."Send Request",@7) AS "Send Request",ISNULL("Maximo Issue Type$Interface Entry Line"."Header Entry No_",@4) AS "Maximo Issue Type$Interface Entry Line$Header Entry No_",ISNULL("Maximo Issue Type$Interface Entry Line"."Entry No_",@4) AS "Maximo Issue Type$Interface Entry Line$Entry No_" FROM "Algeria".dbo."Tango$Interface Entry Line" AS "Maximo Issue Type$Interface Entry Line" WITH(READUNCOMMITTED) WHERE ("Maximo Issue Type$Interface Entry Line"."Header Entry No_"="Interface Entry Header"."Entry No_") ORDER BY "Maximo Issue Type$Interface Entry Line$Header Entry No_" ASC,"Maximo Issue Type$Interface Entry Line$Entry No_" ASC) AS "SUB$Maximo Issue Type" WHERE ("Interface Entry Header"."Interface Code"=@0 AND "Interface Entry Header"."Direction"=@1 AND "Interface Entry Header"."Status"=@2) ORDER BY "Source No_" ASC,"Entry No_" ASC OPTION(OPTIMIZE FOR UNKNOWN, FAST 50)", cnt="12", total_reads="31", total_writes="0", db_name="test" I am not able to extract the statement_text column completely; the remaining columns are working fine: index="index" source="source1" | rex field=_raw "statement_text\=\"(?<statement_text>[@ ( ) $ . , \"A-Z ! ^ | \" - _ : { } A-Z a-z _ 0-9]+]+)\"" | rex field=_raw "cnt\=\"(?<cnt>[0-9]+)\"" | rex field=_raw "diff_reads\=\"(?<diff_reads>[0-9]+)\"" | rex field=_raw "total_writes\=\"(?<total_writes>[0-9]+)\"" | rex field=_raw "db_name\=\"(?<db_name>[A-Z a-z _ 0-9]+)\"" Please provide me a rex for the statement_text column where the data is extracted up to the 2nd column, "cnt".
Hi, I am trying to get the OWA URL into Splunk. I deployed TA-Windows-Exchange-IIS, changing the local inputs.conf according to our on-prem Exchange version, with the stanza [monitor://C:\Program Files\Microsoft\Exchange Server\V15\Logging\Ews] After deploying the app, I see events coming in with the right sourcetype: index=msexchange sourcetype="MSWindows:2013EWS:IIS" but on those events I cannot see either the source IP or the URL. I am trying to detect GET requests to the autodiscover folder, and I don't see either actions or URLs on the received events. Any suggestions? Thanks!
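Client IP and URL typically live in the IIS W3C logs rather than in the Exchange protocol-logging folders, so a monitor stanza along these lines may be what's needed. A minimal inputs.conf sketch; the W3SVC1 path is an assumption that depends on which IIS site hosts OWA, and the sourcetype is a placeholder to adjust for whatever IIS add-on is in use:
[monitor://C:\inetpub\logs\LogFiles\W3SVC1]
# W3C logs include c-ip, cs-method, and cs-uri-stem by default
sourcetype = ms:iis:auto
index = msexchange
disabled = 0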
Hello All, I have an inputlookup CSV file that contains a list of hosts and the corresponding Docker containers running on those hosts in my environment. I am trying to generate a daily report of the host+container combinations that have not sent any logs to Splunk for that particular day. Stats count does not return a 0 event count. I am trying to get something like below:
Monday             Tuesday            Wednesday          Thursday           Friday
host1-containerx   host3-containerx   host4-containerx   host5-containerx   ---
host1-containery   host3-containery   host5-containerx
host2-containerz   host3-containerz   host1-containery
                                      host2-containerz
I would appreciate any help. Thank you!!
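A common pattern is to start from the observed data, append the full inventory from the lookup with a zero count, and keep only the combinations whose total stays at zero. A minimal sketch, where the index, lookup name, and field names (host, container) are assumptions:
index=docker earliest=-1d@d latest=@d
| stats count by host container
| append [| inputlookup host_containers.csv | fields host container]
| fillnull value=0 count
| stats sum(count) as events by host container
| where events=0
| eval host_container = host . "-" . container
| table host_container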
Hi. I have a log source that has a mix of various field types and then a larger nested JSON payload. I can't quite wrap my head around how to parse this out in our Splunk Cloud environment. At a high level, the log contains:
- date field
- server name field (separated by four dashes most of the time, but some environments have three)
- process name[PID]
- source code function variable field ending with a colon character
- the variable's value, which may or may not have special characters like ()
- JSON
- ends with []
Sanitized example: Oct 1 20:04:22 my-web01-aa-env my-web[14597]: app.NOTICE: Gateway Transaction Response (BP) {"response":"<Auth><Field1>ch-abcd1234-ab12-ab12-ab12-abcdef123456</Field1><Field2>0123</Field2><Field3>0123</Field3><Field4>Successful Request</Field4><responseMessage><Field5>0123</Field5><Field6>0123</Field6><Field7>0123</Field7><Field8>0123</Field8><Field9>0123</Field9><Field10>0123</Field10><Field11>0123</Field11><Field12>0123</Field12><Field13>012 - Approved (APPROVAL 001077)</Field13><Field14>Approved: 012345 (approval code)</Field14><Field15>00000</Field15><Field16>Address not available (Address not verified)</Field16><Field17>40</Field17><Field18></Field18><Field19>012345</Field19><Field20>20211001</Field20><Field21>0</Field21><Field22>840</Field22><Field23>USD</Field23><Field24>2021-10-01 16:04:21.493</Field24><Field25>2021-10-01 16:04:21.493</Field25><Field26>0123456</Field26><Field27>0123</Field27><Field28>0123456</Field28><Field29>0006</Field29><Field30>ABCDEF</Field30><Field31>Abc</Field31><Field32>ABC ABC ABC</Field32><Field33>Abcd</Field33><Field34>00</Field34><Field35>ABCDEF 012345 </Field35><Field36>ABCDEF0123</Field36><Field37>4F:A0000000041010;95:0000008000;9F10:0110A040002A0000000000000000000000FF;9B:E800;91:6325A37CFBC5CEDD0012;8A:</Field37><Field38>012345012345</Field38><Field39>abcd</Field38><Field39>False</Field39><Field40 /><Field40 /><Field41 /><Field42 /><Field42 /></Field43></Field43>"} [] My grok-fu is not great. Would appreciate any suggestions. Thanks!
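A minimal sketch of one way to peel this apart at search time: grab the JSON object with rex, read the response key with spath, then let spath parse the embedded XML. This assumes the payload is the last {...} before the trailing [] and that the XML is well-formed (the sanitized sample has a few mismatched tags that would need cleaning first):
| rex field=_raw "(?s)(?<json_payload>\{.*\})\s*\[\]\s*$"
| spath input=json_payload path=response output=response_xml
| spath input=response_xml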
Hi, I am streaming results from a Kubernetes cluster and I am monitoring for pod restarts by looking at the name of each pod and reporting when it changes. I am able to return the pod name; however, I am unable to make my match statement work so that it returns only the differing pod names. The pods are named:
prod-K8-1-b5c85b547-26wqn
prod-K8-2-7c56dc8559-kzpwm
prod-K8-3-7c7bccf947-4skx2
prod-K8-4-769bb9d4f5-tmwbz
...
I have code that returns the pod names over a time frame:
index=K8 "kubernetes.container_name"=tfs kubernetes.pod_name=prod* earliest=-5m@m latest=@m | stats first(kubernetes.pod_name) as old_pod last(kubernetes.pod_name) as new_pod by kubernetes.pod_name | where old_pod=new_pod
kubernetes.pod_name                 old_pod                             new_pod
prod-K8-1-b5c85b547-26wqn           prod-K8-1-b5c85b547-26wqn           prod-K8-1-b5c85b547-26wqn
prod-K8-1-b5c85b547-tdgwg           prod-K8-1-b5c85b547-tdgwg           prod-K8-1-b5c85b547-tdgwg
I think I need to use some regex in my where statement, but I end up getting no results; the where clause above was just to show an output. I would like my output table to list the current pod name and then the two different pod names. Any help would be much appreciated.
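Since grouping by the full pod name means first() and last() can never differ, one option is to derive a deployment key by stripping the replica-set and pod hashes, then compare the earliest and latest pod names per deployment. A minimal sketch, assuming the prod-K8-<n>-<hash>-<hash> naming convention holds:
index=K8 "kubernetes.container_name"=tfs kubernetes.pod_name=prod* earliest=-5m@m latest=@m
| rex field=kubernetes.pod_name "^(?<deployment>prod-K8-\d+)-"
| stats earliest(kubernetes.pod_name) as old_pod latest(kubernetes.pod_name) as new_pod by deployment
| where old_pod != new_pod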
Hi, I have the following search: index="windows" source=WinEventLog:Security ([| inputlookup windows_group_change_events | where group_action like "member added" or group_action like "member removed" | fields EventCode ]) NOT src_user=portalsync | ldapfilter search="(&(objectClass=group)(cn=$Group_Name$))" attrs="distinguishedname,description, info" | eval _raw = json_object("time", _time, "action_by", src_user, "event_code", EventCode, "group", mvindex(Security_ID, 2), "member", mvindex(Security_ID, 1),"orig_sourcetype", sourcetype, "orig_host", host, "dn", distinguishedName, "desc", description, "info", info ) | fields _time, _raw | collect index=windows_ad_summary addtime=0 source="windows_group_management" I am trying to save the group membership change events to a different index for long-term retention. I got the original idea from here: https://conf.splunk.com/files/2017/slides/using-splunk-enterprise-to-optimize-tailored-longterm-data-retention.pdf My problem is that this search always returns "no results". If I omit the "fields" and "collect" commands, it returns results as intended. Also, if I omit the "ldapfilter" command, it works just fine. Do you have any idea what the problem is with the combination of "ldapfilter" and "fields"? Thanks, Laszlo Edit: I'm using Splunk 8.0.9
Hi, is it possible to manage Splunk Cloud Enterprise Security content, including detection rules, via a pipeline? BR
Hi Splunkers, A long time ago we set up a SH cluster and added search peers using the CLI. Some time later we changed the setup and began setting the search peers via an app pushed from the deployer. All $SPLUNK_HOME/etc/system/local/distsearch.conf files were purged, and the app contains the peers in distsearch.conf and the clustered indexers in server.conf. Recently we found that one cluster member stubbornly kept re-creating a distsearch.conf in system/local that overrode the cluster's configs (pushed via the app). After removing the file and doing a rolling restart, the file showed up again. Cleaning the KV store and re-adding that member to the cluster restored the rogue distsearch.conf again. We also deleted the *bundle files in $SPLUNK_HOME/var/run. Following the steps described as "Add a member that was both removed and disabled" here: https://docs.splunk.com/Documentation/Splunk/8.2.2/DistSearch/Addaclustermember we found that when calling "splunk add shcluster-member" on the captain, the "new" member's system/local path was purged of some files and re-created... with the rogue distsearch.conf we were trying to get rid of. And this event showed up in _internal: ERROR ApplicationUpdater - Error reloading : handler for distsearch (access_endpoints /search/distributed/bundle-replication-files, /search/distributed/peers) Any ideas? The final solution was to nuke the VM and re-create it from scratch.
I have the below data generated by a timechart. I'm trying to write a query that alerts me when there is a continuous sequence of the number 1 over a span of 15 minutes.
index=abc "Heartbeat*" | timechart span=2m count
2021-10-04 10:20:00 1
2021-10-04 10:22:00 1
2021-10-04 10:24:00 1
2021-10-04 10:26:00 1
2021-10-04 10:28:00 1
when the
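A minimal sketch using streamstats to count consecutive 2-minute buckets whose count is exactly 1; a full window of seven 1s (~14-15 minutes) can drive the alert condition. The window size is an assumption to tune to whatever "15 minutes" should mean exactly:
index=abc "Heartbeat*"
| timechart span=2m count
| streamstats window=7 sum(eval(if(count==1,1,0))) as consecutive_ones
| where consecutive_ones=7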
Hi All, can someone help me understand why similar queries give me two different results for the Intrusion Detection data model? The only difference between the two is the order of the parts. Query 1: | tstats summariesonly=true values(IDS_Attacks.dest) as dest values(IDS_Attacks.dest_port) as port from datamodel=Intrusion_Detection where IDS_Attacks.ICS_Asset=Yes IDS_Attacks.url=* by IDS_Attacks.src,IDS_Attacks.transport,IDS_Attacks.url | eval source="wildfire" | rename IDS_Attacks.url as wildfireurl | rex field=wildfireurl "(?P<domain>[^\/]*)" | lookup rf_url_risklist.csv Name as wildfireurl OUTPUT EvidenceDetails Risk | lookup idefense_http_intel ip as wildfireurl OUTPUT description weight | lookup rf_domain_risklist.csv Name as domain OUTPUT RiskString | search EvidenceDetails =* OR description=* OR RiskString=* | rename "IDS_Attacks.*" as * | append    [| tstats summariesonly=true values(All_Traffic.dest) as dest values(All_Traffic.app) as app values(All_Traffic.http_category) as http_category from datamodel=Network_Traffic where All_Traffic.ICS_Asset=Yes All_Traffic.action=allowed app!="insufficient-data" app!="incomplete"        [| inputlookup ids_malicious_ip_tracker        | fields src        | rename src AS All_Traffic.src ] by All_Traffic.src,All_Traffic.transport,All_Traffic.dest_port    | lookup n3-a_cidr_wlist.csv cidr_range as All_Traffic.src OUTPUT cidr_range as src_match    | where src_match="NONE" OR isnull(src_match)    | fields - src_match    | lookup rf_ip_risklist.csv Name as All_Traffic.src OUTPUT EvidenceDetails Risk    | lookup idefense_ip_intel ip as All_Traffic.src OUTPUT description weight    | lookup ips_tor.csv ip as All_Traffic.src output app as app    | search EvidenceDetails =* OR description=* OR app="tor"    | where Risk > 69 OR weight > 40 OR app="tor"    | eval rf_evidence_details_0 = mvappend(EvidenceDetails, description)    | fields - description, EvidenceDetails    | eval Attack_Count = mvcount(rf_evidence_details_0)    | search NOT All_Traffic.src=10.0.0.0/8 OR All_Traffic.src=192.168.0.0/16 OR All_Traffic.src=172.16.0.0/12    | rename "All_Traffic.*" as *] | append    [| tstats summariesonly=true count values(All_Traffic.action) values(All_Traffic.http_category) as http_category from datamodel=Network_Traffic where All_Traffic.ICS_Asset=Yes by All_Traffic.src, All_Traffic.transport, All_Traffic.dest_port, All_Traffic.dest    | lookup rf_ip_risklist.csv Name as All_Traffic.dest OUTPUT EvidenceDetails Risk    | lookup idefense_ip_intel ip as All_Traffic.dest OUTPUT description weight    | search EvidenceDetails =* OR description=*    | where Risk > 69 OR weight > 40    | eval rf_evidence_details_0 = mvappend(EvidenceDetails, description)    | fields - description, EvidenceDetails    | eval Attack_Count = mvcount(rf_evidence_details_0)    | search count > 100    | search All_Traffic.src=10.0.0.0/8 OR All_Traffic.src=192.168.0.0/16 OR All_Traffic.src=172.16.0.0/12    | rename "All_Traffic.*" as *] Second query: | tstats summariesonly=true count values(All_Traffic.action) values(All_Traffic.http_category) as http_category from datamodel=Network_Traffic where All_Traffic.ICS_Asset=Yes by All_Traffic.src, All_Traffic.transport, All_Traffic.dest_port, All_Traffic.dest | lookup rf_ip_risklist.csv Name as All_Traffic.dest OUTPUT EvidenceDetails Risk | lookup idefense_ip_intel ip as All_Traffic.dest OUTPUT description weight | search EvidenceDetails =* OR description=* | where Risk > 69 OR weight > 40 | eval rf_evidence_details_0 = mvappend(EvidenceDetails, description) | fields - description, EvidenceDetails | eval Attack_Count = mvcount(rf_evidence_details_0) | search count > 100 | search All_Traffic.src=10.0.0.0/8 OR All_Traffic.src=192.168.0.0/16 OR All_Traffic.src=172.16.0.0/12 | rename "All_Traffic.*" as * | append    [| tstats summariesonly=true values(All_Traffic.dest) as dest values(All_Traffic.app) as app values(All_Traffic.http_category) as http_category from datamodel=Network_Traffic where All_Traffic.ICS_Asset=Yes All_Traffic.action=allowed app!="insufficient-data" app!="incomplete"        [| inputlookup ids_malicious_ip_tracker        | fields src        | rename src AS All_Traffic.src ] by All_Traffic.src,All_Traffic.transport,All_Traffic.dest_port    | lookup n3-a_cidr_wlist.csv cidr_range as All_Traffic.src OUTPUT cidr_range as src_match    | where src_match="NONE" OR isnull(src_match)    | fields - src_match    | lookup rf_ip_risklist.csv Name as All_Traffic.src OUTPUT EvidenceDetails Risk    | lookup idefense_ip_intel ip as All_Traffic.src OUTPUT description weight    | lookup ips_tor.csv ip as All_Traffic.src output app as app    | search EvidenceDetails =* OR description=* OR app="tor"    | where Risk > 69 OR weight > 40 OR app="tor"    | eval rf_evidence_details_0 = mvappend(EvidenceDetails, description)    | fields - description, EvidenceDetails    | eval Attack_Count = mvcount(rf_evidence_details_0)    | search NOT All_Traffic.src=10.0.0.0/8 OR All_Traffic.src=192.168.0.0/16 OR All_Traffic.src=172.16.0.0/12    | rename "All_Traffic.*" as *] | append    [| tstats summariesonly=true values(IDS_Attacks.dest) as dest values(IDS_Attacks.dest_port) as port from datamodel=Intrusion_Detection where IDS_Attacks.ICS_Asset=Yes IDS_Attacks.url=* by IDS_Attacks.src,IDS_Attacks.transport,IDS_Attacks.url    | eval source="wildfire"    | rename IDS_Attacks.url as wildfireurl    | rex field=wildfireurl "(?P<domain>[^\/]*)"    | lookup rf_url_risklist.csv Name as wildfireurl OUTPUT EvidenceDetails Risk    | lookup idefense_http_intel ip as wildfireurl OUTPUT description weight    | lookup rf_domain_risklist.csv Name as domain OUTPUT RiskString    | search EvidenceDetails =* OR description=* OR RiskString=*    | rename "IDS_Attacks.*" as *] The above query has three parts: suspicious inbound, suspicious outbound, and suspicious URL. When executing the first query I get 1 result, and when executing the second query I get 6 results for the same time frame.
Hi, I am trying to create an alert for hosts that are communicating with the internet, and I want to know the destinations. But I have a lot of trusted destinations to exclude (approx. 2,800). The query below gives me all the trusted destinations plus the others: index=net sourcetype=proxy dest!=10.1.* dest!=10.2.* sc_status=200 c_ip IN (10.1.10.*, 10.1.11.*) | lookup dnslookup clientip as c_ip OUTPUT clienthost as DNSName | stats count by c_ip DNSName dest sc_status Please let me know of any solution.
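With roughly 2,800 entries, a lookup-driven exclusion is usually more manageable than inline filters. A minimal sketch, where trusted_destinations.csv (with a dest column) is a hypothetical lookup holding the trusted destinations:
index=net sourcetype=proxy dest!=10.1.* dest!=10.2.* sc_status=200 c_ip IN (10.1.10.*, 10.1.11.*)
| search NOT [| inputlookup trusted_destinations.csv | fields dest]
| lookup dnslookup clientip as c_ip OUTPUT clienthost as DNSName
| stats count by c_ip DNSName dest sc_status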
Hello everyone, I want to forward all data from a given index/sourcetype to a third-party system. I did this in outputs.conf:
[tcpout:fastlane]
server = ***:1468
sendCookedData = false
[syslog]
defaultGroup=syslogGroup
[syslog:syslogGroup]
server = ***:514
but it sends just metrics from the internal index. How can I fix it? Thank you.
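If this is running on an indexer or heavy forwarder, selective forwarding is usually driven from props/transforms rather than outputs.conf alone. A minimal sketch that routes one sourcetype to the tcpout group via _TCP_ROUTING (the sourcetype name is a placeholder):
props.conf:
[my_sourcetype]
TRANSFORMS-routeFastlane = route_to_fastlane

transforms.conf:
[route_to_fastlane]
# match every event of this sourcetype and send it to the fastlane group
REGEX = .
DEST_KEY = _TCP_ROUTING
FORMAT = fastlane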
I am a Splunk user and was trying to fix a Splunk query. In the SPL, the user is using a key-value filter to get the resulting events: index="as400" sourcetype="pac:avenger:tss:syslog" (status="failure" OR action="failure") and I see many matching events. When I rewrite the query like the following, I was hoping to see at least some or all of the matching events, but instead I get "No results found": index="as400" sourcetype="pac:avenger:tss:syslog" "failure" When I looked into the raw events by just querying index="as400" sourcetype="pac:avenger:tss:syslog" I found no keyword in the raw events containing the string. How is it possible that the word "failure" is not part of the raw event, and yet I see results when I search using the key-value pair but no results when searching only for "failure"? It baffles me!
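One way to confirm whether "failure" is a search-time value (produced by an eval, lookup, or field alias rather than present in the raw text) is a quick diagnostic like this minimal sketch:
index="as400" sourcetype="pac:avenger:tss:syslog" (status="failure" OR action="failure")
| head 5
| eval in_raw=if(like(_raw, "%failure%"), "yes", "no")
| table status action in_raw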
Hi, how can I extract the first occurrence of "User ABC123 invalid" with rex? Here is the log: 2021-10-03 13:26:44,441 ERROR [APP] User ABC123 invalid: javax.security.auth.login.LoginException: User ABC123 invalid Thanks,
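rex captures only the first match by default (unless max_match is raised), so a minimal sketch along these lines should return just the first occurrence; the \w+ for the user ID is an assumption about its character set:
| rex field=_raw "(?<user_msg>User\s+\w+\s+invalid)"
| table user_msg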