All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hello, please guide me: how can I use the SOC Prime tool with Splunk? I am new to SOC Prime. Regards, Rahul
Hi Splunkers, I am facing the timestamp issue below. Could you please help me with it?

08-01-2021 22:49:25.563 -0400 WARN DateParserVerbose - Failed to parse timestamp in first MAX_TIMESTAMP_LOOKAHEAD (800) characters of event. Defaulting to timestamp of previous event (Sun Aug 1 22:49:25 2021).

[props]
TIME_FORMAT = %m-%d-%Y%H:%M:%S.%3Q
SHOULD_LINEMERGE = False
is_valid = True
maxDist = 9999
BREAK_ONLY_BEFORE_DATE =
DATETIME_CONFIG = CURRENT
LINE_BREAKER = ([\r\n]+)
MAX_TIMESTAMP_LOOKAHEAD = 800
NO_BINARY_CHECK = true
category = Custom
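Two things in that stanza look like likely culprits, hedged: TIME_FORMAT has no space between %Y and %H even though the raw timestamp does, and DATETIME_CONFIG = CURRENT tells Splunk to stamp events with the current time rather than parse one, which overrides TIME_FORMAT. A sketch of a corrected stanza, assuming the events start with a timestamp like "08-01-2021 22:49:25.563 -0400" and the stanza name is a placeholder:

[your_sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
# timestamp sits at the very start of each event
TIME_PREFIX = ^
# matches "08-01-2021 22:49:25.563 -0400" (note the space between %Y and %H)
TIME_FORMAT = %m-%d-%Y %H:%M:%S.%3N %z
MAX_TIMESTAMP_LOOKAHEAD = 30
NO_BINARY_CHECK = true
# DATETIME_CONFIG deliberately left unset so TIME_FORMAT is honored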
I need to display a Splunk dashboard panel as a pop-up window when a drilldown is clicked. For example, with

index=_internal | stats count by source, sourcetype

when a row of the table is clicked, it should pop up a panel containing the search below:

index=_internal | table sourcetype
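Simple XML has no true pop-up window, but a common approximation is a hidden panel that a drilldown token reveals. A minimal sketch, assuming classic Simple XML; the token names are placeholders:

<dashboard>
  <row>
    <panel>
      <table>
        <search>
          <query>index=_internal | stats count by source, sourcetype</query>
        </search>
        <drilldown>
          <!-- clicking a row reveals the second panel and passes the clicked sourcetype -->
          <set token="show_detail">true</set>
          <set token="clicked_sourcetype">$row.sourcetype$</set>
        </drilldown>
      </table>
    </panel>
    <panel depends="$show_detail$">
      <table>
        <search>
          <query>index=_internal sourcetype="$clicked_sourcetype$" | table sourcetype</query>
        </search>
      </table>
    </panel>
  </row>
</dashboard>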
I had the Splunk Cloud Gateway installed before it was standard (Splunk 7.x) and working, with alerts and dashboards accessible from my phone. I believe a license update that stripped my account (the new terms allow only one account, so admin) broke it, and I stopped getting alerts. Since it's a home lab and not prod, I didn't dig into it. Now that I am digging into it, the gateway dashboard is showing errors. SPL: index=_internal source=*cloud* ERROR AND NOT SUBSCRIPTION. I can register my device, but it can't see any dashboards; it seems to time out. Google turns up almost nothing on troubleshooting this beyond talk of using proxies, and I am not running a proxy. What could the issue be?
I have uninstalled the Splunk_TA_jmx add-on (by removing the application directory and restarting Splunk), but I am still getting data of sourcetype=jmx into the main index every 60 seconds. Where is this data coming from? I would like to reinstall the add-on after I get everything back to the pre-install state.
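A hedged way to track down where that input is still defined, assuming CLI access on the instance that is still producing the data:

# show every inputs setting still on disk that mentions jmx, and which file it comes from
$SPLUNK_HOME/bin/splunk btool inputs list --debug | grep -i jmx

# look for leftover copies of the add-on anywhere under etc (e.g. a different app or a user directory)
find $SPLUNK_HOME/etc -iname "*jmx*"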
I need to mask the values of the fields <ab:Nm>, <ab:StrtNm>, <ab:PstCd>, <ab:TwnNm>, <ab:CtrySubDvsn>, <ab:Ctry>, and <ab:Ustrd> in events like the ones below.

First event:
<ab:Dbtr> <ab:Nm>ACB L OESGT AGBDH</ab:Nm> <ab:PstlAdr> <ab:StrtNm>12345 BELBON WAY</ab:StrtNm> <ab:PstCd>45352-1242</ab:PstCd> <ab:TwnNm>CUBA TWP</ab:TwnNm> <ab:CtrySubDvsn>AB</ab:CtrySubDvsn> <ab:Ctry>CD</ab:Ctry> </ab:PstlAdr> </ab:Dbtr> <ab:RmtInf> <ab:Ustrd>happy birthday </ab:Ustrd> </ab:RmtInf>

Second event:
<ab:Dbtr> <ab:Nm>AMIRA S ELHASSAN</ab:Nm> <ab:PstlAdr> <ab:StrtNm>5267 APHEG DR</ab:StrtNm> <ab:PstCd>54672-1080</ab:PstCd> <ab:TwnNm>ANTARTICA BEACH</ab:TwnNm> <ab:CtrySubDvsn>EF</ab:CtrySubDvsn> <ab:Ctry>CD</ab:Ctry> </ab:PstlAdr> </ab:Dbtr> <ab:RmtInf> <ab:Ustrd>happy birthday, birthday party on me</ab:Ustrd> </ab:RmtInf>

Third event:
<ab:Dbtr> <ab:Nm>ALYSSA FOSTER</ab:Nm> <ab:PstlAdr> <ab:StrtNm>529833 PRIME BILL CT</ab:StrtNm> <ab:PstCd>45673-8297</ab:PstCd> <ab:TwnNm>BEDERICK</ab:TwnNm> <ab:CtrySubDvsn>MD</ab:CtrySubDvsn> <ab:Ctry>CD</ab:Ctry> </ab:PstlAdr> </ab:Dbtr> <ab:RmtInf> <ab:Ustrd>happy birthday my love, have fun.</ab:Ustrd> </ab:RmtInf>

Can someone help me with the regex?
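One hedged approach, assuming index-time masking on the indexer or heavy forwarder is acceptable and that the sourcetype stanza name below is a placeholder, is a single SEDCMD in props.conf; the backreference \1 echoes the captured tag name back into the closing tag and the replacement:

[your_sourcetype]
# mask everything between each listed opening/closing tag pair
SEDCMD-mask_pii = s/<ab:(Nm|StrtNm|PstCd|TwnNm|CtrySubDvsn|Ctry|Ustrd)>[^<]*<\/ab:\1>/<ab:\1>XXXXX<\/ab:\1>/g

If search-time masking is preferred instead, the same pattern could be adapted to rex mode=sed or an eval replace().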
Hi all. We received a bulletin that our UF certificates were expiring. I downloaded the credentials package and installed it per the documentation on my deployment server using <...install app...-update 1...>. That appears to have updated the apps/100_xxxx_splunkcloud folder/files, but my questions are:
1. Doesn't the deployment-apps/100_xxxx_splunkcloud folder/files also need to be updated to get the new credentials out to all of my UFs?
2. How can I validate the new certificate date on my Windows UFs and on my HFs?
Thanks in advance.
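For question 2, one hedged option is to read the expiry date straight off the certificate file with OpenSSL; the PEM filename below is a placeholder for whatever the credentials app actually ships on your forwarders:

# print the notAfter (expiry) date of the client certificate
openssl x509 -noout -enddate -in "$SPLUNK_HOME/etc/apps/100_xxxx_splunkcloud/default/client_cert.pem"

On the HFs (full Splunk Enterprise) the bundled binary can be invoked as $SPLUNK_HOME/bin/splunk cmd openssl; on the Windows UFs any OpenSSL install works, pointed at the same app path under the UF installation directory.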
Hello Splunkers. I made a Splunk search to count the number of blocked URLs as a single value per one-day span over a three-day search period. Here's my search:

index=proxy action=blocked | bin _time span=1d | stats count(http_url) by _time

I want to show the count in thousands. I tried the search below, but no results appear:

index=proxy action=blocked | bin _time span=1d | eval url_count= http_url/1000 | stats count(url_count) by _time

Please help me with it, thanks ^_^
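A hedged guess at the cause: the eval runs before any aggregation, so url_count is computed per event from the non-numeric http_url field and comes back null, leaving stats nothing to count. A sketch that aggregates first and scales afterwards:

index=proxy action=blocked
| bin _time span=1d
| stats count(http_url) as url_count by _time
| eval url_count_thousands = round(url_count/1000, 2)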
Hi, I have written a script which runs every hour; the 24-hour window is from 07:00 to 06:00 the next day. My requirement is to provide a monthly count in the report, but it should only consider the last log file of the day (06:00), which contains all the updated information for the day.

index=XX host=XX source="abc_YYYYMMDD.csv"
| dedup _raw
| fields source A B C
| replace abc*.csv WITH * in source
| eval source=strftime(strptime(source,"%Y%m%d"),"%d-%B-%Y")
| eval jobs=A+B+C
| dedup jobs
| stats count(jobs) by source

This displays all the events, but I only want the count from the last log file.
Hello all, I'm trying to create an alert for Successful Brute Force Attempts using the Authentication data model. Currently, I'm doing this:

| tstats summariesonly=true count as success FROM datamodel=Authentication where Authentication.user!="*$*" AND Authentication.action="success" BY _time span=60m sourcetype Authentication.src Authentication.user
| join max=0 _time, sourcetype, Authentication.src, Authentication.user
    [| tstats summariesonly=true count as failure FROM datamodel=Authentication where Authentication.user!="*$*" AND Authentication.action="failure" BY _time span=60m sourcetype Authentication.src Authentication.user]
| search success>0 AND failure>0
| where success<(failure*.05)
| rename Authentication.* as *
| lookup dnslookup clientip as src OUTPUT clienthost as src_host
| table _time, user, src, src_host, sourcetype, failure, success

It aggregates the successful and failed logins by each user for each src, by sourcetype, by hour. Then it returns the info when a user has failed to authenticate to a specific sourcetype from a specific src at least 95% of the time within the hour, but not 100% (the user tried to log in a bunch of times, most of the attempts failed, but at least one succeeded). What everyone else would like me to do is look for the traditional pattern for finding successful brute force attempts: many failed logins followed by a successful one. I can't seem to find an efficient way of doing that with the Splunk Authentication data model. We get over a million authentication attempts per day, and just under 100,000 of those are failures. We can't afford Splunk Enterprise Security (Higher Ed), and my understanding is that this is more or less how the ES Successful Brute Force detection works anyway. Do any of you have advice on finding a pattern of a large number of something (failures, whatever) followed by something else (success, etc.)?
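One hedged way to drop the join (often the expensive part) is a single tstats split by Authentication.action, then pivoting the action counts into columns; the thresholds below are placeholders to tune:

| tstats summariesonly=true count from datamodel=Authentication
    where Authentication.user!="*$*"
    by _time span=60m sourcetype Authentication.src Authentication.user Authentication.action
| rename Authentication.* as *
| eval failure=if(action="failure", count, 0), success=if(action="success", count, 0)
| stats sum(failure) as failure sum(success) as success by _time sourcetype src user
| where failure>20 AND success>0 AND success<(failure*.05)

This still scores "many failures and at least one success in the same hour" rather than strict ordering; to approximate "failures followed by a success", the same tstats could be run at a finer span (e.g. 5m) and fed through streamstats to sum the failures preceding each success.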
What I'm doing: I do some processing of my own and then write the results out as JSON, appending them to the end of a file that Splunk indexes. The problem: instead of indexing only the information that wasn't there when the file was last read, Splunk indexes everything again as if it were completely new, which gives me duplicates. I tried appending the information without altering what was already there, but that doesn't seem to solve anything. What would you do to get only the new info indexed?
Hi Team, I am very new to Splunk and I need your help changing my query to meet a requirement. Please validate my syntax. How do we get a list of server names (those whose color changed to Grey or Uninitialized) instead of a count of servers? (Server names come under ServerName.) Please find my syntax and result:

sourcetype=ABC Category IN ("Support","Patch") HealthValue IN (Grey, Uninitialized)
| bin _time span=1d
| dedup ServerName HealthValue _time
| timechart count(ServerName) as "QTY Servers" by HealthValue

I am waiting for a quick response.
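A hedged sketch, assuming the field names above: timechart is awkward for listing values, so stats can return both the distinct count and the actual server names per day and health value:

sourcetype=ABC Category IN ("Support","Patch") HealthValue IN (Grey, Uninitialized)
| bin _time span=1d
| dedup ServerName HealthValue _time
| stats dc(ServerName) as "QTY Servers" values(ServerName) as "Servers" by _time HealthValue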
Hi All, in Splunk, is it possible to join two join queries? I have queries like:

1) index=_inter sourcetype=project | dedup project server | eval Pro=project | eval source1="Y" | table source1 Pro | join Pro type=outer [search sourcetype=SA pronames=* | dedup pronames | eval Pro=pronames] | table Pro

which generates this output:

pro
pro1
pro2
pro3

2) I have a similar query, only changing the sourcetype in the join:

index=_inter sourcetype=project | dedup project server | eval Pro=project | eval source1="Y" | table source1 Pro | join Pro type=outer [search sourcetype=SC pronames=* | dedup pronames | eval Pro=pronames] | table Pro

pro
pro1
pro2
pro3

I'm using both for generating alerts, two alerts. Now I want to send only one alert by merging both queries; is that possible, so I can send the alerts in a single mail? Like below:

pro    pros
pro1   pro1
pro2   pro2
pro3   pro3
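One hedged way to get both result sets into a single table (and so a single alert), assuming the sourcetype and field names above and adding the index to the subsearches: run the base search once and outer-join each subsearch, flagging which sourcetype matched:

index=_inter sourcetype=project
| dedup project server
| eval Pro=project
| join type=outer Pro
    [ search index=_inter sourcetype=SA pronames=* | dedup pronames | eval Pro=pronames, from_SA="Y" | table Pro from_SA ]
| join type=outer Pro
    [ search index=_inter sourcetype=SC pronames=* | dedup pronames | eval Pro=pronames, from_SC="Y" | table Pro from_SC ]
| table Pro from_SA from_SC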
Hi, I'm currently trying to get the auth token that can be used for REST APIs without exposing my password. Since it is a modular input script, I am using passAuth = <user> in inputs.conf. This provides me the auth token via stdin, but it exposes some information that I would rather not expose: the stdin I get is, for example, "sessionKey=abcd1234". Is there a way to omit the "sessionKey=" part and only return the session key itself? https://docs.splunk.com/Documentation/Splunk/8.2.1/Admin/Inputsconf https://docs.splunk.com/Documentation/SplunkCloud/8.2.2105/AdvancedDev/ModInputsIntro Alternatively, is there another way to retrieve the session key / auth token without including the password in my script? Thanks!
Below are the events:

{ "kubernetes" : { "pod_name" : "p1" }, traceId: "t-1" }
{ "kubernetes" : { "pod_name" : "p1" }, traceId: "t-2" }
{ "kubernetes" : { "pod_name" : "p2" }, traceId: "t-4" }
{ "kubernetes" : { "pod_name" : "p3" }, traceId: "t-5" }

I am looking for a dashboard with 2 panels:
1. Showing the unique # of pods
2. A table with pod name and # of unique traces

For the events above, the panels would be:
1. Unique # of pods = 3
2. Table of pod name and # of unique traces:

----------------------------------
| POD NAME | # of Traces |
----------------------------------
| p1 | 2 |
----------------------------------
| p2 | 1 |
----------------------------------
| p3 | 1 |
----------------------------------
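A hedged sketch of the two panel searches, with index and sourcetype as placeholders and assuming the JSON fields are auto-extracted as kubernetes.pod_name and traceId (add | spath first if they are not):

Panel 1 (single value, unique pod count):
index=your_index sourcetype=your_sourcetype
| stats dc(kubernetes.pod_name) as unique_pods

Panel 2 (table, unique traces per pod):
index=your_index sourcetype=your_sourcetype
| stats dc(traceId) as "# of Traces" by kubernetes.pod_name
| rename kubernetes.pod_name as "POD NAME"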
All, I just upgraded to 8.2.1 last night and noticed something today with stats.

# This search returns 160k+ events
index=netfw
162276

# This returns 0 in Smart mode; the search returned data in 8.1.x but no data in 8.2.1
index=netfw | stats count
0

# The same search in Verbose mode, however, returns the count
index=netfw | stats count
162276

Shouldn't Smart mode have returned the count correctly as well? It did work that way in 8.1.
I need a few useful correlation searches (SPL) to keep a close eye on user behavior (internal or malicious) in ES, please. Thank you in advance.
Hello. When I run "splunk cmd python scripts\test.py" it outputs data nicely. When I set the same script up through Splunk Enterprise Web, it errors out with:

File "C:\Program Files\Splunk\bin\scripts\TEST.py", line 58
    except HTTPError, e:
                    ^
SyntaxError: invalid syntax

I tried this from a different machine that had Python installed, and from the Windows command prompt it outputs the data fine. Why doesn't this work in Splunk Enterprise?
I need to be able to display the Authentication.reason field in a | tstats report, but for some reason, when I add the field to the by clause, my search returns no results (as though the field were not present in the data), even though when I query the data directly the field IS there. I have tried this with and without data model acceleration, to no avail.

This search returns zero results:

| tstats count from datamodel=Authentication by Authentication.user, Authentication.app, Authentication.reason

This search returns results in the format I need, except I need to query multiple indexes via the data model:

index=<indexname> tag=authentication | stats count by user, app, reason
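One thing worth ruling out, hedged: tstats (like stats) drops events in which any BY field is null, so if reason is not actually populated in the data model mapping, the whole report can come back empty even though the raw field exists. Two quick checks against the model itself:

| tstats count from datamodel=Authentication where Authentication.reason=* by Authentication.reason

| datamodel Authentication Authentication search
| stats count by Authentication.reason

If those return nothing, the reason field is likely not being mapped into the Authentication data model for that sourcetype (for example, a missing field alias or eval in the CIM mapping).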
The install works on RHEL7 and RHEL8. I am trying to install the 8.0.2.1 version of the Splunk UF on "Red Hat Enterprise Linux Server release 6.10 (Santiago)", kernel 2.6.32-754.35.1.el6.x86_64.

Command:
"{{ splunk_home }}/bin/splunk enable boot-start-user {{ splunk_user }} -systemd-managed 0 --answer-yes --auto-ports --no-prompt --accept-license"

Error:
"stderr": "execve: No such file or directory\n while running command /sbin/chkconfig",
"stderr_lines": ["execve: No such file or directory", " while running command /sbin/chkconfig"]