All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Hello, we have 4 search heads on our installation of Splunk 6.5.1, with DB Connect 2.4.0. Suddenly, all the search heads started feeding data into Splunk via DB Connect, whereas until a few days ago only one search head at a time was executing the scheduled query against the database. What can we check to avoid this behavior (which puts 4x the data in the index)? Thanks
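
A quick way to confirm the duplication is to count recent events from that input by ingesting host; a minimal diagnostic sketch, assuming your DB input writes to an index called your_db_index (substitute your own index and time range):

index=your_db_index earliest=-24h
| stats count by host, source
| sort - count

If each search head appears as a distinct host with roughly equal counts, every node is running the input, which usually means the DB Connect input configuration has been replicated to all search heads instead of being enabled on only one of them.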
Hello, I have a fairly short question. In the classic editor this worked just fine, but in the modern one it simply does not loop the calls. Scenario: I have a list of artefacts I want to use in an HTTP POST. First I create my format, something like

%% {{ "object": "{0}" }} %%

I will later access this format in the Splunk HTTP app's "post data" action. Problem: when accessing the format as the body using myformat.*, I expect it to loop for each artefact the format was created for. What ends up happening is a single request with multiple { "object": "ip1" }, { "object": "ip2" }, etc. Is there a new way looping is handled in the modern editor?
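
For clarity, the two behaviors side by side (ip1 and ip2 are hypothetical artefact values): the classic-editor looping issues one POST per artefact, with bodies

{ "object": "ip1" }
{ "object": "ip2" }

while the modern editor sends a single POST whose body is the concatenation { "object": "ip1" }, { "object": "ip2" }, which is not even a valid JSON document without a wrapping array.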
Hi Splunkers, I have prepared a regex extraction using the regex101 site, and now I am trying to extract "Failure Reason" from the log below, but for some reason it fails. Where is the catch? It should be pretty simple, but I am out of ideas now.

Search:

| from datamodel:"Authentication"."Insecure_Authentication"
| search "*Failure*"
| rex "Failure\sReason:\t\t(?<Failure_Reason>.*)\n"

Log:

ComputerName=ot.mydomain.com
TaskCategory=Logon
OpCode=Info
RecordNumber=41462650
Keywords=Audit Failure
Message=An account failed to log on.
Subject:
Security ID: NT AUTHORITY\SYSTEM
Account Name: usergeorge$
Account Domain: dm
Logon ID: 0x3E7
Logon Type: 8
Account For Which Logon Failed:
Security ID: NULL SID
Account Name: george1$
Account Domain: mydomain.com
Failure Information:
Failure Reason: Unknown user name or bad password.
Status: 0xC000006D
Sub Status: 0xC000006A
Process Information:
Caller Process ID: 0x2t20

Regards, vagnet
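
The usual catch with this event layout is that the whitespace after "Failure Reason:" is not necessarily two literal tabs, and the pattern also demands a trailing \n that may not exist where the match ends. A hedged sketch that tolerates any run of whitespace and stops at the line break without requiring one (field name Failure_Reason kept from the original search):

| rex "Failure\sReason:\s+(?<Failure_Reason>[^\r\n]+)"

[^\r\n]+ captures everything up to the end of the line, so it works even when the match is the last line of the event.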
Hi, Splunkers! I have a search that returns several text fields, and I would like to form a table with predefined rows and columns. How can I do this? Here is an example from my search:

index=search timeformat="%d-%m-%YT%H:%M:%S" earliest="26-10-2021T00:00:00" latest="26-10-2021T23:59:00"
| rex field=search "VPN-ANTIVIRUS-WIN:Mandatory:(?<campo1>.*?):"
| rex field=search ";VPN-ANTIVIRUS-RUN-WIN:Audit:(?<campo2>.*?):"

Format of the table I want returned (Portuguese labels kept as written: Título = title, Campo = field, linha = row, "titulo do campo" = field title):

Título    Campo            Status
linha1    titulo do campo  campo1
linha2    titulo do campo  campo2

And this way I can put in as many rows as I want.
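
One hedged way to get a fixed row-per-field layout is to collapse the extractions to single values, transpose so each field becomes a row, and then attach the row labels with eval (field names campo1/campo2 come from the rex above; the labels are placeholders):

index=search timeformat="%d-%m-%YT%H:%M:%S" earliest="26-10-2021T00:00:00" latest="26-10-2021T23:59:00"
| rex field=search "VPN-ANTIVIRUS-WIN:Mandatory:(?<campo1>.*?):"
| rex field=search ";VPN-ANTIVIRUS-RUN-WIN:Audit:(?<campo2>.*?):"
| stats latest(campo1) as campo1, latest(campo2) as campo2
| transpose
| rename column as Campo, "row 1" as Status
| eval Titulo=case(Campo=="campo1", "linha1", Campo=="campo2", "linha2")
| table Titulo, Campo, Status

Each additional rex plus one more case() branch adds another fixed row to the table.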
What configurations are required to forward specific log messages to Splunk? Every log message that contains the phrase "ScanStatistics" needs to be forwarded to Splunk. Let us know what configurations need to be done.
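
Assuming the data passes through a heavy forwarder or indexer (this kind of filtering does not run on a universal forwarder), the standard pattern is to send everything to the null queue and route matching events back to the index queue. A sketch, with your_sourcetype as a placeholder:

props.conf:

[your_sourcetype]
TRANSFORMS-filter = drop_all, keep_scanstatistics

transforms.conf:

[drop_all]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[keep_scanstatistics]
REGEX = ScanStatistics
DEST_KEY = queue
FORMAT = indexQueue

The transforms are applied in order, so the keep rule overrides the drop for events containing ScanStatistics.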
Dears, I am currently using the AppDynamics API to pull metrics data from AppDynamics for Application Infrastructure Performance:

https://x.y.z/controller/rest/applications/My-App/metric-data?metric-path=Application Infrastructure Performance|*|Individual Nodes|*|JVM|*&time-range-type=BEFORE_NOW&duration-in-mins=5&output=JSON

The data is coming in fine, but it has some problems, such as some metrics not containing metric values. The data comes as a large JSON of JSONs, where each small JSON represents an event or entry.

Sample of a JSON event response that is correct:

{
  "metricId" : 12345,
  "metricName" : "JVM|Process CPU Burnt (ms/min)",
  "metricPath" : "Application Infrastructure Performance|xyz|Individual Nodes|abcdf|JVM|Process CPU Burnt (ms/min)",
  "frequency" : "ONE_MIN",
  "metricValues" : [ {
    "startTimeInMillis" : 1635511140000,
    "occurrences" : 0,
    "current" : 14550,
    "min" : 12330,
    "max" : 17850,
    "useRange" : true,
    "count" : 5,
    "sum" : 75700,
    "value" : 15140,
    "standardDeviation" : 0
  } ]
}

Sample of a JSON event response that is bad/incorrect (contains the phrase "METRIC DATA NOT FOUND"):

{
  "metricId" : 123456,
  "metricName" : "METRIC DATA NOT FOUND",
  "metricPath" : "Application Infrastructure Performance|xyz|Individual Nodes|abcdf|JVM|Process CPU Burnt (ms/min)",
  "frequency" : "ONE_MIN",
  "metricValues" : [ ]
},

Question: Is there a way to pull all the data while removing whatever contains metricName="METRIC DATA NOT FOUND", so that I don't ingest a pile of useless data?
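
If dropping the bad entries at index time is acceptable, a hedged props/transforms sketch for the component that parses the data, assuming each small JSON arrives as its own event and the sourcetype name appd:metrics is a placeholder:

props.conf:

[appd:metrics]
TRANSFORMS-dropnotfound = drop_metric_not_found

transforms.conf:

[drop_metric_not_found]
REGEX = METRIC DATA NOT FOUND
DEST_KEY = queue
FORMAT = nullQueue

Alternatively, keep everything and filter at search time with | search NOT metricName="METRIC DATA NOT FOUND", though that still spends license and storage on the useless events.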
I'd like to add a percentage to the following panel. I've added severity since I just want to see it for critical and high severity. Now I'd like to define an SLA value of, let's say, 2 hours, and then get, for each rule, the percentage of its count that breached that SLA. In other words: in this statistic I want an additional field that tells me what percentage of the counted events for those rules have a longer max time to triage than 2h.

rule 1, count 20 (10 breached the 2h SLA) -> a field that tells me 50%

I can't seem to find a good way to get a percentage in. Here is the whole SPL (mostly from ES):

| tstats summariesonly=true allow_old_summaries=false earliest(_time) as _time FROM datamodel=Incident_Management BY source, "Notable_Events_Meta.rule_id"
| rename "Notable_Events_Meta.*" as "*"
| lookup update=true correlationsearches_lookup _key as source OUTPUTNEW annotations, security_domain, severity, rule_name, description as savedsearch_description, rule_title, rule_description, drilldown_name, drilldown_search, drilldown_earliest_offset, drilldown_latest_offset, default_status, default_owner, next_steps, investigation_profiles, extract_artifacts, recommended_actions
| eval rule_name=if(isnull(rule_name),source,rule_name), rule_title=if(isnull(rule_title),rule_name,rule_title), drilldown_earliest=case(isint(drilldown_earliest_offset),('_time' - drilldown_earliest_offset),(drilldown_earliest_offset == "$info_min_time$"),info_min_time,true(),null()), drilldown_latest=case(isint(drilldown_latest_offset),('_time' + drilldown_latest_offset),(drilldown_latest_offset == "$info_max_time$"),info_max_time,true(),null()), security_domain=if(isnull(security_domain),"threat",lower(security_domain)), rule_description=case(isnotnull(rule_description),rule_description,isnotnull(savedsearch_description),savedsearch_description,true(),"unknown")
| eval governance_lookup_type="default"
| lookup update=true governance_lookup savedsearch as source, lookup_type as governance_lookup_type OUTPUT governance, control
| eval governance_lookup_type="tag"
| lookup update=true governance_lookup savedsearch as source, tag, lookup_type as governance_lookup_type OUTPUT governance as governance_tag, control as control_tag
| eval governance=mvappend(governance,NULL,governance_tag), control=mvappend(control,NULL,control_tag)
| fields - governance_lookup_type, governance_tag, control_tag
| join rule_id [| inputlookup incident_review_lookup | eval _time=time | stats earliest(_time) as review_time by rule_id]
| eval ttt=(review_time - '_time')
| stats count, values(severity) as severity, avg(ttt) as avg_ttt, min(ttt) as min_ttt, max(ttt) as max_ttt by rule_name
| search severity=high OR severity=critical
| `uptime2string(avg_ttt, avg_ttt)`
| `uptime2string(max_ttt, max_ttt)`
| `uptime2string(min_ttt, min_ttt)`
| sort severity -avg_ttt
| rename "*_ttt*" as "*(time_to_triage)*"
| fields - "*_dec"
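
One hedged way to get the breach percentage (7200 seconds = the 2h SLA; names reused from the search above): flag each notable before the stats, then aggregate the flag alongside count:

| eval ttt=(review_time - '_time')
| eval breached=if(ttt > 7200, 1, 0)
| stats count, values(severity) as severity, sum(breached) as breached_count, avg(ttt) as avg_ttt, min(ttt) as min_ttt, max(ttt) as max_ttt by rule_name
| eval pct_breached=round(100 * breached_count / count, 1)

With the example above (count 20, 10 breached), pct_breached comes out as 50.0.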
Hi! Is there any possibility to make my tables static and determine their row width/height in Dashboard Studio? I'm trying to build visualisations, but my tables are a mess because the column width changes depending on what kind of data is in the table (I have a table that updates every 5 minutes with alarms, and some alarms have a long text while others have a very short text in the message column). The header text changes its position, so I can't put icons on top of it because they move so much. Is there any way around this, or any ideas on how to build the table view some other way? Thanks for the help!
Trying to extract Splunk search query data from the Splunk API using Postman. What parameters do I need to pass to get a successful response?

https://testsplunk:8089/services/search/jobs/export?output_mode=csv

Headers: [{"key":"search","value":"index=abc sourcetype=xyz|stats count by host ","description":"","type":"text","enabled":true}]
Authorization header: UserName: jhasuagduh Password: pwd

I am getting 400 Bad Request and 401 Unauthorized as responses. Please assist. Thanks, Sagar
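
For /services/search/jobs/export the search string belongs in the request body (or query string) as a form parameter named search, not in a header, and it must begin with the search keyword; credentials go in Basic auth. A hedged curl equivalent you can mirror in Postman (Body > x-www-form-urlencoded, plus Basic Auth; host, user, and password are your own):

curl -k -u jhasuagduh:pwd https://testsplunk:8089/services/search/jobs/export \
  -d output_mode=csv \
  -d search="search index=abc sourcetype=xyz | stats count by host"

The 400 typically means the search parameter is missing or lacks the leading search keyword, and the 401 means the Basic auth header is not being sent.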
Hi, I want to extract the following term from this message: (MaRSEPbac, [MaRSEPbac_Old2], [MaRSEPbac]), i.e. the string between the parentheses. Message:

16:21:32.843 [gcp-pubsub-subscriber1] INFO  zbank.harissa.cockpit.InboundGateway - update: [export_service] context:RDB (MaRSEPbac, [MaRSEPbac_Old2], [MaRSEPbac]) progress:3/3 status:successful msg:exporting rrid: [8d9a85b8-0d34-4dea-8901-17520b4b9b9d] rrid:f50a0cce-af13-4e64-88aa-84de045380ca

How is that done? Thanks!
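
A hedged rex sketch that anchors on the context: marker so no other bracketed part of the event can match (the source field name message is an assumption; drop field= to run against _raw):

| rex field=message "context:\S+\s\((?<context_list>[^)]+)\)"

[^)]+ captures everything up to the first closing parenthesis, yielding MaRSEPbac, [MaRSEPbac_Old2], [MaRSEPbac].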
Folks, I need some assistance understanding why Splunk is reporting different IPs for the same hostname (an Active Directory server) even though the AD server has only one static IP assigned to it. For example, let's assume my AD server is AD01.domain.com with IP 1.2.3.4. Now if I run a search to group events where the src host is AD01:

index=ad | stats list(action) by src, src_ip | where src="AD01.domain.com"

it shows the following results, where there is a different src_ip for every event for the same host AD01:

src              src_ip        list(action)
AD01.domain.com  2.3.4.5       success
AD01.domain.com  10.76.12.102  success
AD01.domain.com  10.x.12.101   success
AD01.domain.com  x.x.x.x       failure

Why is that?
Hi Team, a question about the Splunk App for Phantom Reporting.

Testing 1: The HEC token is created on the HF, the indexes are created on the indexer, and the roles/user/Splunk App for Phantom Reporting app are created on the SH. On the Phantom side, if I give the host as the HF IP, it does not work; I get this error: Test connection failed. Test connection failed for phantomsearch on host "Splunk": No results found.

Testing 2: The indexes are created on the indexer, and the HEC token/user/roles/Splunk App for Phantom Reporting app are created on the SH. On the Phantom side, if I give the host as the SH IP, it works (but this is not accepted as best practice).

Testing 3: The indexes/HEC token/user/role are created on the indexer, and the Splunk App for Phantom Reporting app is created on the SH. On the Phantom side, if I give the host as the indexer IP, it works (this is also not accepted as best practice).

What should I do to make Testing 1 work?
I want to use Splunk to work out the effective working hours of employees based on AD data. How should I compute these statistics?
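
A hedged starting sketch, assuming Windows Security logon/logoff events land in an ad index (the index name and field names are assumptions; EventCode 4624 = logon, 4634 = logoff): take the first and last event per user per day as the working span:

index=ad (EventCode=4624 OR EventCode=4634)
| bin _time span=1d as day
| stats min(_time) as first_seen, max(_time) as last_seen by user, day
| eval working_hours=round((last_seen - first_seen) / 3600, 2)
| table day, user, working_hours

This treats the whole span between first and last activity as "working", so you may want to refine it with workstation lock/unlock events (4800/4801) to subtract long idle periods.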
I have a field "skill" which takes multiple values. I want to extract the count of each of the values of skill and store each of them in variables, say v1, v2, v3, v5, etc., where their values are v1 = 181, v2 = 144, v3 = 80, and so on.
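
A hedged sketch: stats produces one row per skill value, and transpose then turns each value into its own column, which you can reference downstream like single-value variables (your_index is a placeholder):

index=your_index
| stats count by skill
| transpose header_field=skill column_name=metric

After the transpose there is a single row whose column names are the distinct skill values, each holding its count. If you need the fixed names v1, v2, ..., you can instead count specific values directly, e.g. | stats count(eval(skill="java")) as v1, count(eval(skill="python")) as v2 (the skill values here are hypothetical).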
Hi, so I have a bar graph with many values. The end user who has to use that bar graph needs to see whether the values are over or under certain limits at some point. That's why I want to draw a line at both the max allowed value and the min needed value. I attached a picture of how I want it to look. Is it possible to achieve something like this?
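
One hedged approach for Simple XML charts: append the two thresholds as constant fields and render them as a chart overlay on top of the bars (the field names and the values 100 and 20 are placeholders):

... your search ...
| eval max_allowed=100, min_needed=20

and in the panel's XML:

<option name="charting.chart.overlayFields">max_allowed,min_needed</option>

The overlay fields are drawn as lines spanning the chart while the remaining fields stay as bars.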
Hi Splunk Community, I was wondering if anyone might be able to provide some advice on using the ServiceNow add-on for Splunk, specifically with regard to consuming data from the CMDB. There are OOB inputs that come with the add-on which are fine for some basic tables; however, I'm looking at the CI relationship table, which currently contains 19m+ records! We don't want to consume all of those, as we're only really interested in the ones that relate to the basic tables we're already importing using the OOB inputs, which is around 10 tables. The filters available with the add-on don't provide enough functionality for our requirement. Maybe a custom REST API call outside the ServiceNow add-on, or maybe a post from ServiceNow to Splunk, is the way to go. Keen to hear how others might have tackled anything similar.
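
If you go the custom REST route, a hedged sketch of what the call could look like against ServiceNow's standard Table API (instance, credentials, and the encoded query are placeholders; cmdb_rel_ci is the usual name of the CI relationship table):

curl -u 'user:pass' \
  -H "Accept: application/json" \
  "https://yourinstance.service-now.com/api/now/table/cmdb_rel_ci?sysparm_query=parent.sys_class_nameINcmdb_ci_server,cmdb_ci_appl&sysparm_fields=parent,child,type&sysparm_limit=1000"

sysparm_query accepts dot-walked conditions, so the relationships can be restricted to the CI classes of the ~10 tables you already ingest, and sysparm_fields plus sysparm_limit keep the payload and paging manageable.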
Let's say I have this query:

index=x | stats count as Total, sum(AMMOUNT) as TAmmount BY MERCHANT, SUBMERCHANT

I want to make a comparison, by percentage, between this month and the average of the total over the three months before. How do you go about using timewrap to achieve that goal?
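
timewrap only operates on timechart output, so it doesn't combine easily with a split by MERCHANT and SUBMERCHANT. A hedged alternative is to bucket the last few months into "this month" versus "prior three months" with eval and compare directly (field names reused from the query above):

index=x earliest=-3mon@mon
| eval period=if(_time >= relative_time(now(), "@mon"), "this_month", "prior")
| stats sum(eval(if(period=="this_month", AMMOUNT, 0))) as this_month, sum(eval(if(period=="prior", AMMOUNT, 0))) as prior_total by MERCHANT, SUBMERCHANT
| eval prior_avg=prior_total / 3
| eval pct_of_avg=round(100 * this_month / prior_avg, 1)

earliest=-3mon@mon covers the three complete prior months plus the current partial month, and pct_of_avg expresses this month's total as a percentage of the prior three-month average.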
Hi team, as titled: how do I rename 'row 1' to 'number' after transpose? I tried rename and replace, but neither works.
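
Assuming this is the default transpose output, the data column is actually named "row 1" with a space, so the rename needs quotes; a minimal sketch:

... | transpose
| rename "row 1" as number

If that still fails, check the exact column name in the results header. Note that transpose's column_name option renames the first column (the one holding the former field names), not the "row 1" data column.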
Has anyone encountered this issue on Splunk Cloud with Enterprise Security, and how did you fix it: "Identity: An error occurred while the Asset and Identity Management modular input ran"? When I checked the error, it says: lookup file error, unknown path or update time. I'm pretty sure the lookups exist, but I am not sure what it means by update time?
Hi all, I keep getting "DateParserVerbose [6827 merging] - Failed to parse timestamp in first MAX_TIMESTAMP_LOOKAHEAD (75) characters of event. Defaulting to timestamp of previous event" warnings. The timestamp in the logs looks like:

2021/10/28T16:06:08.183-07:00

props.conf looks like:

DETERMINE_TIMESTAMP_DATE_WITH_SYSTEM_TIME = true
MAX_TIMESTAMP_LOOKAHEAD = 75
MAX_DAYS_AGO = 36500
MAX_DAYS_HENCE = 36500
TIME_FORMAT = %d-%b-%y %I.%M.%S.%6Q %p
SHOULD_LINEMERGE = false
TRUNCATE = 500000

Does anyone know what my TIME_FORMAT should be instead?
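
The configured TIME_FORMAT describes a completely different layout (something like 28-Oct-21 04.06.08.183000 PM) from what is actually in the logs. A hedged match for 2021/10/28T16:06:08.183-07:00 would be:

TIME_FORMAT = %Y/%m/%dT%H:%M:%S.%3N%:z

%3N consumes the millisecond field and %:z the -07:00 style offset; if your Splunk version rejects %:z, try %z instead, or end the format at %3N and let timezone detection handle the offset.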