All Topics

All, I'm setting up an index cluster of 3 nodes soon and sizing some disks. It feels like you would always want a replication factor of 2 with 2 searchable copies. But I see that I can, in theory, set a replication factor of 2 with only 1 searchable copy, which leaves out the tsidx/bloom files etc. The docs really seem to gloss over this. What benefit does this have? What would recovery look like in an RF=2/SC=1 situation with a lost indexer? How would I bring that replicated copy online if I lost an indexer for good and didn't have a second searchable copy?
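For reference, these factors are set on the cluster manager in server.conf; a minimal sketch, assuming a recent Splunk version that uses mode = manager (older versions use mode = master):

    [clustering]
    mode = manager
    replication_factor = 2
    search_factor = 1

With a search factor of 1, recovering a lost searchable copy means the cluster has to rebuild the tsidx files from a raw-data replica, so bucket fix-up after losing an indexer takes longer than with 2 searchable copies.
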
I have two different sets of data files which are related by a single named field. Let's call that field common_field. From one set of data files I can get the count per common_field, and from the other set I can get the count of errors per common_field. I want to create a query which will give me the ratio of error count to total count per common_field. I have tried to use subsearches but it is not working.
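A minimal sketch of one common approach: search both data sets in a single pass instead of using a subsearch. The index and sourcetype names here are hypothetical placeholders for the two sets of files:

    index=mydata (sourcetype=all_events OR sourcetype=error_events)
    | stats count AS total_count count(eval(sourcetype="error_events")) AS error_count BY common_field
    | eval error_ratio = round(error_count / total_count, 4)
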
Is there any plan to support PyODBC as a DB exit? This library makes it easy for us to support multiple DB types in our analytical python environment. 
I am new to Splunk, so the answer can help me learn more. I have a message in a log which looks something like: k45ksp: k45kspProcessControlBuff task 1 (p_id: 2). I need to extract just k45kspProcessControlBuff from the message field and count how many times it has occurred in the log.
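A minimal sketch, assuming the token you want always sits between the literal "k45ksp:" prefix and the word "task"; the index and sourcetype are hypothetical:

    index=main sourcetype=app_log "k45ksp:"
    | rex "k45ksp:\s+(?<task_name>\S+)\s+task"
    | stats count BY task_name
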
Hi Splunkers. I need some ideas for showing KPIs in Splunk based on Windows or Linux logs. We have AD logs, system logs, and application logs. On Linux, we have secure logs. We are not trying to go with ITSI as of now, but we want to demo a KPI in Splunk Enterprise to other teams to showcase the potential of Splunk. Please provide me some recommendations. Thanks in advance.
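As one illustrative starting point, a failed-logon KPI from Windows Security logs could look like the sketch below; the index name is an assumption:

    index=wineventlog EventCode=4625
    | timechart span=1h count AS failed_logons

Similar timecharts over Linux secure logs (failed SSH attempts) or application error counts make easy demo panels.
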
I have been unable to find a suitable driver to get the DB Connect app to ingest Azure Table data from a Cosmos DB database. Has anyone had any luck doing so, or is there a better way to go about it?
Hi! I have a panel in a dashboard that uses timechart. I want it to zoom automatically to the highest count, or to where count>0, after it is done loading. Is there a way to achieve this? Thanks
I have two charts that work as expected when separate, but I'm having a hard time combining them into one chart. They have different search criteria (but the same index/source), so search 2 ends up being wrong when using the criteria from search 1. I tried combining them using chart overlays but couldn't get it to work. Any pointers would be very much appreciated!

search 1 - last 30 days:

    index=foo source=bar criticality=high state=open
    | bin _time span=1d
    | stats count AS warnings by _time

search 2 - last 30 days:

    index=foo source=bar
    | bin _time span=1d
    | stats dc(accountId) AS Accounts by _time
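A minimal sketch of one way to merge these: run the broader search and compute the restricted count with an eval filter inside stats, so both series come from a single pass (field names taken from the searches above):

    index=foo source=bar
    | bin _time span=1d
    | stats count(eval(criticality="high" AND state="open")) AS warnings dc(accountId) AS Accounts by _time
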
I've been working with Splunk for many years and have always made changes via the .conf files. However, I recently added the /var/log directory by using:

    ./splunk add monitor /var/log -index main -sourcetype linux

It's working, but I want to modify it a bit. However, I have been pulling my hair out trying to figure out which inputs.conf file was modified by the command. Any assistance appreciated. Tim
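One way to find it, as a sketch: btool reports which file each setting comes from when run with --debug:

    ./splunk btool inputs list monitor:///var/log --debug

The CLI typically writes such stanzas to $SPLUNK_HOME/etc/apps/search/local/inputs.conf (the app context the command ran under), but btool's output will confirm the exact path on your system.
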
I made a clone of an existing, empty XML dashboard as a way to start a new Studio one. I added text boxes and an image. All looks fine in edit mode, but any time I save and click View, the dashboard's title remains and the contents disappear. Going back to edit mode shows the contents. I've restarted, but the page and the page's source don't have my text. How can I debug this ridiculously simple beginner problem?
Hi all! How can I extract and create different fields via transforms when there is a JSON array with several fields that have the same name but different values? For example, the "text" field in the first case means "action", while in the second case it means "hostname". Likewise, "port" appears twice and would have to be identified as src_port and dst_port. Sample:

    {
      detail: {
        indicators: [
          { filterId: [], id: 1, objectType: port, objectValue: 445, relatedEntities: [] }
          { filterId: [], id: 2, objectType: text, objectValue: Reset, relatedEntities: [] }
          { filterId: [], id: 3, objectType: port, objectValue: 36880, relatedEntities: [] }
          { filterId: [], id: 6, objectType: text, objectValue: SERVERWIN01, relatedEntities: [] }
          { filterId: [], id: 7, objectType: detection_name, objectValue: Microsoft Windows SMB Information Disclosure Vulnerability (CVE-2017-0147), relatedEntities: [] }
        ]
      }
    }

Thanks! James
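At search time (rather than with index-time transforms), one sketch uses spath and mvexpand to pull each indicator apart, assuming the raw event is valid JSON; mapping the two ports to src_port/dst_port would still need a rule of your own, e.g. ordering by id:

    <base search>
    | spath path=detail.indicators{} output=indicator
    | mvexpand indicator
    | eval objectType=spath(indicator, "objectType"), objectValue=spath(indicator, "objectValue"), id=spath(indicator, "id")
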
Hello, we have 4 search heads on our installation of Splunk 6.5.1, with DB Connect 2.4.0. Suddenly, all the search heads started feeding data into Splunk via DB Connect, whereas until a few days ago only one search head at a time was executing the scheduled query against the database. What can we check to avoid this behavior (it puts 4x the data in the index)? Thanks
Hello, I have a fairly short question. In the classic editor this worked just fine, but in the modern one it simply does not loop the calls. Scenario: I have a list of artefacts I want to use in an HTTP POST. First I create my format, something like:

    %% {{ "object": "{0}" }} %%

I will later access this format in the Splunk HTTP app's "post data" action. Problem: when accessing the format as the body using myformat.*, I expect it to loop once for each artefact the format was created for. What ends up happening is a single request with multiple bodies: { "object": "ip1" }, { "object": "ip2" }, etc. Is there a new way looping is handled in the modern editor?
Hi Splunkers,

I have prepared a regex extraction using the regex101 site, and now I'm trying to extract "Failure Reason" from the log below, but for some reason it fails. Where is the catch? It should be pretty simple, but I am out of ideas now.

Search:

    | from datamodel:"Authentication"."Insecure_Authentication"
    | search "*Failure*"
    | rex "Failure\sReason:\t\t(?<Failure_Reason>.*)\n"

Log:

    ComputerName=ot.mydomain.com
    TaskCategory=Logon
    OpCode=Info
    RecordNumber=41462650
    Keywords=Audit Failure
    Message=An account failed to log on.
    Subject:
        Security ID: NT AUTHORITY\SYSTEM
        Account Name: usergeorge$
        Account Domain: dm
        Logon ID: 0x3E7
    Logon Type: 8
    Account For Which Logon Failed:
        Security ID: NULL SID
        Account Name: george1$
        Account Domain: mydomain.com
    Failure Information:
        Failure Reason: Unknown user name or bad password.
        Status: 0xC000006D
        Sub Status: 0xC000006A
    Process Information:
        Caller Process ID: 0x2t20

Regards, vagnet
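A likely catch: the whitespace after "Failure Reason:" in the indexed event may be spaces or a mix rather than two literal tabs, and anchoring on \n can behave differently than it did on regex101. A more tolerant sketch:

    | rex "Failure\sReason:\s+(?<Failure_Reason>[^\r\n]+)"

This accepts any run of whitespace after the colon and captures up to the end of the line.
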
Hi, Splunkers! I have a search that returns several text fields, and I would like to form a table with predefined rows and columns. How can I do this? Here is an example from my search:

    index=search timeformat="%d-%m-%YT%H:%M:%S" earliest="26-10-2021T00:00:00" latest="26-10-2021T23:59:00"
    | rex field=search "VPN-ANTIVIRUS-WIN:Mandatory:(?<campo1>.*?):"
    | rex field=search ";VPN-ANTIVIRUS-RUN-WIN:Audit:(?<campo2>.*?):"

Format of the table I want to return:

    Title   Field         Status
    row1    field title   campo1
    row2    field title   campo2

And this way I can add as many rows as I want.
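One sketch for turning a single result row into that layout uses transpose; the row labels in case() are placeholders to fill in:

    <base search with the two rex extractions>
    | stats latest(campo1) AS campo1 latest(campo2) AS campo2
    | transpose column_name=Field
    | rename "row 1" AS Status
    | eval Title=case(Field=="campo1", "row1", Field=="campo2", "row2")
    | table Title Field Status
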
What are the configurations required to forward only specific log messages to Splunk? Every log message that contains the phrase "ScanStatistics" needs to be forwarded to Splunk. Let us know what configurations need to be done.
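A sketch of the usual props/transforms filtering pattern, applied on a heavy forwarder or indexer (it does not work on a universal forwarder); the sourcetype name is a placeholder:

props.conf:

    [my_sourcetype]
    TRANSFORMS-filter = setnull, keepScanStatistics

transforms.conf:

    [setnull]
    REGEX = .
    DEST_KEY = queue
    FORMAT = nullQueue

    [keepScanStatistics]
    REGEX = ScanStatistics
    DEST_KEY = queue
    FORMAT = indexQueue

Everything is routed to the null queue first, then events matching ScanStatistics are put back on the index queue.
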
Dears, I am currently using the AppDynamics API to pull metrics data from AppDynamics for Application Infrastructure Performance:

    https://x.y.z/controller/rest/applications/My-App/metric-data?metric-path=Application Infrastructure Performance|*|Individual Nodes|*|JVM|*&time-range-type=BEFORE_NOW&duration-in-mins=5&output=JSON

The data is coming in fine, but it has some problems: some metrics don't contain metric values. The data arrives as one large JSON of JSONs, where each small JSON represents an event or entry.

Sample of a JSON event response that is correct:

    {
      "metricId" : 12345,
      "metricName" : "JVM|Process CPU Burnt (ms/min)",
      "metricPath" : "Application Infrastructure Performance|xyz|Individual Nodes|abcdf|JVM|Process CPU Burnt (ms/min)",
      "frequency" : "ONE_MIN",
      "metricValues" : [ {
        "startTimeInMillis" : 1635511140000,
        "occurrences" : 0,
        "current" : 14550,
        "min" : 12330,
        "max" : 17850,
        "useRange" : true,
        "count" : 5,
        "sum" : 75700,
        "value" : 15140,
        "standardDeviation" : 0
      } ]
    }

Sample of a JSON event response that is bad/incorrect (contains the words "METRIC DATA NOT FOUND"):

    {
      "metricId" : 123456,
      "metricName" : "METRIC DATA NOT FOUND",
      "metricPath" : "Application Infrastructure Performance|xyz|Individual Nodes|abcdf|JVM|Process CPU Burnt (ms/min)",
      "frequency" : "ONE_MIN",
      "metricValues" : [ ]
    },

Question: is there a way to pull all the data while removing whatever contains metricName="METRIC DATA NOT FOUND", so that I don't ingest a pile of useless data?
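If the API itself can't filter these out, one sketch is to drop them at ingest with a null-queue transform on the Splunk side (heavy forwarder or indexer; the sourcetype name is an assumption):

props.conf:

    [appd_metrics]
    TRANSFORMS-dropempty = drop_metric_not_found

transforms.conf:

    [drop_metric_not_found]
    REGEX = METRIC DATA NOT FOUND
    DEST_KEY = queue
    FORMAT = nullQueue
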
I'd like to add a percentage to the following panel. I've added severity since I just want to see it for critical and high severity. Now I'd like to define an SLA value of, let's say, 2 hours, and then get, per rule, the percentage of its count that breached the SLA. In other words, in this statistic I want an additional field that tells me what percentage of the counted events for each rule took longer than 2h to triage.

rule 1, count 20 (10 breached the 2h SLA) -> a field that tells me 50%

I can't seem to find a good way to get a percentage in. Here is the whole SPL (from ES mostly):

    | tstats summariesonly=true allow_old_summaries=false earliest(_time) as _time FROM datamodel=Incident_Management BY source, "Notable_Events_Meta.rule_id"
    | rename "Notable_Events_Meta.*" as "*"
    | lookup update=true correlationsearches_lookup _key as source OUTPUTNEW annotations, security_domain, severity, rule_name, description as savedsearch_description, rule_title, rule_description, drilldown_name, drilldown_search, drilldown_earliest_offset, drilldown_latest_offset, default_status, default_owner, next_steps, investigation_profiles, extract_artifacts, recommended_actions
    | eval rule_name=if(isnull(rule_name),source,rule_name), rule_title=if(isnull(rule_title),rule_name,rule_title), drilldown_earliest=case(isint(drilldown_earliest_offset),('_time' - drilldown_earliest_offset),(drilldown_earliest_offset == "$info_min_time$"),info_min_time,true(),null()), drilldown_latest=case(isint(drilldown_latest_offset),('_time' + drilldown_latest_offset),(drilldown_latest_offset == "$info_max_time$"),info_max_time,true(),null()), security_domain=if(isnull(security_domain),"threat",lower(security_domain)), rule_description=case(isnotnull(rule_description),rule_description,isnotnull(savedsearch_description),savedsearch_description,true(),"unknown")
    | eval governance_lookup_type="default"
    | lookup update=true governance_lookup savedsearch as source, lookup_type as governance_lookup_type OUTPUT governance, control
    | eval governance_lookup_type="tag"
    | lookup update=true governance_lookup savedsearch as source, tag, lookup_type as governance_lookup_type OUTPUT governance as governance_tag, control as control_tag
    | eval governance=mvappend(governance,NULL,governance_tag), control=mvappend(control,NULL,control_tag)
    | fields - governance_lookup_type, governance_tag, control_tag
    | join rule_id [| inputlookup incident_review_lookup | eval _time=time | stats earliest(_time) as review_time by rule_id]
    | eval ttt=(review_time - '_time')
    | stats count, values(severity) as severity, avg(ttt) as avg_ttt, min(ttt) as min_ttt, max(ttt) as max_ttt by rule_name
    | search severity=high OR severity=critical
    | `uptime2string(avg_ttt, avg_ttt)`
    | `uptime2string(max_ttt, max_ttt)`
    | `uptime2string(min_ttt, min_ttt)`
    | sort severity -avg_ttt
    | rename "*_ttt*" as "*(time_to_triage)*"
    | fields - "*_dec"
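One sketch: count the breaches inside the existing stats with an eval filter, then derive the percentage (7200 seconds = the 2h SLA; adjust as needed):

    ...
    | eval ttt=(review_time - '_time')
    | stats count, count(eval(ttt > 7200)) as breached, values(severity) as severity, avg(ttt) as avg_ttt, min(ttt) as min_ttt, max(ttt) as max_ttt by rule_name
    | eval pct_breached=round(100 * breached / count, 1)

The rest of the pipeline (severity filter, uptime2string macros, sort) can stay as is.
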
Hi! Is there any possibility to make my tables static and set their row width/height in Dashboard Studio? I'm trying to build visualisations, but my tables are a mess because the column width changes depending on what kind of data is in the table (I have a table that updates every 5 minutes with alarms; some alarms have long text and others very short text in the message column). The header text changes its position, so I can't put icons on top of it because they move so much. Is there any way around this, or any ideas on how to do the table view some other way? Thanks for the help!
Trying to extract Splunk search results from the Splunk REST API using Postman. What parameters need to be passed to get a successful response?

    https://testsplunk:8089/services/search/jobs/export?output_mode=csv

Headers:

    [{"key":"search","value":"index=abc sourcetype=xyz | stats count by host","description":"","type":"text","enabled":true}]

Authorization header:

    UserName: jhasuagduh
    Password: pwd

I am getting 400 Bad Request and 401 Unauthorized responses. Please assist. Thanks, Sagar
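For comparison, a sketch of the documented way to call this endpoint: the search goes in the POST body (not in a header) and must start with the search keyword, with the credentials sent as basic auth. As a curl equivalent of the Postman request, reusing the credentials from the post above:

    curl -k -u jhasuagduh:pwd https://testsplunk:8089/services/search/jobs/export \
      -d output_mode=csv \
      --data-urlencode 'search=search index=abc sourcetype=xyz | stats count by host'

In Postman, that means a POST with x-www-form-urlencoded body fields search and output_mode, and Basic Auth configured on the Authorization tab.
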