Hello, we have 4 search heads in our Splunk 6.5.1 installation, with DB Connect 2.4.0. Suddenly, all of the search heads started feeding data into Splunk via DB Connect, whereas until a few days ago only one search head at a time was executing the scheduled query against the database. What can we check to stop this behavior (which puts 4x the data into the index)? Thanks
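A quick way to narrow this down is to check which search heads actually have the scheduled input enabled. A sketch only: it assumes DB Connect v2's usual mi_input:// stanza naming and the default $SPLUNK_HOME location.

    # Run on each of the 4 search heads: list DB Connect input stanzas and,
    # with --debug, the file each setting comes from.
    $SPLUNK_HOME/bin/splunk btool inputs list --debug | grep "mi_input://"

If the same stanza shows up enabled on every search head (for example because the app was pushed to all of them), each one will run the query independently, which matches the 4x data you are seeing.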
Hello, I have a fairly short question. In the classic editor this worked just fine, but in the modern one it simply does not loop the calls. Scenario: I have a list of artefacts I want to use in an HTTP POST. First I create my format, something like

    %% {{ "object": "{0}" }} %%

I will later access this format in the Splunk HTTP app's "post data" action. Problem: when accessing the format as the body using myformat.*, I expect it to loop once for each artefact the format was created for. What ends up happening is a single request containing multiple { "object": "ip1" }, { "object": "ip2" }, etc. Is there a new way looping is handled in the modern editor?
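To make the mismatch concrete, here is a sketch of the two behaviors for a hypothetical artefact list of ip1 and ip2 (the payload shape is inferred from the format block above, not taken from SOAR documentation):

    Expected (classic editor): one request per artefact
      POST #1 body: { "object": "ip1" }
      POST #2 body: { "object": "ip2" }

    Observed (modern editor): a single request
      POST #1 body: { "object": "ip1" }, { "object": "ip2" }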
Hi Splunkers, I have prepared a regex extraction using the regex101 site and am now trying to extract "Failure Reason" from the log below, but for some reason it fails. Where is the catch? It should be pretty simple, but I am out of ideas now.

Search:

    | from datamodel:"Authentication"."Insecure_Authentication"
    | search "*Failure*"
    | rex "Failure\sReason:\t\t(?<Failure_Reason>.*)\n"

Log:

    ComputerName=ot.mydomain.com
    TaskCategory=Logon
    OpCode=Info
    RecordNumber=41462650
    Keywords=Audit Failure
    Message=An account failed to log on.
    Subject:
        Security ID: NT AUTHORITY\SYSTEM
        Account Name: usergeorge$
        Account Domain: dm
        Logon ID: 0x3E7
    Logon Type: 8
    Account For Which Logon Failed:
        Security ID: NULL SID
        Account Name: george1$
        Account Domain: mydomain.com
    Failure Information:
        Failure Reason: Unknown user name or bad password.
        Status: 0xC000006D
        Sub Status: 0xC000006A
    Process Information:
        Caller Process ID: 0x2t20

Regards, vagnet
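One likely catch: the literal tabs (\t\t) you matched on regex101 often do not survive into the indexed event, where the separator may be spaces or a different amount of whitespace. A more forgiving sketch (the whitespace assumption is mine; the rest comes from your search):

    | from datamodel:"Authentication"."Insecure_Authentication"
    | search "*Failure*"
    | rex "Failure\sReason:\s+(?<Failure_Reason>[^\r\n]+)"

\s+ tolerates tabs or spaces in any quantity, and [^\r\n]+ stops at the end of the line without requiring a trailing \n to be present in the event.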
Hi, Splunkers! I have a search that returns several text fields and I would like to build a table with predefined rows and columns. How can I do this? Here is an example from my search:

    index=search timeformat="%d-%m-%YT%H:%M:%S" earliest="26-10-2021T00:00:00" latest="26-10-2021T23:59:00"
    | rex field=search "VPN-ANTIVIRUS-WIN:Mandatory:(?<campo1>.*?):"
    | rex field=search ";VPN-ANTIVIRUS-RUN-WIN:Audit:(?<campo2>.*?):"

Format of the table I want to return:

    Title      Field           Status
    line1      field title     campo1
    line2      field title     campo2

And this way I can add as many rows as I want.
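One way to get a fixed row-per-field layout is to aggregate each extracted field into its own column and then transpose, so every rex you add becomes one more row. A minimal sketch under that assumption (the row labels are generated; the "field title" text would need its own lookup or eval):

    index=search timeformat="%d-%m-%YT%H:%M:%S" earliest="26-10-2021T00:00:00" latest="26-10-2021T23:59:00"
    | rex field=search "VPN-ANTIVIRUS-WIN:Mandatory:(?<campo1>.*?):"
    | rex field=search ";VPN-ANTIVIRUS-RUN-WIN:Audit:(?<campo2>.*?):"
    | stats values(campo1) as campo1, values(campo2) as campo2
    | transpose column_name="Field"
    | rename "row 1" as Status
    | streamstats count as row
    | eval Title="line".row
    | table Title, Field, Status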
What configurations are required to forward only specific log messages to Splunk? Every log message that contains the phrase "ScanStatistics" needs to be forwarded to Splunk. Let us know what configuration needs to be done.
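Assuming the events pass through a heavy forwarder or indexer (a universal forwarder cannot apply these parsing-time transforms) and a sourcetype of your own, a minimal sketch of the usual "drop everything, then keep matches" pattern:

props.conf:

    [your_sourcetype]
    TRANSFORMS-filter = drop_all, keep_scanstats

transforms.conf:

    [drop_all]
    REGEX = .
    DEST_KEY = queue
    FORMAT = nullQueue

    [keep_scanstats]
    REGEX = ScanStatistics
    DEST_KEY = queue
    FORMAT = indexQueue

Transforms run in order, so every event is first routed to the null queue and only the ScanStatistics events are put back on the index queue.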
Dears, I am currently using the AppDynamics API to pull metrics data from AppDynamics for Application Infrastructure Performance:

https://x.y.z/controller/rest/applications/My-App/metric-data?metric-path=Application Infrastructure Performance|*|Individual Nodes|*|JVM|*&time-range-type=BEFORE_NOW&duration-in-mins=5&output=JSON

The data is coming in fine, but it has some problems, such as some metrics not containing metric values. The data comes as one large JSON array in which each small JSON object represents an event or entry.

Sample of a JSON event response that is correct:

    {
      "metricId" : 12345,
      "metricName" : "JVM|Process CPU Burnt (ms/min)",
      "metricPath" : "Application Infrastructure Performance|xyz|Individual Nodes|abcdf|JVM|Process CPU Burnt (ms/min)",
      "frequency" : "ONE_MIN",
      "metricValues" : [ {
        "startTimeInMillis" : 1635511140000,
        "occurrences" : 0,
        "current" : 14550,
        "min" : 12330,
        "max" : 17850,
        "useRange" : true,
        "count" : 5,
        "sum" : 75700,
        "value" : 15140,
        "standardDeviation" : 0
      } ]
    },

Sample of a JSON event response that is bad/incorrect (contains the words "METRIC DATA NOT FOUND"):

    {
      "metricId" : 123456,
      "metricName" : "METRIC DATA NOT FOUND",
      "metricPath" : "Application Infrastructure Performance|xyz|Individual Nodes|abcdf|JVM|Process CPU Burnt (ms/min)",
      "frequency" : "ONE_MIN",
      "metricValues" : [ ]
    },

Question: is there a way to pull all the data while removing whatever contains metricName="METRIC DATA NOT FOUND", so that I don't ingest a pile of useless entries?
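As far as I know the metric-data endpoint has no server-side filter for this, so the usual approach is to drop those entries client-side before forwarding or indexing. A minimal Python sketch; the credentials and the use of the requests library are assumptions, and only the metricName value comes from your samples:

    import requests

    # Pull the same metric-data query shown above.
    resp = requests.get(
        "https://x.y.z/controller/rest/applications/My-App/metric-data",
        params={
            "metric-path": "Application Infrastructure Performance|*|Individual Nodes|*|JVM|*",
            "time-range-type": "BEFORE_NOW",
            "duration-in-mins": "5",
            "output": "JSON",
        },
        auth=("user@account", "password"),  # placeholder credentials
    )
    resp.raise_for_status()

    # Keep only entries that actually carry data.
    metrics = [m for m in resp.json()
               if m.get("metricName") != "METRIC DATA NOT FOUND"]

    # Optionally also drop anything with an empty metricValues array,
    # which catches value-less entries under any other name.
    metrics = [m for m in metrics if m.get("metricValues")]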
I'd like to add a percentage to the following panel. I've added severity since I just want to see it for critical and high severity. Now I'd like to define an SLA value of, let's say, 2 hours, and then get, for each rule, the percentage of its count that breached that SLA. In other words: in this statistic I want an additional field that tells me what percentage of the counted events for each rule have a longer max time to triage than 2h.

rule 1, count 20 (10 breached the 2h SLA) -> a field that tells me 50%

I can't seem to find a good way to get a percentage in. Here is the whole SPL (from ES mostly):

    | tstats summariesonly=true allow_old_summaries=false earliest(_time) as _time FROM datamodel=Incident_Management BY source, "Notable_Events_Meta.rule_id"
    | rename "Notable_Events_Meta.*" as "*"
    | lookup update=true correlationsearches_lookup _key as source OUTPUTNEW annotations, security_domain, severity, rule_name, description as savedsearch_description, rule_title, rule_description, drilldown_name, drilldown_search, drilldown_earliest_offset, drilldown_latest_offset, default_status, default_owner, next_steps, investigation_profiles, extract_artifacts, recommended_actions
    | eval rule_name=if(isnull(rule_name),source,rule_name), rule_title=if(isnull(rule_title),rule_name,rule_title), drilldown_earliest=case(isint(drilldown_earliest_offset),('_time' - drilldown_earliest_offset),(drilldown_earliest_offset == "$info_min_time$"),info_min_time,true(),null()), drilldown_latest=case(isint(drilldown_latest_offset),('_time' + drilldown_latest_offset),(drilldown_latest_offset == "$info_max_time$"),info_max_time,true(),null()), security_domain=if(isnull(security_domain),"threat",lower(security_domain)), rule_description=case(isnotnull(rule_description),rule_description,isnotnull(savedsearch_description),savedsearch_description,true(),"unknown")
    | eval governance_lookup_type="default"
    | lookup update=true governance_lookup savedsearch as source, lookup_type as governance_lookup_type OUTPUT governance, control
    | eval governance_lookup_type="tag"
    | lookup update=true governance_lookup savedsearch as source, tag, lookup_type as governance_lookup_type OUTPUT governance as governance_tag, control as control_tag
    | eval governance=mvappend(governance,NULL,governance_tag), control=mvappend(control,NULL,control_tag)
    | fields - governance_lookup_type, governance_tag, control_tag
    | join rule_id [| inputlookup incident_review_lookup | eval _time=time | stats earliest(_time) as review_time by rule_id]
    | eval ttt=(review_time - '_time')
    | stats count, values(severity) as severity, avg(ttt) as avg_ttt, min(ttt) as min_ttt, max(ttt) as max_ttt by rule_name
    | search severity=high OR severity=critical
    | `uptime2string(avg_ttt, avg_ttt)`
    | `uptime2string(max_ttt, max_ttt)`
    | `uptime2string(min_ttt, min_ttt)`
    | sort severity -avg_ttt
    | rename "*_ttt*" as "*(time_to_triage)*"
    | fields - "*_dec"
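Since ttt is computed per notable before the stats, one approach is to flag breaches first and aggregate the flag alongside count. A sketch of just the changed pipeline segment, assuming a 7200-second (2h) SLA:

    | eval ttt=(review_time - '_time')
    | eval breached=if(ttt > 7200, 1, 0)
    | stats count, sum(breached) as breached_count, values(severity) as severity, avg(ttt) as avg_ttt, min(ttt) as min_ttt, max(ttt) as max_ttt by rule_name
    | eval pct_breached=round(100 * breached_count / count, 1)

For your example, rule 1 with count 20 and 10 breaches gives pct_breached=50.0.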
Hi! Is there any possibility to make my tables static and to set their row width/height in Dashboard Studio? I'm trying to build visualisations, but my tables are a mess because the column width changes depending on what kind of data is in the table (I have a table that updates every 5 minutes with alarms, and some alarms have a long text and others a very short text in the message column). The header text changes its position, so I can't put icons on top of it because they move so much. Is there any way around this, or any ideas on how to do the table view some other way? Thanks for the help!
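In the Dashboard Studio source editor, table visualizations accept per-column formatting options, and to the best of my knowledge a fixed pixel width can be set there, which stops columns from resizing with the data. A sketch, with the field name message and the width as assumptions:

    {
        "type": "splunk.table",
        "options": {
            "columnFormat": {
                "message": {
                    "width": 400
                }
            }
        }
    }

If that option is not available in your version, a fallback is normalizing the field length in SPL, for example | eval message=printf("%-60s", message), so every row occupies the same width.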
Trying to extract Splunk search query data from the Splunk API using Postman. What parameters need to be passed to get a successful response?

https://testsplunk:8089/services/search/jobs/export?output_mode=csv

Headers: [{"key":"search","value":"index=abc sourcetype=xyz|stats count by host ","description":"","type":"text","enabled":true}]

Authorization header: UserName: jhasuagduh, Password: pwd

I am getting 400 Bad Request and 401 Unauthorized as responses. Please assist. Thanks, Sagar
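The search string belongs in the POST body, not in a header, and it must start with the search keyword; the 401 points at the auth setup (use Basic auth rather than a custom header). A minimal sketch of the documented export call, shown as curl so it translates directly to Postman (hostname and credentials are your own placeholders):

    curl -k -u jhasuagduh:pwd https://testsplunk:8089/services/search/jobs/export \
         -d output_mode=csv \
         --data-urlencode 'search=search index=abc sourcetype=xyz | stats count by host'

In Postman: method POST, Basic Auth with the same username and password, and two x-www-form-urlencoded body keys, output_mode=csv and search=search index=abc sourcetype=xyz | stats count by host.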
Hi, I want to extract the following term from this message: (MaRSEPbac, [MaRSEPbac_Old2], [MaRSEPbac]), that is, the string between the parentheses.

message:

    16:21:32.843 [gcp-pubsub-subscriber1] INFO  zbank.harissa.cockpit.InboundGateway - update: [export_service] context:RDB (MaRSEPbac, [MaRSEPbac_Old2], [MaRSEPbac]) progress:3/3 status:successful msg:exporting rrid: [8d9a85b8-0d34-4dea-8901-17520b4b9b9d] rrid:f50a0cce-af13-4e64-88aa-84de045380ca

How does it go? Thanks!
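A sketch with rex, anchored on the context:RDB marker so other parenthesized text in the event cannot match (the field name context_list is my invention):

    | rex field=message "context:RDB\s+\((?<context_list>[^)]+)\)"

[^)]+ grabs everything up to the first closing parenthesis, which here yields MaRSEPbac, [MaRSEPbac_Old2], [MaRSEPbac]. If the extraction should run against the raw event rather than a message field, drop field=message.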
Folks, I need some assistance to understand why Splunk is reporting different IPs for the same hostname (an Active Directory server), even though the AD server has only one static IP assigned to it. For example, let's assume my AD server is AD01.domain.com with IP 1.2.3.4. Now if I run a search to group events where the src host is AD01:

    index=ad | stats list(action) by src, src_ip | where src="AD01.domain.com"

it shows the following results, where there is a different src_ip for every event for the same host AD01:

    src                  src_ip          list(action)
    AD01.domain.com      2.3.4.5         success
    AD01.domain.com      10.76.12.102    success
    AD01.domain.com      10.x.12.101     success
    AD01.domain.com      x.x.x.x         failure

Why so?
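One thing worth checking is whether src and src_ip even describe the same endpoint in each event: in many AD field extractions src is the machine the event is about, while src_ip is the network peer that connected to it, so a domain controller naturally shows many source IPs. A quick sketch to see the spread per event type (index and field names taken from your search; EventCode assumes Windows event data):

    index=ad src="AD01.domain.com"
    | stats dc(src_ip) as distinct_ips, values(src_ip) as src_ips by sourcetype, EventCode

If distinct_ips is high on logon events (e.g. 4624/4625), the src_ip values are the clients authenticating against AD01, not addresses of AD01 itself.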
Hi Team, Splunk App for Phantom Reporting.

Testing 1: the HEC token is created on the HF, indexes are created on the indexer, and the roles/user/Splunk App for Phantom Reporting are created on the SH. On the Phantom side, if I give the host as the HF IP, it is not working. I get the error: Test connection failed. Test connection failed for phantomsearch on host "Splunk": No results found.

Testing 2: indexes are created on the indexer; the HEC token/user/roles/Splunk App for Phantom Reporting are created on the SH. On the Phantom side, if I give the host as the SH IP, it is working (but this is not accepted as best practice).

Testing 3: indexes/HEC token/user/role are created on the indexer, and the Splunk App for Phantom Reporting is on the SH. On the Phantom side, if I give the host as the indexer IP, it is working (this is also not accepted as best practice).

What should I do to make Testing 1 work?
I want to use Splunk to work out the effective working hours of employees based on AD data. How should I compute these statistics?
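A common first cut is to take each user's earliest and latest authentication events per day as a proxy for their working window. This is a sketch only: the index, sourcetype, and the assumption that logon events carry a user field are all placeholders for your environment:

    index=wineventlog sourcetype=XmlWinEventLog user=*
    | eval day=strftime(_time, "%Y-%m-%d")
    | stats earliest(_time) as first_event, latest(_time) as last_event by user, day
    | eval hours_worked=round((last_event - first_event) / 3600, 2)
    | stats avg(hours_worked) as avg_daily_hours by user

Note this measures first-to-last activity rather than actual effort, and it needs refinement for overnight shifts that cross the day boundary.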
I have a field "skill" which takes multiple values. I want to extract the count of each of the values of skill and store each of them in its own variable, say v1, v2, v3, etc., where their values are v1 = 181, v2 = 144, v3 = 80, and so on.
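SPL has no variables as such, but you can turn each skill value's count into its own single-row column, which behaves the same way for later eval steps. A sketch; only the field name skill comes from the question:

    | stats count by skill
    | transpose 0 header_field=skill column_name=metric

This yields one row with a column per skill value (for example Java=181, Python=144), which you can then reference like variables: | eval ratio='Java' / 'Python'. The single quotes handle column names containing spaces, and the 0 in transpose lifts the default 5-row limit.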
Hi, I have a bar graph with many values. The end user who has to use that bar graph needs to see whether the values are over or under certain limits at any point. That's why I want to draw a line at both the maximum allowed value and the minimum needed value. I attached a picture of how I want it to look. Is it possible to achieve something like this?
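In Simple XML dashboards, one way is to add the two thresholds as constant fields and render them as line overlays on the column chart. A sketch; the 80/20 limits and field names are placeholders:

Search:

    ... | eval max_allowed=80, min_needed=20

Chart options in the panel XML:

    <option name="charting.chart">column</option>
    <option name="charting.chart.overlayFields">max_allowed,min_needed</option>

The overlay fields are drawn as lines across the columns, giving the upper and lower guide lines.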
Hi Splunk Community, I was wondering if anyone might be able to provide some advice around using the ServiceNow add-on for Splunk, specifically in regards to consuming data from the CMDB. There are OOB inputs that come with the add-on, which are fine for some basic tables; however, I'm looking at the CI relationship table, which currently contains 19m+ records! We don't want to consume all of those, as we're only really interested in the ones that relate to the basic tables we're already importing using the OOB inputs, which is around 10 tables. The filters available with the add-on don't provide enough functionality for our requirement. Maybe a custom REST API call outside the ServiceNow add-on, or a push from ServiceNow to Splunk, is the way to go. Keen to hear how others might have tackled anything similar.
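If you go the custom REST route, the ServiceNow Table API lets you filter cmdb_rel_ci server-side with an encoded query, including dot-walked conditions on the parent/child class. A sketch; the instance name, credentials, and the two CI classes are placeholders standing in for your ~10 tables:

    curl -u user:pass \
      "https://yourinstance.service-now.com/api/now/table/cmdb_rel_ci?sysparm_query=parent.sys_class_nameINcmdb_ci_server,cmdb_ci_appl^ORchild.sys_class_nameINcmdb_ci_server,cmdb_ci_appl&sysparm_fields=parent,child,type&sysparm_limit=1000"

Paged through with sysparm_offset, this pulls only the relationships touching the classes you care about instead of all 19m rows.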
Let's say I have this query:

    index = x | stats count as Total, sum(AMMOUNT) as TAmmount BY MERCHANT, SUBMERCHANT

I want to make a percentage comparison between this month and the average of Total over the previous three months. How do you go about using timewrap to achieve that goal?
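timewrap only works on timechart output, which loses the MERCHANT, SUBMERCHANT split-by pair, so it may be easier to bucket the months yourself. A sketch assuming "this month" means the current month to date and the baseline is the average of the three whole months before it:

    index=x earliest=-3mon@mon latest=now
    | eval period=if(_time >= relative_time(now(), "@mon"), "current", "baseline")
    | stats count(eval(period="current")) as current_total, count(eval(period="baseline")) as baseline_total by MERCHANT, SUBMERCHANT
    | eval baseline_avg=baseline_total / 3
    | eval pct_of_baseline=round(100 * current_total / baseline_avg, 1)

The same pattern works for TAmmount by swapping count(eval(...)) for sum(eval(if(period="current", AMMOUNT, null()))).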
Hi team, as titled: how do I rename 'row1' to 'number' after a transpose? I tried rename and replace, but neither works.
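Two things worth trying; by default transpose names its output columns "row 1", "row 2" with a space, so a rename has to quote that exact name, and alternatively the column can be named at transpose time. A sketch:

    ... | transpose
    | rename "row 1" as number

or, naming the first output column directly:

    ... | transpose column_name="number"

If rename row1 was failing, the missing space in the generated column name is the most likely culprit.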
Watch this page for updates: click the 3-dot menu upper right, then subscribe. Want to receive all monthly Product Updates? Click to subscribe to the series.

We did not have a Controller or Enterprise Console release in September. However, the month included several key product enhancements and agent updates. We also resolved a series of issues associated with the previous release.

Table of Contents: Feature Enhancements | Heads Up - What Else You Should Know | Beta Programs | Resolved Issues | Get Started | Ask a Question

What highlights should I know about?

These release highlights include the newest features and capabilities this month, at a glance. In the grid below, we've flagged who within your organization may be most interested in or impacted by each enhancement.

    CATEGORY                        ENHANCEMENT                                   ANALYST  ADMIN  DEVELOPER
    FULL-STACK OBSERVABILITY        Dash Studio updates                           ✓        ✓
    AGENT UPDATES                   .NET Agent                                    ✓        ✓
                                    Python Agent                                  ✓        ✓
                                    Cluster Agent                                 ✓        ✓
    SYSTEM ADMINISTRATION UPDATES   User creation and password policies updated            ✓

Full-stack Observability

The latest update to Dash Studio includes options to "Reverse Trend Color" and "Compare Value Only", helping you better customize the way you compare time ranges. You can also now align your widget properties horizontally.

Agent Updates

.NET Agent | WeakReferences are now utilized in the dynamic reflector library, along with improvements to resource usage for Task-related interceptors, paving the way for better utilization and efficiency. Additionally, the instrumentation for AzureServiceBus on Linux is now in parity with Windows. (October 15, 2021)

Python Agent | Transaction Analytics is now supported with the Python Agent, so you are able to send analytics data, such as business transactions, exit calls, data collectors, etc., to the controller to identify anomalies based on historical trends. (October 27, 2021)

Cluster Agent | Volumes required to mount can now be listed using InfraViz by adding an overrideVolumeMounts parameter. (October 28, 2021)

System Admin Updates

We released an enhancement that improves how administrators create users, and also updates our password policies. These changes are meant to improve security while laying the groundwork for new capabilities that will be rolled out in the near future. Check out the Changes to User Creation and Password Policy FAQ for details. (Staged release started February 22, 2021)

NOTE | To view all technical documentation associated with this month's releases, please visit Product Announcements, Alerts, and Hot Fixes in our documentation portal.

Heads up - What else you should know

NEW! AppDynamics and Microsoft continue deepening their relationship, and we are pleased to announce our latest partnership development. As a result, customers who deploy on Azure Spring Cloud can now observe what matters across all their services and workloads, without any code changes necessary!

On October 18, 2021, we implemented synthetic monitoring behavioral changes for Internet Explorer (IE) 11 and Firefox, and began deprecating support for Chrome versions 64 and 83. This change is part of an effort to bring our synthetic monitoring to a single browser architecture, allowing us to innovate faster and provide more value-added capabilities to better support our customers' needs. Read our knowledge base article for more information, including instructions and support resources to avoid any disruption to your services.
Check out the Kudos to you | July - September, 2021 post! In this series, we celebrate member participation in the Community with recaps of the most recent new content, accepted solutions, and the kind of engagement that enhances the Community. (October 11, 2021)

The AppDynamics University team has launched a new collection of official learning recognition credentials. Distinct from the recently released technical certification badges, the Learning Recognition family of badges expands the experience and learning you can promote to include learning progress and achievements. Learn more in AppDynamics Learning Recognition Badges - FAQs, in the AppD University Group Hub. For more information, see Introducing our new AppDynamics technical certification badges in the AppD University Group Hub. (June 17, 2021)

New instructor-led classes have been added to the AppDynamics University catalog!

    COURSE CODE   COURSE DESCRIPTION
    DEV 310       Developers: AppDynamics for release comparison and validation
    PRO520        Pro Series: Building effective dashboards with AppDynamics
    SAP210        AppDynamics essentials for SAP

To learn more about each new offering, go to AppDynamics University and search on the course code.

Want to learn about those small adjustments that make a big difference? We invite you to our Share a tip space, where AppD Community members can exchange peer-to-peer advice. Be sure to add your own insights too.

Share your feedback! Participate in a Beta today

We're currently running the following Beta program and looking for your feedback: submit feedback on the Dash Studio through the in-product feedback button, or in Community here.

Resolved Issues

The following key issue was resolved this month. To see a complete listing of this month's resolved issues, see Resolved Issues by Month.

(Analytics Agents) Fixed ad.process.name startup error (ANALYTICS-13689)

Get Started

PLEASE NOTE: customers are advised to check backward compatibility in the Agent and Controller Compatibility documentation.

Download Essential Components (Agents, Enterprise Console, Controller (on-prem), Events Service, EUM Components)
Download Additional Components (SDKs, Plugins, etc.)
How do I get started upgrading my AppDynamics components for any release?
Product Announcements, Alerts, and Hot Fixes
Open Source Extensions
License Entitlements and Restrictions

Can't find what you're looking for? Need more assistance? Ask a question in the Latest Release forum.
Has anyone encountered this issue on Splunk Cloud with Enterprise Security, and how did you fix it: "Identity: An error occurred while the Asset and Identity Management modular input ran"? When I checked the error, it says there is a lookup file error: unknown path or update time. I'm pretty sure the lookups exist, but I am not sure what it means by update time.
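One reading, and it is an inference rather than something from the docs, is that "update time" refers to the last-modified timestamp the merge process checks on each asset/identity lookup when deciding whether to rebuild, so a stale or unreadable timestamp on one lookup could trigger this. To find which lookup the input is choking on, a generic sketch against the internal logs:

    index=_internal log_level=ERROR "Asset and Identity Management"
    | table _time, host, source, _raw

The _raw line should name the exact lookup; from there you can check in the Asset and Identity Management configuration page that the lookup is still present and readable under the app it belongs to.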