Hi, I have a weird problem when I call the Splunk API 'https://localhost:8089/servicesNS/-/search/search/jobs?output_mode=json'. I can get results from it, but I also get the error messages below. However, there are no such lookup files in my report search, and I also cannot find these lookups anywhere in my Splunk. Can someone help me point out the problem? Why do I get this error, and how can I fix it? Thanks a lot.

"messages": [
    {
        "type": "ERROR",
        "text": "[Indexer_01_new,Indexer_11,Indexer_12,Indexer_13,Indexer_14,Indexer_16,Indexer_17,Indexer_18,Indexer_19,Indexer_20,Indexer_21,Indexer_22,Indexer_23,Indexer_24,Indexer_25,Indexer_26,Indexer_27,SearchHead_01] Could not load lookup=User_Account_With_AD"
    },
    {
        "type": "ERROR",
        "text": "[Indexer_01_new,Indexer_11,Indexer_12,Indexer_13,Indexer_14,Indexer_16,Indexer_17,Indexer_18,Indexer_19,Indexer_20,Indexer_21,Indexer_22,Indexer_23,Indexer_24,Indexer_25,Indexer_26,Indexer_27,SearchHead_01] Could not load lookup=Userauth_User_Account_With_AD"
    },
    {
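One way to start debugging (a hedged sketch, not from the original post): errors like "Could not load lookup=X" usually mean a lookup is referenced somewhere, often by an automatic lookup (a LOOKUP-* setting in some app's props.conf) whose definition or backing CSV is missing or not shared to the search context. A REST search such as the following can show whether and where the definitions exist; the lookup names here are copied from the error text.

    | rest /servicesNS/-/-/data/transforms/lookups splunk_server=local
    | search title="User_Account_With_AD" OR title="Userauth_User_Account_With_AD"
    | fields title, eai:acl.app, eai:acl.sharing, filename

If a definition turns up, the next step would be checking that app's props.conf for a LOOKUP-* entry and verifying the lookup file exists and is shared appropriately.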
Hi Splunkers, I'm working on a dashboard panel where I have to create month-wise data as shown in the screenshot. I have a field called "Age Group" and each month's corresponding data, but every month's data comes from a separate external lookup file. Example data:

Age Group | Sept  | July   | Jun
30-90     | 235   | 0      | 34
90-180    | 1757  | 2168   | 3467
180+      | 19374 | 20,534 | 12,661

I'm using the code below, but it's not actually working for me. Please help me with the logic to produce a chart like the one in the screenshot. TIA

|inputlookup september.csv
|stats count by "Age Group"
|eval _time=strptime("2022-09-01","%Y-%m-%d")
|append [|inputlookup july.csv |stats count by "Age Group" |eval _time=strptime("2022-07-01","%Y-%m-%d")]
…
|chart count by _time, "Age Group"
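A possible approach (a sketch, assuming each CSV holds one row per record with an "Age Group" column; the june.csv file name is a guess): tag each lookup with a literal month label instead of _time, since inputlookup rows carry no real timestamps and chart by _time often produces nothing, then chart over "Age Group" by month.

    | inputlookup september.csv
    | eval Month="2022-09"
    | append [| inputlookup july.csv | eval Month="2022-07"]
    | append [| inputlookup june.csv | eval Month="2022-06"]
    | chart count over "Age Group" by Month

If the CSVs already contain pre-aggregated counts per age group, replace chart count with chart sum(count), or whatever the count column is named.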
Hey SMEs, does anyone have prior experience migrating existing QRadar data to Splunk? If you have any docs or anything else useful, please do share. Thanks in advance.
Hi, if I search on the search head, I am getting this error: [error screenshot not captured] Monitoring detail of the indexer machine: [screenshot not captured] Can anyone help with this issue? I could not figure out what the problem is or where it is. Thanks.
Hello, I have the inputs.conf for several log files as

[monitor:///u01/mnt/log-1/data/trafficmanager/access/*]
index = myindex
sourcetype = csvtype
initCrcLength = 1048576

The log file names are structured as access_worker_*_YYYY_mm_dd.log. For example: access_worker_5_03_21.log, access_worker_6_03_21.log, access_worker_5_03_20.log, etc. The stanza that I put in doesn't work, so I tried a specific file name, such as

[monitor:///u01/mnt/log-1/data/trafficmanager/access/access_worker_5_03_21.log]
index = myindex
sourcetype = csvtype
initCrcLength = 1048576

Then the log was pulled in with no problem. The problem that I see is in the way I use my wildcard; somehow it doesn't catch all the log files that I want to monitor. Can anyone point out how to fix this problem?
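One alternative worth trying (a sketch; the regex assumes the file names really follow the access_worker_N_mm_dd.log pattern shown in the examples): monitor the directory itself and restrict matching files with a whitelist, which inputs.conf applies as a regular expression against the full path.

    [monitor:///u01/mnt/log-1/data/trafficmanager/access]
    whitelist = access_worker_\d+_\d+_\d+\.log$
    index = myindex
    sourcetype = csvtype
    initCrcLength = 1048576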
I have two types of events: one when the interface is down and one when it is up. It usually happens that the interface goes down and after 10 seconds it comes back up: * An event arrives telling me that the interface is down. * Another event arrives telling me that the interface is up and that it was down for 10 seconds. I would like to alert if the interface does not come back up within a period of 1 minute. I have tried several options but I have not been able to make it alert.
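One possible shape for this (a sketch; the index, the interface field, and the message text are placeholders for whatever the real events contain): pair each down event with its up event inside a 1-minute window, keep the pairs that never closed, and alert when any results remain.

    index=network ("interface down" OR "interface up")
    | transaction interface startswith="interface down" endswith="interface up" maxspan=1m keepevicted=true
    | where closed_txn == 0
    | search "interface down"

keepevicted=true keeps the unfinished transactions and sets closed_txn=0 on them; the final search drops lone up events that were also evicted. Saved as an alert triggering on "number of results > 0", this should fire only for interfaces still down after a minute.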
Hi there, I need to convert a large number of classic dashboards to Dashboard Studio style. They are used to break down quarterly reporting data and have a number of visualisations with a time-picker dropdown for quarters, e.g. Q1-Year, Q2-Year, Q3-Year, Q4-Year. Using the inbuilt migration option and cloning the dashboards in the new Studio style, everything is generally working except for the custom time token setup that was used on my classic dashboards. As an example, the classic boards have variations on the following time picker:

<input type="dropdown" token="quarter">
  <label>Select Quarter</label>
  <choice value="Q1-22">Q1-22</choice>
  <choice value="Q2-22">Q2-22</choice>
  <choice value="Q3-22">Q3-22</choice>
  <choice value="Q4-22">Q4-22</choice>
  <choice value="Last4Quarters">Last4Quarters</choice>
  <change>
    <condition label="Q1-22">
      <set token="custom_earliest">-1y@y+0q</set>
      <set token="custom_latest">-1y@y+1q</set>
    </condition>
    <condition label="Q2-22">
      <set token="custom_earliest">-1y@y+1q</set>
      <set token="custom_latest">-1y@y+2q</set>
    </condition>
    <condition label="Q3-22">
      <set token="custom_earliest">-1y@y+2q</set>
      <set token="custom_latest">-1y@y+3q</set>
    </condition>
    <condition label="Q4-22">
      <set token="custom_earliest">-1y@y+3q</set>
      <set token="custom_latest">-1y@y+4q</set>
    </condition>
    <condition label="Last4Quarters">
      <set token="custom_earliest">-4q@q</set>
      <set token="custom_latest">now</set>
    </condition>
  </change>
  <default>Q4-22</default>
  <initialValue>Q4-22</initialValue>
</input>

This worked fine, but upon migration the code becomes this:

{
  "type": "input.dropdown",
  "title": "Select Quarter",
  "options": {
    "token": "quarter",
    "items": [
      { "value": "Q1-22", "label": "Q1-22" },
      { "value": "Q2-22", "label": "Q2-22" },
      { "value": "Q3-22", "label": "Q3-22" },
      { "value": "Q4-22", "label": "Q4-22" },
      { "value": "Q1-23", "label": "Q1-23" },
      { "value": "Q2-23", "label": "Q2-23" },
      { "value": "Q3-23", "label": "Q3-23" },
      { "value": "Q4-23", "label": "Q4-23" },
      { "value": "Last1Year", "label": "Last1Year" }
    ],
    "defaultValue": ""
  }
}

Selecting a visualisation and its data source configuration, the code still seems to reference the custom tokens:

{
  "type": "ds.search",
  "options": {
    "query": "index=report_summary source=quarterly-info | timechart span=1d count by source | eval Threshold = 100000000",
    "queryParameters": {
      "earliest": "$custom_earliest$",
      "latest": "$custom_latest$"
    }
  },
  "name": "viz1"
}

but nothing ever loads; it's like the tokens are disconnected or never set. I feel like I'm missing something here with the new style. Is it an issue with dynamically setting the tokens custom_earliest and custom_latest per dropdown item? Is this a common migration problem where there's a new token format that should be followed? Or am I missing something?
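One workaround to consider (a sketch, not a confirmed migration path): Dashboard Studio dropdown inputs only set their own token, and there is no direct equivalent of the classic <change>/<condition> blocks, which is why custom_earliest and custom_latest are never populated after migration. A time range input carries both bounds in a single token, exposed as $token.earliest$ and $token.latest$, so the quarters could be expressed as time ranges instead:

    {
      "type": "input.timerange",
      "title": "Select Quarter",
      "options": {
        "token": "quarter",
        "defaultValue": "-1y@y+3q,-1y@y+4q"
      }
    }

with the data source reading "earliest": "$quarter.earliest$" and "latest": "$quarter.latest$" in its queryParameters. The trade-off is losing the fixed Q1-22-style labels in favour of the time picker UI.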
Hi all. Hoping someone can point me in the right direction for a very annoying persistent issue. DBX points to an MS SQL server cluster using a DNS host name in the connection string. Normally it works pretty well except for when there is a site change and a bunch of scheduled queries time out. The issue can be replicated in SQL Explorer but the timeouts can be quite inconsistent. Tests on the search head server show that the host name is being resolved correctly and the server can connect to all resolved IPs. I did note the order of the resolved IPs is somewhat unpredictable and the IP for an offline node may be returned first. My running theory is DBConnect is attempting to connect to the first resolved IP then failing with timeout before it's able to connect to one of the other IPs. Is this expected behaviour? Is there any way to reduce the connection timeout value? I have found a number of posts on these boards relating to similar problems but they don't quite apply here.
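A hedged suggestion rather than a confirmed fix: the Microsoft JDBC driver has connection-string properties aimed at exactly this topology. multiSubnetFailover=true makes the driver attempt connections to all resolved IP addresses in parallel rather than one at a time, and loginTimeout bounds each login attempt in seconds. The host and database names below are placeholders.

    jdbc:sqlserver://sqlcluster.example.com:1433;databaseName=mydb;multiSubnetFailover=true;loginTimeout=10

If the JDBC URL in DB Connect can be edited for that connection, these properties may cut the failover delay well below the scheduled-query timeout.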
What AppDynamics enhancements were new in February 2023?

WATCH THIS PAGE FOR UPDATES: click the caret menu above right, then Subscribe. Want notification of new monthly Product Update editions? Click here, then Subscribe on the message bar.

AppDynamics SaaS Controller v23.2.0 was released on February 27, 2023, and includes enhancements around added database monitoring login capabilities and Dash Studio.

AppDynamics Cloud v23.2 was released on February 23, 2023, as was the Grafana plugin. It includes enhancements around anomaly detection and added support for AWS cloud service entities.

AppDynamics On-Premises v23.1.1 was released on February 15, 2023, and includes a new version of the Events Service, 23.2.0, for new cluster deployments (upgrading existing controllers is pending); it is not supported on Windows. Please refer to Administer the Events Service, Events Service Requirements, and the On-Premises Release Notes for more details.

In this article…
Summary of releases, February 2023
What release highlights should I know about?
Product Areas: AppDynamics Cloud | Agents | SaaS Controller | On-premises Controller
Heads Up - What else should I know about?
Resolved and known issues
Essentials

What release highlights should I know about?
The following table includes release highlights for AppDynamics features and capabilities released in February 2023. Check the table to see who in your organization may be most interested in or impacted by each enhancement. For minor fixes or patches, please refer to the respective release notes links below.

PRODUCTS / ENHANCEMENT HIGHLIGHTS                       | User | Admin | DevOps | SecOps
AppDynamics Cloud enhancement highlights
  Anomaly Detection                                     |  ✓   |       |        |
  AWS Cloud Services                                    |  ✓   |  ✓    |        |  ✓
  Expanded APM to infrastructure correlation entities   |  ✓   |  ✓    |        |
Agent enhancement highlights
  Cluster Agent                                         |  ✓   |  ✓    |   ✓    |
  Flutter Agent                                         |  ✓   |  ✓    |   ✓    |
  Java Agent                                            |  ✓   |  ✓    |        |
  MAUI Agent                                            |  ✓   |  ✓    |        |
  .NET Agent                                            |  ✓   |  ✓    |   ✓    |
  SAP ABAP Agent                                        |  ✓   |  ✓    |        |
  Xamarin Agent                                         |  ✓   |  ✓    |        |
  See additional enhancements indicated below           |  ✓   |  ✓    |        |
AppDynamics On-Premises enhancement highlights
  Enterprise Controller v23.1.1                         |  ✓   |  ✓    |        |
  Events Service 23.2.0                                 |  ✓   |  ✓    |        |

Where can I find detailed information about this month's enhancements?
Product enhancements are described in detail, and on an ongoing basis, on the respective documentation portal pages: AppDynamics Cloud Release Notes, AppDynamics (CSaaS) Release Notes, Release Notes for AppDynamics Accounts and Licensing, and AppDynamics On-Premises APM Platform Release Notes. Where available, see links to this month's Release Notes below, under each product heading.

AppDynamics Cloud highlights
NOTE | See the complete AppDynamics Cloud Release Notes for February 2023, v23.2, in our documentation portal.

Grafana plug-in
Build Grafana dashboards using your AppDynamics Cloud data with our new installable plug-in. Available for both Enterprise and open-source Grafana users. (GA v23.2 Released February 23, 2023)
DOCUMENTATION | Integrate AppDynamics Cloud with Grafana

Anomaly detection for Business Transactions
Reduce MTTI with Anomaly Detection by configuring the algorithm to easily monitor performance issues across your services. (GA v23.2 Released February 23, 2023)
DOCUMENTATION | Determine the root cause of an anomaly

Business Transaction (BT) visibility
Use auto-defined business transactions to filter out noise and focus on your application's most critical flows. Traces and spans provide additional context to help you pinpoint issues.
(GA v23.2 Released February 23, 2023)

Collect additional MELT data for additional AWS entities
We now include the ability to collect MELT data for AWS Lambda and Amazon Elastic File System. (GA v23.2 Released February 23, 2023)

Infrastructure monitoring for Windows container workloads
Monitor key performance metrics and health for your Kubernetes Windows workloads in EKS and AWS. (GA v23.2.1 Released February 23, 2023)

Agent release highlights
NOTE | See the full 23.2 Release Notes for a complete, ongoing, and sortable list of Agent enhancements.

Cluster Agent
- Auto-instrument an application using multiple Cluster Agents
- Support for building the Cluster Agent RHEL-based container image without a Red Hat subscription
- Additional support to install the AppDynamics Operator from the OpenShift OperatorHub for OpenShift clusters
- See detailed Release Notes
(GA v23.2 Released February 27, 2023)

Flutter Agent
- Supports the most popular cross-platform mobile framework
- Instrument Flutter applications beyond the existing AppD SDK to monitor and receive analytics
- Streamline application development by automatically reporting metrics such as memory, storage, and battery, along with connection transition events
- Track core analytics for things like network requests, screen and user-flow tracking, ANR and crash reporting, automatic screen captures and user touch-points, and much more!
- See Instrument Flutter Applications.
(GA v23.2 Released February 27, 2023)

Java Agent
- OpenTelemetry™: entry points for Spring Integration. See Java Agent Frameworks for OpenTelemetry. (OpenTelemetry™ is a trademark of The Linux Foundation®.)
- Enable the reverse proxy. See Enable Reverse Proxy for Java Agent.
(GA v23.2 Released February 27, 2023)

MAUI Agent
Now supports WebView instrumentation, as well as other minor enhancements. (GA v23.2 Released February 14, 2023)

.NET Agent
IIS pipeline instrumentation has been integrated, and third-party dependencies have been updated. (GA v23.2 Released February 28, 2023)

SAP ABAP Agent
- Monitor and report SLT replication status to support monitoring financial reporting
- AppDynamics for SAP (as of v23.2) is compatible with S/4HANA 2022, up to FPS00
- Reconstruct and monitor inbound RFC module calls as a dedicated business transaction type
- Use match rule copy logic to quickly define many custom match rules
- Monitor SAP system virtual memory to reduce the chance of system halts
- See the detailed SAP Release Notes.
(GA v23.2 Released February 27, 2023)

Xamarin Agent
Customize the Xamarin Agent with WebView instrumentation, Fragment Tracking, and Controller Tracking. (GA v23.2 Released February 14, 2023)

AppDynamics On-premises enhancement highlights
NOTE | See the full On-premises and APM Platform Release Announcements for a complete, ongoing, and sortable list of enhancements.
There is an Enterprise Console update, v23.1.1, and one specifically for Events Service v23.2.0. (GA v23.1.1 Released February 15, 2023)

Enterprise Controller 23.1.1
A number of minor enhancements are included with the Enterprise Controller February release, including a Jetty upgrade and other third-party component upgrades such as Azure JRE, MySQL, and NuProcess, along with typical bug fixes.

Events Service 23.2.0
Events Service 23.2.0, part of the overall Enterprise Console 23.1.1 upgrade, is intended only for new cluster deployments.
Existing Enterprise Console deployments can still upgrade the Controller but will not get their Events Service upgraded at this time.

NOTE | Events Service 23.2.0:
- Upgrades from a prior version are not supported today (to be made available later in the year)
- Is applicable for the Enterprise Console distribution and is not supported on Windows
- SSL communication between Elasticsearch nodes is not yet supported
- There is a known issue where an Unknown Server Error occurs when searching for a particular analytics event service

What else should I know?

Upcoming deprecation
Node.js Agent: as of March 31, 2023, the Node.js Agent will no longer support Node.js version 12.

Community News
Check out the new Welcome Center, where Community members can get self-service and many-to-many help with the community platform's features and best practices. Read how-to articles in Community 101, or raise or answer questions in Welcome Center discussions.
TIP | To find the Welcome Center from anywhere in the Community, click Groups on the navigation bar, then select Welcome Center.

AppDynamics University News
The AppDynamics University team has launched a refreshed user experience and released a number of new courses. Read the summary in the Education Product Updates, January and February 2023.

Resolved issues
See the complete lists of resolved issues in the AppDynamics Cloud Release Notes and AppDynamics (CSaaS) Release Notes. The following key issues were resolved in February:
- DBMON-8446 | NullPointerException error occurs while resolving MongoDB topology
- PYTHON-1049 | Oracle connection strings with empty username and password fields display a CxOracleConnectionInterceptor._connect error

Essentials
PLANNING AN UPGRADE? | Please check backward compatibility in the Agent and Controller Compatibility documentation as part of your upgrade planning process.
- Download Essential Components (Agents, Enterprise Console, Controller (on-prem), Events Service, EUM Components)
- Download Additional Components (SDKs, Plugins, etc.)
- How do I get started upgrading my AppDynamics components for any release?
- Product Announcements, Alerts, and Hot Fixes
- Open Source Extensions
- License Entitlements and Restrictions
- Introducing AppDynamics Cloud
I'm new to Splunk, so apologies if this is a silly question. I have a log file that reads:

2023-03-22 00:57:09,517 INFO TestScript - Generating reports with date of 20230321 and thread pool size of 5
...
...
2023-03-22 00:59:23,681 INFO MultiTestScript - Multi Test report generation completed successfully!

and I am trying to extract the elapsed time between these two events. If I try this search

<search terms> | transaction startswith="Generating reports" endswith="report generation completed"

I get "no results found". If I search for the two halves of the transaction separately, i.e.

<search terms> | transaction startswith="Generating reports"

and

<search terms> | transaction endswith="report generation completed"

the search returns the appropriate part of the log file. As soon as I combine the startswith= and endswith= fields in a single search, however, I get no results. This query works properly with another log file. The only difference I can see between the files is that the file that works contains multiple transactions (i.e. "Generating report"/"report generation completed" pairs) while the files that won't work contain only one.
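One possible culprit (hedged): transaction has a default maxevents limit of 1000, so if the elided "..." section contains more than that many events between the start and end lines, the pair gets split and neither fragment satisfies both startswith and endswith. A stats-based approach avoids transaction entirely; this sketch assumes <search terms> matches both log lines.

    <search terms> ("Generating reports" OR "report generation completed")
    | stats earliest(_time) as start, latest(_time) as end
    | eval elapsed_seconds = end - start
    | eval elapsed = tostring(elapsed_seconds, "duration")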
We have some MS DNS logs we want to ingest, and we want to clean up some of the text before processing. Essentially the field data is coming in as (10)somedomain(3)diy(8)whatever(3)com(0) and we only want to show somedomain.diy.whatever.com. I have the first part, I think, using search as a test of course:

| rex field=query mode=sed "s/\(.*?\)/./g"

which leaves me with .somedomain.diy.whatever.com. I can't seem to find a way to get rid of the leading and trailing dots. Is there a way to do it all in one line? Bear with me here, this is new territory for me. Thanks for your help.
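One way to do it in a single line (a sketch using eval instead of sed mode; it overwrites the query field in place): replace each (N) length marker with a dot, then trim dots from both ends.

    | eval query = trim(replace(query, "\(\d+\)", "."), ".")

replace() does the regex substitution and trim(X, ".") strips the leading and trailing dots in one pass.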
Historical license usage is not showing the graph for some days, but the data is all there. The search string is below; it has never changed and worked well before.

(index=_internal host=xxxxxx source=*license_usage.log* type="RolloverSummary" earliest=-30d@d)
| eval _time=('_time' - 43200)
| bin _time span=1d
| stats latest(b) AS b by slave, pool, _time
| timechart span=1d sum(b) AS "volume" fixedrange=false
| join type=outer _time
    [| search (index=_internal host=xxxxxxx source=*license_usage.log* type="RolloverSummary" earliest=-30d@d)
     | eval _time=('_time' - 43200)
     | bin _time span=1d
     | dedup _time stack
     | stats sum(stacksz) AS "stack size" by _time]
| fields - _timediff
| foreach "*" [ eval <<FIELD>>=round('<<FIELD>>'/1024/1024/1024, 3) ]
Hi experts, has anyone had any experience using "Python for Scientific Computing" to classify Japanese text? Is the app made to work with the Japanese language? Thank you for sharing in advance!
Hi. The subject is confusing, so here goes. I have 3 log lines:

org=A Status=Success
org=A Status=Fail
org=B Status=Success

I would like to get stats for orgs that have Status=Success, but not if those orgs have even one log where Status=Fail. I tried to filter in the search query (but then I still get the line where org=A Status=Success) and in a WHERE clause, with no luck. I'm trying to find the right method for this kind of filtering. Thanks!
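A common pattern for this (a sketch; it assumes org and Status are extracted fields): count successes and failures per org in one stats pass, then keep only the orgs with zero failures.

    <your search>
    | stats count(eval(Status="Success")) as success_count, count(eval(Status="Fail")) as fail_count by org
    | where fail_count == 0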
I want to have a table or chart where I can see the failure % over the past 30 days vs. today, and output the difference too. So the table should have: successes over the past 30 days, failures over the past 30 days, failure % over the past 30 days, total over the past 30 days, successes today, failures today, failure % today, total today, and failure % today minus failure % over the past 30 days. So far I have something like this, for only the past-30-day fields:

earliest=-30days
| eval status=case('httpReturnCode' == 200, "Success", 'httpReturnCode' != 200, "Invalid")
| stats count(eval(status="Success")) as Success, count(eval(status="Invalid")) as Failure by loggingObject.methodName
| eval "Failure(%)"=(Failure/(Success + Failure)) * 100
| eval Total = Success + Failure
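One way to extend this to both windows in a single search (a sketch built on the fields already used above; note the 30-day window here includes today): flag each event as belonging to today or not, then compute both sets of counts with conditional stats.

    <search terms> earliest=-30d@d
    | eval status=if('httpReturnCode' == 200, "Success", "Failure")
    | eval is_today=if(_time >= relative_time(now(), "@d"), 1, 0)
    | stats count(eval(status="Success")) as Success30, count(eval(status="Failure")) as Failure30, count(eval(status="Success" AND is_today=1)) as SuccessToday, count(eval(status="Failure" AND is_today=1)) as FailureToday by loggingObject.methodName
    | eval Total30=Success30+Failure30, TotalToday=SuccessToday+FailureToday
    | eval "Failure%30"=round(Failure30/Total30*100, 2), "Failure%Today"=round(FailureToday/TotalToday*100, 2)
    | eval Difference='Failure%Today'-'Failure%30'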
Hello, is it possible to retrieve Splunk DDSS-archived data using SmartStore with AWS S3 in an Enterprise instance?
I have a .csv file that I uploaded as a lookup file, and it works fine when I run a search. If I ask another user to run the same search, it times out for them; there is no error, it just does nothing. I have the permissions set to read/write for everyone. Should the permissions be different?
We are trying to invoke alerts from Splunk to NetCool and are wondering what the right approach would be. We came up with 3 proposals:
Solution 1: Create a script, invoke it in alert actions, and pass the parameters.
Solution 2: Create a custom command, append it to the SPL, and pass the arguments.
Solution 3: Create a custom alert action with HTML form fields (just like Send Email/SNOW) - Preferred
We also came across the Splunk dev documentation at Create custom alert actions for Splunk Cloud Platform or Splunk Enterprise. Any feedback would be appreciated.
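For Solution 3, the skeleton is small (a sketch; the action name, label, and parameter are hypothetical): a stanza in alert_actions.conf inside a custom app, plus a script in the app's bin directory that Splunk invokes with --execute and feeds the triggered-alert payload as JSON on stdin when payload_format = json.

    # alert_actions.conf -- hypothetical "netcool" action
    [netcool]
    is_custom = 1
    label = Send to NetCool
    description = Forward triggered alert details to NetCool
    payload_format = json
    python.version = python3
    param.netcool_endpoint =

The param.* settings become the form fields in the alert action UI, backed by an HTML fragment in the app's default/data/ui/alerts directory.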
Hi everyone, I am looking for ideas on how to implement a case where subsearches are run based on the user's choice from a checkbox input. For example: [subsearch 1] if the choice matches "YES", [subsearch 2] if the choice matches "NO", [subsearch 3] if the choice matches "Maybe"; then combine the results and display them in a single panel. Worst case, all options are selected.
Is there a way to send an alert email if today's hourly stats are 25% higher than the hourly stats for the same day last week?
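One possible shape (a sketch; the index and the exact definition of "stats" are placeholders): search today plus the same weekday last week in one query, bucket both by hour of day, and alert when any hour exceeds last week's figure by 25%.

    index=foo ((earliest=@d latest=now) OR (earliest=-7d@d latest=-6d@d))
    | eval which=if(_time >= relative_time(now(), "@d"), "today", "last_week")
    | bin _time span=1h
    | eval hour=strftime(_time, "%H")
    | chart count over hour by which
    | where today > last_week * 1.25

Saved as an alert with the trigger condition "number of results > 0", this would email whenever at least one hour crosses the threshold.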