Hi, I have around 5 panels in a dashboard, each of which also has its own child panel. Each of these panels contains a table whose data is fetched from logs. I need to convert all the parent panels into tabs in the dashboard.
Hi, I have a message body text like the one below:

```
message : RequestBody : : :{
    individualValue : {
        xxxxxx;
        YYYY;
        ----------------------- (many lines of text in between)
    }
}
```

How can I fetch the string "individualValue" from the message body?
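A minimal SPL sketch of one way to capture that block with rex, assuming the raw event contains the literal text "individualValue : {" and that the block holds no nested braces (both assumptions; adjust the pattern to your actual payload):

```
| rex "individualValue\s*:\s*\{(?<individualValue>[^}]*)\}"
| table individualValue
```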
I was trying to extract an IP address field. During a search, using |rex "[[ipv4]]" works fine and creates an ip field. I then wanted to save this field extraction, so I used the field extractor to do so, edited the regular expression to [[ipv4]], and saved it, but it did not work. I tried taking it down a level, editing the saved regular expression to (?<ip>[[octet]](?:\.[[octet]]){3}), which also works with the rex command during a search, but did not work when saved in the field extractor. I took it down one final level, changing it to (?<ip>(?:2(?:5[0-5]|[0-4][0-9])|[0-1][0-9][0-9]|[0-9][0-9]?)(?:\.(?:2(?:5[0-5]|[0-4][0-9])|[0-1][0-9][0-9]|[0-9][0-9]?)){3}), which doesn't use modular regular expressions but finally does work in both the search and the saved field extraction. I haven't found anything in the Splunk docs saying that modular regular expressions can't be used in the field extractor, so I thought it best to check here whether that is the case, or whether there is some other issue I can't think of.
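For reference, here is the working extraction from the post expressed as a props.conf inline extraction. The stanza name is a hypothetical placeholder for the relevant sourcetype; the regex is the fully expanded, non-modular form that the poster reports finally worked:

```
# props.conf -- [my_sourcetype] is a placeholder; only the expanded regex worked when saved
[my_sourcetype]
EXTRACT-ip = (?<ip>(?:2(?:5[0-5]|[0-4][0-9])|[0-1][0-9][0-9]|[0-9][0-9]?)(?:\.(?:2(?:5[0-5]|[0-4][0-9])|[0-1][0-9][0-9]|[0-9][0-9]?)){3})
```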
I have a column with events recorded at one-hour intervals. Example:

```
Date            Value
2010-1-1 1:00   20
2010-1-1 2:00   22
2010-1-1 3:00   21
2010-1-1 4:00   19
2010-1-1 5:00   16
...             ...
2010-1-1 24:00  12
```

I want to group this into one row per day, i.e. display it in the following format:

```
Date       Value
2010-1-1   (average of the 24 hourly values)
```

I want to achieve this in Splunk.
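A minimal SPL sketch of the usual approach: bucket the timestamps to one day and average. The index and sourcetype names are placeholders, and this assumes the field is literally named Value and that _time has been parsed from the Date column:

```
index=your_index sourcetype=your_sourcetype
| bin _time span=1d
| stats avg(Value) as Value by _time
| eval Date=strftime(_time, "%Y-%m-%d")
| table Date Value
```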
Hello, here's my problem: I made a search that calculates the duration between two jobs. The jobs are supposed to run during our overnight window. The first starts around 10 PM and the last around 00:30, so roughly 2h30 later. It works fine, but if job A starts after midnight (e.g. 09/04 at 00:09) I can't get the calculation and get two broken rows instead:

```
09/06/21 02:30:42 21:50:00 00:20:41
09/04/21 00:00:00 03:19:24 03:19:24  <<
09/03/21 00:00:00 00:09:52 00:09:52  <<
09/02/21 02:31:56 21:56:44 00:28:40
```

It should display only one line for that overnight run: 09/04/21 03:09:32 00:09:52 03:19:24. Sometimes it's also wrong the other way, I guess because job A started very late (4:36 AM) and after the previous job B run:

```
09/20/21 02:19:10 22:02:02 00:21:12
09/18/21 02:48:11 04:36:59 07:25:10  <<< ??
09/16/21 02:14:33 22:22:41 00:37:13
```

The search:

```
| tstats latest(evt_tsk_id) as evt_tsk_id, latest(evt_tsk_status) as evt_tsk_status, latest(evt_mes_occ_ts) as evt_mes_occ_ts, latest(evt_mes_occ_ts_epoch) as evt_mes_occ_ts_epoch
    where index=INDEX1 APP_env=ENV1 APP_inst=INSTANCE
    (evt_tsk_id="JOB_A" AND evt_tsk_status="1") OR (evt_tsk_id="JOB_B" AND evt_tsk_status="2")
    by _time span=1H
| bucket _time span=6H
| stats min(evt_mes_occ_ts_epoch) as start, max(evt_mes_occ_ts_epoch) as end by _time
| eval N_duration = tostring(round(end-start,0), "duration")
| eval _time = strftime(_time,"%m/%d/%y")
| convert timeformat="%H:%M:%S" ctime(start) AS JOB1
| convert timeformat="%H:%M:%S" ctime(end) AS JOB2
| rename _time as date
| table date N_duration JOB1 JOB2
| reverse
```

Thanks in advance.
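One common fix, sketched here but not tested against this data: shift the timestamps before bucketing so an entire overnight run (roughly 10 PM to early morning) lands in a single daily bucket, then shift back for display. The 12-hour offset (43200 seconds) is an assumption; any offset that keeps both jobs of one run on the same side of the bucket boundary will do:

```
| tstats latest(evt_mes_occ_ts_epoch) as evt_mes_occ_ts_epoch
    where index=INDEX1 APP_env=ENV1 APP_inst=INSTANCE
    (evt_tsk_id="JOB_A" AND evt_tsk_status="1") OR (evt_tsk_id="JOB_B" AND evt_tsk_status="2")
    by _time span=1H
| eval night=_time-43200
| bin night span=1d
| stats min(evt_mes_occ_ts_epoch) as start, max(evt_mes_occ_ts_epoch) as end by night
| eval N_duration=tostring(round(end-start,0), "duration")
| eval date=strftime(night+43200, "%m/%d/%y")
```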
I have recently created a field extraction on one search head, to which I have assigned read and write access for all apps and users, and I was wondering how long it would take for a change made on one search head to be replicated to the other search heads. Also, from what I know, changes made via the GUI are always replicated to other SHs. Is this true? If so, what CAN and CANNOT be replicated across search heads via the GUI? Thanks, Regards,
Hi All, We are planning to configure some of our universal forwarders to use multiple pipeline sets. Do you have some SPL we can use to identify which forwarders have blocked queues and need an increased number of pipeline sets?
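A sketch of the usual starting point: forwarders report queue metrics to the _internal index via metrics.log, and blocked queues appear there with blocked=true. Field names follow standard splunkd metrics events; verify against your own _internal data:

```
index=_internal source=*metrics.log* sourcetype=splunkd group=queue
| stats count(eval(blocked=="true")) as blocked_count, count as total by host, name
| eval blocked_pct=round(blocked_count/total*100, 2)
| where blocked_count > 0
| sort - blocked_pct
```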
Hi, I probably have an easy question for those who have done this before. I have set up a universal forwarder to collect Windows performance counters, and the collection and forwarding work fine. The thing I am curious about is that in the forwarder's inputs config I have specified it to collect:

```
stats = average;min;max;dev;count
```

But in Splunk I receive an event containing Value, Min, Max, Dev and Count, everything except the "average" value. Is the average contained in the Value field instead, or have I done something wrong in the config?

```
collection=Processor object=Processor counter="% Idle Time" instance=_Total Value=97.1635562216005 Min=59.084268219671145 Max=99.46225663681797 Dev=6.00739691330151 Count=300
```

From the config:

```
[perfmon://Processor]
index = main
interval = 600
counters = % Processor Time;% Idle Time
object = Processor
instances = *
formatString = %.20g
instance = _Total;% Idle Time
samplingInterval = 2000
stats = average;min;max;dev;count
mode = single
disabled = false
```
Hi, We are using Microsoft SQL Server as the database for one of our applications. For Microsoft SQL Server, by default, we are able to see basic hardware metrics such as CPU usage, memory usage, and disk I/O. Is it possible to also get disk usage by using DB Agents? Regards, Madhusri R
I want to get metrics from multiple index/sourcetype combinations. I have been using the append clause and subsearches to do it, but I need to process a lot of events and hit the limitations of subsearches: although I get all the data from the primary query, the appends get truncated.

I'm sure there is an easy way of doing this, and it's what Splunk is meant to do, but I can't work out how to cater for the different manipulation that needs to be done depending on the index and sourcetype. The following is a relatively simple example, but I have more complex queries that need to calculate rates from absolute values, etc.

So basically I have three queries (one needs a join so I can do some calculations), keep _time, host and the metric I want, and then do the visualisation:

```
index=windows sourcetype=PerfmonMk:Memory host IN(host1,host2,host3)
| join type=outer host
    [ search index=windows sourcetype=WMI:ComputerSystem host IN(host1,host2,host3) earliest=-45d latest=now()
    | stats last(TotalPhysicalMemory) as TPM by host
    | eval TPM=TPM/1024/1024 ]
| eval mem=((TPM-Available_MBytes)/TPM)*100
| fields _time host mem
| append
    [ search index=linux sourcetype=vmstat host IN(host4,host5,host6)
    | where isnotnull(memUsedPct)
    | eval mem=memUsedPct
    | fields _time host mem ]
| append
    [ search index=unix sourcetype="nmon-MEMNEW" host IN(host7,host8,host9)
    | where isnotnull('Free%')
    | eval mem=100-'Free%'
    | fields _time host mem ]
| eval host=upper(host)
| timechart limit=0 span=1h perc95(mem) as Memory_Utilisation by host
```
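One way around the append truncation, sketched under an assumption: search all three index/sourcetype pairs in a single base search and normalise the metric per sourcetype with case(). The Windows branch still needs TotalPhysicalMemory from somewhere; a lookup populated by a separate scheduled search is assumed here, and memory_totals.csv is a purely illustrative name:

```
(index=windows sourcetype=PerfmonMk:Memory host IN(host1,host2,host3))
OR (index=linux sourcetype=vmstat host IN(host4,host5,host6))
OR (index=unix sourcetype="nmon-MEMNEW" host IN(host7,host8,host9))
| lookup memory_totals.csv host OUTPUT TPM
| eval mem=case(sourcetype=="PerfmonMk:Memory", ((TPM-Available_MBytes)/TPM)*100,
                sourcetype=="vmstat", memUsedPct,
                sourcetype=="nmon-MEMNEW", 100-'Free%')
| where isnotnull(mem)
| eval host=upper(host)
| timechart limit=0 span=1h perc95(mem) as Memory_Utilisation by host
```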
Watch this page for updates: click the 3-dot menu at upper right, then subscribe. Want to receive all monthly Product Updates? Click to subscribe to the series.

In September, we released the 21.9.0 Controller and Enterprise Console, as well as key product enhancements and agent updates. We also resolved a series of issues associated with the previous release.

Table of Contents: Feature Enhancements | Heads Up - What Else You Should Know | Beta Programs | Resolved Issues | Get Started | Ask a Question

What highlights should I know about?

These release highlights include the newest features and capabilities this month, at a glance. Below, we've flagged who within your organization may be most interested or impacted by each enhancement (User and Performance Analyst, Admin and Implementer, or Developer).

Full-stack Observability
- Mobile App Enhancements (User and Performance Analyst; Admin and Implementer)
- SAP Monitoring Updates (User and Performance Analyst; Admin and Implementer)

Agent Updates
- Database Agent (User and Performance Analyst; Admin and Implementer)
- Java Agent (User and Performance Analyst; Admin and Implementer)
- Xamarin Agent (User and Performance Analyst; Admin and Implementer)
- Python Agent (User and Performance Analyst; Admin and Implementer)
- Synthetic Private Agent (Linux-based) (User and Performance Analyst; Admin and Implementer)
- .NET Agent (User and Performance Analyst; Admin and Implementer)

System Administration Updates
- Updated user creation and password policies (Admin and Implementer)

Full-stack Observability

The latest SAP Monitoring Enhancements (Released September 6, 2021) include several enhancements and new capabilities that give you out-of-the-box visibility into certification expirations and number range usage. See the Release Notes for a complete list.

AppDynamics App 2.0 deep links are getting smarter. You can now click the web URL of any APM entity detail page or event screen on a mobile device, and it will open the content in the AppDynamics App. To learn more about our re-imagined mobile experience, see AppDynamics Mobile App in the documentation and check out the highlights of what's new in this blog post.

Agent Updates

Database Agent 21.9.0 (Released September 24, 2021) provides support for the additional roles of MongoDB replica sets and shard clusters. See the Release Notes for a complete list.

Java Agent 21.9.0 (Released September 28, 2021) adds support for JBoss EAP 7.3. See all Java Supported Environments.

The latest Xamarin Agent (Released September 9, 2021) includes a new API for tracking UI elements with Xamarin.Forms. See Track UI Elements to follow interactions with buttons, entries, and ListViews.

The .NET Agent (Released September 22, 2021) now includes the ability to manage Linux Agent console output via environment variable (Windows Agent parity).

Python Agent 21.9.0 (Released September 30, 2021) includes multiple updates:
- Updated the end_bt() API to allow user-provided exceptions.
- Added checks to handle invalid configuration values.
- Updated the Python agent APIs to add a maximum range bound on the size of the input string.

The latest Synthetic Private Agent (Linux-based) (Released September 9, 2021) includes the following:
- Chrome 86 (Headless) now supports scripts that navigate to multiple tabs in a browser.
- The session waterfall view is enhanced to capture and highlight failed resources.
- Increased the character limit of the location_id column from 50 to 256.

System Admin Updates

We released an enhancement that improves how administrators create users, and also updates our password policies. These changes are meant to improve security while laying the groundwork for new capabilities that will be rolled out in the near future. Check out the Changes to User Creation and Password Policy FAQ for details.
(Staged release started February 22, 2021) To view all technical documentation associated with this month's releases, please visit Product Announcements, Alerts, and Hot Fixes in our documentation portal.

Heads up - What else you should know

On October 18, 2021, we will be implementing synthetic monitoring behavioral changes for Internet Explorer (IE) 11 and Firefox, and deprecating support for Chrome versions 64 and 83. This change is part of an effort to bring our synthetic monitoring to a single browser architecture, allowing us to innovate faster and provide more value-added capabilities to better support our customers' needs. Read our knowledge base article for more information, including instructions and support resources to avoid any disruption to your services.

AppDynamics' technical certification recipients can now access official digital badges. Newly launched, each badge consists of an image and metadata uniquely linked to the individual and the specific certification. Shared at will on resumes, social media, email, and more, this credential grants viewers immediate access to validated information, including its issuance by Cisco and AppDynamics. For more information, see Introducing our new AppDynamics technical certification badges in the AppD University Group Hub. (June 17, 2021)

Want to learn about those small adjustments that make a big difference? We invite you to our Share a tip space, where AppD Community members can exchange peer-to-peer advice. Be sure to add your own insights too.

Share your feedback! Participate in a Beta today

We're currently running the following Beta program and looking for your feedback: submit feedback on the Dash Studio through the in-product feedback button, or in Community here.

Resolved Issues

The following key issue was resolved this month. To see a complete listing of this month's resolved issues, see Resolved Issues by Month.
- Replace check to support tenantNames starting with numbers (IAM-1944)

Get Started

PLEASE NOTE: customers are advised to check backward compatibility in the Agent and Controller Compatibility documentation.
- Download Essential Components (Agents, Enterprise Console, Controller (on-prem), Events Service, EUM Components)
- Download Additional Components (SDKs, Plugins, etc.)
- How do I get started upgrading my AppDynamics components for any release?
- Product Announcements, Alerts, and Hot Fixes
- Extensions
- License Entitlements and Restrictions

Can't find what you're looking for? Need more assistance? Ask a question in the Latest Release forum.
Hi guys, I am very new to Splunk and this is only my first week using it. What I want to do is view the performance logs of my own local machine and then put them into a dashboard. It would also be good to get the number of times I have logged into my laptop, if that is possible. The question is: do I need to use a universal forwarder to do all this? I am not sure; from what I have read online, the universal forwarder is used for remote machines, but since this machine is local, would I still need one? I imagine this is a very noobie question, but I need the help if someone is able to give it. Thank you
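A minimal sketch, assuming Splunk Enterprise is installed directly on the Windows laptop: a local Splunk Enterprise instance can read Windows Event Logs and performance counters itself, so no universal forwarder is needed for local data. Logon counts would come from the Security event log (EventCode 4624 on modern Windows):

```
# inputs.conf on the local Splunk Enterprise instance (a sketch)
[WinEventLog://Security]
disabled = 0

[perfmon://CPU]
object = Processor
counters = % Processor Time
instances = _Total
interval = 60
disabled = 0
```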
Hi, I have a query which I am not sure why it's not working. Assume I have the following JSON record, which has been extracted at index time:

index: network
sourcetype: devices
record:

```
{ "deviceId": 1234, "hostName": "Router1" }
```

1. index=network sourcetype=devices deviceId=1234 => works as expected
2. index=network TERM(sourcetype::devices) => works as expected
3. index=network TERM(sourcetype::devices) deviceId=1234 => fails, returns 0 records
4. index=network TERM(sourcetype::devices) earliest=-7d@d => fails, returns 0 records
5. index=network sourcetype::devices deviceId=1234 => works as expected
6. index=network sourcetype::devices deviceId::1234 => works as expected
7. index=network sourcetype::devices deviceId::1234 earliest=-7d@d => works as expected

The real question is: why do queries 3 and 4 fail when the others work, especially when query 2 works and returns the correct data? What impact does TERM() have in the processing flow, such that adding earliest or = makes it fail? cheers -brett
Hi, I tried to find this in the docs but had no luck; more than happy to RTM if someone has the link. On the black menu bar, top right, there is Help, with submenus of: ... Tutorials, Help with this page, File a bug ... I want to change where these point to, or be able to leverage the links they point to. For example:

Help with this page: where do I put my own docs so they will be used?
File a bug: I want this to point to my Jira.
Tutorials: I want this to point to a wiki or SharePoint or ?

Cheers -brett
So I am very new to Splunk and have just started using it. What I want to do is view my own laptop's operating system file logs and performance data. What I have been doing is logging into my Splunk instance, selecting the "Add Data" button, and from there selecting "Monitor". For example, I have chosen to monitor my local Event Log, but for some reason when I search I get nothing back, so something is wrong and I don't know what. Please help.
Users of the SaaS controller can check the IP range of the SaaS controller themselves. In some closed environments, it may be necessary to whitelist the IP addresses used to access the SaaS controller.

1. Go to the controller and press "My AppDynamics Account", or go directly to https://accounts.appdynamics.com/
2. A list of subscriptions may be shown. Click Actions.
3. The IP ranges of the SaaS controller are shown.
Here's an example of some error logs that simply show which app reported an error and in which country:

```
_time(s)  sourcetype  country
0         app1        US
1         app1        DE
2         app2        DE
65        app2        US
66        app2        US
67        app1        DE
```

Here's the timechart I would like to retrieve (span=1m):

```
_time                app1                app2
2021-09-30 00:00:00  {"US": 1, "DE": 1}  {"DE": 1}
2021-09-30 00:01:00  {"DE": 1}           {"US": 2}
```

Is this, or something similar, possible?
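A sketch of one way to approximate this: count by minute, sourcetype, and country, build the country/count pairs as strings, then pivot with xyseries. One caveat (my observation, not from the post): the resulting cells are JSON-shaped strings, so this suits a table rather than a rendered chart:

```
... base search ...
| bin _time span=1m
| stats count by _time sourcetype country
| eval pair="\"".country."\": ".count
| stats list(pair) as pairs by _time sourcetype
| eval cell="{".mvjoin(pairs, ", ")."}"
| xyseries _time sourcetype cell
```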
I have a multi-site cluster and am planning on decommissioning one site to turn it into a single-site cluster. I'm looking over these two guides:

https://docs.splunk.com/Documentation/Splunk/8.0.2/Indexer/Decommissionasite
https://docs.splunk.com/Documentation/Splunk/8.1.2/Indexer/Converttosinglesite

and trying to see how to do both, preferably at the same time. When converting to single-site, the docs say to stop the entire cluster, update the configurations, then start the cluster back up. Is there any issue with making the configuration changes needed to decommission the old site while everything is offline, and only bringing up the remaining site? Basically, the current plan is:

1. Stop all nodes
2. Update the manager configs:
   - Set multisite to false
   - Set single-site search/replication factors
   - Remove the site attribute
   - Remove the available_sites attribute/site mappings
3. Update the search head configs:
   - Set multisite to false
   - Remove the site attribute
4. Start the nodes remaining from the new site

Would this work, or would it cause conflicts in replication somehow? Do I need to use Splunk commands on the cluster manager to remove the old indexers?
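For reference, a sketch of what the single-site settings in the plan above might look like in server.conf on the cluster manager. The factor values are placeholders and the attribute names follow Splunk's convert-to-single-site documentation; verify against the docs for your version:

```
# server.conf on the cluster manager (sketch; factor values are placeholders)
[clustering]
mode = master
multisite = false
replication_factor = 3
search_factor = 2
# removed: available_sites, site_replication_factor, site_search_factor

[general]
# removed: site = site1
```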
I have a table that looks like this:

```
Time                 Host  User  Activity
2021-01-01 01:02:01  ABC   Test  CommandLine: C:/Users/Cool/cool.exe File: cool.exe Hash: yr3f7r98jkfd7y38ykry73
2021-01-01 01:02:02  ABC   Test  CommandLine: C:/Users/Lame/lame.exe File: lame.exe Hash: kf39utkuk0ulftu39uk30utk
2021-01-01 01:02:03  ABC   Test  CommandLine: C:/Users/Idk/idk.exe File: idx.exe Hash: 9l09uk8dtyjy4j4098tk48
```

The query I used to make the table looks something like this:

```
host=ABC User=Test
| rename host AS Host
| eval Time=strftime(_time,"%Y-%m-%d %H:%M:%S"), Activity=mvappend("CommandLine: ".CommandLine, " ", "File: ".File, " ", "Hash: ".Hash)
| table Time Host User Activity
| dedup consecutive=true Activity sortby Time
```

I am trying to use a drilldown so that when I click the hash in my dashboard, it redirects me to a website. The issue I'm having is that when I add the link and click the hash, instead of giving me just the hash, "9l09uk8dtyjy4j4098tk48", it gives me the entire cell, "Hash: 9l09uk8dtyjy4j4098tk48", which breaks my URL.

Expected output: https://website.com/9l09uk8dtyjy4j4098tk48
Actual output: https://website.com/Hash: 9l09uk8dtyjy4j4098tk48

Another issue is that no matter which cell I click, it tries to redirect me to the website. Example: https://website.com/CommandLine: C:/Users/Lame/lame.exe

How can I make it so only clicking the hash value gives my expected output?
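A sketch of one approach in Simple XML, assuming a table drilldown where $click.value2$ carries the clicked cell's text: guard with a condition so only cells starting with "Hash: " trigger the link, strip the prefix into a token, and add an empty catch-all condition so other cells do nothing. Element and token names follow Simple XML drilldown conventions; adjust to your dashboard:

```
<drilldown>
  <!-- only follow the link when the clicked cell starts with "Hash: " -->
  <condition match="match(&quot;$click.value2$&quot;, &quot;^Hash: &quot;)">
    <eval token="hash">replace("$click.value2$", "^Hash:\s*", "")</eval>
    <link target="_blank">https://website.com/$hash$</link>
  </condition>
  <!-- empty catch-all: clicks on other cells do nothing -->
  <condition></condition>
</drilldown>
```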
We've had good success auto-instrumenting an all-Java Kubernetes application with the cluster agent, but we require the ability to use a custom APPDYNAMICS_AGENT_NODE_NAME. During manual instrumentation, this property can be set as an ENV in the container the Java agent attaches to, but it's not clear from the documentation how to do this from the cluster agent config: https://docs.appdynamics.com/21.4/en/infrastructure-visibility/monitor-kubernetes-with-the-cluster-agent/auto-instrument-applications-with-the-cluster-agent/auto-instrumentation-configuration

I am using the latest cluster agent operator and cluster agent, with a cluster-agent.yaml as follows:

```
apiVersion: appdynamics.com/v1alpha1
kind: Clusteragent
metadata:
  name: k8s-cluster-agent
  namespace: appdynamics
spec:
  appName: "demo"
  controllerUrl: "http://xxx.com:80"
  account: "xxx"
  logLevel: "DEBUG"
  # docker image info
  image: "docker.io/appdynamics/cluster-agent:latest"
  serviceAccountName: appdynamics-cluster-agent
  nsToMonitor: [demo]
  instrumentationMethod: Env
  nsToInstrumentRegex: demo
  appNameStrategy: manual
  defaultAppName: demo
  #defaultCustomConfig: "-Dappdynamics.agent.nodeName=manual-test"
  defaultEnv: JAVA_TOOL_OPTIONS
  resourcesToInstrument: [Deployment, StatefulSet]
  instrumentationRules:
    - namespaceRegex: demo
      language: java
      appName: demo
      # customAgentConfig: -Dappdynamics.agent.nodeName="manual-test"
      # customAgentConfig: -Dappdynamics.agent.nodeName=${APPDYNAMICS_AGENT_NODE_NAME}
      customAgentConfig: APPDYNAMICS_AGENT_NODE_NAME="manual-test"
      imageInfo:
        image: docker.io/appdynamics/java-agent:20.3.0
        agentMountPath: /opt/appdynamics
```

I tried all three variations of customAgentConfig above, with APPDYNAMICS_AGENT_NODE_NAME also set in the target deployment. Any help would be much appreciated.