Enhance full-stack observability by correlating Mobile Real User Monitoring (Mobile RUM) with network intelligence

In this article:
- What is Customer Digital Experience Monitoring (CDEM)?
- CDEM offers two-way data flow and analysis across stacks in real time
- How do I correlate data across APM, RUM, and NPM domains?
- Latest CDEM updates extend two-way information sharing to include Mobile RUM
- Additional resources

What is Customer Digital Experience Monitoring (CDEM)?

In June 2023, Cisco released Customer Digital Experience Monitoring (CDEM), a bi-directional integration between AppDynamics™ and ThousandEyes™ that extended Cisco's Full-Stack Observability by combining application, network, and user experience monitoring. It helps customers to:

- Proactively identify gaps in monitoring and deliver optimal digital experiences to their users
- Correlate poor user experiences in customer applications with the network issues that cause them
- Reduce MTTR and prioritize network remediation based on the business impact of user experience issues

This integration provides a powerful correlation between network insights and the application experience users have in their web browsers.

CDEM offers two-way data flow and analysis across stacks in real time

Data flows in real time across the APM, RUM, and NPM stacks, is correlated in near-real time, and is then analyzed and presented through insightful visualizations.

AppDynamics APM (Application Performance Management): monitors and manages application performance, providing real-time insights into metrics such as response times, errors, and resource utilization.

AppDynamics RUM (Real User Monitoring): tracks the experience of real users interacting with an application, providing insight into page load times, user interactions, and more.

ThousandEyes NPM (Network Performance Management): monitors the performance of internet infrastructure, including network latency, packet loss, and bandwidth utilization.

By correlating network insights for the same network domains across the NPM, APM, and RUM stacks, CDEM provides our customers with full-stack observability across all three. In addition to the existing correlation between network performance and browser-based user experience, this release extends coverage to mobile applications.

How do I correlate data across APM, RUM, and NPM domains?

In AppDynamics, applications are the entities that contain application performance insights and associated user experience data to provide business observability. These applications contain network domains against which ThousandEyes network tests can be configured to gather regular network insights.

1. Data is collected from common entities across domains, including metrics for application performance, user experience, and network performance.
2. The collected data is correlated to identify patterns and relationships, associating network performance issues with user experience metrics and, in turn, with application performance metrics.
3. The correlated data is analyzed to identify the root causes of performance issues and to optimize the application and network.
4. The insights are presented through visualization tools such as dashboards, charts, and reports, making the information accessible and actionable.

Latest CDEM updates extend two-way information sharing to include Mobile RUM (MRUM)

RUM is further categorized into Browser RUM (BRUM) and Mobile RUM (MRUM). Earlier this year, we launched CDEM to integrate ThousandEyes network insights with Browser RUM. The current release extends the integration to include Mobile RUM as well.

With bi-directional information sharing between MRUM, APM, and NPM, this solution eliminates silos and provides end-to-end visibility to every team, all from within Cisco AppDynamics. It helps isolate issues such as slow mobile application responsiveness caused by network problems by visualizing aggregated mobile user experience metrics alongside network metrics over the same timelines.

Additional resources

AppDynamics SaaS documentation: End User Monitoring
In the Blog: Mobile Real User Monitoring and Cisco ThousandEyes Integration
I wish I were more well-versed in the various deployment architectures for Splunk and what they mean for app / add-on deployment, but I'm not and am stuck at the moment. A customer has asked whether an app we have published to Splunkbase supports Search Head Clustering. Having read through some documentation on what it is and how it works, I'm still uncertain what that means with respect to my app.

Does anyone know (or can point me to a resource that I've yet to unearth) what "support Search Head Clustering" means, how I would know whether my app supports it, and what must be done by an app developer to support it? I can say with certainty that we did not do anything special during the development process to support this, but that doesn't mean it isn't supported inherently ... so I'm at a loss.
We are in the process of implementing a SAML configuration in Splunk, utilizing an external .pem certificate. However, Splunk does not accept this certificate. How can we get Splunk to accept an external certificate so we can successfully configure SAML? Additionally, for the SAML integration, we are utilizing NetIQ Access Manager.
I have a Splunk result like below.

VM    col1   col2
vm1   car    sedan
vm2   car    sedan
vm3   plane  Priv
vm4   bike   Fazer
vm5   bike   thunder

I would like to present it in the format below; would you please advise? I want to merge cells with the same value into one (merge columns).
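A minimal sketch of one way to group rows that share a value (an assumption that the goal is one row per col1 value, with the matching VMs and col2 values collapsed into multivalue cells):

```
... base search ...
| stats list(VM) as VM, list(col2) as col2 by col1
| table VM, col1, col2
```

With the sample data above this would produce one row for "car" (vm1, vm2 / sedan, sedan), one for "plane", and one for "bike".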
Hello, I am working on creating a search that evals results and adds boolean strings. The results will then be passed as a token to later searches. The result of the search could be a single ID or multiple IDs. The idea is that the first panel lists IDs, and the next panel in the dashboard searches an index, but only for IDs from the first panel. For example, Panel 1:

index=db source=MSGTBL MSG_src="XXXX" MSG_DOMAIN="CCCCCCCC" "<messageType>AAA</messageType>"
| eval MSGID1="MSGID="+MSGID+" OR"
| table MSGID1

might give you a table of MSGIDs:

MSGID=56454GF-5RT1KL-566IOS-FT5GFAS OR
MSGID=56454GF-65WE-566IOS-5845UIK OR
MSGID=SD8734-DFH745-DFHJ7867-GKJH8 OR

I can then set that as a token like:

<done>
  <set token="tokMSGID1">$result.MSGID1$</set>
</done>

The issue I'm having is that if there is only a single MSGID it will have an 'OR' at the end, and the last result in a set of IDs also ends with 'OR'. Can anyone tell me, search-wise, how to handle this? Thanks!
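One common way to avoid the trailing 'OR' (a sketch, assuming a single token string is the goal; the search filters and field names are taken from the post) is to collect the IDs into a multivalue field first and join them with mvjoin, so the separator only appears between values:

```
index=db source=MSGTBL MSG_src="XXXX" MSG_DOMAIN="CCCCCCCC" "<messageType>AAA</messageType>"
| stats values(MSGID) as MSGID
| eval MSGID1="MSGID=" . mvjoin(MSGID, " OR MSGID=")
| table MSGID1
```

A single ID then yields just "MSGID=...", with no dangling OR to strip.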
Hi, When I execute this search: index=foo | stats count by _raw, sourcetype, source, host | where count>1 , I'm able to observe events with counts higher than 1. However, I'm uncertain whether these events are duplicated. Is there an alternative search I can use to verify whether these events are being double-ingested? Thanks.
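A sketch of one way to check for double ingestion (an assumption here: genuinely re-ingested copies of the same raw event will usually carry different index times, while a single event counted twice will not), comparing distinct _indextime values per identical event:

```
index=foo
| eval indexed_at=_indextime
| stats dc(indexed_at) as distinct_index_times, count by _raw, sourcetype, source, host
| where count > 1
```

Rows where distinct_index_times is greater than 1 are stronger evidence of re-ingestion than a raw count alone.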
Hello, at the moment we are indexing JSON files in Splunk and then renaming the fields with a Field Alias. This leads to the problem that we cannot use tstats on these renamed fields anymore.

Now to the question: is there a way to rename the fields in Splunk before indexing the data? The goal is to be able to use tstats on all fields under their new names.
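A minimal sketch of one approach, assuming access to props.conf and transforms.conf on the indexing tier (the sourcetype, stanza, and field names below are placeholders, not taken from the post): INGEST_EVAL in transforms.conf can create index-time fields under new names, which tstats can then query.

```
# props.conf
[my_json_sourcetype]
TRANSFORMS-rename_at_ingest = rename_fields_at_ingest

# transforms.conf
[rename_fields_at_ingest]
INGEST_EVAL = new_field_name:=json_extract(_raw, "old.field.path")
```

The := operator writes an indexed field; note that indexed fields add to index size, so this is usually reserved for fields that are genuinely needed in tstats.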
index=jedi domain="jedi.lightside.com" (master!="yoda" AND master!="mace" AND master="Jinn")
| table saber_color, Jname, strengths, mentor, skill, domain, mission

index=sith broker sithlord!=darth_maul
| table saber_color, Sname, strength, teacher, actions

I need to list where Jname=Sname, but I need to list all columns. The third one is where Jname!=Sname. The caveat is that I cannot use join for this query. This helped, however I am unable to utilize the index drill-down for each in the search, otherwise the query is 75% white noise:

index=jedi OR index=sith
| eval name=coalesce(Jname, Sname)
| stats values(name) as names by saber_color strengths
| where mvcount(names)=1

Please help.
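A sketch of one join-free pattern (an assumption that Jname and Sname identify the same entity; the per-index filters are copied from the post) that keeps each index's own filters, preserves all columns, and flags which names appear on both sides:

```
(index=jedi domain="jedi.lightside.com" master!="yoda" master!="mace" master="Jinn")
OR (index=sith broker sithlord!=darth_maul)
| eval name=coalesce(Jname, Sname)
| stats values(*) as * by name
| eval matched=if(isnotnull(Jname) AND isnotnull(Sname), "both", "one")
```

Following this with | where matched="both" gives the Jname=Sname list, and | where matched="one" gives the non-matching rows, each with all of its original columns.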
Hi, I want to import entities via CSV into Entity Management in Splunk ITSI. Please help me with this. Thanks
Hello Experts, I currently have a CSV file that contains fields such as ID, IP, OS, status, tracking_method, Last_boot, First_found_date, last_activity, hostname, domain, etc. I want to ingest it as metrics data. Is that possible? I'd appreciate any guidance or examples of how to achieve this. Thanks in advance.
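Worth noting that a metric data point needs a numeric measure, so the mostly textual CSV fields would become dimensions. A sketch of one approach (assumptions: the CSV is available as a lookup, a constant measure of 1 per asset is acceptable, and the lookup, metric, and index names are placeholders):

```
| inputlookup my_assets.csv
| eval metric_name="asset.seen", _value=1
| mcollect index=my_metrics_index ID, IP, OS, status, hostname, domain
```

The fields listed after the index become dimensions on each data point; numeric CSV columns could instead be mapped into _value to carry real measures.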
Getting "Unexpected error downloading update: Connection reset by peer" while trying to install an add-on from Splunkbase (via 'Find more apps').

The internet is connected, and I'm able to access the Splunk application as well; only the installation is failing. Before this, I was getting an SSL error when I tried to open this page. I then set sslVerifyServerCert to false, after which the page started loading. I'm not sure if some SSL-related blocking still exists. Any suggestions for getting through this?
Does Splunk share a common user base among all Splunk products? Which API request fetches audit logs or events for Splunk users?
When I try to use the below code to test the API search:

    var context = new Context(Scheme.Https, "www.splunk.com", 443);
    using (var service = new Service(context, new Namespace(user: "nobody", app: "search")))
    {
        Run(service).Wait();
    }

    /// <summary>
    /// Called when [search].
    /// </summary>
    /// <param name="service">The service.</param>
    /// <returns></returns>
    static async Task Run(Service service)
    {
        await service.LogOnAsync("aaa", "bbb");

        //// Simple oneshot search
        using (SearchResultStream stream = await service.SearchOneShotAsync("search index=test_index | head 5"))
        {
            foreach (SearchResult result in stream)
            {
                Console.WriteLine(result);
            }
        }
    }

it fails with the error message: XmlException: Unexpected DTD declaration. Line 1, position 3.

Question: in this line:

    new Namespace(user: "nobody", app: "search")

how should the "user" and "app" parameter values be defined? I also tried this way:

    var service = new Service(new Uri("https://www.splunk.com"));

but it still failed with the same error message.
Hi, below are the log details:

index=ABC sourcetype=logging_0

Below are the values of the "ErrorMessages" field:

invalid - 5 count
unprocessable - 7 count (5 paired with invalid + 2 others)
no user foundv - 3 count
invalid message process - 3 count
process failed - 3 count

Now I have to eliminate ErrorMessage="invalid" and ErrorMessage="unprocessable", then show all other ErrorMessages. The problem is that "unprocessable" shows up for other messages as well, so we cannot fully eliminate it. Whenever an "invalid" ErrorMessage is logged, an "unprocessable" ErrorMessage is also logged; we need to eliminate only those pairs, not every "unprocessable" ErrorMessage.

Expected result:

unprocessable - 2 count
no user foundv - 3 count
invalid message process - 3 count
process failed - 3 count

I tried a join using requestId, but it returned nothing because I used | search ErrorMessage="Invalid" and eliminated it in the next query, so it did not search for the other ErrorMessages. Can someone please help.
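A sketch of one join-free approach (an assumption that requestId ties an "invalid" message to its companion "unprocessable" message; index, sourcetype, and field names are taken from the post): flag the requests that contain "invalid", then drop "unprocessable" only on those requests.

```
index=ABC sourcetype=logging_0
| eventstats count(eval(ErrorMessage="invalid")) as has_invalid by requestId
| where NOT (ErrorMessage="invalid" OR (ErrorMessage="unprocessable" AND has_invalid > 0))
| stats count by ErrorMessage
```

Because eventstats keeps all events while attaching the per-request flag, the standalone "unprocessable" events (those with has_invalid=0) survive the filter.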
Hello, could anyone assist me in creating a correlation search to detect triggered alerts across all searches? This would enable us to monitor counts and automatically notify us if any situation escalates beyond control. Thanks
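A sketch of one common starting point (assumptions: triggered alerts are recorded in the internal audit index, which requires access to _audit, and the hourly bucketing is a placeholder for whatever window suits the escalation policy):

```
index=_audit action=alert_fired
| timechart span=1h count by ss_name
```

Wrapping this with a threshold condition (for example, alerting when any saved search fires more than N times per hour) would give the automatic-notification behavior described above.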
Hi All, I need help writing a query based on the field "Timestamp", which is different from the "_time" value.

Sample event in XML format:

Email: xyz@gmail.com
RoleName: User
RowKey: 123456
Timestamp: 2023-12-13T23:56:18.200016+00:00
UserId: mno
UserName: acho

This is a sample event, and the "Timestamp" field in it is completely different from the _time value. I want to pull only events whose "Timestamp" value falls on a particular day, e.g. yesterday, 2023-12-13, i.e. from 2023-12-13 00:00:00 to 2023-12-13 23:59:59. How can I write a query for this?

index=abc host=xyz sourcetype=xxx
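A sketch of one way to filter on the parsed Timestamp field rather than _time (the format string is matched to the sample value above, and the day boundaries are hard-coded for illustration; index, host, and sourcetype come from the post):

```
index=abc host=xyz sourcetype=xxx
| eval ts=strptime(Timestamp, "%Y-%m-%dT%H:%M:%S.%6N%:z")
| where ts >= strptime("2023-12-13 00:00:00 +0000", "%Y-%m-%d %H:%M:%S %z")
    AND ts <= strptime("2023-12-13 23:59:59 +0000", "%Y-%m-%d %H:%M:%S %z")
```

Note that the outer time range picker still filters on _time, so the search window has to be wide enough to include events whose Timestamp falls on the target day.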
Hi Team, I am using a query which has the same index and source but fetches two results based on the search and combines them into a single table. Now I want to display the result along with the timestamp it appears, in ascending order:

index=index1 source=source1 CASE("latest") AND "id" AND "dynamoDB data retrieved for ids" AND "material"
| eval PST=_time-28800
| eval PST_TIME3=strftime(PST, "%Y-%d-%m %H:%M:%S")
| spath output=dataNotFoundIdsCount path=dataNotFoundIdsCount
| stats values(*) as * by _raw
| table dataNotFoundIdsCount, PST_TIME3
| sort - PST_TIME3
| appendcols
    [search index=index1 source=source1 CASE("latest") AND "id" AND "sns published count" AND "material"
    | eval PST=_time-28800
    | eval PST_TIME4=strftime(PST, "%Y-%d-%m %H:%M:%S")
    | spath output=snsPublishedCount path=snsPublishedCount
    | spath output=republishType path=republishType
    | spath output=version path=republishInput.version
    | spath output=publish path=republishInput.publish
    | spath output=nspConsumerList path=republishInput.nspConsumerList{}
    | spath output=objectType path=republishInput.objectType
    | stats values(*) as * by _raw
    | table snsPublishedCount,republishType,version,publish,nspConsumerList,objectType,PST_TIME4
    | sort - PST_TIME4]
| table PST_TIME4 objectType version republishType publish nspConsumerList snsPublishedCount dataNotFoundIdsCount
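Since appendcols pairs rows by position rather than by time, a sketch of an alternative (an assumption that a single time-ordered table is the goal; the search filters are abbreviated from the post, and only two of the spath extractions are shown) that runs both message filters in one search and sorts on one timestamp:

```
index=index1 source=source1 CASE("latest") AND "id" AND "material"
    AND ("dynamoDB data retrieved for ids" OR "sns published count")
| eval PST_TIME=strftime(_time-28800, "%Y-%m-%d %H:%M:%S")
| spath output=dataNotFoundIdsCount path=dataNotFoundIdsCount
| spath output=snsPublishedCount path=snsPublishedCount
| sort 0 + PST_TIME
| table PST_TIME dataNotFoundIdsCount snsPublishedCount
```

Each row then keeps its own event time, and | sort 0 + PST_TIME gives the ascending order across both message types.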
Hi All, I am facing an error using a wildcard in a multivalue field. I am using mvfind to find a string:

eval test_loc=case(isnotnull(Region,%bangalore%), Bangalore)

I am just giving part of the eval statement here. Example: Region = "sh bangalore Test". The above eval statement should act on this Region and set test_loc = Bangalore. I tried passing * and % (*bangalore*, %bangalore%), but I am getting an error. Please help me. Thanks, poojitha NV
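Note that isnotnull() takes a single argument and case() does not accept wildcards directly; a sketch of a substring match instead (an assumption that a case-insensitive match on "bangalore" anywhere in Region is the intent):

```
| eval test_loc=case(match(Region, "(?i)bangalore"), "Bangalore")
```

like(Region, "%bangalore%") would work similarly if the %-style wildcard syntax is preferred, though it is case-sensitive.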
Hello! I'm new to Splunk, so any help is much appreciated. I have two queries over different indexes.

Query1:
index=rdc sourcetype=sellers-marketplace-api-prod custom_data
| search "custom_data.result.id"="*"
| dedup custom_data.result.id
| timechart span=1h count

Query2:
index=leads host="pa*" seller_summary
| spath input="Data"
| search "0.lead.form.page_name"="seller_summary"
| dedup 0.id
| timechart span=1h count

I would like to write a query that computes Query1 minus Query2 for the counts in each hour. It should be in the same format. Thank you!!
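A sketch of one way to subtract the hourly counts (the two searches are copied from the post; the count1/count2 field names are illustrative): because both timecharts use the same span, appendcols lines their hourly rows up side by side, after which eval can take the difference.

```
index=rdc sourcetype=sellers-marketplace-api-prod custom_data "custom_data.result.id"="*"
| dedup custom_data.result.id
| timechart span=1h count as count1
| appendcols
    [search index=leads host="pa*" seller_summary
    | spath input="Data"
    | search "0.lead.form.page_name"="seller_summary"
    | dedup 0.id
    | timechart span=1h count as count2]
| eval diff=count1-count2
| table _time diff
```

This relies on both searches covering the same time range so the hourly buckets align.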
Hi, I need help with a Splunk search. My requirement is to get the stats for failed and successful counts along with the percentage of Failed and Successful, and finally to fetch the stats only when the failed % is > 10%. My query works fine up to this point:

index=abcd
| eval status=case(statuscode < 400, "Success", statuscode > 399, "Failed")
| stats count(status) as TOTAL count(eval(status="Success")) as Success_count count(eval(status="Failed")) as Failed_count by Name, URL
| eval Success%=((Success_count/TOTAL)*100)
| eval Failed%=((Failed_count/TOTAL)*100)

The above works and I get a table with Name, URL, TOTAL, Success_count, Failed_count, Success%, and Failed%. But when I add the below to the above query, it fails:

| where Failed% > 10

How do I filter the above table to Failed% > 10? Please assist.
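Field names containing special characters like % generally need quoting in eval/where: double quotes when the field is assigned, single quotes when its value is read. A sketch applying that to the query above (an assumption that the unquoted Failed% is what trips the where clause):

```
index=abcd
| eval status=case(statuscode<400, "Success", statuscode>399, "Failed")
| stats count as TOTAL, count(eval(status="Success")) as Success_count, count(eval(status="Failed")) as Failed_count by Name, URL
| eval "Success%"=round((Success_count/TOTAL)*100, 2)
| eval "Failed%"=round((Failed_count/TOTAL)*100, 2)
| where 'Failed%' > 10
```

Alternatively, renaming the fields to Success_pct / Failed_pct avoids the quoting issue entirely.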