All Topics

Hey guys, I have a Node.js application and I use Winston to print out our application's logs, e.g. logger.info({responseStatus: 200}). I am not using a log file, just printing the log to stdout. I am not quite sure what's causing the issue here. Logging works fine in other environments: each log statement shows up as a separate event, so I can keep track of the event field names. But in the production environment, my logs are mixed with console.log output and treated as one event instead. It looks something like this (just an example, but similar). I am new to Splunk Enterprise, and I am not quite sure where my configuration file is located. It's OK if there's no solution, but I would like to hear some advice from the Splunk experts on what may be causing this.
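When JSON log lines and stray console.log output get merged into one event, the usual suspect is event line breaking on the Splunk side rather than the app. A minimal props.conf sketch for the sourcetype receiving the stdout logs, assuming each log record is one line; the sourcetype name "node_app" is a placeholder, not from the post:

# props.conf -- sketch only; "node_app" is a hypothetical sourcetype name
[node_app]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
KV_MODE = json

With SHOULD_LINEMERGE disabled, each newline-terminated record becomes its own event, so interleaved console.log lines at least land in separate events instead of being glued onto the JSON ones.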
Good afternoon, I hope you are well. I am migrating my alerting environment from TheHive to ES. I would like to know whether, in ES, when creating correlation searches, I can configure a field in the notable event that analysts can edit. For example, when creating cases in TheHive, I include the desired field, and analysts set its value when they take the case for processing. Despite studying, I couldn't figure out how to implement this in a notable event so that analysts can provide input, such as identifying the technology involved or deciding whether the case should be forwarded. This would help me use it for auditing purposes later on. Is it possible to achieve this in ES?
Hello team, I am trying to set up a proxy on a Splunk heavy forwarder. I did it by setting the http_proxy environment variable, but Splunk's bundled Python is not honouring the environment variable set on the Linux machine where the HF is installed. If I run the Python script with the system Python, it gets the data through the proxy; if I run the same script with Splunk's own Python (splunk cmd python under /opt/splunk), it does not go through the proxy. Is there any way we can make Splunk honour environment variables?
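For splunkd itself (and Python run under it), Splunk reads proxy settings from server.conf rather than the shell environment. A sketch of the [proxyConfig] stanza, with placeholder host and port values:

# server.conf in $SPLUNK_HOME/etc/system/local/ -- host/port values are placeholders
[proxyConfig]
http_proxy = http://proxy.example.com:3128
https_proxy = http://proxy.example.com:3128
no_proxy = localhost, 127.0.0.1

Note that individual add-ons with modular inputs often carry their own proxy configuration, so this stanza may not cover every script a heavy forwarder runs.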
Dear All,

Scenario --> One AV server has multiple endpoints reporting to it. This AV server is integrated with Splunk, and through the AV server we receive DAT version info for all the reporting endpoints.

Requirement --> Generate a monthly AV DAT compliance report. The criterion for DAT compliance is 7 days: within 7 days a system should be updated to the latest DAT.

Work done so far --> There is no intelligence in the data to get the latest DAT from the AV-Splunk logs; only the endpoints that are updated with the Nth DAT come through. I used the eval command and tied the latest/today's DAT to today's date (used today_date, converted to today_DAT). Based on that, I am able to calculate 7-day DAT compliance, keeping today_DAT on the 8th day as the reference. This Splunk query gives correct data for any time frame, but only for the past 7 days' compliance.

Issue --> For the past 30 days, i.e. the 25th to the 25th of every month, I want to divide the logs into 7-day time frames starting from, e.g., 25 Dec, 1 Jan, 8 Jan, 15 Jan, 22 Jan, up to 25 Jan (the last slot being less than 7 days), then calculate compliance for each 7-day time frame to know the overall compliance on 25 Jan, and accordingly aggregate the 25 Dec through 25 Jan data into the final monthly report.

Where I'm stuck --> In my current query I tried to add the "bin" command with a 7-day span, but I am unable to tie the latest DAT date (the today_DAT date, e.g. for 1 Jan) to the 7th day for the first bin, then 8 Jan for the second bin, and so on. If there is any other method/query to do the same, kindly let me know.

PFA screenshot for your reference. @PickleRick @ITWhisperer @yuanliu
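One rough way to approximate a per-window reference DAT is to bin events into 7-day windows and treat the highest DAT version seen inside each bin as that bin's "latest". A sketch only: the index, sourcetype, and the DAT_version/endpoint field names are assumptions, and it presumes DAT versions increment roughly daily so "within 7 versions" approximates "within 7 days":

index=av sourcetype=av:dat earliest=-31d@d
| bin _time span=7d
```highest DAT seen in the window stands in for that window's reference DAT```
| eventstats max(DAT_version) as reference_DAT by _time
| stats latest(DAT_version) as endpoint_DAT latest(reference_DAT) as reference_DAT by _time endpoint
```compliant if within ~7 daily DAT releases of the window's reference```
| eval compliant=if(reference_DAT - endpoint_DAT <= 7, 1, 0)
| stats avg(compliant) as compliance_rate by _time

The final stats gives one compliance rate per 7-day window, which can then be averaged again for the monthly figure.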
Hi, kind of new to Splunk. I am sending data to Splunk via HEC. It's a DTO that contains various fields, one of them being requestBody, which is a string containing the JSON payload my endpoint receives. When viewing the log event within Splunk, requestBody stays a string. I was hoping it could be expanded so that the JSON fields would be searchable. As you can see, when I click on "body", the whole line is selected. I am hoping for, for example, "RYVBNQ" to be individually selectable so that I can run searches against it.
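If requestBody arrives as an escaped JSON string inside the event, one search-time option is to run spath over that field, which parses it and promotes its keys to searchable fields. A sketch; the index and sourcetype are placeholders:

index=main sourcetype=hec:dto
```requestBody is the string field from the post; spath parses its contents as JSON```
| spath input=requestBody

After the extraction, the nested keys become first-class fields that can be filtered on like any other field.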
Does the Palo Alto Networks App no longer have a page where you can view and filter network traffic activity?
Start a new learning journey or pick up where you left off by logging into Cisco U. Today is the day to explore all that Cisco U. offers!

This means your AppDynamics University subscription just migrated to a Cisco U. subscription. If you still need to set up your Cisco U. login, follow the setup process at this how-to on the AppDynamics Community and continue your learning journey.

Cisco U. now has AppDynamics training and so much more. Find content on Networking, Cloud and Computing, Software, Security, and Data Center in a variety of media to suit your learning style. Upskill and cross-skill with:
- Learning Paths guiding you to certifications
- Self-assessments to gauge your starting point
- Courses
- Tutorials
- Videos

Get started with recommendations based on your interests. Bookmark the content you want to learn and create your unique learning journey.

Additional resources

Learn more about Cisco U. or the migration:
- AppDynamics Community migration FAQ
- Cisco U. home page
- Cisco U. plans page
- Cisco U. YouTube channel
- Cisco U. 101 on YouTube
Hello everybody, I'm new here and recently I set this up:

- Ubuntu: Splunk server
- Ubuntu: Splunk forwarder
- Windows 10: Splunk forwarder

I followed the Splunk how-to video for the Ubuntu splunkfwd: https://www.youtube.com/watch?v=rs6q28xUd-o&t=191s

I can see my host in the data summary but not in Forwarder Management: how would you explain that? I'm thinking it may be permissions. I also added a deploymentclient.conf in /opt/splunkforwarder/etc/system/local/:

[deployment-client]

[target-broker:deploymentServer]
targetUri = 192.ipfromserver:8089

Have a great evening.
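A forwarder only appears under Forwarder Management after it successfully phones home to the deployment server on the management port, which is separate from the data port, so data can arrive while the forwarder stays invisible there. A quick check from the forwarder, assuming the default install path (the IP placeholder is kept from the post):

/opt/splunkforwarder/bin/splunk set deploy-poll 192.ipfromserver:8089
/opt/splunkforwarder/bin/splunk show deploy-poll
/opt/splunkforwarder/bin/splunk restart

"set deploy-poll" writes the same targetUri as deploymentclient.conf; a restart is required either way, and port 8089 on the deployment server must be reachable from the forwarder.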
I have a search as follows:

index=*
| search sourcetype=*
| spath logs{} output=logs
| spath serial_number output=serial_number
| spath result output=result
| table serial_number result
```stats dc(serial_number) as throughput```
| stats count(eval(if(result="Fail",1,null()))) as failures count(eval(if(result="Pass",1,null()))) as passes

This returns a table shown in the capture with failures=215 and passes=350. How can I get these results as two separate bars in one bar chart? Basically I want to show the pass/fail rate.

Sample of the JSON data I am working with:

{"serial_number": "30913JC0024EW1482300425", "type": "Test", "result": "Pass", "logs": [
{"test_name": "UGC Connect", "result": "Pass"},
{"test_name": "Disable UGC USB Comm Watchdog", "result": "Pass"},
{"test_name": "Hardware Rev", "result": "Pass", "received": "4"},
{"test_name": "Firmware Rev", "result": "Pass", "received": "1.8.3.99", "expected": "1.8.3.99"},
{"test_name": "Set Serial Number", "result": "Pass", "received": "1 A S \n", "expected": "1 A S"},
{"test_name": "Verify serial number", "result": "Pass", "received": "JC0024EW1482300425", "expected": "JC0024EW1482300425", "reason": "Truncated full serial number: 30913JC0024EW1482300425 to JC0024EW1482300425"},
{"test_name": "Thermocouple", "pt1_ugc": "24969.0", "pt1": "25000", "pt2_ugc": "19954.333333333332", "pt2": "20000", "pt3_ugc": "14993.666666666666", "pt3": "15000", "result": "Pass", "tolerance": "1000 deci-mV"},
{"test_name": "Cold Junction", "result": "Pass", "ugc_cj": "278", "user_temp": "270", "tolerance": "+ or - 5 C"},
{"test_name": "Glow Plug Open and Short", "result": "Pass", "received": "GP Open, Short, and Load verified OK.", "expected": "GP Open, Short, and Load verified OK."},
{"test_name": "Glow Plug Power On", "result": "Pass", "received": "User validated Glow Plug Power"},
{"test_name": "Glow Plug Measure", "pt1_ugc": "848", "pt1": "2070", "pt1_tolerance": "2070", "pt2_ugc": "5201", "pt2": "5450", "pt2_tolerance": "2800", "result": "Pass"},
{"test_name": "Motor Soft Start", "result": "Pass", "received": "Motor Soft Start verified", "expected": "Motor Soft Start verified by operator"},
{"test_name": "Motor", "R_rpm_ugc": 1525.0, "R_rpm": 1475, "R_v_ugc": 160.0, "R_v": 155, "R_rpm_t": 150, "R_v_t": 160, "R_name": "AUGER 320 R", "F_rpm_ugc": 1533.3333333333333, "F_rpm": 1475, "F_v_ugc": 164.0, "F_v": 182, "F_rpm_t": 150, "F_v_t": 160, "F_name": "AUGER 320 F", "result": "Pass"},
{"test_name": "Fan", "ugc_rpm": 2436.0, "rpm": 2130, "rpm_t": 400, "ugc_v": 653.3333333333334, "v": 630, "v_t": 160, "result": "Pass"},
{"test_name": "RS 485", "result": "Pass", "received": "All devices detected", "expected": "Devices detected: ['P']"},
{"test_name": "Close UGC Port", "result": "Pass"},
{"test_name": "DFU Test", "result": "Pass", "received": "Found DFU device"},
{"test_name": "Power Cycle", "result": "Pass", "received": "User confirmed power cycle"},
{"test_name": "UGC Connect", "result": "Pass"},
{"test_name": "Close UGC Port", "result": "Pass"},
{"test_name": "USB Power", "result": "Pass", "received": "USB Power manually verified"}]}
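One way to get Pass and Fail as two separate bars is to group by the result field instead of computing two separate columns. A sketch reusing the search from the post:

index=* sourcetype=*
| spath result output=result
| stats count by result

With one row per result value, the Bar Chart visualization draws one bar per row. Alternatively, keep the original query and append | transpose, which turns the single row of failures/passes columns into two rows.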
Hi, we are currently trying to test an HTTP Event Collector token by sending events directly to the cloud before we use HEC for an OpenTelemetry connector, but we are stuck at a 403 Forbidden error. Is there something wrong with this curl command? Not sure if it affects anything, but we are still on Splunk Cloud Classic. Screenshots attached; we appreciate any help we can get!
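Without the screenshots it is hard to say, but a 403 from HEC usually means the token is wrong, disabled, or sent to the wrong endpoint. For comparison, a minimal curl test shaped for Splunk Cloud; the stack name, port, and token are placeholders, and the assumption here is that managed Splunk Cloud stacks expect the http-inputs- host prefix:

curl "https://http-inputs-<your-stack>.splunkcloud.com:443/services/collector/event" \
  -H "Authorization: Splunk <your-hec-token>" \
  -d '{"event": "hec smoke test", "sourcetype": "manual"}'

If this still returns 403, it is worth checking that the token is enabled and that its allowed indexes include the one the event targets.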
When trying to schedule a PDF delivery for a dashboard, the error message Parameter "name" must be 100 characters or less is displayed. The dashboard runs fine, export PDF has no issues. Where do I find this "name" parameter?
Hi, the AppD Ansible collection for the machine agent has an issue: if you want to change the values of tier, application, or node_name but they already have values in the conf file, you cannot change them without first uninstalling and then re-installing the agent. I can't give a link to the git repo, because the Ansible collection does not expose which git repo it was synced from. The collection page is https://galaxy.ansible.com/ui/repo/published/appdynamics/agents/, which also contains a tarball of the code. The specific code is in both: roles/java/tasks/merging-controller-info.yml (starting at line 98) and roles/machine/tasks/merging-controller-info.yml (starting at line 108). I can submit a PR for this if you point me to the git repo, or otherwise request that it be fixed. Any suggestions or a way through?
Hi, In our environment, we utilize Windows security logs for our security purposes. To reduce licensing costs, I'm considering switching the render XML setting to false. I'm wondering if this is advisable, especially given our focus on security use cases. Could you highlight the major distinctions between using XML and non-XML formats for these logs? Thanks.
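For reference, the setting in question lives in inputs.conf on the Windows hosts (typically via the deployed Splunk Add-on for Microsoft Windows); a sketch:

# inputs.conf -- sketch; where this stanza lives depends on how the Windows TA is deployed
[WinEventLog://Security]
renderXml = false

The Windows add-on can parse both formats, but the field extractions differ between XML and classic events, so any searches, apps, or CIM mappings keyed to one format should be validated after switching.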
Hi team, I have the following search, and I want to trigger an alert when the condition is 'OFFLINE'. Note that we receive logs every 2 minutes, and the alert should be triggered only once; subsequent alerts should be suppressed. Similarly, when the condition becomes 'ONLINE', I want to trigger an alert only once, with subsequent alerts suppressed. I hope my requirement is clear.

index="XXXX" invoked_component="YYYYY" "Genesys system is available"
| spath input=_raw output=new_field path=response_details.response_payload.entities{}
| mvexpand new_field
| fields new_field
| spath input=new_field output=serialNumber path=serialNumber
| spath input=new_field output=onlineStatus path=onlineStatus
| where serialNumber!=""
| lookup Genesys_Monitoring.csv serialNumber
| where Country="Egypt"
| stats count(eval(onlineStatus="OFFLINE")) AS offline_count count(eval(onlineStatus="ONLINE")) AS online_count
| fillnull value=0 offline_count
| fillnull value=0 online_count
| eval condition=case(
    offline_count=0 AND online_count>0, "ONLINE",
    offline_count>0 AND online_count=0, "OFFLINE",
    offline_count>0 AND online_count>0 AND online_count>offline_count, "OFFLINE",
    offline_count>0 AND online_count>0 AND offline_count>online_count, "OFFLINE",
    offline_count=0 AND online_count=0, "No data")
| search condition="OFFLINE" OR condition="ONLINE"
| table condition
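For the "alert only once per state" part, one option is Splunk's built-in alert throttling: trigger for each result and suppress repeats by the condition field. A sketch of the equivalent savedsearches.conf settings; the suppression period is a placeholder:

# savedsearches.conf -- throttle repeated alerts that share the same condition value
alert.track = 1
alert.suppress = 1
alert.suppress.fields = condition
alert.suppress.period = 24h

One caveat: field-based throttling suppresses repeats of the same value for the whole period, so an OFFLINE that recurs within the window after an intervening ONLINE would also be suppressed; detecting true state transitions would require comparing the current state against the previous run's state, e.g. via a lookup.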
Hello, is it possible to get the serial numbers of Windows/Linux machines being ingested into Splunk using the Splunk Add-on for Windows or Linux? Thanks.
Our custom app had changes to its views, and these changes are not getting updated. I zipped the custom app and followed the install-from-file process. The custom app passed AppInspect version 3.0.3 after I figured out how to run the slim generate-manifest command. It took a few tries to get it correct, but I have uploaded this custom app to Splunk Cloud. When I use the app, I expect the latest XML code for our custom views to be used, but the data is not displaying correctly in the chart. When I click the "Open in search" icon, I get an old version of the view's search query, which explains why the chart looks funny. Has anyone dealt with this before? Are there tricks to clearing out obsolete views when uploading a new version? I have incremented the minor and release versions for other reasons, and I do know the cloud expects the version to increment. Our last working version was 1.0.115 and my current version is 1.1.7.
My Linux web server is running Apache and I'd like Splunk to analyze the logs. I'm using the "Splunk App for Web Analytics". I followed the documentation, imported my Apache log files, and installed the "Splunk Add-on for Apache Web Server". My Apache logs are getting properly parsed in Splunk, and I updated the eventtype web-traffic to point to the logs by source type. I'm running into a problem configuring the Web Analytics app. It found two log files (access_log and ssl_access_log) and I pointed them to the site's domain. access_log appears to be configured correctly, but ssl_access_log gives the error "Site not configured". Lastly, running "Generate user sessions" and "Generate pages" shows zero events. There are no results in any of the app's dashboard menus, but I do see plenty of logs in the raw search. Any idea what's going on? Here are two screenshots of my configs:
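Since the app's "Generate user sessions" searches build on the web-traffic eventtype, it may be worth confirming that the eventtype actually matches both sourcetypes before rerunning the setup. A quick check:

```list which sourcetypes and hosts the web-traffic eventtype currently matches```
eventtype=web-traffic
| stats count by sourcetype, host

If ssl_access_log's sourcetype is missing from the results, the eventtype definition (or the site configuration for that source) is the place to look.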
Hi everybody, maybe a noob question: when I configure the JavaScript agent, I noticed that you just have to copy-paste a script into the main page of your web app, and the AppKey value is included in that script. But this AppKey is visible if you open the dev tools of any browser. Is there any problem or risk if I leave the AppKey visible in my web app? Any suggestion on how to hide it? I'm working with SvelteKit, but I guess it will be the same for most JavaScript frameworks.
Enhance full-stack observability by correlating Mobile Real User Monitoring (Mobile RUM) with network intelligence

In this article...
- What is Customer Digital Experience Monitoring (CDEM)?
- CDEM offers two-way data flow and analysis across stacks in real time
- How do I correlate data across APM, RUM, and NPM domains?
- Latest CDEM updates extended two-way information sharing, now including MRUM
- Additional resources

What is Customer Digital Experience Monitoring (CDEM)?

In June 2023, Cisco released Customer Digital Experience Monitoring (CDEM), a bi-directional integration between AppDynamics™ and ThousandEyes™ that extended Cisco's Full Stack Observability by combining application, network, and user experience monitoring into powerful customer digital experience monitoring. It helps our customers to:
- Proactively identify gaps in monitoring and deliver optimal digital experiences to their users
- Correlate inferior user experiences over customer applications with the network issues that cause them
- Reduce MTTR and prioritize network remediation based on the business impact of user experience issues

This integration offers a powerful correlation between network insights and the application experience users have in their web browsers.

CDEM offers two-way data flow and analysis across stacks in real time

Data flows in real time across the APM, RUM, and NPM stacks, is correlated in near-real time, and is then analyzed and presented through insightful visualizations.

AppDynamics APM (Application Performance Management): monitors and manages application performance by providing real-time insight into the application itself, including metrics for response times, errors, and resource utilization.

AppDynamics RUM (Real User Monitoring): tracks the experience of real users interacting with an application, including page load times, user interactions, and more.

ThousandEyes NPM (Network Performance Management): monitors the performance of internet infrastructure, including network latency, packet loss, and bandwidth utilization.

By correlating network insights for the same network domains across the NPM, APM, and RUM stacks, we provide our customers with full-stack observability across these stacks, packaged as the CDEM offering. In addition to the existing correlation between network performance and user experience in web browsers, this release includes mobile applications as well.

How do I correlate data across APM, RUM, and NPM domains?

In AppDynamics, applications are the entities that contain application performance insights and associated user experience data to provide business observability. These applications contain network domains against which ThousandEyes network tests can be configured for regular network insights.
- Data from common entities across domains is collected, including metrics for application performance, user experience, and network performance.
- The collected data is correlated to identify patterns and relationships, associating network performance issues with user experience metrics and, in turn, with application performance metrics.
- The correlated data is analyzed to gain insights, which helps identify the root causes of performance issues and optimize the application and network.
- The insights are presented through visualization tools such as dashboards, charts, and reports that make the information accessible and actionable for users.

Latest CDEM updates extended two-way information sharing to now include Mobile RUM (MRUM)

RUM is further categorized into Browser RUM (BRUM) and Mobile RUM (MRUM). Earlier this year, we launched CDEM to integrate ThousandEyes network insights with Browser RUM. In the current release, we have extended it to include Mobile RUM as well.

With bi-directional information sharing between MRUM, APM, and NPM, this solution eliminates silos and provides end-to-end visibility to every team, all from within Cisco AppDynamics. It helps isolate issues like slow mobile application responsiveness caused by network problems by visualizing aggregated mobile application user experience metrics alongside network metrics across the same timelines.

Additional resources
- AppDynamics SaaS documentation: End User Monitoring
- In the Blog: Mobile Real User Monitoring and Cisco ThousandEyes Integration
I wish I were more well-versed in the various deployment architectures for Splunk and what they mean for app/add-on deployment, but I'm not and am stuck at the moment. A customer has asked whether an app we have published to Splunkbase supports Search Head Clustering. Having read through some documentation on what it is and how it works, I'm still uncertain what that means with respect to my app. Does anyone know (or can point me to a resource I've yet to unearth) what "support Search Head Clustering" means, and how would I know whether my app supports it / what must be done by an app developer to support it? I can say with certainty that we did not do anything special during development to support this, but that doesn't mean it isn't supported inherently... so I'm at a loss.