"CEF:0|Bitdefender|GravityZone|6.35.1-1|35|Product Modules Status|5|BitdefenderGZModule=modules dvchost=xxx      BitdefenderGZComputerFQDN=xxxxx dvc=x.x.x.x deviceExternalId=xxxxx BitdefenderGZIsContainerHost=0 BitdefenderGZMalwareModuleStatus=enabled BitdefenderGZBehavioralScanAVCModuleStatus=enabled BitdefenderGZDataLossPreventionModuleStatus=disabled"   The logs are from Bitdefender and they show a time diff of +15 hrs. and there is no timestamp in logs no other source types from same HF show the behavior only bit-defender logs. All the help is appreciated to correct the time.
Hello Splunkers, I'm trying to send traces to APM Observability from an existing website built on Python (3.9.7), Django (4.1.3), and MySQL (8.0.32), hosted on Linux. I'm having problems configuring it via Python instrumentation. Here are the steps I followed in a virtual environment, based on the Splunk docs:

- installed the OpenTelemetry Collector via the curl script
- installed the instrumentation packages for the Python environment
- ran splunk-py-trace-bootstrap
- set environment variables (OTEL_SERVICE_NAME, OTEL_RESOURCE_ATTRIBUTES, OTEL_EXPORTER_OTLP_ENDPOINT, DJANGO_SETTINGS_MODULE)

When I enable the Splunk OTel Python agent, it gives me the following error:

Instrumenting of sqlite3 failed
ModuleNotFoundError: No module named '_sqlite3'
Failed to auto initialize opentelemetry
ModuleNotFoundError: No module named '_sqlite3'
Performing system checks...

I've already tried reinstalling sqlite3, and even downloaded the sqlite3 contents from the Python repository and manually replaced the sqlite3 files, but I still cannot proceed. Any help or direction would be very much appreciated. Thanks!
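For reference, a hedged sketch of the setup described above (all values are placeholders). The '_sqlite3' error generally means the Python interpreter in use was built without the sqlite C extension, so one possible workaround, if the site does not actually use sqlite3, is to ask auto-instrumentation to skip that instrumentor via the upstream OpenTelemetry Python option shown below; verify it is honored by your splunk-py-trace version.

# sketch only; values are placeholders
export OTEL_SERVICE_NAME="my-django-site"
export OTEL_RESOURCE_ATTRIBUTES="deployment.environment=prod"
export OTEL_EXPORTER_OTLP_ENDPOINT="http://localhost:4317"
export DJANGO_SETTINGS_MODULE="mysite.settings"
# skip the sqlite3 instrumentor so a missing _sqlite3 C extension does not abort startup
export OTEL_PYTHON_DISABLED_INSTRUMENTATIONS="sqlite3"
splunk-py-trace python manage.py runserver --noreload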
Hi Guys, we are collecting Kubernetes logs using HEC on our Splunk Cloud deployment. Whenever there is an ERROR entry in the logs, it has a timestamp on the first line, and the following lines, which contain the details of that error, are logged one after another. But when we view them in the Splunk console, these lines are split into multiple events, which leads to confusion. Is there any way we can merge these particular lines into a single event, so that all lines related to an error are visible as one event? Please help with this.
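How to fix this depends on how the data reaches HEC. If the records arrive through the raw endpoint (/services/collector/raw), event breaking happens in props.conf, and a sketch like the one below (sourcetype name is an assumption) keeps a multi-line error block attached to the timestamped line that precedes it. If the collector sends pre-parsed events to the event endpoint, the merging has to happen in the collector's own multiline configuration instead.

# props.conf sketch; sourcetype name is hypothetical
[kube:app:logs]
SHOULD_LINEMERGE = false
# break only where a new line starts with a timestamp, e.g. 2024-01-17T20:04:18
LINE_BREAKER = ([\r\n]+)(?=\d{4}-\d{2}-\d{2}[T ]\d{2}:\d{2}:\d{2})
TRUNCATE = 100000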
Hi Team, I have the following table in my dashboard:

Age | Approval | Name
61 | Approve | Sujata
29 | Approve | Linus
33 | Approve | Karina
56 | Approve | Rama

The requirement is to update "Approve" to "Approved" once the user clicks on a particular row, so the output should look like this:

Age | Approval | Name
61 | Approved | Sujata
29 | Approve | Linus
33 | Approve | Karina
56 | Approve | Rama
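One hedged way to sketch this in Simple XML (the index, label, and token names are assumptions): a row click stores the clicked Name in a token, and the table's search rewrites Approval for that one row. The init block just gives the token a default so the table renders before the first click.

<form>
  <label>Approvals</label>
  <init>
    <set token="clicked_name">__none__</set>
  </init>
  <row>
    <panel>
      <table>
        <search>
          <query>index=my_approvals
| table Age Approval Name
| eval Approval=if(Name="$clicked_name$", "Approved", Approval)</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
        <drilldown>
          <!-- remember the Name of the clicked row; the search above re-runs with it -->
          <set token="clicked_name">$row.Name$</set>
        </drilldown>
      </table>
    </panel>
  </row>
</form>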
Hi everyone, I created a lookup table:

Department,Vendor,Type,url_domain,user,src_ip,Whitelisted
BigData,Material,Google Remote Desktop,Alpha.com,Alice,172.16.28.12,TRUE

Then I created a lookup definition with this match type:

WILDCARD(url_domain), WILDCARD(user), WILDCARD(src_ip)

Then I tested it with the following search, but it didn't work:

index=fortigate src_ip=172.16.28.12 url_domain=Alpha.com
| lookup Whitelist url_domain user src_ip
| where isnull(Whitelisted)
| table _time, severity, user, url_domain, src_ip, dest_ip, dest_domain, transport, dest_port, vendor_action, app, vendor_eventtype, subtype, devname

It shows all results, including traffic from 172.16.28.12 by Alice to the mentioned URL. Does anyone have an idea what the issue is?
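For reference, a hedged sketch of what the lookup definition described above would look like in transforms.conf (the stanza and file names are assumptions matching the lookup name used in the search), plus an explicit-output form of the lookup call in case implicit output is part of the problem:

# transforms.conf sketch; stanza and file names are assumptions
[Whitelist]
filename = whitelist.csv
match_type = WILDCARD(url_domain), WILDCARD(user), WILDCARD(src_ip)
# lookup matching is case-sensitive by default; relax it if field values differ in case
case_sensitive_match = false

| lookup Whitelist url_domain AS url_domain user AS user src_ip AS src_ip OUTPUT Whitelisted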
Hello everyone, I have one indexer cluster with one search head, one manager node, and three peers. The configuration is RF=3 and SF=3. There is also a non-clustered indexer, and many universal forwarders send data to it. I want this non-clustered indexer to join the cluster and have the cluster take over the data coming into it, so that if this indexer fails, the other peers in the cluster hold copies of its data. What should I do?
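A hedged sketch of the enrollment step from the CLI, run on the standalone indexer (the manager host, ports, and secret are placeholders; on pre-8.1 versions the option is -master_uri instead of -manager_uri). The forwarders' outputs.conf would still need to be updated to point at the full set of cluster peers.

# run on the standalone indexer; values are placeholders
splunk edit cluster-config -mode peer -manager_uri https://manager.example.com:8089 -replication_port 9887 -secret <cluster_secret>
splunk restart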
Hello everyone, in the Investigation view, in the Workbench section, I want to add artifact types other than the ones that appear (asset, identity, file, url). I would like an artifact type "Process" and another type "Index". Where can I add custom artifact types to use in the Workbench?
I have a question regarding how to properly extract the time ranges between the events to use as a field value for a Date-Range column. I'm setting up the Chargeback app, and I'm making a specific report. Currently, I'm tracking the total ingestion by the Biz_Unit. The main Splunk query does fine, but there's a lot of time manipulation within the search, and I'm not sure how to properly set the date I need. Here is an example of some of the output.

This is the query; I know it's a large query, but it outputs all of the fields used in Chargeback:

`chargeback_summary_index` source=chargeback_internal_ingestion_tracker idx IN (*) st IN (*) idx="*" earliest=-7d@d latest=now | fields _time idx st ingestion_gb indexer_count License | rename idx As index_name | `chargeback_normalize_storage_info` | bin _time span=1h | stats Latest(ingestion_gb) As ingestion_gb_idx_st Latest(License) As License By _time, index_name, st | bin _time span=1d | stats Sum(ingestion_gb_idx_st) As ingestion_idx_st_GB Latest(License) As License By _time, index_name, st `chargeback_comment(" | `chargeback_data_2_bunit(index,index_name,index_name)` ")` | `chargeback_index_enrichment_priority_order` | `chargeback_get_entitlement(ingest)` | fillnull value=100 perc_ownership | eval shared_idx = if(perc_ownership="100", "No", "Yes") | eval ingestion_idx_st_GB = ingestion_idx_st_GB * perc_ownership / 100 , ingest_unit_cost = ingest_yearly_cost / ingest_entitlement / 365 | fillnull value="Undefined" biz_unit, biz_division, biz_dep, biz_desc, biz_owner, biz_email | fillnull value=0 ingest_unit_cost, ingest_yearly_cost, ingest_entitlement | stats Latest(License) As License Latest(ingest_unit_cost) As ingest_unit_cost Latest(ingest_yearly_cost) As ingest_yearly_cost Latest(ingest_entitlement) As ingest_entitlement_GB Latest(shared_idx) As shared_idx Latest(ingestion_idx_st_GB) As ingestion_idx_st_GB Latest(perc_ownership) As perc_ownership Latest(biz_desc) As biz_desc Latest(biz_owner) As biz_owner Latest(biz_email) As biz_email Values(biz_division) As biz_division by _time, biz_unit, biz_dep, index_name, st | eventstats Sum(ingestion_idx_st_GB) As ingestion_idx_GB by _time, index_name | eventstats Sum(ingestion_idx_st_GB) As ingestion_bunit_dep_GB by _time, biz_unit, biz_dep, index_name | eventstats Sum(ingestion_idx_st_GB) As ingestion_bunit_GB by _time, biz_unit, index_name | eval ingestion_idx_st_TB = ingestion_idx_st_GB / 1024 , ingestion_idx_st_PB = ingestion_idx_st_TB / 1024 ,ingestion_idx_TB = ingestion_idx_GB / 1024 , ingestion_idx_PB = ingestion_idx_TB / 1024 , ingestion_bunit_dep_TB = ingestion_bunit_dep_GB / 1024 , ingestion_bunit_dep_PB = ingestion_bunit_dep_TB / 1024, ingestion_bunit_TB = ingestion_idx_GB / 1024 , ingestion_bunit_PB = ingestion_bunit_TB / 1024 | eval ingestion_bunit_dep_cost = ingestion_bunit_dep_GB * ingest_unit_cost, ingestion_bunit_cost = ingestion_bunit_GB * ingest_unit_cost, ingestion_idx_st_cost = ingestion_idx_st_GB * ingest_unit_cost | eval ingest_entitlement_TB = ingest_entitlement_GB / 1024, ingest_entitlement_PB = ingest_entitlement_TB / 1024 | eval Time_Period = strftime(_time, "%a %b %d %Y") | search biz_unit IN ("*") biz_dep IN ("*") shared_idx=* _time IN (*) biz_owner IN ("*") biz_desc IN ("*") biz_unit IN ("*") | table biz_unit biz_dep Time_Period index_name st perc_ownership ingestion_idx_GB ingestion_idx_st_GB ingestion_bunit_dep_GB ingestion_bunit_GB ingestion_bunit_dep_cost ingestion_bunit_cost biz_desc biz_owner biz_email | sort 0 - ingestion_idx_GB | rename st As Sourcetype ingestion_bunit_dep_cost as "Cost B-Unit/Dep", ingestion_bunit_cost As "Cost B-Unit", biz_unit As B-Unit, biz_dep As Department, index_name As Index, perc_ownership As "% Ownership", ingestion_idx_st_GB AS "Ingestion Sourcetype GB", ingestion_idx_GB As "Ingestion Index GB", ingestion_bunit_dep_GB As "Ingestion B-Unit/Dep GB",ingestion_bunit_GB As "Ingestion B-Unit GB", biz_desc As "Business Description", biz_owner As "Business Owner", biz_email As "Business Email" | fieldformat Cost B-Unit/Dep = printf("%'.2f USD",'Cost B-Unit/Dep') | fieldformat Cost B-Unit = printf("%'.2f USD",'Cost B-Unit') | search Index = testing | dedup Time_Period | table B-Unit Time_Period "Ingestion B-Unit GB"

The image above shows what I'm trying to extract. The query has binned _time twice:

| fields _time idx st ingestion_gb indexer_count License | rename idx As index_name | `chargeback_normalize_storage_info` | bin _time span=1h | stats Latest(ingestion_gb) As ingestion_gb_idx_st Latest(License) As License By _time, index_name, st | bin _time span=1d | stats Sum(ingestion_gb_idx_st) As ingestion_idx_st_GB Latest(License) As License By _time, index_name, st

I've asked our GPT-equivalent bot how to do this properly, and it mentioned that when I'm doing the stats by _time and index, the time value gets overwritten. It also kept recommending that I change and eval the time down near the bottom of the query, something like:

| stats sum(Ingestion_Index_GB) as Ingestion_Index_GB sum("Ingestion B-Unit GB") as "Ingestion B-Unit GB" sum("Cost B-Unit") as "Cost B-Unit" earliest(_time) as early_time latest(_time) as late_time by B-Unit | eval Date_Range = strftime(early_time, "%Y-%m-%d %H:%M:%S") . " - " . strftime(late_time, "%Y-%m-%d %H:%M:%S") | table Date_Range B-Unit Ingestion_Index_GB "Ingestion B-Unit GB" "Cost B-Unit"

In other instances it said that the value wasn't in string format, so I couldn't use strftime. Overall, I'm now confused as to what is happening to the _time value. All I want is to get the earliest and latest values by index and set that as Date_Range. Can someone help me with this and possibly explain what is happening to the _time value as it keeps getting manipulated and grouped by? This is the search query found in the Chargeback app under the Storage tab; it's the "Daily Ingestion By Index, B-Unit & Department" search query. If anyone has any ideas, any help would be much appreciated.
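A hedged sketch of one way to get the Date_Range, under the assumption that it is inserted into the query above just before its final | table command, while _time (the daily bin from the second bin/stats pair) is still present as an epoch number; field names are taken from the query above. strftime expects an epoch value, so it has to run before any table or rename step drops _time or turns the value into a formatted string.

| eventstats min(_time) as early_time max(_time) as late_time by index_name
| eval Date_Range = strftime(early_time, "%Y-%m-%d") . " - " . strftime(late_time, "%Y-%m-%d")
| table biz_unit Date_Range index_name ingestion_idx_GB ingestion_bunit_GB ingestion_bunit_cost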
Hello,

| dbxquery connection=test query="select employee_data from company"

The following employee_data is not proper JSON, so I can't use spath. How do I replace the single quotes (') with double quotes ("), replace None with "None", and put the result in a new field? Thank you for your help.

employee_data
[{company':'company A','name': 'employee A1','position': None}, {company': 'company A','name': 'employee A2','position': None}]
[{company':'company B','name': 'employee B1','position': None}, {company': 'company B','name': 'employee B2','position': None}]
[{company':'company C','name': 'employee C1','position': None}, {company': 'company C','name': 'employee C2','position': None}]
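A hedged SPL sketch of the substitutions described above; it assumes the only fixes needed are the quote style and the bare None tokens, and the rows would still need otherwise balanced quoting for spath to parse them.

| dbxquery connection=test query="select employee_data from company"
| eval employee_data_json = replace(employee_data, "'", "\"")
| eval employee_data_json = replace(employee_data_json, "None", "\"None\"")
| spath input=employee_data_json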
Intelligent monitoring in a rapidly changing digital landscape

Video Length: 4 min 17 seconds

CONTENTS | Introduction | Video | Transcript | Resources

In line with enterprise applications transitioning to the cloud, the need for simplified and AI-driven observability solutions is growing. AppDynamics’ cloud native response is an innovative platform designed to effortlessly onboard customer cloud environments, automate monitoring of ephemeral environments, and streamline MELT (Metrics, Events, Logs, and Traces) data correlation. By leveraging the power of data science—including machine learning and AI—it provides comprehensive solutions to challenges arising from applications, Kubernetes, infrastructure, or other cloud-native aspects in a multi-cloud world. This demonstration video explores these capabilities, demonstrating AppDynamics’ effectiveness through an application deployed to an AWS Kubernetes cluster. See Helm charts used to simplify the installation of monitoring components in a Kubernetes environment and enable app deployment auto-instrumentation using OpenTelemetry.

Video Transcript

00:00:09:05 - 00:00:37:18
As applications move to the cloud with the need to simplify observability, we are meeting those needs with AppDynamics. Here, we will show an example of a simple application deployed to an AWS Kubernetes cluster. It contains one pod and two virtual machines to support the cluster. We'll show how easy it is to onboard on to AppDynamics, requiring minimal configuration as is needed in large-scale environments and the world of microservices, as well as show how we are able to monitor every aspect of the application as it scales to meet user demand.

00:00:37:18 - 00:01:01:10
First, from the UI, we easily onboard a cloud connection to consume cloud components, including infrastructure metrics. Once that configuration is set, we can see how we pull in the infrastructure, including hosts, load balancers and storage, into the platform automatically. Reviewing the hosts, we see two hosts are being monitored through CloudWatch, including host metrics going back to investigate how many hosts are currently allocated to the Kubernetes cluster.

00:01:01:17 - 00:01:21:14
We see those (same) hosts, which we see in the AppDynamics UI, are the same hosts allocated to this cluster. The cloud connection is a one-time setup. Any new hardware allocated in the cloud will be monitored automatically by AppDynamics, as we'll see later in the video. After we set up our cloud connection, we will use Helm charts to simplify the installation of monitoring components in the Kubernetes environment.

00:01:21:16 - 00:01:41:13
This includes installing the app, mixed cluster operator and agent, the infrastructure and logs agents to monitor and collect logs. Additionally, we will enable the ability to auto-instrument applications using Open Telemetry, which provides a framework for capturing application telemetry data. To recap to this point, we have rapidly established a cloud connection to either a public or private cloud. Additionally, with just a few steps, we have started gathering telemetry and logs related to the Kubernetes environment.

00:01:41:16 - 00:02:04:07
Next, we will auto-instrument the Pet Clinic application deployment using Open Telemetry. After playing some load to the application, the Pet Clinic service appears in AppDynamics, as shown in this flow map.
We can also see the business transaction generated by this Pet Clinic service, which is coming from the Open Telemetry instrumentation.

00:02:04:10 - 00:02:21:17
Here, we see three business transactions are being generated from this service. We also see the service instance of which there is only one. We see the Kubernetes cluster, the services running under, which namespace, which workloads, and how many pods of which we know there is only one deployed, and which hosts the pods are running under, of which there is only one pod.

00:02:21:23 - 00:02:39:10
So, there is only one host out of the two that are running this pod. Remember, we didn't have to do any fancy configuration to enable all this. We set up a cloud connection, installed a Helm chart to enable Kubernetes monitoring, and we auto instrumented our application. We didn't have to set up an application tier or node names as we do with a commercial SaaS solution.

00:02:39:10 - 00:03:01:05
And everything you see in AppDynamics was automatically ingested into the platform and automatically correlated between all entities, including APM, Kubernetes and the cloud infrastructure. Moving forward, AppDynamics will automatically monitor and show any changes to the infrastructure, such as scaling up or down, issues in Kubernetes or changes to the application, as well as any performance issues. Let’s show this by scaling the application.

00:03:01:07 - 00:03:24:05
First, we see there is only one pod running for Pet Clinic service and two nodes are host to support the cluster. Now we'll scale up the Pet Clinic application service from one pod to 20 pods total. We can see all the parts started with one having an issue. So a total of 21 pods. We also see the Kubernetes cluster automatically scaled up the number of virtual hosts, EC2 instances in this case to handle the increased demand for 20 pods.

00:03:24:06 - 00:03:45:13
We now have 20 pods running, one pod failed, and five hosts for the infrastructure. Coming back to the AppDynamics UI and under our Kubernetes cluster, we see AppDynamics automatically monitored and correlated the five total hosts to this cluster and Pet Clinic service. Additionally, we're also now automatically monitoring and reporting on 21 pods to support the Pet Clinic service.

00:03:45:14 - 00:04:07:09
AppDynamics’ new Cloud Native solution will make it easier to onboard customer cloud environments and automatically monitor their ephemeral environments, and automatically correlate incoming MELT metrics, events, logs, and traces. Once the data is in the platform, we can apply data science, such as machine learning and AI, to help solve problems from the application, Kubernetes, infrastructure or other cloud native aspects that the customer is using.

Additional Resources

Learn more about OpenTelemetry and Kubernetes in the documentation.

Deploying the Cisco AppDynamics distribution of OpenTelemetry Collector, Kubernetes
Cisco AppDynamics support for OpenTelemetry
Hello, I have a question about how to pull custom method data collector values and add them to custom metrics that can be used in dashboard widgets in AppDynamics. I have configured the data collectors to pull the values from a given endpoint and have validated in snapshots that the values are being pulled; however, when I navigate to the Analytics tab and search for the custom method data, it is not present. I have double-checked that transaction analytics is enabled for this application's business transaction in question, and the data collector is shown in the Transaction Analytics - Manual Data Collectors section of Analytics. The only issue is getting these custom method data collectors to populate in the Custom Method Data section of the Analytics search tab so that I can create custom metrics on this data. Any help is much appreciated!
Been receiving this error from my UF. Extremely frustrating, since Splunk doesn't offer any support unless you're paying them. What I've tried so far:

- did the systemd daemon reload
- enabled/disabled boot-start
- reviewed splunkd.log; sometimes it would say splunk.pid doesn't exist

What on earth is going on here? I'm seeing failures on both the Ubuntu and AWS Splunk forwarders. Receiving the following error: "failed to start splunk.service: unit splunk.service not found"

SplunkForwarder.service - Systemd service file for Splunk, generated by 'splunk enable boot-start'
Loaded: error (Reason: Unit SplunkForwarder.service failed to load properly, please adjust/correct and reload service manager: Device or resource busy)
Active: failed (Result: signal) since Wed 2024-01-17 20:04:18 UTC; 13s ago
Duration: 1min 48.199s
Main PID: 14888 (code=killed, signal=KILL)
CPU: 2.337s
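The status output above references a SplunkForwarder.service unit while the start command is looking for splunk.service, so one hedged cleanup sketch is to remove the stale unit and regenerate it. The paths, unit name, and the splunkfwd service account are assumptions to adjust to the actual install.

# sketch only; $SPLUNK_HOME, the unit name, and the service account are assumptions
sudo $SPLUNK_HOME/bin/splunk stop
sudo $SPLUNK_HOME/bin/splunk disable boot-start
sudo rm -f /etc/systemd/system/SplunkForwarder.service   # drop the stale/broken unit file
sudo systemctl daemon-reload
sudo $SPLUNK_HOME/bin/splunk enable boot-start -user splunkfwd -systemd-managed 1
sudo systemctl start SplunkForwarder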
Is there a way to export the Content Management list to Excel? I want to go over it with my team, and it would be faster to have the full list of objects to determine what we want to enable.
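A hedged SPL sketch, assuming the objects in question are the correlation searches that Content Management lists (other content types would need their own REST endpoints); the results can then be exported as CSV from the search UI and opened in Excel.

| rest /servicesNS/-/-/saved/searches splunk_server=local
| search action.correlationsearch.enabled=1
| table title eai:acl.app description disabled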
Hi, I am having issues passing a value into savedsearch. Below is a simplified version of my query:

| inputlookup alert_thresholds.csv
| search Alert="HTTP 500"
| stats values(Critical) as Critical
| appendcols [| savedsearch "Events_list" perc=Critical]

Basically, what I want to do is use the Critical value as the value of perc in the subsearch, but it doesn't seem to work correctly; I get no results. When I replace Critical with 10 in the subsearch, it works just fine.
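Arguments to a savedsearch subsearch are resolved independently of the outer search, so perc=Critical passes the literal word rather than the field's value. One hedged alternative is a sketch using map, which substitutes field values from the preceding results; it assumes the outer search returns a single Critical value and that "Events_list" accepts perc as an argument.

| inputlookup alert_thresholds.csv
| search Alert="HTTP 500"
| stats values(Critical) as Critical
| map maxsearches=1 search="| savedsearch Events_list perc=$Critical$"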
We have log ingestion from an AWS cloud environment into Splunk via the HTTP Event Collector. One of the users is reporting that some of the logs are missing in Splunk. Is there a log file we can use to validate this? And if there was a connectivity drop between the cloud apps and HEC, how can we validate that?
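On the Splunk side, HEC activity and errors are logged internally, so a hedged starting point is the pair of searches below; the index names are the defaults, and the series filter is a placeholder for the HEC sourcetype in question.

index=_internal sourcetype=splunkd component=HttpInputDataHandler log_level=ERROR

To see whether indexed volume for that sourcetype dropped off around the reported gap:

index=_internal source=*metrics.log group=per_sourcetype_thruput series=<your_hec_sourcetype>
| timechart span=5m sum(kb) AS kb_indexed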
As a Splunk app developer, it’s critical that you set up your users for success. This includes marketing your app toward the right audience, making sure it’s easy for new customers to get up and running, and helping existing customers to keep things running smoothly. But how exactly can you do all of this? Through communicating strategic information directly to your users. In this overview, we’ll walk you through some of the different ways you can reach out to your users to help them get the most out of your app.

Reach Your Intended Audience on Splunkbase

Splunkbase is Splunk’s app marketplace. Splunk platform system admins turn to Splunkbase to find apps that solve their problems so that they don’t need to build out every solution themselves. From streamlined data ingestion to compelling visualizations to proactive alerting to integrations with other systems, Splunk apps are hugely valuable to our customers. The following screenshot shows an example of the Splunk DB Connect app listing on Splunkbase.

Publishing your app to Splunkbase is a great way to reach these customers, but how do you make sure the right people are able to discover and download your app? At .conf22, Burch, Splunker and Splunk Trust member, described how to create a good Splunkbase listing in his breakout session, Best Practices and Better Practices for App Development. According to Burch, a good Splunkbase listing has the following characteristics.

- Contains a compelling overview.
- Is concise!
- Includes basic documentation and support information.

Be sure to check out the session recording for the complete list of what makes a good Splunkbase listing. For more information about creating a Splunkbase listing, see Submit content to Splunkbase.

Make “Getting Started” Simple With a Setup Page

Your app might be as simple to install as selecting “Install App” in Splunk Web. But your app might also need to collect some information from its new users, such as licenses, API keys, or indexes to run on before it can work. If your app requires your users to provide configuration information, you can use a setup page to collect these details. A setup page is a view in your app that displays the first time a user launches the app in Splunk Web. This view guides users through the app configuration workflow and prompts them to provide any information required for the app to run. After a user completes setup, they are redirected to the app’s home page and can begin using the app. Here’s an example of a setup page that prompts users to create a password for the app.

For more information about setup pages, see Enable first-run configuration with setup pages in Splunk Cloud Platform or Splunk Enterprise.

Log Errors from Your App

Let’s face it. Sometimes things go wrong, and your app might not end up working the way you were expecting. But by logging useful information from your app straight to your customers’ indexes, you can make it easy for them, and for you, to get back on track. You can create a custom log file in your app to log errors related to modular inputs, custom search commands, external lookups, and other Splunk platform extensions. Custom log files help your users to quickly identify and debug any issues that come up, so that they can get back to using your app to maximize the value of their data. For example, say your app needs an API token to authenticate to an external API, but your customers didn’t configure this token before launching the app.
You can log an error message to surface this issue, as shown in the following code snippet.

level=ERROR, appid=buttercup_games, message="API key not configured.", file=/opt/splunk/etc/apps/buttercup_games/bin/connect_to_api.py

Your users can then search for these errors in Splunk Web. For example, this Search Processing Language (SPL) string returns the error message shown above, as well as any other error messages generated by your app.

index=_internal appid=buttercup_games level=ERROR

To make troubleshooting even easier for your users, you can add scheduled searches to your app to automatically detect and alert users to errors that arise. For more information about logging, see Logging for Splunk extensions in an app or add-on for Splunk Enterprise.

Message Your Users through the Bulletin Board

From time to time, you might need to reach out to users with specific roles to communicate key information about your app. For example, say you need to remind admins that it’s time to renew their licenses, prompt sc_admins to complete installation, or let users know that they must update their apps before updating their Splunk platform installations to avoid breaking changes. You can message app users with specified roles using the bulletin board. The bulletin board lets you use the Splunk platform REST API to create custom messages that display in Splunk Web for users assigned to the roles that you specify. Users can view bulletin board messages through the Messages menu when they log in to Splunk Web, as shown in the following image.

For more information about bulletin board messages, see Message users in apps for Splunk Cloud Platform and Splunk Enterprise.

Get User Metrics from Splunkbase

The best way to make sure you’re seeing eye to eye with your app’s users is to speak with them directly. But how do you even know who, exactly, these users are? When someone downloads your app from Splunkbase, they have the option to share their contact information. You can use this information to provide support and updates, elicit feedback, and conduct usability testing. The following screenshot shows how to download user contact information from Splunkbase.

For more information about gathering user metrics, see User leads.

Build a Community for Your Users

Splunk’s customers are passionate and knowledgeable about the platform and they generally enjoy sharing their expertise with one another. You can build a community for your app’s users to foster a support network that helps them have the best experience possible with your app. One community program that Splunk offers is Splunk Answers, a question and answer forum for users to get help deploying, managing, and using Splunk products. Users can search existing posts for solutions and ask questions of their own if they can't find what they're looking for. People who answer questions in the forum are experienced Splunk customers, partners, or employees who are passionate about helping the community.

Your customers can also use Splunk Answers tags to identify content related to your app. When you upload your app to Splunkbase, Splunkbase generates a Splunk Answers tag for it. As a best practice, make sure to note this tag in your Splunkbase listing, as well as in any other support material you provide, including in-app dashboards and readme files. For example, the “dbconnect” tag, as shown in the following screenshot, helps users with questions about Splunk DB Connect to connect with each other and with the app development team.
To get started building a community for your users, see Splunk Answers.

What’s Next?

Now, it’s time to start implementing these methods of communicating with users in your own Splunk apps! As always, if you have any questions or feedback, feel free to reach out at devinfo@splunk.com. You can also join us in the #appdev channel of the `splunk-usergroups` Slack workspace, which you can join at splk.it/slack. We post there under the @taylor and @tedd handles and would be happy to chat about your thoughts on this blog post.

Many thanks to @smcmaster, @chuggard_splunk, and @dhosaka for their assistance with this article!
The Observability team is planning to release its first Role Based Access Control (RBAC) release soon. This release will deliver additional pre-defined roles out of the box, including Admin, Power, Read-only, and Usage roles. In preparation for that release, we plan to rename the existing “User” role to the “Power” role. There is no change in the capabilities or permissions of the role or in existing role assignments; this is simply a role name change. Customers will start seeing the new Power role instead of the User role on the Users page. Customers should expect the renaming of the User role sometime at the end of January 2024, before the RBAC pre-defined roles are released.

Correction: Customers should expect the renaming of the User role a couple of days after the RBAC pre-defined roles are released in early February 2024.
Events are merging like this: 2022-02-02T15:26:46.593150-05:00 mycompany: syslog initialised2022-02-02T15:26:48.970328-05:00 mycompany: [Portal|SYSTEM|20001|*system] Portal is starting2022-02-02T15:26:50.032387-05:00 mycompany: [Portal|SYSTEM|20002|*system] Portal is up and running2022-02-02T15:26:50.488943-05:00 mycompany: [Portal|CONTENTMANAGER|20942|-] Created fields (category), uid=5fdc6ec-01f0-41d5-8a33-d58b5efre2022-02-02T15:26:50.496126-05:00 mycompany: [Portal|CONTENTMANAGER|20942|-] Created fields (category), uid=6fe48c-20ee-4f7b-bf88-22ed5dfdd2022-02-02T15:26:50.502563-05:00 mycompany: [Portal|CONTENTMANAGER|20942|-] Created fields (category), uid=bcd5c461-9d23-4c79-8509-4af76c03ff5a2022-02-02T15:26:50.505764-05:00 mycompany: [Portal|CONTENTMANAGER|20942|-] Created fields (category), uid=bbb9449e-2893-4d06-bc51-edfdd42022-02-02T15:26:50.512171-05:00 mycompany: [Portal|CONTENTMANAGER|20942|-] Created fields (category), uid=155c7a37-69bc-44d2-98ac-cb75831a7c472022-02-02T15:26:50.517049-05:00 mycompany: [Portal|CONTENTMANAGER|20942|-] Created fields (category), uid=a575dfde3eb-4ca6-be2d-4491a4b59fe02022-02-02T15:33:33.669982-05:00 mycompany: syslog initialised2022-02-02T15:33:40.935228-05:00 mycompany: [Portal|SYSTEM|20001|*system] Portal is starting2022-02-02T15:33:41.990171-05:00 mycompany: [Portal|SYSTEM|20002|*system] Portal is up and running2022-02-02T15:35:34.533063-05:00 mycompany: syslog initialised2022-02-02T15:35:42.168799-05:00 mycompany: [Portal|SYSTEM|20001 I am expecting logs should break on timestamps like this: 2022-02-02T15:26:46.593150-05:00 mycompany: syslog initialised 2022-02-02T15:26:48.970328-05:00 mycompany: [Portal|SYSTEM|20001|*system] Portal is starting 2022-02-02T15:26:50.032387-05:00 mycompany: [Portal|SYSTEM|20002|*system] Portal is up and running
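If the raw feed genuinely arrives with no newline between records, as the merged sample above suggests, event breaking has to key off the timestamp itself rather than a line ending. A hedged props.conf sketch follows; the sourcetype name is an assumption, and Splunk's automatic timestamp recognition usually handles the ISO 8601 format shown above without an explicit TIME_FORMAT.

# props.conf sketch; sourcetype name is hypothetical
[portal:syslog]
SHOULD_LINEMERGE = false
# break before each ISO 8601 timestamp, even when no newline precedes it
LINE_BREAKER = ()(?=\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}\.\d{6}[+-]\d{2}:\d{2})
TIME_PREFIX = ^
MAX_TIMESTAMP_LOOKAHEAD = 40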
Hello to all, really hoping I can make sense while asking this... I'm an entry-level IT Security Specialist, and I have been tasked with re-writing our current query for overnight logins, as our existing query does not put out the correct information we need. Here is the current query:

source=WinEventLog:Security EventCode=4624 OR (EventCode=4776 Keywords="Audit Success")
| eval Account = mvindex(Account_Name, 1)
| eval TimeHour = strftime(_time, "%H")
| eval Source = coalesce(Source_Network_Address, Source_Workstation)
| eval Source = if(Source="127.0.0.1" OR Source="::1" OR Source="-" OR Source="", host, Source)
| where (TimeHour > 20 AND TimeHour < 24) OR (TimeHour > 0 AND TimeHour < 5)
| bin _time span=12h aligntime=@d+20h
| eval NightOf = strftime(_time, "%m/%d/%Y")
| lookup dnslookup clientip as Source OUTPUT clienthost as SourceDevice
| search NOT Account="*$" NOT Account="HealthMail*" NOT Account="System"
| stats count as LoginEvents values(SourceDevice) as SourceDevices by Account NightOf
| sort NightOf Account SourceDevices
| table NightOf Account SourceDevices LoginEvents

I need to somehow add an exclusion to the query for logon type 3 (meaning Splunk should omit those from its search), as well as add our asset to the query, so that Splunk only targets searches from that particular asset. I know nothing about coding or scripts, and my boss just thought it would be super fun if the guy with the least experience tried to figure it all out, since the current query does not give us the data that we need for our audits. In a nutshell, we need Splunk to tell us who was logged in between 8pm and 5am, that it was a logon type 2, and what computer system they were on. If anyone could help out an absolute noob here I would greatly appreciate it!
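A hedged sketch of the two additions described above, applied to the base search while the rest of the query stays as it is. It assumes the Windows add-on extracts a Logon_Type field from EventCode 4624 and that the asset can be pinned down by its host value; both the field name and the host value are assumptions to verify against the actual events. Filtering to logon type 2 (interactive) implicitly drops type 3 (network); note that 4776 events carry no logon type, so keeping that clause would bypass the filter for them.

source=WinEventLog:Security EventCode=4624 Logon_Type=2 host="YOUR-ASSET-NAME"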
Hi All, I have a particular issue: getting data from the KV store works fine, but saving anything using helper.save_check_point fails. I added logs and found that the issue occurs only for the batch_save POST API, which Splunk uses internally, and the error I get is:

File "/opt/splunk/lib/python3.7/http/client.py", line 1373, in getresponse
response.begin()
File "/opt/splunk/lib/python3.7/http/client.py", line 319, in begin
version, status, reason = self._read_status()
File "/opt/splunk/lib/python3.7/http/client.py", line 288, in _read_status
raise RemoteDisconnected("Remote end closed connection without"
http.client.RemoteDisconnected: Remote end closed connection without response
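Since the server side is dropping the connection on the batch_save POST, one hedged first check is whether the KV store itself is healthy and whether its logs show errors at the same time; the sourcetype filter below assumes default internal logging.

$SPLUNK_HOME/bin/splunk show kvstore-status

And in the search UI:

index=_internal sourcetype=mongod ERROR OR WARN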