Hi @_pravin , in this case, I'm sorry, but the only solution is to open a case with Splunk Support. Before opening the case, remember to prepare the diags of the CM, the OK IDX, and one NOT OK IDX. Ciao. Giuseppe
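For reference, a diag is generated on each instance with the splunk diag command (the path below assumes a default Linux install); by default the archive is written into $SPLUNK_HOME as diag-<hostname>-<date>.tar.gz, ready to attach to the case:

/opt/splunk/bin/splunk diag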
Hello @richgalloway, I found out that | tstats ... by source returns fewer results than | tstats ... values(source) in a search where a query is joined with tstats:

| tstats min(_time) as firstTime max(_time) as lastTime values(source) as source WHERE index=* by host,index returns ALL sources, while

| tstats min(_time) as firstTime max(_time) as lastTime WHERE index=* by host,index,source returns only 1 source.
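To make the difference concrete (index filter and field names as in the example above), splitting by source yields one result row per host/index/source combination, while values(source) yields one row per host/index with a multivalue source field:

| tstats min(_time) as firstTime max(_time) as lastTime WHERE index=* by host, index, source

| tstats min(_time) as firstTime max(_time) as lastTime values(source) as source WHERE index=* by host, index

One possible explanation for the gap, assuming the outer search uses the join command on host and index: join keeps only one matching row per join key by default (max=1), so only one of the per-source rows survives, which would look like "only 1 source".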
Agent Saturation: What and Why

In application performance monitoring, saturation is the total load on a system, or how much of a given resource is consumed at a time. If saturation is at 100%, your system is running at 100% capacity, which is generally a bad thing. Agent saturation is a similar concept: it represents the percentage of available system resources currently being monitored by an observability agent. 100% agent saturation means 100% of available resources are instrumented with observability agents, which is a great thing. In observability practice, agent saturation measures how well a system is instrumented and can be expressed as:

( instrumented resources / total resources ) x 100

Because greater visibility into system health and performance means proactive detection of issues, improved user experience, more efficient troubleshooting, decreased downtime, and countless other pluses, 100% agent saturation is the ultimate goal.

So why doesn't everyone get to 100% agent saturation for full system observability and magical unicorn application visibility status? It's challenging! Setting up observability agents across distributed applications and environments takes time. In ephemeral, dynamic systems already integrated into existing solutions, it can just be too much of a lift.

The good news is that if you're already using Splunk (maybe for logging and security), there are quick and easy ways to improve your system observability. In this post, we look at how to leverage the Splunk Add-on for OpenTelemetry Collector to gain a quick win in improving agent saturation.

Improve Agent Saturation with the Splunk Add-on for OpenTelemetry Collector

For Splunk Enterprise or Splunk Cloud Platform customers who ingest logs using universal forwarders, you can quickly improve agent saturation and deploy, update, and configure OpenTelemetry Collector agents the same way you manage any of your other technology add-ons (TAs). The Splunk Add-on for OpenTelemetry Collector leverages your existing Splunk Platform and Splunk Cloud deployment mechanisms (specifically the universal forwarder and the deployment server) to deploy the OpenTelemetry Collector and its capabilities for increased visibility into your system from Splunk Observability Cloud.

The add-on is a version of the Splunk Distribution of the OpenTelemetry Collector that simplifies configuration, management, and data collection for metrics and traces. This means OpenTelemetry instrumentation is available out of the box anywhere the universal forwarder is already present for logging and security use cases, making it easier to instrument systems quickly and gain visibility into telemetry data from within Splunk Observability Cloud. This comprehensive system coverage also comes with out-of-the-box Collector content and configuration, with Splunk-specific metadata and optimizations (like batching, compression, and efficient exporting) all preconfigured. This means you can get answers from observability data faster, saving you time and effort.
Prerequisites for using the Splunk Add-on for OpenTelemetry Collector include:
- Splunk Universal Forwarder (version 8.x or 9.x on Windows or Linux)
- Splunk Observability Cloud
- Splunk Enterprise or Splunk Cloud as the destination your forwarders send to
- (Optional) a deployment server, if you plan to use it to push the Collector to multiple hosts

Getting started with the Splunk Add-on for OpenTelemetry Collector

The Splunk Add-on for OpenTelemetry Collector is available on Splunkbase like other TAs, and you can deploy it alongside universal forwarders using existing Splunk tools such as the deployment server.

We have a Linux EC2 instance we're going to instrument, but we first need to download the Splunk Add-on for OpenTelemetry Collector from Splunkbase. We'll unzip the package, then create a local folder and copy over the config credential files.

In Splunk Observability Cloud, we'll get the access token and the realm for our organization; the realm can be found under your user's organizations. Next, we set these values in our /local/access_token file.

We then need to make sure the Splunk Add-on for OpenTelemetry Collector folder (Splunk_TA_otel) is in the deployment apps folder on the deployment server instance. We'll then move over to the deployment server UI in Splunk Enterprise to create the Splunk_TA_otel server class and add the relevant hosts along with the Splunk_TA_otel app. Once the TA is installed, make sure you check both Enable App and Restart Splunkd and select Save.

That's it! If we now navigate to Splunk Observability Cloud, we'll see telemetry data flowing in from our EC2 instance.

Wrap up

Increasing agent saturation and improving observability for comprehensive system insight can be quick and easy. Not sure how you're currently doing in terms of agent saturation? Check out our Measuring & Improving Observability-as-a-Service blog post to learn how to set KPIs on agent saturation. Ready to improve your agent saturation? Sign up for a Splunk Observability Cloud 14-day free trial, integrate the Splunk Add-on for OpenTelemetry Collector, and start your journey to 100% agent saturation.

Resources
- Splunk Add-on for OpenTelemetry Collector
- Differences between the OpenTelemetry Collector and the Splunk Add-on for OpenTelemetry Collector
- Get started with Splunk Observability Cloud
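For the deployment server step, a minimal serverclass.conf sketch might look like the following (the server class name and whitelist pattern are hypothetical placeholders; the add-on's own documentation is the authority for any TA-specific settings):

[serverClass:otel_hosts]
whitelist.0 = my-ec2-host*

[serverClass:otel_hosts:app:Splunk_TA_otel]
stateOnClient = enabled
restartSplunkd = true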
Hi @gcusello , We have enough space on the servers; it's not an issue with the disk. Thanks, Pravin
Hi @Karthikeya , good for you, see you next time! Ciao and happy splunking. Giuseppe P.S.: Karma Points are appreciated by all the contributors
Hi @_pravin , check if you have enough disk space on all your indexers. If you do, open a case with Splunk Support. Ciao. Giuseppe
You may use /opt/splunk/bin/genRootCA.sh to regenerate ca.pem & cacert.pem
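A hedged usage sketch (the -d argument points the script at the auth directory; back up the existing certificates first, because a new root CA invalidates anything signed by the old one):

/opt/splunk/bin/splunk cmd /opt/splunk/bin/genRootCA.sh -d /opt/splunk/etc/auth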
Hi, I have an indexer cluster with 4 indexers. All indexers run Splunk version 8.1.14, and the servers' OS is RedHat 7.9. The indexer cluster is multisite: we have two sites, and each site has two servers associated with it. In addition, we have a cluster manager, a search head cluster, a development search head, a development indexer, and a deployment server. All instances run Splunk 8.1.14. I am looking at the Splunkd Thread Activity for possible clues to a problem we have when indexing data in production: sometimes some events are missing in production, no matter what sourcetype or ingestion method is used. We suspect it could be a problem with the indexers or with the index where the data is ingested.
Yes it can, and no it won't, because you won't be extracting fields at index time if you don't use INDEXED_EXTRACTIONS = json. Splunk is very good at applying only the config that matters, so when in doubt, send the app to both the indexers and the search heads; Splunk usually just figures it out. The duplicate-extractions issue happens when you do BOTH index-time (INDEXED_EXTRACTIONS = json) AND search-time (KV_MODE = json) extraction in your props.conf config. That's when they may collide, and it's why I almost never enable INDEXED_EXTRACTIONS = json: I would always prefer to review the search-time extractions first and only move the key fields I need to index time for performance reasons.
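A minimal props.conf sketch of that point, with a placeholder sourcetype name: keep JSON extraction at search time only, and avoid the colliding combination noted in the comment.

[my:json:sourcetype]
KV_MODE = json
# do NOT also set INDEXED_EXTRACTIONS = json on the same sourcetype,
# or the index-time and search-time extractions may collide and duplicate fields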
Hi, I am trying to push the configuration bundle from the CM to the indexers. I keep getting the error message "Last Validate and Check Restart: Unsuccessful". The validation is done for one of the indexers, and it's 'checking for restart' for the other two. When I checked the last change date on all the indexers, only one of them had been updated and the other two had not, which is the opposite of what is shown in the CM's UI. Regards, Pravin
To configure NetScaler to pass the source IP, you'll need to enable the Use Source IP (USIP) mode. Here are the steps to do this:
1. Log in to NetScaler: Open your NetScaler management interface.
2. Navigate to Load Balancing: Go to Traffic Management > Load Balancing > Services.
3. Open a Service: Select the service you want to configure.
4. Enable USIP Mode: In the Advanced Settings, find the Service Settings section and select Use Source IP Address.
This will ensure that NetScaler uses the client's IP address for communication with the backend servers. Would you like more detailed instructions or help with another aspect of your setup?
Ok, here my doubt is: can one app which contains props.conf (with KV_MODE = json) be distributed to both indexers and search heads? Could it lead to duplication of fields or events by any chance? I am asking about index-time and search-time extraction. Is it ok?
Are you trying to find errors sending email *from* Splunk, or using Splunk to find any email-sending errors? I'll assume the former for now. Splunk logs the email it sends in python.log. Searching for "sendemail" should find them. The only errors you're likely to find are failures to pass the email to the SMTP server. Any failures beyond that point would be sent as mailer-daemon messages to the sending mailbox. You'll only be able to search for those if you are Splunking the mailbox (not common).
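Assuming default internal logging, a hedged starting point for those python.log entries would be something like:

index=_internal source=*python.log* sendemail

You can add ERROR to the search to narrow it to failures.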
It seems the company firewall blocked outbound traffic to port 8088. Issue explained.
Simplest way to put it: create a single app with all your sourcetype configs in it, then distribute that app using the appropriate mechanism for 1. the indexers (manager node) and 2. the search heads (deployer for a SHC, or DS/directly if standalone).
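As a sketch with a hypothetical app name, the app is just a normal directory that you copy to $SPLUNK_HOME/etc/master-apps/ on the manager node (manager-apps on newer versions) and to $SPLUNK_HOME/etc/shcluster/apps/ on the deployer before pushing:

my_sourcetypes_app/
    default/
        app.conf
        props.conf
        transforms.conf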
I checked splunkd.log but did not find anything listed under connected or 9997. I did a netstat -an and cannot find any connections to 9997. Where else can I check on a Windows system whether logs are forwarding?
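A few hedged checks to run on the Windows forwarder itself (paths assume a default universal forwarder install):

cd "C:\Program Files\SplunkUniversalForwarder\bin"
splunk list forward-server
netstat -an | findstr 9997

splunk list forward-server shows which indexers are configured and which are currently active; you can also search splunkd.log for TcpOutputProc messages, which is where connection attempts to port 9997 are logged.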
Can I put KV_MODE = json in the already existing props.conf on the manager node so it will be pushed to the peer nodes? But you said it should be on the search heads. Should I create a new app on the deployer, place props.conf (with KV_MODE = json) in its local folder, and then deploy it to the search heads? Sorry, I am asking so many questions; I'm literally confused here...
KV_MODE = json would be in the sourcetype stanza of props.conf on the search heads. INGEST_EVAL will be in props/transforms on the indexers. Technically you can just put all the configs everywhere and Splunk will sort it out.
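A minimal hedged sketch of that split, with placeholder sourcetype, transform, and field names:

Search heads, props.conf:
[my:json:sourcetype]
KV_MODE = json

Indexers, props.conf:
[my:json:sourcetype]
TRANSFORMS-myingest = my_ingest_eval

Indexers, transforms.conf:
[my_ingest_eval]
INGEST_EVAL = my_indexed_field := lower(host)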
Hi, after your basic search you can create a table. Then you can use replace, like | replace blub with 1blub ... Then you create a chart and do a rename afterwards.
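A hedged end-to-end sketch with placeholder index, field, and value names:

index=main sourcetype=my_data
| table host status
| replace "blub" WITH "1blub" IN status
| chart count BY status
| rename status AS "Status"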