The documentation seems to suggest that version 8.0.1 of "Splunk Enterprise Security" is available for download from Splunkbase; however, the latest version available there appears to be 7.3.2. Am I missing something? https://docs.splunk.com/Documentation/ES/8.0.1/Install/UpgradetoNewVersion https://splunkbase.splunk.com/app/263/
Ever since upgrading Windows clients to 9.0 and above, we've had access issues. We've resolved some of that by adding the "SplunkForwarder" user (which gets provisioned at install time) to the Event Log Readers group. Unfortunately, that hasn't resolved all access issues; IIS logs, for instance. When I deploy a scripted input to a test client to provide a directory listing of C:\Windows\System32\Logfiles\HTTPERR, the internal index gets a variety of errors, one of which is included below (yes, the directory exists):

Get-ChildItem : Access to the path 'C:\Windows\System32\Logfiles\HTTPERR' is denied

So, other than having our IT staff reinstall the UF everywhere to run as a System-privileged user, as it has run in every version I've ever worked with, how are we to know which group the SplunkForwarder user needs to be added to in order to read data that is not under the purview of "Event Log Readers"?
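For reference, the scripted input described above might look something like this on the test client; the stanza name, schedule, index, and sourcetype are assumptions, and the script runs under the same SplunkForwarder account, which is why it reproduces the access error:

[powershell://HTTPERR_DirListing]
# Hypothetical stanza; lists the HTTPERR directory every 5 minutes under the forwarder's service account
script = Get-ChildItem -Path "C:\Windows\System32\Logfiles\HTTPERR" | Select-Object Name, Length, LastWriteTime
schedule = */5 * * * *
index = main
sourcetype = windows:httperr:dirlist
disabled = 0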
Hello. I am trying to get SAML authentication working on Splunk Enterprise using our local IdP, which is SAML 2.0 compliant. I can successfully authenticate against the IdP, which returns the assertion, but Splunk won't let me in. I get this error: "Saml response does not contain group information." I know Splunk looks for a 'role' variable, but our assertion does not return that. Instead, it returns "memberOf", and I added that to authentication.conf:

[authenticationResponseAttrMap_SAML]
role = memberOf

I also map the role under roleMap_SAML. It seems like no matter what I do, no matter what I put, I get the "Saml response does not contain group information." response. I have a ticket open with tech support, but at the moment, they're not sure what the issue is. Here's a snippet (masked) of the assertion response:

<saml2:Attribute FriendlyName="memberOf" Name="urn:oid:1.2.xxx.xxxxxx.1.2.102" NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:uri">
  <saml2:AttributeValue xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="xsd:string">
    xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:some-group
  </saml2:AttributeValue>
</saml2:Attribute>

Feeling out of options, I asked ChatGPT (I know, I know), and it said that the namespace our assertion is using may be the issue. It said that Splunk uses the "saml" namespace, but our IdP is returning "saml2". I don't know if that's the actual issue nor, if it is, what to do about it. splunkd.log shows the error message that I'm seeing in the web interface:

12-12-2024 15:14:24.611 -0500 ERROR Saml [847764 webui] - No value found in SamlResponse for match key=saml:AttributeStatement/saml:Attribute attrName=memberOf err=No nodes found for xpath=saml:AttributeStatement/saml:Attribute

I've looked at the Splunk SAML docs, but don't see anything about namespacing, so maybe ChatGPT just made that up. What exactly is Splunk looking for that I'm not providing? If anyone has any suggestions or insight, please let me know. Thank you!
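For context, a consolidated sketch of the kind of authentication.conf the post describes, including an alternative mapping keyed on the Attribute Name from the assertion snippet (an assumption to try, not a confirmed fix; the role and group values are placeholders):

[authenticationResponseAttrMap_SAML]
# Map Splunk's expected "role" to the attribute the IdP sends.
# If matching on the FriendlyName does not work, it may be worth trying the
# Name value from the assertion (the OID URN) instead; this is an assumption.
role = memberOf
# role = urn:oid:1.2.xxx.xxxxxx.1.2.102

[roleMap_SAML]
# Map a Splunk role to the group value returned in the AttributeValue element.
user = xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:some-group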
I am creating a dashboard with Splunk to monitor offline assets in my environment with SolarWinds. I have the add-on and incorporate solarwinds:nodes and solarwinds:alerts into my query. I am running into an issue where I can't get the correct output for how long an asset has been down. In SolarWinds you can see Trigger time in the Alert Status Overview. This shows the exact date and time the node went down. I cannot find a field in the raw data of either sourcetype that will give me that output. I want to use eval to show how much time has passed since the trigger. Does anyone know how to achieve this?
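If a trigger-time field does turn up in one of those sourcetypes, the eval portion might look something like this sketch; TriggerTime and NodeName are hypothetical field names, and the time format string has to match the actual data:

index=solarwinds sourcetype=solarwinds:alerts
| eval trigger_epoch = strptime(TriggerTime, "%Y-%m-%d %H:%M:%S")
| eval downtime = tostring(now() - trigger_epoch, "duration")
| table NodeName, TriggerTime, downtime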
What is SignalFlow?

Splunk Observability Cloud’s analytics engine, SignalFlow, opens up a world of in-depth analysis on your incoming telemetry data. SignalFlow is the computational backbone that powers all charts and detectors in Splunk Observability Cloud, but the statistical computation engine also allows you to write custom programs in the SignalFlow programming language (modeled after and similar to Python). Writing SignalFlow programs definitely isn’t required for a complete observability practice, but if you want to perform advanced computations on your Splunk Observability Cloud data, SignalFlow is your friend. In this post, we’ll explore the whys and hows of SignalFlow so you can run these computations, aggregations, and transformations on your data and stream the results to detectors and charts for custom and in-depth observability analytics.

Why use SignalFlow?

Most Splunk Observability Cloud use cases don’t require complex computations. Out-of-the-box charts and detectors make building a complete observability solution easy. But sometimes there is a need for more detailed and advanced insight. SignalFlow can be used to:

- Define custom behavior or conditions for fine-tuned control over your monitoring so you can tailor your charts and detectors to your specific needs
- Aggregate metrics from different applications, cloud providers, or environments to unify data
- Troubleshoot by correlating metrics across many different sources – using SignalFlow during an incident helps provide deep, real-time investigation into root cause
- Detect trends over time or compare historical data to increase resiliency, reduce downtime, and capacity plan – i.e. correlate resources with user activity over time to optimize resource allocation
- Stream metric data to background analytics jobs – i.e. execute computations across a population over time
- Create reports or visualizations in third-party UIs so you can stream data out of Splunk Observability Cloud – i.e. in a service provider use case you can stream data to your own UIs and expose it to your customers
- Correlate business metrics with infrastructure and/or application metrics to understand how performance impacts customers – i.e. how does latency impact customer renewal rates

There are many reasons why you might need to tap into SignalFlow; generally, if you need customized analytics, SignalFlow is the answer. So let’s see how it works!

How to use SignalFlow

You can define SignalFlow queries directly in the Splunk Observability Cloud UI or programmatically using the SignalFlow API. If you open up or create a chart in the UI, you’ll see the chart builder view. If you select View SignalFlow, you can dive right into the SignalFlow that powers the chart and use it as a template for additional programs. The same is true for detectors. If you open up a detector, you can select the kebab icon to Show SignalFlow.

SignalFlow programs outside of the Splunk Observability Cloud UI typically live within code configurations for detectors and dashboards (see our Observability as Code post). When you create a chart or detector using the API, you can specify a SignalFlow program as part of the request; when defining a detector with Terraform, for example, the program_text argument holds the SignalFlow program. You can also use the SignalFlow API to run programs in the background and receive the results asynchronously in your client.

Let’s take a look at some SignalFlow functions and methods we can use to build out charts and detectors.
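Before digging into the chart and detector specifics below, here is a minimal end-to-end sketch of a SignalFlow program; the metric name, dimension, and threshold are illustrative assumptions rather than values from a real environment:

# Stream CPU utilization for one host and plot its mean on a chart
signal = data('cpu.utilization', filter=filter('host', 'my-host')).mean(by=['host'])
signal.publish(label='Mean CPU')

# Turn the same stream into an alert condition: above 90 for 1 minute
detect(when(signal > 90, lasting='1m')).publish(label='CPU high')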
SignalFlow Functions and Methods in Charts

Most SignalFlow programs begin with a data() block. The data function is used to query data and is the main way to create stream objects, which are similar to a time-parameterized NumPy array or pandas DataFrame. Queries can run against both real-time incoming streams and historical data from the systems you monitor. In SignalFlow, you specify streams as queries that return data.

We can expand on the data function in many ways. For example, we can query for the CPU utilization metric and filter by host. We can also add or chain methods and functions onto our data block, such as using the mean method to look at mean CPU utilization, mean CPU utilization by Kubernetes cluster, or mean CPU utilization over the last hour. Operations like mean, variance, percentile, exclude, ewma, timeshift, rate of change, standard deviation, map(lambda), and others are available as methods on numerical streams. For example, if our data stream, signal, is the CPU utilization with a filter of host, we can add functions to timeshift by a week and two weeks, and then find the max value. Comparing the max CPU utilization for two separate time series can’t actually be accomplished using the chart plot editor in the Splunk Observability Cloud UI, so this is an instance where using SignalFlow is necessary.

To actually output these stream results to a chart, we need to call the publish() method. We’ve now built out a chart using SignalFlow! We can also do this with detectors – read on.

SignalFlow Functions and Methods in Detectors

Detectors evaluate conditions involving one or more streams, and typically compare streams over periods of time – i.e. disk utilization is greater than 80% for 90% of the last 10 minutes. When building detectors using SignalFlow, we still start with our data streams, and then transform our data streams using boolean logic. Note: when setting static thresholds in the UI, thresholds can only be greater than or less than, but with SignalFlow we can specify greater-than-or-equal-to static thresholds. We can use these when statements on their own or combined with and, or, not statements to publish our alert conditions and build out our detectors.

Detect streams are similar to data streams. Detect streams turn our boolean statement – when our signal is greater than 90 for 1 minute – into an event stream. When this statement is evaluated as true, an event will fire and be published to an event stream. This is what triggers an alert. Note: event streams are evaluated and published in real time as metrics are ingested, enabling you to find problems faster and speed up your MTTD.

Every publish method call in a SignalFlow detect statement corresponds to a rule on the Alert Rules tab in the Splunk Observability Cloud UI. The label inside the publish block is displayed next to the number of active alerts in the Alert Rules tab. You can create your detectors using the SignalFlow API, but if you want to use SignalFlow to build detectors directly in the Splunk Observability Cloud detector UI, you can append /#/detector/v2/new to your organization URL to do so.

Wrap up

While working with SignalFlow is not required, it can help customize and advance your observability practice.
A great place to start is editing the SignalFlow for existing charts and detectors in the Splunk Observability Cloud UI or using observability as code with SignalFlow programs. In no time, you’ll be building out SignalFlow program background jobs and streaming customized analytics to meet all your observability and business needs.

New to Splunk Observability Cloud? Try it free for 14 days!

Resources
- SignalFlow and analytics
- Analyze data using SignalFlow
- Up Close Monitoring with SignalFlow
- Intermediate to advanced SignalFlow
Hi, I can see the below error in the internal logs for a host that is not bringing any logs into Splunk:

error SSLOptions [17960 TcListener] - inputs.conf/[SSL]: could not read properties

We don’t have SSL options in inputs.conf. Just wondered if there are any other locations to check on the universal forwarder, as it works fine for other servers.
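One way to see every file that contributes SSL-related settings on that forwarder is btool; a sketch, assuming a Linux UF (on Windows, run splunk.exe from the bin directory):

$SPLUNK_HOME/bin/splunk btool inputs list --debug | grep -i ssl
$SPLUNK_HOME/bin/splunk btool server list sslConfig --debug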
Hi there! I want to create a scorecard by Manager and Region counting my Orders over Month. So the chart would look something like:  I have all the fields: Region, Director, Month and Order_Number to make a count. Please let me know if you have an efficient way to do this in SPL. Thank you very much!    
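A minimal SPL sketch, assuming Order_Number uniquely identifies an order (use count instead of dc if it does not) and that the base search is a placeholder:

index=orders sourcetype=orders
| stats dc(Order_Number) AS Orders BY Region, Director, Month

If a Month-per-column layout is needed, | xyseries Director Month Orders can pivot the result afterwards.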
Hi, I’m quite new to Splunk when it comes to sending data to Splunk, though I do have experience with making dashboards etc. I’ve got a problem receiving data from a Windows PC. I’ve installed the universal forwarder on there, and I’ve got another Windows PC that acts as my enterprise environment. I do know that the forwarder is active and can see a connection. I want to send wineventlog data to Splunk. I’ve made an inputs.conf and outputs.conf containing the information for what I want to forward, but when I look it up in search I have 0 events. I’m sure I’m doing some things wrong haha. I would like some help with it. Thanks!
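For comparison, a minimal pair of configs on the forwarder might look like this; the event log channels, index, and indexer address are placeholders, and the indexer side also needs to be listening (for example via a [splunktcp://9997] stanza or Settings > Forwarding and receiving):

inputs.conf:
[WinEventLog://Security]
disabled = 0
index = main

[WinEventLog://System]
disabled = 0
index = main

outputs.conf:
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = 192.0.2.10:9997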
Hello all, I see that SOAR sends an email every time a container re-assignment takes place. I wish to stop SOAR from sending that email, but under Administration -> Email Settings I only manage to change the template of the email. Is there a way to stop it? Thank you in advance.
Hello everyone, I need your support to parse the sample JSON below. What I want is:

1. Only the fields from "activity_type" through "user_email"
2. Remove the first lines before "activity_type" and the last lines after "user_email"
3. Lines should break at "activity_type"
4. TIME_PREFIX = event_time

I added the below, but it doesn't work for removing the lines or for TIME_PREFIX:

[ sample_json ]
BREAK_ONLY_BEFORE=\"activity_type":\s.+,
CHARSET=UTF-8
SHOULD_LINEMERGE=true
disabled=false
LINE_BREAKER=([\r\n]+)
TIME_PREFIX=event_time
SEDCMD-remove=s/^\{/g

Sample data:

{ "status": 0, "message": "Request completed successfully", "data": [
{ "activity_type": "login", "associated_items": null, "changed_values": null, "event_time": 1733907370512, "id": "XcDutJMBNXQ_Xwfn2wgV", "ip_address": "x.x.x.x", "is_impersonated_user": false, "item": { "user_email": "xyz@example.com" }, "message": "User xyz@example.com logged in", "object_id": 0, "object_name": "", "object_type": "session", "source": "", "user_agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/18.1.1 Safari/605.1.15", "user_email": "xyz@example.com" },
{ "activity_type": "export", "associated_items": null, "changed_values": null, "event_time": 1732634960475, "id": "bd0XaZMBH5U9RA7biWrq", "ip_address": "", "is_impersonated_user": false, "item": null, "message": "Incident Detail Report generated successfully", "object_id": 0, "object_name": "", "object_type": "breach incident", "source": "", "user_agent": "", "user_email": "" },
{ "activity_type": "logout", "associated_items": null, "changed_values": null, "event_time": 1732625563087, "id": "jaGHaJMB-qVJqBPy_3IG", "ip_address": "87.200.106.98", "is_impersonated_user": false, "item": { "user_email": "xyz@example.com" }, "message": "User xyz@example.com logged out", "object_id": 0, "object_name": "", "object_type": "session", "source": "", "user_agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/131.0.0.0 Safari/537.36", "user_email": "xyz@example.com" }
], "count": 33830, "meta_info": { "total_rows": 33830, "row_count": 200, "pagination": { "pagination_id": "" } } }
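For what it's worth, here is an untested props.conf sketch along the lines of those requirements; the regular expressions (especially the SEDCMD patterns) are assumptions that will almost certainly need tuning against the real feed:

[sample_json]
SHOULD_LINEMERGE = false
# Break between records: the captured comma between "}" and the next "activity_type" record marks the boundary
LINE_BREAKER = \}(,)\s*\{\s*"activity_type"
# event_time is epoch milliseconds
TIME_PREFIX = "event_time":\s*
TIME_FORMAT = %s%3N
MAX_TIMESTAMP_LOOKAHEAD = 20
# Strip the wrapper before the first record and after the last record
SEDCMD-strip_header = s/^\{[\s\S]*?"data":\s*\[\s*//g
SEDCMD-strip_footer = s/\]\s*,\s*"count"[\s\S]*$//g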
Hello, I have an all-in-one Splunk instance with data already indexed. Now I want to add a new indexer (not clustered, clean installation). I would like to move part of the indexed data to the new indexer (to have about the same amount of data on both instances). My idea of the process is:

1. Stop the all-in-one instance
2. Create new index(es) on the new indexer
3. Stop the new indexer
4. Copy (what is best - rsync?) part of the buckets in the given index(es) from the all-in-one instance to the new indexer
5. Start the new indexer and the all-in-one instance
6. Configure outputs.conf on forwarders - add the new indexer
7. Add the new indexer as a search peer to the all-in-one instance

Would it work, or have I missed something? Thank you for your help. Best regards, Lukas Mecir
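For illustration, step 6 on each forwarder might look like the sketch below; hostnames and port are placeholders. Step 7 can be done in the UI under Settings > Distributed search > Search peers on the all-in-one instance.

[tcpout]
defaultGroup = all_indexers

[tcpout:all_indexers]
server = old-indexer.example.com:9997, new-indexer.example.com:9997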
Hi, I want to extract from this date 12/11/2024; the result should be 12/2024.
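A minimal SPL sketch, assuming the value lives in a field called date_field (a placeholder name) in MM/DD/YYYY format:

| eval month_year = strftime(strptime(date_field, "%m/%d/%Y"), "%m/%Y")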
Hi, I have a dashboard in which I am using JavaScript. Whenever I make changes to the script and restart Splunk, I am not able to see those changes. The only way I am able to see them is by clearing my browser cache.
Hi Splunkers, per this documentation - https://docs.splunk.com/Documentation/Splunk/latest/DashStudio/tokens - setting a default value is done by navigating to the Interactions section of the Configuration panel. This is simple with the given example, with the token set as $method$:

"tokens": {
    "default": {
        "method": {
            "value": "GET"
        }
    }
}

Would anyone be able to advise how I can set default tokens of a dashboard (created using Dashboard Studio) if the value of the panel is pointing to a data source whose query has a dependency on another data source's results?

Panel A:
Data Source: 'Alpha status'
'Alpha status' query: | eval status=$Beta status:result._statusNumber$

e.g. I need to set a default token value for $Beta status:result._statusNumber$

Thanks in advance for the response.
Hi All, I have a few columns which are in the format "21 (31%)"; these are the value and the percentage of the value. I want to use MinMidMax for the coloring based on the percentage, but I am not able to use it directly since it is a customized value. Does anyone know a solution for coloring such columns?
Hi smart folks. I have the output of a REST API call as seen below. I need to split each of the records, as delimited by the {}, into its own event with each of the key:values defined for each record.

[
{ "name": "ESSENTIAL", "status": "ENABLED", "compliance": "COMPLIANT", "consumptionCounter": 17, "daysOutOfCompliance": "-", "lastAuthorization": "Dec 11,2024 07:32:21 AM" },
{ "name": "ADVANTAGE", "status": "ENABLED", "compliance": "EVALUATION", "consumptionCounter": 0, "daysOutOfCompliance": "-", "lastAuthorization": "Jul 09,2024 22:49:25 PM" },
{ "name": "PREMIER", "status": "ENABLED", "compliance": "EVALUATION", "consumptionCounter": 0, "daysOutOfCompliance": "-", "lastAuthorization": "Aug 10,2024 21:10:44 PM" },
{ "name": "DEVICEADMIN", "status": "ENABLED", "compliance": "COMPLIANT", "consumptionCounter": 2, "daysOutOfCompliance": "-", "lastAuthorization": "Dec 11,2024 07:32:21 AM" },
{ "name": "VM", "status": "ENABLED", "compliance": "COMPLIANT", "consumptionCounter": 2, "daysOutOfCompliance": "-", "lastAuthorization": "Dec 11,2024 07:32:21 AM" }
]

Thanks in advance for any help you all might offer to get me down the right track.
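One common way to handle this at search time, assuming the whole JSON array arrives as a single event, is to expand the array with spath and mvexpand; a sketch:

| spath path={} output=record
| mvexpand record
| spath input=record
| table name, status, compliance, consumptionCounter, daysOutOfCompliance, lastAuthorization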
Is there any way to search for similar strings dynamically in different logs? I want to group unique error strings coming from different logs. The events are from different applications with different logging formats. I am creating a report that shows the count of events for each unique error string.

Sample events:

error events found for key a1
Invalid requestTimestamp abc
error event found for key a2
Invalid requestTimestamp def
correlationID - 1234 Exception while calling some API ...java.util.concurrent.TimeoutException
correlationID - 2345 Exception while calling some API ...java.util.concurrent.TimeoutException

Required results: I am looking for the following stats from the above error log statements:

1) Invalid requestTimestamp - 2
2) error events found for key - 2
3) Exception while calling some API ...java.util.concurrent.TimeoutException - 2
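One hedged option is the cluster command, which groups events by textual similarity; the base search and the threshold t are placeholders that need tuning against the real data:

index=app_logs ("error" OR "Exception")
| cluster showcount=true t=0.6
| table cluster_count, _raw
| sort - cluster_count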
I just installed Splunk Enterprise on Windows Server 2022 and I am able to access the web GUI. At this point, do I need to make any changes to server.conf or inputs.conf? Also, below are the steps I am thinking of before I install the UF on clients:

1. Configure LDAP and other parameters
2. Create users (admin and other users)
3. Identify the data ingestion disk partition
4. Enable data receiving
5. Create indexes

Am I missing anything before I install the UF and start sending data to the indexer? I have checked the documentation site but haven't found anything specific about the initial configuration; maybe I am not looking in the right place. Thanks for your help in advance.
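For reference, steps 4 and 5 usually come down to a couple of small stanzas on the indexer (or the equivalent screens under Settings); the port and index name below are placeholders:

inputs.conf (enable receiving, same as Settings > Forwarding and receiving > Configure receiving):
[splunktcp://9997]
disabled = 0

indexes.conf (one example index with explicit paths under $SPLUNK_DB):
[windows_events]
homePath   = $SPLUNK_DB/windows_events/db
coldPath   = $SPLUNK_DB/windows_events/colddb
thawedPath = $SPLUNK_DB/windows_events/thawedpath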
Hi everyone, I performed all the steps to instrument a PHP application into Splunk O11y SaaS and there is no data (spans). The steps I followed are below.

1. Installed the linux packages.
Hi everyone, I performed all the steps to instrument a php application into Splunk O11y Saas and there is not data(spans). Following the steps done below: 1. Installed the linux packages.   luizpolli@PCWIN11-LPOLLI:~$ dpkg -l|egrep make ii automake 1:1.16.5-1.3 all Tool for generating GNU Standards-compliant Makefiles ii make 4.3-4.1build1 amd64 utility for directing compilation ii xxd 2:8.2.3995-1ubuntu2.21 amd64 tool to make (or reverse) a hex dump luizpolli@PCWIN11-LPOLLI:~$ dpkg -l|egrep autoconf ii autoconf 2.71-2 all automatic configure script builder luizpolli@PCWIN11-LPOLLI:~$ dpkg -l|grep gcc ii gcc 4:11.2.0-1ubuntu1 amd64 GNU C compiler ii gcc-11 11.4.0-1ubuntu1~22.04 amd64 GNU C compiler ii gcc-11-base:amd64 11.4.0-1ubuntu1~22.04 amd64 GCC, the GNU Compiler Collection (base package) ii gcc-12-base:amd64 12.3.0-1ubuntu1~22.04 amd64 GCC, the GNU Compiler Collection (base package) ii libgcc-11-dev:amd64 11.4.0-1ubuntu1~22.04 amd64 GCC support library (development files) ii libgcc-s1:amd64 12.3.0-1ubuntu1~22.04 amd64 GCC support library luizpolli@PCWIN11-LPOLLI:~$       2. Installed php extension using pecl and added the opentelemetry.so inside php.ini file.   3. Installed some extensions using composer.     php composer.phar install open-telemetry/exporter-otlp:^1.0.3 php composer.phar install php-http/guzzle7-adapter:^1.0 luizpolli@PCWIN11-LPOLLI:~$ composer show brick/math 0.12.1 Arbitrary-precision arithmetic library composer/semver 3.4.3 Semver library that offers utilities, version constraint parsing and validation. google/protobuf 4.29.1 proto library for PHP guzzlehttp/guzzle 7.9.2 Guzzle is a PHP HTTP client library guzzlehttp/promises 2.0.4 Guzzle promises library guzzlehttp/psr7 2.7.0 PSR-7 message implementation that also provides common utility methods nyholm/psr7 1.8.2 A fast PHP7 implementation of PSR-7 nyholm/psr7-server 1.1.0 Helper classes to handle PSR-7 server requests open-telemetry/api 1.1.1 API for OpenTelemetry PHP. open-telemetry/context 1.1.0 Context implementation for OpenTelemetry PHP. open-telemetry/exporter-otlp 1.1.0 OTLP exporter for OpenTelemetry. open-telemetry/gen-otlp-protobuf 1.2.1 PHP protobuf files for communication with OpenTelemetry OTLP collectors/servers. open-telemetry/sdk 1.1.2 SDK for OpenTelemetry PHP. open-telemetry/sem-conv 1.27.1 Semantic conventions for OpenTelemetry PHP. php-http/discovery 1.20.0 Finds and installs PSR-7, PSR-17, PSR-18 and HTTPlug implementations php-http/guzzle7-adapter 1.1.0 Guzzle 7 HTTP Adapter php-http/httplug 2.4.1 HTTPlug, the HTTP client abstraction for PHP php-http/promise 1.3.1 Promise used for asynchronous HTTP requests psr/container 2.0.2 Common Container Interface (PHP FIG PSR-11) psr/http-client 1.0.3 Common interface for HTTP clients psr/http-factory 1.1.0 PSR-17: Common interfaces for PSR-7 HTTP message factories psr/http-message 2.0 Common interface for HTTP messages psr/log 3.0.2 Common interface for logging libraries ralouphie/getallheaders 3.0.3 A polyfill for getallheaders. ramsey/collection 2.0.0 A PHP library for representing and manipulating collections. ramsey/uuid 4.7.6 A PHP library for generating and working with universally unique identifiers (UUIDs). 
symfony/deprecation-contracts 3.5.1 A generic function and convention to trigger deprecation notices
symfony/http-client 6.4.16 Provides powerful methods to fetch HTTP resources synchronously or asynchronously
symfony/http-client-contracts 3.5.1 Generic abstractions related to HTTP clients
symfony/polyfill-mbstring 1.31.0 Symfony polyfill for the Mbstring extension
symfony/polyfill-php82 1.31.0 Symfony polyfill backporting some PHP 8.2+ features to lower PHP versions
symfony/service-contracts 3.5.1 Generic abstractions related to writing services
tbachert/spi 1.0.2 Service provider loading facility
luizpolli@PCWIN11-LPOLLI:~$

4. Set linux env variables and php.ini.

luizpolli@PCWIN11-LPOLLI:~$ env|grep OTEL
OTEL_EXPORTER_OTLP_TRACES_HEADERS=x-sf-token=uv8z-g77txiCZigBV1OZVg
OTEL_RESOURCE_ATTRIBUTES=deployment.environment=prod,service.version=1.0
OTEL_EXPORTER_OTLP_ENDPOINT=https://ingest.eu1.signalfx.com/trace/otlp
OTEL_SERVICE_NAME=shopping
OTEL_PHP_AUTOLOAD_ENABLED=true
luizpolli@PCWIN11-LPOLLI:~$ cat /etc/php/8.1/apache2/php.ini |grep OTEL
OTEL_RESOURCE_ATTRIBUTES="deployment.environment=prod,service.version=1.0"
OTEL_EXPORTER_OTLP_ENDPOINT="http://localhost:4318"
OTEL_SERVICE_NAME=shopping
OTEL_PHP_AUTOLOAD_ENABLED=true
luizpolli@PCWIN11-LPOLLI:~$

5. Restarted the application.

luizpolli@PCWIN11-LPOLLI:~$ sudo systemctl restart apache2
[sudo] password for luizpolli:
luizpolli@PCWIN11-LPOLLI:~$ sudo systemctl status apache2
● apache2.service - The Apache HTTP Server
     Loaded: loaded (/lib/systemd/system/apache2.service; enabled; vendor preset: enabled)
     Active: active (running) since Tue 2024-12-10 17:32:43 CET; 3s ago
       Docs: https://httpd.apache.org/docs/2.4/
    Process: 53957 ExecStart=/usr/sbin/apachectl start (code=exited, status=0/SUCCESS)
   Main PID: 53961 (apache2)
      Tasks: 6 (limit: 18994)
     Memory: 13.8M
     CGroup: /system.slice/apache2.service
             ├─53961 /usr/sbin/apache2 -k start
             ├─53962 /usr/sbin/apache2 -k start
             ├─53963 /usr/sbin/apache2 -k start
             ├─53964 /usr/sbin/apache2 -k start
             ├─53965 /usr/sbin/apache2 -k start
             └─53966 /usr/sbin/apache2 -k start
Dec 10 17:32:43 PCWIN11-LPOLLI systemd[1]: Starting The Apache HTTP Server...
Dec 10 17:32:43 PCWIN11-LPOLLI systemd[1]: Started The Apache HTTP Server.
luizpolli@PCWIN11-LPOLLI:~$

6. Checking the Splunk O11y SaaS APM page, we cannot see any spans. Any ideas on what is wrong or missing?
I'm working with the Windows TA for Splunk; however, the metrics it obtains for CPU are not correct. Nothing similar is reflected on my server. The same thing happens when consulting the RAM. Is there any other way to consume the CPU or RAM usage? What other alternative would be the solution to make them match my server data?
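For comparison, the Windows add-on collects these values through perfmon inputs, so one thing worth checking is which counters and intervals are enabled; a minimal sketch (stanza name, index, and interval are assumptions):

[perfmon://CPU]
object = Processor
counters = % Processor Time
instances = _Total
interval = 10
index = perfmon
disabled = 0

Note that recent Windows Task Manager versions display "% Processor Utility" while the add-on's usual CPU input reads "% Processor Time", which can make the two numbers differ even when both are correct; this is a general Windows behaviour rather than something confirmed from the data in the post.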