All Posts
Hi smart folks. I have the output of a REST API call, as seen below. I need to split each of the records delimited by the {} into its own event, with each of the key:value pairs defined for each record.

[
  { "name": "ESSENTIAL", "status": "ENABLED", "compliance": "COMPLIANT", "consumptionCounter": 17, "daysOutOfCompliance": "-", "lastAuthorization": "Dec 11,2024 07:32:21 AM" },
  { "name": "ADVANTAGE", "status": "ENABLED", "compliance": "EVALUATION", "consumptionCounter": 0, "daysOutOfCompliance": "-", "lastAuthorization": "Jul 09,2024 22:49:25 PM" },
  { "name": "PREMIER", "status": "ENABLED", "compliance": "EVALUATION", "consumptionCounter": 0, "daysOutOfCompliance": "-", "lastAuthorization": "Aug 10,2024 21:10:44 PM" },
  { "name": "DEVICEADMIN", "status": "ENABLED", "compliance": "COMPLIANT", "consumptionCounter": 2, "daysOutOfCompliance": "-", "lastAuthorization": "Dec 11,2024 07:32:21 AM" },
  { "name": "VM", "status": "ENABLED", "compliance": "COMPLIANT", "consumptionCounter": 2, "daysOutOfCompliance": "-", "lastAuthorization": "Dec 11,2024 07:32:21 AM" }
]

Thanks in advance for any help you all might offer to get me down the right track.
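One possible starting point (a sketch, assuming the whole JSON array arrives as a single event in _raw): use spath to read the top-level array into a multivalue field, mvexpand to split it into one result per record, then spath again to extract each record's fields.

| spath path={} output=record
| mvexpand record
| spath input=record
| table name status compliance consumptionCounter daysOutOfCompliance lastAuthorization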
You're right in that location-based analysis can often highlight interesting things in data. Postal codes are common in many countries; I used Australian postcodes, along with postcode population density information, to build some covid-related dashboards some years ago. It's also possible to do geocoding, e.g. using Google's API https://developers.google.com/maps/documentation/geocoding/overview (there are others), to convert addresses to lat/long and then get postcode information. I have used that in the past to do distance calculations between GPS coordinates using the haversine formula, so you can then include a distance element in your events where relevant, e.g. to answer the question "where's the nearest...?" What is the challenge you face - is it getting reliable postcode data from your event data? You can sometimes find good sources of postcode-to-GPS-coordinate data; I found some downloadable Australian CSV files containing Suburb/Postcode/GPS coordinate data that I used as a lookup dataset, which you can then use in your dashboard.
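For reference, a minimal sketch of the haversine distance in SPL, assuming fields lat1/lon1/lat2/lon2 in decimal degrees (the field names are illustrative, not from this thread):

| eval rlat1=lat1*pi()/180, rlat2=lat2*pi()/180, dlat=(lat2-lat1)*pi()/180, dlon=(lon2-lon1)*pi()/180
| eval a=pow(sin(dlat/2),2) + cos(rlat1)*cos(rlat2)*pow(sin(dlon/2),2)
| eval distance_km=2*6371*atan2(sqrt(a), sqrt(1-a))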
@Aresndiz The data in Splunk is the data being sent by that machine. What tells you that the data in Splunk is not the same as the data on the server? Splunk will not change the data coming from your server. I note that the table and the event list do not appear to have the same information, e.g. CPU instance 13 has a reading of 9.32 in your table, yet that number does not match any of the event data you show. Is this what you mean? CPU measurements are sometimes difficult to compare - in your example, you show data from a 16-core CPU with individual cores ranging from 7 to 60% and a total of 15%. What is the sampling rate of the readings being sent to Splunk? Each reading represents the average value since the previous reading, so if you use a different sampling interval when looking at data on your server you may well see different values - you need to be comparing like with like.
The use of makeresults is just to show examples of how to use a technique - what you actually need is the eval statement that sets the field 'color' based on the values of State_after. Add it after your stats command:

| eval color=case(State_after="DOWN", "#FF0000", State_after="ACTIVE", "#00FF00", State_after="STANDBY", "#FFBF00")
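In context, the pipeline might look something like this (a sketch - the base search and stats clause are placeholders, not from your thread):

index=my_index sourcetype=my_sourcetype
| stats latest(State_after) as State_after by host
| eval color=case(State_after="DOWN", "#FF0000", State_after="ACTIVE", "#00FF00", State_after="STANDBY", "#FFBF00")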
There's another app on Splunkbase for this: https://splunkbase.splunk.com/app/7339
There are an awful lot of different UAs, and they can introduce themselves in many different ways. It's not standardized in any way, so browser detection is more an art than a strict science. And that's even before we take into account that people can spoof their UA strings or set them to any arbitrary value. There are sites gathering known UA strings, though, like https://explore.whatismybrowser.com/useragents/parse/?analyse-my-user-agent=yes#parse-useragent BTW, your search is very inefficient.
Do something like this to find out which events aren't being counted, and adjust your matches accordingly:

| eval browser=case(
    searchmatch("*OPR*"),"Opera",
    searchmatch("*Edg*"),"Edge",
    searchmatch("*Chrome*Mobile*Safari*"),"Chrome",
    searchmatch("*firefox*"),"Firefox",
    searchmatch("*CriOS*safari"),"Safari")
| where isnull(browser)
You could try something like this:

| stats count(eval(match(_raw, "Invalid requestTimestamp"))) as IrT
        count(eval(match(_raw, "error events found for key"))) as eeffk
        count(eval(match(_raw, "Exception while calling some API ...java.util.concurrent.TimeoutException"))) as toe
Hi. You could try to play with the punct field. I'm quite sure it's not exactly what you are looking for, but maybe it helps you find those similarities so you can then take them forward in some other way. See: punct. r. Ismo
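A minimal sketch of that idea (the index name is a placeholder): group events by their punctuation pattern and count each pattern.

index=my_index
| stats count by punct
| sort -count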
When rex'ing backslashes, you need to quadruple them:

| rex "eligible\\\\\":(?<eligibility_status>[^,]+)"
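To illustrate with a runnable sketch (the sample event is made up): the data contains one literal backslash before the quote, the regex engine needs \\ to match it, and the SPL string parser doubles that again - hence four backslashes plus the escaped quote.

| makeresults
| eval _raw="{\"payload\":\"{\\\"eligible\\\":true,\\\"tier\\\":1}\"}"
| rex "eligible\\\\\":(?<eligibility_status>[^,]+)"
| table eligibility_status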
Hi. As already said, there is a lot of stuff to tweak before you should do this in production, but what exactly depends on your use case. With a PoC environment you can start with e.g. this: https://lantern.splunk.com/Splunk_Platform/Getting_Started/Getting_started_with_Splunk_Enterprise?mt-learningpath=enterprisestart But for real production, I propose that you hire a Splunk Partner or another person who already knows what needs to be done and how. t. Ismo
What exactly do you mean by "when I do it in Splunk"?
As usual, it depends. Right after installation Splunk can be used, and often is - for example, in PoC/PoV scenarios where you just want to show the prospective customer what it can do on a quick-and-dirty setup. But such a setup will probably quickly hit some problems because it hasn't been pre-configured. And it's not only about configuration as the technical process of setting things up via the GUI/conf files/CLI/REST API, but also about planning your environment.
Not out of the box. Maybe you could do something like that with MLTK but I've never tried it.
Is there any way to search for similar strings dynamically in different logs? I want to group unique error strings coming from different logs. The events come from different applications with different logging formats. I am creating a report that shows the count of events for each unique error string.

Sample events:

error events found for key a1
Invalid requestTimestamp abc
error event found for key a2
Invalid requestTimestamp def
correlationID - 1234 Exception while calling some API ...java.util.concurrent.TimeoutException
correlationID - 2345 Exception while calling some API ...java.util.concurrent.TimeoutException

Required results: I am looking for the following stats from the above error log statements

1) Invalid requestTimestamp - 2
2) error events found for key - 2
3) Exception while calling some API ...java.util.concurrent.TimeoutException - 2
I just installed Splunk Enterprise on Windows Server 2022, and I am able to access the web GUI. At this point, do I need to make any changes to server.conf or inputs.conf? Also, below are the steps I am planning before I install the UF on clients:

1. Configure LDAP and other parameters
2. Create users (admin and other users)
3. Identify the data ingestion disk partition
4. Enable data receiving
5. Create indexes

Am I missing anything before I install the UF and start sending data to the indexer? I have checked the documentation site but haven't found anything specific about the initial configuration; maybe I am not looking in the right place. Thanks for your help in advance.
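For the "enable data receiving" and "create indexes" steps, a minimal sketch of the indexer-side configuration (the port, index name, and paths are illustrative assumptions, not specific to this setup):

# %SPLUNK_HOME%\etc\system\local\inputs.conf - listen for forwarder traffic
[splunktcp://9997]
disabled = 0

# %SPLUNK_HOME%\etc\system\local\indexes.conf - a custom index
[my_windows_index]
homePath   = $SPLUNK_DB/my_windows_index/db
coldPath   = $SPLUNK_DB/my_windows_index/colddb
thawedPath = $SPLUNK_DB/my_windows_index/thaweddb

Both can also be done through the web GUI (Settings > Forwarding and receiving, and Settings > Indexes), which writes equivalent stanzas for you.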
@jwv wrote:

I've tried this a few ways with no success, I tried to just append the where condition on the end of my query so it looks something like this

index="my-index" | search "http 404" | stats count | where count => 250 AND count <= 100

but this still just returns the number of matching events and I run into the same problem trying to set up the alert.

That the search "just returns the number of matching events" is to be expected, since that is all it is told to do. The where command should cause the search to return a result only if that result is a number between 250 and 500; otherwise it should say "No results found". That is what will trigger the alert - if there is a result (Number of events > 0), it's because the search criteria were met.
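For completeness, a corrected version of the search might look like this (a sketch; note that => is not a valid SPL comparison operator, and the original bounds 250..100 can never both be true):

index="my-index" "http 404"
| stats count
| where count >= 250 AND count <= 500

With this form, the alert trigger condition can simply be "Number of Results > 0".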
I've tried this a few ways with no success. I tried to just append the where condition on the end of my query, so it looks something like this:

index="my-index" | search "http 404" | stats count | where count => 250 AND count <= 100

but this still just returns the number of matching events, and I run into the same problem trying to set up the alert. I have also tried using eval to output a result, like this:

index="my-index" | search "http 404" | stats count | eval result=if(count>=250 AND count<=500, 1, 0) | table result

which properly returns 1 or 0 depending on whether the number of results is in the range I am looking for; however, it is still not alerting properly when the trigger is set to >1. I think the trigger is still running against the number of events, which is also being returned (and I like that I can see this), and not the result I set up. Any other suggestions would be much appreciated.
Hi everyone, I performed all the steps to instrument a PHP application into Splunk O11y SaaS and there is no data (spans). The steps I followed are below:

1. Installed the Linux packages.

luizpolli@PCWIN11-LPOLLI:~$ dpkg -l|egrep make
ii automake 1:1.16.5-1.3 all Tool for generating GNU Standards-compliant Makefiles
ii make 4.3-4.1build1 amd64 utility for directing compilation
ii xxd 2:8.2.3995-1ubuntu2.21 amd64 tool to make (or reverse) a hex dump
luizpolli@PCWIN11-LPOLLI:~$ dpkg -l|egrep autoconf
ii autoconf 2.71-2 all automatic configure script builder
luizpolli@PCWIN11-LPOLLI:~$ dpkg -l|grep gcc
ii gcc 4:11.2.0-1ubuntu1 amd64 GNU C compiler
ii gcc-11 11.4.0-1ubuntu1~22.04 amd64 GNU C compiler
ii gcc-11-base:amd64 11.4.0-1ubuntu1~22.04 amd64 GCC, the GNU Compiler Collection (base package)
ii gcc-12-base:amd64 12.3.0-1ubuntu1~22.04 amd64 GCC, the GNU Compiler Collection (base package)
ii libgcc-11-dev:amd64 11.4.0-1ubuntu1~22.04 amd64 GCC support library (development files)
ii libgcc-s1:amd64 12.3.0-1ubuntu1~22.04 amd64 GCC support library
luizpolli@PCWIN11-LPOLLI:~$

2. Installed the PHP extension using pecl and added opentelemetry.so to the php.ini file.

3. Installed some extensions using composer.

php composer.phar install open-telemetry/exporter-otlp:^1.0.3
php composer.phar install php-http/guzzle7-adapter:^1.0
luizpolli@PCWIN11-LPOLLI:~$ composer show
brick/math 0.12.1 Arbitrary-precision arithmetic library
composer/semver 3.4.3 Semver library that offers utilities, version constraint parsing and validation.
google/protobuf 4.29.1 proto library for PHP
guzzlehttp/guzzle 7.9.2 Guzzle is a PHP HTTP client library
guzzlehttp/promises 2.0.4 Guzzle promises library
guzzlehttp/psr7 2.7.0 PSR-7 message implementation that also provides common utility methods
nyholm/psr7 1.8.2 A fast PHP7 implementation of PSR-7
nyholm/psr7-server 1.1.0 Helper classes to handle PSR-7 server requests
open-telemetry/api 1.1.1 API for OpenTelemetry PHP.
open-telemetry/context 1.1.0 Context implementation for OpenTelemetry PHP.
open-telemetry/exporter-otlp 1.1.0 OTLP exporter for OpenTelemetry.
open-telemetry/gen-otlp-protobuf 1.2.1 PHP protobuf files for communication with OpenTelemetry OTLP collectors/servers.
open-telemetry/sdk 1.1.2 SDK for OpenTelemetry PHP.
open-telemetry/sem-conv 1.27.1 Semantic conventions for OpenTelemetry PHP.
php-http/discovery 1.20.0 Finds and installs PSR-7, PSR-17, PSR-18 and HTTPlug implementations
php-http/guzzle7-adapter 1.1.0 Guzzle 7 HTTP Adapter
php-http/httplug 2.4.1 HTTPlug, the HTTP client abstraction for PHP
php-http/promise 1.3.1 Promise used for asynchronous HTTP requests
psr/container 2.0.2 Common Container Interface (PHP FIG PSR-11)
psr/http-client 1.0.3 Common interface for HTTP clients
psr/http-factory 1.1.0 PSR-17: Common interfaces for PSR-7 HTTP message factories
psr/http-message 2.0 Common interface for HTTP messages
psr/log 3.0.2 Common interface for logging libraries
ralouphie/getallheaders 3.0.3 A polyfill for getallheaders.
ramsey/collection 2.0.0 A PHP library for representing and manipulating collections.
ramsey/uuid 4.7.6 A PHP library for generating and working with universally unique identifiers (UUIDs).
symfony/deprecation-contracts 3.5.1 A generic function and convention to trigger deprecation notices
symfony/http-client 6.4.16 Provides powerful methods to fetch HTTP resources synchronously or asynchronously
symfony/http-client-contracts 3.5.1 Generic abstractions related to HTTP clients
symfony/polyfill-mbstring 1.31.0 Symfony polyfill for the Mbstring extension
symfony/polyfill-php82 1.31.0 Symfony polyfill backporting some PHP 8.2+ features to lower PHP versions
symfony/service-contracts 3.5.1 Generic abstractions related to writing services
tbachert/spi 1.0.2 Service provider loading facility
luizpolli@PCWIN11-LPOLLI:~$

4. Set Linux env variables and php.ini.

luizpolli@PCWIN11-LPOLLI:~$ env|grep OTEL
OTEL_EXPORTER_OTLP_TRACES_HEADERS=x-sf-token=uv8z-g77txiCZigBV1OZVg
OTEL_RESOURCE_ATTRIBUTES=deployment.environment=prod,service.version=1.0
OTEL_EXPORTER_OTLP_ENDPOINT=https://ingest.eu1.signalfx.com/trace/otlp
OTEL_SERVICE_NAME=shopping
OTEL_PHP_AUTOLOAD_ENABLED=true
luizpolli@PCWIN11-LPOLLI:~$ cat /etc/php/8.1/apache2/php.ini |grep OTEL
OTEL_RESOURCE_ATTRIBUTES="deployment.environment=prod,service.version=1.0"
OTEL_EXPORTER_OTLP_ENDPOINT="http://localhost:4318"
OTEL_SERVICE_NAME=shopping
OTEL_PHP_AUTOLOAD_ENABLED=true
luizpolli@PCWIN11-LPOLLI:~$

5. Restarted the application.

luizpolli@PCWIN11-LPOLLI:~$ sudo systemctl restart apache2
[sudo] password for luizpolli:
luizpolli@PCWIN11-LPOLLI:~$ sudo systemctl status apache2
● apache2.service - The Apache HTTP Server
Loaded: loaded (/lib/systemd/system/apache2.service; enabled; vendor preset: enabled)
Active: active (running) since Tue 2024-12-10 17:32:43 CET; 3s ago
Docs: https://httpd.apache.org/docs/2.4/
Process: 53957 ExecStart=/usr/sbin/apachectl start (code=exited, status=0/SUCCESS)
Main PID: 53961 (apache2)
Tasks: 6 (limit: 18994)
Memory: 13.8M
CGroup: /system.slice/apache2.service
├─53961 /usr/sbin/apache2 -k start
├─53962 /usr/sbin/apache2 -k start
├─53963 /usr/sbin/apache2 -k start
├─53964 /usr/sbin/apache2 -k start
├─53965 /usr/sbin/apache2 -k start
└─53966 /usr/sbin/apache2 -k start
Dec 10 17:32:43 PCWIN11-LPOLLI systemd[1]: Starting The Apache HTTP Server...
Dec 10 17:32:43 PCWIN11-LPOLLI systemd[1]: Started The Apache HTTP Server.
luizpolli@PCWIN11-LPOLLI:~$

6. Checking the Splunk O11y SaaS APM page, we cannot see any spans. Any ideas on what is wrong or missing?
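One thing worth checking (an assumption on my part, not confirmed from the post): environment variables exported in an interactive shell are not inherited by Apache when it is started via systemd, so the OTEL_* values shown by env may never reach mod_php. Also note that the two configurations disagree - the shell points at https://ingest.eu1.signalfx.com/trace/otlp while php.ini points at http://localhost:4318. On Debian/Ubuntu, one common place to set such variables for Apache is /etc/apache2/envvars, which apachectl sources on startup, e.g.:

# /etc/apache2/envvars - values copied from step 4 above; pick ONE endpoint
export OTEL_PHP_AUTOLOAD_ENABLED=true
export OTEL_SERVICE_NAME=shopping
export OTEL_RESOURCE_ATTRIBUTES=deployment.environment=prod,service.version=1.0
export OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318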
I'm working with the Windows TA for Splunk; however, the metrics it obtains for CPU are not correct - nothing similar is reflected on my server. The same thing happens when consulting RAM. Is there any other way to consume the CPU or RAM usage? What alternative would make these values match my server's data?