All Posts

I just installed Splunk Enterprise on Windows Server 2022 and I am able to access the web GUI. At this point, do I need to make any changes to server.conf or inputs.conf? Also, below are the steps I am planning before I install the UF on clients:
1. Configure LDAP and other parameters
2. Create users (admin and other users)
3. Identify the data ingestion disk partition
4. Enable data receiving
5. Create indexes
Am I missing anything before I install the UF and start sending data to the indexer? I have checked the documentation site but haven't found anything specific about the initial configuration; maybe I am not looking in the right place. Thanks for your help in advance.
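Of the steps listed above, "Enable data receiving" and "Create indexes" can be done either in Splunk Web (Settings) or in configuration files. A minimal sketch, assuming default paths; the index name win_events is hypothetical:

```
# inputs.conf on the indexer: listen for forwarder traffic on the
# conventional receiving port 9997
[splunktcp://9997]
disabled = 0

# indexes.conf: a hypothetical index for the incoming data
[win_events]
homePath   = $SPLUNK_DB/win_events/db
coldPath   = $SPLUNK_DB/win_events/colddb
thawedPath = $SPLUNK_DB/win_events/thaweddb
```

A restart of the Splunk service is needed for these file changes to take effect.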
@jwv wrote: I've tried this a few ways with no success, I tried to just append the where condition on the end of my query so it looks something like this

index="my-index" | search "http 404" | stats count | where count => 250 AND count <= 100

but this still just returns the number of matching events and I run into the same problem trying to set up the alert.

That the search "just returns the number of matching events" is to be expected, since that is all it is told to do. The where command should cause the search to return a result only if the count is between 250 and 500; otherwise it should say "No results found". That is what will trigger the alert: if there is a result (number of results > 0), it is because the search criteria were met.
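Note that, as quoted, the where clause has two problems: "=>" is not a valid SPL comparison operator (it should be ">="), and "count >= 250 AND count <= 100" can never be true. A corrected sketch for the 250-500 range discussed here:

```
index="my-index" "http 404"
| stats count
| where count>=250 AND count<=500
```

With this, the alert trigger condition can simply be "number of results greater than 0".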
I've tried this a few ways with no success. I tried to just append the where condition on the end of my query so it looks something like this

index="my-index" | search "http 404" | stats count | where count => 250 AND count <= 100

but this still just returns the number of matching events, and I run into the same problem trying to set up the alert. I have also tried using eval to output a result, like this

index="my-index" | search "http 404" | stats count | eval result=if(count>=250 AND count<=500, 1, 0) | table result

which properly returns 1 or 0 depending on whether the count is in the range I am looking for; however, it is still not alerting properly when the trigger is set to >1. I think the trigger is still running against the number of events, which is also being returned (and I like that I can see this), and not the result I set up. Any other suggestions would be much appreciated.
Hi everyone, I performed all the steps to instrument a PHP application into Splunk O11y SaaS and there is no data (spans). The steps performed are below.

1. Installed the Linux packages.

luizpolli@PCWIN11-LPOLLI:~$ dpkg -l|egrep make
ii automake 1:1.16.5-1.3 all Tool for generating GNU Standards-compliant Makefiles
ii make 4.3-4.1build1 amd64 utility for directing compilation
ii xxd 2:8.2.3995-1ubuntu2.21 amd64 tool to make (or reverse) a hex dump
luizpolli@PCWIN11-LPOLLI:~$ dpkg -l|egrep autoconf
ii autoconf 2.71-2 all automatic configure script builder
luizpolli@PCWIN11-LPOLLI:~$ dpkg -l|grep gcc
ii gcc 4:11.2.0-1ubuntu1 amd64 GNU C compiler
ii gcc-11 11.4.0-1ubuntu1~22.04 amd64 GNU C compiler
ii gcc-11-base:amd64 11.4.0-1ubuntu1~22.04 amd64 GCC, the GNU Compiler Collection (base package)
ii gcc-12-base:amd64 12.3.0-1ubuntu1~22.04 amd64 GCC, the GNU Compiler Collection (base package)
ii libgcc-11-dev:amd64 11.4.0-1ubuntu1~22.04 amd64 GCC support library (development files)
ii libgcc-s1:amd64 12.3.0-1ubuntu1~22.04 amd64 GCC support library
luizpolli@PCWIN11-LPOLLI:~$

2. Installed the PHP extension using pecl and added opentelemetry.so to the php.ini file.

3. Installed some extensions using composer.

php composer.phar install open-telemetry/exporter-otlp:^1.0.3
php composer.phar install php-http/guzzle7-adapter:^1.0
luizpolli@PCWIN11-LPOLLI:~$ composer show
brick/math 0.12.1 Arbitrary-precision arithmetic library
composer/semver 3.4.3 Semver library that offers utilities, version constraint parsing and validation.
google/protobuf 4.29.1 proto library for PHP
guzzlehttp/guzzle 7.9.2 Guzzle is a PHP HTTP client library
guzzlehttp/promises 2.0.4 Guzzle promises library
guzzlehttp/psr7 2.7.0 PSR-7 message implementation that also provides common utility methods
nyholm/psr7 1.8.2 A fast PHP7 implementation of PSR-7
nyholm/psr7-server 1.1.0 Helper classes to handle PSR-7 server requests
open-telemetry/api 1.1.1 API for OpenTelemetry PHP.
open-telemetry/context 1.1.0 Context implementation for OpenTelemetry PHP.
open-telemetry/exporter-otlp 1.1.0 OTLP exporter for OpenTelemetry.
open-telemetry/gen-otlp-protobuf 1.2.1 PHP protobuf files for communication with OpenTelemetry OTLP collectors/servers.
open-telemetry/sdk 1.1.2 SDK for OpenTelemetry PHP.
open-telemetry/sem-conv 1.27.1 Semantic conventions for OpenTelemetry PHP.
php-http/discovery 1.20.0 Finds and installs PSR-7, PSR-17, PSR-18 and HTTPlug implementations
php-http/guzzle7-adapter 1.1.0 Guzzle 7 HTTP Adapter
php-http/httplug 2.4.1 HTTPlug, the HTTP client abstraction for PHP
php-http/promise 1.3.1 Promise used for asynchronous HTTP requests
psr/container 2.0.2 Common Container Interface (PHP FIG PSR-11)
psr/http-client 1.0.3 Common interface for HTTP clients
psr/http-factory 1.1.0 PSR-17: Common interfaces for PSR-7 HTTP message factories
psr/http-message 2.0 Common interface for HTTP messages
psr/log 3.0.2 Common interface for logging libraries
ralouphie/getallheaders 3.0.3 A polyfill for getallheaders.
ramsey/collection 2.0.0 A PHP library for representing and manipulating collections.
ramsey/uuid 4.7.6 A PHP library for generating and working with universally unique identifiers (UUIDs).
symfony/deprecation-contracts 3.5.1 A generic function and convention to trigger deprecation notices
symfony/http-client 6.4.16 Provides powerful methods to fetch HTTP resources synchronously or asynchronously
symfony/http-client-contracts 3.5.1 Generic abstractions related to HTTP clients
symfony/polyfill-mbstring 1.31.0 Symfony polyfill for the Mbstring extension
symfony/polyfill-php82 1.31.0 Symfony polyfill backporting some PHP 8.2+ features to lower PHP versions
symfony/service-contracts 3.5.1 Generic abstractions related to writing services
tbachert/spi 1.0.2 Service provider loading facility
luizpolli@PCWIN11-LPOLLI:~$

4. Set the Linux env variables and php.ini.

luizpolli@PCWIN11-LPOLLI:~$ env|grep OTEL
OTEL_EXPORTER_OTLP_TRACES_HEADERS=x-sf-token=uv8z-g77txiCZigBV1OZVg
OTEL_RESOURCE_ATTRIBUTES=deployment.environment=prod,service.version=1.0
OTEL_EXPORTER_OTLP_ENDPOINT=https://ingest.eu1.signalfx.com/trace/otlp
OTEL_SERVICE_NAME=shopping
OTEL_PHP_AUTOLOAD_ENABLED=true
luizpolli@PCWIN11-LPOLLI:~$ cat /etc/php/8.1/apache2/php.ini |grep OTEL
OTEL_RESOURCE_ATTRIBUTES="deployment.environment=prod,service.version=1.0"
OTEL_EXPORTER_OTLP_ENDPOINT="http://localhost:4318"
OTEL_SERVICE_NAME=shopping
OTEL_PHP_AUTOLOAD_ENABLED=true
luizpolli@PCWIN11-LPOLLI:~$

5. Restarted the application.

luizpolli@PCWIN11-LPOLLI:~$ sudo systemctl restart apache2
[sudo] password for luizpolli:
luizpolli@PCWIN11-LPOLLI:~$ sudo systemctl status apache2
● apache2.service - The Apache HTTP Server
Loaded: loaded (/lib/systemd/system/apache2.service; enabled; vendor preset: enabled)
Active: active (running) since Tue 2024-12-10 17:32:43 CET; 3s ago
Docs: https://httpd.apache.org/docs/2.4/
Process: 53957 ExecStart=/usr/sbin/apachectl start (code=exited, status=0/SUCCESS)
Main PID: 53961 (apache2)
Tasks: 6 (limit: 18994)
Memory: 13.8M
CGroup: /system.slice/apache2.service
├─53961 /usr/sbin/apache2 -k start
├─53962 /usr/sbin/apache2 -k start
├─53963 /usr/sbin/apache2 -k start
├─53964 /usr/sbin/apache2 -k start
├─53965 /usr/sbin/apache2 -k start
└─53966 /usr/sbin/apache2 -k start
Dec 10 17:32:43 PCWIN11-LPOLLI systemd[1]: Starting The Apache HTTP Server...
Dec 10 17:32:43 PCWIN11-LPOLLI systemd[1]: Started The Apache HTTP Server.
luizpolli@PCWIN11-LPOLLI:~$

6. Checking the Splunk O11y SaaS APM page, we cannot see any spans. Any ideas on what is wrong or missing?
I'm working with the Windows TA for Splunk; however, the CPU metrics it obtains are not correct. Nothing similar is reflected on my server, and the same thing happens when I check RAM. Is there any other way to collect CPU or RAM usage? What alternative would make the values match my server's data?
Hi, have you checked that your raw event is what you think it is? As it seems to be JSON, it probably contains some characters you aren't expecting. Open the event and select "Show source" from the event actions. There you can see what the event actually contains, and then you can modify your rex to match it. r. Ismo
I am trying to regex out "eligible" with the answer field true. When I do it in the regex builder, this works: eligible\\":(?<eligibility_status>[^,]+) but when I do it in Splunk, adding the additional backslash to escape the quotation, the query runs but the field is not there.

Name":null,"Id":null,"WaypointId":null}},"Body":{"APIServiceCall":{"ResponseStatusCode":"200","ResponsePayload":"{\"eligibilityIndicator\":[{\"service\":\"Mobile\",\"eligible\":true,\"successReasonCodes\":[],\"failureReasonCodes\":[]}]}"}}}
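One way to sidestep the backslash-escaping problem entirely is to avoid matching the backslash and quote literally: in PCRE, \W+ consumes the \": sequence between the key and its value with no quoting gymnastics. A sketch (the index name is a placeholder):

```
index=your_index
| rex "eligible\W+(?<eligibility_status>\w+)"
```

This captures true or false into eligibility_status regardless of how many layers of escaping the payload carries.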
Location information for cyber data tends to be very inaccurate, especially if we're talking about mapping IP addresses to physical ones. One may be able to narrow an IP address to a state or city, but a ZIP/postal code is too fine-grained. If you try, you may find that the postal code at the center of a city or state gets used the most, because of the way IP locations are assigned: much the same as how the city at the center of a state is often used for any IP address in that state.
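Splunk's built-in iplocation command illustrates this granularity limit: it enriches events with City, Region, Country, lat, and lon, but provides no postal-code field at all. A minimal sketch using a well-known public address:

```
| makeresults
| eval ip="8.8.8.8"
| iplocation ip
| table ip City Region Country lat lon
```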
Hello Everyone, I am trying to extract the unique browser names, along with their counts, from the list of user agents (attached file) printed in the user_agent field of the Splunk logs.

index=my_index "master" user-agent!="-" user-agent!="DIAGNOSTICS" | eval browser=case( searchmatch("*OPR*"),"Opera", searchmatch("*Edg*"),"Edge", searchmatch("*Chrome*Mobile*Safari*"),"Chrome", searchmatch("*firefox*"),"Firefox", searchmatch("*CriOS*safari"),"Safari") | stats count as page_hit by browser

I am sure the result count is incorrect, as I am not covering all the combinations of browser strings from the attached list. I would appreciate it if someone could help me with this. Many thanks.
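A possible variant, assuming the value really is in a field named user_agent: match() tests a specific field with a regex, whereas searchmatch() tests the whole raw event, and ordering matters because case() returns on the first true condition. Edge and Opera user agents also contain "Chrome", and Chrome user agents also contain "Safari", so the more specific tokens must come first. A sketch:

```
index=my_index "master" user_agent!="-" user_agent!="DIAGNOSTICS"
| eval browser=case(
    match(user_agent, "(?i)OPR|Opera"),    "Opera",
    match(user_agent, "(?i)Edg"),          "Edge",
    match(user_agent, "(?i)Chrome|CriOS"), "Chrome",
    match(user_agent, "(?i)Firefox|FxiOS"),"Firefox",
    match(user_agent, "(?i)Safari"),       "Safari",
    true(), "Other")
| stats count as page_hit by browser
```

The true() branch catches anything unmatched, so the page_hit totals account for every event. (FxiOS is the Firefox-on-iOS token; drop it if it is not in your list.)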
I’ve been diving deeper into using Splunk for analyzing various types of data, and recently I’ve been exploring how location-based data can provide more insightful trends. Specifically, I’ve been curious about using zip codes as a meaningful filter for my searches. I’ve noticed that when I try to correlate events or patterns based on geographical areas, things get a little tricky. I’d love to hear your thoughts on how best to approach this issue or whether anyone else has encountered similar challenges. One thing I’ve realized is that Splunk offers robust tools for organizing and visualizing data, but when I’m dealing with a large dataset, like logs from multiple service locations, finding a way to cleanly incorporate zip codes as a key field for analysis feels like a unique challenge. For example, I recently wanted to track service outages and correlate them with specific zip codes. While I was able to extract the relevant fields using Splunk’s field extraction capabilities, I still felt there was a gap in how I could apply the zip code data dynamically across multiple dashboards. A zip code is a numerical identifier used by postal systems to organize and streamline the delivery of mail to specific geographic regions. In the United States, zip codes typically consist of five digits, with an optional four-digit extension for more precise location targeting. People often ask questions like "What is my zip code?" to clarify the code for their current area. Beyond its primary use in mailing, zip codes are extensively utilized in various fields such as marketing, logistics, and data analysis. In Splunk, incorporating zip codes into searches adds a powerful geographical layer that can reveal trends and patterns within datasets. What I found interesting was how zip codes can act as a lens to uncover patterns that might otherwise go unnoticed. 
For instance, seeing clusters of events in specific areas made me think differently about how I approach my data analysis in general. One time, I noticed a spike in certain service requests clustered within a few zip codes, and that insight led me to explore potential external factors (like weather or traffic conditions). This kind of context adds so much value, and I believe Splunk has the power to deliver it. That said, I wonder if there are specific tools or configurations within Splunk that would make this process smoother and more intuitive. If anyone has experience working with zip code data in Splunk, what are your tips for making the most of it? Are there specific apps or configurations I should look into for better results? I’d appreciate any advice or ideas.
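To make the outage-by-zip-code idea above concrete, here is a minimal sketch; the index name, field names, and the outage filter are hypothetical placeholders for whatever your data actually uses:

```
index=service_logs "outage"
| rex field=address "(?<zip>\b\d{5}\b)"
| stats count as outages by zip
| sort - outages
```

From there, the per-zip counts can feed a dashboard panel, or be joined against a zip-to-coordinates lookup for mapping.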
Hi @ITWhisperer,

I need a small tweak to the same query. I am trying to filter the same data, but it should return only the data that does not contain the "hv_vmbus" pattern on the same day.
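Without seeing the original query, a hedged guess: if the goal is simply to exclude events containing that string, appending a NOT term to the base search may be enough:

```
... your existing search ... NOT "hv_vmbus"
```

Alternatively, | regex _raw!="hv_vmbus" later in the pipeline excludes events whose raw text matches that pattern.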
Chart will put the columns in ascending lexicographic order. To get around this, you can use transpose, sort, and transpose (back). Try something like this:

| bin span=1d _time aligntime=@d
| stats count as myCount by _time, zbpIdentifier
| chart values(myCount) over zbpIdentifier by _time limit=0 useother=f
| transpose 0 column_name=date header_field=zbpIdentifier
| sort 0 -date
| eval date=strftime(date, "%Y %m %d")
| transpose 0 column_name=zbpIdentifier header_field=date
That is the nature of the set diff command - it will tell you there's a difference, but doesn't say what it is. See https://docs.splunk.com/Documentation/Splunk/9.3.2/SearchReference/Set

An alternative would be to count the members of each group and show those with only one member.

| multisearch
    [ search index=db_assets sourcetype=assets_ad_users $user1$
      | dedup displayName sAMAccountName memberOf
      | makemv delim="," memberOf
      | mvexpand memberOf
      | rex field=memberOf "CN=(?<Group>[^,]+)"
      | where Group!=""
      | eval User=$user1$
      | table Group User ]
    [ search index=db_assets sourcetype=assets_ad_users $user2$
      | dedup displayName sAMAccountName memberOf
      | makemv delim="," memberOf
      | mvexpand memberOf
      | rex field=memberOf "CN=(?<Group>[^,]+)"
      | eval User=$user2$
      | where Group!=""
      | table Group User ]
| stats values(User) as Users by Group
| where mvcount(Users)=1
Hi @DCondliffe1, let me know if I can help you more, or, please, accept one answer for the benefit of the other people in the Community. Ciao and happy splunking. Giuseppe P.S.: Karma Points are appreciated
Hello All, I am trying to build an OpenTelemetry collector with a splunk_hec receiver. I am able to get it working and route the data to a tenant based on the token value sent in. What I wanted to do was have a way to handle invalid tokens. Obviously I do not want to ingest traffic with an invalid token, but I would like visibility into this. Is anyone aware of a way to log some sort of message indicating that a bad token was sent in, and what that token value was, and to log that to a specific tenant? Here is an example config line:

- set(resource.attributes["log.source"], "otel.hec.nonprod.fm-mobile-backend-qa") where IsMatch(resource.attributes["com.splunk.hec.access_token"], "9ff3a68d-XXXX-XXXX-XXXX-XXXXXXXXXXXX")

Can I do an else or a wildcard value?

- set(resource.attributes["log.source"], "otel.hec.nonprod.fm-mobile-backend-qa") where IsMatch(resource.attributes["com.splunk.hec.access_token"], "********-****-****-*********")

Or some other way to log a message to the OTel collector with info like host or IP and the token value that was sent? I am just looking to gain visibility into invalid-token data being sent.
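OTTL has no else, but statements within a transform processor run in order, so one untested possibility is a catch-all statement placed after the token-specific ones, tagging anything still unlabeled. The attribute names below mirror the example in the post; the "otel.hec.invalid-token" value is hypothetical:

```
- set(resource.attributes["log.source"], "otel.hec.nonprod.fm-mobile-backend-qa") where IsMatch(resource.attributes["com.splunk.hec.access_token"], "9ff3a68d-XXXX-XXXX-XXXX-XXXXXXXXXXXX")
# ...one statement per known token...
- set(resource.attributes["log.source"], "otel.hec.invalid-token") where resource.attributes["log.source"] == nil
```

Events tagged otel.hec.invalid-token could then be routed to a quarantine tenant or index, with the com.splunk.hec.access_token attribute preserved for inspection.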
Hi @DCondliffe1, it's probably an error, because there isn't any pre-built panel or dashboard in this add-on, also because this is an add-on and not an app. This is a Splunk-supported add-on, so open a case with Splunk Support for it. Ciao. Giuseppe
Please read the description above, where it specifically mentions pre-built panels; there is also a YouTube video from Splunk showing a demo in which pre-built panels are used.
I agree. The last environment I managed had UF versions ranging from high 6.x to low 9.1.x. Any upgrade-readiness scan would light up like a Christmas tree looking at the DS folder.