All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Is there any way to search for similar strings dynamically across different logs? I want to group unique error strings coming from different logs. The events come from different applications, each with its own logging format. I am creating a report that shows the count of events for each unique error string.

Sample events:

error events found for key a1 Invalid requestTimestamp abc
error event found for key a2 Invalid requestTimestamp def
correlationID - 1234 Exception while calling some API ...java.util.concurrent.TimeoutException
correlationID - 2345 Exception while calling some API ...java.util.concurrent.TimeoutException

Required results: I am looking for the following stats from the above error log statements:

1) Invalid requestTimestamp - 2
2) error events found for key - 2
3) Exception while calling some API ...java.util.concurrent.TimeoutException - 2
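One approach worth sketching (hedged: the index name and search terms below are placeholders, not from the post): Splunk's `cluster` command groups events by textual similarity, so it can bucket recurring error strings across applications without writing one pattern per log format. A minimal sketch:

```spl
index=my_app_index (error OR Exception)
| cluster showcount=true t=0.6
| table cluster_count _raw
| sort - cluster_count
```

Lowering `t` (the similarity threshold) makes the grouping looser. Grouping by the built-in `punct` field (`| stats count by punct`) is another cheap alternative when the formats differ mostly in structure rather than wording.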
I just installed Splunk Enterprise on Windows Server 2022 and I am able to access the web GUI. At this point, do I need to make any changes to server.conf or inputs.conf? Also, below are the steps I am planning before I install the UF on clients:

1. Configure LDAP and other parameters
2. Create users (admin and other users)
3. Identify the data ingestion disk partition
4. Enable data receiving
5. Create indexes

Am I missing anything before I install the UF and start sending data to the indexer? I have checked the documentation site but haven't found anything specific about the initial configuration; maybe I am not looking in the right place. Thanks for your help in advance.
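For the "enable data receiving" and "create indexes" steps, this is roughly what the .conf form looks like (a sketch: the index name and drive paths are examples, not recommendations):

```ini
# inputs.conf on the indexer: listen for universal forwarders on 9997
[splunktcp://9997]
disabled = 0

# indexes.conf: a custom index placed on the partition identified for ingestion
[my_app_index]
homePath   = E:\splunkdata\my_app_index\db
coldPath   = E:\splunkdata\my_app_index\colddb
thawedPath = E:\splunkdata\my_app_index\thaweddb
```

Both can also be done from the web GUI (Settings > Forwarding and receiving, and Settings > Indexes); for a single instance, server.conf usually needs no manual edits.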
Hi everyone, I performed all the steps to instrument a PHP application into Splunk O11y SaaS, but there is no data (spans). The steps I followed are below:

1. Installed the Linux packages.

luizpolli@PCWIN11-LPOLLI:~$ dpkg -l|egrep make
ii automake 1:1.16.5-1.3 all Tool for generating GNU Standards-compliant Makefiles
ii make 4.3-4.1build1 amd64 utility for directing compilation
ii xxd 2:8.2.3995-1ubuntu2.21 amd64 tool to make (or reverse) a hex dump
luizpolli@PCWIN11-LPOLLI:~$ dpkg -l|egrep autoconf
ii autoconf 2.71-2 all automatic configure script builder
luizpolli@PCWIN11-LPOLLI:~$ dpkg -l|grep gcc
ii gcc 4:11.2.0-1ubuntu1 amd64 GNU C compiler
ii gcc-11 11.4.0-1ubuntu1~22.04 amd64 GNU C compiler
ii gcc-11-base:amd64 11.4.0-1ubuntu1~22.04 amd64 GCC, the GNU Compiler Collection (base package)
ii gcc-12-base:amd64 12.3.0-1ubuntu1~22.04 amd64 GCC, the GNU Compiler Collection (base package)
ii libgcc-11-dev:amd64 11.4.0-1ubuntu1~22.04 amd64 GCC support library (development files)
ii libgcc-s1:amd64 12.3.0-1ubuntu1~22.04 amd64 GCC support library
luizpolli@PCWIN11-LPOLLI:~$

2. Installed the PHP extension using pecl and added opentelemetry.so to the php.ini file.

3. Installed some extensions using composer.

php composer.phar install open-telemetry/exporter-otlp:^1.0.3
php composer.phar install php-http/guzzle7-adapter:^1.0
luizpolli@PCWIN11-LPOLLI:~$ composer show
brick/math 0.12.1 Arbitrary-precision arithmetic library
composer/semver 3.4.3 Semver library that offers utilities, version constraint parsing and validation.
google/protobuf 4.29.1 proto library for PHP
guzzlehttp/guzzle 7.9.2 Guzzle is a PHP HTTP client library
guzzlehttp/promises 2.0.4 Guzzle promises library
guzzlehttp/psr7 2.7.0 PSR-7 message implementation that also provides common utility methods
nyholm/psr7 1.8.2 A fast PHP7 implementation of PSR-7
nyholm/psr7-server 1.1.0 Helper classes to handle PSR-7 server requests
open-telemetry/api 1.1.1 API for OpenTelemetry PHP.
open-telemetry/context 1.1.0 Context implementation for OpenTelemetry PHP.
open-telemetry/exporter-otlp 1.1.0 OTLP exporter for OpenTelemetry.
open-telemetry/gen-otlp-protobuf 1.2.1 PHP protobuf files for communication with OpenTelemetry OTLP collectors/servers.
open-telemetry/sdk 1.1.2 SDK for OpenTelemetry PHP.
open-telemetry/sem-conv 1.27.1 Semantic conventions for OpenTelemetry PHP.
php-http/discovery 1.20.0 Finds and installs PSR-7, PSR-17, PSR-18 and HTTPlug implementations
php-http/guzzle7-adapter 1.1.0 Guzzle 7 HTTP Adapter
php-http/httplug 2.4.1 HTTPlug, the HTTP client abstraction for PHP
php-http/promise 1.3.1 Promise used for asynchronous HTTP requests
psr/container 2.0.2 Common Container Interface (PHP FIG PSR-11)
psr/http-client 1.0.3 Common interface for HTTP clients
psr/http-factory 1.1.0 PSR-17: Common interfaces for PSR-7 HTTP message factories
psr/http-message 2.0 Common interface for HTTP messages
psr/log 3.0.2 Common interface for logging libraries
ralouphie/getallheaders 3.0.3 A polyfill for getallheaders.
ramsey/collection 2.0.0 A PHP library for representing and manipulating collections.
ramsey/uuid 4.7.6 A PHP library for generating and working with universally unique identifiers (UUIDs).
symfony/deprecation-contracts 3.5.1 A generic function and convention to trigger deprecation notices
symfony/http-client 6.4.16 Provides powerful methods to fetch HTTP resources synchronously or asynchronously
symfony/http-client-contracts 3.5.1 Generic abstractions related to HTTP clients
symfony/polyfill-mbstring 1.31.0 Symfony polyfill for the Mbstring extension
symfony/polyfill-php82 1.31.0 Symfony polyfill backporting some PHP 8.2+ features to lower PHP versions
symfony/service-contracts 3.5.1 Generic abstractions related to writing services
tbachert/spi 1.0.2 Service provider loading facility
luizpolli@PCWIN11-LPOLLI:~$

4. Set the Linux env variables and php.ini.
luizpolli@PCWIN11-LPOLLI:~$ env|grep OTEL
OTEL_EXPORTER_OTLP_TRACES_HEADERS=x-sf-token=uv8z-g77txiCZigBV1OZVg
OTEL_RESOURCE_ATTRIBUTES=deployment.environment=prod,service.version=1.0
OTEL_EXPORTER_OTLP_ENDPOINT=https://ingest.eu1.signalfx.com/trace/otlp
OTEL_SERVICE_NAME=shopping
OTEL_PHP_AUTOLOAD_ENABLED=true
luizpolli@PCWIN11-LPOLLI:~$ cat /etc/php/8.1/apache2/php.ini |grep OTEL
OTEL_RESOURCE_ATTRIBUTES="deployment.environment=prod,service.version=1.0"
OTEL_EXPORTER_OTLP_ENDPOINT="http://localhost:4318"
OTEL_SERVICE_NAME=shopping
OTEL_PHP_AUTOLOAD_ENABLED=true
luizpolli@PCWIN11-LPOLLI:~$

5. Restarted the application.

luizpolli@PCWIN11-LPOLLI:~$ sudo systemctl restart apache2
[sudo] password for luizpolli:
luizpolli@PCWIN11-LPOLLI:~$ sudo systemctl status apache2
● apache2.service - The Apache HTTP Server
Loaded: loaded (/lib/systemd/system/apache2.service; enabled; vendor preset: enabled)
Active: active (running) since Tue 2024-12-10 17:32:43 CET; 3s ago
Docs: https://httpd.apache.org/docs/2.4/
Process: 53957 ExecStart=/usr/sbin/apachectl start (code=exited, status=0/SUCCESS)
Main PID: 53961 (apache2)
Tasks: 6 (limit: 18994)
Memory: 13.8M
CGroup: /system.slice/apache2.service
├─53961 /usr/sbin/apache2 -k start
├─53962 /usr/sbin/apache2 -k start
├─53963 /usr/sbin/apache2 -k start
├─53964 /usr/sbin/apache2 -k start
├─53965 /usr/sbin/apache2 -k start
└─53966 /usr/sbin/apache2 -k start
Dec 10 17:32:43 PCWIN11-LPOLLI systemd[1]: Starting The Apache HTTP Server...
Dec 10 17:32:43 PCWIN11-LPOLLI systemd[1]: Started The Apache HTTP Server.
luizpolli@PCWIN11-LPOLLI:~$

6. Checking the Splunk O11y SaaS APM page, we cannot see any spans. Any ideas on what is wrong or missing?
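Two things worth checking, stated as assumptions rather than a diagnosis: (1) the shell environment and php.ini disagree on OTEL_EXPORTER_OTLP_ENDPOINT (ingest.eu1.signalfx.com vs localhost:4318), and (2) Apache started via systemd does not inherit a login shell's environment, so the variables shown by `env | grep OTEL` may never reach mod_php. On Debian/Ubuntu, one place Apache does read on startup is /etc/apache2/envvars, e.g.:

```shell
# /etc/apache2/envvars (sketch; values taken from the post, pick ONE endpoint)
export OTEL_PHP_AUTOLOAD_ENABLED=true
export OTEL_SERVICE_NAME=shopping
export OTEL_RESOURCE_ATTRIBUTES="deployment.environment=prod,service.version=1.0"
# either a local OTel Collector on 4318...
export OTEL_EXPORTER_OTLP_ENDPOINT="http://localhost:4318"
# ...or direct ingest, which also needs the x-sf-token header configured
```

After editing, restart Apache again so the new environment is picked up.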
I'm working with the Windows TA for Splunk; however, the CPU metrics it collects are not correct, and nothing similar is reflected on my server. The same thing happens when I check RAM. Is there another way to collect CPU or RAM usage? What alternative would make the metrics match my server's data?
I am trying to regex out "eligible" with the value true. When I do it in the regex builder, this works: eligible\\":(?<eligibility_status>[^,]+) but when I do it in Splunk, adding the additional backslash to escape the quotation mark, the query runs and the field is not there.

Sample event:

Name":null,"Id":null,"WaypointId":null}},"Body":{"APIServiceCall":{"ResponseStatusCode":"200","ResponsePayload":"{\"eligibilityIndicator\":[{\"service\":\"Mobile\",\"eligible\":true,\"successReasonCodes\":[],\"failureReasonCodes\":[]}]}"}}}
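A hedged sketch of one way around this: instead of fighting the backslash-and-quote sequence (`\":`) through SPL's own string escaping, match it with a regex class that needs no escaping at all. The index and sourcetype names here are assumptions:

```spl
index=my_index sourcetype=my_api_logs
| rex "eligible\W+(?<eligibility_status>true|false)"
| stats count by eligibility_status
```

`\W+` matches the literal `\":` between the key and its value, since backslash-in-text, quote, and colon are all non-word characters, so no layered escaping is required.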
Hello everyone, I am trying to extract the unique browser names, along with their counts, from the list of user agents (attached file) printed in the user_agent field of my Splunk logs.

index=my_index "master" user-agent!="-" user-agent!="DIAGNOSTICS"
| eval browser=case(
    searchmatch("*OPR*"),"Opera",
    searchmatch("*Edg*"),"Edge",
    searchmatch("*Chrome*Mobile*Safari*"),"Chrome",
    searchmatch("*firefox*"),"Firefox",
    searchmatch("*CriOS*safari"),"Safari")
| stats count as page_hit by browser

I am sure the result count is incorrect, as I am not covering all the browser-string combinations from the attached list. I'd appreciate it if someone could help me with this. Many thanks.
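A hedged observation and sketch: searchmatch() tests the whole raw event, and case() branches are order-sensitive, which matters here because every Edge and Opera user agent also contains "Chrome", and most Chrome user agents contain "Safari". One way to restructure, testing the field directly with match() and putting the most specific tokens first (the user-agent field name is taken from the post; the regex tokens are common UA markers, not a complete list):

```spl
index=my_index "master" user-agent!="-" user-agent!="DIAGNOSTICS"
| eval ua='user-agent'
| eval browser=case(
    match(ua, "OPR|Opera"),   "Opera",
    match(ua, "Edg"),         "Edge",
    match(ua, "CriOS"),       "Chrome",
    match(ua, "FxiOS"),       "Firefox",
    match(ua, "(?i)Firefox"), "Firefox",
    match(ua, "Chrome"),      "Chrome",
    match(ua, "Safari"),      "Safari",
    true(),                   "Other")
| stats count as page_hit by browser
```

The `true()` fallback also surfaces how many user agents matched none of the patterns, which helps find missing combinations.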
I’ve been diving deeper into using Splunk for analyzing various types of data, and recently I’ve been exploring how location-based data can provide more insightful trends. Specifically, I’ve been curious about using zip codes as a meaningful filter for my searches. I’ve noticed that when I try to correlate events or patterns based on geographical areas, things get a little tricky. I’d love to hear your thoughts on how best to approach this issue, or whether anyone else has encountered similar challenges.

One thing I’ve realized is that Splunk offers robust tools for organizing and visualizing data, but when I’m dealing with a large dataset, like logs from multiple service locations, cleanly incorporating zip codes as a key field for analysis feels like a unique challenge. For example, I recently wanted to track service outages and correlate them with specific zip codes. While I was able to extract the relevant fields using Splunk’s field extraction capabilities, I still felt there was a gap in how I could apply the zip code data dynamically across multiple dashboards.

A zip code is a numerical identifier used by postal systems to organize and streamline mail delivery to specific geographic regions. In the United States, zip codes typically consist of five digits, with an optional four-digit extension for more precise location targeting. People often ask "What is my zip code?" to clarify the code for their current area. Beyond its primary use in mailing, zip codes are extensively utilized in fields such as marketing, logistics, and data analysis. In Splunk, incorporating zip codes into searches adds a powerful geographical layer that can reveal trends and patterns within datasets. What I found interesting was how zip codes can act as a lens to uncover patterns that might otherwise go unnoticed.
For instance, seeing clusters of events in specific areas made me think differently about how I approach my data analysis in general. One time, I noticed a spike in certain service requests clustered within a few zip codes, and that insight led me to explore potential external factors (like weather or traffic conditions). This kind of context adds so much value, and I believe Splunk has the power to deliver it. That said, I wonder if there are specific tools or configurations within Splunk that would make this process smoother and more intuitive. If anyone has experience working with zip code data in Splunk, what are your tips for making the most of it? Are there specific apps or configurations I should look into for better results? I’d appreciate any advice or ideas.
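To make the outage-by-zip-code idea concrete, here is a hedged sketch: the index, the rex pattern, and the `zip_to_latlon.csv` lookup are all hypothetical names for illustration, not existing Splunk artifacts:

```spl
index=service_logs outage=*
| rex field=_raw "zip=(?<zip>\d{5})"
| lookup zip_to_latlon.csv zip OUTPUT lat lon
| geostats latfield=lat longfield=lon count by outage_type
```

Defining the zip-to-coordinates mapping once as a shared lookup also lets every dashboard reuse the same enrichment, instead of re-extracting and re-mapping per panel.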
Hello all, I am trying to build an OpenTelemetry Collector with the splunk_hec receiver. I am able to get it working and route the data to a tenant based on the token value sent in. What I want to do is handle invalid tokens. Obviously I do not want to ingest traffic with an invalid token, but I would like visibility into it. Is anyone aware of a way to log some sort of message indicating that a bad token was sent in, and what that token value was, and log that to a specific tenant? Here is an example config line:

- set(resource.attributes["log.source"], "otel.hec.nonprod.fm-mobile-backend-qa") where IsMatch(resource.attributes["com.splunk.hec.access_token"], "9ff3a68d-XXXX-XXXX-XXXX-XXXXXXXXXXXX")

Can I do an else, or a wildcard value?

- set(resource.attributes["log.source"], "otel.hec.nonprod.fm-mobile-backend-qa") where IsMatch(resource.attributes["com.splunk.hec.access_token"], "********-****-****-*********")

Or is there some other way to log a message to the OTel Collector with info like the host or IP and the token value that was sent? I am just looking to gain visibility into invalid-token data.
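OTTL has no else clause, but one common pattern (sketched here as an assumption based on the statements in the post, not a verified config) is to run all the known-token set() statements first, then add a catch-all that only fires when nothing matched, i.e. when log.source is still unset:

```yaml
# after all known-token statements, tag anything still unrouted
- set(resource.attributes["log.source"], "otel.hec.invalid-token") where resource.attributes["log.source"] == nil
- set(resource.attributes["invalid.hec.token"], resource.attributes["com.splunk.hec.access_token"]) where resource.attributes["log.source"] == "otel.hec.invalid-token"
```

Routing that "otel.hec.invalid-token" source to a quarantine tenant would give visibility into bad tokens without mixing the traffic into valid tenants.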
I cannot find the Pre-built panels in the Splunk Add-on for Apache Web Server Version 2.1.0.
I have a problem with the mentioned warning on my search head (attached photo). I tried following the guide here: Configure Dashboards Trusted Domains List - Splunk Documentation, and ran:

curl -k -u admin:$password$ https://mysplunk.com:8000/servicesNS/nobody/system/web-features/feature:dashboards_csp -d dashboards_trusted_domain.exampleLabel=http://jenkins/

and got:

curl: (56) Received HTTP code 403 from proxy after CONNECT

I tried running it on the Splunk master and on some of the search heads, and it didn't work. I also tried editing /etc/system/local/web.conf with:

[settings]
dashboards_trusted_domains = http://jenkins https://jenkins

and I still get the same error. What am I doing wrong? Thanks in advance to any helpers!
Good day, I am trying to get a dashboard up and running to easily find the difference between two users' groups. I pull my information from AD into Splunk, and then, if user1 has a group that user2 doesn't have, I can easily compare the two users to see what is missing. For example, users in the same department typically require the same access, but one might have more privileges, and that is what I want to see. My search works fine; the only problem is that it gives me just the group difference, so I can't see who has each group in order to add it to the user who doesn't have it. I want to add the user next to the group, for example:

group user
G-Google user1
G-Splunk user2

| set diff
    [ search index=db_assets sourcetype=assets_ad_users $user1$
    | dedup displayName sAMAccountName memberOf
    | makemv delim="," memberOf
    | mvexpand memberOf
    | rex field=memberOf "CN=(?<Group>[^,]+)"
    | where Group!=""
    | table Group ]
    [ search index=db_assets sourcetype=assets_ad_users $user2$
    | dedup displayName sAMAccountName memberOf
    | makemv delim="," memberOf
    | mvexpand memberOf
    | rex field=memberOf "CN=(?<Group>[^,]+)"
    | where Group!=""
    | table Group ]
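A hedged alternative sketch: instead of `set diff`, which discards everything except the group name, run both users through one search and keep the owner next to each group. A group held by exactly one of the two users is, by definition, part of the difference (this assumes sAMAccountName identifies the user, as in the searches above):

```spl
index=db_assets sourcetype=assets_ad_users ($user1$ OR $user2$)
| dedup sAMAccountName memberOf
| makemv delim="," memberOf
| mvexpand memberOf
| rex field=memberOf "CN=(?<Group>[^,]+)"
| where Group!=""
| stats values(sAMAccountName) as user dc(sAMAccountName) as members by Group
| where members=1
| table Group user
```

The `dc(...)` filter keeps only groups with a single member out of the two users, and `values(...)` shows who that member is, which is exactly the "group plus user" table described above.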
Dear experts, my search

index="abc" search_name="xyz" Umgebung="prod" earliest=-7d@d latest=@d zbpIdentifier IN (454-594, 256-14455, 453-12232)
| bin span=1d _time aligntime=@d
| stats count as myCount by _time, zbpIdentifier
| eval _time=strftime(_time,"%Y %m %d")
| chart values(myCount) over zbpIdentifier by _time limit=0 useother=f

produces the following chart (attached). For each zbpIdentifier I have a group within the graph showing the number of messages over several days. How do I change the order of the day values within each group? Green (yesterday) should be leftmost, followed by pink (the day before yesterday), then orange, and so on. | reverse changes the order of the whole groups, which is not what I need. All kinds of time sorting, like | sort +"_time" or | sort -"_time" before or after | chart ..., do not change anything.
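One hedged workaround sketch: `chart ... by` orders the split columns by their labels, so you can build labels whose lexicographic order is the display order you want, e.g. a days-ago prefix that puts yesterday first:

```spl
index="abc" search_name="xyz" Umgebung="prod" earliest=-7d@d latest=@d zbpIdentifier IN (454-594, 256-14455, 453-12232)
| bin span=1d _time aligntime=@d
| stats count as myCount by _time, zbpIdentifier
| eval days_ago=floor((relative_time(now(),"@d") - _time)/86400)
| eval day=printf("T-%d (%s)", days_ago, strftime(_time,"%Y-%m-%d"))
| chart values(myCount) over zbpIdentifier by day limit=0 useother=f
```

"T-1 (...)" sorts before "T-2 (...)", so yesterday lands leftmost within each group. For windows longer than nine days, zero-pad the prefix (`T-%02d`) so "T-10" does not sort between "T-1" and "T-2".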
After bumping into the "Error while deploying apps to first member, aborting apps deployment to all members: Error while updating app=Splunk_SA_Scientific_Python_linux_x86_64" problem, we too were forced to double our max_content_length value on our search head cluster members. Upon a closer look at why this app is so big, I could see that several files under bin/linux_x86_64/4_2_2/lib are actually duplicated. What we usually see as symlinks to the library files are currently full copies of the same file. After tweaking those files a little, I reckon we can reduce around 500 MB of duplicated library files and bring the app below the accepted standard of 2 GB. Maybe someone overlooked that while compiling and packaging the app?
Salam Splunkers, I’m having a problem while configuring the Windows Remote Management app with the Splunk SOAR platform. When testing the connectivity with the transport type set to NTLM, it fails and displays an error message. Following the error message, I disabled FIPS mode on the Windows Server and tested the connectivity again, but the issue persists. I then changed the transport type to Kerberos, but ran into a different issue. I have a few questions:

1. Is the targeted system for integration with this app the Windows Server, or the Windows/Linux endpoint?
2. Do we need to integrate the Windows Server itself in order to access the endpoints listed under that server's AD domain with this app?

Any guidance would be appreciated!
The Splunk URA documentation says that it includes the /etc/apps and /etc/peer-apps folders in its scans, but not the deployment-apps folder. Therefore, the process for scanning apps in the deployment-apps folder is to find them elsewhere in the environment where SplunkWeb is running, install or update them there, and run the scan in that location. More and more companies are using Splunk Cloud, and the on-prem presence of Splunk is now mostly managed by the Splunk deployment server, so why can't we have the ability (in the Splunk URA) to scan the deployment-apps folder, to make on-prem upgrades easier?
Hi, we are going back and forth with Splunk support on an error coming from your automatic lookup, as we can't seem to correct it from our end (no edit option in the Splunk Cloud web console), and we need your help in fixing it. This error shows up when we run some correlation searches:

x-------------------------Start of ERROR---------------------------------x
Cannot expand lookup field 'severity' due to a reference cycle in the lookup configuration. Check search.log for details and update the lookup configuration to remove the reference cycle.
x-------------------------End of ERROR----------------------------------x

This error happens when a field is present in both the input and output fields of an automatic lookup. Splunk says the error is generated by the "arista_switch_log : LOOKUP-syslogseverity" automatic lookup. The configs in this lookup need to be corrected by removing the severity field from the output fields.

Current settings: syslogseverity severity OUTPUTNEW severity severity_desc
Recommended settings by Splunk to avoid the reference-cycle error: syslogseverity severity OUTPUTNEW severity_desc

Please assist.
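For reference, the fix described would look roughly like this in props.conf (a sketch: the stanza, lookup name, and field names are taken from the post; `severity` stays on the input side only, which breaks the cycle):

```ini
# props.conf: 'severity' is a match field only, no longer an output field
[arista_switch_log]
LOOKUP-syslogseverity = syslogseverity severity OUTPUTNEW severity_desc
```

On Splunk Cloud, where this file is not directly editable, the equivalent change would be made through the automatic-lookup settings page or by support, as the post describes.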
Hi team, I'm trying to attach a multiline config to all the pods in a namespace. Is there a way to achieve that? Adding the config for a single container works, but adding it for all the pods with a wildcard does not work. In the example below, app1 works but app2 does not (removing the wildcard and adding a specific namespace, container, and pod name works):

logsCollection:
  containers:
    multilineConfigs:
      - namespaceName:
          value: app1-dev
        podName:
          value: app1.*
        useRegexp: true
        containerName:
          value: app1
        firstEntryRegex: ^(?P<EventTime>\d+\-\w+\-\d+\s+\d+:\d+:\d+\.\d+\s+\w+)
      - namespaceName:
          value: app2-*
        podName:
          value: .*
        useRegexp: true
        containerName:
          value: .*
        firstEntryRegex: /^\d{1}\.\d{1}\.\d{1}\.\d{1}\.\d{1}/|^[^\s].*
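One hedged observation: `app2-*` is a glob, but with useRegexp enabled the value is interpreted as a regular expression, where `-*` means "zero or more hyphens", so it may not match namespaces like app2-dev the way a glob would. Assuming the flag applies to the whole entry as in the app1 block, a regex-style version of the second entry would look like:

```yaml
- namespaceName:
    value: app2-.*        # regex wildcard is '.*', not '*'
  podName:
    value: .*
  useRegexp: true
  containerName:
    value: .*
  firstEntryRegex: ^\d+\.\d+\.\d+\.\d+\.\d+
```

Also worth checking (again as an assumption, not a confirmed cause): the surrounding slashes in `/^\d{1}.../` are likely treated as literal pattern characters here, and the trailing `|^[^\s].*` alternation matches almost every line, which would defeat the multiline grouping entirely.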
I want to create a Splunk dashboard that breaks down a splunk dashboard: What app does it belong too. what index or indexes feed it. what sourcetype or sourcetypes feed it. Users accessing it Any other detail you might find useful, this can be a very powerful tool for anyone, and I see to find bits and pieces of this around the community so it must mean someone either already did it, or is planning to. Something kinda this     <form version="1.6" theme="dark"> <label>Custom Dashboard Usage</label> <!-- 1.1 Added line view and host info 1.2 Added sort 1.3 Added sort by in dashboard 1.4 Fixed new forma 1.5 Fixed dashboard regex by adding space and added Pie chart 1.6 Fixed missing user --> <search id="base_search"> <query> index="_internal" "data/ui/views/" NOT "servicesNS/-" sourcetype=splunkd_ui_access | rex "(?&lt;app&gt;[^\/]+)\/data\/ui\/views\/(?&lt;dashboard&gt;[^? ]+)" | rex "servicesNS\/(?&lt;user2&gt;[^\/]+)" | rex mode=sed field=user2 "s/%40/@/" | eval user=if(user="-",user2,user) | search app=* host="$Host$" user="$User$" app="$App$" dashboard="$Dashboard$" | fields _time host user app dashboard </query> </search> <fieldset submitButton="false"> <input type="time"> <label>Max is 30 days back</label> <default> <earliest>-7d@h</earliest> <latest>now</latest> </default> </input> <input type="dropdown" token="Host"> <label>Host</label> <search base="base_search"> <query> | eval data=host | stats count by data | eval info=data." (".count.")" | sort -count </query> </search> <choice value="*">Any</choice> <fieldForLabel>info</fieldForLabel> <fieldForValue>data</fieldForValue> <default>*</default> </input> <input type="dropdown" token="User"> <label>User</label> <search base="base_search"> <query> | eval data=user | stats count by data | eval info=data." 
(".count.")" | sort -count </query> </search> <choice value="*">Any</choice> <fieldForLabel>info</fieldForLabel> <fieldForValue>data</fieldForValue> <default>*</default> </input> <input type="dropdown" token="App"> <label>Application</label> <search base="base_search"> <query> | eval data=app | stats count by data | eval info=data." (".count.")" | sort -count </query> </search> <choice value="*">Any</choice> <fieldForLabel>info</fieldForLabel> <fieldForValue>data</fieldForValue> <default>*</default> </input> <input type="dropdown" token="Dashboard"> <label>Dashboard</label> <search base="base_search"> <query> | eval data=dashboard | stats count by data | eval info=data." (".count.")" | sort -count </query> </search> <choice value="*">Any</choice> <fieldForLabel>info</fieldForLabel> <fieldForValue>data</fieldForValue> <default>*</default> </input> <input type="dropdown" token="Sort"> <label>Graph by</label> <choice value="dashboard">Dashboard</choice> <choice value="app">Application</choice> <choice value="user">User</choice> <choice value="host">Host</choice> <default>dashboard</default> </input> </fieldset> <row> <panel> <chart> <title>Dashboards usage frequency by count</title> <search base="base_search"> <query> | timechart limit=25 useother=f count by $Sort$ </query> </search> <option name="charting.axisTitleX.visibility">collapsed</option> <option name="charting.axisTitleY.visibility">collapsed</option> <option name="charting.chart">column</option> <option name="charting.chart.stackMode">stacked</option> <option name="charting.fieldColors">{"m-cluster-01":#55C169,"m-cluster-02":#55C169,"m-cluster-03":#55C169,"m-cluster-04":#55C169,"m-test":#D41F1F,"m-master-deploy":#FFFF00,"m-monitor":#1182F3,"m-search-tphp":#E3723A,"m-search-helsemn":#D94E17,"m-deploy":#88527D,"m-collector-01":#65778A,"p-collector-01":#65778A,"h-collector-01":#65778A}</option> <option name="height">400</option> </chart> </panel> <panel> <chart> <title>Dashboards usage frequency by 
percent</title> <search base="base_search"> <query> | stats count by $Sort$ </query> </search> <option name="charting.chart">pie</option> <option name="charting.fieldColors">{"m-cluster-01":#55C169,"m-cluster-02":#55C169,"m-cluster-03":#55C169,"m-cluster-04":#55C169,"m-test":#D41F1F,"m-master-deploy":#FFFF00,"m-monitor":#1182F3,"m-search-tphp":#E3723A,"m-search-helsemn":#D94E17,"m-deploy":#88527D,"m-collector-01":#65778A,"p-collector-01":#65778A,"h-collector-01":#65778A}</option> <option name="height">385</option> </chart> </panel> </row> <row> <panel> <table> <title>Dashboards usage frequency by time, sh-server, user &amp; application</title> <search base="base_search"> <query> | sort 0 - _time | table _time host user app dashboard </query> </search> <option name="count">50</option> <format type="color" field="host"> <colorPalette type="map">{"m-cluster-01":#55C169,"m-cluster-02":#55C169,"m-cluster-03":#55C169,"m-cluster-04":#55C169,"m-test":#D41F1F,"m-master-deploy":#FFFF00,"m-monitor":#1182F3,"m-search-tphp":#E3723A,"m-search-helsemn":#D94E17,"m-deploy":#88527D,"m-collector-01":#65778A,"p-collector-01":#65778A,"h-collector-01":#65778A}</colorPalette> </format> <format type="color" field="user"> <colorPalette type="sharedList"></colorPalette> <scale type="sharedCategory"></scale> </format> <format type="color" field="app"> <colorPalette type="sharedList"></colorPalette> <scale type="sharedCategory"></scale> </format> <format type="color" field="dashboard"> <colorPalette type="sharedList"></colorPalette> <scale type="sharedCategory"></scale> </format> </table> </panel> </row> </form>    
Hi all, I’m trying to create a stacked vertical bar chart in Splunk, where each bar represents a unique field value (e.g., SWC) and the bar is segmented into multiple colors based on a status field (e.g., RAG_Status with values Green, Amber, and Red). Here’s what I’m trying to achieve:

- Each bar corresponds to a unique SWC.
- The bar is segmented based on the RAG_Status (Green, Amber, Red).
- The length of each segment represents the count of records for that combination.
- I want the segments stacked within the bar, with distinct colors for Green, Amber, and Red.

Sample query:

| inputlookup example_data.csv
| eval RAG_Status = case(
    KPI_Score >= KPI_Threshold, "Green",
    KPI_Score >= (KPI_Threshold - 5), "Amber",
    KPI_Score < (KPI_Threshold - 5), "Red"
)
| chart count BY SWC RAG_Status
| sort SWC

Visualization requirements:
1. Chart type: vertical bar chart.
2. Stacked mode: each bar should show Green, Amber, and Red segments stacked.
3. Color scheme: Green #28a745, Amber #ffc107, Red #dc3545.

Screenshot for reference: the attached example is horizontal, but I am looking for vertical.

Current issue: I’m unable to configure the Splunk visualization settings or XML code to properly display this data as a vertical stacked bar chart. Either the entire bar shows as one solid color, or the segments are not stacking as expected. Any guidance or sample XML code to achieve this would be greatly appreciated!
Current XML code:-    <dashboard version="1.1" theme="light"> <label>SWC KPI Performance and RAG Distribution_new</label> <row> <panel> <title>RAG Status Distribution by SWC</title> <chart> <search> <query>| inputlookup example_data.csv | eval RAG_Status = case( KPI_Score >= KPI_Threshold, "Green", KPI_Score >= (KPI_Threshold - 5), "Amber", KPI_Score < (KPI_Threshold - 5), "Red" ) | chart count BY SWC RAG_Status | sort SWC</query> <earliest>@d</earliest> <latest>now</latest> <sampleRatio>1</sampleRatio> </search> <option name="charting.chart">column</option> <option name="charting.chart.stackMode">stacked</option> <option name="charting.seriesColors">#28a745,#ffc107,#dc3545</option> <option name="charting.legend.placement">right</option> <option name="charting.axisTitleX.text">SWC</option> <option name="charting.axisTitleY.text">count</option> </chart> </panel> </row> </dashboard>   Current situation:-  Thanks in advance!
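One hedged possibility, based on how Simple XML assigns colors: charting.seriesColors is applied in column order, and `chart` emits the split columns alphabetically (Amber, Green, Red), so the listed green/amber/red land on the wrong series. Mapping colors by field name sidesteps the ordering problem entirely (this assumes the RAG_Status values arrive as column names Green, Amber, and Red):

```xml
<option name="charting.chart">column</option>
<option name="charting.chart.stackMode">stacked</option>
<option name="charting.fieldColors">{"Green":#28a745,"Amber":#ffc107,"Red":#dc3545}</option>
```

charting.fieldColors replaces the charting.seriesColors option in the dashboard above; the other options stay as they are.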
Morning everyone, I want to display two timecharts for my search: one with, and one without, dedup of a certain field. Thanks!
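A hedged sketch of one way to get both views (the index, sourcetype, and `my_field` names are placeholders): if "dedup per time bucket" is acceptable, a single timechart can produce both series, since dc() counts each field value once per bucket:

```spl
index=my_index sourcetype=my_sourcetype
| timechart span=1h count as all_events dc(my_field) as deduped_events
```

For a true global dedup (keep only the first occurrence of each my_field value across the whole time range), the second chart needs its own search, e.g. `index=my_index sourcetype=my_sourcetype | dedup my_field | timechart span=1h count`, shown in a second panel alongside the plain `| timechart span=1h count`.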