All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

I need to use the German standard to display the number 287.560,5 in a single value visualization, instead of the English format that uses a comma to separate thousands and a dot for decimals — 287,560.5. The solution described here did not help: Decimal : how to use "," vs "." ? - Splunk Community. When I look at the number before it is saved as a report in a dashboard, it does not have any commas. Could anyone help me with this question? At the moment, I have just set "shouldUseThousandSeparators" in the JSON to false to remove the commas altogether, but I eventually want to use a dot for thousands and a comma for decimals for better legibility. Thank you in advance.
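One common workaround (a sketch, not an official locale setting) is to pre-format the value in SPL before it reaches the single value visualization, using tostring() to insert thousands separators and then swapping the separators with replace(); the field name `value` is a placeholder for your actual result field:

```
| eval formatted = tostring(round(value, 1), "commas")
| eval formatted = replace(replace(replace(formatted, ",", "@"), "\.", ","), "@", ".")
```

The temporary "@" marker avoids swapping a comma into a dot and then immediately back again. The single value visualization will happily display the resulting string field.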
We have a distributed Splunk Enterprise setup, and we are trying to establish secure TLS communication between UF -> HF -> Indexer. We have certificates configured for the search heads, indexers, and heavy forwarders, and we have opened the required receiving ports on both the indexers and the HFs. On the other hand, we have around 200 UFs. Can someone please tell me whether we need to generate 200 client certificates, or whether we can use a general certificate that we deploy on all 200 UFs to establish communication between the UFs and the indexers?
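For what it's worth, a single shared client certificate on all UFs is generally workable unless you need per-host identity. A minimal outputs.conf sketch for the UF side might look like this (the stanza name, host, and paths are assumptions, and exact attribute names can vary between Splunk versions, so check the outputs.conf spec for your release):

```
[tcpout:secure_indexers]
server = hf.example.com:9997
useSSL = true
clientCert = $SPLUNK_HOME/etc/auth/mycerts/uf_client.pem
sslPassword = <private key passphrase>
sslVerifyServerCert = true
```

The same .pem (certificate plus private key) can then be pushed to all 200 UFs via the deployment server along with this outputs.conf.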
Can someone help me with this issue where Splunk is reading a file but 'adding' information that is NOT in the original file?

If you run the search below:

index="acob_controls_summary" sourcetype=acob:json source="/var/log/acobjson/*100223*rtm*" | search system=CHE control_id=AU2_A_2_1 compliance_status=100%

you will get two results, separated mainly by "last_test_date": one showing "2023-10-02 15:42:30.784049" and the other showing "2023-10-02 14:56:45.047265". Ironically, the attached file is the SAME file we are seeing in Splunk (I just changed the file name after copying it onto my machine), yet it contains only ONE entry — the second one, "2023-10-02 14:56:45.047265". Where did "2023-10-02 15:42:30.784049" come from? We have a clustered environment, so many Splunk servers auto-create, but why is it creating a new 'test date' that splits one entry into two, giving one a good return and the other wrong info?
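To narrow down where the duplicate comes from, it can help to compare the ingestion metadata of the two events; if host, splunk_server, or _indextime differ, the file was most likely read twice (for example by two forwarders, or again after a rename/rotation). A diagnostic sketch:

```
index="acob_controls_summary" sourcetype=acob:json source="/var/log/acobjson/*100223*rtm*"
    system=CHE control_id=AU2_A_2_1 compliance_status=100%
| eval indextime = strftime(_indextime, "%F %T")
| table _time indextime host splunk_server source last_test_date
```

Two rows with different indextime or host values would point at a double ingestion rather than Splunk fabricating data.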
Hi Splunkers! How can I keep pie charts vertically aligned when one of them has a dropdown? Because one pie chart has a dropdown, the other pie charts in the same row become misaligned; I need all the pie charts in the row to have the same height. Thanks in advance!
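In Simple XML, one way to keep charts in a row the same height is to set an explicit height option on each chart (the searches, token name, and the value 300 below are placeholders; pick a height tall enough to absorb the dropdown):

```
<row>
  <panel>
    <input type="dropdown" token="tok_sel">
      <label>Filter</label>
      <choice value="*">All</choice>
      <default>*</default>
    </input>
    <chart>
      <search><query>index=_internal | stats count by sourcetype</query></search>
      <option name="charting.chart">pie</option>
      <option name="height">300</option>
    </chart>
  </panel>
  <panel>
    <chart>
      <search><query>index=_internal | stats count by component</query></search>
      <option name="charting.chart">pie</option>
      <option name="height">300</option>
    </chart>
  </panel>
</row>
```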
I created a fresh add-on using Splunk Add-on Builder v4.1.3 and am getting check_for_vulnerable_javascript_library_usage failures in AppInspect. I tried this suggestion https://community.splunk.com/t5/Building-for-the-Splunk-Platform/How-to-update-a-Splunk-Add-on-Builder-built-app-or-add-on/m-p/587702 (export, delete, import) and regenerated the add-on multiple times, but the issue still persists. Please suggest a fix for this issue. Thanks.
The deploy command is pushing an app without the local folder from the deployer -> SHC. Our deployer settings are set to full:

[shclustering]
deployer_push_mode = full
I'm not aware of such a document, but you can find the number of concurrent searches using the Monitoring Console (MC).  The maximum number of concurrent searches is 6 + <numCPUs>.  That formula can be modified using limits.conf settings, but the default is good for most environments. If you see errors in the MC about searches being skipped because the maximum number of concurrent searches has been reached, then you are not under-utilizing your server.  Try re-distributing your scheduled searches so fewer are running at the same time.  After that, if you still see the error, then you are over-using the server and need more CPUs (or fewer searches). If there are times when the server is not running any searches, then you are under-using it at those times.  The CPUs need to be available for the times when searches do run, however. Perhaps you need lower-powered CPUs rather than fewer CPUs.
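The settings behind that formula live in limits.conf; a sketch of the relevant stanza, shown with what I believe are the default values (verify against the limits.conf spec for your version):

```
[search]
# concurrent historical search limit = max_searches_per_cpu x numCPUs + base_max_searches
base_max_searches = 6
max_searches_per_cpu = 1
```

With 16 CPUs, for example, that yields 1 x 16 + 6 = 22 concurrent historical searches.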
I see a few problems here.

1. The blacklist1 setting is not in the proper format.  It must be a list of event IDs, or a keyword followed by "=" followed by a regular expression.
2. The regex shown is trying to match XML, but the sample event is not in XML.
3. The regex is looking for text ("4688", "MsSense.exe", "TaniumCX.exe") that is not in the sample event.

Any of these would cause the blacklist to fail.  To fix them:

1. Put the blacklist1 setting in an expected format.
2. Examine the log entry as Splunk sees it (_raw) rather than as shown by another program (which may have changed it for display purposes).
3. Ensure the regex matches the sample data.
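For reference, a valid blacklist in inputs.conf takes one of these shapes (the event IDs and the regex below are assumptions based on the processes mentioned in this thread, not a tested filter):

```
[WinEventLog://Security]
# plain list of event IDs
blacklist1 = 4688,4689
# or key="regex" pairs matched against the rendered event
blacklist2 = EventCode="4688" Message="(?:MsSense\.exe|TaniumCX\.exe)"
```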
The series colors given in the documentation are not the real colors used. Instead, the colors are as follows (taken from $SPLUNK_HOME/share/splunk/search_mrsparkle/exposed/js/splunk/palettes/ColorCodes.js):   [0x006d9c, 0x4fa484, 0xec9960, 0xaf575a, 0xb6c75a, 0x62b3b2, 0x294e70, 0x738795, 0xedd051, 0xbd9872, 0x5a4575, 0x7ea77b, 0x708794, 0xd7c6b7, 0x339bb2, 0x55672d, 0xe6e1ae, 0x96907f, 0x87bc65, 0xcf7e60, 0x7b5547, 0x77d6d8, 0x4a7f2c, 0xf589ad, 0x6a2c5d, 0xaaabae, 0x9a7438, 0xa4d563, 0x7672a4, 0x184b81, 0x7fb6ce, 0xa7d2c2, 0xf6ccb0, 0xd7abad, 0xdbe3ad, 0xb1d9d9, 0x94a7b8, 0xb9c3ca, 0xf6e8a8, 0xdeccb9, 0xb7acca, 0xb2cab0, 0xa5b2bf, 0xe9ddd4, 0x66c3d0, 0xaab396, 0xf3f0d7, 0xc1bcb3, 0xb6d7a3, 0xe1b2a1, 0xdec4ba, 0xabe6e8, 0x91b282, 0xf8b7ce, 0xcba3c2, 0xcccdce, 0xc3ab89, 0xc7e6a3, 0xada9c8, 0xa4bbe0]
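If you would rather pin the colors explicitly than rely on the palette file, they can be supplied to a chart in Simple XML; the truncated list here is just illustrative:

```
<option name="charting.seriesColors">[0x006d9c, 0x4fa484, 0xec9960, 0xaf575a]</option>
```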
Errors like these typically indicate a problem either at the network level (some firewall in the middle not allowing traffic from the UF to the indexers) or with the host firewall on the indexer not allowing the incoming traffic.
With Splunk there is no way to delete data from an index other than the normal rolling of the oldest buckets to frozen. There is the "delete" command, but it doesn't actually delete the data from the index files (since the index files are immutable after creation and may, as mentioned above, only be rolled as a whole); it merely marks that data as not searchable. That's one of the reasons why you should test your configurations — especially the input-related elements — in a dev/test environment before deploying them to production.
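For completeness, a sketch of how the delete command is typically used (the index, sourcetype, and time window are placeholders, and your role needs the can_delete capability):

```
index=my_index sourcetype=bad_sourcetype earliest=-24h
| delete
```

Run the search without the | delete pipe first and confirm it returns exactly — and only — the events you want hidden; the operation cannot be undone.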
The section of the troubleshooting guide you refer to is wrong for fixing this app's issue with the certificates.  That section refers to Splunk server-side authentication, not the app.
Hi. How can I delete only specific data from a specific index (note: not the entire index) in a clustered environment?
Good morning. Sorry for asking about the same topic again. I am trying to install Splunk on Solaris 11.4, but I can't, because my Solaris doesn't recognize the command for the installation.
@richgalloway  We are getting the errors below, although Splunk is up and running and the configuration is also good:

0-03-2023 08:04:43.963 -0400 ERROR TcpOutputFd [5866 TcpOutEloop] - Connection to host=10.246.250.154:9998 failed
10-04-2023 08:02:47.688 -0400 WARN TcpOutputFd [3703313 TcpOutEloop] - Connect to 10.246.250.155:9998 failed. No route to host
10-04-2023 08:02:47.750 -0400 WARN TcpOutputFd [3703313 TcpOutEloop] - Connect to 10.246.250.156:9998 failed. No route to host
The official docs troubleshooting page begs to differ: https://docs.splunk.com/Documentation/AddOns/released/MSO365/Troubleshooting#:~:text=Azure%20Active%20Directory.-,Certificate%20verify%20failed%20(_ssl.c%3A741)%20error%20message,-If%20you%20create
This subsearch iterates <<FIELD>> between IP and OS.  So, both. (<<FIELD>> is not meta code; it is part of foreach syntax.)
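A minimal runnable sketch of that foreach pattern (the field values are made up; the eval just demonstrates the <<FIELD>> substitution):

```
| makeresults
| eval IP="10.0.0.1", OS="linux"
| foreach IP OS
    [ eval <<FIELD>>_len = len('<<FIELD>>') ]
```

This produces IP_len=8 and OS_len=5 — foreach runs the subsearch template once per listed field, substituting each field name for <<FIELD>>.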
The app is using /{splunk_home}/splunk/lib/python3.7/site-packages/certifi/cacert.pem, which is the issue. The app is not using /{splunk_home}/etc/auth/cacert.pem; instead it falls back to the certifi library's cacert.pem.
Hi, I am new to Splunk metrics search. I am sending AWS/EBS metrics to Splunk, and I want to calculate the average throughput and number of IOPS for my Amazon Elastic Block Store (Amazon EBS) volume. I found the solution in AWS: https://repost.aws/knowledge-center/ebs-cloudwatch-metrics-throughput-iops, but I don't know how to search for it in Splunk.  This is the most I can do at the moment:

| mpreview index=my-index | search namespace="AWS/EBS" metricName=VolumeReadOps

I would really appreciate it if someone could help me out.
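A possible starting point with mstats, mirroring the AWS period math from that article (the metric names and index come from the search above; the 300-second divisor assumes a 5-minute span, so treat this as a sketch to adapt):

```
| mstats sum(VolumeReadOps) AS readOps sum(VolumeWriteOps) AS writeOps
    WHERE index="my-index" span=5m
| eval read_iops = round(readOps / 300, 2)
| eval write_iops = round(writeOps / 300, 2)
| table _time read_iops write_iops
```

Average throughput would follow the same shape using sum(VolumeReadBytes) and sum(VolumeWriteBytes) divided by the span in seconds.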
It is a little unclear how to help you, as you haven't provided (anonymised) examples of the events you are dealing with. For example, do you get one event per host, with all its risks; one event per risk, with all the hosts; or one event per host per risk, i.e. one host and one risk in each event? Also, coalesce() does not function the way you seem to be using it — it doesn't concatenate the fields, it merely returns the first non-null field in the list.
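To illustrate the difference with made-up fields:

```
| makeresults
| eval host_risk="high", app_risk=null()
| eval first_non_null = coalesce(host_risk, app_risk)
| eval joined = host_risk . ";" . coalesce(app_risk, "n/a")
```

Here first_non_null is "high" (coalesce stops at the first non-null value), while joined is "high;n/a" — actual concatenation uses the "." operator (or mvappend() for multivalue fields).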