
Hello, I have the below values in a lookup and am trying to achieve the bar chart view described below.

Country   old_limit   old_spend_limit   new_limit   new_spend_limit
USA       84000       37000             121000      43000
Canada    149000      103000            214000      128000

old_limit = PRE
new_limit = POST
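A minimal sketch of one way to chart this, assuming the lookup file is named limits.csv (a hypothetical name) and that PRE and POST should appear as two bar series per Country:

| inputlookup limits.csv
| rename old_limit AS PRE, new_limit AS POST
| table Country PRE POST

Rendered as a bar chart, Country becomes the axis category and PRE/POST the grouped series; add old_spend_limit and new_spend_limit to the table command if those should be plotted as well.
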
How do I display the other fields from the same row when aggregating with stats max(field)? Thank you for your help.

For example, I am trying to display the row that has the highest TotalScore=240:

Class   Name   Subject  TotalScore  Score1  Score2  Score3
ClassA  Name2  English  240         80      90      70

My Splunk search:

index=scoreindex
| stats values(Name) as Name, values(Subject) as Subject, max(TotalScore) as TotalScore, max(Score1) as Score1, max(Score2) as Score2, max(Score3) as Score3 by Class
| table Class, Name, Subject, TotalScore, Score1, Score2, Score3

I think my search is instead going to display the following:

Class   Name               Subject       TotalScore  Score1  Score2  Score3
ClassA  Name1 Name2 Name3  Math English  240         85      95      80

This is the whole data in table format from scoreindex:

Class   Name   Subject  TotalScore  Score1  Score2  Score3
ClassA  Name1  Math     170         60      40      70
ClassA  Name1  English  195         85      60      50
ClassA  Name2  Math     175         50      60      65
ClassA  Name2  English  240         80      90      70
ClassA  Name3  Math     170         40      60      70
ClassA  Name3  English  230         55      95      80
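One idiomatic way to keep the whole winning row, sketched under the assumption that all fields are already extracted: compute the per-Class maximum with eventstats, then filter to the rows that match it.

index=scoreindex
| eventstats max(TotalScore) AS maxTotalScore by Class
| where TotalScore = maxTotalScore
| table Class, Name, Subject, TotalScore, Score1, Score2, Score3

If two rows tie on TotalScore within a Class, both are kept; add | dedup Class afterwards to keep only one.
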
We are using the Splunk Universal Forwarder on Windows servers to capture Event Viewer logs into Splunk. We have a known issue with a product that causes a large number of events to be recorded in the Event Viewer, which are then sent into Splunk. How can we filter out a specific event at the Universal Forwarder so that it is not sent into Splunk?
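Windows event log inputs support regex blacklists directly on the Universal Forwarder. A sketch, assuming the noisy events land in the Application log; the event code 1234 and provider name are hypothetical placeholders:

inputs.conf on the UF (for example, pushed via a deployment server app):

[WinEventLog://Application]
disabled = 0
# Drop events whose EventCode is 1234 and whose provider matches the noisy product
blacklist1 = EventCode="1234" SourceName="NoisyProduct"
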
In a modified search_mrsparkle/templates/pages/base.html, we have a <script> tag inserted just before the </body> tag, as follows:

<script src="${make_url('/static/js/abcxyz.js')}"></script></body>

with abcxyz.js placed in the search_mrsparkle/exposed/js directory. The abcxyz.js file has the following code:

require(['splunkjs/mvc'], function(mvc) { ... }

which performs some magical stuff on the web page. But when the page loads, the debugging console reports "require is not defined". This used to work under Splunk Enterprise 9.0.0.1 (and earlier) but now fails under 9.1.1. Yes, we realize we are modifying Splunk-delivered code, but we have requirements that forced us to take these drastic actions. Does anyone have ideas on how to remedy this issue?

@mhoustonludlam_ @C_Mooney
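A hedged sketch of one workaround, assuming RequireJS is still loaded on the page but only later or asynchronously in 9.1.1 (if the global loader was removed from that page entirely, this will poll forever and a different entry point is needed):

// abcxyz.js: defer until window.require exists instead of assuming it at parse time
(function waitForRequire() {
    if (typeof window.require === 'function') {
        require(['splunkjs/mvc'], function (mvc) {
            // ... original page logic goes here ...
        });
    } else {
        setTimeout(waitForRequire, 100); // retry until the module loader appears
    }
})();
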
How do I assign the value of the field "original" to the source parameter in a | collect command?

index=123
| eval original="abcd"
| collect index=qaz source=original
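collect's source argument takes a literal string rather than a field reference, so one hedged approach (an untested sketch, assuming the value is the same for all events being collected) is to splice the string in with a subsearch via return, which substitutes source="abcd" into the collect arguments before the command runs:

index=123
| eval original="abcd"
| collect index=qaz [ | makeresults | eval source="abcd" | return source ]
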
Hi Forum, I have written a script that pulls the receive power from optical transceivers every hour. All is well with this, except that, as the values are a measurement of loss, they are negative values in decibels. I would really like to represent this with the single value radial. I can get it to work perfectly with a marker gauge, but having that "rev counter" type representation would be not only cool but also useful for reading power at a glance on our long-range transmission kit. It is such a perfect representation for this kind of measurement, I think, and would really appeal to that more "scientific" engineering type of audience.

When I use the single value radial, I cannot for the life of me work out where I can adjust the scale (ideally -40 dBm to 0 dBm). I just expected this to be like managing any other sort of float (I am working with a decimal number, not a string or anything); it just happens to be a negative value. Am I missing something really silly? Any help would be gratefully received. I'm using Dashboard Studio, if that makes a difference.

Thank you
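If the radial's scale cannot be changed directly, one hedged workaround is to rescale the reading in SPL so that -40 dBm to 0 dBm maps onto 0 to 100 (the field name rx_power is a hypothetical placeholder):

| eval rx_scaled = round((rx_power + 40) / 40 * 100, 1)

With this, -40 dBm displays as 0 and 0 dBm as 100; the true dBm value can still be surfaced in the panel title or a secondary value so the radial remains readable at a glance.
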
index=sample(Consumer="prod") ServiceName="product.services.prd.*"
| stats count(eval(HTTPStatus >= 400 AND HTTPStatus < 500)) AS 'fourxxErrors', count(eval(HTTPStatus >= 500 AND HTTPStatus < 600)) AS 'fivexxErrors', count AS 'TotalRequests'
| eval 'fourxxPercentage' = if('TotalRequests' > 0, ('fourxxErrors' / 'TotalRequests') * 100, 0), 'fivexxPercentage' = if('TotalRequests' > 0, ('fivexxErrors' / 'TotalRequests') * 100, 0)
| table "fourxxPercentage", "fivexxPercentage"

The result is showing as 0 for both fields in the table "fourxxPercentage", "fivexxPercentage". The fourxxErrors and fivexxErrors counts are actually greater than 0. Is that because it's not showing the decimal values?
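A hedged, corrected sketch (untested against this data): single quotes are eval's syntax for dereferencing field names and behave differently in stats AS clauses, so dropping the quotes is the main fix; tonumber guards against HTTPStatus being extracted as a string, and round keeps the small percentages visible:

index=sample (Consumer="prod") ServiceName="product.services.prd.*"
| stats count(eval(tonumber(HTTPStatus) >= 400 AND tonumber(HTTPStatus) < 500)) AS fourxxErrors, count(eval(tonumber(HTTPStatus) >= 500 AND tonumber(HTTPStatus) < 600)) AS fivexxErrors, count AS TotalRequests
| eval fourxxPercentage = if(TotalRequests > 0, round(fourxxErrors / TotalRequests * 100, 2), 0)
| eval fivexxPercentage = if(TotalRequests > 0, round(fivexxErrors / TotalRequests * 100, 2), 0)
| table fourxxPercentage, fivexxPercentage
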
I'm working with data from this search:

index=my_index sourcetype=my_sourcetype (rule=policy_1 OR rule=policy_2 OR rule=policy_3) [ | inputlookup my_list_of_urls.csv ]
| rename url AS my_url
| stats count by my_url
| table my_url

The events look like this:

02ef65dc96524dabba54a950da7cb0d8.fp.measure.office.com/
0434c399ca884247875a286a10c969f4.fp.measure.office.com/
14f8c4d0e9b7be86933be5d3c9fb91d7.fp.measure.office.com/
3d8e055913534ff7b3c23101fd1f3ca6.fp.measure.office.com/
4334ede7832f44c5badcfd5a6459a1a2.fp.measure.office.com/
5d44dec60c9b4788fb26426c1e151f46.fp.measure.office.com/
5f021e1b8d3646398fab8ce59f8a6bbd.fp.measure.office.com/
6f6c23c1671f72c36d6179fdeabd1f56.fp.measure.office.com/
7106ea87c1e2ed0aebc9baca86f9af34.fp.measure.office.com/
88c88084fe454cbc8629332c6422e8a4.fp.measure.office.com/
982db5012df7494a88c242d426e07be6.fp.measure.office.com/
a478076af2deaf28abcbe5ceb8bdb648.fp.measure.office.com/
aad.cs.dds.microsoft.com/

In my_list_of_urls.csv there are these entries:

*.microsoft.com/
microsoft.com/
*.office.com/
office.com/

What I'm trying to do is get microsoft.com and office.com from the results instead of the full URL. I'm stumped on how to do it. Any help is appreciated.

TIA, Joe
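A hedged sketch using rex to keep just the registered domain, assuming all of these URLs end in a two-label domain such as microsoft.com or office.com (this naive pattern would mishandle suffixes like co.uk):

index=my_index sourcetype=my_sourcetype (rule=policy_1 OR rule=policy_2 OR rule=policy_3) [ | inputlookup my_list_of_urls.csv ]
| rename url AS my_url
| rex field=my_url "(?<domain>[^./]+\.[^./]+)/?$"
| stats count by domain

The regex anchors at the end of the value, skips the trailing slash if present, and captures the last two dot-separated labels, so aad.cs.dds.microsoft.com/ yields microsoft.com.
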
Hi,

Why is my saved search going back to 1970? I have run the following savedsearch (screenshot below) and I am passing in 30 minutes (in yellow below). The saved search is set to 30 minutes.

Thanks in advance,
Robbie
Assuming I have this log history:

[sent] task=abc, id=123
[sent] task=abc, id=456
[success] task=abc, id=123

I would like to get a list of all ids that were "sent" but did not get a "success", so in the above example it should just be "456". My current query looks something like this:

"abc" AND "sent"
| table id
| rename id as "sent_ids"
| appendcols [ search "abc" AND "success" | table id | rename id as "success_ids" ]

This gets me a table with the two columns, and I'm stuck on how to "left exclusive join" the two columns to get the unique ids. Or maybe I'm approaching this entirely wrong and there is a much easier solution?
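There is a simpler stats-based pattern for this. A hedged sketch, assuming task and id are extracted fields and the bracketed status token appears in the raw event text: group by id and keep only the ids with no success event.

task=abc ("sent" OR "success")
| eval status = if(like(_raw, "%[success]%"), "success", "sent")
| stats count(eval(status="success")) AS success_count by id
| where success_count = 0
| table id
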
How do I write a cron schedule that runs every 3 hours but excludes the window from 10 PM to 7 AM?
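One hedged reading: if the runs should land on the hour starting at 07:00, a single cron expression covers it (minute 0, hours 7 through 21 in steps of 3):

0 7-21/3 * * *

This fires at 07:00, 10:00, 13:00, 16:00, and 19:00, so nothing runs between 22:00 and 07:00. Adjust the start hour or step if "every 3 hours" is meant to align differently.
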
I need to use the German standard to display the number 287.560,5 in a single value visualization, instead of the English format that uses a comma to separate thousands and a dot for decimals (287,560.5).

The solution that was described here did not help: Decimal : how to use "," vs "." ? - Splunk Community

When I look at the number before it is saved as a report in a dashboard, it does not have any commas. Could anyone help me with this question? At the moment, I have just set "shouldUseThousandSeparators" in the JSON to false to remove the commas altogether, but I eventually want to use the dot for thousands and a comma for decimals for better legibility.

Thank you in advance
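One hedged workaround at search time (a sketch, with value standing in for the real field name): format with English separators via tostring, then swap the two separator characters through a placeholder, since replace works on one character class at a time.

| eval display = replace(replace(replace(tostring(round(value, 1), "commas"), ",", "#"), "\.", ","), "#", ".")

For 287560.5 this produces "287.560,5". Note that display is then a string, so numeric single value options such as unit formatting or thousand separators no longer apply to it.
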
We have a distributed Splunk Enterprise setup, and we are trying to establish secure TLS communication for UF -> HF -> Indexer. We have certificates configured for the Search Heads, Indexers, and Heavy Forwarders, and we have opened the required receiving ports on both the Indexers and the HFs. On the other hand, we have around 200 UFs. Can someone please tell me whether we need to generate 200 client certificates, or whether we can use a general certificate that we deploy on all 200 UFs to establish communication between the UFs and the Indexers?
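For illustration, a hedged outputs.conf sketch of the shared-certificate approach (the stanza name, hostnames, and paths are placeholders; whether one shared client certificate is acceptable depends on your security policy, since any holder of the key can forward data):

# outputs.conf, deployed identically to every UF (e.g. via the deployment server)
[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997
clientCert = $SPLUNK_HOME/etc/auth/mycerts/uf_client.pem
sslPassword = <private key password>
sslVerifyServerCert = true
sslCommonNameToCheck = idx.example.com
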
Can someone help me with this issue where Splunk is reading a file but "adding" information that is NOT in the original file?

If you search the below:

index="acob_controls_summary" sourcetype=acob:json source="/var/log/acobjson/*100223*rtm*"
| search system=CHE control_id=AU2_A_2_1 compliance_status=100%

you will get two results, separated mainly by "last_test_date": one showing "2023-10-02 15:42:30.784049" and the other showing "2023-10-02 14:56:45.047265".

Ironically, the attached file is the SAME file (I just changed the file name after copying it onto my machine) that we are seeing in Splunk, yet it contains only ONE entry, which is the second one, "2023-10-02 14:56:45.047265". Where did that "2023-10-02 15:42:30.784049" come from?

We have a clustered environment, so many splunk_server values appear automatically, but why is Splunk creating a new "test date" that splits one entry into two, giving one a good return and the other wrong info?
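A hedged diagnostic sketch to see where each copy came from and when it was actually indexed (only standard internal fields are added to the fields from the question):

index="acob_controls_summary" sourcetype=acob:json source="/var/log/acobjson/*100223*rtm*" system=CHE control_id=AU2_A_2_1 compliance_status=100%
| eval indexed_at = strftime(_indextime, "%Y-%m-%d %H:%M:%S")
| table _time indexed_at host splunk_server source last_test_date

If the two events show different hosts or index times, a forwarder re-reading the file (or a second forwarder monitoring the same path) is a likely culprit.
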
Hi Splunkers!

How can I keep pie charts in the same row vertically aligned when one of them has a dropdown? Because one pie chart panel contains a dropdown, the other pie charts in the same row become misaligned; I need all pie charts in the row to have the same height.

Thanks in advance!
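If this is a Simple XML dashboard, one hedged fix is to pin an explicit, identical height on every chart in the row (300 is an arbitrary example value; the token name is a placeholder):

<panel>
  <input type="dropdown" token="my_token"> ... </input>
  <chart>
    <search> ... </search>
    <option name="charting.chart">pie</option>
    <option name="height">300</option>
  </chart>
</panel>

Repeating the same height option in each sibling panel's chart means the dropdown's extra vertical space no longer changes the chart size.
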
I created a fresh add-on using Splunk Add-on Builder v4.1.3 and am getting check_for_vulnerable_javascript_library_usage failures in AppInspect. I tried this suggestion, https://community.splunk.com/t5/Building-for-the-Splunk-Platform/How-to-update-a-Splunk-Add-on-Builder-built-app-or-add-on/m-p/587702 (export, delete, import), and generated the add-on multiple times, but the issue still persists. Please suggest a fix for this issue. Thanks.
The deploy command is pushing an app without the local folder from the deployer -> SHC. Our deployer settings are set to full:

[shclustering]
deployer_push_mode = full
Hi,

How do I delete only specific data from a specific index (note: not the entire index's data) in a clustered environment?
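For reference, a hedged sketch of the usual search-time approach (the index, sourcetype, and time range are placeholders): pipe a search that matches only the unwanted events to the delete command. Note that delete requires a role with the can_delete capability, and it only hides events from search; it does not reclaim disk space.

index=my_index sourcetype=my_sourcetype earliest=-7d latest=now
| delete
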
Hi, I am new to Splunk metrics searches. I am sending AWS/EBS metrics to Splunk, and I want to calculate the average throughput and number of IOPS for my Amazon Elastic Block Store (Amazon EBS) volume. I found the solution on AWS (https://repost.aws/knowledge-center/ebs-cloudwatch-metrics-throughput-iops), but I don't know how to do the equivalent search in Splunk. This is the max I can do at the moment:

| mpreview index=my-index
| search namespace="AWS/EBS" metricName=VolumeReadOps

I would really appreciate it if someone could help me out.
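A hedged mstats sketch, assuming the CloudWatch metrics arrive under their CloudWatch names and that the 300-second period matches the CloudWatch reporting interval (adjust both to your ingestion). The AWS article's formulas are IOPS = (read ops + write ops) / period and throughput = bytes / period:

| mstats sum(VolumeReadOps) AS read_ops sum(VolumeWriteOps) AS write_ops sum(VolumeReadBytes) AS read_bytes sum(VolumeWriteBytes) AS write_bytes WHERE index="my-index" AND namespace="AWS/EBS" span=5m
| eval iops = (read_ops + write_ops) / 300
| eval throughput_kib_s = (read_bytes + write_bytes) / 300 / 1024

Add BY VolumeId (or whatever dimension your ingestion uses for the volume) to the mstats clause to break the figures out per volume.
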
We have Splunk Heavy Forwarders running in a couple of different regions/accounts in AWS, and we need to ingest CloudWatch Logs into them. The proposed architecture is as follows: CloudWatch Logs (multiple accounts) >> near-real-time streaming through Kinesis Data Firehose (KDF) >> S3 bucket (centralized bucket) >> (SQS) >> Splunk Heavy Forwarder. We are looking for an implementation document, mainly for aggregating CloudWatch Logs to S3 (from multiple accounts) and for improving the architecture. Direct ingestion from CloudWatch Logs or KDF to Splunk is not preferred; S3 centralized logging is preferred. We would like to reduce management overhead (hence we prefer not to manage Lambdas unless we have to) and also be cost effective. Kindly include implementation documentation if available.
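On the Splunk side, the SQS-based S3 input in the Splunk Add-on for AWS is the usual fit for the S3 >> SQS >> HF leg. A hedged inputs.conf sketch (the stanza name, account name, queue URL, and sourcetype are placeholders; check the exact setting names against your add-on version):

[aws_sqs_based_s3://central_cloudwatch_logs]
aws_account = central_logging_account
sqs_queue_url = https://sqs.us-east-1.amazonaws.com/123456789012/central-logs-queue
s3_file_decoder = CustomLogs
sourcetype = aws:cloudwatchlogs
interval = 300

This pattern avoids Lambdas entirely: the bucket publishes object-created notifications to SQS, and the HF polls the queue and fetches the objects, which also scales across multiple HFs reading the same queue.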