All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi. Why is my saved search going back to 1970? I have run the following saved search (screenshot below), passing in 30 minutes (highlighted in yellow below). The saved search is set to 30 minutes. Thanks in advance, Robbie
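A date of 1 January 1970 is Unix epoch 0, which usually means the search's earliest time is resolving to 0 instead of the intended 30-minute window. One thing worth checking is the dispatch time range in savedsearches.conf; a minimal sketch, with a hypothetical stanza name:

  [my_saved_search]
  dispatch.earliest_time = -30m
  dispatch.latest_time = now

Also note that arguments passed to | savedsearch only substitute $token$ placeholders inside the search string; they do not set the time range.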
Assuming I have this log history:

  [sent] task=abc, id=123
  [sent] task=abc, id=456
  [success] task=abc, id=123

I would like to get a list of all ids that are "sent" but did not get a "success"; in the example above, it should be just "456". My current query looks something like this:

  "abc" AND "sent" | table id | rename id as "sent_ids" | appendcols [ search "abc" AND "success" | table id | rename id as "success_ids" ]

This gets me a table with the two columns, and I'm stuck on how to "left exclusive join" them to get the unique ids. Or maybe I'm approaching this entirely wrong and there is a much easier solution?
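One join-free approach is to classify each event and group by id; a minimal sketch, assuming both event types carry the same id field:

  "abc" ("sent" OR "success")
  | eval status=if(like(_raw, "%[success]%"), "success", "sent")
  | stats values(status) as status by id
  | search NOT status="success"
  | table id

Each id that never produced a success event keeps only "sent" in status, so the final filter leaves exactly those ids (456 in the example), without appendcols or join.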
How do I write a cron schedule for every 3 hours, excluding 10pm to 7am?
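Cron can express this by restricting the hour field; a sketch that fires every 3 hours between 07:00 and 21:59, so nothing runs from 10pm through 7am:

  0 7-21/3 * * *

This runs at 07:00, 10:00, 13:00, 16:00, and 19:00. If the runs must stay on the midnight-aligned 3-hour grid instead, list the allowed hours explicitly: 0 9,12,15,18,21 * * *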
I need to use the German standard to display the number 287.560,5 in a single value visualization, instead of the English format that uses a comma to separate thousands and a dot for decimals (287,560.5). The solution described here did not help: Decimal : how to use "," vs "." ? - Splunk Community. When I look at the number before it is saved as a report on a dashboard, it does not have any commas. Could anyone help me with this? At the moment I have just set "shouldUseThousandSeparators" to false in the JSON to remove the commas altogether, but I eventually want to use a dot for thousands and a comma for decimals for better legibility. Thank you in advance.
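If the visualization itself cannot be localized, one workaround is to format the number into a string in SPL and let the single value display that string; a minimal sketch, assuming the numeric field is called value (a hypothetical name). Note that tostring with "commas" rounds to two decimal places, and the result is a string, so numeric options such as trend indicators no longer apply:

  | eval display = tostring(value, "commas")
  | eval display = replace(replace(replace(display, ",", "#"), "\.", ","), "#", ".")

The first eval produces English-style grouping (287,560.50); the replace chain then swaps the separators, giving 287.560,50.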
We have a distributed Splunk Enterprise setup and are trying to establish secure TLS communication between UF -> HF -> Indexer. We have certificates configured for the search heads, indexers, and heavy forwarders, and we have opened the required receiving ports on both the indexers and the HFs. On the other hand, we have around 200 UFs. Can someone please tell me whether we need to generate 200 client certificates, or whether we can use a general certificate deployed to all 200 UFs to establish communication between the UFs and the indexers?
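A single shared client certificate (plus the CA chain) deployed to all forwarders is a common setup, unless your security policy requires a distinct identity per host. A minimal outputs.conf sketch for the UF side, with placeholder names and paths; on newer versions the CA path may belong in server.conf under [sslConfig] instead:

  [tcpout:primary_indexers]
  server = idx1.example.com:9997
  clientCert = $SPLUNK_HOME/etc/auth/mycerts/uf_client.pem
  sslPassword = <password of the client certificate's private key>
  sslRootCAPath = $SPLUNK_HOME/etc/auth/mycerts/ca.pem
  sslVerifyServerCert = true

The same certificate files can then be distributed to all 200 UFs through the deployment server.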
Can someone help me with this issue where Splunk is reading a file but 'adding' information that is NOT in the original file? If you run the search below:

  index="acob_controls_summary" sourcetype=acob:json source="/var/log/acobjson/*100223*rtm*"
  | search system=CHE control_id=AU2_A_2_1 compliance_status=100%

you will get two results, distinguished mainly by "last_test_date": one showing "2023-10-02 15:42:30.784049" and the other showing "2023-10-02 14:56:45.047265". Ironically, the attached file is the SAME file (I just changed the file name after copying it to my machine) that we are seeing in Splunk, yet it contains only ONE entry, the second one ("2023-10-02 14:56:45.047265"). Where did that "2023-10-02 15:42:30.784049" come from? We have a clustered environment, so there are many Splunk servers, but why is it creating a new 'test date' that splits one entry into two, giving one a good result and the other wrong info?
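One way to narrow down where the second copy comes from is to compare the index-time metadata of the two results; a minimal sketch of such a check:

  index="acob_controls_summary" sourcetype=acob:json source="/var/log/acobjson/*100223*rtm*"
  | search system=CHE control_id=AU2_A_2_1 compliance_status=100%
  | eval indextime=strftime(_indextime, "%Y-%m-%d %H:%M:%S")
  | table _time indextime splunk_server host source last_test_date

If the two events differ in _indextime or splunk_server, the file was most likely ingested twice (for example, read again after a rename, copy, or forwarder restart) rather than Splunk inventing data.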
Hi Splunkers! How do I keep pie charts vertically aligned when one specific pie chart has a dropdown? Because of the dropdown in that one pie chart, the other pie charts in the same row are misaligned; I need all the pie charts in the row to have the same height. Thanks in advance!
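If this is a classic Simple XML dashboard, one way to keep the row aligned is to give every chart in the row the same explicit height; a minimal sketch, with the searches and dropdown contents omitted as placeholders:

  <row>
    <panel>
      <input type="dropdown" token="filter">...</input>
      <chart>
        <option name="charting.chart">pie</option>
        <option name="height">300</option>
      </chart>
    </panel>
    <panel>
      <chart>
        <option name="charting.chart">pie</option>
        <option name="height">300</option>
      </chart>
    </panel>
  </row>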
I created a fresh add-on using Splunk Add-on Builder v4.1.3 and am getting check_for_vulnerable_javascript_library_usage failures in AppInspect. I tried this suggestion https://community.splunk.com/t5/Building-for-the-Splunk-Platform/How-to-update-a-Splunk-Add-on-Builder-built-app-or-add-on/m-p/587702 (export, delete, import) and regenerated the add-on multiple times, but the issue persists. Please suggest a fix for this issue. Thanks.
The deploy command is pushing an app without the local folder from the deployer to the SHC. Our deployer settings are set to full:

  [shclustering]
  deployer_push_mode = full
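For reference, with deployer_push_mode = full the deployer is expected to push both the app's default and local directories to the members as-is, so a missing local folder suggests the bundle was pushed before the mode change, or that the mode was not applied to this particular app. A re-push sketch, with placeholder target and credentials:

  splunk apply shcluster-bundle -target https://sh1.example.com:8089 -auth admin:changeme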
Hi. How do I delete only specific data from a specific index (note: not the entire index's data) in a clustered environment?
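Assuming the goal is to make specific events unsearchable (rather than reclaim disk space), the usual tool is the delete command, run by a user holding the can_delete capability; a minimal sketch, with placeholder index and filters:

  index=my_index sourcetype=my_sourcetype host=bad_host earliest=-7d@d latest=now
  | delete

delete only masks the matching events from future searches on all peers; physically removing data requires cleaning or aging out entire buckets/indexes.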
Hi, I am new to Splunk metrics search. I am sending AWS/EBS metrics to Splunk and want to calculate the average throughput and the number of IOPS for my Amazon Elastic Block Store (Amazon EBS) volume. I found the solution on the AWS side (https://repost.aws/knowledge-center/ebs-cloudwatch-metrics-throughput-iops), but I don't know how to express it as a search in Splunk. This is as far as I have gotten:

  | mpreview index=my-index | search namespace="AWS/EBS" metricName=VolumeReadOps

I would really appreciate it if someone could help me out.
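For metric indexes, mstats (rather than mpreview, which only previews raw data points) is the usual aggregation tool. A minimal sketch following the same logic as the AWS article, i.e. the sum of operations per period divided by the period length in seconds; the 5-minute CloudWatch period and the VolumeId dimension are assumptions:

  | mstats sum(VolumeReadOps) as read_ops WHERE index=my-index AND namespace="AWS/EBS" span=5m BY VolumeId
  | eval read_iops = round(read_ops / 300, 2)

Average throughput works the same way, using the VolumeReadBytes / VolumeWriteBytes metrics divided by the period.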
We have Splunk Heavy Forwarders running in a couple of different regions/accounts in AWS, and we need to ingest CloudWatch Logs into them. The proposed architecture is: CloudWatch Logs (multiple accounts) >> near-real-time streaming through KDF >> S3 bucket (centralized) >> SQS >> Splunk Heavy Forwarder. We are looking for an implementation document, mainly for aggregating CloudWatch logs into S3 from multiple accounts, and for ways to improve the architecture. Direct ingestion from CloudWatch Logs or KDF into Splunk is not preferred; centralized S3 logging is. We would like to reduce management overhead (hence we don't want to manage Lambdas unless we have to) and also be cost-effective. Kindly include implementation documentation if available.
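On the Splunk side, the S3 + SQS leg is typically handled by the SQS-based S3 input of the Splunk Add-on for AWS, which needs no Lambda: the central bucket sends event notifications to an SQS queue, and the HF polls the queue. A minimal inputs.conf sketch; the stanza follows the add-on's documented input type, and all names, regions, and URLs are placeholders:

  [aws_sqs_based_s3://cloudwatch_central]
  aws_account = central_logging_account
  sqs_queue_region = us-east-1
  sqs_queue_url = https://sqs.us-east-1.amazonaws.com/123456789012/cw-logs-notifications
  s3_file_decoder = CustomLogs
  sourcetype = aws:cloudwatchlogs
  interval = 300

The cross-account part (KDF in each account delivering to the central bucket) is mostly bucket policy work on the AWS side.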
Hello, I am new to Splunk and need help. Every week I get a vulnerability scan log with two main fields: "extracted_Host" and "Risk". Risk values are Critical, High, and Medium (the log often contains Medium, so I must only search for these Risk values and exclude everything else). extracted_Host contains many different host IPs. I must work out which host has which risks (hosts can have multiple Risk values), which risk falls away on which date, and which risk is new. This is where I am right now; the problem is that I get only one host with all value fields, not how many risk classifications are really on each host, and without any time dimension:

  index=nessus Risk IN (Critical,High,Medium)
  | fields extracted_Host Risk
  | eval Host=coalesce(extracted_Host,Risk)
  | stats values(*) as * by Host

Thanks for the help.
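As a side note, coalesce(extracted_Host, Risk) mixes host values and risk values into a single field, which is likely why everything collapses onto one row. For the "what is new vs. what fell away" part, one pattern is to bucket events into the current and previous scan windows and compare per host and risk; a minimal sketch, assuming weekly scans:

  index=nessus Risk IN (Critical,High,Medium) earliest=-14d@d latest=now
  | eval window=if(_time >= relative_time(now(), "-7d@d"), "current", "previous")
  | stats dc(window) as periods, values(window) as window by extracted_Host Risk
  | eval status=case(periods==2, "unchanged", window=="current", "new", window=="previous", "resolved")

Findings seen only in the newest window show up as new; findings seen only in the older window show up as resolved.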
I just want to know if there is a way to check, in one of the fields, whether an email containing malware has been deleted or whether it is still in the inbox.
I run a local splunk-appinspect on packages before uploading them to Splunk Cloud. Each Jenkins run does 'pip install splunk-appinspect'; if it is already installed on the agent, it will of course not get installed again. Here are the job run console logs:

09:11:18 LEVEL="CRITICAL" TIME="2023-10-10 01:11:18,718" NAME="root" FILENAME="main.py" MODULE="main" MESSAGE="An unexpected error occurred during the run-time of Splunk AppInspect"
09:11:18 Traceback (most recent call last):
09:11:18 File "/home/jkagent/.local/lib/python3.9/site-packages/splunk_appinspect/main.py", line 581, in validate
09:11:18 groups_to_validate = splunk_appinspect.checks.groups(
09:11:18 File "/home/jkagent/.local/lib/python3.9/site-packages/splunk_appinspect/checks.py", line 205, in groups
09:11:18 check_group_modules = import_group_modules(check_dirs)
09:11:18 File "/home/jkagent/.local/lib/python3.9/site-packages/splunk_appinspect/checks.py", line 73, in import_group_modules
09:11:18 group_module = imp.load_source(group_module_name, filepath)
09:11:18 File "/usr/lib/python3.9/imp.py", line 171, in load_source
09:11:18 module = _load(spec)
09:11:18 File "<frozen importlib._bootstrap>", line 711, in _load
09:11:18 File "<frozen importlib._bootstrap>", line 680, in _load_unlocked
09:11:18 File "<frozen importlib._bootstrap_external>", line 790, in exec_module
09:11:18 File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
09:11:18 File "/home/jkagent/.local/lib/python3.9/site-packages/splunk_appinspect/checks/check_source_and_binaries.py", line 17, in <module>
09:11:18 import splunk_appinspect.check_routine as check_routine
09:11:18 File "/home/jkagent/.local/lib/python3.9/site-packages/splunk_appinspect/check_routine/__init__.py", line 15, in <module>
09:11:18 from .find_endpoint_usage import find_endpoint_usage
09:11:18 File "/home/jkagent/.local/lib/python3.9/site-packages/splunk_appinspect/check_routine/find_endpoint_usage.py", line 7, in <module>
09:11:18 from pyparsing import Generator
09:11:18 ImportError: cannot import name 'Generator' from 'pyparsing' (/usr/lib/python3/dist-packages/pyparsing.py)

Any idea what exactly is broken, and suggestions on how to solve it?
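The last line is the key: the import is resolving to /usr/lib/python3/dist-packages, so an old system-level pyparsing is shadowing the copy pip installed into the user site, and it predates the Generator name that appinspect expects. One fix sketch is to run appinspect from an isolated virtualenv so the system copy never gets on the path; the paths and package name are placeholders:

  python3 -m venv /tmp/appinspect-venv
  . /tmp/appinspect-venv/bin/activate
  pip install --upgrade pip splunk-appinspect
  splunk-appinspect inspect my_app.tgz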
I'm getting an error when I use the earliest and latest keywords in a search query. I have set the time picker to correspond to the values used in the query. It shows "Unknown search command 'earliest'" when I try to use them. I'm using Splunk Enterprise. This is my query:

  index=sample ServiceName="cet.prd.*" |  earliest=-3d latest=now()
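earliest and latest are time modifiers, not search commands, so they cannot appear after a pipe; they belong in the base search itself, and latest takes the keyword now rather than the eval function now(). A corrected version:

  index=sample ServiceName="cet.prd.*" earliest=-3d latest=now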
Hi, I have two lookups, lookup A and lookup B. Lookup A is kept updated by a Splunk query, and lookup B is maintained manually. Both lookups contain the same fields: Hostname, IP, and OS. I need to compare the two lookups and bring out the non-matching Hostname and IP values. Please assist me with this. Thank you.
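One approach is to load both lookups, tag each row with its source, and keep the Hostname/IP pairs that appear in only one of them; a minimal sketch, with hypothetical file names:

  | inputlookup lookupA.csv
  | eval src="A"
  | append [| inputlookup lookupB.csv | eval src="B"]
  | stats values(src) as src by Hostname IP
  | where mvcount(src)=1

Rows left with src = A exist only in lookup A, and rows with src = B exist only in lookup B.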
I have a search that gives me the total license usage in GB for a given time:

  index=_internal source=*license_usage.log type=Usage pool=*
  | stats sum(b) as bu
  | eval gbu=round(bu/1024/1024/1024,3)
  | fields gbu

I'd like a timechart/graph that shows the total for each hour of a given day. Is this possible with timechart?
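Yes: replace the plain stats with a timechart so the sum is computed per hour; a direct variant of your search:

  index=_internal source=*license_usage.log type=Usage pool=*
  | timechart span=1h sum(b) as bu
  | eval gbu=round(bu/1024/1024/1024,3)
  | fields _time gbu

Run over a 24-hour window, this yields one GB value per hour, ready for a column or line chart.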
Compatibility
Compatible with the latest versions of Splunk Enterprise, Splunk Cloud, and Splunk IT Service Intelligence.
Platform Version: 9.0, 8.2
CIM Version: 4.x
Hello, I'm trying to create a dashboard in Dashboard Studio, but I can't use the map visualization because it doesn't render all the way through. Here are two examples of queries and their respective visualizations:

1) Map (Bubble)

  index=************* | iplocation src | geostats latfield=lat longfield=lon count by Country

2) Map (Choropleth)

  index=************** | iplocation src | lookup geo_countries latitude AS lat longitude AS lon OUTPUT featureId AS country | stats count by country | geom geo_countries featureIdField=country

This only happens in Dashboard Studio; the maps render perfectly in classic dashboards. How can I fix this? Is some configuration missing? Thank you!