All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


This project is to test a potential on-prem to cloud migration. I need to thaw several terabytes of frozen Splunk data that has been frozen over the past several years from an indexer cluster to offline repos. The storage array where my existing indexer cluster resides doesn't have enough disk space to bring it all back. I have a secondary storage array with plenty of space, but I can't move my existing cluster. I need help understanding/deciding:
- Should I build new indexers on the secondary array, add them to the existing cluster, and thaw data to them?
- Should I build a new cluster with new indexers on the secondary array and thaw the data there?
- Or is it easiest to just build one new standalone indexer on the secondary array and thaw all the data to it?
The data will need to be searchable/exportable. I have only one search head (no search head cluster).
Hi there, We have an on-prem Exchange mailbox which we monitor via the Exchange logs. We pick out keywords from the subject line to trigger alerts. Our mailbox is moving to Exchange Online, so I've been working with our Azure team and managed to integrate Splunk Enterprise (on-prem) with a test online mailbox. So far I am ingesting generic information about the mailbox via the Splunk Add-on for Microsoft Office 365, information like Issue Warning Quota (Byte), Prohibit Send Quota (Byte) and Prohibit Send/Receive Quota. The two inputs I've created are Message Trace and Mailbox (which ingests the mailbox data above). What I want to do is ingest the emails themselves: the key information like subject, the body (if possible), from address and to address. Is this possible using this add-on?
I currently have this to group IPs into subnets and list the counts; I want it to also show the IPs it has grouped as well:

| rex field=SourceIP "(?<Subnet>\d+\.\d+\.\d+\.*)"

Example:

Subnet    Count    IPs
1.1.1     20       1.1.1.1, 1.1.1.2, 1.1.1.3

How do I create another field, or use the existing field, to show what it has grouped?
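A minimal sketch of one way to get that output, assuming the SourceIP field and the rex shown above; the index and sourcetype names are placeholders, and the trailing \.* from the original rex is dropped here so the Subnet value has no trailing dot:

index=your_index sourcetype=your_sourcetype
| rex field=SourceIP "(?<Subnet>\d+\.\d+\.\d+)"
| stats count values(SourceIP) as IPs by Subnet

Here values(SourceIP) collects the distinct IPs seen in each subnet group alongside the count.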
I would like to do a search over a rolling 60-minute period, looking for 3 or more occurrences in that period. I set up a Splunk alert to run every 15 minutes, looking back 1 hour, which works, but then I get multiple alerts for the same 3 events, since they are still inside the 60-minute look-back window for several consecutive runs. How can I set this up (or code it) so it only reports the 3 occurrences in the previous 60 minutes once?
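One rough sketch of a way to avoid the repeats (the index and search terms are placeholders): keep the 60-minute window, but only fire when the newest matching event landed inside the most recent 15-minute run, so each batch of events can only trigger once:

index=your_index your_search_terms earliest=-60m@m latest=now
| stats count max(_time) as latest_event
| where count >= 3 AND latest_event >= relative_time(now(), "-15m")

Alert throttling (suppression) on the alert itself is another common way to stop duplicate notifications for the same events.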
I have a playbook that validates a given URL and assigns scores to it. I am able to run the playbook successfully but do not see the output. Where do I see it in the CrowdStrike app? I am new here and trying to learn SOAR.
Hi All,

I am trying to create a modular input in Splunk Cloud that gets Splunk Observability metadata. The input has the fields realm, token and object_type. I can see the input form under "Settings -> Data Inputs" with all these fields, and that works fine.

I have created another script that masks the token. But when I upgrade the app with this script and restmap.conf, the modular input disappears from "Settings" --> "Data Inputs".

splunk_observability.py ---> this creates the modular input schema.
observability_object_helper.py --> this is the helper script that makes the API call.
observability_admin_TA_rh_account.py --> this creates the RestModel and encrypts the token input field.

Directory structure (app folder name: Splunk_Observability_Metadata):
metadata --> default.meta
bin --> import_declare_test.py, splunk_observability.py, observability_object_helper.py, observability_admin_TA_rh_account.py
README --> inputs.conf.spec
local --> inputs.conf, app.conf
lib --> required splunklib

splunk_observability.py

import import_declare_test
import sys, os
import json
from splunklib import modularinput as smi

sys.path.insert(0, os.path.dirname(__file__))
from observability_object_helper import stream_events, validate_input


class SPLUNK_OBSERVABILITY(smi.Script):
    def __init__(self):
        super(SPLUNK_OBSERVABILITY, self).__init__()

    def get_scheme(self):
        scheme = smi.Scheme("Splunk Observability Metadata Input")
        scheme.use_external_validation = False
        scheme.use_single_instance = False
        scheme.description = "Modular Input"
        scheme.streaming_mode_xml = True
        scheme.add_argument(
            smi.Argument(
                'realm',
                required_on_create=True
            )
        )
        scheme.add_argument(
            smi.Argument(
                'name',
                required_on_create=True,
                description="Name should not contain whitespaces",
            )
        )
        scheme.add_argument(
            smi.Argument(
                'token',
                required_on_create=True,
                description="Add API Key required to connect to Splunk Observability Cloud",
            )
        )
        scheme.add_argument(
            smi.Argument(
                'object_type',
                required_on_create=True
            )
        )
        return scheme

    def validate_input(self, definition: smi.ValidationDefinition):
        return validate_input(definition)

    def stream_events(self, inputs: smi.InputDefinition, ew: smi.EventWriter):
        return stream_events(inputs, ew)


if __name__ == "__main__":
    sys.exit(SPLUNK_OBSERVABILITY().run(sys.argv))

observability_object_helper.py

import json
import logging
import time
import requests
# import import_declare_test
from solnlib import conf_manager, log, credentials
from splunklib import modularinput as smi

ADDON_NAME = "splunk_observability"


def get_key_name(input_name: str) -> str:
    # `input_name` is a string like "example://<input_name>".
    return input_name.split("/")[-1]


def logger_for_input(input_name: str) -> logging.Logger:
    return log.Logs().get_logger(f"{ADDON_NAME.lower()}_{input_name}")


def splunk_observability_get_endpoint(type, realm):
    BASE_URL = f"https://api.{realm}.signalfx.com"
    ENDPOINT = ""
    types = {
        "chart": f"{BASE_URL}/v2/chart",
        "dashboard": f"{BASE_URL}/v2/dashboard",
        "detector": f"{BASE_URL}/v2/detector",
        "heartbeat": f"{BASE_URL}/v2/detector",
        "synthetic": f"{BASE_URL}/v2/synthetics/tests",
    }
    for type_key in types:
        if type.lower() == type_key.lower():
            ENDPOINT = types.get(type_key)
    return ENDPOINT


def splunk_observability_get_sourcetype(type):
    sourcetypes = {
        "chart": "observability:chart_api:json",
        "dashboard": "observability:dashboard_api:json",
        "detector": "observability:detector_api:json",
        "synthetic": "observability:synthetic_api:json",
        "token": "observability:token_api:json",
    }
    for type_key in sourcetypes:
        if type.lower() == type_key.lower():
            return sourcetypes.get(type_key)


def splunk_observability_get_objects(type, realm, token, logger):
    TOKEN = token
    ENDPOINT_URL = splunk_observability_get_endpoint(type, realm)
    limit = 200
    offset = 0
    pagenation = True
    headers = {"Content-Type": "application/json", "X-SF-TOKEN": TOKEN}
    processStart = time.time()
    objects = []
    while pagenation:
        params = {"limit": limit, "offset": offset}
        try:
            response = requests.get(ENDPOINT_URL, headers=headers, params=params)
            response.raise_for_status()
        except requests.exceptions.RequestException as e:
            log.log_exception(logger, e, "RequestError", msg_before="Error fetching data:")
            return []
        data = response.json()
        if isinstance(data, list):
            results = data
        elif isinstance(data, dict):
            results = data.get("results", [])
        else:
            logger.error("Unexpected response format")
        objects.extend(results)
        logger.info(f"pagenating {type} result 'length': {len(results)} , offset: {offset}, limit {limit}")
        if len(results) < limit:
            pagenation = False
        # too many objects to query, splunk will max out at 10,000
        elif (offset >= 10000 - limit):
            pagenation = False
            logger.warn("Cannot ingest more than 10,000 objects")
        else:
            offset += limit
    count = offset + len(results)
    timeTakenProcess = str(round((time.time() - processStart) * 1000))
    log.log_event(logger, {"message": f"{type} ingest finished", "time_taken": f"{timeTakenProcess}ms", "ingested": count})
    return objects


def splunk_observability_get_objects_synthetics(type, realm, token, logger):
    processStart = time.time()
    BASE_URL = f"https://api.{realm}.signalfx.com"
    ENDPOINT_URL = f"{BASE_URL}/v2/synthetics/tests"
    page = 1
    pagenating = True
    headers = {"Content-Type": "application/json", "X-SF-TOKEN": token}
    synthetics_objects = []
    while pagenating:
        params = {"perPage": 100, "page": page}
        try:
            response = requests.get(ENDPOINT_URL, headers=headers, params=params)
            response.raise_for_status()
        except requests.exceptions.RequestException as e:
            log.log_exception(logger, e, "RequestError", msg_before="Error fetching synthetic data:")
            return []
        data = response.json()
        tests = data["tests"]
        for test in tests:
            synthetic = {"id": test["id"], "type": test["type"]}
            SYNTHETIC_TYPE = synthetic["type"]
            SYNTHETIC_ID = synthetic["id"]
            detail_url = f"{BASE_URL}/v2/synthetics/tests/{SYNTHETIC_TYPE}/{SYNTHETIC_ID}"
            if type == "synthetic_detailed":
                try:
                    detail_response = requests.get(detail_url, headers=headers)
                    detail_response.raise_for_status()
                    synthetics_objects.append(detail_response.json())
                except requests.exceptions.RequestException as e:
                    log.log_exception(logger, e, "RequestError", msg_before=f"Error fetching synthetic details for ID: {SYNTHETIC_ID}")
            else:
                synthetics_objects.append(test)
        pagenating = data.get("nextPageLink") is not None
        page += 1
    timeTakenProcess = str(round((time.time() - processStart) * 1000))
    log.log_event(logger, {"message": "synthetic ingest finished", "time_taken": f"{timeTakenProcess}ms", "ingested": len(synthetics_objects)})
    return synthetics_objects


def validate_input(definition: smi.ValidationDefinition):
    return False


def stream_events(inputs: smi.InputDefinition, event_writer: smi.EventWriter):
    for input_name, input_item in inputs.inputs.items():
        normalized_input_name = input_name.split("/")[-1]
        logger = logger_for_input(normalized_input_name)
        try:
            observability_type = input_item.get("object_type")
            observability_token = input_item.get("token")
            observability_realm = input_item.get("realm")
            log.modular_input_start(logger, normalized_input_name)
            if observability_type.lower() == "synthetic":
                objects = splunk_observability_get_objects_synthetics(observability_type, observability_realm, observability_token, logger)
            else:
                objects = splunk_observability_get_objects(observability_type, observability_realm, observability_token, logger)
            # source_type = splunk_observability_get_sourcetype(observability_type)
            for obj in objects:
                logger.debug(f"DEBUG EVENT {observability_type} :{json.dumps(obj)}")
                event_writer.write_event(
                    smi.Event(
                        data=json.dumps(obj, ensure_ascii=False, default=str),
                        index=input_item.get("index"),
                        sourcetype=input_item.get("sourcetype"),
                    )
                )
            log.events_ingested(logger, input_name, input_item.get("sourcetype"), len(objects), input_item.get("index"))
            log.modular_input_end(logger, normalized_input_name)
        except Exception as e:
            log.log_exception(logger, e, "IngestionError", msg_before="Error processing input:")

observability_admin_TA_rh_account.py

from splunktaucclib.rest_handler.endpoint import (
    field,
    validator,
    RestModel,
    DataInputModel,
)
from splunktaucclib.rest_handler import admin_external, util
import logging

util.remove_http_proxy_env_vars()

fields = [
    field.RestField('name', required=True, encrypted=False),
    field.RestField('realm', required=True, encrypted=False),
    field.RestField('token', required=True, encrypted=True),
    field.RestField('interval', required=True, encrypted=False, default="300"),
]
model = RestModel(fields, name='splunk_observability')
endpoint = DataInputModel(model, input_type='splunk_observability')

if __name__ == '__main__':
    logging.getLogger().addHandler(logging.NullHandler())
    admin_external.handle(endpoint)

restmap.conf

[endpoint:admin/input/splunk_observability]
match = splunk_observability
python.version = python3
handlerfile = observability_admin_TA_rh_account.py

Please help me to resolve this issue.

Thanks,
PNV
Hi, We are using Splunk Enterprise on-premises. I have now launched another instance with a trial license and would like to test the Security features. However, the app download is unfortunately restricted. Can I have download permission for the test Splunk instance? Thanks, Regards, hikan
Hi, I tried to set up the Splunk lab, but the URL for the instance is not working.
Hello, I'm trying to get SentinelOne data into my cloud instance but I'm getting errors similar to this related to the inputs. At first I was having an issue with authentication errors using the API. I believe that's resolved after regenerating the key, because these are the only logs I can see in the index I created for S1.

error_message="[HTTP 404] https://127.0.0.1:8089/servicesNS/nobody/sentinelone_app_for_splunk/configs/conf-authhosts/********?output_mode=json" error_type="<class 'splunk.ResourceNotFound'>" error_arguments="[HTTP 404] https://127.0.0.1:8089/servicesNS/nobody/sentinelone_app_for_splunk/configs/conf-authhosts/***********?output_mode=json" error_filename="s1_client.py" error_line_number="162" input_guid="*****************" input_name="Threats"
The Splunk universal forwarder image is currently not compatible with OpenShift. The image architecture requires using sudo to switch between users and running as specific UIDs, which are not compatible with OpenShift UIDs. Are you planning to fix the image so it can run as a sidecar container on OpenShift?
Wondering if there's a blacklist parameter I can add to one of my Azure inputs so that Splunk will not pull the event across the WAN. I already have a working ingest action, but the amount of data coming across is causing memory issues on my forwarder. My working ingest action regex is:

NETWORKSECURITYGROUPS\\/NSGBLAHBLAH.*?(IP\\.IP\\.IP\\.IP|IP\\.IP\\.IP\\.IP)

Is there an inputs.conf parameter I can set with this regex so that the data is ignored at the source?
I have configured the Microsoft Office 365 add-on, and all inputs are working except Message Trace. I rebuilt the input but am still getting this error message when checking the internal logs. All other Exchange mailbox data is coming in, and all inputs use the same account.
Hello Splunk Community! Welcome to the June edition of the Splunk Answers Community Content Calendar! Get ready for this week's post dedicated to Splunk Dashboards! We're celebrating the power of community by sharing solutions for common dashboard challenges (like panel widths and time range configurations) and spotlighting the invaluable contributions of our Splunk users and experts on the Dashboards & Visualizations board.

Dashboard CSS Width setup doesn't work anymore with 9.x version

Upgrading to the latest Splunk version (9.x) can bring a host of improvements and new features. However, sometimes updates can introduce unexpected challenges. One issue that some Splunk users have encountered is related to custom CSS styling in classic XML dashboards, specifically affecting panel widths. This issue was brought to light by JulienKVT.

The Problem: Custom CSS Panel Widths No Longer Working

Many Splunk administrators and developers rely on custom CSS to fine-tune the layout and appearance of their dashboards. A common use case is setting specific widths for panels within a row, allowing for a more tailored and visually appealing presentation of data. The intention of the code by JulienKVT is to set #Panel1 to 15% width and #Panel2 to 85% width within the row. However, after upgrading to Splunk 9.x, JulienKVT found that this CSS styling no longer works as expected: panels revert to their default layouts (e.g., 50/50, 33/33/33), ignoring the custom CSS rules.

This issue can be frustrating for users who have carefully crafted their dashboards and rely on specific panel layouts for optimal data visualization. While Splunk's Dashboard Studio offers a more modern approach to dashboard creation, migrating existing dashboards can be a time-consuming and complex task. Many users need a solution that allows them to maintain their existing classic dashboards while still benefiting from the latest Splunk version.

The Solution: Replace width with max-width in Your CSS

Our contributor Paul_Dixon suggested a solution that involves a minor modification to your existing CSS code: instead of using the width property, try using max-width. The key difference between width and max-width lies in how they're interpreted by the browser. width sets a fixed width for the element, while max-width sets the maximum allowed width; the element can be smaller than the max-width if other constraints apply. In the context of Splunk 9.x dashboards, it's possible that changes in the underlying layout engine are interfering with the width property. By using max-width, you're essentially giving the panel a hint about its desired size while still allowing it to adapt to other layout constraints. The !important flag ensures that this style takes precedence over other conflicting styles.

Thanks to our contributor Paul_Dixon for providing a clear solution. Give it a try and let us know in the comments if it works for you! Thanks to the community for sharing this valuable tip!

Splunk Dashboard: Combining Time Ranges

Splunk dashboards are indispensable for visualizing and analyzing data. Often, you need to tailor your search queries to achieve the precise results you're after. This post was brought to light by Punnu. We will explore a common scenario: using different time ranges for different parts of a query and hiding specific columns from the output. But more importantly, we'll celebrate the power of the Splunk Answers community in finding solutions to even the most complex challenges.
The Problem: Dynamic Time Ranges and Column Control

The user wanted to:
- Use a dashboard input (time picker) for the initial part of a search query, letting users select a specific time range.
- Run the remaining part of the query using a different time range (e.g., the entire day), because events triggered during the initial time range might be processed later in the day.
- Hide specific columns (e.g., message GUID, request time, output time) from the final displayed results, to simplify the view and focus on relevant information.

The Solution (Partial): Dynamic Time Range Adjustment (and a Community Discovery!)

While a complete solution requires a more complex setup, the user Punnu themselves discovered a clever technique for dynamically adjusting the time range based on the current time, thanks to an old post by somesoni2. Punnu found the solution buried in the archives of Splunk Answers, a testament to the long-lasting contributions of expert users like somesoni2. The solution involves a subsearch to dynamically adjust the time range (a rough sketch of the pattern appears at the end of this post). There is a wealth of knowledge available on Splunk Answers, and this highlights the incredible value of the Splunk community.

The Power of Splunk Answers

This solution exemplifies the power of the Splunk Answers community. Expert users have generously shared their knowledge and solutions over the years, creating a vast and invaluable resource. The fact that the user was able to find a working solution from 2017 demonstrates the enduring relevance of the information shared on Splunk Answers. Remember to leverage this incredible resource when facing your own Splunk challenges! The answer you need might already be waiting for you on Splunk Answers.

Kudos to the Expert Users!

We want to give a shout-out to users like JulienKVT and Punnu who bring these questions to light, and to the countless expert users who contribute to Splunk Answers, like Paul_Dixon and somesoni2. Their dedication to helping others and sharing their expertise makes the Splunk community a truly special place. Because of them, you can often find a solution to almost any Splunk challenge, no matter how complex. These unsung heroes are the backbone of the Splunk ecosystem.

Would you like to feature more solutions like this? Reach out to @Anam Siddique on Slack in our Splunk Community Slack workspace to highlight your question, answer, or tip in an upcoming Community Content post!

Beyond Splunk Answers, the Splunk Community offers a wealth of valuable resources to deepen your knowledge and connect with other professionals! Here are some great ways to get involved and expand your Splunk expertise:
- Role-Based Learning Paths: Tailored to help you master various aspects of the Splunk Data Platform and enhance your skills.
- Splunk Training & Certifications: A fantastic place to connect with like-minded individuals and access top-notch educational content.
- Community Blogs: Stay up-to-date with the latest news, insights, and updates from the Splunk community.
- User Groups: Join meetups and connect with other Splunk practitioners in your area.
- Splunk Community Programs: Get involved in exclusive programs like SplunkTrust and Super Users where you can earn recognition and contribute to the community.

And don't forget, you can connect with Splunk users and experts in real-time by joining the Slack channel. Dive into these resources today and make the most of your Splunk journey!
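As a companion to the dynamic time-range technique referenced above, here is a rough sketch of the subsearch pattern rather than the exact search from the original thread; the goal shown (widening the window to the whole current day) and the index, sourcetype, and field names are placeholder assumptions:

index=your_index sourcetype=your_sourcetype
    [| makeresults
     | eval earliest=relative_time(now(), "@d"), latest=relative_time(now(), "+1d@d")
     | return earliest latest]
| stats count by status
| fields - message_guid request_time output_time

The subsearch emits earliest and latest as inline time modifiers, which take precedence over the dashboard time picker for that part of the search, while the final fields command illustrates hiding columns from the displayed results.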
Hello folks, We use the Splunk Cloud platform for our logging system. I was trying to use the Search Filter under the Restrictions tab in Edit Role to add a filter that masks JWT tokens and emails in a search, but I keep running into an "unbalanced parenthesis" error from the litsearch command.

Regex used in the search filter:

| eval _raw=replace(_raw, "token=([A-Za-z0-9-]+\.[A-Za-z0-9-]+\.[A-Za-z0-9-_]+)", "token=xxx.xxx.xxx")
| eval _raw=replace(_raw, "[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}", "xxx@xxx.xxx")

When a user with the role that has the restriction above tries to search for anything, the job inspector shows that parentheses are placed around the literal search plus the search filter above, so it looks like this:

litsearch (<search terms> | eval _raw....)

I tried changing the search filter to

| rex mode=sed "s/token=([A-Za-z0-9-]+\.[A-Za-z0-9-]+\.[A-Za-z0-9-_]+)/token=xxx.xxx.xxx/g"

to only replace the tokens, but I still run into the same issue. When previewing the filter the results work fine, but when the user runs an actual query it fails. Any suggestions to make the search filter simpler, or any other methods I could use for role-based search filtering?
Hi, I have a field in this format and I am using eval to convert it, but sometimes there is an extra space after the colon:

Mon 2 Jun 2025 20:51:24 : 792 EDT - with an extra space after hh:mm:ss (space before 792)
Mon 2 Jun 2025 20:51:24 :792 EDT - the other scenario, where there is no space

I have to handle both scenarios in this eval - any help?

| eval date_only=strftime(strptime(ClintReqRcvdTime, "%a %d %b %Y %H:%M:%S :%3N %Z"), "%m/%d/%Y")
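One rough way to handle both variants (a sketch only; it assumes the value otherwise matches the format shown) is to try the strptime format with and without the extra space and take whichever one parses:

| eval parsed_time=coalesce(
      strptime(ClintReqRcvdTime, "%a %d %b %Y %H:%M:%S : %3N %Z"),
      strptime(ClintReqRcvdTime, "%a %d %b %Y %H:%M:%S :%3N %Z"))
| eval date_only=strftime(parsed_time, "%m/%d/%Y")

Another option is to normalize the value first, e.g. replace(ClintReqRcvdTime, " : ", " :"), and keep a single format string.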
We are getting the error "Waiting for queued jobs to start" for most of our customers. When they click on Manage Jobs, there are no jobs there to delete. Why is this happening? Is there a concurrent-searches limit at play here, and if so, where do I modify it and by how much can I increase it? We have nearly 100 roles created as of now.
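As a starting point for investigating (a sketch; the REST endpoint is standard but your role needs permission to read it), you can list current jobs on the search head by dispatch state to see how many are actually queued:

| rest /services/search/jobs
| stats count by dispatchState

Search concurrency is typically governed by limits.conf settings such as base_max_searches and max_searches_per_cpu, plus per-role quotas like srchJobsQuota in authorize.conf, so those are the knobs to review before raising anything.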
Hello, I'm not finding info on the limits within Splunk's data rebalancing. Some context: I have ~40 indexers and stood up 8 new ones. The 40 old ones had an average of ~150k buckets each. At some point the rebalance reported that it was completed (above the 0.9 threshold) even though there were only ~40k buckets on the new indexers. When I kicked off a second rebalance, it started from 20% again and continued rebalancing because the new indexers were NOT yet space-limited on the SmartStore caches. The timeout was set to 11 hours and the first run finished in ~4. The master did not restart during this balancing. Can anyone shed some more light on why the first rebalance died? For example, is there a 350k bucket limit per rebalance or something?
I am trying to decide which is better for Splunk Cloud authentication: LDAP (AD) or SAML. Unlike standalone Splunk, the cloud version looks a little tricky. I read somewhere in the documentation that connecting Splunk Cloud directly to AD/LDAP is not recommended, but I could not find where that was. I am trying to connect to LDAP from Splunk Cloud, but I always get an error, and there is very little information showing in splunkd.log. Can someone let me know whether a direct connection to AD LDAP from Splunk Cloud is recommended or not? Also, is there any troubleshooting tool that can help build the connection easily?
Hi,
Currently, we receive a single email alert via Notable Event Aggregation Policies (NEAP) whenever our ITSI services transition from normal to high or critical. However, we need an automated process that sends recurring email alerts every 5 minutes if the service remains degraded and hasn't reverted to normal. From my research, many forums and documentation suggest achieving this through correlation searches. However, since we rely on KPI alerting, and none of our correlation searches (even the out-of-the-box ones) seem to function properly, this approach hasn't worked for us. Given the critical nature of the services we monitor, we're seeking guidance on setting up recurring alerts using NEAPs or any other reliable method within Splunk ITSI. Any assistance or insights on how to configure this would be greatly appreciated.
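Not a drop-in answer, but as a rough sketch of the kind of search a recurring alert could be built on: poll the ITSI summary index every 5 minutes for services still at high or critical severity. The index, kpiid pattern, and field names below are assumptions based on common ITSI setups and may differ in your environment:

index=itsi_summary kpiid=SHKPI-* (alert_severity=high OR alert_severity=critical) earliest=-5m
| stats latest(alert_severity) as current_severity by serviceid

Saved as an alert on a 5-minute cron schedule with no throttling, a search along these lines would keep emailing for as long as the service health stays degraded.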