All Topics

Hi, I have a Splunk server that has tonnes of data in it. What we would like to do is have a dedicated search head run a search, then export the results it finds to an S3 bucket for another system to ingest and analyze. I have looked at several add-ons, including Export Everything and S3 Uploader for Splunk, but neither of them has clear instructions and I am having issues. Are there any resources that clearly explain how to set up a connection to export search results from Splunk into an S3 bucket?
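As a starting point while evaluating those add-ons, a minimal sketch of the usual pattern is a scheduled search that writes its results to a CSV, which a separate process (for example the add-on's upload action, or a cron job running the AWS CLI) then pushes to the bucket. The index, sourcetype, and file name below are placeholders:

index=your_index sourcetype=your_sourcetype earliest=-24h
| fields - _raw
| outputcsv s3_export.csv

outputcsv writes the file to $SPLUNK_HOME/var/run/splunk/csv on the search head, so whatever syncs to S3 needs read access to that directory.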
Hi, if an environment encounters the error 'Search not executed: The minimum free disk space (5000MB) reached for /opt/splunk/var/run/splunk/dispatch' multiple times (meaning the issue persists even after cleaning the dispatch directory), what corrective actions should be taken? Should the dispatch directory be cleaned regularly? This is for a standalone environment.
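In a standalone environment this usually means search artifacts are accumulating faster than Splunk reaps them, or the partition is simply undersized. A sketch for seeing which jobs are holding the space (field availability can vary slightly by version):

| rest /services/search/jobs splunk_server=local
| table title, author, diskUsage, ttl, published
| sort - diskUsage

Rather than cleaning dispatch by hand on a schedule, it is usually better to shorten the TTL of the offending scheduled searches, and to compare the minimum-free-space threshold (minFreeSpace in server.conf, which defaults to the 5000 MB in the error) against the actual size of the /opt partition.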
Please help me extract multiple values from a single value.
I am trying to take the results of one search, extract a field from those results (named "id"), and take all of those values (deduped) and use them to get results from another search. Unfortunately the second search doesn't have this field name directly in the sourcetype either, so it has to be extracted with rex. I've been having issues with this, though. From what I've read, I need to use the subsearch to extract the ids for the outer search. It's not working, though. Each search is from a completely different data set that has very little in common.

index=index1 source="/somefile.log" uri="/path/with/id/some_id/"
| rex field=uri "/path/with/id/(?<some_id>[^/]+)/*"
    [ search index=index2 source="/another.log" "condition-i-want-to-find"
      | rex field=_raw "some_id:(?<some_id>[^,]+),*"
      | dedup some_id
      | fields some_id ]

I've tried a bunch of variations of this with no luck, including renaming the field some_id to "search", as some have said that would help. I don't necessarily need the original uri="/path/with/id/some_id" in the outer search, but that would be nice to limit those results.
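One pattern worth trying (a sketch based on the searches above): because some_id only exists in the outer events after the rex runs, put the subsearch in a | search command after the rex rather than in the base search, so the filter is applied to a field that actually exists at that point:

index=index1 source="/somefile.log" uri="/path/with/id/*"
| rex field=uri "/path/with/id/(?<some_id>[^/]+)/*"
| search
    [ search index=index2 source="/another.log" "condition-i-want-to-find"
      | rex field=_raw "some_id:(?<some_id>[^,]+),*"
      | dedup some_id
      | fields some_id ]

The subsearch expands into (some_id="..." OR some_id="..."), which can only match once the outer rex has produced some_id.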
Hi, please help me extract multivalue fields from email body logs.

LOG:
"Computer Name","Patch List Name","Compliance Status","Patch List Name1","Compliance Status1","OS Type1"
"XXXX.emea.intra","ACN - Windows Server - PL - Up to Oct24","Compliant","[ACN - Windows Server - PL - Up to Aug24] + [ACN - Windows Server - PL - Sep24]","Compliant","Windows"
"XXXX.na.intra","ACN - Windows Server - PL - Up to Oct24","Compliant","[ACN - Windows Server - PL - Up to Aug24] + [ACN - Windows Server - PL - Sep24]","Compliant","Windows"

The fields I want to extract are: "Computer Name", "Patch List Name", "Compliance Status", "Patch List Name1", "Compliance Status1", "OS Type1". I have applied rex to bring out all the fields. The rex gives me a total of 3131 computer_names, but when I use the mvexpand command to expand them into multiple rows, it gives me only 1500 results; I am not sure why the rest are getting truncated. Attaching the search queries for reference:

index=mail "*tanium*"
| spath=body
| rex field=body max_match=0 "\"(?<Computer_name>.*)\",\"ACN"
| rex field=body max_match=0 "\"(?<Computer_name1>.*)\",\"\[n"
| rex field=Computer_name1 max_match=0 "(?<Computer_name2>.*)\",\"\[n"
| eval Computer_name=mvappend(Computer_name,Computer_name2)
| table Computer_name
| dedup Computer_name
| mvexpand Computer_name
| makemv Computer_name delim=","

index=mail "*tanium*"
| spath=body
| rex field=body max_match=0 "\"(?<Computer_name>.*)\",\"ACN"
| rex field=body max_match=0 "\"(?<Computer_name1>.*)\",\"\[n"
| rex field=Computer_name1 max_match=0 "(?<Computer_name2>.*)\",\"\[n"
| eval Computer_name=mvappend(Computer_name,Computer_name2)
| rex field=body max_match=0 "\,(?<Patch_List_Name1>.*)\"\["
| rex field=Patch_List_Name1 max_match=0 "\"(?<Patch_List_Name>.*)\",\""
| rex field=Patch_List_Name1 max_match=0 "\",\""(?<Compliance_status>.*)\"
| table Computer_name Patch_List_Name Compliance_status
| dedup Computer_name Patch_List_Name Compliance_status
| eval tagged=mvzip(Computer_name,Patch_List_Name)
| eval tagged=mvzip(tagged,Compliance_status)
| mvexpand tagged
| makemv tagged delim=","
| eval Computer_name=mvindex(tagged,0)
| eval Patch_List_Name=mvindex(tagged,1)
| eval Compliance_status=mvindex(tagged,-1)
| table Computer_name Patch_List_Name Compliance_status
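A likely cause of the truncation (the job inspector would confirm it): mvexpand stops expanding and silently drops the remaining values once its output exceeds its memory ceiling, and the inspector normally shows a warning naming the limit. If that is what it shows, the ceiling can be raised in limits.conf on the search head, for example:

[mvexpand]
max_mem_usage_mb = 1000

Keeping only the fields you actually need before the expansion (as the | table Computer_name already does) also reduces how much memory each expanded row costs.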
Hi, I am dealing with an issue where I am ingesting logs that contain a few regular lines followed by XML data, but only 1 event shows up properly with the regular lines, and 2 other events get cut short after the first few lines (examples below). Each event is meant to be structured like Event1, however they are cut short, and when I check the actual log file everything is present. I tried changing limits.conf and setting maxKBps to 0, but no luck:

[thruput]
maxKBps = 0

Any other ideas as to what could be causing the issue?

Event1:
2024-11-01 10:04:24,488 23 INFO Sample1 - Customer:11111 ApiKey:xxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx DateTime:2024-11-01 10:04:24 RequestBody: <?xml version="1.0" encoding="utf-16"?>........<closing tag>

Event2:
2024-11-01 10:04:26,488 23 INFO Sample1 - Customer:11111 ApiKey:xxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx

Event3:
2024-11-01 10:04:28,488 23 INFO Sample1 - Customer:11111 ApiKey:xxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
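maxKBps only throttles forwarder throughput, so it would not split events; symptoms like this usually come from line breaking or truncation at parse time. A sketch of a props.conf stanza for the indexer or heavy forwarder (the sourcetype name is a placeholder, and the regex assumes every event starts with the timestamp format shown above):

[my_xml_app_logs]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2},\d{3}
TRUNCATE = 100000
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S,%3N
MAX_TIMESTAMP_LOOKAHEAD = 25

TRUNCATE defaults to 10000 bytes, which large embedded XML payloads can easily exceed, so raising it is often the missing piece.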
I have a Splunk table in a dashboard. I need to highlight every value of the "user" field in green; the values are a mix of numbers and letters. How do I highlight all of the values in green? When I select Color > Values I can only choose Automatic, which assigns random colors. How do I make them all green?
2024-11-01 12:25:49,065 +0000 ERROR startup:116 - Unable to read in product version information; isSessionKeyDefined=False error=__init__() got an unexpected keyword argument 'context'
2024-11-01 12:25:49,066 +0000 INFO startup:148 - Splunk appserver version=UNKNOWN_VERSION build=000 isFree=False isTrial=True productType=splunk instanceType=UNKNOWN
2024-11-01 12:25:49,066 +0000 INFO decorators:130 - loading uri: /en-US/
2024-11-01 12:25:49,068 +0000 INFO error:342 - GET /en-US/ 127.0.0.1 8065
2024-11-01 12:25:49,068 +0000 INFO error:345 - 500 Internal Server Error The server encountered an unexpected condition which prevented it from fulfilling the request.
2024-11-01 12:25:49,068 +0000 ERROR error:346 - Traceback (most recent call last):
  File "/opt/splunk/lib/python3.9/site-packages/cherrypy/_cprequest.py", line 628, in respond
    self._do_respond(path_info)
  File "/opt/splunk/lib/python3.9/site-packages/cherrypy/_cprequest.py", line 687, in _do_respond
    response.body = self.handler()
  File "/opt/splunk/lib/python3.9/site-packages/cherrypy/lib/encoding.py", line 219, in __call__
    self.body = self.oldhandler(*args, **kwargs)
  File "/opt/splunk/lib/python3.9/site-packages/splunk/appserver/mrsparkle/lib/htmlinjectiontoolfactory.py", line 78, in wrapper
    resp = handler(*args, **kwargs)
  File "/opt/splunk/lib/python3.9/site-packages/cherrypy/_cpdispatch.py", line 54, in __call__
    return self.callable(*self.args, **self.kwargs)
  File "</opt/splunk/lib/python3.9/site-packages/decorator.py:decorator-gen-1740>", line 2, in index
  File "/opt/splunk/lib/python3.9/site-packages/splunk/appserver/mrsparkle/lib/decorators.py", line 52, in rundecs
    return fn(*a, **kw)
  File "</opt/splunk/lib/python3.9/site-packages/decorator.py:decorator-gen-1738>", line 2, in index
  File "/opt/splunk/lib/python3.9/site-packages/splunk/appserver/mrsparkle/lib/decorators.py", line 134, in check
    return fn(self, *a, **kw)
  File "</opt/splunk/lib/python3.9/site-packages/decorator.py:decorator-gen-1737>", line 2, in index
  File "/opt/splunk/lib/python3.9/site-packages/splunk/appserver/mrsparkle/lib/decorators.py", line 185, in validate_ip
    return fn(self, *a, **kw)
  File "</opt/splunk/lib/python3.9/site-packages/decorator.py:decorator-gen-1736>", line 2, in index
  File "/opt/splunk/lib/python3.9/site-packages/splunk/appserver/mrsparkle/lib/decorators.py", line 264, in preform_sso_check
    update_session_user(sessionKey, remote_user)
  File "/opt/splunk/lib/python3.9/site-packages/splunk/appserver/mrsparkle/lib/decorators.py", line 207, in update_session_user
    en = splunk.entity.getEntity('authentication/users', user, sessionKey=sessionKey)
  File "/opt/splunk/lib/python3.9/site-packages/splunk/entity.py", line 276, in getEntity
    serverResponse, serverContent = rest.simpleRequest(uri, getargs=kwargs, sessionKey=sessionKey, raiseAllErrors=True)
  File "/opt/splunk/lib/python3.9/site-packages/splunk/rest/__init__.py", line 573, in simpleRequest
    h = httplib2.Http(timeout=timeout, proxy_info=None, context=ctx)
TypeError: __init__() got an unexpected keyword argument 'context'
Hello team, I've developed a custom command script that works perfectly when executed through the CLI, but it fails to run in the Splunk Web UI. I suspect this may be due to permissions or configuration issues, as both environments might not be using the same settings.

Details:
- Environment: Splunk Enterprise 9.2.2
- Script: a custom Python script located in the bin directory of my app.
- The script runs successfully when executed via the CLI, but in the UI it either returns errors or no results.

Troubleshooting steps taken:
- Verified that the script is in the correct bin directory with appropriate execution permissions.
- Checked commands.conf, authorize.conf, and app.conf for configuration inconsistencies.
- Ensured that the roles used in the UI have the necessary permissions.

Could this issue be related to role-based restrictions or specific configurations in the UI? Any insights on additional configuration checks or steps to align CLI and UI permissions would be greatly appreciated. Thank you in advance!
I have been working through the Splunk data models module and have been trying to get 100% on the data models quiz. I have gone through it about 20 times, getting 93%, and I have narrowed the question that is wrong down to: "What do Pivots require to create visualizations in Splunk? Select all that apply." I have tried every combination I can think of that could be valid but cannot get a correct answer. I have seen in another post that someone was having problems with this quiz, so maybe the quiz has a wrong answer? Any help answering this would be appreciated, as it has been frustrating me.
We deal with hundreds of IOCs (mostly flagged IPs) that come in monthly, and we need to check them for hits in our network. We do not want to keep running a summary search for them one at a time. Is it possible to use a lookup table (or any other way) to search hundreds at a time, or does this have to be done one at a time? I am very new to Splunk and still learning. I need to see whether we have had any traffic from or to these IPs.
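A lookup-based approach handles this well. A sketch, assuming a CSV lookup named flagged_ips.csv with a single column called ioc_ip, and firewall data with a dest_ip field (adjust the index and field names to your environment):

index=firewall
    [ | inputlookup flagged_ips.csv
      | rename ioc_ip AS dest_ip
      | fields dest_ip ]
| stats count BY dest_ip

The subsearch expands into (dest_ip="x.x.x.x" OR dest_ip="y.y.y.y" OR ...), so every IOC in the lookup is checked in a single search; repeat the same pattern with src_ip (or OR the two subsearches together) to catch traffic in both directions.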
Hi, I made changes to my indexer storage, but when I look at the disk usage section of the Monitoring Console, the value is negative. Has anyone faced this? I have already refreshed the assets with the Monitoring Console refresh and restarted the instance, but nothing changed.
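To check whether the underlying values Splunk is collecting are sane (as opposed to a stale asset table or a dashboard calculation issue), a sketch using the partition-space REST endpoint, assuming it is available in your version:

| rest /services/server/status/partitions-space
| table splunk_server, mount_point, fs_type, capacity, free

If capacity and free look correct here but the Monitoring Console panel is still negative, the problem is more likely the console's cached instance data or the panel's calculation than the indexer itself.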
I've got data like so: "[clientip] [host] - [time] [method] [uri_path] [status] [useragent]" and run the following search:

index=web uri_path="/somepath" status="200" OR status="400"
| rex field=useragent "^(?<app_name>[^/]+)/(?<app_version>[^;]+)?\((?<app_platform>[^;]+); *"
| eval app=app_platform+" "+app_name+" "+app_version

I've split up the useragent just fine and verified the output. I now want to compare status by "app", so I've added the following:

| stats count by app, status

Which gives me:

app                  status   count
android app 1.0      200      5000
ios app 2.0          400      3
android app 1.1      200      500
android app 1.0      400      12
ios app 2.0          200      3000

How can I compare, for a given "app" (combo of platform, name, version), the rate of success, where success is a 200 response and failure is a 400? I understand that I need to divide the success count by the success + failure count, but how do I combine this data? Also note that some apps may not have any 400 errors.
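A sketch of one way to do it with conditional counts instead of splitting by status, which also handles apps that have no 400s at all (they simply come out at 100%):

index=web uri_path="/somepath" (status="200" OR status="400")
| rex field=useragent "^(?<app_name>[^/]+)/(?<app_version>[^;]+)?\((?<app_platform>[^;]+); *"
| eval app=app_platform+" "+app_name+" "+app_version
| stats count(eval(status="200")) AS success, count AS total BY app
| eval success_rate=round(success/total*100, 2)

The parentheses around the two status clauses are added deliberately, since status="200" OR status="400" without them changes how the base search is evaluated.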
1. I need to fetch data based on deviceMac such that each row gets the corresponding data from each column.
2. It should fill NA or NULL if there is no corresponding data.
3. If you look at the id column, you are seeing more data than I want.

For example: if deviceMac 90:dd:5d:bf:10:54 is connected to SA91804F4A, then id has 2 values: SA91804F4A and f452465ee7ab; but if deviceMac d4:54:8b:bd:a1:c8 is connected to f452465ee7ab, then id has 1 value: f452465ee7ab. I want my output to look like this:

90:dd:5d:bf:10:54   SA91804F4A (do not include f452465ee7ab)
d4:54:8b:bd:a1:c8   f452465ee7ab

Splunk query used to get the output:

| search
| rex field=_raw "(?msi)(?<json>{.+}$$)"
| spath input=json
| spath input=json output=deviceMac audit.result.devices{}.mac
| spath input=json output=deviceName audit.result.devices{}.name
| spath input=json output=status audit.result.devices{}.health{}.status
| spath input=json output=connectionState audit.result.devices{}.connectionState
| spath input=json output=id audit.result.devices{}.leafToRoot{}.id
| eval time=strftime(_time,"%m/%d/%Y %H:%M:%S.%N")
| dedup deviceMac, id
| table time, deviceMac, connectionState, id, deviceName, status
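A sketch of one way to keep the id values paired with the right device: expand each entry of the devices array into its own row first, then run spath inside that row and keep only the first leafToRoot id (assuming the first entry is the node the device is directly connected to). Appended after the same base search:

| rex field=_raw "(?msi)(?<json>{.+}$$)"
| spath input=json output=device audit.result.devices{}
| mvexpand device
| spath input=device output=deviceMac mac
| spath input=device output=deviceName name
| spath input=device output=connectionState connectionState
| spath input=device output=status health{}.status
| spath input=device output=id leafToRoot{}.id
| eval id=mvindex(id, 0)
| fillnull value="NA" deviceMac deviceName connectionState status id
| eval time=strftime(_time, "%m/%d/%Y %H:%M:%S.%N")
| table time, deviceMac, connectionState, id, deviceName, status

Because each row now holds a single device object, the per-device fields cannot get cross-multiplied across devices, and fillnull covers the NA/NULL requirement.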
Hi, I'm currently setting up a pipeline to send logs from AWS Kinesis Firehose to Splunk. I'm using a Splunk Cloud trial as the destination endpoint, and my goal is to send data without requiring an SSL handshake. Here's a summary of my setup:

Service: AWS Kinesis Firehose
Destination: Splunk Cloud trial (using the Splunk HEC URL)
Goal: send data directly from Firehose to Splunk without SSL validation, if possible.

The delivery stream reports:

"errorCode": "Splunk.SSLHandshake",
"errorMessage": "Could not connect to the HEC endpoint. Make sure that the certificate and the host are valid."

To troubleshoot, I also tested sending a record with the following command:

aws firehose put-record --delivery-stream-name FirehoseSplunkDeliveryStream \
  --record='{"Data":"eyJldmVudCI6eyJrZXkxIjoidmFsdWUxIiwia2V5MiI6InZhbHVlMiJ9fQ=="}'

The SSL handshake error persists when connecting to the Splunk HEC endpoint. Has anyone configured a similar setup, or is there a workaround to disable SSL validation for the Splunk endpoint? I'm new to Splunk and just trying it out; any insights or suggestions would be greatly appreciated! Thanks!
In my company's Splunk server, when I do a search, I usually see a difference between the "Time" column and the timestamp inside the "Event" column for each log entry. An example:

Time: 10/21/24 11:06:37.000 AM
Event: 2024-10-21 11:31:59,232 priority=WARN ...

Why would the Time column show 11:06:37 while the Event field (the actual logged data) shows 11:31:59,232?
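_time (the Time column) is assigned at index time from whatever timestamp extraction rules apply to the sourcetype, so a gap like this usually means the timestamp is being pulled from the wrong part of the event, the wrong timezone is being applied, or the event was stamped with the indexer's clock. A sketch to measure the gap between _time and the timestamp written in the raw event (index and sourcetype are placeholders):

index=your_index sourcetype=your_sourcetype
| rex field=_raw "^(?<event_ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})"
| eval skew_seconds = strptime(event_ts, "%Y-%m-%d %H:%M:%S") - _time
| table _time, event_ts, skew_seconds

A constant skew of a whole number of hours points at a timezone setting; an inconsistent skew points at timestamp extraction (TIME_PREFIX/TIME_FORMAT) or delayed forwarding.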
I would like to graph a table that has 3 fields:

Label      X     Y
Value1     27    42
Value2     92    87
Value3     61    74

I think it would be a scatter graph. I am currently using Dashboard Studio (Splunk 9.3.x); maybe this is not available in Dashboard Studio yet, and if so, is there an option in the classic dashboards? Using the standard scatter graph panel, I currently get the X value plotted and the value as the legend. Thanks for any assistance, Jason
We’ve looked at how to integrate Kubernetes environments with Splunk Observability Cloud, but what about integrating cloud-managed Kubernetes platforms? In this post, we’ll dig into how to monitor Google Kubernetes Engine (GKE) by integrating with Splunk Observability Cloud.

Wait, but why?

GKE dashboards provide observability metrics around infrastructure and application health for clusters and workloads from within the Google Cloud Platform (GCP) itself. So why should we integrate GKE with an observability backend? Typically, not every piece of application infrastructure lives in the same cloud platform. In the middle of an incident when seconds matter, no one wants to navigate between platforms and remember where each discrete observability tool lives. Instead, having everything in one unified observability platform reduces toil and time to incident resolution. With end-to-end observability all in one place, root-cause analysis becomes a lot easier thanks to the ability to correlate issues impacting multiple parts of the stack. Third-party observability platforms provide advanced, flexible, and configurable dashboards, charts, and detectors along with support for application, service, and incident management integrations. Plus, depending on which platform you choose, you can avoid vendor lock-in by using an OpenTelemetry-native observability platform.

Observability Metrics in GKE

We can monitor our applications running in Google Kubernetes Engine from GCP itself by configuring the Kubernetes Engine Monitoring setting on our cluster to System and workload logging and monitoring. We can click into our deployed workload and see the CPU, memory, and disk utilization, and dig into the observability details around the health of our application, like container errors and restarts. We can even go into Google Observability Monitoring to further explore GKE cluster performance and build custom dashboards and charts. These things are all great and useful, but again, to unify our observability tools and improve the time to incident resolution, how can we get all of this information into a single backend observability platform?

Integrate GKE and Splunk Observability Cloud

There are a couple of ways you can integrate GKE into Splunk Observability Cloud. We’ll first follow along with the Connect to Google Cloud Platform documentation.

Integrate Google Cloud Platform

From within the GCP UI, we’ll go into the project we want to monitor and add a new Splunk service account under IAM & admin. Once the new service account is created, we can edit it to create a new service account key and then download the key as JSON.

Note: the GCP Cloud Resource Manager API must be activated so Splunk Infrastructure Monitoring can use it to validate permissions on the service account key. Also, if you want to monitor multiple GCP projects, the above steps need to be repeated for each one.

Next, we’ll navigate to Splunk Observability Cloud, add our Google Cloud Platform integration, and import our newly created service account key. A benefit of integrating GCP in this way is that we can sync all supported GCP services automatically. However, for our purposes, we’ll optionally refine the GCP synced services to only include GKE. Once saved, we’ll see Google Cloud Platform under our actively deployed integrations. Depending on the polling interval, it might take a few minutes for our GCP metric data to populate within Splunk Observability Cloud.
Integrate Google Kubernetes Engine

If you’re only interested in integrating GKE without additional GCP service integration (or if your security team won’t give you a service account), you can integrate with Splunk Observability Cloud via Helm chart. We can search for Google Kubernetes Engine in the list of available integrations and follow the guided Kubernetes Monitoring instructions. We’ll specify Google Cloud Platform as the provider and Google GKE as the distribution, and then follow along with the installation instructions: in the GKE UI, activate a Cloud Shell, and then enter the generated commands within the instance.

You’ll notice our helm install splunk-otel-collector command failed because of a missing instrumentation.endpoint value. This issue results from the fact that the version of Helm pre-installed in the Google Cloud Shell is out of date. To resolve the error, we need to upgrade the Helm version to one supported for the current Kubernetes version and rerun the commands in the Splunk Observability Cloud installation instructions. After this completes successfully, we can go back over to Splunk Observability Cloud and see our integrated telemetry data. From here, we can explore data like CPU and memory usage/utilization, resource capacity, container restarts, node state, and others right alongside the rest of our application and infrastructure data within Splunk Observability Cloud, even if other parts of the app run on-premises or on another public cloud provider.

Observability Metrics in Splunk Observability Cloud

Now that our GKE integration is complete, we can use all of the available Splunk Observability Cloud products and features to monitor our GKE environment. From Infrastructure Monitoring we can view our Google Cloud Platform navigators, dive into the health of our clusters, dig into a specific cluster, monitor pod health and performance, and view critical usage metrics. We can create detectors and alerting rules from within our navigators and manage them alongside the detectors for the rest of our applications and infrastructure. In a previous post, we looked in detail at how to use the Kubernetes Navigators to Detect and Resolve Issues in a Kubernetes Environment. With GKE now integrated with Splunk Observability Cloud, we can use these same tools to proactively monitor, detect, and alert on anomalies in our GKE environment.

Wrap up

We can monitor our GKE environment from within GCP itself, but integrating with a backend observability platform like Splunk Observability Cloud unifies our monitoring solution. With one unified observability platform, we can more easily detect incidents and resolve them faster without having to navigate between different observability tooling. Want to try integrating GKE with Splunk Observability Cloud for yourself? Try Splunk Observability Cloud free for 14 days!

Resources

Connect to Google Cloud Platform: Guided setup and other options
Install the Collector for Kubernetes using Helm
Splunk OpenTelemetry Collector for Kubernetes
Hello, I'm experiencing a connectivity issue when trying to send events to my Splunk HTTP Event Collector (HEC) endpoint. I have confirmed that HEC is enabled, and I am using a valid authorization token. Here's the command I am using:

curl -k "https://[your-splunk-instance].splunkcloud.com:8088/services/collector/event" \
  -H "Authorization: Splunk [your-token]" \
  -H "Content-Type: application/json" \
  -d '{"event": "Hello, Splunk!"}'

Unfortunately, I receive the following error:

curl: (28) Failed to connect to [your-splunk-instance].splunkcloud.com port 8088 after [time] ms: Couldn't connect to server

Troubleshooting steps taken:
- Successful connection from another user: notably, another user on a different system was able to use the same curl command to reach the same endpoint successfully.
- Network connectivity: I verified network connectivity with ping and received a timeout for all requests. I performed a traceroute and found that packets are lost after the second hop.

Despite these efforts, the issue persists. If anyone has encountered a similar issue or has suggestions for further troubleshooting, I would greatly appreciate your help. Thank you!
Dashboard panels have recently stopped being a consistent size across the row introducing grey space on our boards. This is happening throughout the app on Splunk Cloud Version:9.2.2403.111. Does anyone know of any changes or settings which may have affected this and how it can be resolved?  Thanks