All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

We currently have a large shcluster serving many department users, and for certain reasons we must split one department out for independent use. We considered creating a new cluster directly, but we have too many things to migrate. Our plan is to network-isolate some of the existing cluster nodes, reconfigure the isolated part as a separate cloned cluster, and finally delete the unnecessary apps on both clusters. Is this feasible?
Hey @Splunkers, I'm looking for valuable insights on this use case. I want to extract the numbers at the end of the log (highlighted in bold). Please help. Sample log: 74.133.120.000 - LASTHOP:142.136.168.1 - [19/May/2025:23:30:12 +0000] "GET /content/*/residential.existingCustomerProfileLoader.json HTTP/1.1" 200 143 "/cp/activate-apps?cmp=dotcom_sms_selectapps_111324" "Mozilla/5.0 (Linux; Android 10; K) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/136.0.0.0 Mobile Safari/537.36" 384622
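A hedged extraction sketch: assuming the number of interest is always the final whitespace-separated token of the event, a rex against _raw could capture it (the field name response_size is illustrative, not from the original post):

```
... | rex field=_raw "\s(?<response_size>\d+)$"
```

If the trailing number can be absent from some events, the events without a match will simply not get the field, so a follow-on `| where isnotnull(response_size)` may be useful.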
I am new to Splunk SOAR. I have a custom Python code block that I am creating, and I export a variable from it to a Splunk action block. The variable in the custom code block is set fine, and with debug statements I can see it set correctly. I then export that variable. In the Splunk action block, I import that variable, but when I try to use it, the value is "None". When I import SOAR system variables, it works fine. There are no error messages. SOAR has auto-fill for the variables, so it's not as if I have a typo. Screenshot below: {0} is my custom code variable that gets set to None; {1} is from the extract IP utility, and that one is set fine.
We are receiving the following Meraki sourcetypes, and we wonder if there is any app that presents this set of sourcetypes nicely:
- meraki:securityappliances
- meraki:devicesavailabilitieschangehistory
- meraki:assurancealerts
- meraki:licensessubscriptionentitlements
- meraki:apirequestshistory
- meraki:appliancesdwanstatuses
- meraki:licensesoverview
How do I get/add pre-populated values as checkboxes for the severity field?
Hello. I am working on creating an alert in Splunk to detect when a firewall stops sending logs. All logs from firewalls are forwarded to syslog in Splunk as sourcetype=pan:traffic. The problem is that we have HA pairs (active and passive firewalls), and I don't see how to construct the query so that it only triggers when BOTH firewalls in a pair (say, active city-fw01 and passive city-fw02) stop sending logs. We have more than 100 devices, so I am using a lookup table with the list. Any idea would be great, thanks.
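One hedged sketch of an approach: assume a lookup file (here called firewall_pairs.csv, with hypothetical columns host and pair_id mapping each firewall to its HA pair). Counting recent events per host and then summing per pair means the alert only fires when the whole pair is silent:

```
| inputlookup firewall_pairs.csv
| join type=left host
    [ | tstats count where index=* sourcetype=pan:traffic earliest=-15m by host ]
| fillnull value=0 count
| stats sum(count) as total_events by pair_id
| where total_events=0
```

The index, lookup name, column names, and the 15-minute window are all assumptions to adapt; the key idea is aggregating the per-host counts up to the pair level before testing for zero.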
Hello, my Splunk query is simple:

index=abc source=xxx.trc | transaction host source maxevents=100000 | table _time host source _raw

When I execute this up to the transaction command, it is fine: "<" and ">" appear as-is. But when I add the table command, "<" and ">" are changed to "&lt;" and "&gt;". Is there any way I can prevent this?
Hello everybody, is there a way to customize the default values of robots.txt and sitemap.xml in Splunk?
So this app has three parts: an IA, a TA, and the main app itself. We installed the IA on a forwarder, the TA on the Cluster Master, and the app on the search head. All three have API configuration options. So where do we enter the API settings? I can hardly imagine entering them in all three places.
Does the Linux universal forwarder use kernel hook technology, such as eBPF? The forwarder version is 8.2.1.
Hello everyone, we have a distributed deployment of Splunk Enterprise with 3 indexers. Recently, it has been raising "Detecting bucket ID conflicts" warnings.

So far I have tried:
https://community.splunk.com/t5/Splunk-Enterprise/Why-are-we-encountering-an-issue-after-a-data-migration-with/m-p/567695
https://splunk.my.site.com/customer/s/article/ERROR-Detecting-bucket-ID-conflicts

I tried renaming the conflicting bucket, moving DISABLED buckets out, and combining these options as well as applying them separately. The warning is raised when a rolling restart is executed. When it is resolved on one indexer, at the next rolling restart it is raised on the next indexer, and so on, in circles. Please advise.
I have the below configuration in my logback.xml. While the url, token, index, sourcetype and disableCertificateValidation fields are getting picked up, the batchInterval, batchCount and sendMode are not. I ran my application in debug mode, and I did see that `ch.qos.logback.core.model.processor.AppenderModelHandler` is picking up these tags as submodels correctly. Can someone please help me understand if I'm doing anything wrong here?

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <appender name="SPLUNK_HTTP" class="com.splunk.logging.HttpEventCollectorLogbackAppender">
    <url>my-splunk-url</url>
    <token>my-splunk-token</token>
    <index>my-index</index>
    <sourcetype>${USER}_local</sourcetype>
    <disableCertificateValidation>true</disableCertificateValidation>
    <batchInterval>1</batchInterval>
    <batchCount>1000</batchCount>
    <sendMode>parallel</sendMode>
    <retriesOnError>1</retriesOnError>
    <layout class="my-layout-class">
      <!-- some custom layout configs -->
    </layout>
  </appender>
  <logger name="com.myapplication" level="DEBUG" additivity="false">
    <appender-ref ref="SPLUNK_HTTP"/>
  </logger>
  <root level="DEBUG">
    <appender-ref ref="SPLUNK_HTTP"/>
  </root>
</configuration>

I'm using the following dependency for Splunk, if it matters:

<dependency>
  <groupId>com.splunk.logging</groupId>
  <artifactId>splunk-library-javalogging</artifactId>
  <version>1.11.7</version>
</dependency>
Hi Splunkers, I’m running a Splunk Search Head Cluster (SHC) with 3 search heads, authenticated via Active Directory (AD). We have several custom apps deployed.

Currently, users are able to:
- Create alerts
- Delete alerts
- Create reports

However, they are unable to delete reports.

Investigation details: from the _internal logs, here’s what I observed.

When deleting an alert, the deletion works fine:

192.168.0.1 - user [17/May/2025:11:06:59.687 +0000] "DELETE /en-US/splunkd/__raw/servicesNS/username/SOC/saved/searches/test-user-alert?output_mode=json HTTP/1.1" 200 421 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:138.0) Gecko/20100101 Firefox/138.0" - eac203572253a2bd3db35ee0030c6a76 68ms
192.168.0.1 - user [17/May/2025:11:06:59.690 +0000] "DELETE /servicesNS/username/SOC/saved/searches/test-user-alert HTTP/1.1" 200 421 "-" "Splunk/9.4.1 (Linux 6.8.0-57-generic; arch=x86_64)" - - - 65ms

When deleting a report, it fails with a 404 Not Found:

192.168.0.1 - user [17/May/2025:10:27:51.699 +0000] "DELETE /en-US/splunkd/__raw/servicesNS/nobody/SOC/saved/searches/test-user-report?output_mode=json HTTP/1.1" 404 84 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:138.0) Gecko/20100101 Firefox/138.0" - eac203572253a2bd3db35ee0030c6a76 5ms
192.168.0.1 - user [17/May/2025:10:27:51.702 +0000] "DELETE /servicesNS/nobody/SOC/saved/searches/test-user-report HTTP/1.1" 404 84 "-" "Splunk/9.4.1 (Linux 6.8.0-57-generic; arch=x86_64)" - - - 1ms

Alerts are created under the user’s namespace (servicesNS/username/...) and can be deleted by the user. Reports appear to be created under the nobody namespace (servicesNS/nobody/...), which may be the reason users lack permission to delete them. Has anyone faced a similar issue?
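To confirm who actually owns each saved search, a minimal diagnostic sketch run from any search head (the app filter "SOC" is taken from the paths above and may need adjusting):

```
| rest /servicesNS/-/-/saved/searches splunk_server=local
| search eai:acl.app="SOC"
| table title eai:acl.owner eai:acl.app eai:acl.sharing
```

If the reports show eai:acl.owner=nobody with app-level sharing, deleting them requires write permission on the object (or the app), which ordinary AD-mapped roles may not have.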
Given [cmd_data=list cm device recusive], Splunk auto-extracts just [cmd_data=list]. The end result I want is to be able to filter on cmd_data and get the full command / multiple values.

Will these configs work?

transforms.conf:
[full_cmd]
SOURCE_KEY = cmd_data
REGEX = (cmd_data)\S(?<full_cmd>.*)
FORMAT = full_cmd::$1

props.conf:
EXTRACT-field full_cmd
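For comparison, a hedged sketch of a configuration that could achieve this, assuming the value runs up to the closing bracket (the sourcetype name my_sourcetype is illustrative). Note that a transforms.conf stanza is referenced from props.conf with REPORT-, not EXTRACT- (EXTRACT- takes an inline regex instead):

```
# props.conf (search head), hypothetical sourcetype
[my_sourcetype]
REPORT-full_cmd = full_cmd

# transforms.conf
[full_cmd]
SOURCE_KEY = _raw
REGEX = \[cmd_data=(?<full_cmd>[^\]]+)\]
```

The named capture group makes FORMAT unnecessary here; SOURCE_KEY is _raw rather than the already-truncated auto-extracted cmd_data field.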
Hi Splunk Gurus, we’re currently testing Splunk OpenTelemetry (OTel) Collectors on our Kubernetes clusters to collect logs and forward them to Splunk Cloud via HEC. We’re not using Splunk Observability at this time. Is it possible to manage or configure these OTel Collectors through the traditional Splunk Deployment Server? If so, could you please share any relevant documentation or guidance? I came across documentation related to the Splunk Add-on for the OpenTelemetry Collector, but it appears to be focused on Splunk Observability. Any clarification or direction would be greatly appreciated. Thanks in advance for your support!
Hello all, I am reviewing the guides for the Splunk add-on for vCenter Logs and the Splunk add-on for VMware ESXi logs, and I have the following question: are both required in an environment where vCenter is present, or is the ESXi logs add-on unnecessary in that case? In other words, is the add-on for VMware ESXi just for simple bare-metal installs that do not use vCenter? Do ESXi hosts managed by vCenter send all of their logs up to vCenter anyway, so that one needs only the add-on for vCenter? Second question: am I reading correctly that the add-on for vCenter requires BOTH a syslog output and a vCenter user account for API access?
We are creating a small on-prem cluster with minimal ingestion of around 2 GB a day. I wonder what the best way to approach the license would be: is the usage-based (as opposed to ingestion-based) license available for an on-prem environment? And for something so small, does it make sense to switch and host it in the cloud instead?
Hello Team, I need to back up my Splunk log data to an AWS S3 bucket, but I need to confirm in which format the logs will be stored, so that if I need those logs in the future I can convert them to a readable form. Please note, I really need to know the exact format of the log data as stored in Splunk. Please confirm. Regards, Asad Nafees
I have events like the below:

1) date - timestamp
Server - hostname
Status - host is down
Threshold - unable to ping

2) date - timestamp
Db - dbname
Status - database is down
Instance status - DB instance is not available

I need to write an eval condition that creates a new field, description: if the status field is "database is down", add the date, Db, Status, and Instance status fields to the description field; and if the status is "host is down", add the date, Server, Status, and Threshold fields to the description field.
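A minimal sketch of such a conditional, assuming the fields have already been extracted under the names date, Server, Db, Status, Instance_status, and Threshold (all names illustrative):

```
... | eval description=case(
        Status=="database is down", date." | ".Db." | ".Status." | ".Instance_status,
        Status=="host is down",     date." | ".Server." | ".Status." | ".Threshold,
        true(), null())
```

case() evaluates its condition/value pairs in order and returns the first match; the trailing true() branch leaves description null for any other status.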
Hi all, I am trying to deploy my apps from the deployment server with the command:

/opt/splunk/bin/splunk apply shcluster-bundle -target https://splunksrc:8089 -preserve-lookups true

It had never failed to do this task before, but now I am getting this error:

Error while deploying apps to first member, aborting apps deployment to all members: Error while deleting app=rest_ta on target=https://splunksrc:8089: Non-200/201 status_code=500; {"messages":[{"type":"ERROR","text":"\n In handler 'localapps': Cannot update application info: /nobody/rest_ta/app/install/state = disabled: Could not find writer for: /nobody/rest_ta/app/install/state [0] [/opt/splunk/etc]"}]}

Both nodes (deployment and splunksrc) have enough disk space. Any ideas? Thanks, Francesco