All Topics

Hi all, I'm planning an architecture with a redundant heavy forwarder and two syslog collector servers. Where do I place a load balancer, and how do these components communicate in terms of ports and firewalls? What do I need to plan? I can't find the right places to read about this in the documentation. Thank you for your help in advance. Oj.
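For illustration only, here is a minimal sketch of one common layout (the host names, file paths, and the choice of port 9997 for indexer traffic are assumptions, not taken from the post): devices send syslog to a VIP on the load balancer over 514/udp or 514/tcp, the load balancer spreads that traffic across the two syslog collector servers, a syslog daemon on each collector writes the messages to disk, and a forwarder instance on each collector monitors those files and forwards to the indexing tier.

# inputs.conf on each syslog collector (hypothetical paths)
[monitor:///var/log/remote-syslog/*/*.log]
sourcetype = syslog
index = network
host_segment = 4

# outputs.conf on the collectors / heavy forwarders (hypothetical indexer names)
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = idx01.example.local:9997, idx02.example.local:9997

Firewall-wise this sketch needs 514 open from the devices to the load balancer VIP and from the VIP to the collectors, 9997/tcp from the collectors and heavy forwarders to the indexers, and 8089/tcp to any deployment server or management node, assuming default ports.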
Hello, I am working on a distributed environment with:
- 1x SH with Splunk ES installed (also the deployment server)
- 7x indexers (search peers)
On my SH, I see a lot of skipped executions on scheduled searches related to the Splunk CIM app. Specifically, I see a 99% skip ratio on scheduled reports with a name format of: _ACCELERATE_DM_Splunk_SA_CIM_Splunk_CIM_Validation.[Datamodel_Name]_ACCELERATE_
I accessed the Data Models page and expanded the CIM Validation (S.o.S) data model. The information I got is "Access Count: 0 - Last Access: -", while its size is 750MB and it is frequently updated. My question: can I disable acceleration on this data model since it is never accessed? Thank you in advance. With kind regards, Chris
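If acceleration really isn't needed, it can be switched off per data model. A minimal sketch, assuming the stanza name matches the data model's internal name (verify it on the Data Models page before editing):

# datamodels.conf in a local directory of the app that owns the data model
[Splunk_CIM_Validation]
acceleration = false

The same toggle is exposed in the UI under Settings > Data models > Edit > Edit Acceleration.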
Hey! I have an HTML form. Can I use it in an alert to send a message, so that the notification is not just plain text but a message formatted as HTML?
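Splunk's email alert action can send an HTML body, but it sends static markup rather than a live form, so the form would arrive as rendered HTML only. A minimal sketch, assuming a saved alert named "My HTML alert" and a made-up recipient:

# savedsearches.conf
[My HTML alert]
action.email = 1
action.email.to = someone@example.com
action.email.content_type = html
action.email.message.alert = <html><body><h2>$name$ fired</h2><p>$job.resultCount$ matching results.</p></body></html>

The $name$ and $job.resultCount$ tokens are standard email alert tokens; the markup itself is whatever you paste in.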
Previously, my heavy forwarder was working fine and I was able to search the latest logs from my search head. But while testing another app for another SIEM on the heavy forwarder, it has been routing data there since. Now that the POC has ended, we want to switch back to sending data to our Splunk indexer. We removed the SIEM app and are left with our outputs for this forwarder, which point to the Splunk indexer IP. I tried restarting the Splunk service on this heavy forwarder, but I still cannot search those hosts on the search head. Is there anything to look out for?
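A few things worth checking, with a hedged example of what a plain "send everything to the indexer" outputs.conf might look like (the group name and IP are placeholders):

# outputs.conf on the heavy forwarder
[tcpout]
defaultGroup = splunk_indexers

[tcpout:splunk_indexers]
server = 10.0.0.10:9997

It is also worth running splunk btool outputs list --debug and splunk btool props list --debug on the forwarder, to confirm that no leftover stanzas from the POC app (for example a TRANSFORMS routing rule that sets _TCP_ROUTING or _SYSLOG_ROUTING to the old SIEM group) are still winning in the merged configuration.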
I tried to upgrade Python for Scientific Computing to v5.3 on my cluster. I followed the instructions and first un-tarred the add-on on my master node. However, whenever I run /opt/splunk/bin/splunk apply shcluster-bundle, I always get this error:
Error while deploying apps to first member, aborting apps deployment to all members: Error while updating app=Splunk_SA_Scientific_Python_linux_x86_64 on target=https://172.18.109.2:8089: Network-layer error: Broken pipe
This kind of error didn't happen when I installed another, smaller add-on earlier this morning. After failing with that error multiple times, I noticed the doc says: "If you attempt to push a very large tarball (>200 MB), the operation might fail due to various timeouts. Delete some of the contents from the tarball's app, if possible, and try again." The original python-for-scientific-computing-for-linux-64-bit_300.tgz is already 480MB, and the un-tarred /opt/splunk/etc/shcluster/apps/Splunk_SA_Scientific_Python_linux_x86_64 folder on the master node is 2.5GB, so I suspect the large tarball size is the problem. But how can I solve this? What contents from the tarball's app can I delete?
How does the DMC calculate load average? I understand the number comes from the REST API, but that does not explain how exactly the number is calculated, and the number is quite different from the Linux load average you get from the uptime command. Thanks. Hanny.
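As a rough, unverified sketch of where the number may come from: the monitoring console panels are built on the Hostwide resource-usage introspection data, and the field name below is an assumption about what that data contains. If the "normalized" variant is what the panel charts, it would be the OS load average divided by the number of CPU cores, which would explain the gap versus uptime.

index=_introspection sourcetype=splunk_resource_usage component=Hostwide host=<your_host>
| timechart avg(data.normalized_load_avg_1min) AS load_average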
We have Splunk 8.0.3 deployed to a private AWS cloud. We use AWS i3.8xlarge instance types for our indexers, recently upgraded from i3.4xlarge. We combine the 1.7TB "ephemeral" volumes into a logical volume group and use them for Splunk index buckets mounted on /opt/splunk/var/lib/splunk. When we were running on i3.4xlarge instances with two 1.7TB volumes, we were using 3TB of the 3.4TB logical volume group per indexer for Splunk indexes. When we upgraded to i3.8xlarge we removed the old indexers, and the new indexers are only using 200GB of the 6.8TB logical volume groups, slowly creeping up at about 4GB/hour. I have tried running searches over long periods of time, but they fail with:
! DAG Execution Exception: Search has been cancelled
! Search auto-canceled
! The search job has failed due to an error. You may be able view the job in the Job Inspector
How do I get the cache volumes to fill up again quickly with index data from the S3 storage so my searches will be fast and complete again?
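With SmartStore, the local volumes act as a cache, so they only refill as searches (or the hot-bucket workload) pull buckets back from S3; there is no supported "prefetch everything" switch that I know of. Cache behaviour is governed by server.conf settings such as the ones below; the values shown are examples only, not recommendations:

# server.conf on each indexer
[cachemanager]
max_cache_size = 0                     # 0 = no explicit cap in MB
eviction_padding = 5120                # MB of free disk the cache manager preserves
hotlist_recency_secs = 86400           # recently indexed buckets are evicted last
hotlist_bloom_filter_recency_hours = 360

Running the broad searches in smaller time slices, so each slice completes before being auto-canceled, is one way to pull data back gradually.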
One of the requirements for a multisite indexer cluster with SmartStore is: "Site locations host two object stores in an active-active replicated relationship. Depending on the deployment type, the set of cluster peer nodes can be sending data to one or both object stores." To fulfill this requirement, is there some configuration I have to do in Splunk or in the S3 bucket?
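On the Splunk side the peers only ever point at a remote volume endpoint; the active-active replication between the two object stores is normally something the storage layer itself provides (for example S3 replication), not a Splunk setting. A minimal indexes.conf sketch with placeholder bucket and endpoint values:

# indexes.conf on the cluster peers
[volume:remote_store]
storageType = remote
path = s3://my-smartstore-bucket
remote.s3.endpoint = https://s3.us-east-1.amazonaws.com

[default]
remotePath = volume:remote_store/$_index_name

Whether each site's peers point at their local object store endpoint or at a single shared one depends on the deployment type the requirement refers to.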
Hello all, does anyone know how I can get the latest date from a lookup file? I am using the search below:
| inputlookup append=t Blue_Marbles_Report.csv
| rename "Last Scan Date" as "Last_Scan_Date"
| eval updated=strptime(Last_Scan_Date,"%FT%T%:z")
| eval desired_time=strftime(updated, "%B %d, %Y")
| stats latest(desired_time) as desired_time
| table Marbles, desired_time
But latest(desired_time) does not deliver any results. This is what I have in my original file:
Marbles  Last_Scan_Date
Blue     08/01/2020
Blue     10/04/2020
Blue     11/08/2021
Desired result:
Marbles  desired_time
Blue     11/08/2021
Hope to get some help on this, thanks in advance.
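A hedged sketch of an alternative, assuming the dates in the CSV really are in MM/DD/YYYY form as shown (the original strptime format string expects an ISO 8601 timestamp, which would leave "updated" null): parse to epoch first, take the maximum per marble, then format.

| inputlookup Blue_Marbles_Report.csv
| rename "Last Scan Date" as Last_Scan_Date
| eval updated=strptime(Last_Scan_Date, "%m/%d/%Y")
| stats max(updated) AS updated BY Marbles
| eval desired_time=strftime(updated, "%B %d, %Y")

Using max() on the numeric epoch avoids relying on latest(), which depends on _time and event order rather than on the date values in the lookup.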
I have a multisite indexer cluster with 3 sites and 2 indexers in each site. RF and SF are set to 3:
RF = origin:1, total:3
SF = origin:1, total:3
However, I am getting numerous errors like these:
missing enough suitable candidates to create replicated copy in order to meet replication policy. Missing={site3:1}
missing enough suitable candidates to create replicated copy in order to meet replication policy. Missing={site2:1}
missing enough suitable candidates to create replicated copy in order to meet replication policy. Missing={site1:1}
I suspect this might be due to an incorrect RF/SF. Can anyone please help confirm?
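For reference, a sketch of how the policy is usually declared on the cluster manager. With three sites and two peers per site, origin:1,total:3 should normally be satisfiable, so errors like these often point at peers that were down or at buckets created under an earlier policy rather than at the RF/SF syntax itself (that is a guess, not a diagnosis):

# server.conf on the cluster manager node
[clustering]
mode = master
multisite = true
available_sites = site1,site2,site3
site_replication_factor = origin:1,total:3
site_search_factor = origin:1,total:3

If one copy per site is the intent, an explicit form such as site_replication_factor = origin:1,site1:1,site2:1,site3:1,total:3 pins the placement instead of leaving the non-origin copies to the manager's discretion.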
I have commonly seen deployments with a separate partition for hot/warm data. However, I am keen to know: if I am using SmartStore and the Splunk homePath (the hot/warm bucket directory) is on the same file system as the Splunk software installation, would that cause any issue, or would it be against Splunk recommendations?
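Co-locating hot/warm (which under SmartStore is effectively the local cache) with the Splunk installation mainly risks the cache filling the OS filesystem. Purely as an illustration of bounding that, here is a volume-based homePath with example paths and sizes; whether such a cap is honored the same way under SmartStore's cache manager is something to verify in the SmartStore docs, so treat this as a sketch, not a recommendation.

# indexes.conf on the indexers
[volume:hotwarm_cache]
path = /opt/splunk/var/lib/splunk
maxVolumeDataSizeMB = 500000

[main]
homePath = volume:hotwarm_cache/defaultdb/db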
Hi all, is Automated Root Cause Analysis supported in an on-premises environment? Where can I find a list of features that are supported on-prem? Thanks
Has anyone yet created or written any regex for it? Thanks
I'm having an issue with a search of mine. I've been trying to organize the matrix so that it will be ready for my pivot and eventually a dashboard visual, but there are three columns that are troublesome. It seems as though my eval command is only working with one of the start_DateNo values and returning results for only one instance (see below). Is there an order of operations that I'm missing with my formula, or is there a better command to get the data to what I want? In addition, it seems my "slaName" isn't being reflected accurately either. Below is a snippet of the error, and then a row/column matrix showing what I'm ultimately trying to get the data to look like.
Error: (screenshot of the failing results, not included here)
Goal:
key | team_name | start_DateNo | start_weekNo | start_yearNo | slaName | UNIQUE_SLA_Count
ADVANA-104 | ADVANA | 2020-6-11 | 24 | 20 | DSDE Pending Approval SLA | ADVANA-104 / 24 / 20 / DSDE Pending Approval SLA
ADVANA-104 | ADVANA | 2020-6-11 | 24 | 20 | DSDE Ready to Start SLA | ADVANA-104 / 24 / 20 / DSDE Ready to Start SLA
ADVANA-104 | ADVANA | 2021-5-14 | 19 | 21 | DSDE In Progress SLA | ADVANA-104 / 19 / 21 / DSDE In Progress SLA
Any help would be much appreciated; I've been going back and forth for a few hours now trying to get this where I need it.
For editing purposes, here is the SPL from the picture above:
index=jira sourcetype="jira:sla:json" OR sourcetype="jira:issues:json"
| rex field=startDate "(?P<start_DateNo>\d+-\d+-\d+)"
| rex field=startDate "(?P<start_TimeNo>\d+:\d+:\d+)"
| eval start_weekNo=strftime(strptime(start_DateNo,"%Y-%m-%d"),"%V")
| eval start_yearNo=strftime(strptime(start_DateNo,"%Y-%m-%d"),"%y")
| eval key=coalesce(key,issueKey)
| stats values(team_name) as team_name values(start_DateNo) as start_DateNo values(start_weekNo) as start_weekNo values(start_yearNo) as start_yearNo values(slaName) as slaName values(fields.status.name) as fields.status.name by key
| mvexpand slaName
| mvexpand start_DateNo
| mvexpand start_weekNo
| mvexpand start_yearNo
| where team_name="ADVANA"
| where key="ADVANA-104"
| strcat key " / " start_weekNo " / " start_yearNo " / " slaName UNIQUE_SLA_Count
| search UNIQUE_SLA_Count="ADVANA-104 / 19 / 20 / DSDE Pending Approval SLA "
Thank you!
I need to use federated search, which does not support search-time lookups at this time in Splunk 8.2.2.1. I came across the Splunk doc on adding fields at ingest time (index time) based on an ingest-time lookup: https://docs.splunk.com/Documentation/Splunk/8.2.3/Data/IngestLookups
What I am trying to do: during event ingestion, look up the value of the field "application", match it against the CSV file shown below, and add the fields APP and COMP based on the application value. E.g., if an incoming event has application=Linux, add an APP field with value 9001 and a COMP field with value 8001. But it does not work. Please help. Here are the files I created, following the documentation:
/opt/splunk/etc/system/lookups/APP_COMP.csv:
application,APP,COMP
Linux,9001,8001
Console,9002,8002
Windows,9003,8003
/opt/splunk/etc/system/local/props.conf:
[access_combine_wcookie]
TRANSFORMS = Active_Events
/opt/splunk/etc/system/local/transforms.conf:
[Active_Events]
INGEST_EVAL= APPCOMP=lookup("APP_COMP.csv", json_object("application", application), json_array("APP", "COMP"))
/opt/splunk/etc/system/local/fields.conf:
[APP]
INDEXED = True
[COMP]
INDEXED = True
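Two hedged observations and a sketch. First, the lookup() eval function returns a JSON object, so APPCOMP would hold something like {"APP":"9001","COMP":"8001"} rather than two separate indexed fields; json_extract() can split it into individual fields. Second, INGEST_EVAL only sees fields that exist at parse time, and a stanza named [access_combine_wcookie] (note the missing "d" if the built-in access_combined_wcookie sourcetype was intended) would not normally carry an "application" field during ingestion, so the lookup key may simply be null. A sketch under those assumptions:

# transforms.conf
[Active_Events]
INGEST_EVAL = APP=json_extract(lookup("APP_COMP.csv", json_object("application", application), json_array("APP")), "APP"), COMP=json_extract(lookup("APP_COMP.csv", json_object("application", application), json_array("COMP")), "COMP")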
I have had a SplunkBase app for a few years and noticed the install count has decreased (maybe it was reset at some point?), yet my download count has continued increasing. What explains this? For what it's worth, my app has more than one version (i.e. v1, v2).
Does anyone know where I can download the v21.1.1.31776 Java agent?
I have nested events that look like this in Splunk:
container_id: 13243d84e63d8d5b56c5
container_name: /ecs-stg-compute-instances-226-ur-2-c499f4ac
log: {"module": "ur.uhg", "functions": ["unlock_user_processing"], "session-id": "XUHWnDAAkR3AwrsXxtL339z9rEf-l", "email": "xxx@gmail.com", "user-id": 3, "user-account-id": 3, "start-time": "2021-11-08T19:59:36.711483", "end-time": null, "callback-function": "calculate_metrics", "emails-processed": 316, "emails-left-to-process": 0, "images-processed": 316, "iterations": 5, "iteration-times": [56.61728, 162.878587, 43.512794, 24.918005, 0.954233], "event": "chained_functions() called.", "level": "debug", "timestamp": "2021-11-08T20:04:25.905376Z"}
source: stdout
The "log" value is treated as a string even though it's a JSON object. How can I parse the "log" value into key/value pairs?
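A minimal search-time sketch, with the index and sourcetype as placeholders: spath can parse the embedded JSON string in the log field into individual fields.

index=your_container_index sourcetype=your_docker_sourcetype
| spath input=log
| table module, "session-id", email, "emails-processed", event, level, timestamp

Field names containing hyphens, such as session-id, are easiest to reference later when quoted, as above.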
Hello, has anyone done this yet: deliver IBM z/OS RACF, ACF2, & Top Secret user and Db2 access data to Splunk? I know these events are logged as SMF records and have record numbers. Has anyone been able to move these logs into Splunk? I am looking for the easiest way to get this data into Splunk Enterprise. If you have any ideas, or even a little knowledge of this, it is welcome. Any tips and hints are gratefully received. Thanks
Has anyone sent logs from BMC AMI Defender to Splunk? I would like to know. Thanks