All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hello, I am trying to install standalone Splunk using the splunk-ansible project. I want to install Splunk as a standalone instance, but I am new to Ansible and could not figure out how to use the project (which playbook to run, and with which variables). Can anyone help me with a basic installation? My server runs CentOS 7.
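For what it's worth, splunk-ansible is normally driven through its environ.py inventory script and the site.yml playbook, with the role and credentials supplied as environment variables. A minimal sketch (the password and download URL below are placeholders, not values from the docs):

```
# Run from the root of the splunk-ansible checkout
export SPLUNK_ROLE=splunk_standalone
export SPLUNK_PASSWORD='your-admin-password'
export SPLUNK_BUILD_URL='<URL of the Splunk .tgz or .rpm to install>'
ansible-playbook -i inventory/environ.py site.yml
```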
I am receiving syslog data from the firewall and I would like to send a subset of it to the nullQueue. The issue I am having is that I have two set values (action and srcip) but six values for the destination. The format of the fields in the raw data is:

action="deny" srcip=10.12.55.55 dstip=192.168.10.0

The regex I have come up with is:

(?:srcip=10\.12\.55\.55)|(?:dstip=192\.168\.10\.)|(?:dstip=172\.16\.10\.)|(?:dstip=172\.18\.10\.)|(?:action="deny")

What happens is that the "|" acts as an "or", so I will be dumping too much information. My question is: what is the format to place in transforms.conf to filter out the events to be dropped? Regards, Scott
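Routing to nullQueue is done with a props.conf/transforms.conf pair, and the alternation problem goes away if the REGEX requires all of the conditions in the order they appear in the raw event. A sketch, assuming the fields really do appear in the order action, srcip, dstip (the sourcetype name is a placeholder for your firewall sourcetype):

```
# props.conf
[your_firewall_sourcetype]
TRANSFORMS-dropdeny = drop_deny_subnets

# transforms.conf
[drop_deny_subnets]
REGEX = action="deny"\s+srcip=10\.12\.55\.55\s+dstip=(?:192\.168\.10\.|172\.16\.10\.|172\.18\.10\.)
DEST_KEY = queue
FORMAT = nullQueue
```

These settings must live on the indexers or the heavy forwarder that first parses the data, not on a universal forwarder.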
Is there a Splunk search that lists all the active indexers my search head can pull data from?
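One approach is the REST endpoint for distributed search peers, which lists the indexers this search head knows about and their status (run on the search head):

```
| rest /services/search/distributed/peers splunk_server=local
| table title status
```

Alternatively, `| tstats count where index=* by splunk_server` shows which indexers are actually returning events for your searches.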
Question: Are there any locking or concurrency guarantees when playbooks are operating on a container? Issue I am trying to solve: I have a playbook (call it sub-playbook) that is called by two different parent playbooks (parent1 and parent2). parent1 sets param1 to True using save_object, keyed to containerN (phantom.save_object(container_id=container['id'], key='param1', value={'param1': True})), and calls sub-playbook. parent2 sets param1 to False (also keyed to containerN) and calls sub-playbook. sub-playbook gets the value of param1 and acts differently depending on whether it's True or False. sub-playbook is called synchronously by both parent1 and parent2. As far as I know there are no concurrency guarantees (e.g. parent1 -> sub-playbook could get interrupted after setting param1 but before sub-playbook checks/reads param1). This would cause a race condition. Is that accurate, or does Phantom lock all access to a container to the playbook (and sub-playbooks) operating on it? Assuming my assumption is correct and there are no concurrency guarantees, are there any built-in patterns that people use to work around this? I can envision using some external capability (Redis, maybe a custom app that uses SQLite, etc.) to add my own locking, but I don't want to reinvent the wheel if it's not necessary. Thanks!
I have a table in my dashboard:

my search ... "$search_field$"
| lookup customer.csv license_hash AS license_hash OUTPUT customer_id customer_name
| iplocation client_ip
| table customer_id, customer_name, client_ip, ...

In the dashboard I have a search field, and on the table a drilldown onto this search field. The problem: when I click on values of customer_id, client_ip, City, customer_name, etc., I get no results. Of course not, because those fields are produced by the lookup, after the search field has already been applied. Which changes do I have to make so that I can search with these values? Adding | search "$search_field$" after the lookup doesn't work.
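One pattern that usually works is to have the drilldown set the token to a field=value pair instead of a bare value, so that the appended filter targets the clicked column. A sketch in Simple XML (the token name matches your example; $click.name2$ and $click.value2$ are the standard click tokens for the clicked column and cell):

```
<drilldown>
  <set token="search_field">$click.name2$="$click.value2$"</set>
</drilldown>
```

Then filter after the lookup with | search $search_field$ (without surrounding quotes in the SPL, so the field=value pair is applied as a real search term rather than a literal string).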
We have many Sun Solaris 10/11 servers that don't report data for the Unix TA vmstat.sh script. Is there a known issue?
I'm trying to find documentation that states how to set up the controller for database monitoring when dealing with a SQL Server instance (myserver\myinstance). Does myserver\myinstance go into the hostname? Thanks
I'm looking to extract a JSON object from a JSON array using a dynamic index number (based on data in another part of the JSON). Simplified JSON example:

{
  "header": {"index_number": 2},
  "rows": [
    {"carrot": 108, "apple": 29},
    {"carrot": 12, "apple": 44},
    {"carrot": 54, "apple": 23},
    {"carrot": 67, "apple": 9}
  ]
}

In this example the value of the field index_number is used to select the correct JSON object from the JSON array. index_number is not a static number, so the JSON object returned from the array will change. Using the spath command I can statically extract a JSON object, but I cannot do it dynamically using the value in index_number, as path=rows{index_number_to_use} does not work. Example SPL:

| makeresults
| eval json_field = "{\"header\": {\"index_number\": 1}, \"rows\": [{\"carrot\": 108, \"apple\": 29}, {\"carrot\": 12, \"apple\": 44}, {\"carrot\": 54, \"apple\": 23}, {\"carrot\": 67, \"apple\": 9}]}"
| spath input=json_field output=index_number_to_use path=header.index_number
| spath input=json_field output=row_found path=rows{1}

Any ideas on how this could be achieved?
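One way to do this without a dynamic spath path is to use spath as an eval function: the path "rows{}" returns the whole array as a multivalue field, which mvindex can then address with the extracted number. Note that mvindex is 0-based while spath's {n} is 1-based, hence the "- 1":

```
| makeresults
| eval json_field = "{\"header\": {\"index_number\": 1}, \"rows\": [{\"carrot\": 108, \"apple\": 29}, {\"carrot\": 12, \"apple\": 44}, {\"carrot\": 54, \"apple\": 23}, {\"carrot\": 67, \"apple\": 9}]}"
| eval index_number_to_use = spath(json_field, "header.index_number")
| eval row_found = mvindex(spath(json_field, "rows{}"), index_number_to_use - 1)
```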
We are using the Splunk add-on for Symantec Endpoint Protection version 3.0.0. We noticed that the fields are not getting extracted automatically for the following sourcetypes:

symantec:ep:risk:file
symantec:ep:security:file
symantec:ep:traffic:file
symantec:ep:packet:file
symantec:ep:proactive:file
symantec:ep:agt_system:file
symantec:ep:scm_system:file
symantec:ep:agent:file
symantec:ep:scan:file
symantec:ep:admin:file
symantec:ep:policy:file
Hi, I have a dashboard that I am setting up for a customer. In the dashboard, I have multiple horizontal bar graphs, and I need to align the label for each column across the graphs. I tried to insert a screenshot but I am too new to do this. Can I get some help on how to align the labels?
When you run the offline command permanently on an indexer:
1) How much time does it take to reassign the data to the other members in the cluster?
2) Can we run the offline command on three indexers at a time, or do we need to wait for anything?
3) Is adding and then removing members the correct order, or is it the other way around?
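For reference, the decommissioning form of the command is run on the peer itself; with --enforce-counts it keeps the peer up until the cluster has reassigned its buckets and returned to the configured replication and search factors:

```
splunk offline --enforce-counts
```

Because the master rebalances after each departure, taking peers down one at a time and waiting for the cluster to report "complete" between them is the conservative approach.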
Query 1:

(sourcetype="PAYA:Enterprise:CDE:Web:App:Gateway.Bankcard" OR sourcetype="PAYA:Enterprise:CDE:Web:App:Gateway.ACH") AND Processing response:
| stats count by host
| eventstats sum(count) as totalTransactions
| eval percent=round(count*100/totalTransactions,2)
| eval transPerMinute=round(totalTransactions/10)
| where percent>30 AND transPerMinute>200

Query 2:

(sourcetype="PAYA:Enterprise:CDE:Web:App:Gateway.Bankcard" OR sourcetype="PAYA:Enterprise:CDE:Web:App:Gateway.ACH") AND tsys_response_time>5000
| stats count by host

Basically, I need to create an alert that fires if one web server has processed over 30% of the transactions in the past 10 minutes while we are averaging over 200 transactions per minute, AND that server has two or more transactions over 5000 ms. I've been wrapping my brain around this for a long time; really hoping someone can help.
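One way to combine the two conditions in a single search, assuming both metrics can be computed from the same base events (note this sketch drops the "Processing response:" text filter, so the counts may differ slightly from Query 1):

```
(sourcetype="PAYA:Enterprise:CDE:Web:App:Gateway.Bankcard" OR sourcetype="PAYA:Enterprise:CDE:Web:App:Gateway.ACH")
| stats count AS count, count(eval(tsys_response_time>5000)) AS slowCount BY host
| eventstats sum(count) AS totalTransactions
| eval percent=round(count*100/totalTransactions,2)
| eval transPerMinute=round(totalTransactions/10)
| where percent>30 AND transPerMinute>200 AND slowCount>=2
```

The count(eval(...)) trick counts only the slow transactions per host, so the where clause can apply all three thresholds in one pass.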
I am trying to install Java on my Splunk search head in Kubernetes. As indicated in previous forum posts, I tried adding the JAVA_VERSION environment variable and setting it to openjdk:8, but that does not seem to install Java on my Kubernetes pod. Can someone guide me through the steps to install Java on my pod with the splunk:latest image? I saw a few Ansible scripts which install Java, but not in the search head YAML found under https://github.com/splunk/docker-splunk/blob/develop/test_scenarios/kubernetes/3idx1sh1cm-pvc/splunk-search-deploy-persistent.yaml, which is what I have deployed in my cluster.
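As a sketch: splunk-ansible reads JAVA_VERSION from the container environment, so the variable has to be set on the Splunk container inside the search head Deployment spec itself, not on the pod or namespace. The placement below is an assumption about your manifest layout:

```
# In the search head Deployment's container spec
env:
  - name: JAVA_VERSION
    value: "openjdk:8"
```

If the variable is already set there, check the Ansible output in the container logs (kubectl logs <pod>) to confirm whether the Java install role actually ran.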
Hey everyone, I have an issue where I am ingesting data via a REST API, though I am getting a lot of duplicate data into the index. It seems the issue resides in the table the API sources from, so in the meantime I have to dedup the results:

index=index1 sourcetype=dataset1
| dedup data_id
| table column_1, column_2, column_3

My question is: is there a way to run the dedup command within the props.conf file? I have read that I could use an eval=mvdedup(value) command, but I would need to dedup across events and not just within one field. Any thoughts?
Hi Team, how can we tell whether all the indexers are working in a balanced way? We have 14 cloud indexers and we need to know that they are all sharing the load evenly; there should not be load concentrated on one particular indexer. Is there a dashboard available for this? Thanks, Vijay Sri S
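The Monitoring Console has per-indexer indexing performance views, but a quick balance check can also be done with tstats, which breaks event counts down per indexer over whatever time range you pick:

```
| tstats count WHERE index=* BY splunk_server
| eventstats avg(count) AS avg_count
| eval pct_of_avg = round(count*100/avg_count, 1)
```

An indexer sitting far above or below 100% of the average suggests forwarder load balancing is skewed.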
Splunk MLTK replaces the previous Principal components when we try to apply PCA again with other fields. How can we resolve this issue?
I am working on a query where I have data in the format shown in the attached screenshot. How can I show these hub IDs on the map with their status (whether they are open or closed)? I am using Splunk Enterprise v7.2.6.
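Without seeing the exact fields, a sketch of the usual pattern: if each hub event carries latitude/longitude fields (the field names here are assumptions), geostats can plot counts split by status on a map panel:

```
index=your_index sourcetype=your_hubs
| geostats latfield=lat longfield=lon count BY status
```

If you only have an IP address per hub, run | iplocation ip_field first to derive lat/lon before the geostats.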
Currently, I'm pulling in the minemeld_domainthreatlist.csv lookup via the Palo Alto Splunk TA v6.1.1. It's working as expected, but the CSV file gets rather large (currently over 300 MB) with lots of duplicate entries. Is there a way of controlling the file size, either by time or by the number of similar entries?
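If the growth is mostly duplicate rows, one stopgap is a scheduled search that rewrites the lookup deduplicated (the field name domain is an assumption about the CSV's columns; check the header first with | inputlookup):

```
| inputlookup minemeld_domainthreatlist.csv
| dedup domain
| outputlookup minemeld_domainthreatlist.csv
```

Note this only trims the copy on the Splunk side; the TA may regrow it on its next pull, so a retention setting on the MineMeld side is the longer-term fix.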
Hello, we have enabled CloudTrail from AWS Organizations; as a result, CloudTrail creates the bucket with the following folder structure: {bucket-name}/AWSLogs/{organization-id}/{account-id}/CloudTrail/{Region ID}/{YYYY}/{MM}/{DD}/{file_name}.json.gz. When using the "Incremental S3" input, Splunk does not index the logs because of the {organization-id} within the path. Is there a way I can tell Splunk to accept the {organization-id} and proceed with indexing? Thanks in advance.
I am trying to integrate a few servers into Splunk. The servers send syslog data only. Earlier I had two servers (log sources), so I made the input traffic come in on ports 514 and 515; I used two ports in order to get two distinct host names in the logs. But now the server count is about five, and I don't want to assign another five separate ports just to get different host names. I want to use a single port, say 514, as the input to my HF for any number of servers, and still get the distinct HOSTs. Can anyone suggest how I can achieve this in Splunk?
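The usual pattern is a host override at parse time: keep one input port and let a transform pull the host name out of the syslog header. A sketch, assuming RFC3164-style headers ("Mon DD HH:MM:SS hostname ..."); apply it on the HF:

```
# props.conf
[source::udp:514]
TRANSFORMS-sethost = syslog_host_override

# transforms.conf
[syslog_host_override]
REGEX = ^\w{3}\s+\d+\s+\d+:\d+:\d+\s+(\S+)
FORMAT = host::$1
DEST_KEY = MetaData:Host
```

Many people instead put rsyslog/syslog-ng in front of Splunk and write per-host files, which also keeps data flowing while Splunk restarts.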