All Topics

"service.indexes" in splunklib for Python return by default a collection with only event indexes (no metric indexe). Is it a way to get a collection from all metric and event indexes giving the filt... See more...
"service.indexes" in splunklib for Python return by default a collection with only event indexes (no metric indexe). Is it a way to get a collection from all metric and event indexes giving the filter parameter "datatype=all" ?   Thanks
I am looking for a Splunk query that can calculate the cost of the data each sourcetype ingests into Splunk. You can take the sample data below as an example: summary_capacity 0.01 per GB per month, 0.2 per GB per month. The Splunk license is $5 per CPU per day and the indexer is $10.15 per day. So what would be the most efficient Splunk query for calculating the cost based on how much data each sourcetype ingests?
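A common starting point, sketched below, is the license usage log on the license master; the $0.01/GB rate is taken from the sample figures above and the rounding is arbitrary, so adjust both to your actual pricing:

    index=_internal source=*license_usage.log type=Usage
    | stats sum(b) AS bytes BY st
    | eval GB=round(bytes/1024/1024/1024, 3)
    | eval est_cost=round(GB*0.01, 2)
    | rename st AS sourcetype
    | sort - GB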
Hello, we have a variety of different AWS logs (i.e. CloudWatch, CloudTrail, Config, VPC Flow, Aurora) and non-AWS logs (i.e. Palo Alto, Trend Micro) routed to S3 buckets today. There are a total of 15 S3 buckets (5 per AWS account). Having recently purchased and configured an on-premises Splunk ES (distributed deployment with index clustering, no SH clustering yet), our goal is to begin forwarding these logs to our Splunk deployment. What are some considerations that we should keep in mind? Since we're going with a push approach, we're planning to do the following. Could someone confirm if this looks right? I'm open to suggestions.
1. Send the logs from the S3 buckets to Amazon Kinesis Data Firehose.
2. Firehose writes a batch of events to Splunk via HEC. Since the indexers are not in an AWS VPC (they reside in a separate Oracle Cloud instance), I'm assuming that an SSL certificate needs to be installed on each indexer? We have one index cluster in a Production environment and a separate one in our Disaster Recovery environment.
3. Assign a DNS name that resolves to the set of indexers that shall collect data from Kinesis Firehose.
4. Install the Splunk Add-on for Amazon Kinesis Firehose on the Enterprise and ES search heads, as well as the cluster master.
5. Ensure a new index is created for the AWS logs (1 sourcetype for each AWS log source) and existing indexes are used for the Palo Alto and Trend Micro logs. If new indexes are needed for the Palo Alto and Trend Micro logs, I'm assuming that they would still adhere to the appropriate Splunk ES data models.
6. Configure HEC and create new HEC tokens. There will be a unique HEC token per sourcetype.
7. Configure Amazon Kinesis Firehose to send data to Splunk.
8. Ensure all events are backed up to an S3 bucket until it is confirmed that all events are processed by Splunk.
9. Search for data by sourcetype to confirm that it is being indexed and visible (see the sketch below).
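For the final verification step, a simple sketch; the index names are placeholders for whatever indexes get created in step 5:

    index=aws_* OR index=security
    | stats count BY index, sourcetype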
I have the following query:
index=index
| spath output=traceSteps path=traceSteps{}
| table traceSteps
| mvexpand traceSteps
| rex field=traceSteps "(message\"\:\"(?<mensagem>(?<=\")(.*?)(?=\")))"
| where mensagem LIKE "CPF%"
| stats count
When I change "| stats count" to "| timechart span=1d count" to show results by date, I get "No results found". Why? What am I doing wrong?
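One likely cause: | table traceSteps discards every other field, including _time, and timechart needs _time to bucket events by day. A sketch of the same query with the table command removed (everything else unchanged):

    index=index
    | spath output=traceSteps path=traceSteps{}
    | mvexpand traceSteps
    | rex field=traceSteps "(message\"\:\"(?<mensagem>(?<=\")(.*?)(?=\")))"
    | where mensagem LIKE "CPF%"
    | timechart span=1d count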
Hello, the question is pretty straightforward. I would like to alert if 3 failed logins followed by 1 successful login from one user are observed. For example:
Minute | user | action
1st minute | xyz | failure
2nd minute | xyz | failure
3rd minute | xyz | failure
4th minute | xyz | success
If this condition occurs, I would like to create an alert. Thanks in advance.
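One possible approach, sketched below; the index and the user/action field names are assumptions to adapt to your data. For each success event, streamstats counts how many of the three immediately preceding events for that user were failures:

    index=auth action IN (failure, success)
    | sort 0 user _time
    | streamstats window=3 current=f count(eval(action="failure")) AS prior_failures BY user
    | where action="success" AND prior_failures=3

Alert when the result count is greater than 0.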
I need some help with an alert I have been stuck on. I have a DB Connect lookup that returns a value once a day. This value contains 18 IPs at the moment, all separated by commas, for example value=1.1.1.1/24,2.2.2.2,5.5.5.5/16. I need a search I can create an alert from if an IP has been added compared to when it was last run. I.e., search 1 at 6am had 5 IPs; search 2 the next day has 6 IPs: alert. Right now I get all the IPs in one field called "value", which looks like the below (IPs changed for this post): value="1.526.323.176/2,133.58.35.4/2,10.199.0.99/14". I basically need the alert to send our team an email letting us know an IP has been added and we should look into it.
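One common pattern, sketched here: split the field into one IP per row, then flag anything missing from a baseline lookup of previously seen IPs (known_ips.csv is a hypothetical lookup you would maintain, and the first line stands in for your existing DB Connect search):

    <your existing DB Connect search>
    | eval ip=split(value, ",")
    | mvexpand ip
    | search NOT [| inputlookup known_ips.csv | fields ip ]

Alert by email when the result count is greater than 0, and schedule a second search that refreshes the baseline afterwards with | outputlookup known_ips.csv.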
We run some reports to list specific filenames that we've received over a period of time. These particular reports are predicated on account and file name matches. Can we create an alert from one of these reports to identify an account and filename that was NOT received? Please let me know how it can be done. Thanks. An example of one of the reports is below:
index=log source="/logs/file_tracking.log" (Accountname IN("Account1") AND Filename IN ("File1*","File2*","File3*","File4*","File5*","File6*","File7*"))
| table Transfer, Account, File, Start_Time, End_Time
| sort - Start_Time
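One way to sketch this: keep a lookup of every Accountname/Filename pair you expect (expected_files.csv below is hypothetical), append it with a zero count, and report pairs that were never seen. This exact-match version would need extra normalization for the wildcarded filenames:

    index=log source="/logs/file_tracking.log"
    | stats count BY Accountname, Filename
    | append [| inputlookup expected_files.csv | eval count=0 ]
    | stats sum(count) AS received BY Accountname, Filename
    | where received=0

Alert when this returns any rows.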
We installed the DB agent on the database server, and it has been reporting to the controller. We would like to manage health rules for DB agents (database availability alerts), but when we try to configure health rules, we do not find our database agent names.
I have a query something like this:
index=sample source=test (earliest=-1d@d latest=@d) OR (earliest=-2d@d latest=-1d@d) OR (earliest=-3d@d latest=-2d@d)
| bin span=.1 Seconds
| eval dayOfDate=strftime(_time,"%Y/%m/%d")
| stats count by Seconds, dayOfDate
| xyseries Seconds dayOfDate count
which displays results something like this (as an example):
Seconds | 8/9 | 8/10 | 8/08
0.0-0.1 | 42 | 22 | 33
0.1-0.2 | 22 | 32 | 44
How can I convert the counts shown under 8/08, 8/09, and 8/10 into percentages?
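One approach, sketched from the query above: compute each day's total with eventstats before pivoting, so each cell becomes a percentage of its day's column (the combined time range here is equivalent to the three OR'd ranges):

    index=sample source=test earliest=-3d@d latest=@d
    | bin span=.1 Seconds
    | eval dayOfDate=strftime(_time,"%Y/%m/%d")
    | stats count by Seconds, dayOfDate
    | eventstats sum(count) AS dayTotal BY dayOfDate
    | eval pct=round(count/dayTotal*100, 2)
    | xyseries Seconds dayOfDate pct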
Hello, I have the following search that returns a percent_difference value:
sourcetype="orderdetail-prod"
| stats count(PriceModelLevel) AS total, count(eval(PriceModelLevel="DEFAULT_SITEONE_LIST")) AS Default_Siteone_List
| eval percent_difference=((Default_Siteone_List/total)*100)
| table percent_difference
However, I can't figure out how to trigger an alert if percent_difference is >=20. I tried:
search percentage_difference >=20
Does this seem correct? If so, perhaps another setting in the alert config is mucking it up, as it never triggers. Thanks for any help you can give.
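One thing stands out: the search creates percent_difference, while the trigger condition references percentage_difference, so the condition can never match. A sketch of an alternative that sidesteps the custom condition entirely: put the threshold in the search itself and trigger the alert when the number of results is greater than 0:

    sourcetype="orderdetail-prod"
    | stats count(PriceModelLevel) AS total, count(eval(PriceModelLevel="DEFAULT_SITEONE_LIST")) AS Default_Siteone_List
    | eval percent_difference=((Default_Siteone_List/total)*100)
    | where percent_difference>=20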
The EventHub input is throwing an error while trying to collect Event Hub data from Microsoft Azure. The Microsoft Cloud Services add-on is installed on a heavy forwarder and is supposed to send data to the SH. The following is a snippet of the error:
2021-08-06 10:28:23,488 level=WARNING pid=1876189 tid=Thread-1 logger=azure.eventhub._eventprocessor.event_processor pos=event_processor.py:_do_receive:334 | EventProcessor instance '605f0c65-227a-435c-8a26-4018c4a498a6' of eventhub 'xyz' partition '1' consumer group 'abc'. An error occurred while receiving. The exception is KeyError('records').
We have double-checked all the access and permissions that are specified in the add-on doc. I'm not sure if this error is due to a permission issue or the data format. Has anyone else faced the same issue with the add-on?
Following https://docs.splunk.com/Documentation/Splunk/latest/Knowledge/Sharedatamodelsummaries I set up sharing of acceleration summaries between two search-head clusters. I found the GUID of one of the clusters and set it as source_guid in the default stanza on the other cluster (the first cluster uses the CIM app and ES; the second one has just the CIM app, with datamodel settings migrated from the first cluster). So the datamodel settings on the second cluster are a subset of the settings on the first cluster (I did a btool dump of the dataset settings and compared them with vimdiff). On the first cluster I have some additional datamodels from the ES app; the rest of the datasets are identical on both clusters (of course, apart from the source_guid attribute).
As far as I understand the article, it should just work. But as soon as I add the CIM app (define the datamodels) on the second cluster, it starts killing my indexers. I have 20-CPU nodes with 64 GB of RAM; their load is typically around 6-7 and memory usage doesn't exceed 40 GB. Since I added the CIM app, the load doesn't fall below 40(!) and sometimes jumps to around 45, and the RAM is fully used (I even get OOM kills every half an hour or so). The monitoring console shows that most resources (by a great margin) are used by datamodel acceleration, and the top memory-consuming searches are various instances of _ACCELERATE_DM_Splunk_SA_CIM_Network_Traffic_ACCELERATE_.
What I don't understand:
1) Why doesn't Splunk just use the data I pointed it to? It seems to be "rebuilding" the summaries (and yes, I have a lot of network data, so that makes sense).
2) Why does it spawn consecutive acceleration searches when the old ones haven't completed yet?
The Splunkbase page for the app, https://splunkbase.splunk.com/app/833/, says it is supported on 7.3, but the release notes say it is only supported on 8.x: https://docs.splunk.com/Documentation/AddOns/released/UnixLinux/Releasenotes. The 8.3.0 package is no longer on Splunkbase. Can someone let me know if 8.3.1 is supported on Splunk Enterprise 7.3.x? Many thanks, Jon
Hi Team, how do I set limits for each model? If I change the limits for Linear Regression in settings, it affects all models that use Linear Regression. I need to set limits for each individual model. Please advise.
Hello, I am working on a dashboard and I would like to keep only the first letter of a text input so that I can use it to call the correct datamodel in the query (my datamodels are titled "A", "B", "C" in order to classify names). If you have an idea, feel free to post it. Thanks in advance.
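A possible sketch in Simple XML (the token names here are placeholders): derive a second token in the input's change handler, then reference that token in the search:

    <input type="text" token="name_input">
      <label>Name</label>
      <change>
        <!-- Keep only the first character of the typed value, upper-cased -->
        <eval token="dm_letter">upper(substr($value$, 1, 1))</eval>
      </change>
    </input>

A search in the dashboard could then start with something like | from datamodel:"$dm_letter$".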
Hi, I want to detect web vulnerabilities, for example XSS or SQLi, with Splunk. For this purpose I collect Apache logs into my Splunk server, and so far I match strings against signature-based rules to detect them, implemented with regex in Splunk's Search app. So my question is: is there any other way to detect these vulnerabilities, either without an app or with an app (e.g. Splunk Enterprise Security)? Thanks!
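For reference, a minimal sketch of the signature-based approach described above; the index, sourcetype, and patterns are illustrative only, and real coverage needs a far larger pattern set (or a WAF/IDS feeding Splunk, which ES correlation searches can then build on):

    index=web sourcetype=access_combined
    | regex uri_query="(?i)(<script|union\s+select|or\s+1=1)"
    | stats count BY clientip, uri_query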
Hello! Is it possible to extract EDNS fields from DNS packets using Splunk Stream? In particular, I mean CSUBNET (option 8, see image).  
Hi, I am adding URLs to be monitored under the Website Monitoring application. It works fine for some time, but then all of a sudden it stops indexing logs for all the URLs and the Splunk process stops. I see the number of Splunk PIDs increasing exponentially when this issue happens. If I grep for splunk, I see the message below:
/opt/app/splunk/splunk/bin/python2.7 /opt/app/splunk/splunk/bin/runScript.py execute
I see the errors below in splunkd.log:
ERROR Unable to find the app configuration for the specified configuration stanza=proxy error="splunkd connection error", see url=http://lukemurphey.net/projects/splunk-website-monitoring/wiki/Troubleshooting
WARN HttpListener - Socket error from 127.0.0.1:60492 while accessing /servicesNS/nobody/website_monitoring/admin/website_monitoring/proxy: Broken pipe
P.S. We had earlier placed Website Monitoring on the search heads; since this issue kept occurring, we moved it to a HF, but we still see it. Any suggestions on how to fix this issue?
I am trying to run Splunk Connect for Syslog via podman. Here are the reference links:
https://splunk-connect-for-syslog.readthedocs.io/en/latest/gettingstarted/#offline-container-installation
https://splunk-connect-for-syslog.readthedocs.io/en/latest/gettingstarted/podman-systemd-general/
My podman container is up and running and all the configuration is in place as per the doc instructions, but I am facing an issue with sending logs over HTTP. Below are my configuration file and activity logs.

My env_file:
[root@hostname ~]# cat /opt/sc4s/env_file
SPLUNK_HEC_URL=https://http-singh-sudhir.splunkcloud.com:443
SPLUNK_HEC_TOKEN=Z93TSS87-F826-19V1-01W1-Q9Q8G1G8264
#Uncomment the following line if using untrusted SSL certificates
#SC4S_DEST_SPLUNK_HEC_DEFAULT_TLS_VERIFY=no
SC4S_DEST_SPLUNK_HEC_DEFAULT_DISKBUFF_DIR=/opt/sc4s/storage/volumes

Using the above config, a manual curl command succeeds:
[root@hostname ~]# curl -k https://http-singh-sudhir.splunkcloud.com:443/services/collector/event?channel=Q9Q8G1W5-Z93T-F826-19V1-Q9Q8G1G8264 -H "Authorization: Splunk Z93TSS87-F826-19V1-01W1-Q9Q8G1G8264" -d '{"event": "hello_world"}'
{"text":"Success","code":0}

But with the same config, podman logs SC4S throws errors:
[root@hostname ~]# /usr/bin/podman logs SC4S
'/opt/syslog-ng/etc/conf.d/local/context/compliance_meta_by_source.conf.example' -> '/opt/syslog-ng/etc/conf.d/local/context/compliance_meta_by_source.conf'
'/opt/syslog-ng/etc/conf.d/local/context/compliance_meta_by_source.csv.example' -> '/opt/syslog-ng/etc/conf.d/local/context/compliance_meta_by_source.csv'
'/opt/syslog-ng/etc/conf.d/local/context/splunk_index.csv.example' -> '/opt/syslog-ng/etc/conf.d/local/context/splunk_index.csv'
'/opt/syslog-ng/etc/conf.d/local/context/vendor_product_by_source.conf.example' -> '/opt/syslog-ng/etc/conf.d/local/context/vendor_product_by_source.conf'
'/opt/syslog-ng/etc/conf.d/local/context/vendor_product_by_source.csv.example' -> '/opt/syslog-ng/etc/conf.d/local/context/vendor_product_by_source.csv'
'/opt/syslog-ng/etc/local_config/destinations/README.md' -> '/opt/syslog-ng/etc/conf.d/local/config/destinations/README.md'
'/opt/syslog-ng/etc/local_config/filters/README.md' -> '/opt/syslog-ng/etc/conf.d/local/config/filters/README.md'
'/opt/syslog-ng/etc/local_config/filters/example.conf' -> '/opt/syslog-ng/etc/conf.d/local/config/filters/example.conf'
'/opt/syslog-ng/etc/local_config/log_paths/README.md' -> '/opt/syslog-ng/etc/conf.d/local/config/log_paths/README.md'
'/opt/syslog-ng/etc/local_config/log_paths/lp-example.conf.tmpl' -> '/opt/syslog-ng/etc/conf.d/local/config/log_paths/lp-example.conf.tmpl'
'/opt/syslog-ng/etc/local_config/log_paths/lp-example.conf' -> '/opt/syslog-ng/etc/conf.d/local/config/log_paths/lp-example.conf'
'/opt/syslog-ng/etc/local_config/sources/README.md' -> '/opt/syslog-ng/etc/conf.d/local/config/sources/README.md'
syslog-ng checking config
sc4s version=v1.12.0
syslog-ng starting
Aug 16 11:44:12 hostname syslog-ng[1]: syslog-ng starting up; version='3.25.1'
Aug 16 11:44:12 hostname syslog-ng-config: sc4s version=v1.12.0
Aug 16 11:44:12 hostname syslog-ng[1]: curl: error sending HTTP request; url='https://http-singh-sudhir.splunkcloud.com:443/services/collector/event', error='Couldn\'t connect to server', worker_index='1', driver='d_hec_internal#0', location='/opt/syslog-ng/etc/conf.d/destinations/splunk_hec_internal.conf:2:5'
Aug 16 11:44:12 hostname syslog-ng[1]: Server disconnected while preparing messages for sending, trying again; driver='d_hec_internal#0', location='/opt/syslog-ng/etc/conf.d/destinations/splunk_hec_internal.conf:2:5', worker_index='1', time_reopen='10', batch_size='1'
Aug 16 11:44:12 hostname syslog-ng[1]: curl: error sending HTTP request; url='https://http-singh-sudhir.splunkcloud.com:443/services/collector/event', error='Couldn\'t connect to server', worker_index='0', driver='d_hec_internal#0', location='/opt/syslog-ng/etc/conf.d/destinations/splunk_hec_internal.conf:2:5'
Aug 16 11:44:12 hostname syslog-ng[1]: Server disconnected while preparing messages for sending, trying again; driver='d_hec_internal#0', location='/opt/syslog-ng/etc/conf.d/destinations/splunk_hec_internal.conf:2:5', worker_index='0', time_reopen='10', batch_size='1'

I am not able to understand what is missing here on my side. If curl fails, it should fail in both cases. Looking forward to your help; please point out what is wrong with this.
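Since the same curl succeeds on the host but syslog-ng inside the container cannot connect, one way to narrow it down, sketched below, is to run the HEC health check from inside the container; this separates container networking problems (DNS resolution, proxy, egress rules) from HEC problems. This assumes the image ships a curl binary; if not, a temporary container attached to the same network works too:

    # If this fails while the identical curl works on the host,
    # the problem is the container's network path, not HEC itself.
    podman exec -it SC4S curl -k -v https://http-singh-sudhir.splunkcloud.com:443/services/collector/health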
Hi, the query was working fine in the lower environment. When we tried configuring the same in production, it fails with the following error:
[DBX-QUERY-WORKER-166] ERROR com.splunk.dbx.command.DbxQueryServer - operation=dbxquery connection_name=XXXXX stanza_name= action=dbxquery_server_worker_failed
On our side:
1. We have checked that the connection is working fine.
2. The input query works fine in batch mode.
3. We have checked the DB Connect logs and are not seeing anything other than the above error.
Kindly suggest how to proceed from here.