All Topics

Hello, I have the following search that returns a percent_difference value:

sourcetype="orderdetail-prod" | stats count(PriceModelLevel) AS total, count(eval(PriceModelLevel="DEFAULT_SITEONE_LIST")) AS Default_Siteone_List | eval percent_difference=((Default_Siteone_List/total)*100) | table percent_difference

However, I can't figure out how to trigger an alert if the percentage_difference is >=20. I tried: search percentage_difference >=20 Does this seem correct? If so, perhaps another setting in the alert config is mucking it up, as it is never triggered. Thanks for any help you can give.
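One thing worth checking, offered as a sketch rather than a confirmed fix: the search above produces a field named percent_difference, while the trigger condition filters on percentage_difference. A custom trigger condition matching the field the search actually outputs would look like:

search percent_difference >= 20

Alternatively, the threshold can be folded into the search itself, so the alert only needs to trigger when the number of results is greater than zero:

sourcetype="orderdetail-prod" | stats count(PriceModelLevel) AS total, count(eval(PriceModelLevel="DEFAULT_SITEONE_LIST")) AS Default_Siteone_List | eval percent_difference=((Default_Siteone_List/total)*100) | where percent_difference>=20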
The EventHub input is throwing an error while trying to collect Event Hub data from Microsoft Azure. The Microsoft Cloud Services add-on is installed on a Heavy Forwarder and is supposed to send data to the SH. The following is a snippet of the error:

2021-08-06 10:28:23,488 level=WARNING pid=1876189 tid=Thread-1 logger=azure.eventhub._eventprocessor.event_processor pos=event_processor.py:_do_receive:334 | EventProcessor instance '605f0c65-227a-435c-8a26-4018c4a498a6' of eventhub 'xyz' partition '1' consumer group 'abc'. An error occurred while receiving. The exception is KeyError('records').

We have double-checked all the access and permissions that are specified in the add-on doc. I'm not sure if this error is due to a permission issue or the data format. Has anyone else faced the same issue with the add-on?
Following https://docs.splunk.com/Documentation/Splunk/latest/Knowledge/Sharedatamodelsummaries I set up sharing of acceleration summaries between two search-head clusters. I found the GUID of one of the clusters and set it as source_guid in a default stanza on the other cluster (the first cluster uses the CIM app and ES, the second one has just the CIM app with datamodel settings migrated from the first cluster). So the datamodel settings on the second cluster are a subset of the settings from the first cluster (I did a btool dump of the dataset settings and compared them with vimdiff). On the first cluster I have some additional datamodels from the ES app; the rest of the datasets are identical on both clusters (apart, of course, from the source_guid attribute). As far as I understand the article, it should just work. But as soon as I add the CIM app (define the datamodels) on the second cluster, it starts killing my indexers. I have 20-CPU nodes with 64G of RAM; their load is typically around 6-7 and memory usage doesn't exceed 40G. Since I added the CIM app, the load doesn't fall below 40(!) and sometimes jumps to around 45, and the RAM is all used (I even get oom-killers every half an hour or so). The monitoring console shows that most resources (by a great margin) are used by datamodel acceleration, and the top memory-consuming searches are various instances of _ACCELERATE_DM_Splunk_SA_CIM_Network_Traffic_ACCELERATE_

What I don't understand, however:
1) Why doesn't Splunk just use the data I pointed it to? It seems to be "rebuilding" the summaries (and yes, I have a lot of network data, so that makes sense).
2) Why does it spawn consecutive acceleration searches when the old ones haven't completed yet?
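For reference, a minimal sketch of the kind of stanza the linked doc describes for the consuming cluster; the GUID value is a placeholder for whatever was actually configured:

# datamodels.conf on the second search-head cluster
[default]
acceleration.source_guid = <GUID-of-first-cluster>

With source_guid in place, the second cluster is supposed to read the summaries the first cluster built rather than accelerating on its own, so acceleration searches dispatched from the second cluster would suggest the setting is not being applied to those datamodels.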
The Splunkbase page for the app https://splunkbase.splunk.com/app/833/ says it's supported on 7.3, but the release notes say it's only supported on 8.x: https://docs.splunk.com/Documentation/AddOns/released/UnixLinux/Releasenotes The 8.3.0 package is no longer on Splunkbase. Can someone let me know if 8.3.1 is supported on Splunk Enterprise 7.3.x? Many thanks, Jon
Hi Team, How do I set limits for each model? If I change the limits for Linear Regression in settings, it affects all models that use linear regression. I need to set limits for each model. Please advise.
Hello, I am working on a dashboard and I would like to keep only the first letter of the input (text) so that I can use it to call the correct datamodel in the query (my datamodels are titled "A", "B", "C" in order to classify names). If you have an idea, feel free to post it. Thanks in advance
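A minimal sketch of one way to do this in a Simple XML dashboard, assuming a text input whose token is name_token; the derived token first_letter and the datamodel usage are illustrative, not taken from the original dashboard:

<input type="text" token="name_token">
  <change>
    <eval token="first_letter">upper(substr("$value$", 1, 1))</eval>
  </change>
</input>

The panel search can then reference the derived token, for example | datamodel $first_letter$ search, or compute the letter directly in SPL with | eval dm=upper(substr("$name_token$", 1, 1)).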
Hi, I want to detect web vulnerabilities, for example XSS or SQLi, with Splunk. For this I collect Apache logs into my Splunk server, and so far I match strings against signature-based rules to detect them, implemented with regex in the Splunk Search app. So my question is: is there any other way to detect these vulnerabilities, either without an app or with one (e.g. Splunk Enterprise Security)? Thanks!
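For context, a minimal sketch of the signature-based regex approach described above; the index, field name, and patterns are placeholders and nowhere near a complete rule set:

index=apache sourcetype=access_combined
| regex uri_query="(?i)(<script|%3Cscript|union\s+select|or\s+1\s*=\s*1)"
| table _time, clientip, uri_query, status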
Hello! Is it possible to extract EDNS fields from DNS packets using Splunk Stream? In particular, I mean CSUBNET (EDNS option 8, client subnet).
Hi, I am adding URLs to be monitored under the Website Monitoring application. It works fine for some time, but then all of a sudden it stops indexing logs for all the URLs and the Splunk process stops. I see Splunk PIDs increasing exponentially when this issue happens. If I grep for splunk I see the message /opt/app/splunk/splunk/bin/python2.7 /opt/app/splunk/splunk/bin/runScript.py execute.

I see the errors below in splunkd.log:

ERROR Unable to find the app configuration for the specified configuration stanza=proxy error="splunkd connection error", see url=http://lukemurphey.net/projects/splunk-website-monitoring/wiki/Troubleshooting

WARN HttpListener - Socket error from 127.0.0.1:60492 while accessing /servicesNS/nobody/website_monitoring/admin/website_monitoring/proxy: Broken pipe

P.S. We had earlier placed Website Monitoring on the search heads; since this issue kept occurring, we moved it to a HF, but we still see the issue. Any suggestions on how to fix this?
I am trying to run Splunk Connect for Syslog via podman; here are the reference links:
https://splunk-connect-for-syslog.readthedocs.io/en/latest/gettingstarted/#offline-container-installation
https://splunk-connect-for-syslog.readthedocs.io/en/latest/gettingstarted/podman-systemd-general/
My podman container is up and running and all the configuration is in place as per the doc instructions, but I am facing an issue with sending logs over HTTP. Below are my configuration file and activity logs.

My env_file:
[root@hostname ~]# cat /opt/sc4s/env_file
SPLUNK_HEC_URL=https://http-singh-sudhir.splunkcloud.com:443
SPLUNK_HEC_TOKEN=Z93TSS87-F826-19V1-01W1-Q9Q8G1G8264
#Uncomment the following line if using untrusted SSL certificates
#SC4S_DEST_SPLUNK_HEC_DEFAULT_TLS_VERIFY=no
SC4S_DEST_SPLUNK_HEC_DEFAULT_DISKBUFF_DIR=/opt/sc4s/storage/volumes

Using the above config, a manual curl command is successful:
[root@hostname ~]# curl -k https://http-singh-sudhir.splunkcloud.com:443/services/collector/event?channel=Q9Q8G1W5-Z93T-F826-19V1-Q9Q8G1G8264 -H "Authorization: Splunk Z93TSS87-F826-19V1-01W1-Q9Q8G1G8264 " -d '{"event": "hello_world"}'
{"text":"Success","code":0}[root@hostname ~]# ^C

But with the same config, podman logs SC4S is throwing errors:
[root@hostname ~]# /usr/bin/podman logs SC4S
'/opt/syslog-ng/etc/conf.d/local/context/compliance_meta_by_source.conf.example' -> '/opt/syslog-ng/etc/conf.d/local/context/compliance_meta_by_source.conf'
'/opt/syslog-ng/etc/conf.d/local/context/compliance_meta_by_source.csv.example' -> '/opt/syslog-ng/etc/conf.d/local/context/compliance_meta_by_source.csv'
'/opt/syslog-ng/etc/conf.d/local/context/splunk_index.csv.example' -> '/opt/syslog-ng/etc/conf.d/local/context/splunk_index.csv'
'/opt/syslog-ng/etc/conf.d/local/context/vendor_product_by_source.conf.example' -> '/opt/syslog-ng/etc/conf.d/local/context/vendor_product_by_source.conf'
'/opt/syslog-ng/etc/conf.d/local/context/vendor_product_by_source.csv.example' -> '/opt/syslog-ng/etc/conf.d/local/context/vendor_product_by_source.csv'
'/opt/syslog-ng/etc/local_config/destinations/README.md' -> '/opt/syslog-ng/etc/conf.d/local/config/destinations/README.md'
'/opt/syslog-ng/etc/local_config/filters/README.md' -> '/opt/syslog-ng/etc/conf.d/local/config/filters/README.md'
'/opt/syslog-ng/etc/local_config/filters/example.conf' -> '/opt/syslog-ng/etc/conf.d/local/config/filters/example.conf'
'/opt/syslog-ng/etc/local_config/log_paths/README.md' -> '/opt/syslog-ng/etc/conf.d/local/config/log_paths/README.md'
'/opt/syslog-ng/etc/local_config/log_paths/lp-example.conf.tmpl' -> '/opt/syslog-ng/etc/conf.d/local/config/log_paths/lp-example.conf.tmpl'
'/opt/syslog-ng/etc/local_config/log_paths/lp-example.conf' -> '/opt/syslog-ng/etc/conf.d/local/config/log_paths/lp-example.conf'
'/opt/syslog-ng/etc/local_config/sources/README.md' -> '/opt/syslog-ng/etc/conf.d/local/config/sources/README.md'
syslog-ng checking config
sc4s version=v1.12.0
syslog-ng starting
Aug 16 11:44:12 hostname syslog-ng[1]: syslog-ng starting up; version='3.25.1'
Aug 16 11:44:12 hostname syslog-ng-config: sc4s version=v1.12.0
Aug 16 11:44:12 hostname syslog-ng[1]: curl: error sending HTTP request; url='https://http-singh-sudhir.splunkcloud.com:443/services/collector/event', error='Couldn\'t connect to server', worker_index='1', driver='d_hec_internal#0', location='/opt/syslog-ng/etc/conf.d/destinations/splunk_hec_internal.conf:2:5'
Aug 16 11:44:12 hostname syslog-ng[1]: Server disconnected while preparing messages for sending, trying again; driver='d_hec_internal#0', location='/opt/syslog-ng/etc/conf.d/destinations/splunk_hec_internal.conf:2:5', worker_index='1', time_reopen='10', batch_size='1'
Aug 16 11:44:12 hostname syslog-ng[1]: curl: error sending HTTP request; url='https://http-singh-sudhir.splunkcloud.com:443/services/collector/event', error='Couldn\'t connect to server', worker_index='0', driver='d_hec_internal#0', location='/opt/syslog-ng/etc/conf.d/destinations/splunk_hec_internal.conf:2:5'
Aug 16 11:44:12 hostname syslog-ng[1]: Server disconnected while preparing messages for sending, trying again; driver='d_hec_internal#0', location='/opt/syslog-ng/etc/conf.d/destinations/splunk_hec_internal.conf:2:5', worker_index='0', time_reopen='10', batch_size='1'

I am not able to understand what is missing here on my side. If curl fails, it should fail in both cases. Looking forward to your help; please point out what is wrong here.
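One difference between the two tests, offered only as a sketch and not a confirmed fix: the manual curl uses -k (skip certificate verification), while the corresponding SC4S setting is still commented out in the env_file above. If the HEC endpoint presents an untrusted certificate, the env_file would need the verification line enabled (values copied from the post):

SPLUNK_HEC_URL=https://http-singh-sudhir.splunkcloud.com:443
SPLUNK_HEC_TOKEN=Z93TSS87-F826-19V1-01W1-Q9Q8G1G8264
# enabled because the manual test only succeeded with curl -k
SC4S_DEST_SPLUNK_HEC_DEFAULT_TLS_VERIFY=no
SC4S_DEST_SPLUNK_HEC_DEFAULT_DISKBUFF_DIR=/opt/sc4s/storage/volumes

Note that "Couldn't connect to server" can also point at name resolution or proxy problems from inside the container rather than TLS, so checking DNS from within the container is another avenue.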
Hi, The query was working fine in the lower environment. When we tried configuring the same in production, it fails with the following error:

Error: [DBX-QUERY-WORKER-166] ERROR com.splunk.dbx.command.DbxQueryServer - operation=dbxquery connection_name=XXXXX stanza_name= action=dbxquery_server_worker_failed

On our side:
1. We have checked that the connection is working fine.
2. The input query works fine in batch mode.
3. We have checked the DB Connect logs and are not seeing anything other than the error mentioned above.

Kindly suggest how to proceed from here.
Hey Splunksters, I have a scripted input (PowerShell) that correctly outputs 6 fields on the screen, like this:

expiration_date          user       login                            cardRequired          location     account_last_changed
mm/dd/yy 15:03        joblo      some_stats              true/false                  blah             mm/dd/yy 16:06

However, when Splunk ingests these fields, it is cutting off the last one (account_last_changed) in _raw. Anybody know why? I tried setting TRUNCATE=0 and TRUNCATE=500000, etc., in props.conf, and cannot for the life of me get that last field to show up in Splunk. I also thought that perhaps it was treating that last field like a new timestamp and was thus getting confused and cutting it off. However, I tried moving the field closer to the first timestamp in the actual script (and got all six fields to output correctly to the screen), but Splunk was still cutting it off. Any help is much appreciated! Thanks!
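A minimal props.conf sketch of the settings usually involved when a trailing date/time is being treated as the start of a new event; the sourcetype name is a placeholder and the timestamp format is an assumption based on the sample output:

[my_powershell_script]
SHOULD_LINEMERGE = false
# break events only on newlines so the trailing date is not used as an event boundary
LINE_BREAKER = ([\r\n]+)
# read the timestamp only from the first few characters of each line
TIME_FORMAT = %m/%d/%y %H:%M
MAX_TIMESTAMP_LOOKAHEAD = 15
TRUNCATE = 0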
Hi, I'm having major issues with Perfmon collection. Values collected for "% Processor Time" (as well as privileged and user time) sometimes contain invalid information. I'm just monitoring a single 6 vCPU machine. While for some processes the CPU usage is correctly returned as a percentage value between 0 and 600, other processes every few minutes return values that are way off the charts, from a few thousand up to, for example, 1.5 million. I cannot see any of those numbers while running Perfmon itself. The process IDs of those processes also don't change. Regards
Hello, I'm working on a really complex search where I need to combine results from different lookup tables. One lookup table is really big, with multiple million entries, while the other one is quite small, with only a thousand entries. Both tables have one common field, let's call it "office". The big table has entries for tasks which are applied to a certain office. The other table has more information about the office. Some example data for the task lookup:

office | city | country | importance
xxx | madrid | spain | very important
yyy | paris | france | important

The office table looks similar to this:

office | group | name
xxx | this | aaa
yyy | that | bbb

I want to add the group and name fields to the first task table, without losing any entries from the task table, so I can continue working with it. I've tried a lot of different approaches but none of them work. I got the best results with this search, but it's still not the outcome I want:

| inputlookup task_lookup
| eval importance_very_important=if(match(importance, "very important"), 1, 0), importance_important=if(match(importance, "important"), 1, 0), importance_less_important=if(match(importance, "less important"), 1, 0)
| eval source="task"
| append [| inputlookup office_lookup | eval source="office"]
| stats values(source) as source, values(country) as country, values(city) as city, sum(importance_*) as *, values(group) as group, values(name) as name by office
| where mvcount(source)=2

This search gives me the right combination of fields, BUT it also combines the different cities and countries, which I don't want, since I need them separated so I can filter on them. I get the following outcome (e.g.):

office | country | city | name | group | very_important | important | less_important
xxx | spain france italy | madrid paris | aaa | this | 3 | 7 | 8
yyy | france spain | rome paris | bbb | that | 5 | 3 | 4

So all in all I need a result table that doesn't combine any values, so I can work with them separately. I'm at a point where I have no clue how to accomplish this, so any help would be highly appreciated!

Additional info: I don't want to use join since the first lookup has so many entries; I don't think that's going to work. I also can't just use mvexpand, since it doesn't properly expand the counts for the different tasks with their importance.
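A minimal sketch of a different approach, using the lookup command instead of append + stats so the task rows never get aggregated; this assumes office_lookup is available as a lookup definition (or referenced by its CSV file name), and uses exact string comparisons where the original used match():

| inputlookup task_lookup
| lookup office_lookup office OUTPUT group name
| eval importance_very_important=if(importance="very important", 1, 0), importance_important=if(importance="important", 1, 0), importance_less_important=if(importance="less important", 1, 0)

Because lookup behaves like a left join, every task row is kept, city and country stay on their own rows, and the group and name fields are simply added wherever an office matches.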
Hi all, I'm trying to dynamically add columns to two fixed columns based on the environment value selected. For instance, this is the input data:

Environment | Application | CartridgeType | Cartridge | Version
DEV | A-1 | User | Alpha | 1.1
DEV | A-2 | Product | Beta | 1.2
UAT | A-1 | User | Alpha | 1.2
SVP | A-1 | User | Alpha | 1.4
SVP | A-1 | User | Sigma | 1.5
SVP | A-2 | Product | Beta | 1.2
SVP | A-3 | System | Gamma | 1.5

And I would like to create a table such as the following:

CartridgeType | Cartridge | DEV:A-1 | DEV:A-2
User | Alpha | 1.1 |
Product | Beta | | 1.2

Some key things to note: the first two columns should stay constant; however, depending on the environment value selected in the search (e.g. Environment="DEV"), the environment value should be combined with the Application value to create another column, in which the values are the corresponding Version values. The tricky part is making the fields after "Cartridge" dynamic. For instance, if Environment="SVP", I would expect the following:

CartridgeType | Cartridge | SVP:A-1 | SVP:A-2 | SVP:A-3
User | Alpha | 1.4 | |
User | Sigma | 1.5 | |
Product | Beta | | 1.2 |
System | Gamma | | | 1.5

Is this possible to do whilst making sure to only show the latest version value? Thank you so much for any help!
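A minimal sketch of one way to build that pivot, assuming the rows above are available as events (or a lookup) with exactly those field names; the environment filter, the temporary key field, and the "##" separator are all placeholders:

... Environment="DEV"
| eval column=Environment.":".Application
| eval key=CartridgeType."##".Cartridge
| chart latest(Version) by key column
| eval CartridgeType=mvindex(split(key, "##"), 0), Cartridge=mvindex(split(key, "##"), 1)
| fields - key
| table CartridgeType, Cartridge, *

latest(Version) assumes the events carry a usable _time; if the data comes from a lookup whose version values sort correctly, max(Version) may be the safer choice.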
Hi, after the installation of ITE Works 4.9.2 and the Exchange content pack, I checked all the dashboards to be sure the data was correctly processed, and I realized that some panels were blank. On one of them, Inbound Messages - Microsoft Exchange, the panel for inbound message volume is empty. Looking into the search,

`msgtrack-inbound-messages` | eval total_kb=total_bytes/1024 | timechart fixedrange=t bins=120 per_second(total_kb) as "Bandwidth"

I realized that the first macro does not return a total_bytes column, so the eval cannot create the new field total_kb and the timechart cannot visualize anything. Is there some configuration missing on my side, or is this a known bug in the content pack? Cheers
Hi, In my search table there are some multiple events with one timestamp. I need to split them. Does somebody have any idea? Thanks in advance for your help
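It is hard to tell from the description alone, but if "multiple events with one timestamp" means single result rows carrying multivalue fields, a minimal sketch would be to expand on the relevant field; my_multivalue_field is a placeholder name:

... your base search
| mvexpand my_multivalue_field

If the values live in several fields that need to stay aligned, mvexpand on one field alone will not keep them together, so more detail about the data would be needed.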
Hi, I have one index (test0) on a standalone server. I'm trying to keep 3 months of data searchable, archive data after 6 months, and delete it after 12 months (retention). Below is my config:

[test0]
coldPath = $SPLUNK_DB/test0/colddb
enableDataIntegrityControl = 0
enableTsidxReduction = 0
homePath = $SPLUNK_DB/test0/db
maxTotalDataSizeMB = 512000
thawedPath = $SPLUNK_DB/test0/thaweddb
maxDataSize = 750
maxWarmDBCount = 500
frozenTimePeriodInSecs = 31556926
coldToFrozenDir = /opt/backup/index

Is the above configuration correct?
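For comparison, a sketch of how those goals usually map onto the retention settings, under the assumption that "archive after 6 months" means freezing buckets to coldToFrozenDir at 6 months. Within an index, hot, warm and cold data are all searchable, so there is no setting that makes 3-to-6-month-old data unsearchable while keeping it in the index; searchability ends when a bucket is frozen. The second values are approximations:

[test0]
homePath = $SPLUNK_DB/test0/db
coldPath = $SPLUNK_DB/test0/colddb
thawedPath = $SPLUNK_DB/test0/thaweddb
# roughly 6 months: buckets older than this are frozen, i.e. copied to coldToFrozenDir and removed from the index
frozenTimePeriodInSecs = 15778463
coldToFrozenDir = /opt/backup/index

With frozenTimePeriodInSecs = 31556926 (about 12 months), as in the post, data stays searchable for about a year and is only archived then. Deleting the archived copies after a further period is not something Splunk manages; that would need an external job such as cron.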
Hi all, I have been using the splunklib package in Python to connect to the Splunk API for some time now, and it works fine. A sample search I use is provided below:

searchquery = """search index=wineventlog EventCode=4688 earliest=-4h | fields user, ETC, ETC, ETC | table user, ETC, ETC, ETC"""
resolveQuery = SplunkQuery(host, port, username, password)
df = resolveQuery.splunk_fetch(searchquery)

The search returns a pandas dataframe (in Python) containing the required information. When I try to retrieve an inputlookup, however, the search doesn't return any information, only an empty dataframe. Below is an example of a search query I use to try and retrieve an inputlookup:

searchquery = """search | inputlookup infomation.csv"""

Any help would be highly appreciated: how can I retrieve inputlookups using the splunklib package in Python?
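One detail worth checking, as a sketch rather than a confirmed fix: inputlookup is a generating command, so it has to be the first command in the pipeline. A query submitted through splunklib normally needs to start either with the search keyword or with a pipe, and "search | inputlookup ..." puts inputlookup second, which typically yields no results. The corrected query string (file name kept exactly as in the post) would be:

| inputlookup infomation.csv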
I wondered if someone can assist me: we're trying to send some log files from AWS in JSON format, coming over as events. I've copied the log into a text file, gone to Add Data, and initially it fails, but after changing the sourcetype to _json it formats fine. However, when trying to send the data in properly, I just get a parsing error. Is there an easy way to identify what's causing this? The format is as follows:

{
  "time": "1628855079519",
  "host": "sgw-3451B77A",
  "source": "share-114D5B31",
  "sourcetype": "aws:storagegateway",
  "sourceAddress": "xx.xx.xx.xx",
  "accountDomain": "XXX",
  "accountName": "server_name",
  "type": "FileSystemAudit",
  "version": "1.0",
  "objectType": "File",
  "bucket": "test-test-test",
  "objectName": "/random-210813-1230.toSend",
  "shareName": "test-test-test",
  "operation": "ReadData",
  "timestamp": "1333222111111",
  "gateway": "aaa-XXXXXXA",
  "status": "Success"
}
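A minimal props.conf sketch for ingesting this as JSON through a file or forwarder input; the sourcetype name is taken from the sample itself, and treating the epoch-millisecond "time" field as the event time is an assumption:

[aws:storagegateway]
# parse the whole event as JSON
INDEXED_EXTRACTIONS = json
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
# timestamp is epoch milliseconds in the "time" field
TIME_PREFIX = \"time\":\s*\"
TIME_FORMAT = %s%3N
MAX_TIMESTAMP_LOOKAHEAD = 20

If the data is instead sent to the HEC /services/collector/event endpoint, top-level keys such as time, host, source and sourcetype are treated as envelope metadata and the payload is expected under an "event" key, which is another common source of parsing errors for objects shaped like this one.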