All Topics

Hey community, I need to set a bindDN password containing special characters for LDAP authentication without using the Web UI. I'm aware that I can set the password in cleartext via authentication.conf; unfortunately, the special characters in the password seem to confuse Splunk. As a result, the hash for each of the several LDAP strategies looks different, even though it is the same password, and authentication does not work at all. I cannot use the Web UI to set the password. I can currently imagine two solutions: 1. Is there a way to escape the special characters within the bindDN password in authentication.conf? 2. Is there a way to set the password via a CLI command that I haven't been able to find so far? Any ideas? Thank you.   Regards, Christian
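For reference, the cleartext approach mentioned above looks roughly like this in authentication.conf (strategy name, bindDN, and password are placeholders; attribute names per the authentication.conf spec). On restart, Splunk replaces the cleartext value with an encrypted hash:

    [myLdapStrategy]
    bindDN = cn=splunk-bind,ou=service,dc=example,dc=com
    bindDNpassword = <cleartext password>

How the .conf parser treats particular special characters can vary by Splunk version, so treat this as a sketch to test rather than a guaranteed fix for the escaping question.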
Hi All, I'm experiencing a funny issue with a report I've set up within the Search app of Splunk. The report generates a straightforward table. I've configured the search to run once per day, with a trigger action to send an email with the search results in a .csv file. This alert has been up and running for months without issue; however, in recent days, when I open the attached .csv file from this daily email, there are alternating blank rows in the CSV file. If I run the search and download the .csv manually, the file doesn't have any alternating blank rows. Similarly, if I update the daily report to attach a PDF of the search results instead, the PDF does not include any alternating blank rows. So I think the issue lies with the 'send email' trigger action, but I can't seem to figure out what's causing this. Is anyone aware of this being a known issue, or is there something obvious I'm overlooking? Thanks
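Alternating blank rows in an emailed CSV are a classic symptom of doubled carriage returns: if something in the mail pipeline rewrites \n to \r\n in a file that already uses \r\n line endings, every record ends up followed by an empty line. This is a hedged guess at the cause here, but the effect is easy to reproduce:

```python
import csv
import io

# A CSV whose lines end in \r\r\n, as produced when a \n -> \r\n rewrite
# is applied to a file that already had \r\n line endings.
data = "host,count\r\r\nweb01,5\r\r\nweb02,7\r\r\n"

# newline='' hands line-ending handling to the csv module (its documented usage);
# each stray \r terminates an extra, empty record between the real rows.
rows = list(csv.reader(io.StringIO(data, newline="")))
print(rows)
```

If this is what is happening, opening the attachment in a hex editor and checking the bytes at the end of each line would confirm it.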
Hi, please can anyone help me sort out this issue? I can see logs being written to the syslog file, but they have not been ingested into Splunk since 26 November 2020, all of a sudden. What may have happened to cause such a log drop? I have looked around many forums but still have not been able to rectify the issue. No configuration changes have been made on the heavy forwarder either. Please help me sort it out. Many thanks.
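When a monitored file stops being ingested without any config change, the forwarder's own logs usually say why. As a hedged starting point (component names as they appear in splunkd.log on recent Splunk versions; host name is a placeholder), searches along these lines against _internal can help pin down the drop:

    index=_internal host=<your_hf> source=*splunkd.log* (log_level=ERROR OR log_level=WARN)

    index=_internal host=<your_hf> component=TailingProcessor OR component=TailReader OR component=WatchedFile

Common culprits they surface include log rotation changing the file path, permission errors, and blocked output queues.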
Hello, we recently set up Splunk on our system, so we are still learning. We have an issue where we are not getting older events in searches. For example, for event ID 4625 (failed logon), we can see the event on the same day it happens, but the next day it will not show up. A few things I have tried: 1. Removed the 'ignore older than 2d' line from inputs.conf. 2. Checked to make sure we are not over on bucket size. Any suggestions on configuring this correctly? I can post config info if requested. Thanks
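Events that are visible on the day they arrive but "disappear" the next day are often a timestamp-parsing problem: the event's _time differs from the time it was indexed, so time-range searches miss it. A hedged way to check (index name is a placeholder) is to search All Time and compare _time with _indextime:

    index=<your_index> EventCode=4625 earliest=0
    | eval event_time=strftime(_time, "%F %T"), index_time=strftime(_indextime, "%F %T")
    | table event_time index_time

If the two columns disagree wildly, the sourcetype's timestamp configuration (TIME_FORMAT, TIME_PREFIX, MAX_DAYS_AGO in props.conf) is the place to look.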
Good afternoon. Is there a resource for learning JavaScript for use in creating custom visualizations in Splunk 8.x and later? I've looked around and not found anything; even a cheat sheet would be great. Many thanks.
Hey Splunkers, I currently have 3 checkboxes to filter data for a panel. My checkbox names are Critical, Major, and Minor:

    Name       Value
    Critical   Sev 1
    Major      Sev 2
    Minor      Sev 3

Based on the severity, the panel searches for all the respective tickets. I want to add 2 values per checkbox. Is there a way I can add two values to one checkbox?

    Name       Value 1   Value 2
    Critical   Sev 1
    Major      Sev 2     Sev 3
    Minor      Sev 4     Sev 5

Or is there an alternative way to implement this? Thanks in advance.
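One common Simple XML pattern (a sketch; the token name, field name, and severity values are placeholders based on the tables above) is to make each checkbox choice's value a complete search fragment carrying both severities, then join the selected fragments with OR via the delimiter tag:

    <input type="checkbox" token="sev_tok">
      <label>Severity</label>
      <choice value="severity=&quot;Sev 1&quot;">Critical</choice>
      <choice value="(severity=&quot;Sev 2&quot; OR severity=&quot;Sev 3&quot;)">Major</choice>
      <choice value="(severity=&quot;Sev 4&quot; OR severity=&quot;Sev 5&quot;)">Minor</choice>
      <delimiter> OR </delimiter>
    </input>

The panel search then uses the token as, e.g., index=tickets ($sev_tok$), so ticking "Major" effectively filters on both Sev 2 and Sev 3.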
In data models, what is the reason for child datasets? Would it not be easier to just create a root dataset with no children?
I have set up the trial version of Splunk Enterprise on my machine and have also created a dummy Java Spring Boot service with the Log4j2 framework. The idea is to capture the logs from this service in Splunk using HEC. I did find a nice tutorial that I followed, but I still don't seem to receive any events in Splunk. Also, as there are no error messages that I can see, I am not sure what the issue is. Can someone please point me in the right direction? This is the guide that I followed: https://github.com/devadyuti/integration-repo/tree/master/spring-log4j2-splunk Please let me know if there is anything else I can provide that would be useful.

pom.xml:

    <?xml version="1.0" encoding="UTF-8"?>
    <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
             xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
        <modelVersion>4.0.0</modelVersion>
        <parent>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-parent</artifactId>
            <version>2.4.0</version>
            <relativePath/> <!-- lookup parent from repository -->
        </parent>
        <groupId>com.example</groupId>
        <artifactId>splunk-log4j</artifactId>
        <version>0.0.1-SNAPSHOT</version>
        <name>splunk-log4j</name>
        <description>Demo project for Splunk with springboot</description>
        <properties>
            <java.version>11</java.version>
        </properties>
        <repositories>
            <repository>
                <id>splunk-artifactory</id>
                <name>Splunk Releases</name>
                <!--<url>https://splunk.artifactoryonline.com/artifactory/ext-releases-local</url>-->
                <url>https://splunk.jfrog.io/splunk/ext-releases-local</url>
            </repository>
        </repositories>
        <dependencies>
            <dependency>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-starter-web</artifactId>
                <exclusions>
                    <exclusion>
                        <groupId>org.springframework.boot</groupId>
                        <artifactId>spring-boot-starter-logging</artifactId>
                    </exclusion>
                </exclusions>
            </dependency>
            <dependency>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-starter-test</artifactId>
                <scope>test</scope>
            </dependency>
            <dependency>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-starter-log4j2</artifactId>
            </dependency>
            <dependency>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-starter-actuator</artifactId>
            </dependency>
            <dependency>
                <groupId>com.splunk.logging</groupId>
                <artifactId>splunk-library-javalogging</artifactId>
                <version>1.8.0</version>
            </dependency>
        </dependencies>
        <build>
            <plugins>
                <plugin>
                    <groupId>org.springframework.boot</groupId>
                    <artifactId>spring-boot-maven-plugin</artifactId>
                </plugin>
            </plugins>
        </build>
    </project>

log4j2.xml:

    <?xml version="1.0" encoding="UTF-8"?>
    <Configuration>
        <Appenders>
            <Console name="console" target="SYSTEM_OUT">
                <PatternLayout pattern="%style{%d{ISO8601}} %highlight{%-5level }[%style{%t}{bright,blue}] %style{%C{10}}{bright,yellow}: %msg%n%throwable" />
            </Console>
            <SplunkHttp name="splunkhttp"
                        url="http://127.0.0.1:8000/services/collector/event"
                        token="xxxxxxxxxxxxxxxxxxxxxx"
                        index="http_log_event_collector_idx"
                        host="127.0.0.1"
                        type="raw"
                        sourcetype="_json"
                        messageFormat="text"
                        disableCertificateValidation="true">
                <PatternLayout pattern="%m" />
            </SplunkHttp>
        </Appenders>
        <Loggers>
            <!-- LOG everything at INFO level -->
            <Root level="trace">
                <AppenderRef ref="console" />
                <AppenderRef ref="splunkhttp" />
            </Root>
        </Loggers>
    </Configuration>
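One thing worth double-checking (a hedged observation, not a confirmed diagnosis): the appender URL above points at port 8000, which is Splunk Web's default port, while the HTTP Event Collector listens on port 8088 by default. A quick way to verify the token and endpoint independently of the application is a manual request (placeholder token; requires a running Splunk instance with HEC enabled):

    # Send a test event straight to HEC; a working setup answers {"text":"Success","code":0}
    curl -k "https://127.0.0.1:8088/services/collector/event" \
         -H "Authorization: Splunk xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" \
         -d '{"event": "hello from curl", "sourcetype": "_json"}'

If the curl succeeds but the appender still sends nothing, the remaining differences (HTTP vs HTTPS, type="raw" vs the /event endpoint) are the next things to reconcile.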
Hi, I have a query:

    index="cisco" hostname=* (cat_name=passed OR cat_name=failed) type="Ethernet"
    | eval site=case(substr(NetworkDeviceName,1,7)=="mysite", substr(NetworkDeviceName,1,7) + substr(NetworkDeviceName, -4), 1=1, substr(NetworkDeviceName,1,7))
    | stats count by site mac_address cat_name type
    | eval type_cat_name=type."_".cat_name
    | eval site_mac=site."_".mac_address
    | xyseries site_mac type_cat_name count
    | rex field=site_mac "(?<site>.*)_(?<mac>.*)"
    | search "Call Check_CISE_Failed_Attempts">=1 AND "Call Check_CISE_Passed_Authentications"="NULL" AND "Framed_CISE_Failed_Attempts"="NULL" AND "Framed_CISE_Passed_Authentications"="NULL"
    | chart dc(mac) As Endpoints by site

The result is a column chart showing, for each site, the count of MAC addresses that match the search condition. Now, when I click on a column I go to another dashboard for the specific site, and for the MAC address I need additional fields to show in a table, such as site, mac_address, port, and interface. I tried adding these fields in the by clause after stats, but it doesn't seem to work. Do you have any suggestions?
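One common way to carry extra fields through a stats without adding them to the group-by (the field names port and interface are taken from the question and assumed to exist in the events) is values():

    | stats count values(port) as port values(interface) as interface by site mac_address cat_name type

This keeps one row per existing group while collecting the associated ports and interfaces as multivalue fields, so they survive for a later table or drilldown instead of splitting the groups the way extra by-clause fields would.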
Hi everyone, I'm trying to create a simple list of all the devices found in the GlobalProtect logs. I'm using rex to match them with regular expressions. I've already used regex101.com to double-check my pattern, but when I run it in Splunk it fails. My search:

    index="ind_Aaaabbbb" log_subtype="globalprotect" globalprotectgateway-config-succ OR globalprotectgateway-logout-succ
    | rex field=_raw (?<device>\w\w\w\w\w\w\s\w\w\w\w:\s+(?:\w+\-\w+\-\w+|\w+))
    | table _time, user, event_id, src_ip, device, dvc_name, dvc

The expressions I want to capture:

    Device name: DDD-AAA-BBBBB
    Device name: DDDAAABBBBBBB

Error returned by Splunk:

    Error in 'SearchParser': Missing a search command before '\'. Error at position '198' of search query 'search index="index" log_subtype="globalpro...{snipped} {errorcontext = -\w+\-\w+|\w+)) | tab}'.

Example data:

    SYSTEM,globalprotect,0,2020/11/29,,globalprotectgateway-config-succ,Gateway-XXX-XX-XXX-N,0,0,general,informational,"GlobalProtect gateway client configuration generated. username.5, Private IP: 00.000.000.00, Client version: 5.1.1-12, Device name: DDD-AAA-BBBBB, Client OS version: Microsoft Windows 10 Pro , 64-bit, VPN type: Device Level VPN.",000...,0x0,0,0,0,0,,FW-PA-0000-AAA-CCC-TTTT
    SYSTEM,globalprotect,0,2020/11/29 ,,globalprotectgateway-config-succ,Gateway-XXX-XX-N,0,0,general,informational,"GlobalProtect gateway client configuration generated. username.5, Private IP: 00.000.000.000, Client version: 5.1.5-20, Device name: DDDAAABBBBBBB, Client OS version: Microsoft Windows 10 Pro , 64-bit, VPN type: Device Level VPN.",000...,0x0,0,0,0,0,,FW-PA-0000-AAA-CCC-TTTT
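The parser error above is consistent with the rex pattern not being quoted: in SPL, the regular expression argument to rex must be a quoted string, otherwise the backslashes are parsed as search syntax. A hedged rewrite (same capture intent against the sample data, with a slightly simplified pattern):

    | rex field=_raw "Device name: (?<device>[\w-]+)"

Note that regex101 validates the pattern itself but not SPL's quoting rules, which would explain why it passed there and failed in Splunk.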
Splunk DB Connect with SQL Server: what privileges does the login account require?
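For typical read-only DB Connect inputs, a minimal setup is a login that can connect and SELECT from the relevant database. As a sketch (login, password, and database names are placeholders; your DBA policies may require more or less):

    -- Create a dedicated login and a read-only user in the target database
    CREATE LOGIN splunk_dbx WITH PASSWORD = 'StrongPassword!1';
    GO
    USE my_database;
    CREATE USER splunk_dbx FOR LOGIN splunk_dbx;
    ALTER ROLE db_datareader ADD MEMBER splunk_dbx;
    GO

Inputs only read, so db_datareader is usually sufficient; DB Connect outputs (writing back to the database) would additionally need INSERT permission on the target tables.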
I have two multivalue fields, script and instance. I joined them into another multivalue field (steps) using mvappend. I would like to order the values of this new field, steps, in ascending order. I found mvsort, but it only sorts alphabetically, not chronologically.
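mvsort is indeed lexicographic. A common workaround (a sketch; it assumes the values of steps are numeric) is to zero-pad each value before sorting, so that lexicographic order coincides with numeric order:

    | eval steps=mvsort(mvmap(steps, printf("%010d", tonumber(steps))))

For timestamp-like values the same idea applies: convert each value to epoch seconds with strptime inside mvmap, sort, and convert back with strftime. Note that mvmap requires Splunk 8.0 or later.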
Hello, I am running a search query on a search head and getting the errors below. When I run the same query in another environment (a single-node installation), it works fine. Is there a way to fix this query? Both hosts named in the errors are indexers.

    [hp924srv] Field 'x' does not exist in the data.
    [hp925srv] Field 'x' does not exist in the data.

Query:

    | tstats summariesonly=false avg(All_TPS_Logs.duration) AS average, count(All_TPS_Logs.duration) AS count, stdev(All_TPS_Logs.duration) AS stdev, median(All_TPS_Logs.duration) AS median, exactperc75(All_TPS_Logs.duration) AS perc75, exactperc95(All_TPS_Logs.duration) AS perc95, exactperc99.5(All_TPS_Logs.duration) AS perc99.5, min(All_TPS_Logs.duration) AS min, max(All_TPS_Logs.duration) AS max, earliest(_time) as start, latest(_time) as stop FROM datamodel=TPS_V7 WHERE (nodename=All_TPS_Logs host=AMBER_PSC47 All_TPS_Logs.duration <= 1000000000000 All_TPS_Logs.duration >= -10000000 (All_TPS_Logs.user=* OR NOT All_TPS_Logs.user=*) All_TPS_Logs.operationIdentity="*") NOT All_TPS_Logs.overflow=true GROUPBY All_TPS_Logs.fullyQualifiedMethod
    | rename All_TPS_Logs.fullyQualifiedMethod as fullyQualifiedMethod
    | eval time_slice_per_min = (stop-start)/60
    | eval Throughput_per_minute=count/time_slice_per_min
    | eval Throughput_per_second=count/(stop-start)
    | append [
        tstats summariesonly=false avg(All_TPS_Logs.duration) AS average, count(All_TPS_Logs.duration) AS count, stdev(All_TPS_Logs.duration) AS stdev, median(All_TPS_Logs.duration) AS median, exactperc75(All_TPS_Logs.duration) AS perc75, exactperc95(All_TPS_Logs.duration) AS perc95, exactperc99.5(All_TPS_Logs.duration) AS perc99.5, min(All_TPS_Logs.duration) AS min, max(All_TPS_Logs.duration) AS max, earliest(_time) as start, latest(_time) as stop FROM datamodel=TPS_V7 WHERE (nodename=All_TPS_Logs host=AMBER_PSC47 All_TPS_Logs.duration <= 1000000000000 All_TPS_Logs.duration >= -10000000 All_TPS_Logs.overflow=true (All_TPS_Logs.user=* OR NOT All_TPS_Logs.user=*) All_TPS_Logs.operationIdentity="*") GROUPBY All_TPS_Logs.fullyQualifiedMethod
        | rename All_TPS_Logs.fullyQualifiedMethod as fullyQualifiedMethod
        | eval fullyQualifiedMethod = fullyQualifiedMethod." (overflow)"
        | eval time_slice_per_min = (stop-start)/60
        | eval Throughput_per_minute = count/time_slice_per_min
        | eval Throughput_per_second = count/(stop-start)
    ]
    | eval average = round(average, 1)
    | eval stdev = round(stdev, 1)
    | sort - average
In the raw data, the timestamp field value is 1606730113962778, but for the timestamp field in the interesting-fields list I am getting two values: none and 606730113962778. Because of this, _time is not being set properly. Below is the props configuration:

    [Sourcetype]
    INDEXED_EXTACTION =json
    NO_BINARY_CHECK = true
    TIMESTAMP_FIELDS= timestamp
    TIME_FOMAT=%s%6N
    pulldown_type=1
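Two attribute names in the stanza above appear to be misspelled, which could explain the behavior, since Splunk ignores unknown keys: per the props.conf spec, the spellings are INDEXED_EXTRACTIONS and TIME_FORMAT. A corrected sketch with the same values as the question:

    [Sourcetype]
    INDEXED_EXTRACTIONS = json
    NO_BINARY_CHECK = true
    TIMESTAMP_FIELDS = timestamp
    TIME_FORMAT = %s%6N
    pulldown_type = 1

Here %s%6N parses an epoch value with six subsecond digits, which matches 1606730113962778 (epoch seconds plus microseconds).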
I want to integrate my cloud network-monitoring instance's webhook messages with Splunk, so that I can see and process the webhook messages in Splunk.
Hi, I'm writing a custom search command, and I'm running into the following error:

    Failed to write buffer of size 21 to external process file descriptor (Broken pipe)

The custom search is an eventing command (the command name is 'sum'):

    #!/usr/bin/python
    import exec_anaconda
    exec_anaconda.exec_anaconda()

    import pandas as pd
    import os, sys
    import logging, logging.handlers

    import splunk
    from splunklib.searchcommands import dispatch, EventingCommand, Configuration, Option, validators

    @Configuration()
    class ExEventsCommand(EventingCommand):

        def transform(self, records):
            l = list(records)
            l.sort(key=lambda r: r['_raw'])
            return l

    if __name__ == "__main__":
        dispatch(ExEventsCommand, sys.argv, sys.stdin, sys.stdout, __name__)

The error occurs only sometimes; it looks like it depends on the amount of data returned by the search, as illustrated by the following searches:

    index = _internal | head 10000 | sum    (no error)
    index = _internal | head 100000 | sum   (error)
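One hedged observation: transform() above consumes the entire record stream into a list before returning anything, while the search-command protocol feeds records to the process in chunks; with large result sets, that buffering pattern is a plausible trigger for the broken pipe. Independent of Splunk, the sorting logic itself can be written as a generator and checked in isolation (plain dicts stand in for Splunk records here):

```python
def sort_by_raw(records):
    """Yield records ordered by their _raw field (still buffers all input)."""
    for record in sorted(records, key=lambda r: r.get("_raw", "")):
        yield record

events = [{"_raw": "b"}, {"_raw": "a"}, {"_raw": "c"}]
print([r["_raw"] for r in sort_by_raw(events)])  # ['a', 'b', 'c']
```

Note that a global sort inherently requires all events, so no transform can be truly streaming here; for large volumes it may be more robust to let Splunk's own sort command do the ordering.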
Hi, I need to collect logs from a critical database, and I need to be sure there will be no impact on the server. Is there an option to limit DB Connect to 10 seconds for each query? Thanks.
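DB Connect v3 exposes a per-input query timeout, configurable in the input's settings in the UI and stored in db_inputs.conf behind it. As a sketch (stanza and connection names are placeholders; check the spec file shipped with your DB Connect version for the exact attribute name):

    [my_critical_db_input]
    connection = my_connection
    query_timeout = 10

A timeout bounds each query's duration rather than its load; reducing the fetch size and scheduling the input off-peak are complementary ways to limit impact on the database server.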
I'm trying to integrate New Relic and Splunk. To do so, I'm trying to use curl commands and also the Splunk Add-on for New Relic. For the curl commands, I don't know how to set the local host and port. When I try the Splunk URL used for our applications, I get a "page not found" error followed by a large set of decimal numbers. For the Splunk Add-on for New Relic, I followed the steps in the manual, but it throws an error: "You (user=username) do not have permission to perform this operation (requires capability: admin_all_objects)."
We are trying to ingest logs from Proofpoint TAP using the available add-on. We have successfully created the TAP input in our Splunk Cloud, but we see no data coming in. Upon further inspection, the following error appears every time the input runs:

    11-30-2020 13:27:57.345 +0000 ERROR ExecProcessor [11603 ExecProcessor] - message from "/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/TA-Proofpoint-TAP/bin/proofpoint_tap_siem.py" proofpoint_tap_siem://TAP_proofpoint_test: stream_events/proofpoint_tap_siem://TAP_proofpoint_test: Error updating inputs.conf: HTTP 400 Bad Request -- Argument "python.version" is not supported by this handler.

Any idea what the problem might be and how we can fix it?
We were successfully ingesting O365 management logs using the Splunk O365 add-on. We noticed a drop in events, then a very small number of events being forwarded for a day after the drop; after that, events stopped coming in completely. This was a working input with logs coming in as expected, and no configuration change was made to Splunk or O365 whatsoever. Currently we see the following error in Splunk each time the input runs. The error appears after the input successfully acquires the access token and calls the Management API:

    2020-11-30 12:59:34,992 level=ERROR pid=27098 tid=MainThread logger=splunk_ta_o365.modinputs.management_activity pos=utils.py:wrapper:67 | datainput=b'O365_audit_general' start_time=1606741174 | message="Data input was interrupted by an unhandled exception."
    Traceback (most recent call last):
      File "/opt/splunk/etc/apps/splunk_ta_o365/bin/splunksdc/utils.py", line 65, in wrapper
        return func(*args, **kwargs)
      File "/opt/splunk/etc/apps/splunk_ta_o365/bin/splunk_ta_o365/modinputs/management_activity.py", line 102, in run
        executor.run(adapter)
      File "/opt/splunk/etc/apps/splunk_ta_o365/bin/splunksdc/batch.py", line 47, in run
        for jobs in delegate.discover():
      File "/opt/splunk/etc/apps/splunk_ta_o365/bin/splunk_ta_o365/modinputs/management_activity.py", line 126, in discover
        if not subscription.is_enabled(session):
      File "/opt/splunk/etc/apps/splunk_ta_o365/bin/splunk_ta_o365/common/portal.py", line 151, in is_enabled
        response = self._perform(session, 'GET', '/subscriptions/list')
      File "/opt/splunk/etc/apps/splunk_ta_o365/bin/splunk_ta_o365/common/portal.py", line 169, in _perform
        return self._request(session, method, url, kwargs)
      File "/opt/splunk/etc/apps/splunk_ta_o365/bin/splunk_ta_o365/common/portal.py", line 181, in _request
        raise O365PortalError(response)
      File "/opt/splunk/etc/apps/splunk_ta_o365/bin/splunk_ta_o365/common/portal.py", line 26, in __init__
        message = str(response.status_code) + ':' + payload
    TypeError: can only concatenate str (not "bytes") to str

The same error appears for all of the add-on's inputs (status, message, audit, sharepoint, etc.). Any idea what the problem might be?
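The traceback's last line shows the add-on concatenating a str with a bytes response payload, which Python 3 forbids, so whatever the underlying API error is, the add-on masks it at that line. As a minimal illustration of the str/bytes behavior involved (the payload value is a hypothetical stand-in, not the actual API response):

```python
status_code = 400
payload = b'{"error": "invalid subscription"}'  # bytes, as an HTTP client's raw body

# str + bytes raises TypeError in Python 3 -- this is the add-on's crash:
try:
    message = str(status_code) + ":" + payload
except TypeError as exc:
    print(exc)

# Decoding first produces the intended error message:
message = str(status_code) + ":" + payload.decode("utf-8", errors="replace")
print(message)  # 400:{"error": "invalid subscription"}
```

In the add-on itself this corresponds to portal.py line 26; a version of the add-on with Python 3-clean response handling (or a local .decode() patch) should surface the real API error instead of the TypeError.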