All Topics


Hi Splunkers, I have a request from my customer. Like many prod environments, we have Windows logs. We know that with the Splunk Add-on for Microsoft Windows we can see events on the Splunk console in two ways: legacy format (like the original ones in AD) or XML. Is it possible to see them in JSON format? If so, can we achieve this directly with the above add-on, or do we need other tools?
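As far as I know the add-on renders events in classic or XML format only, so getting JSON would take a conversion step outside the add-on. A minimal sketch of that idea, assuming a simplified flat XML event (the sample tags and function are hypothetical; real Windows XML events are nested and namespaced):

```python
import json
import xml.etree.ElementTree as ET

def xml_event_to_json(xml_text: str) -> str:
    """Flatten the child elements of a (simplified) XML event into JSON."""
    root = ET.fromstring(xml_text)
    record = {child.tag: (child.text or "").strip() for child in root}
    return json.dumps(record)

# Hypothetical sample event for illustration only.
sample = "<Event><EventID>4624</EventID><Computer>host01</Computer></Event>"
print(xml_event_to_json(sample))  # {"EventID": "4624", "Computer": "host01"}
```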
Hello, I'm implementing Splunk Security Essentials in an environment that already has detection rules based on the MITRE ATT&CK framework. I have entered the data sources in Data Inventory and marked them as "Available". In Content > Custom Content, I added our detection rules by hand. I've specified the Tactics and the MITRE Techniques and Sub-Techniques. I've also indicated their status in bookmarking, and some are "Successfully implemented". When I go to Analytics Advisor > MITRE ATT&CK Framework, I see the "Content (Available)" in the MITRE ATT&CK matrix, and it's consistent with our detection rules. But when I select Threat Groups, in "2. Selected Content" under "Total Content Selected" I get zero, even though the detection rules relate to the sub-techniques used by the selected Threat Groups. How can I solve this problem?
Hi, is there any documentation on how tokens work when used in JavaScript files? The docs at https://docs.splunk.com/Documentation/Splunk/9.1.2/Viz/tokens don't present much info on JavaScript usage. In particular, I am trying to use tokens to delete KV store values and I am confused about how this can be done. Just using tokens.unset() is not working. Any help would be appreciated!
We are setting up the Splunk OTel Collector against an SSL- and authorization-enabled Solr. We are facing an issue passing the username and password for the Solr endpoint in the agent_config.yaml file. Refer to the content of the config file below; for security reasons, we have masked the hostname, user ID, and password details.

receivers:
  smartagent/solr:
    type: collectd/solr
    host: <hostname>
    port: 6010
    enhancedMetrics: true
exporters:
  sapm:
    access_token: "${SPLUNK_ACCESS_TOKEN}"
    endpoint: "${SPLUNK_TRACE_URL}"
  signalfx:
    access_token: "${SPLUNK_ACCESS_TOKEN}"
    api_url: "${SPLUNK_API_URL}"
    ingest_url: "${SPLUNK_INGEST_URL}"
    sync_host_metadata: true
    headers:
      username: <username>
      password: <password>
    correlation:
  otlp:
    tls:
      insecure: false
      cert_file: <certificate_file>.crt
      key_file: <key_file>.key

Error log:

-- Logs begin at Fri 2023-11-17 23:32:38 EST, end at Tue 2023-11-28 02:46:22 EST. --
Nov 18 14:06:48 aescrsbsql01.scr.dnb.net otelcol[1035]: File "/usr/lib/splunk-otel-collector/agent-bundle/lib/python3.11/site-packages/sfxrunner/scheduler/simple.py", line 57, in _call_on_interval
Nov 18 14:06:48 aescrsbsql01.scr.dnb.net otelcol[1035]: func()
Nov 18 14:06:48 aescrsbsql01.scr.dnb.net otelcol[1035]: File "/usr/lib/splunk-otel-collector/agent-bundle/collectd-python/solr/solr_collectd.py", line 194, in read_metrics
Nov 18 14:06:48 aescrsbsql01.scr.dnb.net otelcol[1035]: solr_cloud = fetch_collections_info(data)
Nov 18 14:06:48 aescrsbsql01.scr.dnb.net otelcol[1035]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Nov 18 14:06:48 aescrsbsql01.scr.dnb.net otelcol[1035]: File "/usr/lib/splunk-otel-collector/agent-bundle/collectd-python/solr/solr_collectd.py", line 328, in fetch_collections_info
Nov 18 14:06:48 aescrsbsql01.scr.dnb.net otelcol[1035]: get_data = _api_call(url, data["opener"])
Nov 18 14:06:48 aescrsbsql01.scr.dnb.net otelcol[1035]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Nov 18 14:06:48 aescrsbsql01.scr.dnb.net otelcol[1035]: File "/usr/lib/splunk-otel-collector/agent-bundle/collectd-python/solr/solr_collectd.py", line 286, in _api_call
Nov 18 14:06:48 aescrsbsql01.scr.dnb.net otelcol[1035]: resp = urllib.request.urlopen(req)
Nov 18 14:06:48 aescrsbsql01.scr.dnb.net otelcol[1035]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^
Nov 18 14:06:48 aescrsbsql01.scr.dnb.net otelcol[1035]: File "/usr/lib/splunk-otel-collector/agent-bundle/lib/python3.11/urllib/request.py", line 216, in urlopen
Nov 18 14:06:48 aescrsbsql01.scr.dnb.net otelcol[1035]: return opener.open(url, data, timeout)
Nov 18 14:06:48 aescrsbsql01.scr.dnb.net otelcol[1035]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Nov 18 14:06:48 aescrsbsql01.scr.dnb.net otelcol[1035]: File "/usr/lib/splunk-otel-collector/agent-bundle/lib/python3.11/urllib/request.py", line 519, in open
Nov 18 14:06:48 aescrsbsql01.scr.dnb.net otelcol[1035]: response = self._open(req, data)
Nov 18 14:06:48 aescrsbsql01.scr.dnb.net otelcol[1035]: ^^^^^^^^^^^^^^^^^^^^^
Nov 18 14:06:48 aescrsbsql01.scr.dnb.net otelcol[1035]: File "/usr/lib/splunk-otel-collector/agent-bundle/lib/python3.11/urllib/request.py", line 536, in _open
Nov 18 14:06:48 aescrsbsql01.scr.dnb.net otelcol[1035]: result = self._call_chain(self.handle_open, protocol, protocol +
Nov 18 14:06:48 aescrsbsql01.scr.dnb.net otelcol[1035]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Nov 18 14:06:48 aescrsbsql01.scr.dnb.net otelcol[1035]: File "/usr/lib/splunk-otel-collector/agent-bundle/lib/python3.11/urllib/request.py", line 496, in _call_chain
Nov 18 14:06:48 aescrsbsql01.scr.dnb.net otelcol[1035]: result = func(*args)
Nov 18 14:06:48 aescrsbsql01.scr.dnb.net otelcol[1035]: ^^^^^^^^^^^
Nov 18 14:06:48 aescrsbsql01.scr.dnb.net otelcol[1035]: File "/usr/lib/splunk-otel-collector/agent-bundle/lib/python3.11/urllib/request.py", line 1377, in http_open
Nov 18 14:06:48 aescrsbsql01.scr.dnb.net otelcol[1035]: return self.do_open(http.client.HTTPConnection, req)
Nov 18 14:06:48 aescrsbsql01.scr.dnb.net otelcol[1035]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Nov 18 14:06:48 aescrsbsql01.scr.dnb.net otelcol[1035]: File "/usr/lib/splunk-otel-collector/agent-bundle/lib/python3.11/urllib/request.py", line 1352, in do_open
Nov 18 14:06:48 aescrsbsql01.scr.dnb.net otelcol[1035]: r = h.getresponse()
Nov 18 14:06:48 aescrsbsql01.scr.dnb.net otelcol[1035]: ^^^^^^^^^^^^^^^
Nov 18 14:06:48 aescrsbsql01.scr.dnb.net otelcol[1035]: File "/usr/lib/splunk-otel-collector/agent-bundle/lib/python3.11/http/client.py", line 1378, in getresponse
Nov 18 14:06:48 aescrsbsql01.scr.dnb.net otelcol[1035]: response.begin()
Nov 18 14:06:48 aescrsbsql01.scr.dnb.net otelcol[1035]: File "/usr/lib/splunk-otel-collector/agent-bundle/lib/python3.11/http/client.py", line 318, in begin
Nov 18 14:06:48 aescrsbsql01.scr.dnb.net otelcol[1035]: version, status, reason = self._read_status()
Nov 18 14:06:48 aescrsbsql01.scr.dnb.net otelcol[1035]: ^^^^^^^^^^^^^^^^^^^
Nov 18 14:06:48 aescrsbsql01.scr.dnb.net otelcol[1035]: File "/usr/lib/splunk-otel-collector/agent-bundle/lib/python3.11/http/client.py", line 300, in _read_status
Nov 18 14:06:48 aescrsbsql01.scr.dnb.net otelcol[1035]: raise BadStatusLine(line)
Nov 18 14:06:48 aescrsbsql01.scr.dnb.net otelcol[1035]: [30B blob data]
Nov 18 14:06:48 aescrsbsql01.scr.dnb.net otelcol[1035]: [3B blob data]
Nov 18 14:06:48 aescrsbsql01.scr.dnb.net otelcol[1035]: {"kind": "receiver", "name": "smartagent/solr", "data_type": "metrics", "createdTime": 1700334408.8198304, "lineno": 56, "logger": "root", "monitorID": "smartagentsolr", "monitorType": "collectd/solr", "runnerPID": 1703, "sourcePath": "/usr/lib/splunk-otel-collector/agent-bundle/lib/python3.11/site-packages/sfxrunner/logs.py"}
Nov 18 14:06:58 aescrsbsql01.scr.dnb.net otelcol[1035]: 2023-11-18T14:06:58.821-0500 error signalfx/handler.go:188 Traceback (most recent call last):
Hi! I am trying to evaluate AppDynamics for monitoring IIS sites, but I get the error "unable to create application" when I try to create an application. / FKE
Hi, has anyone had issues monitoring Splunk servers with Zabbix agents? Just monitoring, no integration or log ingestion. Thanks, M
Hi, I'm trying to set up a way to automatically assign notables to the analysts, and to spread them evenly. The "default owner" in the notable adaptive response wouldn't help, as it will keep assigning the same rule to the same person. Is there a way to shuffle the assignees automatically?
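One approach that avoids a static per-rule owner is deriving the owner from the notable itself, e.g. hashing the event ID over the analyst list: stable, and roughly uniform across many notables. A rough sketch of the idea only (the analyst names are assumptions; a true round-robin would need a counter persisted between runs, e.g. in a KV store):

```python
import hashlib

# Assumed analyst usernames - replace with the real on-call list.
ANALYSTS = ["alice", "bob", "carol"]

def pick_owner(event_id: str) -> str:
    """Map a notable's event ID onto the analyst list deterministically."""
    digest = hashlib.md5(event_id.encode()).hexdigest()
    return ANALYSTS[int(digest, 16) % len(ANALYSTS)]

owners = [pick_owner(f"notable-{i}") for i in range(9)]
```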
I'm getting this error from a connection in DB Connect: "There was an error processing your request. It has been logged (ID xxxx)". I've manually copied the query into db_inputs.conf, but it doesn't work. Can anyone help me?
Hi, I am trying to set up an alert with the following query for tickets that are not assigned to someone after 10 minutes. I want the ticket number to be populated in the mail, but instead the mail arrives without the ticket number.

index="servicenow" sourcetype=":incident"
| where assigned_to = ""
| eval age = now() - _time
| where age>600
| table ticket_number, age, assignment_group, team
| lookup team_details.csv team as team OUTPUTNEW alert_email, enable_alert
| where enable_alert = Y
| sendemail to="$alert_email$" subject="Incident no. "$ticket_number$" is not assigned for more than 10 mins - Please take immediate action" message=" Hi Team, This is to notify you that the ticket: "$ticket_number$" is not assigned for more than 10 mins. Please take necessary action on priority"
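One thing worth checking: in alert email actions, per-result fields are referenced as $result.fieldname$ rather than $fieldname$, and the nested double quotes inside the subject/message strings break the quoting. A hedged sketch of how the subject might look if this runs as a saved alert with the email action (not tested against this environment):

```
subject="Incident no. $result.ticket_number$ is not assigned for more than 10 mins - Please take immediate action"
```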
I want to change the message in a log, i.e.

<list>
  <Header>.....</Header>
  <status>
    <Message>Thuihhh_4y3y27y234yy4 is pending</Message>
  </status>
</list>

to

<list>
  <Header>.....</Header>
  <status>
    <Message>request is pending</Message>
  </status>
</list>

How can I achieve this using rex with sed mode in Splunk?
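The substitution can be prototyped in plain regex first; the same pattern should carry over to something like | rex mode=sed field=_raw "s/<Message>\S+ is pending<\/Message>/<Message>request is pending<\/Message>/" (the exact SPL is an untested sketch). In Python:

```python
import re

raw = ("<list><Header>.....</Header><status>"
       "<Message>Thuihhh_4y3y27y234yy4 is pending</Message></status></list>")

# Replace the variable identifier before "is pending" with a fixed word.
cleaned = re.sub(r"<Message>\S+ is pending</Message>",
                 "<Message>request is pending</Message>", raw)
print(cleaned)
```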
I want to extract the following information and make it a field called "error message".

index=os source="/var/log/syslog" "*authentication failure*" OR "Generic preauthentication failure"

Example events:

Nov 28 01:02:31 server1 sssd[ldap_child[12010]]: Failed to initialize credentials using keytab [MEMORY:/etc/krb5.keytab]: Generic preauthentication failure. Unable to create GSSAPI-encrypted LDAP connection.
Nov 28 01:02:29 server2 proxy_child[1939385]: pam_unix(system-auth-ac:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=10.177.46.57 user=hippm
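A sketch of the capture logic against the two sample events; in Splunk the same pattern would go into a | rex command as a named group (the field name error_message and the pattern itself are assumptions, not the only way to cut it):

```python
import re

# Capture from the failure phrase to the end of the line.
PATTERN = re.compile(
    r"(?P<error_message>(?:Generic preauthentication|authentication) failure.*)")

line1 = ("Nov 28 01:02:31 server1 sssd[ldap_child[12010]]: Failed to initialize "
         "credentials using keytab [MEMORY:/etc/krb5.keytab]: Generic "
         "preauthentication failure. Unable to create GSSAPI-encrypted LDAP connection.")
line2 = ("Nov 28 01:02:29 server2 proxy_child[1939385]: pam_unix(system-auth-ac:auth): "
         "authentication failure; logname= uid=0 euid=0 tty=ssh ruser= "
         "rhost=10.177.46.57 user=hippm")

m1 = PATTERN.search(line1)
m2 = PATTERN.search(line2)
```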
Hello there, I would like to convert the default time to the local country timezone and place the converted time next to the default one. The default timezone is Central European Time, and based on the country name available in the report, I need to convert the timezone. I guess I need a lookup table with the country name and that country's timezone.

Timestamp                  CountryCode  CountryName    Region
2023-10-29T13:15:51.711Z   BR           Brazil         Americas
2023-10-30T10:13:19.160Z   BH           Bahrain        APEC
2023-10-30T19:15:24.263Z   AE           Arab Emirates  APEC
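A sketch of the conversion logic, assuming the Z-suffixed timestamps are UTC and a simple country-code-to-IANA-timezone lookup (the mapping below is an assumption; countries spanning several timezones need something finer-grained):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# Assumed country-to-timezone lookup table.
COUNTRY_TZ = {"BR": "America/Sao_Paulo", "BH": "Asia/Bahrain", "AE": "Asia/Dubai"}

def to_local(ts_utc: str, country_code: str) -> str:
    """Convert a Z-suffixed UTC timestamp to the country's local time."""
    dt = datetime.strptime(ts_utc, "%Y-%m-%dT%H:%M:%S.%fZ").replace(tzinfo=timezone.utc)
    local = dt.astimezone(ZoneInfo(COUNTRY_TZ[country_code]))
    return local.strftime("%Y-%m-%d %H:%M:%S %Z")

print(to_local("2023-10-30T10:13:19.160Z", "BH"))  # Bahrain is UTC+3 year-round
```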
Hi All, we have configured application log monitoring on Windows application servers. The log path has a folder where all the _json files are stored. There are more than 300 JSON files in each folder, with different timestamps and dates. We have configured inputs.conf as shown below with ignoreOlderThan = 2d so that Splunk should not consume much CPU/memory. But we can still see that memory and CPU usage on the application server is going high. Kindly suggest best-practice methods so that the Splunk universal forwarder won't consume much CPU and memory.

[monitor://C:\Logs\xyz\zbc\*]
disabled = false
index = preprod_logs
interval = 300
ignoreOlderThan = 2d
Hi, I want to inventory all Splunk tools related to artificial intelligence and observability. Here is the list:

Splunk AI Assistant - PREVIEW (formerly SPL Copilot)
Splunk Machine Learning Toolkit (MLTK)
Splunk App for Data Science and Deep Learning (DSDL)
Splunk IT Service Intelligence
Splunk App for Anomaly Detection

Did I forget some tools? Thanks
"Hey Splunk experts! I'm a Splunk newbie and working with data where running `stats count by status` gives me 'progress' and 'Not Started'. I'd like to include 'Wip progress' and 'Completed' in the r... See more...
"Hey Splunk experts! I'm a Splunk newbie and working with data where running `stats count by status` gives me 'progress' and 'Not Started'. I'd like to include 'Wip progress' and 'Completed' in the results. When running `stats count by status`. Desired output is: - Not Started - Progress - Wip Progress - Completed  Any tips or examples on how to modify my query to achieve this would be fantastic! Thanks 
Hi, I am trying to reconcile access requests with actual logins. I have a list of events from our systems of when users have logged in:

| table _time os host user clientName clientAddress signature logonType

I have a list of requests which cover a time frame and potentially multiple logins to multiple systems:

| table key host reporterName reporterEmail summary changeStartDate changeEndDate

So I want a list of events with any corresponding requests (could be none, so I can alert the user/IT), joining on host and user, with _time between changeStartDate and changeEndDate. I do have this working by using map (see below), but it's very slow and not operable over large datasets/time ranges. There must be a better way. I had issues with matching on the time range, handling cases where there may be no match, and optional username matching based on OS. Does anyone have any ideas?

Existing search:

...search...
| table _time os host user clientName clientAddress signature logonType
| convert mktime(_time) as epoch
| sort -_time
| map maxsearches=9999 search="
  | inputlookup Request_admin_access.csv
  | eval os=\"$os$\"
  | eval outerHost=\"$host$\"
  | eval user=\"$user$\"
  | eval clientName=\"$clientName$\"
  | eval clientAddress=\"$clientAddress$\"
  | eval signature=\"$signature$\"
  | eval logonType=\"$logonType$\"
  | eval startCheck=if(tonumber($epoch$)>=tonumber(changeStartDate), 1, 0)
  | eval endCheck=if(tonumber($epoch$)<=tonumber(changeEndDate), 1, 0)
  | eval userCheck=if(normalisedReporterName==\"$normalisedUserName$\", 1, 0)
  | where host=outerHost
  | eval match=case(
      os==\"Windows\" AND startCheck==1 AND endCheck==1, 1,
      os==\"Linux\" AND startCheck==1 AND endCheck==1 AND userCheck==1, 1)
  | appendpipe [
      | makeresults format=csv data=\"_time,os,host,user,clientName,clientAddress,signature,logonType,wimMatch
        $epoch$,$os$,$host$,$user$,$clientName$,$clientAddress$,$signature$,$logonType$,1\" ]
  | where match==1
  | eval _time=$epoch$
  | head 1
  | convert ctime(changeStartDate) timeformat=\"%F %T\"
  | convert ctime(changeEndDate) timeformat=\"%F %T\"
  | fields _time os host user clientName clientAddress signature logonType key reporterName reporterEmail summary changeStartDate changeEndDate"
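The interval join that map performs here can be stated directly: for each login event, find any request covering its host, user, and time window, and flag events with no covering request. A toy sketch of that logic with made-up data; in SPL, similar results are often achieved by appending the lookup rows to the events and using streamstats/eventstats rather than map:

```python
# Assumed login events: (epoch, host, user).
events = [
    (1700000000, "srv1", "alice"),
    (1700050000, "srv2", "bob"),
]
# Assumed requests: (host, user, changeStartDate, changeEndDate).
requests = [
    ("srv1", "alice", 1699990000, 1700010000),
]

def match_requests(events, requests):
    """Return each event with a flag for whether any request covers it."""
    results = []
    for ts, host, user in events:
        covered = any(h == host and u == user and start <= ts <= end
                      for h, u, start, end in requests)
        results.append((ts, host, user, covered))
    return results
```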
Hi, I mistakenly cloned an alert into the "Slack Alerts" app instead of the usual "Search & Reporting" app. The alert works and sends Slack messages when triggered, but it is in the wrong app. Worse, the alert now appears on the "All Configurations" page. I am able to disable the alert but not remove it, and I really need to remove it from the "All Configurations" page. I'm also not able to edit the alert in any way. Is it possible to remove the alert from the "All Configurations" page? Thank you.
Is there a definitive KB article that tells us what exactly makes up a user's "Disk Space Limit"? What activities count towards this limit? Also, what can help users clean up their usage? Screenshot attached of the setting I am talking about. I appreciate any and all help.
Hello, I have a table that shows vulnerabilities by asset name and severity level. For example, one asset name has 3 critical, 2 high, 3 medium, and 1 low. What I want is to be able to click on the critical count and show just those critical vulnerabilities for that asset name, and so on. I am not sure whether that requires a condition (and how that would be set up) or just a simple drill-down. Can someone please help?
Hi! We use Splunk Stream 7.3.0. When an event longer than 1,000,000 characters arrives in a log, Splunk truncates it. The event is in JSON format. What settings should be applied in Splunk Stream so that Splunk parses the data correctly? Thanks!
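Line length at parse time is governed by the TRUNCATE setting in props.conf (default 10000 bytes), so if events are being cut at a fixed character count, raising it for the relevant sourcetype may help. A sketch with an assumed sourcetype name, to be adapted and tested:

```
[my_stream_sourcetype]
# 0 disables truncation entirely; a large explicit value is usually safer.
TRUNCATE = 2000000
```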