All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Hi everyone, I am following the guide to create a new custom REST endpoint, but I am having trouble debugging my Python code locally. How can I debug the Python code locally? Currently I am running Splunk in Docker and mounting my local app into the Splunk container. After every code change I have to refresh and debug on Splunk again, which takes a lot of time. Does anyone have another way to debug Python code on Splunk? Thanks.
To follow up on yesterday's SPL query: I am facing huge events with multivalue fields, and I want to break each value out into a single event. How can I achieve this? My current events look like the example below.
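
A minimal sketch of the usual approach, assuming the multivalue field is named values (the index and field names here are placeholders):

index=my_index
| mvexpand values

mvexpand copies the event once per value of the named field; if several multivalue fields need to stay aligned, they are usually stitched together with mvzip before expanding.
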
Hello everyone! What is the best way to remove the dots from a domain in a field? For example, | eval field = lower(mvindex(split(field, "."), 0)) removes just one dot, but what if there are 2+ dots in the domain?
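
If the goal is simply to strip every dot rather than keep only the first label, replace() with a regex handles any number of dots; a minimal sketch:

| eval field = lower(replace(field, "\.", ""))

replace() substitutes every match, so www.example.com becomes wwwexamplecom no matter how many dots the domain contains.
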
I have configured HTTP inputs by creating an HEC token on a heavy forwarder. I see duplicate events every time I test sending data via this HEC token. I have validated that the source does not have duplicate events. Even if I send a test event using a curl command, it appears twice:

curl -k https://<endpoint>:8088/services/collector/event -H "Authorization: Splunk <HEC token>" -d '{"event": "This is a test event"}'

Please help me find out why there are duplicate events and how I can fix this.
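
One hedged way to narrow this down is to check whether the two copies carry different index times or landed on different indexers; the index and sourcetype below are assumptions:

index=main sourcetype=httpevent "This is a test event"
| eval indexed_at = strftime(_indextime, "%F %T.%Q")
| stats count values(indexed_at) values(splunk_server) by _raw

If the copies differ in index time or indexer, the duplication is usually happening in the forwarding layer (for example, an outputs.conf on the heavy forwarder that both indexes locally and forwards) rather than in the HEC input itself.
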
Hi, I have a base search and post-process searches on a dashboard that need to be split by source, but splitting by source doesn't appear to work. The only things shared are the index and some fields, but depending on the source I need to evaluate the fields differently. For instance:

Base search: index=test_logs | fields A

Two post-process searches:
| search source=sourceA (evaluate field A a certain way because it's from source A)
| search source=sourceB (evaluate field A a different way as it's from source B)

The problem is that when I do this, nothing loads. The only way I've found to get this to work is to put the source in the base search, but then I wouldn't be able to do my evaluations properly.
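
Post-process searches only see fields the base search explicitly passes along, so a likely fix is to keep source in the base search's field list instead of filtering on it there; a sketch:

Base search: index=test_logs | fields source, A

Post-process: | search source=sourceA | eval ... (source-A logic)
Post-process: | search source=sourceB | eval ... (source-B logic)

Because | fields A drops source, the post-process | search source=... clauses match nothing, which would explain why nothing loads.
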
I know you can delete KV store data via the command line: https://dev.splunk.com/enterprise/docs/developapps/manageknowledge/kvstore/usetherestapitomanagekv/ Is it possible to delete from the Splunk search line? I tried the following, but it doesn't work:

| rest /servicesNS/nobody/search/storage/collections/data/my_collection

I also tried installing the Webtools TA app to be able to send external API requests, but it says I'm unauthorized:

| curl method=get user="" password="" uri=https://host:8089/servicesNS/nobody/search/storage/collections/data/my_collection

The Webtools TA app uses Python requests, and I'm able to get a 200 when I make the request from a Python script.
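
If a lookup definition exists for the collection, one hedged SPL-only approach is to rewrite the collection through outputlookup, keeping only the records that should survive; my_collection_lookup and the filter field are assumptions:

| inputlookup my_collection_lookup
| search NOT (field_to_match="value_to_delete")
| outputlookup my_collection_lookup

outputlookup against a KV store lookup replaces the stored records with the search results, so the filtered rows are effectively deleted. The | rest command only issues GET requests, which is why it cannot perform the DELETE the REST API expects.
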
I have added the following to my dashboard and I am still getting this error (A custom JavaScript error caused an issue loading your dashboard). What should be my next steps?

<dashboard version="1.1" script="myCustomJS.js">
or
<form version="1.1" script="myCustomJS.js">

Manage dashboards that need jQuery updates - Splunk Documentation
Is it possible to restrict a role to running a certain search, or to only being able to run saved searches? I.e., a user can only run index=index | stats count by field OR | savedsearch search1
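
As far as I know there is no capability that limits a role to saved searches only, but what a role's searches can match is constrained in authorize.conf; a sketch, with the role name assumed:

[role_restricted_user]
srchIndexesAllowed = index
srchIndexesDefault = index

This limits which indexes the role's ad-hoc searches can touch; pinning a role to one literal SPL string such as | stats count by field is not something the role system supports, as far as I know.
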
Hello, can anyone help me with the dashboard code for the Splunk App for Windows? When I looked it up on Splunkbase, I saw this app has been archived. Please share if anyone is still using it. Thanks,
I have logs like those shown below:

2022-03-09T13:22:45.345-01:00 [app_driver_group_stream_api-1] | INFO s.p.k.o.external.thread 345 - [applicationid=String, offset=100, CADM=String, IPOD=String, UniqueStringId=String, IMPT=000000-0000-000000-400-00000, applicationname=wwms-processor, Msgid=String, appId=app_group, EndToEndID=String, timestamp=17789552323] - app_thread [app_schema-677ghh89hhjjj-appThread-2] Processed 12 total count, run 0 quotations, and completed 0 total apps past the last update
2022-03-09T13:22:45.345-01:00 [app_driver_group_stream_api-1] | INFO s.p.k.o.external.thread 345 - [applicationid=String, offset=100, CADM=String, IPOD=String, UniqueStringId=String, IMPT=000000-0000-000000-400-00000, applicationname=wwms-processor, Msgid=String, app=app_group_payment, EndToEndID=String, timestamp=17789552323] - app_thread [app_schema-677ghh89hhjjj-appThread-2] Processed 12 total count, run 0 quotations, and completed 0 total apps past the last update
2022-03-10T12:10:45.345-01:00 [app_driver_group_stream_api-1] | INFO s.p.k.o.external.thread 345 - [applicationid=String, offset=100, CADM=String, IPOD=String, UniqueStringId=String, IMPT=000000-0000-000000-400-00000, applicationname=wwms-processor, Msgid=String, app=app_group_payment, EndToEndID=String, timestamp=17789552323] - app_thread [app_schema-677ghh89hhjjj-appThread-2] Processed 12 total count, run 0 quotations, and completed 0 total apps past the last update
2022-03-15T10:44:45.345-01:00 [app_driver_group_stream_api-1] | INFO s.p.k.o.external.thread 345 - [applicationid=String, offset=100, CADM=String, IPOD=String, UniqueStringId=String, IMPT=000000-0000-000000-400-00000, applicationname=wwms-processor, Msgid=String, app=app_group_payment, EndToEndID=String, timestamp=17789552323] - app_thread [app_schema-677ghh89hhjjj-appThread-2] Processed 12 total count, run 0 quotations, and completed 0 total apps past the last update

From the above logs I want to get the min, max, avg, p95, and p99 response_time by app. I am not sure how to calculate the response time from these logs by app.
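
The sample events carry no explicit response_time field, so one hedged reading is that response time means the gap between consecutive events for the same app; a sketch under that assumption, with the index, sourcetype, and app extraction as placeholders:

index=app_logs sourcetype=app_driver
| rex field=_raw "(?:appId|app)=(?<app>[^,\]]+)"
| sort 0 app _time
| streamstats current=f window=1 last(_time) as prev_time by app
| eval response_time = _time - prev_time
| stats min(response_time) max(response_time) avg(response_time) perc95(response_time) perc99(response_time) by app

If the real definition of response time lives in another field (for example a duration logged elsewhere), only the rex line would change; the stats line stays the same.
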
On a Splunk custom REST API endpoint, I need to get the body of the HTTP POST request in the Python script handling this endpoint. The full rest.py handler script:

# rest.py
from server import serverless_request
from pathlib import Path
from splunk.persistconn.application import PersistentServerConnectionApplication
import json


class App(PersistentServerConnectionApplication):
    def __init__(self, _command_line, _command_arg):
        log('init connection', _command_line, _command_arg)
        super(PersistentServerConnectionApplication, self).__init__()

    # Handle a synchronous request from splunkd.
    def handle(self, in_string):
        """
        Called for a simple synchronous request.
        in_string: request data passed in
        @rtype: string or dict
        @return: String to return in response. If a dict was passed in,
                 it will automatically be JSON encoded before being returned.
        """
        log(self)
        log(dir(self))
        request = json.loads(in_string.decode())
        log("request info", request)
        log('now processing request, hopefully it would be executed by flask')
        path_info = request['path_info'] if "path_info" in request else '/'
        method = request['method']
        log("request", request)
        log('sending flask', {"path_info": path_info, method: "method"})
        response = serverless_request(path_info, method)
        payload = response.data
        if type(payload) is bytes:
            payload = payload.decode()
        log('return payload from flask', payload)
        return {'payload': payload, 'status': 200}

    def handleStream(self, handle, in_string):
        """
        For future use
        """
        raise NotImplementedError(
            "PersistentServerConnectionApplication.handleStream")

    def done(self):
        """
        Virtual method which can be optionally overridden to receive a
        callback after the request completes.
        """
        pass

When I send a POST request to the custom endpoint with the body

{"isTimeSeriesCollection":true,"collectionName":"333","timeField":"_time","metaField":""}

I would expect the only argument, in_string, passed to App.handle to contain information about the request body, but the logs show that the value does not contain any of it:

request info {'output_mode': 'xml', 'output_mode_explicit': False, 'server': {'rest_uri': 'https://127.0.0.1:8089', 'hostname': 'ELIAVS-PC', 'servername': 'Eliavs-PC', 'guid': 'CD4B2374-0104-42C8-A069-F0115A5035DE'}, 'restmap': {'name': 'script:backend', 'conf': {'handler': 'application.App', 'match': '/backend', 'script': 'rest.py', 'scripttype': 'persist'}}, 'path_info': 'new_collection/tsdb', 'query': [], 'connection': {'src_ip': '127.0.0.1', 'ssl': False, 'listening_port': 12211}, 'session': {'user': 'eliav2', 'authtoken': 'ICvMPKZyW3OiN1FV5WE^3^YGOdqGvkpRax7DNB_C6pzoWS53mhj9yEYJH_UwrsJZEK4MH3gUAQh_DNiv0BNOsf4JkVJcjBh5yL1ni1n7LURwQ8a8c6vGvB__qfuTCcs_UIanwMQVmF'}, 'rest_path': '/backend/new_collection/tsdb', 'lang': 'en-US', 'method': 'POST', 'ns': {'app': 'darkeagle'}, 'form': []}

So how can I access the body of the JSON request? I followed https://dev.splunk.com/enterprise/docs/devtools/customrestendpoints/customrestscript and various other sources to get to this point; the docs are lacking basic information.
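
For persistent handlers (scripttype = persist), splunkd does not include the POST body in the request JSON unless restmap.conf asks for it via passPayload; a sketch, reusing the stanza details visible in the 'restmap' block of the log output above:

# restmap.conf
[script:backend]
match = /backend
script = rest.py
scripttype = persist
handler = application.App
passPayload = true

With passPayload = true, the dict parsed from in_string should gain a payload key holding the raw body string, e.g. body = json.loads(request["payload"]) inside handle(); passPayload = base64 is the variant for binary bodies.
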
Hello, I am trying to send data to the indexer with the HTTP Event Collector. Here is the token I have generated. Now I try to send a simple message like "Manual test from Splunk UF" to Splunk with the curl command below, but it doesn't work:

curl -k -u "x: b701a03d-7f4d-4581-8191-936cbf665da0" "http://XXXXXX:8088/services/collector/event" -d '{"source": "hec_users_test", "event": "Manual test from Splunk UF"}'

I get the message "couldn't connect to host". What is the problem, please?
Hi friends, I'm importing new services through ad-hoc search SPL. After completing all the steps, when I add my services (parent & child) to an existing parent service, I don't get the pop-up window below that enables the services to start running the KPI searches and map entities. Because of this, no entities are mapped to my services. Could you please assist with this? Thanks in advance.
Hello, I am trying to create a new clustered search head and was planning to create it using the Docker images. I have set the management URL to our index manager node, but I am receiving this error:

WARNING: Server Certificate Hostname Validation is disabled. Please see server.conf/[sslConfig]/cliVerifyServerName for details.
Login failed

Is this planned architecture possible? I cannot see anything in the documentation that follows this path. This is the original move; later down the line the indexers will also be moved to containers, but that is out of scope currently.
Hello Splunk Ninjas! I need your assistance with designing my regex expression. I need to extract the value of Message from this sample log line:

2022-09-23T13:20:25.765+01:00 [29] WARN Core.ErrorResponse - {} - Error message being sent to user with Http Status code: BadRequest: {"Details":{"field1":value,"field2":"value2"},"Message":"This is the message.","UserMessage":null,"Code":86,"Explanation":null,"Resolution":null,"Category":4}

I am interested in extracting the values of field1 and field2 (inside Details), Message, and Code. Any help much appreciated! Thanks again.
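
One hedged approach is to isolate the JSON payload with rex and let spath parse it, instead of writing a regex per field; a sketch:

... | rex field=_raw "Http Status code: \w+: (?<payload>\{.*\})"
| spath input=payload path=Details.field1 output=field1
| spath input=payload path=Details.field2 output=field2
| spath input=payload path=Message output=Message
| spath input=payload path=Code output=Code

One caveat: "field1":value in the sample is unquoted and therefore not strictly valid JSON, so spath may skip that field; if the real logs look like that, a dedicated rex for field1 is the fallback.
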
We recently upgraded to Splunk 9.0.0 on our platform, and the Splunk Add-on for Active Directory stopped working. We connect to our Active Directory instance using SSL, and we're now getting errors like this one:

2022-10-20 08:05:03,580, Level=ERROR, Pid=5668, File=search_command.py, Line=390, Abnormal exit: (LDAPSocketOpenError('socket ssl wrapping error: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1106)'),)

What needs to be changed to make this work with Splunk 9? (We still have an instance running 8.2.6, and the add-on works perfectly fine there with the exact same configuration.) (We're using the most recent version, 3.0.5, of SA-ldapsearch.) Thank you.
So I'm trying to find outside IP addresses hitting our firewall and see whether they have been blocked or not. Is there a search I can run to find them? I have this one I'm trying:

index="firewall" src_ip=( IP address here )

and I get nothing. Any help will be appreciated.
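
A hedged starting point, assuming the firewall sourcetype already extracts src_ip and an action field (field names vary by vendor): exclude the private ranges and count by action:

index="firewall" NOT (src_ip=10.0.0.0/8 OR src_ip=172.16.0.0/12 OR src_ip=192.168.0.0/16)
| stats count by src_ip, action

The search command matches CIDR notation directly against IP-valued fields, so the NOT clause keeps only outside addresses; action typically carries values like allowed or blocked.
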
500 internal server error
Oops. The server encountered an unexpected condition which prevented it from fulfilling the request. Click here to return to Splunk homepage.

I have installed a Splunk instance on Ubuntu (WSL) and had some issues while starting Splunk, so I followed the steps suggested by @naisanza (https://community.splunk.com/t5/Getting-Data-In/Why-am-I-getting-quot-homePath-opt-splunk-var-lib-splunk-audit/m-p/220231/highlight/false#M43245); this was fixed. Splunk Web was still not up and running, so I followed the steps suggested by @fearofcasanova in this post (https://community.splunk.com/t5/Installation/splunk-installation-is-failing-to-generate-cert-pem/m-p/405406/highlight/false#M5445); that issue was fixed too. Then I tried to log in to the Splunk Web link "https://<ip.o.o.o>:8000", and when I entered the credentials it gave me the 500 error above. Can somebody help me fix this issue? Thanks in advance.
Hello, I'm trying to set up SNMP monitoring for my Zscaler NSS on Splunk. Does anyone know where I can find documentation, or does anyone have ideas on how I can do this? Thanks!
Hi all, I need help plotting backlog data on a timechart. We have a set of tickets in backlog on specific dates with workgroups, and we want to show them in a timechart. Here is the situation.

For example: ticket123 is in backlog on 1st Oct with group A; the same ticket123 moved to group B on 3rd Oct and stayed with that group until 5th Oct; finally, the ticket moved to group C on the 6th. Below is the table as it shows in Splunk:

Date     Ticket   Workgroup   Status
01-Oct   123      A           Pending
03-Oct   123      B           Pending
06-Oct   123      C           Pending

From the above table, a timechart shows ticket123 in backlog only on the 1st, 3rd, and 6th. However, the ticket was actually:
in backlog on the 1st and 2nd in group A
in backlog on the 3rd, 4th, and 5th in group B
in backlog on the 6th in group C

How do we get the 2nd, 4th, and 5th into the table, so the timechart shows the specific dates the ticket was in the backlog?
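
A hedged sketch of one way to fill the gap days, assuming the events carry fields named Ticket and Workgroup (the index name is a placeholder): let timechart create a bucket for every day, carry the last known workgroup forward with filldown, then count tickets per group:

index=ticket_backlog status=Pending
| timechart span=1d latest(Workgroup) by Ticket limit=0
| filldown
| untable _time Ticket Workgroup
| timechart span=1d dc(Ticket) by Workgroup

One caveat: filldown keeps carrying the last group forward past 06-Oct as well, so bound the time range (or log an end-state event when a ticket leaves the backlog) to avoid overcounting.
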