All Posts

Thank you for the advice. We will verify it with the customer as soon as we can and will respond.
Hi, wondering if there is a document or guidance on how to estimate the volume of data ingested into Splunk by pulling data from DNA Center using the Cisco DNA Center Add-on for Splunk. Cheers, Ahmed.
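A hedged SPL sketch for measuring the actual ingest volume once the add-on is running; the st=cisco:dnac* sourcetype filter is an assumption about the add-on's naming, so swap in the sourcetypes or index you actually configured:

index=_internal source=*license_usage.log type=Usage st=cisco:dnac*
| timechart span=1d sum(b) AS bytes_per_day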
Note: I have an active token that looks similar to this: c0865140-53b4-4b53-a2d1-9571d39a5de8
My HTTP request has the following header: Authorization: Splunk c0865140-53b4-4b53-a2d1-9571d39a5de8
My Splunk Cloud settings show the HEC configuration to have SSL enabled and port 8088 (though these settings are grayed out and cannot be adjusted).
Hi Ismo, I am working on developing an app that updates the values in the inputs.conf file from the setup.xml configuration. Additionally, the app retrieves values from the inputs.conf file and loads them into Splunk.
Hi all, I just started a trial for Splunk Cloud; my URL looks similar to this: https://prd-p-s8qvw.splunkcloud.com/en-GB/app/launcher/home

I want to get data in with the HEC. I have read all the following documentation: https://docs.splunk.com/Documentation/SplunkCloud/9.3.2408/Data/UsetheHTTPEventCollector#Configure_HTTP_Event_Collector_on_Splunk_Cloud_Platform

According to the documentation, my URL should look like this: https://http-inputs-prd-p-s8qvw.splunkcloud.com:8088/services/collector/event

However, this does not work. It seems the DNS cannot be resolved; my NodeJS gives "ENOTFOUND". I have tried different options (HTTP / HTTPS, host, port etc.):

HTTP:
http://http-inputs-prd-p-s8qvw.splunkcloud.com:8088/services/collector/event

HTTPS:
https://http-inputs-prd-p-s8qvw.splunkcloud.com:8088/services/collector/event

GCP:
http://http-inputs.prd-p-s8qvw.splunkcloud.com:8088/services/collector/event
https://http-inputs.prd-p-s8qvw.splunkcloud.com:8088/services/collector/event

host:
http://http-inputs-prd-p-s8qvw.splunkcloud.com:8088/services/collector/event
http://http-inputs-p-s8qvw.splunkcloud.com:8088/services/collector/event
http://http-inputs-s8qvw.splunkcloud.com:8088/services/collector/event
http://http-inputs.s8qvw.splunkcloud.com:8088/services/collector/event
https://http-inputs-prd-p-s8qvw.splunkcloud.com:8088/services/collector/event
https://http-inputs-p-s8qvw.splunkcloud.com:8088/services/collector/event
https://http-inputs-s8qvw.splunkcloud.com:8088/services/collector/event
https://http-inputs.s8qvw.splunkcloud.com:8088/services/collector/event

port:
http://http-inputs-prd-p-s8qvw.splunkcloud.com:443/services/collector/event
https://http-inputs-prd-p-s8qvw.splunkcloud.com:443/services/collector/event
http://http-inputs.prd-p-s8qvw.splunkcloud.com:443/services/collector/event
https://http-inputs.prd-p-s8qvw.splunkcloud.com:443/services/collector/event
http://http-inputs-prd-p-s8qvw.splunkcloud.com:443/services/collector/event
http://http-inputs-p-s8qvw.splunkcloud.com:443/services/collector/event
http://http-inputs-s8qvw.splunkcloud.com:443/services/collector/event
http://http-inputs.s8qvw.splunkcloud.com:443/services/collector/event
https://http-inputs-prd-p-s8qvw.splunkcloud.com:443/services/collector/event
https://http-inputs-p-s8qvw.splunkcloud.com:443/services/collector/event
https://http-inputs-s8qvw.splunkcloud.com:443/services/collector/event
https://http-inputs.s8qvw.splunkcloud.com:443/services/collector/event

No prefix:
http://prd-p-s8qvw.splunkcloud.com:8088/services/collector/event
https://prd-p-s8qvw.splunkcloud.com:8088/services/collector/event
http://prd-p-s8qvw.splunkcloud.com:8088/services/collector/event
https://prd-p-s8qvw.splunkcloud.com:8088/services/collector/event
http://prd-p-s8qvw.splunkcloud.com:8088/services/collector/event
http://p-s8qvw.splunkcloud.com:8088/services/collector/event
http://s8qvw.splunkcloud.com:8088/services/collector/event
http://s8qvw.splunkcloud.com:8088/services/collector/event
https://prd-p-s8qvw.splunkcloud.com:8088/services/collector/event
https://p-s8qvw.splunkcloud.com:8088/services/collector/event
https://s8qvw.splunkcloud.com:8088/services/collector/event
https://s8qvw.splunkcloud.com:8088/services/collector/event
http://prd-p-s8qvw.splunkcloud.com:443/services/collector/event
https://prd-p-s8qvw.splunkcloud.com:443/services/collector/event
http://prd-p-s8qvw.splunkcloud.com:443/services/collector/event
https://prd-p-s8qvw.splunkcloud.com:443/services/collector/event
http://prd-p-s8qvw.splunkcloud.com:443/services/collector/event
http://p-s8qvw.splunkcloud.com:443/services/collector/event
http://s8qvw.splunkcloud.com:443/services/collector/event
http://s8qvw.splunkcloud.com:443/services/collector/event
https://prd-p-s8qvw.splunkcloud.com:443/services/collector/event
https://p-s8qvw.splunkcloud.com:443/services/collector/event
https://hs8qvw.splunkcloud.com:443/services/collector/event
https://s8qvw.splunkcloud.com:443/services/collector/event

None of these work. All give one of the following errors:
Error: getaddrinfo ENOTFOUND http-inputs-prd-p-s8qvw.splunkcloud.com
Error: read ECONNRESET
HTTP 400 Sent HTTP to port 443
HTTP 404 Not Found

Can anybody help me get this working?

Regards,

Lawrence
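For comparison, a minimal Python sketch of the documented request shape, assuming a Splunk Cloud Platform stack (the http-inputs- prefix with port 443; trial stacks may use a different scheme), with <stack> and <token> as placeholders:

import requests  # third-party; pip install requests

# Placeholders: replace <stack> and <token> with your own values.
# Assumption: Splunk Cloud Platform HEC listens on port 443 behind the
# "http-inputs-" prefix; trial stacks may differ.
url = "https://http-inputs-<stack>.splunkcloud.com:443/services/collector/event"
headers = {"Authorization": "Splunk <token>"}
payload = {"event": "hello from HEC", "sourcetype": "manual"}

resp = requests.post(url, headers=headers, json=payload, timeout=10)
print(resp.status_code, resp.text)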
[md_time]
SOURCE_KEY = _time
REGEX = (.*)
FORMAT = _ts=$1
DEST_KEY = time_temp

[md_subsecond]
SOURCE_KEY = _meta
REGEX = _subsecond::(\.\d+)
FORMAT = $1
DEST_KEY = subsecond_temp

[md_fix_subsecond]
INGEST_EVAL = _raw=if(isnull(subsecond_temp), time_temp + " " + _raw, time_temp + subsecond_temp + " " + _raw)

[md_time_default]
SOURCE_KEY = _time
REGEX = (.*)
FORMAT = _ts=$1 $0
DEST_KEY = _raw

The problem seems to be somewhere in md_time, md_subsecond or md_fix_subsecond, because if I use md_time_default, it works (though without subseconds), and if I enable these three instead of md_time_default, then I get no output: the packets emitted by Splunk seem to be empty, without a payload.
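If the temporary DEST_KEYs turn out to be the culprit, one hedged way to isolate the problem is to drop them entirely and build the prefix in a single INGEST_EVAL; md_time_eval is a made-up stanza name, and this version deliberately ignores subseconds just to narrow things down:

[md_time_eval]
# Hypothetical replacement stanza: prefixes _raw with the epoch timestamp only
INGEST_EVAL = _raw="_ts=".tostring(_time)." "._raw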
Old post, but I managed this without JavaScript, with a CSS one-liner. Set a table ID like my_table_with_default_sort. In CSS, apply a fixed or dynamic data-sort-key to the column you want to sort. In this example, the token used matches the sort used in the query.

#my_table_with_default_sort table th[data-sort-key="$tok_sort_column$"] a i::before {
  content: "↥"; /* for ASC; for DESC use "↧" */
  display: inline-block;
  font-size: 12px;
  margin-right: 5px;
}
Hi, I cannot recall that there is this kind of API, at least not one supported by Splunk. Of course you can do it by yourself if you really want, but probably there is a better way to do what you are aiming for? So what is the issue which you are trying to solve? r. Ismo
Hi Splunk Community, I am looking to edit the inputs.conf file programmatically via the Splunk API. Specifically, I want to know:
- Is there an API endpoint available to update the inputs.conf file?
- If yes, what would be the correct method to achieve this (e.g., required endpoint, parameters, or payload)?
I understand that inputs.conf primarily configures data inputs, and certain operations might have to be performed via the REST API or directly through configuration file updates. Any documentation or examples would be appreciated regarding:
- Supported Splunk API endpoints for modifying input configurations.
- Best practices for editing inputs.conf programmatically.
- Any necessary permissions or prerequisites to perform such updates.
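For what it's worth, a minimal Python sketch of one commonly used route, assuming the generic /servicesNS/.../configs/conf-inputs endpoint; the host, credentials, app name and monitor path below are all placeholders:

import requests

BASE = "https://splunk.example.com:8089"   # management port, not the web port
AUTH = ("admin", "changeme")               # placeholder credentials
APP = "my_app"                             # placeholder app to own the stanza

# Create a new stanza in the app's local/inputs.conf
requests.post(
    f"{BASE}/servicesNS/nobody/{APP}/configs/conf-inputs",
    auth=AUTH,
    data={"name": "monitor:///var/log/example.log", "index": "main"},
    verify=False,  # lab only; verify certificates in production
)

# Update an existing stanza by POSTing to it (stanza name URL-encoded)
stanza = requests.utils.quote("monitor:///var/log/example.log", safe="")
requests.post(
    f"{BASE}/servicesNS/nobody/{APP}/configs/conf-inputs/{stanza}",
    auth=AUTH,
    data={"sourcetype": "example"},
    verify=False,
)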
Then you just need something which enables this log collection for this individual system. This depends on how you are managing UF configurations. If you have a DS, then just add a new server class for this collection and add this system into it. If you are using something else, then do it the same way there. Of course you must have a separate UF app which is configured for this collection: just an inputs.conf file with a suitable configuration. Then, when the analysis is done, just remove that UF app from the server with the DS or your other config management system. Probably you should have a separate index for these logs with a short retention period, to get rid of those logs as they are not needed inside Splunk?
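A hedged sketch of what that UF app's inputs.conf could look like, assuming Sysmon writes to its usual Operational channel; the app name and the sysmon_shortterm index are hypothetical:

# Hypothetical app: TA-sysmon-ondemand/local/inputs.conf
[WinEventLog://Microsoft-Windows-Sysmon/Operational]
disabled = 0
index = sysmon_shortterm
renderXml = true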
Hi @rahulkumar , I worked on a project in which we were receiving logs concentrated and exported in Logstash format. The problem is that you cannot use the normal add-ons, because the format is different. You have two choices: modify all the parsing rules of the add-ons you use, or convert your Logstash logs back to the original format; it isn't a simple job, and it's long! In a few words, you have to extract the metadata from the JSON using INGEST_EVAL and then put the original log field back into _raw. For more info see https://conf.splunk.com/files/2020/slides/PLA1154C.pdf Ciao. Giuseppe
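A hedged sketch of that second approach, assuming the Logstash wrapper is JSON and the original line sits in a field called message (a placeholder; json_extract needs a reasonably recent Splunk version):

# props.conf
[logstash:json]
TRANSFORMS-restore_raw = restore_original_raw

# transforms.conf
[restore_original_raw]
INGEST_EVAL = _raw=json_extract(_raw, "message")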
I have been using the Splunk API from within a Python script to retrieve information about saved searches using a call to the endpoint:

hxxps://<splunk_server>/-/-/saved/searches/<name_of_saved_search>?output_mode=json

The <name_of_saved_search> has been URL-encoded to deal with some punctuation (including '/'), using the Python function:

name_of_saved_search = urllib.parse.quote(search_name, safe='')

It has been working so far, but recently I encountered an issue when the name of the saved search contains square brackets (e.g. "[123] My Search"). Even after URL encoding, Splunk's API just does not accept the API call at the endpoint:

hxxps://<splunk_server>/-/-/saved/searches/%5B123%5D%20My%20Search?output_mode=json

and returns a response with an HTTP status code of 404 (Not Found). I am not sure what else I should be doing to handle the square brackets in the name of the saved search to make the API call work.
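One hedged workaround sketch, assuming the collection endpoint's search filter accepts an exact name match, which avoids putting the bracketed name in the URL path at all; host and credentials are placeholders:

import requests

name = "[123] My Search"
base = "https://splunk.example.com:8089/servicesNS/-/-/saved/searches"

# Filter the collection by name instead of embedding the name in the path
params = {"output_mode": "json", "search": f'name="{name}"'}
resp = requests.get(base, params=params, auth=("admin", "changeme"), verify=False)
print(resp.status_code)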
At the moment, the servers are monitored in Splunk, but only WinEventLog:Security logs are being piped. I want to increase the monitoring capability to include Sysmon and PowerShell logging, but I do not want Sysmon logs from ALL servers to be indexed and searchable, unless a security event warrants a particular server having its Sysmon indexed. I.e.:
1. All servers have Sysmon enabled.
2. Splunk's security analytics and detection queries run in the background to monitor the Sysmon data; if there are any hits, an alert is created in Splunk and the alert log is indexed.
3. The alert is sent to a case management system.
4. At the request of the security analyst, he can ask to view the Sysmon of that particular server, and that server's Sysmon will be indexed in Splunk for the past 5 days.
5. The analyst will not be able to view Sysmon in Splunk for the rest of the servers that are not indexed, as they are unrelated to the security event. He can only index the Sysmon of a particular server IF he triggers that action from the case management system.
Hi @desmando , good for you, see you next time! Ciao and happy splunking. Giuseppe P.S.: Karma Points are appreciated by all the contributors.
Not exactly. If splunkd is running, then it generates events into splunkd.log, but that's not a 100% indicator that it is also forwarding. But you could look for the events in that file which tell this. Those are "connected/forwarding server 1.2.3.4:9997" or something similar (I cannot check the correct lines now). Is it possible that you look for that information from the server side? Just search for those in the _internal logs, or even from the MC?
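A hedged SPL sketch for that server-side check; my_uf is a placeholder for the forwarder's host name, and the component/message filter is from memory, so adjust it to what you actually see in splunkd.log:

index=_internal host=my_uf sourcetype=splunkd ("Connected to idx" OR component=TcpOutputProc)
| stats latest(_time) AS last_connect BY host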
I encountered an issue while trying to integrate a Python script into my Splunk dashboard to export Zabbix logs to a Splunk index. When I click the button on the dashboard, the following error is logged in splunkd.log:

01-16-2025 12:01:24.958 +0530 ERROR ScriptRunner [40857 TcpChannelThread] - stderr from '/opt/splunk/bin/python3.9 /opt/splunk/bin/runScript.py zabbix_handler.Zabbix_handler': Traceback (most recent call last):
01-16-2025 12:01:24.958 +0530 ERROR ScriptRunner [40857 TcpChannelThread] - stderr from '/opt/splunk/bin/python3.9 /opt/splunk/bin/runScript.py zabbix_handler.Zabbix_handler': File "/opt/splunk/bin/runScript.py", line 72, in <module>
01-16-2025 12:01:24.958 +0530 ERROR ScriptRunner [40857 TcpChannelThread] - stderr from '/opt/splunk/bin/python3.9 /opt/splunk/bin/runScript.py zabbix_handler.Zabbix_handler': os.chdir(scriptDir)
01-16-2025 12:01:24.958 +0530 ERROR ScriptRunner [40857 TcpChannelThread] - stderr from '/opt/splunk/bin/python3.9 /opt/splunk/bin/runScript.py zabbix_handler.Zabbix_handler': FileNotFoundError: [Errno 2] No such file or directory: ''

Setup:
Python script:
Location: /opt/splunk/etc/apps/search/bin/zabbix_handler.py
Function: Export Zabbix logs to a Splunk index using the HEC endpoint.
JavaScript code:
Location: /opt/splunk/etc/apps/search/appserver/static/
Function: Adds a button to the dashboard, which triggers the Python script.

Observed behavior: When the button is clicked, the error indicates that the scriptDir variable in runScript.py is empty, causing the os.chdir(scriptDir) call to fail.

Questions:
1. Why might scriptDir be empty when runScript.py is executed?
2. Is there a specific configuration required in the Splunk dashboard or app structure to ensure the ScriptPath is correctly passed to the ScriptRunner?
3. How can I debug or fix this issue to ensure the Python script is executed properly?

Any help or guidance would be greatly appreciated. Thank you!
It seems that I understood your question a little differently. So you have a separate system (not Splunk) which is currently monitoring those source systems. When it finds something, it creates an alert, and after that log collection is started from the source system? How can you be sure that you will get all the old logs needed for the analysis if you start collection only after the event has happened? Some related activities could have happened long before the event which created the alert!
There are a couple of apps which can manage e.g. base64 encoding. Here is one which I have used: https://splunkbase.splunk.com/app/5565 When you have issues with Windows character sets, you must add the character set information into the UF's props.conf. There are some examples in the community, and this is also described in the docs.
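A minimal props.conf sketch for the UF, where my:windows:log and CP1252 are placeholders for your actual sourcetype and encoding:

# props.conf on the UF
[my:windows:log]
CHARSET = CP1252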
The reason for the above is that I have hundreds of servers, and if every server has its Sysmon log indexed, it will take up a lot of bandwidth, storage space and cost. Hence I am looking for a possible solution where Splunk security detection analytics can run across all servers and trigger an alert for any positive hits, and only at the request of the security analyst will the Sysmon of a particular endpoint of interest be indexed, for the past 5 days for example.
Theoretically yes; practically it's very, very hard to do, and it needs a lot of writing of dashboards, reports etc. I really don't suggest it. What is the issue which you are trying to solve with this approach?