All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hello, is Splunk 9.0 compatible with Oracle Linux?
I recently installed the MISP add-on app from Splunkbase to integrate our MISP environment's feed into Splunk using the URL and the Auth API. I was able to configure the add-on with the required details. However, after the configuration, I'm getting the following error: (Restricting results of the "rest" operator to the local instance because you do not have the "dispatch_rest_to_indexers" capability). Furthermore, looking at the role capabilities under the Splunk UI settings, I don't see a "dispatch_rest_to_indexers" capability either. Could someone please assist?
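That message is usually informational: it just means the add-on's | rest search ran only on the search head. If the capability doesn't show up in the role editor, it can still be granted directly in authorize.conf. A minimal sketch, where the role name misp_user is a placeholder for whatever role runs the add-on's searches:

[role_misp_user]
importRoles = user
dispatch_rest_to_indexers = enabled

Splunk generally needs a restart (or a configuration refresh) before a change like this takes effect.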
We have Splunk installed on a Linux machine under /opt/splunk. We have created an add-on with Python code that is saved as modalert_test_webhook_helper.py under "/opt/splunk/etc/apps/splunk_app_addon-builder/local/validation/TA-splunk-webhook-final/bin/ta_splunk_webhook_final".

We want to define a parameter in a config file whose value is a list of REST API endpoints, and read it from the Python code. If the REST API endpoint the user enters when adding the action to an alert is present in the list from the config file, only then should the process_data action proceed; otherwise a message should be displayed saying the REST API endpoint is not present.

So we would like to know: in which .conf file should we define the parameter, what changes are needed in the Python code, and which Python file should be edited, given that there are many Python files under the /bin directory? Also, after making changes to any .conf or Python files and restarting, the changes are not being saved. How can we make them persist after restarting Splunk? PFA screenshots of the conf and Python files. Kindly help with any solution.
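A minimal sketch of one way to do this; every file, stanza, and parameter name below is an assumption for illustration, not Add-on Builder output. One likely reason edits vanish is that files under splunk_app_addon-builder/local/validation are regenerated by the builder, so persistent changes belong in the exported TA's own local directory. The allowlist could live in a custom .conf there:

local/ta_webhook_settings.conf (hypothetical file, stanza, and key names):

[endpoint_allowlist]
allowed_endpoints = https://hooks.example.com/a, https://hooks.example.com/b

And a sketch of reading it from the alert-action helper:

# Sketch for modalert_test_webhook_helper.py -- names and paths are assumptions
import configparser
import os

def load_allowed_endpoints():
    # Read the custom conf shipped in the TA's local directory
    conf_path = os.path.join(
        os.environ["SPLUNK_HOME"], "etc", "apps",
        "TA-splunk-webhook-final", "local", "ta_webhook_settings.conf")
    parser = configparser.ConfigParser()
    parser.read(conf_path)
    raw = parser.get("endpoint_allowlist", "allowed_endpoints", fallback="")
    return {e.strip() for e in raw.split(",") if e.strip()}

def process_event(helper, *args, **kwargs):
    # "rest_api_endpoint" is a hypothetical alert-action parameter name
    endpoint = helper.get_param("rest_api_endpoint")
    if endpoint in load_allowed_endpoints():
        helper.log_info("Endpoint is in the allowlist, running process_data")
        # ... existing process_data logic here ...
    else:
        helper.log_error("rest api endpoint is not present in the allowlist")
    return 0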
Hi, I appreciate that there are numerous questions on here about similar problems but, after reading quite a few of them, nothing seems to quite fit my scenario/issue. I am trying to extract a field from an event and call it 'action'. The entry in props.conf looks like:

EXTRACT-pam_action = (Action\: (?P<action>\[[^:\]]+]) )

I know that the extraction is working, because there is a field alias later in props.conf:

FIELDALIAS-aob_gen_syslog_alias_32 = action AS change_type

When I run a basic search on the index & sourcetype, the field 'action' does not appear in the 'Interesting Fields' but the 'change_type' alias does appear! The regex is fine, as I can create the 'action' field OK if I add the rex to the search. I have also added the exact same regex to props.conf but called the field 'action1', and that field is displayed OK. Another test I tried was to create a field alias of the action1 field called 'action':

FIELDALIAS-aob_gen_syslog_alias_30 = action1 AS action
FIELDALIAS-aob_gen_syslog_alias_32 = action1 AS change_type

'change_type' is visible but, again, 'action' is not. Finally, my search "index=my_index action=*" produces 0 results, whereas "index=my_index change_type=*" produces accurate output. I have looked through the props and transforms configs across my search head and can't see anything that might be 'removing' my field extraction, but I guess my question is: how can I debug the creation (or not) of a field name? I have a deep suspicion that it is something to do with one of the Windows TA apps that we have installed, but am struggling to locate the offending configuration. Many thanks for any help. Mark
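One way to debug where a search-time field disappears is btool, which lists every stanza contributing to a sourcetype together with the app it comes from, so a conflicting definition in another app shows up. A sketch, with the sourcetype name as a placeholder:

$SPLUNK_HOME/bin/splunk btool props list your_sourcetype --debug | grep -i action
$SPLUNK_HOME/bin/splunk btool fields list --debug | grep -iw action

Worth knowing here: calculated fields (EVAL-*) are applied after extractions and aliases, so an EVAL-action matching the same sourcetype in any app (the Windows TAs commonly define an action field) would silently overwrite the extracted value.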
Hi, is there a way of bulk enabling alerts in Splunk Enterprise? Thanks, Joe
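One common approach is the REST API: each saved search exposes an enable action, so a loop over alert names can bulk-enable them. A sketch with placeholder host, credentials, app, and alert name (the search name must be URL-encoded):

curl -k -u admin:changeme -X POST "https://localhost:8089/servicesNS/nobody/search/saved/searches/My%20Alert/enable"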
Hi, I would like to remove every occurrence of a specific pattern from my _raw events. Specifically, in this case I want to delete these HTML tags: <b>, </b>, <br>

For example, I have this raw event:
<b>This</b> is an <b>example</b><br>of raw<br>event
and I would like to transform it into:
This is an exampleof rawevent

I tried to create this transforms.conf:

[remove_html_tags]
REGEX = <\/?br?>
FORMAT =
DEST_KEY = _raw

and this props.conf:

[_sourcetype_]
TRANSFORMS-html_tags = remove_html_tags

but it doesn't work.

I also thought I could change the transforms.conf like this:

[remove_html_tags]
REGEX = (.*)<\/?br?>(.*)
FORMAT = $1$2
DEST_KEY = _raw

but it stops after just one substitution, and the REPEAT_MATCH setting is not suitable because the docs say: NOTE: This setting is only valid for index-time field extractions. This setting is ignored if DEST_KEY is _raw. And I must set DEST_KEY = _raw.

Can you help me? Thank you in advance.
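For this use case, a sketch of an alternative that sidesteps the DEST_KEY = _raw limitation entirely: SEDCMD in props.conf rewrites _raw at index time and honors sed's global flag, so every tag is removed in a single pass. Using the placeholder sourcetype stanza from above:

[_sourcetype_]
SEDCMD-remove_html_tags = s/<\/?br?>//g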
Hello splunkers! Has anyone had experience with getting data into Splunk from PAM (Privileged Access Management) systems? I want to integrate Splunk with Fudo PAM. Getting logs from Fudo to Splunk is not a problem at all; it's easily done over syslog. However, I don't know how to parse these logs. The syslog sourcetype doesn't properly parse the events: it misses a lot of useful information such as users, processes, actions taken, and accounts, basically almost everything except the IP of the node and the timestamp of the event. Does anyone know if there is a good add-on for parsing logs from Fudo PAM, or any other good way to parse its logs? Thanks for taking the time to read and reply to my post.
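In case no dedicated add-on exists, a custom sourcetype with search-time extractions is the usual fallback for syslog sources. A rough sketch, assuming the Fudo events carry key=value pairs; the stanza name and regex below are illustrative, not taken from Fudo documentation:

[fudo:pam]
KV_MODE = auto
EXTRACT-fudo_user = \buser=(?<user>\S+)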
Hi, I'm curious: can Splunk automatically turn off the screen or start a screen saver when you log out of the Splunk console or when your session expires? Is it possible to implement this functionality without Phantom, e.g. using a bash script or PowerShell?
package org.example;

import com.splunk.HttpService;
import com.splunk.SSLSecurityProtocol;
import com.splunk.Service;
import com.splunk.ServiceArgs;

public class ActualSplunk {
    public static void main(String[] args) {
        // Create ServiceArgs object with connection parameters
        ServiceArgs loginArgs = new ServiceArgs();
        loginArgs.setUsername("providedvalidusername");
        loginArgs.setPassword("providedvalidpassword");
        loginArgs.setHost("hostname");
        loginArgs.setPort(8089);
        HttpService.setSslSecurityProtocol(SSLSecurityProtocol.TLSv1_2);

        // Connect to Splunk
        Service service = Service.connect(loginArgs);

        // Check if connection is successful
        if (service != null) {
            System.out.println("Connected to Splunk!");
            // Perform operations with the 'service' object as needed
        } else {
            System.out.println("Failed to connect to Splunk.");
        }

        // Close the connection when done
        if (service != null) {
            service.logout(); // Logout from the service
            // service.close(); // Close the service connection
        }
    }
}

When I run the above code to connect to my local Splunk, it works fine with my local Splunk credentials. But when I try the same code on my VM with the actual Splunk Cloud host, username, and password to connect to Splunk and get the logs, it throws an exception: "java.lang.RuntimeException: An established connection was aborted by your host machine".
Hi team, I encountered a problem when retrieving data from rotated log files: duplicate events. For example, an event in file test.log.1 has already been retrieved; when it rotates to test.log.2, Splunk retrieves it again. How do I configure Splunk to only retrieve the latest events and not events that have been rotated to another file?

=====
Log4j app information:
log4j.appender.file.File=test.log
log4j.appender.file.MaxFileSize=10000KB
log4j.appender.file.MaxBackupIndex=99
=====
Splunk inputs.conf information:
[monitor:///opt/IBM/WebSphere/AppServer/profiles/APP/test.log*]
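A likely cause, sketched below: the trailing wildcard makes Splunk monitor the rotated copies (test.log.1 through test.log.99) in addition to the live file. Since log4j's rolling appender renames files rather than appending to them, monitoring only the live file should be sufficient, and Splunk's CRC tracking will normally recognize a renamed file as already read:

[monitor:///opt/IBM/WebSphere/AppServer/profiles/APP/test.log]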
Hi experts, my SPL looks something like:

index=<> sourcetype::<>
| <do some usual data manipulation>
| timechart min(free) AS min_free span=1d limit=bottom1 usenull=f BY hostname
| filldown

What I want to achieve is to display the outcome as a Single Value visualisation with a sparkline. My expectation is to have the very last and smallest min_free value for the selected time span displayed, with the hostname that has the smallest min_free shown in the same visual. However, I get a different outcome: the BY split appears to group data by hostname first and then applies the min_free value as a secondary sort criterion. The following is what I get:

When I modify the timechart in the SPL to limit=bottom2, I get the following.

What I want, with a slightly modified SPL (limit=bottom1 useother=f), is to display only the circled middle one, with the Single Value showing both the latest smallest min_free and hostname values. How can I achieve this? Thanks, MCW
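A sketch of one way to get a single series that always carries the day's overall minimum together with its owner: compute the minimum per day per host with stats, then keep only the smallest row per day, instead of letting timechart split BY hostname (index/sourcetype placeholders as in the post):

index=<> sourcetype=<>
| bin _time span=1d
| stats min(free) AS min_free BY _time hostname
| sort 0 _time min_free
| dedup _time

Each day then contributes exactly one row holding min_free and the hostname that produced it, so the Single Value visualization shows the latest day's minimum with the sparkline built from the min_free column.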
I want to show weekly data as a trend; it should not add a total. Right now I am using the query below, but it is showing the overall count for the week: | timechart span=1w@w7 sum(abc) by xyz @splunk
We have an app input config monitor containing wildcards, with a whitelist configured to pick up only .log and .out files. There are about 120 log files matching the whitelist regex. All the log files are ingesting fine except for one specific log file that seems unable to continue ingestion after log rotation. crcSalt and initCrcLength are already defined as below:

initCrcLength = 1048576
crcSalt = <SOURCE>

In splunkd.log, the event below can be found:

05-15-2024 00:32:57.332 -0400 INFO WatchedFile [16425 tailreader0] - Logfile truncated while open, original pathname file='/xxx/catalina-.out', will begin reading from start.

Are 120 logs on one input too many for Splunk to handle? How can we resolve this issue?
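For what it's worth, 120 files is well within what a single monitor stanza handles; the "Logfile truncated while open" message points instead at copytruncate-style rotation on that one file, which resets the read pointer. One way to inspect where Splunk's file tracker thinks it is (host and credentials below are placeholders):

$SPLUNK_HOME/bin/splunk list inputstatus
curl -k -u admin:changeme https://localhost:8089/services/admin/inputstatus/TailingProcessor:FileStatus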
I am in a bit of a fix right now, getting the error below when trying to add a new input to Splunk using this document: https://docs.splunk.com/Documentation/AddOns/released/AWS/Setuptheadd-on

Note: the Splunk instance is in a different account than the S3 bucket.

Error response received from the server: Unexpected error "<class 'splunktaucclib.rest_handler.error.RestError'>" from python handler: "REST Error [400]: Bad Request -- An error occurred (AccessDenied) when calling the ListBuckets operation: Access Denied". See splunkd.log/python.log for more details.

I have created an AWS role to allow the user residing in the account where my S3 bucket is, and the permissions are like below.

Trust relationship:

The user has S3 full access and an AssumeRole policy attached to it.

Splunk config: the IAM role still shows as undiscovered:

Are there any changes required at the Splunk instance level in the other account so that it can access the policy? TIA for your help!
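For reference, a minimal cross-account sketch: the role in the bucket's account needs a trust policy naming the exact identity the add-on authenticates as, and that identity needs sts:AssumeRole on the role's ARN. All account IDs and names below are placeholders.

Trust policy on the role in the bucket's account:

{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": { "AWS": "arn:aws:iam::111122223333:user/splunk-addon-user" },
    "Action": "sts:AssumeRole"
  }]
}

Policy attached to the identity the add-on authenticates as:

{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": "sts:AssumeRole",
    "Resource": "arn:aws:iam::444455556666:role/splunk-s3-read"
  }]
}

A role living in the other account usually cannot be auto-discovered by the add-on, so its ARN generally has to be entered manually under the add-on's IAM Role configuration.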
Hello Splunkers! I am learning Splunk, but I've never deployed or worked with Splunk ES in a production environment, especially in a SOC.

As you know, we have notables and investigations in ES, and for both of them we can change the status to indicate whether an investigation is in progress. I am not quite sure how a SOC actually uses these features, so I have a couple of questions.

1) Do analysts always start an investigation when they are about to handle a notable in the Incident Review tab? Probably the first thing analysts do is change the status from "new" to "in progress" and assign the event to themselves, to indicate that they are handling the notable, but do they also start a new investigation or add it to an existing one, or can an analyst handle the notable without doing either?

2) When a notable has been added to an investigation, what do analysts do once they figure out the disposition (complete their investigation)? Do they merely change the status by editing the investigation and the notable in their associated tabs? Do they always put their conclusions about an incident in the comment section, as described in this article: The Five Step SOC Analyst Method. This 5-step security analysis… | by Tyler Wall | Medium?

3) Does a level-1 SOC analyst directly set the status to "closed" when the notable/investigation is completed, or do they always have to set it to "resolved" for confirmation by their more experienced colleagues?

I hope my questions are clear. Thanks for taking the time to read my post and reply!
Please tell me how to replace certain characters in a field definition so the corrected value is output. Specifically, the problem is that the following two MAC address formats are mixed across the multiple logs imported into Splunk:

AA:BB:CC:00:11:22
AA-BB-CC-00-11-22

I would like to unify the MacAddress field in the logs into the form "AA:BB:CC:00:11:22" in advance, because I want to link the host name from the MAC address in an automatic lookup table definition.

I can put the following in the search bar and output the corrected value as "MacAddr":

index="Log" | rex "(?<CL_MacAddr>...)" | eval MacAddr = replace(CL_MacAddr, "-", ":")

Alternatively, I could replace the existing field "CL_MacAddr" with a corrected version as follows:

index="Log" | rex mode=sed field="CL_MacAddr" "s/-/:/g"

I am trying to set this up in the GUI's field extractions and field transformations so that the corrected value is always present, but it does not work. Or can it be set directly in transforms.conf, and in that case, what values can be set and where? I know this is basic, but I would appreciate your help. Thank you in advance.
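A sketch of one way to make the normalized value exist automatically at search time, so the automatic lookup can use it as its input field: a calculated field in props.conf. The sourcetype stanza below is a placeholder; EVAL- fields are applied after field extraction, so CL_MacAddr is already available:

[your_sourcetype]
EVAL-MacAddr = replace(CL_MacAddr, "-", ":")

With this in place, the automatic lookup definition can reference MacAddr as its input field.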
Hi, I want to know how to change an add-on from archived to active status on the Splunkbase site. I have already submitted a new version of the add-on file, but the add-on is still in archived status.
Hi all, I have a number of forwarders that send a lot of logs to different indexes. For example, there are three indexes: Windows, Linux, and Firewall. A new cluster has been set up, and I am planning to forward only some logs to the new cluster based on the index name. For example, of the three indexes Windows, Linux, and Firewall, I'm going to send only Firewall to the new cluster. This is the configuration that I tried to create:

[tcpout]
defaultGroup = dag,dag-n

[tcpout:dag]
disabled = false
server = p0:X,p1:Y

[tcpout:dag-n]
disabled = false
server = pn:Z
forwardedindex.0.whitelist = firewall
forwardedindex.1.blacklist = .*

Unfortunately, some logs from both the Windows and Linux indexes are still sent to the new cluster, and because no index is defined for them on the new cluster, they frequently cause errors. It occurred to me that maybe I should empty the default whitelist and blacklist first. Anyone have any ideas?
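For what it's worth, the forwardedindex.* filters are documented as valid only under the global [tcpout] stanza, not per target group, so the allowlist above may simply be ignored. A sketch of the usual per-group alternative where parsing happens (heavy forwarder or indexer, not a universal forwarder), routing by index via props/transforms, with the group names reused from above and defaultGroup set to just dag:

props.conf:

[default]
TRANSFORMS-index_routing = route_firewall_to_new

transforms.conf:

[route_firewall_to_new]
SOURCE_KEY = _MetaData:Index
REGEX = ^firewall$
DEST_KEY = _TCP_ROUTING
FORMAT = dag,dag-n

Everything then goes to the old cluster by default, while firewall events are routed to both groups; dropping dag from FORMAT would send firewall only to the new cluster.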
Currently the Cisco Networks app looks in all indexes when searching for the cisco:ios sourcetype. I'm looking for an easy way to restrict this to a single index to help improve performance. There are no config options in the app or add-on that I can see. Any thoughts?
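One hedged approach, assuming the app finds its data through an eventtype (the eventtype and index names below are guesses; the real name is in the app's default/eventtypes.conf): override it in the app's local directory with an index constraint.

local/eventtypes.conf inside the Cisco Networks app:

[cisco_ios]
search = index=network sourcetype=cisco:ios

Alternatively, restricting the indexes searched by default per role (srchIndexesDefault in authorize.conf) achieves a similar effect without touching the app.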