All Topics


Hi, I'm curious: can Splunk automatically turn off the screen or start a screen saver when you log out of the Splunk console or when your session expires? Is it possible to implement this without Phantom, e.g. using a bash script or PowerShell?
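Splunk itself has no control over the client's display, so any screen-lock behaviour would have to be scripted outside Splunk. Below is a very rough bash sketch of one possible approach, assuming a Linux desktop with xdg-screensaver available and REST access to the management port; the host, credentials, polling interval, and the idea of keying off the httpauth-tokens endpoint are all assumptions to verify, not a tested recipe.

#!/usr/bin/env bash
# Hypothetical sketch: poll the Splunk management port for active UI session
# tokens and lock the screen when none remain. All values are placeholders.
SPLUNK_HOST="https://localhost:8089"
AUTH="admin:changeme"

while true; do
  # authentication/httpauth-tokens lists current web session tokens (admin view)
  sessions=$(curl -sk -u "$AUTH" "$SPLUNK_HOST/services/authentication/httpauth-tokens" | grep -c "<entry")
  if [ "$sessions" -eq 0 ]; then
    xdg-screensaver lock   # or: loginctl lock-session
  fi
  sleep 60
done

On Windows the same idea could be expressed in PowerShell with Invoke-RestMethod plus a screensaver launch; either way the script, not Splunk, watches session state via the REST API and drives the screen lock.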
package org.example;

import com.splunk.HttpService;
import com.splunk.SSLSecurityProtocol;
import com.splunk.Service;
import com.splunk.ServiceArgs;

public class ActualSplunk {
    public static void main(String[] args) {
        // Create ServiceArgs object with connection parameters
        ServiceArgs loginArgs = new ServiceArgs();
        loginArgs.setUsername("providedvalidusername");
        loginArgs.setPassword("providedvalidpassword");
        loginArgs.setHost("hostname");
        loginArgs.setPort(8089);

        HttpService.setSslSecurityProtocol(SSLSecurityProtocol.TLSv1_2);

        // Connect to Splunk
        Service service = Service.connect(loginArgs);

        // Check if connection is successful
        if (service != null) {
            System.out.println("Connected to Splunk!");
            // Perform operations with the 'service' object as needed
        } else {
            System.out.println("Failed to connect to Splunk.");
        }

        // Close the connection when done
        if (service != null) {
            service.logout(); // Logout from the service
            // service.close(); // Close the service connection
        }
    }
}

When I run the above code to connect to my local Splunk, it works fine with my local credentials. But when I run the same code on my VM with the actual Splunk Cloud host, username, and password to fetch the logs, it throws an exception: "java.lang.RuntimeException: An established connection was aborted by your host machine".
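For what it's worth, the connection settings usually need a couple of tweaks when the target is Splunk Cloud rather than a local instance: the scheme must be https, the host is the cloud stack's management hostname, and port 8089 on the stack has to be reachable from the VM (often it must be allowlisted). A minimal sketch reusing the imports above; the host value is a placeholder pattern, not your actual stack:

// Hypothetical Splunk Cloud connection settings (all values are placeholders)
ServiceArgs loginArgs = new ServiceArgs();
loginArgs.setScheme("https");                      // Splunk Cloud only accepts HTTPS
loginArgs.setHost("yourstack.splunkcloud.com");    // management hostname of the cloud stack
loginArgs.setPort(8089);                           // must be reachable/allowlisted from the VM
loginArgs.setUsername("providedvalidusername");
loginArgs.setPassword("providedvalidpassword");
HttpService.setSslSecurityProtocol(SSLSecurityProtocol.TLSv1_2);
Service service = Service.connect(loginArgs);

An "established connection was aborted" error at this point is usually a network-level rejection rather than a credential problem, which is why reachability of port 8089 from the VM is the first thing worth checking.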
Hi team, I encountered a problem when retrieving data from rotated log files: duplicate events. For example, an event in the file test.log.1 has already been retrieved, and when it rotates to test.log.2 Splunk retrieves it again. How can I configure Splunk to retrieve only the latest events and not events that have been rotated to another file?

Log4j appender configuration:
log4j.appender.file.File=test.log
log4j.appender.file.MaxFileSize=10000KB
log4j.appender.file.MaxBackupIndex=99

Splunk inputs.conf:
[monitor:///opt/IBM/WebSphere/AppServer/profiles/APP/test.log*]
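One direction worth considering, sketched here only as a possibility: monitor just the active file instead of the wildcard, so the rotated backups test.log.1 ... test.log.99 are never in scope. The path comes from the post; the sourcetype is a placeholder.

[monitor:///opt/IBM/WebSphere/AppServer/profiles/APP/test.log]
disabled = false
sourcetype = your_log4j_sourcetype

The tailing processor normally recognizes a renamed/rotated file by its content CRC and skips it, so if duplicates still appear with the wildcard in place, the CRC-related settings (crcSalt, initCrcLength) on that input are also worth reviewing.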
Hi expert,

My SPL looks something like:
index=<> sourcetype::<> | <do some usual data manipulation> | timechart min(free) AS min_free span=1d limit=bottom1 usenull=f BY hostname | filldown

What I want to achieve is to display the outcome as a Single Value visualisation with a sparkline. My expectation is to see the very last (and smallest) min_free value for the selected time span, with the hostname that has the smallest min_free shown in the same visual. However, I get a different outcome. The BY split appears to group the data by hostname first and then apply the min_free value as a secondary sort criterion. The following is what I get:

When I modify the timechart to limit=bottom2, I get the following.

What I want, with a slightly modified SPL (limit=bottom1 useother=f), is to display only the circled middle one, with the Single Value showing both the latest smallest min_free and the hostname. How can I achieve this?

Thanks,
MCW
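A sketch of one possible direction, assuming the goal is a single series that contains, for each day, only the host with the smallest min_free (the index/sourcetype placeholders, the free field, and the intermediate manipulation are carried over from the post):

index=<> sourcetype::<>
| <do some usual data manipulation>
| bin _time span=1d
| stats min(free) AS min_free BY _time hostname
| sort 0 _time min_free
| dedup _time
| table _time min_free hostname

After the sort, dedup _time keeps only the first (smallest) row per day, so the result has one hostname/min_free pair per day rather than one column per host, which is closer to what a Single Value with a trend expects.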
I want to show weekly data in a trend, and it should not add a total. Right now I am using the query below, but it is showing the overall count for the week:

| timechart span=1w@w7 sum(abc) by xyz

@splunk
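In case it helps, a minimal sketch of an equivalent that produces exactly one row per week per xyz value and never computes an overall/total column (this assumes the unwanted total is being added somewhere downstream of the timechart, e.g. by an addtotals or a chart option, which the post doesn't show):

| bin _time span=1w@w7
| stats sum(abc) AS weekly_count BY _time xyz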
We have an app input config with a wildcarded monitor and a whitelist configured to pick up only .log and .out files. About 120 log files match the whitelist regex. All of the log files are ingesting fine except for one specific log file, which seems unable to continue ingestion after log rotation. crcSalt and initCrcLength are already defined as below:

initCrcLength = 1048576
crcSalt = <SOURCE>

In splunkd.log, the event below can be found:

05-15-2024 00:32:57.332 -0400 INFO WatchedFile [16425 tailreader0] - Logfile truncated while open, original pathname file='/xxx/catalina-.out', will begin reading from start.

Are 120 logs on one input too many for Splunk to handle? How can we resolve this issue?
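For reference, a minimal sketch of the kind of monitor stanza being described, with the two CRC settings placed on the input itself; the path, whitelist pattern, and sourcetype below are placeholders rather than values from the post:

[monitor:///opt/app/logs]
whitelist = \.(log|out)$
sourcetype = your_sourcetype
crcSalt = <SOURCE>
initCrcLength = 1048576

A hundred-odd files on one monitor stanza is generally well within what the tailing processor handles, so the file count by itself is unlikely to be the cause; the "truncated while open" message points more at how that one file is rotated (truncate-in-place) than at the input as a whole.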
I am in a bit of a fix right now and getting the error below when I try to add a new input to Splunk using this document: https://docs.splunk.com/Documentation/AddOns/released/AWS/Setuptheadd-on

Note: the Splunk instance is in a different account than the S3 bucket.

Error response received from the server: Unexpected error "<class 'splunktaucclib.rest_handler.error.RestError'>" from python handler: "REST Error [400]: Bad Request -- An error occurred (AccessDenied) when calling the ListBuckets operation: Access Denied". See splunkd.log/python.log for more details.

I have created an AWS role in the account where my S3 bucket resides to allow the user, and the permissions are as below.

Trust relationship:

The user has S3 full access and an AssumeRole policy attached to it.

Splunk config: the IAM role still shows undiscovered:

Are there any changes required at the Splunk instance level in the other account so that it can access the policy? TIA for your help!
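Since the trust-relationship screenshot didn't come through, here is a generic sketch of what a cross-account trust policy on the bucket-side role usually looks like; the account ID and user name are placeholders, not values from the post:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::111111111111:user/splunk-addon-user" },
      "Action": "sts:AssumeRole"
    }
  ]
}

An AccessDenied on ListBuckets generally means the identity actually making the call lacks s3:ListAllMyBuckets, which can happen when the add-on is still calling S3 as the user rather than the assumed role; that would be consistent with the role showing as undiscovered on the Splunk side.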
Hello Splunkers! I am learning Splunk, but I've never deployed or worked with Splunk ES in a production environment, especially in a SOC.

As you know, we have notables and investigations in ES, and for both of them we can change the status to indicate whether the investigation is in progress. I am not quite sure how a SOC actually uses these features, which is why I have a couple of questions.

1) Do analysts always start an investigation when they are about to handle a notable in the Incident Review tab? Probably the first thing analysts do is change the status from "new" to "in progress" and assign the event to themselves, to indicate that they are handling the notable. But do they also start a new investigation or add the notable to an existing one, or can an analyst handle the notable without doing either?

2) When a notable has been added to an investigation, what do analysts do once they figure out the disposition (complete their investigation)? Do they merely change the status by editing the investigation and the notable in their associated tabs? Do they always put their conclusions about an incident in the comment section, as described in this article: The Five Step SOC Analyst Method. This 5-step security analysis… | by Tyler Wall | Medium?

3) Does a first-level SOC analyst directly set the status to "closed" when the notable/investigation is complete, or do they always have to set it to "resolved" for confirmation by more experienced colleagues?

I hope my questions are clear. Thanks for taking the time to read my post and reply to it.
Please tell me how to make the output replace certain characters in a field definition. Specifically, the problem is that the following two formats of MAC address are mixed across the multiple logs imported into Splunk:

AA:BB:CC:00:11:22
AA-BB-CC-00-11-22

I would like to unify the MacAddress field in the logs to the form "AA:BB:CC:00:11:22" in advance, because I want to look up the host name from the MAC address via an automatic lookup table definition.

Putting the following in the search bar outputs the modified value as "MacAddr":

index="Log" | rex "^.+?\scli\s}?(?<CL_MacAddr>.+?(.+?))\)" | eval MacAddr = replace(CL_MacAddr,"-",":")

Alternatively, I could replace the existing field "CL_MacAddr" with a modified version as follows:

index="Log" | rex mode=sed field="CL_MacAddr" "s/-/:/g"

I am trying to set this up in the GUI's field extraction and field transformation pages so that the modified value is always available, but it does not work. Or can it be set directly in transforms.conf, and if so, what values can be set and where? I know this is basic, but I would appreciate your help. Thank you in advance.
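If the aim is for the normalized field to exist automatically at search time instead of being rebuilt in each search, a calculated field in props.conf is one way to do it. A minimal sketch, assuming the events carry sourcetype your_sourcetype and an already-extracted field CL_MacAddr (both names are placeholders):

# props.conf (search time, e.g. in an app's local/ directory on the search head)
[your_sourcetype]
# Calculated field: normalize dashes to colons
EVAL-MacAddr = replace(CL_MacAddr, "-", ":")

Calculated fields are evaluated before automatic lookups in the search-time order of operations, so a LOOKUP- definition in the same props.conf can then use MacAddr as its input field. The GUI's "Calculated fields" page writes exactly this kind of EVAL- line; transforms.conf is only needed for regex-based extractions.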
Hi, I want to know how to change an add-on from archived to active status on the Splunkbase site. I have already submitted a new version of the add-on file, but the add-on is still in archived status.
Hi all, I have a number of forwarders that send a lot of logs to different indexes. For example, there are three indexes: Windows, Linux, and Firewall. A new cluster has been set up and I am planning to forward only some logs to the new cluster based on the index name. For example, of the three indexes Windows, Linux, and Firewall, I'm going to send only Firewall to the new cluster. This is the configuration I tried to create:

[tcpout]
defaultGroup = dag,dag-n

[tcpout:dag]
disabled = false
server = p0:X,p1:Y

[tcpout:dag-n]
disabled = false
server = pn:Z
forwardedindex.0.whitelist = firewall
forwardedindex.1.blacklist = .*

Unfortunately, some logs from both the Windows and Linux indexes are still being sent to the new cluster, and because no index is defined for them there, they frequently cause errors. One thing that came to mind is that maybe I should empty the default whitelist and blacklist first. Does anyone have any ideas?
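A sketch of one alternative that gets used for this kind of selective routing, offered with two caveats: forwardedindex.* filters are documented as belonging under the global [tcpout] stanza (where they affect all groups, not just one), and props/transforms routing only takes effect on an instance that parses data (a heavy forwarder or the indexers). Everything here except the group and server names from the post is an assumption to verify, in particular the _MetaData:Index source key:

# outputs.conf
[tcpout]
defaultGroup = dag

[tcpout:dag]
server = p0:X,p1:Y

[tcpout:dag-n]
server = pn:Z

# props.conf (on the parsing tier)
[default]
TRANSFORMS-route_fw = route_firewall_to_both

# transforms.conf
[route_firewall_to_both]
SOURCE_KEY = _MetaData:Index
REGEX = firewall
DEST_KEY = _TCP_ROUTING
FORMAT = dag,dag-n

With defaultGroup = dag everything goes to the old cluster, and only events whose index name matches firewall get their _TCP_ROUTING overridden to both groups.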
Currently the Cisco Networks app looks in all indexes when searching for the cisco:ios sourcetype. I'm looking for an easy way to restrict this to a single index to help improve performance. There are no config options in the app or add-on that I can see. Any thoughts?
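A sketch of one possible approach, assuming the app's searches are driven by an eventtype or macro that wraps the sourcetype; the stanza name cisco_ios and the index name network below are guesses, so check the app's default/eventtypes.conf (or default/macros.conf) for the real names before copying anything:

# $SPLUNK_HOME/etc/apps/<cisco_networks_app>/local/eventtypes.conf
[cisco_ios]
search = index=network sourcetype=cisco:ios

Overriding the definition in local/ pins the app's searches to one index without touching its default files, so the change survives app upgrades.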
Hello, I have logs in the following paths:

/abc-logs/hosta/mods/stdout.240513-070854
/abc-logs/hostb/mods/stdout.240513-070854
/abc-logs/hostc/mods/stdout.240513-070854
/abc-logs/hostd.a.clusters.abc.com/mods/stdout.240206-084344
/abc-logs/hoste/mods/stdout.240513-070854

When I try to monitor this path to get the logs into Splunk, I only get two files. When I checked the internal logs I see the following errors:

05-16-2024 10:07:25.609 -0700 ERROR TailReader [1846912 tailreader0] - File will not be read, is too small to match seekptr checksum (file=/abc-logs/hosta/mods/stdout.240513-070854).  Last time we saw this initcrc, filename was different.  You may wish to use larger initCrcLen for this sourcetype, or a CRC salt on this source.  Consult the documentation or file a support case online at http://www.splunk.com/page/submit_issue for more info.

A possible timestamp match (Fri Feb 13 15:31:30 2009) is outside of the acceptable time window. If this timestamp is correct, consider adjusting MAX_DAYS_AGO and MAX_DAYS_HENCE. Context: FileClassifier C:\abc-logs\hostd.a.clusters.abc.com\mods\stdout.240206-084344

I am using the props below:

[ mods ]
BREAK_ONLY_BEFORE_DATE = null
CHARSET = AUTO
CHECK_METHOD = entire_md5
DATETIME_CONFIG = CURRENT
LINE_BREAKER = ([\r\n]+)
MAX_DAYS_AGO = 2000
MAX_DAYS_HENCE = 365
NO_BINARY_CHECK = true
SHOULD_LINEMERGE = false
category = Custom
crcSalt = <SOURCE>
initCrcLength = 1048576

I tried changing CHECK_METHOD to other options but it did not work.

Thanks in advance
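One detail that may matter here: crcSalt and initCrcLength are monitor-input settings read from inputs.conf, not props.conf, so placing them in a props stanza would leave the tail reader still using its defaults. A minimal sketch of where they would go; the monitor path is generalized from the post and the sourcetype assignment is an assumption:

# inputs.conf
[monitor:///abc-logs/*/mods/stdout.*]
sourcetype = mods
crcSalt = <SOURCE>
initCrcLength = 1048576

With crcSalt = <SOURCE> each file's path is mixed into the CRC, which is the usual way to stop Splunk from treating differently named files with identical leading bytes as the same file.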
Hello all,

Just wondering if anyone else has removed the index-time extractions for the Cisco DNA Center Add-on (6668). I don't like that it needlessly indexes fields and then resolves the resulting duplicate-field issue by disabling KV_MODE. I was thinking of adding something like this to the app's props.conf, but I am still looking for better options:

INDEXED_EXTRACTIONS =
KV_MODE = JSON
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n\[\]\,]+\s*)([\{])
So I have the following setup and everything is good, but I want to do a kind of subsearch.

Sample event:
User-ABCDEF assigned Role-'READ' on Project-1234 to GHIJKL

Current SPL:
index="xxxx" "role-'WRITE'" OR "role-'READ'"
| rex "User-(?<userid>[^,]*)"
| rex "(?<resource>\w+)$"
| eval userid=upper(userid)
| stats c as Count latest(_time) as _time by userid

I get output like this:
ABCDEF ASSIGNED ROLE-'READ' ON PROJECT-1234 TO GHIJKL

What I want is to search on just the GHIJKL after it is extracted. Or should I just put it at the front of the base search so it only fetches events containing it?
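If GHIJKL ends up in the extracted resource field, a plain search (or where) on that field after the rex is one way to filter on it; a minimal sketch reusing the extractions from the post:

index="xxxx" "role-'WRITE'" OR "role-'READ'"
| rex "User-(?<userid>[^,]*)"
| rex "(?<resource>\w+)$"
| search resource="GHIJKL"
| eval userid=upper(userid)
| stats count AS Count latest(_time) AS _time BY userid

Putting the literal string GHIJKL into the base search as well (index="xxxx" ... "GHIJKL") is usually worthwhile too, since it cuts down the events scanned before the rex runs.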
Please, what is the REST endpoint for the searches that users are running?
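A minimal sketch of one endpoint that covers this: search/jobs lists the search jobs on the instance, including who dispatched them and the search string. It can be called directly (https://<host>:8089/services/search/jobs) or from SPL via the rest command:

| rest /services/search/jobs
| table author title dispatchState runDuration

Here title holds the search string and author the user who ran it; searches that have already aged off the dispatch directory are found in the _audit index instead.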
I want a query that shows the total volume of the indexes used by Splunk searches, i.e. information about how much of each index is used, based on Splunk searches.
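The question can be read a couple of ways, so here is a hedged sketch for the reading "how big is each index right now"; which indexes individual searches actually touch would come from the _audit index instead (see the sketch after the next question):

| rest /services/data/indexes
| table title currentDBSizeMB totalEventCount
| sort - currentDBSizeMB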
Can I get a query that will find the searches that users are running in Splunk?
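A common starting point, as a minimal sketch: the _audit index records each search that is dispatched, together with the user who ran it.

index=_audit action=search info=granted search=*
| table _time user search
| sort - _time

info=granted captures searches as they are dispatched; switching to info=completed shows finished searches along with their run-time statistics.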
I have a search that returns the following table (after transpose):

column        row 1      row 2
search_name   UC-315     UC-231
ID            7zAt/7     5Dfxdf
Time          13:27:17   09:17:09

And I need it to look like this:

column        new_row
search_name   UC-315
ID            7zAt/7
Time          13:27:17
search_name   UC-231
ID            5Dfxdf
Time          09:17:09

This should work independently of the number of rows. I've tried using mvexpand and streamstats, but without any luck.
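A sketch of one way to get that stacked layout, assuming the results before | transpose have one row per search with the fields search_name, ID, and Time (replace the leading ... with the original base search): build a multivalue field in the desired order, expand it, then split each entry back into the two columns.

...
| eval pair=mvappend("search_name=".search_name, "ID=".ID, "Time=".Time)
| mvexpand pair
| rex field=pair "^(?<column>[^=]+)=(?<new_row>.*)$"
| table column new_row

Because mvexpand preserves both the row order and the order of the values inside pair, this scales to any number of rows without hard-coding them.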
I have a search that returns the following table (after transpose): column row 1 row 2 search_name UC-315 UC-231 ID 7zAt/7 5Dfxdf Time 13:27:17 09:17:09 And I need it to look like this: column new_row search_name UC-315 ID 7zAt/7 Time 13:27:17 search_name UC-231 ID 5Dfxdf Time 09:17:09 This should work independently of the amount of rows. I've tried using mvexpand, and streamstats but without any luck.