All Topics


Hello all, I deleted a custom field extraction that I created earlier, but the field is still extracted in search results. Also, I'm trying a new field extraction (sampling looks fine), but the field doesn't show up in search, even in verbose mode. Do you have any idea why? Regards.
While configuring an S3 input in the Splunk Add-on for AWS, I received an error message stating that "SSL Validation failed" because the VPC S3 Endpoint did not match a series of S3 bucket endpoint names (e.g. s3.us-east-1.amazonaws.com). As part of the Splunk AWS Add-on naming convention for private endpoints, the Private Endpoint URL for the S3 bucket must be:

https://vpce-<endpoint_id>-<unique_id>.s3.<region>.vpce.amazonaws.com

After creating the endpoints, we're running into the SSL validation errors. Any idea what could be causing this?
I have a query that frequently times out due to the subsearch time limit. I'd like to improve its performance, but I'm not sure how. Here's my query:

host=prod* source=user-activity.log sourcetype=log4j ID=uniqueID MESSAGE="LOGIN_SUCCESS*"
| stats count as Logins by Full_Date, ID, DName, STATE
| join type=left ID
    [ search host=prod* source=server.log sourcetype=log4j MESSAGE="[Dashboard User-Facing*" ID=uniqueID
    | stats count as Errors by Full_Date, ID, DName, STATE ]
| eval %=round((100*Errors)/Logins,0)
| table ID, DName, Full_Date, STATE, Errors, Logins, %

Any help would be greatly appreciated.
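In case a sketch helps frame the question: the usual join-free rewrite pulls both sourcetypes in one search and splits the counts with eval inside stats. This is only an assumption-laden sketch — it presumes Full_Date, DName, and STATE are present on both sourcetypes, which the join's by-fields imply:

host=prod* sourcetype=log4j ID=uniqueID ((source=user-activity.log MESSAGE="LOGIN_SUCCESS*") OR (source=server.log MESSAGE="[Dashboard User-Facing*"))
| eval kind=if(source="user-activity.log", "login", "error")
| stats count(eval(kind="login")) as Logins, count(eval(kind="error")) as Errors by Full_Date, ID, DName, STATE
| eval %=round((100*Errors)/Logins,0)
| table ID, DName, Full_Date, STATE, Errors, Logins, %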
The search below is for an alert; it is supposed to list all missing / non-reporting agents, but when I run it, it lists all hosts. Can anyone help fix the search? Greatly appreciated.

index=indexname sourcetype="sourcetypename"
| bin _time span=4d
| eval days_since = floor((now()-lastSeen)/86400)
| stats latest(lastSeen) as lastSeen, values(days_since) as days_since by host
| search days_since>4
| eval lastSeen=strftime(lastSeen, "%Y-%m-%d %H:%M:%S")
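A sketch of what may be going wrong, offered under the assumption that lastSeen is an epoch-time field on every event: days_since is computed per event before the stats, so values(days_since) can contain small values from recent events, and the bin on _time has no effect on the final table. Computing the age once, from the latest value, may behave as intended:

index=indexname sourcetype="sourcetypename"
| stats latest(lastSeen) as lastSeen by host
| eval days_since = floor((now()-lastSeen)/86400)
| where days_since > 4
| eval lastSeen=strftime(lastSeen, "%Y-%m-%d %H:%M:%S")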
I don't know the best way to word the subject, so if anyone has a better recommendation after reading my question below, let me know. I have access to several different security dashboards from the InfoSec app. I am trying to figure out how to pivot from the summarized data shown in a dashboard, which uses tstats, to the traditional Search & Reporting app, where I can view events and click on items of interest to narrow down the search. One example: I see an alert on a dashboard, and when I open the search, tstats only shows the source IP addresses. I'd like to see more information, such as the destination and other fields, but tstats isn't designed to show that information when you click on "events". This is why I'd like to take the information from tstats into the Search & Reporting app, so I can scroll through the list of fields on the left and use that to help refine results.
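To make the intent concrete, a hypothetical sketch (the data model, index, and sourcetype names here are invented for illustration): a dashboard panel might run a summary like

| tstats summariesonly=true count from datamodel=Network_Traffic where All_Traffic.action="blocked" by All_Traffic.src

and the pivot I'm after would land in Search & Reporting on the raw events behind one value of interest, where the full field list is available on the left, e.g.

index=firewall sourcetype=pan:traffic src="10.1.2.3"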
I have an indexer cluster and a search head cluster. I want to use a CSV threat feed to add an IP reputation field using an automatic lookup. I tried all the online resources, but it doesn't work. Does anyone know of a limitation on automatic lookups with search head clustering? I used both the web-based and the config-file-based options, but neither worked. I did the manual checks and all of it worked.
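For reference, the shape of the config in question — all names here are placeholders (a threat_feed.csv with columns ip and reputation, matched against a src_ip field), and in an SHC these files would go out through the deployer rather than direct edits on the members:

transforms.conf:
[threat_feed]
filename = threat_feed.csv

props.conf:
[my_sourcetype]
LOOKUP-threat_feed = threat_feed ip AS src_ip OUTPUT reputation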
I have what is hopefully a really straightforward issue. Essentially, I want to take the output of one search (data within a specific field from sourcetypeA) and use that data to search again within the same index but a different sourcetype (sourcetypeB).

Initial search:

index="data_index" sourcetype="sourcetypeA" field1="static_value"
| table field2
| dedup field2

The above search returns a single field with one value per row, but more than one row and a different value in each row, something like:

field2
AAAAAAAAA
BBBBBBBBB
CCCCCCCCC

I then need to take each of the rows above and plug them into another search:

index="data_index" sourcetype="sourcetypeB"
| table field3, field4
| dedup field3
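A sketch of the standard subsearch pattern for this, assuming sourcetypeB also carries a field2 to match on (if the matching field has a different name there, add a rename inside the subsearch):

index="data_index" sourcetype="sourcetypeB"
    [ search index="data_index" sourcetype="sourcetypeA" field1="static_value"
    | dedup field2
    | fields field2 ]
| table field3, field4
| dedup field3

The subsearch's field2 values become an implicit OR filter on the outer search.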
Hello Splunkers, I am trying to see if I can merge the following events and show them in a tabular format.

Sample event 1:
3/31/22 6:54:29.000 AM   GB (ID 5): BSN: 15730946, BON: 699-01, BOAA: 01, GPN: 1395, GSN: 920-000

Sample event 2:
3/31/22 6:54:29.000 AM   CPU (ID 0): BSN: 55506204BC, BON: 555.06901.0004, BOAA: 01, QPN: 16646, QSN: 001

Sample event 3:
3/31/22 6:54:29.000 AM   CHASN: 166066

I want to merge all events coming from the same host at the same time and show them in a tabular format. If there is no value for a particular field, it should show UNKNOWN:

time                     host   CHASN    GPN    GSN       QPN     QSN
3/31/22 6:54:29.000 AM   ABC    166066   1395   920-000   16646   001
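A sketch of one way this might be done, assuming the CHASN/GPN/GSN/QPN/QSN fields are already extracted (index and sourcetype are placeholders):

index=my_index sourcetype=my_sourcetype
| stats values(CHASN) as CHASN, values(GPN) as GPN, values(GSN) as GSN, values(QPN) as QPN, values(QSN) as QSN by _time, host
| fillnull value="UNKNOWN" CHASN GPN GSN QPN QSN
| table _time, host, CHASN, GPN, GSN, QPN, QSN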
Hi all,
Can somebody recommend sources from which I could learn about writing and implementing telecom-security use cases for Splunk? I'd appreciate any suggestions and recommendations.
Cheers
Hi, I can't access the recent data in a metric index anymore with the mstats command, but I can see it with the mpreview command. This means the data is there, but mstats just won't work on it anymore? I am on a standalone Splunk install that we are using for testing. I have a test that ran 4 hours ago, and with mpreview you can see the data. This install was working fine until I upgraded an app; however, the app does not contain the index I need. Using Analytics I can see data from last Friday, but not today. Is there a way to check whether the index is broken, or what are the next actions?
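For reference, the two commands being compared, sketched with a placeholder index and metric name:

| mpreview index=my_metrics

| mstats avg(cpu.usage) WHERE index=my_metrics span=1m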
When I display the field "Writeup" from my Excel dataset, it looks like this:

"This is the first line.
This is the second line.
This is the third line."

When displaying the same field in Dashboard Studio, it prints like this:

"This is the first line.This is the second line.This is the third line."

How can I resolve this issue?
Hi all,
I have a lookup table with 5 columns that contain IPs. I want to create a search that excludes those IPs from my results. The issue is that I have 5 values, and all of them should be matched against one single value. For example:

mycontiation
| search NOT [| inputlookup mylookup.csv | rename 1V as IP | fields IP]
| search NOT [| inputlookup mylookup.csv | rename 2v as IP | fields IP]
| search NOT [| inputlookup mylookup.csv | rename 3v as IP | fields IP]
| search NOT [| inputlookup mylookup.csv | rename 4v as IP | fields IP]
| search NOT [| inputlookup mylookup.csv | rename 5v as IP | fields IP]
| table IP

It's not working. Anyone?
Thanks!
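A sketch of one way to collapse the five columns into a single NOT filter, assuming the column names are 1V through 5v as above and the event field to exclude on is named IP:

... base search ...
| search NOT [| inputlookup mylookup.csv
    | eval IP=mvappend('1V','2v','3v','4v','5v')
    | mvexpand IP
    | dedup IP
    | fields IP ]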
We just installed ITSI, and we're also using an app for AD object collection (MS Windows AD Objects). I'm wondering if it's possible to configure ITSI to use some of the existing AD data already collected? I was also hoping to avoid having two AD-related apps that might be pulling duplicate data from all Windows machines.
I'm trying to send my logs from Java to Splunk through log4j2. I'm doing the log4j2 configuration programmatically. I know this is not the correct way to do so, but I'm still doing it for learning purposes. After executing my Java code I see the console appender's logs in the console, but not the Splunk appender's logs in Splunk. I don't know what I'm missing here. I've tried Postman with the same URL and token, and in that case it works well.

My POM file:

<dependencies>
    <!-- https://mvnrepository.com/artifact/com.splunk/splunk-sdk-java -->
    <dependency>
        <groupId>com.splunk.logging</groupId>
        <artifactId>splunk-library-javalogging</artifactId>
        <version>1.11.4</version>
    </dependency>
    <dependency>
        <groupId>org.apache.logging.log4j</groupId>
        <artifactId>log4j-core</artifactId>
        <version>2.11.2</version>
    </dependency>
    <dependency>
        <groupId>org.apache.logging.log4j</groupId>
        <artifactId>log4j-api</artifactId>
        <version>2.11.2</version>
    </dependency>
    <dependency>
        <groupId>com.splunk</groupId>
        <artifactId>splunk</artifactId>
        <version>1.6.5.0</version>
    </dependency>
</dependencies>
<repositories>
    <repository>
        <id>splunk-artifactory</id>
        <name>Splunk Releases</name>
        <url>https://splunk.jfrog.io/splunk/ext-releases-local</url>
    </repository>
</repositories>

My Java code:

import org.apache.logging.log4j.Level;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.apache.logging.log4j.core.config.Configurator;
import org.apache.logging.log4j.core.config.builder.api.ConfigurationBuilder;
import org.apache.logging.log4j.core.config.builder.api.ConfigurationBuilderFactory;
import org.apache.logging.log4j.core.config.builder.impl.BuiltConfiguration;
import org.apache.logging.log4j.core.layout.PatternLayout;

public class Main {

    private static final Logger log;

    static {
        configureLog4J();
        log = LogManager.getLogger(Main.class);
    }

    public static void configureLog4J() {
        ConfigurationBuilder<BuiltConfiguration> builder =
                ConfigurationBuilderFactory.newConfigurationBuilder();

        // configure a splunk appender
        builder.add(
            builder.newAppender("splunk", "SplunkHttp")
                .add(
                    builder.newLayout(PatternLayout.class.getSimpleName())
                        .addAttribute("pattern",
                            "%d{HH:mm:ss.SSS} [%t] %-5level %logger{36} - %msg%n")
                )
                .addAttribute("sourcetype", "log4j")
                .addAttribute("index", "main")
                .addAttribute("url", "http://localhost:8088/services/collector")
                .addAttribute("token", "XXX")
                .addAttribute("host", "java")
        );

        // configure a console appender
        builder.add(
            builder.newAppender("console", "Console")
                .add(
                    builder.newLayout(PatternLayout.class.getSimpleName())
                        .addAttribute("pattern", "%logger{36}-%msg%n")
                )
        );

        // configure the root logger
        builder.add(
            builder.newRootLogger(Level.INFO)
                .add(builder.newAppenderRef("splunk"))
                .add(builder.newAppenderRef("console"))
        );

        // apply the configuration
        Configurator.initialize(builder.build());
    }

    public static void main(String[] args) {
        System.out.println("START");
        log.info("ok");
        log.log(Level.INFO, "BY from log4j2");
        log.log(Level.ERROR, "BY Error from log4j2");
        System.out.println("END");
    }
}
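One detail that may be relevant here: the HEC appender in splunk-library-javalogging batches events, and a short-lived program can exit before the batch is flushed. Two things worth trying, sketched against the code above (the attribute name comes from splunk-library-javalogging, but treat it as an assumption to verify against your version):

// in configureLog4J(), make the appender send each event immediately
.addAttribute("batch_size_count", "1")

// at the end of main(), shut log4j down so any pending events are flushed
LogManager.shutdown();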
I think I must be misunderstanding how dedup works. It seems to me that if you add fields to the dedup field list, you should never get fewer events returned.

| dedup fieldA

should get rid of all extra events with the same value of fieldA, while

| dedup fieldA fieldB

should only get rid of those where BOTH fieldA and fieldB have duplicate values, which set theory suggests must return at least as many events as deduplicating on fieldA alone. But I'm getting far more results for

| dedup _time

than I do for

| dedup _time wma_set wma_filename

Any idea what's going on? For reference, here's the query:

index="main" host="designsafe01.tacc.utexas.edu" "designsafe.storage.community" "SimCenter/Datasets" (op=download OR op=preview OR op=copy OR op=agave_file_download OR op=agave_file_preview OR op=data_depot_copy)
| rex mode=sed "s/%20/ /g"
| rex mode=sed field=info "s/\'/\"/g"
| rex mode=sed field=info "s/\: u\"/: \"/g"
| eval thepath=case(in(op,"download","preview","agave_file_download","agave_file_preview"),json_extract(info,"filePath"),op="copy", json_extract(info,"path"), op="data_depot_copy", json_extract(info,"fromFilePath"))
| rex field=thepath "\/?SimCenter\/Datasets\/(?<wma_set>\w+)(?<wma_path>\/(.*\/)*)(?<wma_filename>[-\w\s\.]+)"
| rex field=wma_filename ".+\.(?<wma_extension>\w*)"
| dedup _time wma_set wma_filename
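One behavior worth ruling out: by default, dedup drops any event in which one of the listed fields is null, so events where the rex fails to populate wma_set or wma_filename would vanish entirely from the three-field dedup. A quick sketch to test that hypothesis:

| dedup _time wma_set wma_filename keepempty=true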
I have a cluster which sometimes reports one of the indexers as being offline (unable to distribute search to... bla bla bla). Usually when I connect to such an indexer it is under heavy load, so I just assumed that for some reason the jobs had piled up on it and the problem would simply go away, which it usually did. But today one indexer seemed offline and stayed reported as offline in the Monitoring Console for the next two hours or so, so I started to take notice. It turns out that it got stuck, out of available threads for processing requests, since:

# ss -ptn | grep CLOSE-WAIT | wc -l
7056

That's not a normal state for a server. All other indexers had a nice round zero of CLOSE-WAIT connections. These were all incoming connections to port 8089; they were not from forwarders. And now I'm perplexed, since CLOSE-WAIT is usually a sign of an application error. If it were simply TIME-WAIT, I'd say those were just some lost FIN/ACK packets and the situation would return to normal after a proper timeout. But CLOSE-WAIT? The patient is 8.1.4 on SLES 12SP3 (kernel 4.4.180-94.100-default).
Hi, What's the expected delay between creating a completely new datapoint using SignalFX API and the datapoint actually arriving (i.e. being visible in SignalFX)?
I am trying to create a role using the Splunk REST API (https://docs.splunk.com/Documentation/Splunk/8.2.5/RESTREF/RESTaccess#authorization.2Froles). The route works perfectly using Postman, but when I try to invoke the same route using IServiceClient via .NET Core, I run into "Cannot perform action "POST" without a target name to act on". Please let me know what other information I can provide; I have been searching Google and the Splunk documentation for the error with no luck.

using (var client = _serviceClientFactory.GetServiceClient()) // the client service factory returns a ServiceClient
{
    PostRoleRequest request = new PostRoleRequest
    {
        name = Const.DefaultRoleName // this is the only required param
    };
    var response = await client.PostAsync<PostRoleResponse>($"/services/authorization/roles?output_mode=json", request);
}

public class PostRoleRequest
{
    public string name { get; set; }
}

public class PostRoleResponse
{
    public Entry entry { get; set; }
}

public class Entry
{
    public string name { get; set; }
    public string id { get; set; }
    public DateTime updated { get; set; }
    public List<string> links { get; set; }
    public string author { get; set; }
    public Acl acl { get; set; }
    public Content content { get; set; }
}
Hi, I'm trying to set up some alerts using the Microsoft Teams Card add-on. So I installed the add-on, created a Teams channel, and defined an alert which should be sent via a webhook whenever it is triggered. The problem I noticed is that the alerts are sent when the conditions are met, but I can see only the title and the subtitle of the alert, not the actual message/body, which should be a custom text containing a log line.

This is how I defined the alert: [screenshot of the alert definition]

This is how I receive the alerts in Teams: [screenshot of the Teams card]

I can't figure out what I'm doing wrong. I should mention I'm very new to Splunk. Maybe the strcat function I use at the end of the query does not generate the appropriate output for the Teams add-on? If I run the alert query in the Search & Reporting app, I get good results.
Hi,
One app, say ABC, is deployed on 9 clients through the DS. We then went to one specific client and added some configuration in the app's local/inputs.conf file. My question: if anything happens to that app through the DS in the future, such as an upgrade, will the configuration I defined on that specific client be retained or replaced?