All Topics

Hi all, I have a lookup table with 5 columns that contain IPs. I want to create a search that excludes those IPs from my results. The issue is that I have 5 values and all of them should be matched against 1 single field. Example:

mycontiation
| search NOT [| inputlookup mylookup.csv | rename 1V as IP | fields IP]
| search NOT [| inputlookup mylookup.csv | rename 2v as IP | fields IP]
| search NOT [| inputlookup mylookup.csv | rename 3v as IP | fields IP]
| search NOT [| inputlookup mylookup.csv | rename 4v as IP | fields IP]
| search NOT [| inputlookup mylookup.csv | rename 5v as IP | fields IP]
| table IP

It's not working. Anyone? Thanks!
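
A possible simplification (an untested sketch, assuming the lookup columns really are named 1V, 2v, 3v, 4v and 5v as above): flatten all five columns into one IP field inside a single subsearch, so one NOT excludes the combined list:

mycontiation
| search NOT [| inputlookup mylookup.csv | eval IP=mvappend('1V','2v','3v','4v','5v') | mvexpand IP | dedup IP | fields IP]
| table IP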

We just installed ITSI, and we're also using an app for AD object collection (MS Windows AD Objects). I'm wondering if it's possible to configure ITSI to use some of the AD data that app has already collected? I was also hoping to avoid having two AD-related apps that might be pulling duplicate data from all Windows machines.

I'm trying to send my logs from Java to Splunk through log4j2. I'm doing the log4j2 configuration programmatically. I know this is not the correct way to do it, but I'm still doing it for learning purposes. After executing my Java code I see the console appender's logs in the console, but not the Splunk appender's logs in Splunk. I don't know what I'm missing here. I've tried Postman with the same URL and token, and in that case it works well.

My POM file:

<dependencies>
    <!-- https://mvnrepository.com/artifact/com.splunk/splunk-sdk-java -->
    <dependency>
        <groupId>com.splunk.logging</groupId>
        <artifactId>splunk-library-javalogging</artifactId>
        <version>1.11.4</version>
    </dependency>
    <dependency>
        <groupId>org.apache.logging.log4j</groupId>
        <artifactId>log4j-core</artifactId>
        <version>2.11.2</version>
    </dependency>
    <dependency>
        <groupId>org.apache.logging.log4j</groupId>
        <artifactId>log4j-api</artifactId>
        <version>2.11.2</version>
    </dependency>
    <dependency>
        <groupId>com.splunk</groupId>
        <artifactId>splunk</artifactId>
        <version>1.6.5.0</version>
    </dependency>
</dependencies>
<repositories>
    <repository>
        <id>splunk-artifactory</id>
        <name>Splunk Releases</name>
        <url>https://splunk.jfrog.io/splunk/ext-releases-local</url>
    </repository>
</repositories>

My Java code:

import java.util.*;
import org.apache.logging.log4j.Level;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.apache.logging.log4j.core.config.Configurator;
import org.apache.logging.log4j.core.config.builder.api.ConfigurationBuilder;
import org.apache.logging.log4j.core.config.builder.api.ConfigurationBuilderFactory;
import org.apache.logging.log4j.core.config.builder.impl.BuiltConfiguration;
import org.apache.logging.log4j.core.layout.PatternLayout;
import com.splunk.logging.*;
import java.io.*;

public class Main {

    private static final Logger log;

    static {
        configureLog4J();
        log = LogManager.getLogger(Main.class);
    }

    public static void configureLog4J() {
        ConfigurationBuilder<BuiltConfiguration> builder =
                ConfigurationBuilderFactory.newConfigurationBuilder();

        // configure a splunk appender
        builder.add(
            builder.newAppender("splunk", "SplunkHttp")
                .add(
                    builder.newLayout(PatternLayout.class.getSimpleName())
                        .addAttribute("pattern",
                            "%d{HH:mm:ss.SSS} [%t] %-5level %logger{36} - %msg%n"))
                .addAttribute("sourcetype", "log4j")
                .addAttribute("index", "main")
                .addAttribute("url", "http://localhost:8088/services/collector")
                .addAttribute("token", "XXX")
                .addAttribute("host", "java"));

        // configure console appender
        builder.add(
            builder.newAppender("console", "Console")
                .add(
                    builder.newLayout(PatternLayout.class.getSimpleName())
                        .addAttribute("pattern", "%logger{36}-%msg%n")));

        // configure the root logger
        builder.add(
            builder.newRootLogger(Level.INFO)
                .add(builder.newAppenderRef("splunk"))
                .add(builder.newAppenderRef("console")));

        // apply the configuration
        Configurator.initialize(builder.build());
    }

    public static void main(String ar[]) {
        System.out.println("START");
        log.info("ok");
        log.log(Level.INFO, "BY from log4j2");
        log.log(Level.ERROR, "BY Error from log4j2");
        System.out.println("END");
    }
}

I think I must be misunderstanding how dedup works. It seems to me that if you add fields to the dedup field list, you should never get fewer events returned.

| dedup fieldA

should get rid of all extra events with the same value of fieldA.

| dedup fieldA fieldB

should only get rid of those where BOTH fieldA and fieldB have duplicate values, which set theory suggests to me must be at least the same size as the result where we only remove duplicates of fieldA alone. But I'm getting far more results for

| dedup _time

than I do for

| dedup _time wma_set wma_filename

Any idea what's going on? For reference, here's the query:

index="main" host="designsafe01.tacc.utexas.edu" "designsafe.storage.community" "SimCenter/Datasets" (op=download OR op=preview OR op=copy OR op=agave_file_download OR op=agave_file_preview OR op=data_depot_copy)
| rex mode=sed "s/%20/ /g"
| rex mode=sed field=info "s/\'/\"/g"
| rex mode=sed field=info "s/\: u\"/: \"/g"
| eval thepath=case(in(op,"download","preview","agave_file_download","agave_file_preview"),json_extract(info,"filePath"), op="copy", json_extract(info,"path"), op="data_depot_copy", json_extract(info,"fromFilePath"))
| rex field=thepath "\/?SimCenter\/Datasets\/(?<wma_set>\w+)(?<wma_path>\/(.*\/)*)(?<wma_filename>[-\w\s\.]+)"
| rex field=wma_filename ".+\.(?<wma_extension>\w*)"
| dedup _time wma_set wma_filename
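
One behavior worth checking (a hedged note, not a confirmed diagnosis): by default dedup drops any event in which one of the listed fields is null, so adding wma_set and wma_filename can shrink the result set whenever those rex extractions fail to match. The keepempty option retains such events:

| dedup _time wma_set wma_filename keepempty=true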

I have a cluster which sometimes reports one of the indexers as being offline ("unable to distribute search to... bla bla bla"). Usually when I connect to such an indexer it is under heavy load, so I just assumed that jobs had piled up on this indexer for some reason and the problem would simply go away - which it usually did; I hadn't had the time to dig into it so far. But today one indexer seemed offline and was still reported as offline in the monitoring console for the next two hours or so, so I started to take notice. It turns out that it had run out of available threads for processing requests, since...

# ss -ptn | grep CLOSE-WAIT | wc -l
7056

That's not a normal state for a server. All other indexers had a nice round zero of CLOSE-WAIT connections. These were all incoming connections to port 8089; they were not from forwarders. And now I'm perplexed, since CLOSE-WAIT is usually a sign of an app error. If it were simply TIME-WAIT, I'd say those are just some lost FIN/ACK packets and the situation would return to normal after a proper timeout. But CLOSE-WAIT? The patient is 8.1.4 on SLES 12SP3 (kernel 4.4.180-94.100-default).

Hi, what's the expected delay between creating a completely new datapoint using the SignalFX API and the datapoint actually arriving (i.e. being visible in SignalFX)?

I am trying to create a role using the Splunk REST API (https://docs.splunk.com/Documentation/Splunk/8.2.5/RESTREF/RESTaccess#authorization.2Froles). The route works perfectly using Postman, but when I try to invoke the same route using IServiceClient via .NET Core, I run into "Cannot perform action "POST" without a target name to act on". Please let me know what other information I can provide. I have been searching Google and the Splunk documentation for the error with no luck.

using (var client = _serviceClientFactory.GetServiceClient()) // client service factory returns a ServiceClient
{
    PostRoleRequest request = new PostRoleRequest
    {
        name = Const.DefaultRoleName // this is the only required param
    };
    var response = await client.PostAsync<PostRoleResponse>($"/services/authorization/roles?output_mode=json", request);
}

public class PostRoleRequest
{
    public string name { get; set; }
}

public class PostRoleResponse
{
    public Entry entry { get; set; }
}

public class Entry
{
    public string name { get; set; }
    public string id { get; set; }
    public DateTime updated { get; set; }
    public List<string> links { get; set; }
    public string author { get; set; }
    public Acl acl { get; set; }
    public Content content { get; set; }
}
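
For comparison, the working Postman request reduces to a form-encoded POST with name in the request body; the error message usually indicates the endpoint never received a name POST parameter. A quick sanity check from a shell (a sketch; host, credentials and role name are placeholders):

curl -k -u admin:yourpassword "https://localhost:8089/services/authorization/roles?output_mode=json" -d name=mynewrole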

Hi, I'm trying to set up some alerts using the Microsoft Teams Card add-on. So I installed the add-on, created a Teams channel and defined an alert which should be sent via a webhook whenever it is triggered. The problem I noticed is that the alerts are sent when the conditions are met, but I can see only the title and the subtitle of the alert, not the actual message/body, which should be a custom text containing a log line.

This is how I defined the alert: [screenshot]

This is how I receive the alerts in Teams: [screenshot]

I can't figure out what I'm doing wrong. I mention I'm very new to Splunk. Maybe the strcat function I use at the end of the query does not generate the appropriate output for the Teams add-on? If I run the alert query in the "Search & Reporting" app I get good results: [screenshot]

Hi, one app, say ABC, is deployed on 9 clients through the DS. Now we went to one specific client and added some configuration in the app's local/inputs.conf file. My question is: if anything happens to that app through the DS in the future, like an upgrade, will the configuration I defined on that specific client be retained or replaced?
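
For reference (a sketch, not verified against this deployment): the deployment server normally replaces the entire app directory on update, wiping client-side local changes, but serverclass.conf supports an excludeFromUpdate setting that can preserve selected paths. The server class name here is hypothetical:

# serverclass.conf on the deployment server (sketch)
[serverClass:myServerClass:app:ABC]
excludeFromUpdate = $app_root$/local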

Hello, I have data that looks like this:

Month  Key   Value  Number
--------------------------
Jan    Key1    50     1
Feb    Key1    57     2
Mar    Key1    51     3
Jan    Key2   101     4
Feb    Key2   107     5
Mar    Key2    98     6
Jan    Key3   701     7
Feb    Key3   703     8
Mar    Key3   712     9

And I would like it to look like this:

Month  Key   Value  Number
--------------------------
Jan    Key1    50     1
Feb    Key1    57     1
Mar    Key1    51     1
Jan    Key2   101     2
Feb    Key2   107     2
Mar    Key2    98     2
Jan    Key3   701     3
Feb    Key3   703     3
Mar    Key3   712     3

Is it possible? Thanks.
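
If the rows really arrive grouped by Key as shown, one way to derive that numbering (a sketch, untested against this data) is a running distinct count with streamstats:

... | streamstats dc(Key) as Number
| table Month Key Value Number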

My customer has a multi-site cluster (site1, site2), and they are considering introducing a new site3. They are considering introducing SmartStore only on this site3 indexer cluster. We think it is necessary to put the SmartStore settings in the local configuration of the site3 indexers instead of distributing them from the cluster master, because SmartStore would apply only to the site3 indexers. Please advise on introducing SmartStore in a multi-site cluster.

Also, since RF and SF values must be the same when introducing SmartStore, we think it is necessary to change the current settings as follows.

[Current setting]

# Multi site settings
multisite = true
available_sites = site1, site2
site_replication_factor = origin:3, site2:2, total:5
site_search_factor = origin:2, site2:1, total:3

[After changing the settings]

# Multi site settings
multisite = true
available_sites = site1, site2, site3
site_replication_factor = origin:3, site2:2, site3:2, total:7
site_search_factor = origin:2, site2:1, site3:2, total:5

Please tell us anything else we should be careful about in this case.
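
For context, SmartStore is normally enabled per index in indexes.conf, along these lines (a sketch with hypothetical volume and bucket names, not a verified multi-site design):

# indexes.conf (sketch)
[volume:remote_store]
storageType = remote
path = s3://my-smartstore-bucket/indexes

[my_index]
remotePath = volume:remote_store/$_index_name

On an indexer cluster such settings are usually pushed from the cluster master so that all peers share them, which is worth weighing against a site3-local approach.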

Hello, I timechart a lot of searches in a table and it works perfectly; here is the result: [screenshot]

But for the piece of code below, I am trying to find a way to calculate a percentage between sign and eue2 and to timechart the results like above, instead of having a separate result for the sign field and for the eue2 field:

| appendcols [ search index=toto | timechart span=1h dc(sign) as sign ]
| append [ search index=toto | timechart span=1h dc(eue2) as eue2 ]

I need something like this:

| eval perc=(sign/eue2)
| timechart values(perc) span=1h

Could you help please?
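
A possible single-pass alternative (a sketch, assuming sign and eue2 both live in the index=toto events): compute both distinct counts in one timechart, then derive the percentage per time bucket:

index=toto
| timechart span=1h dc(sign) as sign, dc(eue2) as eue2
| eval perc=round(sign/eue2*100, 2)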

Hi all, I have logs like below in Splunk:

log1: Valid from: Mon Oct 11 05:12:56 EDT 2021 until: Wed Oct 11 05:12:56 EDT 2023
log2: Serial number: 6900015f06a7454c0728c2744b000000015f06
log3: Owner: CN=sd-72m2-rt6w.nam.nsroot.net, OU=166139, O=Citigroup Inc., L=warren, ST=NJ, C=US
log4: /apps/gcafews_SG/jboss-eap-7.3/ssl/server.jks
log5: /apps/gcafewshlc_SG/jboss-eap-7.3/ssl/server.jks

and so on...

The aim is to get the validity of each Instance and CN, so I created the below query to extract the required fields and to find the validity in days:

.....
| rex field=_raw "\/apps\/(?P<Instance>\w+)\/"
| rex field=_raw "CN\=(?P<CN>[^\,]+)\,"
| rex field=_raw "Serial\snumber\:(?P<Serial_Number>[^\,]+)"
| rex field=_raw "OU\=(?P<CSI_ID>[^\,]+)\,"
| rex field=_raw "until\:\s(?P<Valid_Until>\w+\s\w+\s(\s{0,1})\d+\s\d+\:\d+\:\d+\s\w+\s\d+)"
| eval From = _time
| eval Until = strptime(Valid_Until, "%a %b %d %H:%M:%S %Z %Y")
| eval dur = Until - From
| eval Validity = round(dur/(60*60*24))

Now, to represent all these data in a tabular view, I used:

| table Instance,CN,Serial_Number,CSI_ID,Valid_Until,Validity

But since each value comes from a different event, it gave me the table with each value on its own row:

Instance       CN                           Serial_Number                           CSI_ID  Valid_Until                   Validity
                                                                                            Wed Oct 11 05:12:56 EDT 2023  556
                                            6900015f06a7454c0728c2744b000000015f06
               sd-72m2-rt6w.nam.nsroot.net                                          166139
gcafews_SG
gcafewshlc_SG

The requirement is to create the table with the values combined into single rows, as below:

Instance       CN                           Serial_Number                           CSI_ID  Valid_Until                   Validity
gcafews_SG     sd-72m2-rt6w.nam.nsroot.net  6900015f06a7454c0728c2744b000000015f06  166139  Wed Oct 11 05:12:56 EDT 2023  556
gcafewshlc_SG  sd-72m2-rt6w.nam.nsroot.net  6900015f06a7454c0728c2744b000000015f06  166139  Wed Oct 11 05:12:56 EDT 2023  556

Please help modify the query to get the table in the desired manner.
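
One approach worth trying (a sketch, assuming that in search order the certificate lines and the .jks path lines of the same keytool dump appear near each other): extract the fields as above, use filldown to copy the certificate fields onto the path events, then keep only the events that produced an Instance:

... (rex/eval pipeline from above)
| filldown CN Serial_Number CSI_ID Valid_Until Validity
| where isnotnull(Instance)
| table Instance CN Serial_Number CSI_ID Valid_Until Validity

filldown copies values downward in the current event order, so a | reverse or | sort _time may be needed first.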

I created a Splunk Enterprise setup on an AWS machine. I can access it via http://ipv4_address_by_aws:8000. Now I want to send data from the zeek index into Elastic. Elasticsearch asks for the URL of the Splunk Enterprise server, which I hope is http://ipv4_address_by_aws:8000. It also asks for a REST API username and password, which I hope are the Splunk username and password I used during installation. I can see data in Splunk search using this command:

index="zeek" source="/opt/zeek/logs/current/dns.log"

but this data is not present in Elastic after I save all these settings; I get a 404 error in almost all logs. How do I connect Splunk to Elastic? Also, are the REST URL, username and password to be filled in as I have described above, or is some other setting needed?
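
A hedged pointer that may explain the 404s: Splunk's REST API listens on the management port (8089 by default, over HTTPS), not on the web UI port 8000. One way to test the endpoint and credentials from a shell (a sketch; adjust host and password):

curl -k -u admin:yourpassword https://ipv4_address_by_aws:8089/services/search/jobs/export -d output_mode=json --data-urlencode search='search index="zeek" source="/opt/zeek/logs/current/dns.log"'

If that returns events, the same https://...:8089 base URL and credentials are most likely what the Elastic side expects.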

We want to get the number of successful logins, multiple successful logins, and multi-fail logins, and also the number of hqids which have not logged in, i.e. (total number of hqids - sum(successful logins + multiple successful logins + multi-fail)). We have written the query below, and we are able to get the numbers of successful logins, multi-success logins and multi-fail logins, but I am not sure how to get the number for the not-logged-in case. Could anyone please help me here?

base_search query
| eval hqid = substr(requestURI,23,10)
| table hqid httpStatus
| eval status-success = if(httpStatus="200",1,0)
| eval status-fail = if(httpStatus != "200",1,0)
| stats sum(status-success) as status-success, sum(status-fail) as status-fail by hqid
| eval status = case(('status-fail'=0 AND 'status-success'>0), "successful-logins", ('status-fail'>0 AND 'status-success'>0), "multi-success", ('status-fail'>0 AND 'status-success'=0), "multi-fail", ('status-fail'>0), "fail", 1=1, "Other")
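
A sketch of one way to count the not-logged-in hqids, assuming the full population of hqids is available in a lookup (all_hqids.csv here is hypothetical): append the full list, re-aggregate so absent hqids get zero counts, then classify:

base_search query
| eval hqid = substr(requestURI,23,10)
| stats count(eval(httpStatus="200")) as success, count(eval(httpStatus!="200")) as fail by hqid
| append [| inputlookup all_hqids.csv | fields hqid]
| stats sum(success) as success, sum(fail) as fail by hqid
| fillnull value=0 success fail
| eval status = case(fail=0 AND success>0, "successful-logins", fail>0 AND success>0, "multi-success", fail>0 AND success=0, "multi-fail", 1=1, "not-logged-in")
| stats count by status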

Hi everyone, I can't log in to my Splunk account because I have a space at the beginning of my password. We log in to Splunk via LDAP. Does Splunk have a problem with that, or is that a bug? Thank you very much for any advice.

Splunk UF is not sending logs to Splunk. splunkd.log is full of errors and warnings like the ones below. The telnet connection to the DS and the indexers succeeds on 8089 and 9997 respectively. It is a Windows server and the service is up and running.

ERROR TcpOutputFd - Read error. An existing connection was forcibly closed by the remote host.
WARN TcpOutputProc - Applying quarantine to ip=**888* port=9997 _numberOfFailures=2
03-28-2022 04:50:44.070 +1100 ERROR TcpOutputFd - Read error. An existing connection was forcibly closed by the remote host.
03-28-2022 04:50:44.070 +1100 WARN TcpOutputProc - Applying quarantine to ip=*8888* port=9997 _numberOfFailures=2

Is it possible to search based on the timestamp from a column rather than the _time of ingestion? I'm using DB Connect, not "Add Data". I'll be using this in a dashboard. I'm very new to Splunk.
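
Two hedged options, depending on where the change fits: the DB Connect input itself can be configured so the database timestamp column becomes _time (the timestamp/rising column settings in the input definition), or _time can be overridden at search time, e.g. (a sketch; my_ts_column and its format are hypothetical):

index=my_db_index
| eval _time = strptime(my_ts_column, "%Y-%m-%d %H:%M:%S")
| where _time >= relative_time(now(), "-24h")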

Hi, we currently have one of our on-call schedules set to office hours only (weekdays 9-5). However, we've noticed that we don't get notified about alerts that are raised over the weekend. Our expectation was that, because no one is there to acknowledge them, these alerts would still page someone once they come on the roster at 9am Monday, but apparently that is not the case (the alert is in the list of alerts, but it doesn't page anyone). Is there a way to ensure that the person rostered on at 9am Monday will be notified of any alerts that were triggered over the preceding weekend (the period when no one was on call)? Thanks

Hello, I would like to know if it's possible to reuse the result of the field Total in another search?

| stats dc(titi) as Total

Thanks.
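
One common pattern (a sketch with placeholder index and field names): compute Total in a subsearch, attach it to the outer results with appendcols, then use it in an eval:

index=my_index
| stats count as events
| appendcols [ search index=my_index | stats dc(titi) as Total ]
| eval pct = round(events / Total * 100, 2)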