All Topics

Hey, does anyone know a best practice or clever way of removing orphaned Knowledge Objects in a Search Head cluster when it is already too late for reassignment? For each orphaned object we are doing manual work: checking whether the AD account still exists, emailing the user to ask if they still need Splunk, and so on. For non-existing accounts, we delete the /opt/splunk/etc/users/<user_id> directory from each SH separately (there are 4 SHs in our cluster), but we are looking for a smarter solution. Unfortunately, in our case there is no way for users to inform us that they are leaving the company, so we cannot react in advance and avoid orphaned KOs altogether... Greetings, Justyna

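A minimal sketch for finding the orphans in one pass, assuming all SHC members share the same authentication backend: list every knowledge object over REST and keep only those whose owner no longer resolves to a valid Splunk user (test on one member first; /servicesNS/-/-/directory returns all object types):

    | rest /servicesNS/-/-/directory count=0 splunk_server=local
    | search eai:acl.owner!=nobody
    | table title eai:acl.app eai:acl.owner eai:type
    | search NOT [| rest /services/authentication/users splunk_server=local
        | fields title | rename title as eai:acl.owner]

Objects that survive the NOT filter are owned by accounts Splunk no longer knows about, which at least replaces the per-user AD checks with a single report.
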
Hello, has anyone managed to collect Windows logs other than the usual Application, System, Security, and Setup channels? I am being asked whether we can collect Microsoft-Windows-FailoverClustering event ID 1641. If anyone has an inputs.conf for something like that, I would appreciate it.

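A hedged inputs.conf sketch: any channel that wevtutil el lists on the host can be used in a WinEventLog stanza, so something like the following should work on the forwarder (the exact channel name, Microsoft-Windows-FailoverClustering/Operational, and the target index are assumptions to verify on your cluster nodes):

    [WinEventLog://Microsoft-Windows-FailoverClustering/Operational]
    disabled = 0
    whitelist = 1641
    index = windows
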
Hi all... I hope you can help me with two questions.

1) I am trying to create a query that finds whether a target user set to "password never expires" is a service user, using ldapsearch. The main search:

    index=microsoft-windows-dc EventID=4738 NewUacValue=0x210

I am trying to run this ldapsearch against the results to remove users with UserTypeName = service:

    | ldapsearch domain=default search="(sAMAccountName=user)" attrs="sAMAccountName,displayName,sn,UserTypeName"

How do I run the ldapsearch on all users from the results of the first search?

2) ldapsearch can be run only by admin; how do I set permissions so other roles can run ldapsearch?

Thanks...

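For question 1, a hedged sketch: alongside the generating ldapsearch command, SA-ldapsearch ships ldapfilter, which runs an LDAP query per event and substitutes event field values using $field$ syntax. Assuming the 4738 events carry the account name in a field called user:

    index=microsoft-windows-dc EventID=4738 NewUacValue=0x210
    | ldapfilter domain=default search="(sAMAccountName=$user$)" attrs="sAMAccountName,displayName,sn,UserTypeName"
    | search NOT UserTypeName="service"
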
Hi, I need to plot the time difference between consecutive events, by sourcetype, over the last 7 days. I'm using this search, but it's slow for a dashboard:

    index=myindex sourcetype IN (sourcetype1, sourcetype2, sourcetype3)
    | streamstats window=2 global=f range(_time) as delta by sourcetype
    | timechart max(delta) as "delta [sec]" by sourcetype

Do you have any suggestions for a more efficient search? Thank you, Marta

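One hedged optimization, assuming one-second resolution is good enough for the deltas: let tstats pull bucketed counts straight from the index metadata and run streamstats over that much smaller result set instead of over every raw event:

    | tstats count where index=myindex sourcetype IN (sourcetype1, sourcetype2, sourcetype3) by _time span=1s, sourcetype
    | sort 0 sourcetype _time
    | streamstats window=2 global=f range(_time) as delta by sourcetype
    | timechart max(delta) as "delta [sec]" by sourcetype
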
I have installed the Splunk Add-on for Microsoft Windows (Splunk_TA_windows), but I am still unable to see the tag and tag::eventtype fields when searching index=windows, although all the other fields are populated. As a result, I am unable to use "| savedsearch DA-ITSI-OS-OS_Hosts_Search" to import the entities, since it requires the tag field. Please help me here.

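A quick diagnostic sketch, on the assumption that the add-on's eventtypes are either not matching your events or not shared globally (tags only appear once an eventtype matches): first check whether any eventtypes resolve at all, then review the sharing on the TA's eventtype definitions:

    index=windows | head 1000 | stats count by eventtype

    | rest /servicesNS/-/-/saved/eventtypes splunk_server=local
    | search eai:acl.app=Splunk_TA_windows
    | table title eai:acl.sharing search

If the second search shows sharing=app, promoting the eventtypes (and their tags) to global sharing is a common fix, since ITSI searches run in a different app context.
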
Hello Splunkers, I am trying to change the "last updated" time on the IT Essentials Work dashboard (attached screenshot). The timezone shown is not coming from the servers, and it is not the timezone configured in Splunk either. I am trying to change this to GMT+4.

Hello, I can't find any information about integrating Ivanti Neurons data into Splunk. Does anyone have a solution for this?

Hi All, good day! I have two indexes with different sourcetypes and different URIs.

Index 1 (status field: httpstatuscode):
1. For one URI, only 200, 403, and 422 count as success; everything else is a failure.
2. For the remaining URIs, 200 is success and everything else is a failure.

Index 2 (status field: Responsecode): for its URI, 200 is success.

How do I get the success percentage, using timechart, by country? Please help with this.

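A hedged sketch of one way to do it: normalize the two status fields into one, score each event, and let timechart average the scores (the index names, the uri and country field names, and special_uri are placeholders for your environment):

    (index=index1) OR (index=index2)
    | eval status=coalesce(httpstatuscode, Responsecode)
    | eval success=case(
        index="index1" AND uri="special_uri" AND in(status, "200", "403", "422"), 100,
        index="index1" AND uri!="special_uri" AND status="200", 100,
        index="index2" AND status="200", 100,
        true(), 0)
    | timechart span=1h avg(success) by country

Scoring success as 100/0 makes avg() return the success percentage directly.
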
Using Splunk Enterprise 9. I'm trying to populate a Dashboard Studio dropdown input from query results. I was testing with a simple query (copied from the Dashboard Studio examples) as follows:

    | inputlookup firewall_example.csv | stats count by host

This works fine, and the dropdown gets populated with the hosts. However, I'd also expected the following to work:

    | inputlookup firewall_example.csv | stats values(host)

but it doesn't, and no dynamic entries appear in the dropdown. So my understanding of the query types that can be used with dropdown inputs is incomplete! Can someone point me in the right direction?

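A hedged explanation plus a sketch: the dropdown's dynamic options read a named column from the data source's results, and stats values(host) returns a single row whose column is literally named values(host) and holds one multivalue cell. Naming the field and expanding the multivalue into rows should make it behave like the count by host version:

    | inputlookup firewall_example.csv
    | stats values(host) as host
    | mvexpand host

mvexpand turns the one multivalue cell into one row per host, which is the shape the dropdown expects.
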
Hi Team,
1. We have a 50GB DEV/TEST license file. After configuring this license, can we create knowledge objects such as dashboards, reports, and alerts?
2. Are there any restrictions after configuring the DEV/TEST license that would prevent us from performing these tasks, compared to an Enterprise license?
3. I want to configure this on a single server that serves as heavy forwarder, search head, and indexer. This makes sense, right?

More specifically: when the incoming events are already in JSON format, just not the HEC-specific JSON structure? In my case, each event is represented by a JSON object with a "flat" structure (no nesting): just a collection of sibling key/value pairs. This "generic" JSON can be ingested by numerous analytics platforms with minimal configuration.

I've configured a sourcetype in props.conf and transforms.conf to ingest events in this JSON structure, including timestamp recognition and per-event sourcetype mapping (that is, dynamically mapping each event to a more specific sourcetype based on two values in the event). I use that sourcetype configuration for the following Splunk inputs:

- TCP
- HEC raw endpoint (services/collector/raw)

I could modify this JSON to meet the HEC-specific structure required by the HEC JSON endpoint (services/collector). I understand the HEC-specific structure and the changes that I need to make. However, before I do that, I thought I'd ask: what are the advantages of using the HEC JSON endpoint versus the HEC raw endpoint?

I anticipate that answers will make the point that Splunk ingestion is more streamlined, because you don't need to configure, for example:

- Timestamp recognition: you specify time as a metadata key
- Per-event sourcetype mapping: you can specify sourcetype as a metadata key

However, from my perspective, this simply shifts compute costs upstream. That is, I would have to perform additional upstream processing to modify the existing "generic" JSON. Given this context, what do I gain by using the HEC JSON endpoint? I understand that HEC indexer acknowledgment is available via both endpoints. Am I missing something?

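For readers weighing the two, a sketch of the same event sent both ways (the endpoint paths are the real ones; the payload fields are illustrative):

    # raw endpoint: the body is the event itself; props.conf/transforms.conf do the parsing
    POST /services/collector/raw HTTP/1.1
    Authorization: Splunk <hec-token>

    {"user": "alice", "action": "login", "ts": "2023-08-01T12:00:00Z"}

    # event endpoint: the event is wrapped in the HEC envelope with metadata keys
    POST /services/collector HTTP/1.1
    Authorization: Splunk <hec-token>

    {"time": 1690891200, "sourcetype": "myapp:login", "event": {"user": "alice", "action": "login"}}
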
I have a log that documents call results for phone calls as a CSV event record. There is a field in the event record for the phone number. The event record may contain a list of sub-events that I want to track. If the CSV event record contains a string "MOCK,?,?,1", that is counted as a BAD call (the trailing "1" is what determines it's a bad call; we don't care what the ? numbers are). If the event record has any "MOCK,?,?,0" event, but no "MOCK,?,?,1", it is a GOOD call. I would like a report showing the number of calls to every phone number and the percentage of BAD calls.

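A minimal sketch, assuming the phone number is already extracted as phone_number and the ? placeholders are numeric values inside the raw CSV text (the index and field names are placeholders):

    index=call_logs
    | eval bad=if(match(_raw, "MOCK,\d+,\d+,1"), 1, 0)
    | stats count as total_calls, sum(bad) as bad_calls by phone_number
    | eval bad_pct=round(bad_calls / total_calls * 100, 1)
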
Hi, can anyone help me download my exam scorecard with my obtained marks? I recently passed my Splunk Power User exam.

Consider these three searches that end with timechart. The second one skews the time range all the way to the year 2038! How do I fix that?

1. An index search.

2. Changed to the equivalent tstats:

    | tstats count where index=_internal earliest=-7d by _time span=1d
    | timechart span=1d sum(count)

Note how the timespan magically extends all the way to 2038?

3. Do not use earliest with tstats; use the time selector on the search screen:

    | tstats count where index=_internal ```earliest=-7d``` by _time span=1d
    | timechart span=1d sum(count)

I have specific reasons to set earliest with a specific token in a dashboard, so the search time selector is not an option.

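One workaround sketch, on the assumption that the 2038 boundary comes from timechart padding buckets out to the edge of the search's effective (all-time) range rather than from the data itself: clamp _time to the intended window after tstats, keeping the token-driven earliest inside the where clause:

    | tstats count where index=_internal earliest=-7d by _time span=1d
    | where _time >= relative_time(now(), "-7d") AND _time <= now()
    | timechart span=1d sum(count)
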
I have a lookup file called prefixes.csv, and it has about 5 headers:

    prefix,location,description,owner
    "1.0.0.0/8",usa,"corporate things","joe schmoe"

I want to be able to reference this file so that, for example, when looking at firewall logs, I can ignore (or, alternatively, specifically look for) events whose src_ip falls into these ranges. For example, something like:

    index=firewall src_ip=* | search NOT [ | inputlookup prefixes.csv | fields prefix | rename prefix as src_ip ]

I know I can do something like this if I expand every range into single entries per IP, but is there a way to do this with CIDR? I have tried the lookup definition route, but I think I am missing or misunderstanding something there. Thanks in advance.

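A hedged sketch of the lookup-definition route, which is the standard way to get CIDR matching: set the match type on the prefix column in the definition (Advanced options in the UI, or transforms.conf), then run the lookup against src_ip:

    # transforms.conf
    [prefixes_lookup]
    filename = prefixes.csv
    match_type = CIDR(prefix)

    # exclude events inside the listed ranges; use isnotnull() instead to keep only those events
    index=firewall src_ip=*
    | lookup prefixes_lookup prefix as src_ip OUTPUT location
    | where isnull(location)
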
Hello. I am trying to take advantage of the free courses with Splunk, but I am unable to view the videos. I've tried turning the VPN off, turning off extensions, clearing the cache, and using incognito. Nothing works. Thanks in advance for the responses and for helping out.

[1pm PT / 4pm ET] - Register here and ask questions below. This thread is for the Community Office Hours session on Getting Data In (GDI): Forwarders & Edge Processor on Wed, August 23, 2023 at 1pm PT / 4pm ET.

This is your opportunity to ask questions related to getting data into the Splunk Platform using forwarders or Splunk Edge Processor, including:

- Universal forwarder or heavy forwarder setup and troubleshooting
- Forwarder connectivity issues, blocked queues, and tuning
- Using Edge Processor
- Forwarders vs. Edge Processor vs. Ingest Actions
- Anything else you'd like to learn!

Please submit your questions at registration or as comments below. You can also head to the #office-hours user Slack channel to ask questions (request access here).

Pre-submitted questions will be prioritized. After that, we will go in order of the questions posted below, then will open the floor up to live Q&A with meeting participants. If there's a quick answer available, we'll post it as a direct reply.

Look forward to connecting!

We have some services, each of which produces logs. These logs are aggregated and stored in a MinIO bucket (not AWS! just an on-prem MinIO deployment). I want to integrate Splunk with MinIO so that Splunk pulls these logs from the bucket (rather than MinIO pushing them).

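One hedged approach, since MinIO speaks the S3 API: continuously mirror new objects to a staging directory with the MinIO client and let a standard file monitor pick them up (the alias, bucket, and paths are placeholders):

    # on a forwarder host: sync the bucket to a local staging directory
    mc alias set myminio https://minio.example.com ACCESS_KEY SECRET_KEY
    mc mirror --watch myminio/logs-bucket /opt/splunk_staging/minio-logs

    # inputs.conf on the same forwarder
    [monitor:///opt/splunk_staging/minio-logs]
    index = minio_logs
    sourcetype = my_service_logs

The Splunk Add-on for AWS's Generic S3 input may also work against S3-compatible endpoints, but verify custom-endpoint support in your add-on version before relying on it.
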
I can't see my logs. Stack: Spring Boot, Maven. Here is my pom file:

    <?xml version="1.0" encoding="UTF-8"?>
    <project xmlns="http://maven.apache.org/POM/4.0.0"
             xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
             xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
        <modelVersion>4.0.0</modelVersion>
        <parent>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-parent</artifactId>
            <version>3.0.2</version>
            <relativePath/> <!-- lookup parent from repository -->
        </parent>
        <groupId>com.example</groupId>
        <artifactId>SpringBootCRUDWithSplunkIntegration</artifactId>
        <version>0.0.1-SNAPSHOT</version>
        <name>SpringBootCRUDWithSplunkIntegration</name>
        <description>SpringBootCRUDWithSplunkIntegration</description>
        <properties>
            <java.version>17</java.version>
        </properties>
        <repositories>
            <repository>
                <id>splunk-artifactory</id>
                <name>Splunk Releases</name>
                <url>https://splunk.jfrog.io/splunk/ext-releases-local</url>
            </repository>
        </repositories>
        <dependencies>
            <dependency>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-starter-data-jpa</artifactId>
            </dependency>
            <dependency>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-starter-web</artifactId>
                <exclusions>
                    <exclusion>
                        <groupId>org.springframework.boot</groupId>
                        <artifactId>spring-boot-starter-logging</artifactId>
                    </exclusion>
                </exclusions>
            </dependency>
            <!-- https://mvnrepository.com/artifact/org.springframework.boot/spring-boot-starter-log4j -->
            <dependency>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-starter-log4j2</artifactId>
            </dependency>
            <!-- https://mvnrepository.com/artifact/com.splunk.logging/splunk-library-javalogging -->
            <dependency>
                <groupId>com.splunk.logging</groupId>
                <artifactId>splunk-library-javalogging</artifactId>
                <version>1.8.0</version>
                <scope>runtime</scope>
            </dependency>
            <dependency>
                <groupId>org.postgresql</groupId>
                <artifactId>postgresql</artifactId>
                <scope>runtime</scope>
            </dependency>
            <!-- https://mvnrepository.com/artifact/org.umlg/sqlg-postgres-dialect -->
            <dependency>
                <groupId>org.umlg</groupId>
                <artifactId>sqlg-postgres-dialect</artifactId>
                <version>1.3.2</version>
            </dependency>
            <dependency>
                <groupId>org.projectlombok</groupId>
                <artifactId>lombok</artifactId>
                <optional>true</optional>
            </dependency>
            <dependency>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-starter-test</artifactId>
                <scope>test</scope>
            </dependency>
            <dependency>
                <groupId>log4j</groupId>
                <artifactId>log4j</artifactId>
                <version>1.2.12</version>
            </dependency>
        </dependencies>
        <build>
            <plugins>
                <plugin>
                    <groupId>org.springframework.boot</groupId>
                    <artifactId>spring-boot-maven-plugin</artifactId>
                    <configuration>
                        <excludes>
                            <exclude>
                                <groupId>org.projectlombok</groupId>
                                <artifactId>lombok</artifactId>
                            </exclude>
                        </excludes>
                    </configuration>
                </plugin>
            </plugins>
        </build>
    </project>

Here is my log4j.xml:

    <?xml version="1.0" encoding="UTF-8"?>
    <Configuration>
        <Appenders>
            <Console name="console" target="SYSTEM_OUT">
                <PatternLayout pattern="%style{%d{ISO8601}} %highlight{%-5level }[%style{%t}{bright,blue}] %style{%C{10}}{bright,yellow}: %msg%n%throwable" />
            </Console>
            <SplunkHttp name="splunkhttp"
                        url="http://127.0.0.1:8088"
                        token="c7d19018-8e86-4c22-ace8-00903bb92845"
                        host="localhost"
                        index="spring_dev"
                        type="raw"
                        source="source name"
                        sourcetype="log4j"
                        messageFormat="text"
                        disableCertificateValidation="true">
                <PatternLayout pattern="%m" />
            </SplunkHttp>
        </Appenders>
        <Loggers>
            <Root level="info">
                <AppenderRef ref="console" />
                <AppenderRef ref="splunkhttp" />
            </Root>
        </Loggers>
    </Configuration>

Can anybody help with that issue? I've tried literally everything.

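Two things stand out, offered as hedged guesses rather than a confirmed fix: with spring-boot-starter-log4j2, Log4j 2 is in charge, and Log4j 2 auto-discovers its configuration as log4j2.xml on the classpath; log4j.xml is the legacy Log4j 1.x file name, and the explicit log4j 1.2.12 dependency drags that legacy stack into the project. A sketch of the changes:

    <!-- 1. rename src/main/resources/log4j.xml to src/main/resources/log4j2.xml -->

    <!-- 2. remove the Log4j 1.x artifact from pom.xml; it is not used by the
            SplunkHttp appender and can conflict with spring-boot-starter-log4j2
    <dependency>
        <groupId>log4j</groupId>
        <artifactId>log4j</artifactId>
        <version>1.2.12</version>
    </dependency>
    -->

If events still do not arrive after that, a quick curl to the HEC endpoint with the same token will confirm whether the collector side is healthy.
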
We are trying to upgrade our Windows Server 2019-based Splunk from version 9.0.0 to 9.1.0.1, and it is randomly failing on half of our 12 lab servers. The error below is from one of our non-clustered search heads; others, which are identical, installed fine. We got the same error on our indexer Cluster Master.

    Splunk Enterprise Setup Wizard ended prematurely
    Splunk Enterprise Setup Wizard ended prematurely because of an error. Your system has not been modified. To install this program at a later time, run Setup Wizard again. Click the Finish button to exit the Setup Wizard.

    Setup cannot copy the following files:
    Splknetdrv.sys
    SplunkMonitorNoHandleDrv.sys
    SplunkDrv.sys