All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Unable to initialize modular input "snmp" defined in the app "snmp_ta": Introspecting scheme=snmp: script running failed (exited with code 1). I am getting this error; how can I solve it?
Hello, this TA uses only the deprecated legacy key-based API access. Based on your last CS communication, the end date for this kind of access is October 29th, 2020. Are you planning to update this TA accordingly, or is it going to be abandoned and removed from Splunkbase? Thank you for your support.
What is the use of an SNMP master agent? How do I configure it to forward data from a remote server to the Splunk instance? Is a UF necessary?
I need help finding a query to get the duration of the alert with respect to the current time. The code I am currently using:

index=opennms "uei.opennms.org/nodes/nodeUp" OR "uei.opennms.org/nodes/nodeDown" AND ("*WANR*" OR "*LAN*")
| rex field=eventuei "uei.opennms.org/nodes/node(?<bgpPeerState>.+)"
| stats max(_time) as Time_IST latest(bgpPeerState) as Status by nodelabel
| where Status="Down"
| lookup ONMS_nodes.csv nodelabel OUTPUT nodelabel
| lookup ONMS_nodes.csv nodelabel OUTPUT sitecode
| lookup GRDB_site_list.csv "Site Code" as sitecode OUTPUT Region, Country, "Precious Metal" as Metal, "Site Classification" as Class
| eval Region=mvindex(Region,0)
| eval Country=mvindex(Country,0)
| eval Metal=mvindex(Metal,0)
| eval sitecode=if(isnull(sitecode),"Unknown", sitecode)
| eval Country=if(isnull(Country),"Unknown", Country)
| eval Metal=if(isnull(Metal),"Unknown", Metal)
| eval Class=if(isnull(Class),"Unknown", Class)
| eval Region=if(isnull(Region),"Unknown", Region)
| rename nodelabel as "Hostname", Class as Classification, sitecode as "Site Code"
| fieldformat Time_IST=strftime(Time_IST+10.5*3600,"%Y-%m-%d %l:%M:%S %p")
| sort - Time_IST
| table Hostname Status Classification "Site Code" Time_IST

Table of output below:

Hostname   Status   Classification   Site Code   Time_IST
GBABO-1    Down     Silver           ABO         2020-05-05 1:33:37 PM
GBABO-2    Down     Silver           ABO         2020-05-05 1:33:15 PM

I am looking for a query to get the duration of the event. The table I am expecting:

Hostname   Status   Classification   Site Code   Time_IST   Duration

My Splunk time zone is CST.
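Since _time is stored as epoch seconds (and so is time-zone independent), one possible approach, sketched against the query above, is to subtract the event time from now() and render the difference with tostring(..., "duration"):

```
index=opennms "uei.opennms.org/nodes/nodeUp" OR "uei.opennms.org/nodes/nodeDown" AND ("*WANR*" OR "*LAN*")
| rex field=eventuei "uei.opennms.org/nodes/node(?<bgpPeerState>.+)"
| stats max(_time) as Time_IST latest(bgpPeerState) as Status by nodelabel
| where Status="Down"
| eval Duration = tostring(now() - Time_IST, "duration")
| fieldformat Time_IST=strftime(Time_IST+10.5*3600,"%Y-%m-%d %l:%M:%S %p")
| table nodelabel Status Time_IST Duration
```

This is a minimal sketch (the lookup and rename steps are elided); tostring with the "duration" option formats the elapsed seconds as HH:MM:SS, and because the arithmetic happens on epoch values before fieldformat, the CST/IST display offsets do not affect the result.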
Hi all, I have CTI data that comes into Splunk and I'd like to correlate it for matches against other indexes. The problem is that the CTI data can range back many years, but I may only want to search data from the network index for the last 24 hours. Basically, I'm looking for where an index=network src value equals an index=cti indicator value. The example below works, but is very slow:

(index=network src=* earliest=-24h latest=now) OR (index=cti indicator=* earliest=1 latest=now)
| fields indicator index sourcetype src
| eval splunk_indicator=coalesce(src,indicator)
| stats dc(sourcetype) AS distinct_sourcetype values(*) AS * by splunk_indicator
| where distinct_sourcetype > 1

So I created a data model for the CTI data. But I need a way to combine the data model search with a "normal" search, so either | tstats or | datamodel, and I can't seem to find a way to do this where there is no common field. Is there a way I can either:
- combine a data model search with a normal search
- search the CTI data as a blob rather than by time (so that I can set my index=network window to 24 hours and search for matches across all CTI data regardless of the CTI time)
- more efficiently search two indexes with different time frames for matches
- find a better way to correlate one index against another with different time constraints

Thanks for any input or direction.
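One hedged alternative (lookup and field names below are illustrative): materialize the CTI indicators into a lookup file with a scheduled all-time search, then match the 24-hour network window against that lookup. A lookup has no time range, which sidesteps the mismatched-time-window problem entirely. The scheduled search might look like:

```
index=cti indicator=* earliest=1
| stats latest(_time) as last_seen by indicator
| outputlookup cti_indicators.csv
```

and the alert search over the last 24 hours then becomes:

```
index=network src=* earliest=-24h
| lookup cti_indicators.csv indicator AS src OUTPUT indicator AS matched_indicator
| where isnotnull(matched_indicator)
```

This trades freshness (the lookup is only as current as its schedule) for a much cheaper per-search cost than scanning the full CTI index every time.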
Hi all, I have a search query that should be excluded from running on a bank holiday. I have created a holidays.csv lookup table with Date,Description entries as below:

Date,Description
05/08/2020,Early May bank holiday
05/25/2020,Spring bank holiday
08/31/2020,Summer bank holiday

I'm struggling to get my search to exclude the bank holidays from this list:

| inputlookup holidays.csv
| eval holiday = strftime(now(),"%m/%d/%Y")
| where Date==holiday

If this condition matches, my search query should not run. I tried to test with today's date, but the results are still being returned:

search query
| search NOT [ | inputlookup holidays.csv | eval holiday = strftime(now(),"%m/%d/%Y") | where Date==holiday ]

Any help is much appreciated.
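A hedged guess at why the NOT subsearch above still returns results: the subsearch emits Date/Description/holiday field-value pairs that never match any event in the outer search, so NOT (...) is true for every event. One sketch of an alternative (index and base search are placeholders) is to have the subsearch return a field literally named search, whose value is spliced into the outer query; on a holiday it injects a term that matches nothing:

```
index=my_index sourcetype=my_sourcetype
    [ | inputlookup holidays.csv
      | where Date == strftime(now(), "%m/%d/%Y")
      | stats count
      | eval search = if(count > 0, "index=__no_such_index__", "*")
      | fields search ]
| ...rest of the alert search...
```

On a non-holiday the injected term is just "*", leaving the search unchanged; on a holiday the impossible index term yields zero results, so the alert never fires.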
Hi, we upgraded our Gradle versions to the most recent ones, but are experiencing some issues with the latest AppDynamics plugin (20.4.0) while doing so. The same happens for older AppDynamics plugin versions with new Gradle versions. As the Gradle wrapper we use version 5.6.4. For the Gradle Android build tools we tried the latest one recommended by Google, 3.6.x, which causes the final APK to include no DEX files when the AppDynamics plugin is applied. When we go one version back, to 3.5.3 of the Gradle Android build tools, the APK contains DEX files and can be installed, but during the build we get an IllegalArgumentException from AppDynamics. Edit: Unfortunately I cannot post a stack trace here, as your input field won't let me. It says "Your message cannot contain more than 40,000 characters", but my stack trace is not even close to that number of characters. Edit 2: I added the stack trace as two separate replies to this post, but they were removed by a moderator who classified them as spam.
Hi all, I would like to know whether anyone has tried integrating Oracle Cloud with Splunk, covering billing, instances, network overview details, etc. If yes, how was it done? Is the data periodically pushed to SNOW, or is it synchronous? Regards, Shweta
I have signed up on splunk.com and created an account, but when I click on the free trial for Splunk Cloud it gives the message below. It is not letting me log in, only showing this message: "We're Preparing Your Splunk Cloud Trial. Index 5 GB / Day for 15 days. Your free cloud trial lets you search, analyze and visualize your data for 15 days. If you like what you see, it's simple to transition your trial instance to a production account." Working ... Any help is appreciated.
I've seen a lot of join, transaction, and append SPL. Using timechart to show a percentage at each time interval is hard, but everybody wants to do it. I think you don't have to use those commands. There are best practices, but I don't know the worst practices. Is there a list of SPL worst practices? Or can you tell me what's wrong with this way of using them?
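For what it's worth, the most commonly cited SPL worst practice is using join where a single stats over a shared field would do, since join runs a subsearch with result limits and is much slower. A hedged sketch with made-up index and field names; instead of:

```
index=web
| join session_id [ search index=app ]
```

a stats-based equivalent searches both indexes at once and groups by the common field:

```
(index=web) OR (index=app)
| stats values(status) as status values(app_action) as app_action by session_id
```

The same pattern often replaces append and transaction as well, because stats streams and parallelizes across indexers while join and transaction largely do not.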
I have an event that spans multiple lines:

Mon May 4 22:06:47 PDT 2020
/dev/sdb1 13245631 12450471 127548 99% /Volumes/Media
/dev/sdd2 9460988 7196839 1787272 81% /Volumes/Media 2

I'm trying to turn it into something that I can monitor over time in a timechart, but I'm having trouble getting it split up properly. I tried this:

index=sysmon
| rex max_match=0 "(?<event>.*)\N"
| rex max_match=0 "\/dev\/(?<drive>\w+)\s*(?<blocks>\d+)\s*(?<used>\d+)\s*(?<available>\d+)\s*(?<usepcnt>\d+)%\s*(?<mounted>.*)"
| timechart span=30m values(used) by drive

It starts to look right in the table; I have time and values, but they are all still grouped together.
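A hedged sketch of one way to keep each /dev line's fields together (with max_match=0 the extracted fields become parallel multivalue lists within the one event, which is why everything looks grouped): first extract whole lines into a single multivalue field, mvexpand that field so each line becomes its own result, then rex the individual line:

```
index=sysmon
| rex max_match=0 "(?<line>/dev/\S+[^\r\n]+)"
| mvexpand line
| rex field=line "/dev/(?<drive>\w+)\s+(?<blocks>\d+)\s+(?<used>\d+)\s+(?<available>\d+)\s+(?<usepcnt>\d+)%\s+(?<mounted>.+)"
| timechart span=30m avg(used) by drive
```

After mvexpand each result carries exactly one drive with its own used/available values, so the by drive split in timechart works as intended. A longer-term alternative is fixing event breaking at index time (LINE_BREAKER in props.conf) so each /dev line arrives as its own event.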
Hi, does anyone know an efficient way to incorporate ip_intel into a search/query? I want to set up an alert using a particular ip_intel feed for a specific index to notify me when there is traffic from high-risk IP addresses. I'm currently trying the query below but get no results (I saw it in another answer but cannot get it to work):

index=reverse_proxy [| inputlookup ip_intel | return ip]
| where ip=src

Note: "src" is the field name in which the IP is parsed for my index. Cheers
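A hedged guess at why this returns nothing: return ip injects ip="..." terms into the outer search, but the proxy events carry the address in src, so nothing ever matches. Renaming inside the subsearch makes the injected terms target src directly (this assumes the lookup's field really is named ip):

```
index=reverse_proxy
    [ | inputlookup ip_intel
      | fields ip
      | rename ip AS src ]
```

The subsearch expands to (src="1.2.3.4" OR src="5.6.7.8" OR ...) before the outer search runs, so no separate where clause is needed; just note that subsearch result limits apply if the feed is very large.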
Can the Deployer and the Deployment Server be on a single instance? What are the management servers in Splunk?
How do I send Prometheus data to Splunk? Is any free plugin or metrics integration available to do this?
Hi Splunkers. The Configuring Streams manual outlines configuring Splunk_TA_stream to point to an NFS server in the streamfwd stanza on the UF, i.e.:

[streamfwd]
fileServerId = nfs://192.168.6.1/packetcaptures
fileServerMountPoint = /usr/local/packetcaptures

The manual doesn't specifically mention a separate approach for Windows (which wouldn't normally have any NFS support installed). What approach are people taking in mixed Windows/Linux UF environments? Is further config needed beyond the stanza above on Windows endpoints to get the captured packets onto the remote file server? For example, does NFS support need to be installed separately on each of the Windows endpoints to be able to use targeted packet capture on them? Streams would be deployed via a deployment server, so I'm looking to avoid any manual install/config actions across the fleet. Thanks.
We're receiving this error from an email that it is unable to process:
We're receiving this error from an email that it is unable to process Apr 30 16:34:37 splunk-phantom01-nonprod SPAWN[5308]: TID:5308 : DEBUG: SPAWN : spawn.cpp : 829 : EWSOnPremConnector :ErrorExp in self._handle_mail_object: unknown encoding: windows-874
I am getting a "ValueError: No JSON object could be decoded" message from the add-on, and I believe the reason is that I am getting an error message in HTML format from Azure instead of real logs/events in JSON format. I am using a PAT (Personal Access Token) instead of a password. Could that be the reason? Thanks.
Hey Gang, I am running Splunk Enterprise 7.3.1, with a Search Head Cluster with 4 nodes across two sites and an indexing cluster that has 6 nodes across those same two sites. All of our Splunk servers are RHEL 7. I have "successfully" deployed the Splunk App for Jenkins to a Heavy Forwarder, and I'm successfully receiving Jenkins data from an onsite Jenkins install. I have seen data in the "User" role as part of the app, however, anytime I attempt to switch the dashboard roles over to "Admin" the whole screen goes white, and I have no options to do anything. We do have a custom index, and I have edited the search macros manually to point to the appropriate custom index. However, I still can't get to the Admin dashboards. Any and all help would be very appreciated. Sincerely, Matthew Granger
Hello, please, I need to know how I can create an alert that groups together several other alerts, or maybe create an alert regrouping two or three event logs in Splunk. Thank you.
I'm trying to use Splunk on a search head I don't manage, but I've noticed that whenever I try to use erex on the search head, the regex never comes back to me. I see logs at the end of my search showing that erex executed fine:

05-04-2020 21:41:04.191 INFO DispatchExecutor - END OPEN: Processor=erex
05-04-2020 21:41:04.366 INFO script - Invoked script erex with 6203295 input bytes (20351 events). Returned 41 output bytes in 120 ms

but I can't find anywhere in search.log or in the GUI where the learned regex is displayed. My guess is that the logging level required to print this output may be too high. What is the required logging level for erex to function?