I'm trying to figure out which search will most accurately tell me when events with future timestamps are being detected. Somebody on the team built this search:

| tstats min(_time) as earliest_time, max(_time) as latest_time by index
| eval daysOfLogs=round((latest_time - earliest_time)/60/60/24, 2)
| eval eventsInFuture=if(latest_time > now(), "yes", "no")
| eval tnow = now()
| convert ctime(*time)
| lookup index_to_env index
| convert ctime(tnow)

That search doesn't show any sourcetypes with future data. This search, on the other hand, shows that multiple sourcetypes have future timestamps:

| metadata type=sourcetypes index=* index!=_*
| eval now=now()
| eval futuretime=lastTime-now
| where futuretime>0

Based on what I've seen searching the raw events with latest=+20d@d, the tstats command is the one that isn't seeing the future events. Any idea what is causing this behavior?
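One possibility worth ruling out (an assumption, not a confirmed diagnosis): tstats respects the search time range, so events timestamped after the picker's "latest" boundary are excluded, while metadata reports bucket-level lastTime values that can fall outside that range. Also note the tstats search groups only by index, while the metadata search reports per sourcetype. A sketch that widens the window and matches the metadata grouping:

```
| tstats max(_time) as latest_time where index=* latest=+20d@d by index sourcetype
| where latest_time > now()
| convert ctime(latest_time)
```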
We are utilizing log2metrics in the form of a script that writes a CSV file; Splunk then reads that CSV file and converts it to metric format. I am wondering if we can forgo the intermediary CSV file and perform the log2metric conversion directly from the output of the script, similar to a scripted input for an event index.
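In principle a scripted input can feed the same log-to-metrics conversion that the file monitor does, as long as the script writes its measurements to stdout and the sourcetype carries the schema transform. A hedged sketch (the stanza names, script path, and schema name my_metric_schema are placeholders, not from the post):

```
# inputs.conf -- scripted input in place of the intermediary CSV
[script://./bin/collect_metrics.sh]
interval = 60
sourcetype = my_script_metrics
index = my_metrics_index

# props.conf -- apply the log-to-metrics conversion to the script output
[my_script_metrics]
METRIC-SCHEMA-TRANSFORMS = metric-schema:my_metric_schema

# transforms.conf
[metric-schema:my_metric_schema]
METRIC-SCHEMA-MEASURES = _ALLNUMS_
```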
I have a set of JSON data and I would like to ignore (blacklist) all events where the field "id.orig_h" contains the value "192.168.0.1". So far, I've tried using the blacklisting procedure for Windows EventCodes as a model, but with no success.

Example EventCode blacklist:

[WinEventLog:Security]
blacklist1 = EventCode = "4662" Message = "Account Name:\s+(example account)"

What I've tried:

1) Adding the blacklist underneath the monitor stanza in inputs.conf:

[monitor:///opt/bro/logs/current]
index = yada
sourcetype = yadayada
blacklist = id.orig_h = "192.168.0.1"

2) Adding the blacklist under a separate sourcetype stanza in inputs.conf:

[monitor:///opt/bro/logs/current]
index = yada
sourcetype = yadayada

[yadayada]
blacklist = id.orig_h = "192.168.0.1"

How can I achieve this?
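For reference, the inputs.conf blacklist setting filters by file path (the event-content form works only for Windows event log inputs); it does not inspect the contents of monitored files. Discarding events by content is normally done at parse time with a props/transforms route to nullQueue. A hedged sketch (regex kept simple; the dots in "id.orig_h" are escaped since the field name appears literally in the raw JSON):

```
# props.conf (on the indexer or heavy forwarder, not a universal forwarder)
[yadayada]
TRANSFORMS-drop_localhost = drop_orig_h

# transforms.conf
[drop_orig_h]
REGEX = "id\.orig_h"\s*:\s*"192\.168\.0\.1"
DEST_KEY = queue
FORMAT = nullQueue
```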
Hello everyone, I need help with a query. I have a table with the following fields:

_time                    USERNUMBER  WEIGHT
2020-04-02 07:17:12.397  545         245.2400
2020-04-02 07:17:12.400  547         100.0011
2020-04-03 07:15:37.956  545         260.2400
2020-04-06 07:16:41.763  545         245.2400
2020-04-09 07:15:38.864  545         612.5600
2020-04-10 07:15:59.845  545         245.2400
2020-04-10 07:17:12.401  547         100.0011
2020-04-13 07:16:32.169  545         245.0258
2020-04-15 07:15:37.027  545         245.2400
2020-04-15 07:15:37.027  547         120.2400
2020-04-16 07:15:44.469  545         156.2400
2020-04-17 07:15:46.726  545         245.0215
2020-04-20 07:16:45.495  545         142.2400
2020-04-22 07:15:54.629  545         245.0214
2020-04-22 07:15:54.629  547         106.0521
2020-04-23 07:15:43.608  545         965.2850
2020-04-25 07:15:50.403  545         245.2100
2020-04-27 07:16:48.186  545         335.2400
2020-04-28 07:15:52.713  545         245.5596
2020-04-29 07:15:54.472  545         520.2400
2020-05-01 07:15:52.179  545         245.0018

I want to show, for each USERNUMBER, the weight on each day of the month of April. The problem is that the weight is only updated when there has been a change. How can I autofill the missing dates in April, and also populate the WEIGHT for those missing dates with the WEIGHT from the previous day for that user?

So, for example, if I'm only looking at USERNUMBER 545, the rows from 04/02 to 04/09 would look like:

2020-04-02 07:17:12.397  545  245.2400
2020-04-03 07:15:37.956  545  260.2400
2020-04-04 00:00:00.000  545  260.2400
2020-04-05 00:00:00.000  545  260.2400
2020-04-06 07:16:41.763  545  245.2400
2020-04-07 00:00:00.000  545  245.2400
2020-04-08 00:00:00.000  545  245.2400
2020-04-09 07:15:38.864  545  612.5600

Sidenote: the hour/time component of the date is not important to me. I hope this makes sense; thank you in advance for your help.
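One way to sketch this (assuming the base search returns the rows above and the time picker is set to April): bucket to days, keep the last reading per day, then carry values forward over the empty days:

```
... | timechart span=1d latest(WEIGHT) as WEIGHT by USERNUMBER
| filldown
```

timechart emits a row for every day in the range even when there were no events, and filldown copies each user's column down over the days with no reading.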
Hi, we have an alert to detect OutOfMemory errors which runs every minute and checks the last minute. I noticed that the alert was not triggered for around 20 minutes even though the search met the criteria to trigger an alert. I checked the scheduler log, and the alert was not skipped. It's strange to see result_count=0, as there was an error at 14:14:19. Any reasons why this would happen, or anything else I might want to check?

05-02-2020 14:15:04.388 -0500 INFO SavedSplunker - savedsearch_id="nobody;search;DOIT(1m)-JBOSS 7 OutOfMemory", search_type="scheduled", user="4120", app="search", savedsearch_name="DOIT(1m)-JBOSS 7 OutOfMemory", priority=default, status=success, digest_mode=1, scheduled_time=1588446900, window_time=0, dispatch_time=1588446904, run_time=0.352, result_count=0, alert_actions="", sid="scheduler_412037search_RMD5f897e3bb04038a2c_at_1588446900_8314", suppressed=0, thread_id="AlertNotifierWorker-0", workload_pool=""

Regards.
OK, so bear with me as I explain. I would like to view my VulnerabilityTitle count deltas over time. For instance, if I ran count by VulnerabilityTitle yesterday to get the top 10 most common vulnerabilities in my environment, that is great; however, I need to be able to do that day after day over a time range. My most common VulnerabilityTitle may drop to second most common after patching over the weekend. So my end goal is to pick a time window and get VulnerabilityTitle counts ranked for every day in that window, so I can see that vulnerability X was rank 1 yesterday and is rank 2 today. I am currently running the search below, which gives me most of what I want, but it does not track the ranking over time. Ideally this would not be viewable only in line-graph format.

| eval import_time=strftime(_time, "%Y-%m-%d:%H")
| eval import_timeday=strftime(_time, "%Y-%m-%d")
| eventstats latest(import_time) as Last by import_timeday
| where Last = import_time
| timechart span=1d count by VulnerabilityTitle where max in top10 useother=f
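A hedged sketch of one way to compute an explicit per-day rank (field names taken from the post; streamstats numbers the titles within each day after sorting by count):

```
... | bin _time span=1d
| stats count by _time VulnerabilityTitle
| sort _time - count
| streamstats count as rank by _time
| where rank <= 10
| table _time rank VulnerabilityTitle count
```

This produces a table rather than a line chart, so a title moving from rank 1 one day to rank 2 the next is visible directly.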
Hi. I'm getting the Windows Infrastructure App up and running, and in one of the events I am getting a failure on ta_windows_action. The event was a process creation that showed success. Is there some sort of documentation (I searched here and on Google) that explains what that field is, or the others? Basically, a field reference. Thanks!
Hello Splunkers, I hope this message finds you well. I am looking for the list of columns needed for an alert activity dashboard. For all alert types I tried to use source="/opt/splunk/var/log/splunk/python.log" sendemail and the REST Service Action Alert, but it is not giving me the following columns, which are the most important ones for tracking alert activity:

1) Alert name
2) Alert sent to
3) Alert sent from
4) Severity
5) SPL run
6) Action
7) Host
I'd like to be able to use Meta Woot! to collect information on old events, but I only just installed it. Looking at the way the "Generate Meta Woot!" search works, it seems like it should be possible (albeit time-intensive) to collect summaries of older events with repeated tstats/collect commands going back over time. Is there a mechanism built into Meta Woot! to do this automatically?
I have an event as below:

Names "John|James|Jude|Jenni|bond|Tom"

How do I get each name as a separate event?
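A sketch using makemv and mvexpand to split the pipe-delimited field into one search result per name (the field name Names is taken from the post; this yields separate result rows at search time, not separate indexed events):

```
... | makemv delim="|" Names
| mvexpand Names
```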
Hey guys, we are in a Splunk Cloud environment with ES, and we have added our own TAXII feed as well as some open-source TAXII feeds. We can see that they start "polling", but we never see them download any collection sets or fail in the event logs, so it doesn't appear to be working. Has anyone else configured this in Splunk Cloud? If so, do you know if there is something else we need to enable for our TAXII feeds to work? Thanks!
Hi Team, we have installed the AppDynamics Java agent on IBM Cloud servers running Linux. The agents were successfully installed and we are getting metrics, but we have a problem on the agent side. We're seeing the below errors on server startup related to AppDynamics. Do you know what this means, and how do we resolve it?

INFO | jvm 1 | main | 2020/05/01 04:38:22.310 | Install Directory resolved to[/export/home/AppDynamics/AppServerAgent]
INFO | jvm 1 | main | 2020/05/01 04:38:22.610 | ERROR StatusLogger No log4j2 configuration file found. Using default configuration: logging only errors to the console.
INFO | jvm 1 | main | 2020/05/01 04:38:22.811 | Exception caught : java.util.zip.ZipException: error in opening zip file trying to preload agent classes
INFO | jvm 1 | main | 2020/05/01 04:38:22.811 | java.util.zip.ZipException: error in opening zip file

I have also disabled "Detect errors logged using Log4j" from the AppDynamics Console in the Error Detection tab. Please help.
Hi, I am trying to push an app based on IP subnet whitelists and blacklists. While the whitelisted subnets are working perfectly, whatever I put in the blacklist is not working as expected: I can see the application getting deployed to subnets in the blacklist as well. Here is the configuration:

[serverClass:PROD_IP_RANGE]
machineTypesFilter = windows-x64
blacklist.0 = ^10.50.100.\d+$
whitelist.0 = ^10.50.\d+.\d+$

[serverClass:PROD_IP_RANGE:app:TestApp]
restartSplunkWeb = 0
restartSplunkd = 1
stateOnClient = enabled

The expected behaviour is that TestApp should get deployed to all Windows hosts in the 10.50.0.0/16 subnet except hosts in 10.50.100.0/24, but I can see the app gets deployed to hosts in the 10.50.100.0/24 subnet as well.
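One thing worth checking (an assumption about the cause, not a confirmed diagnosis): serverclass.conf whitelist/blacklist entries are matched as wildcard patterns against the client name, IP address, and hostname, not as anchored regular expressions, so ^, $, and \d+ may not match the way they would in a regex. The wildcard form would be:

```
[serverClass:PROD_IP_RANGE]
machineTypesFilter = windows-x64
whitelist.0 = 10.50.*
blacklist.0 = 10.50.100.*
```

Also worth confirming that the deployment clients actually present an IP address (rather than only a hostname) for IP-based patterns to match against.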
Hi, I am seeing Machine Agent status 0 in the AppDynamics UI for a particular tier/node. As a result, none of the metrics like CPU %, memory, etc. are reporting any data for that application. The hosting server runs RHEL and has both the Machine Agent and the Java Agent installed. Machine Agent version: 4.5.1. Java Agent version: 4.5.10. The Machine Agent logs do not show any errors related to this application. I recently upgraded the Java Agent from 4.3.x to 4.5.10, and the issue has apparently been happening since then. How can I troubleshoot this? Please suggest. Regards, Shweta
I am not finding a universal forwarder that supports Windows 2012 (NT 6.2). Is there one?
I am working to normalize the data for Oracle WebLogic, which is heavily JMX-based, so that I can leverage the Application Server module in ITSI. The Splunk-built Apache Tomcat add-on gives me 90% of what I need for the WebLogic use case; however, I am missing the JMX thread info that is returned as separate events by the dumpAllThreads modular input in the Apache Tomcat add-on. What is required to generalize the dumpAllThreads input for Tomcat so it can be used by another application that has a JMX server? Thank you in advance for your help.
I am wondering: is there any method to list out CIM-compliant indexes on demand through an SPL query?
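One hedged sketch (not a single built-in report, just a pattern): tstats can show which indexes are feeding a given CIM data model, one model at a time. The model name here is only an example:

```
| tstats count from datamodel=Authentication by index
```

Repeating this per data model of interest gives a view of which indexes contain CIM-mapped data.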
I have a dashboard query where I am comparing some stats between two different dates. I am able to use one time picker, but I'm not sure how to remove the hardcoded (earliest=1587963600 latest=1588050000) and take it as a parameter from the time picker.

source=table1 (earliest=1587963600 latest=1588050000)
| JOIN type=inner id
    [ SEARCH source=table1 | rename user_id AS id ]
</query>
<earliest>$currentStatus.earliest$</earliest>
<latest>$currentStatus.latest$</latest>
</search>
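A sketch of the same query with the hardcoded window replaced by the time picker tokens the dashboard already defines (the subsearch is left with its own default range on purpose; adjust it if it should follow the picker too):

```
source=table1 earliest=$currentStatus.earliest$ latest=$currentStatus.latest$
| join type=inner id
    [ search source=table1 | rename user_id AS id ]
```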
I have installed Splunk DB Connect 3.0 and upon first launch I get this error:

Missing or malformed messages.conf stanza for MOD_INPUT:INIT_FAILURE__server_the app "splunk_app_db_connect"_Introspecting scheme=server: script running failed (exited with code 1).

I then don't get the welcome page for the app; it just sits there with the loading icon spinning. Things I have tried so far:

1) Removed the app and reinstalled it
2) Updated to the latest version of Splunk (8.0.0.3)
3) Searched for the error on Google, but can't really find any info on this

Any idea what can cause this, and how can I fix it?
Hi everyone, we are looking into another way to monitor the Splunk universal forwarders on our servers, besides the search head or deployment server. One idea raised was utilizing SolarWinds to monitor the UF process. I believe the only process that would need to be monitored on Linux is the Splunk daemon (splunkd); are there other dependencies that should be monitored as well? Thank you for your time!
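Alongside process-level monitoring, a hedged SPL sketch for spotting forwarders that have gone quiet, run from the indexer side (the 10-minute threshold is an arbitrary example):

```
| metadata type=hosts index=_internal
| eval minutes_since=round((now()-recentTime)/60, 1)
| where minutes_since > 10
| convert ctime(recentTime)
```

On the host itself, splunkd is the process to watch; a universal forwarder runs no separate web process.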