
Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Topics

Hi everyone, I have a base search that is set up as an alert with a threshold value for triggering. I want to exclude this alert from running on the last day of every month, because the expected threshold values are higher then, and set up a new cloned alert in its place that runs only on the last day of the month. Is there any way to do this? I thought about the CRON schedule, but managing 30/31-day months doesn't seem possible with it, and February (28/29) gets excluded completely. Thanks in advance for any kind of help.
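Since cron cannot express "last day of the month" directly, one common workaround is to keep the alert on its normal daily schedule and add a guard to the SPL so it returns no results (and therefore cannot trigger) on the last day of each month; a minimal sketch, assuming the alert triggers on result count:

    ... your base search ...
    | where strftime(relative_time(now(), "+1d@d"), "%d") != "01"
    | ... existing threshold logic ...

relative_time(now(), "+1d@d") is the start of tomorrow, so the condition is false only when tomorrow is the 1st, i.e. today is the last day of the month; this handles 28/29/30/31-day months automatically. The cloned month-end alert can use the opposite test (= "01").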
Hi all, I am not sure if this is possible. Is there any method that can be used to pass the value of one column as a token to another column? I want to use the passed value to calculate some data that should be displayed in the second column. Not sure if this is possible.
Hi there, one of my colleagues created an audit dashboard to show who logged into Splunk and how many times each user logged in over the last 7 days. One of the users left the organization in January and we deleted the account with an admin login. We are now still seeing his name in the dashboard, and alerts were triggering on his name as well. We checked the user list again and his name is no longer there, but we still see it in the alerts and dashboard. Can anyone help me with this?
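Events already indexed for the deleted account stay in the index, so an audit search over the last 7 days will keep returning that user until those events age out of the search window. If the dashboard and alerts should only report accounts that still exist, one option is to filter the results against the current user list from the REST API; a rough sketch, assuming a typical _audit login search rather than your colleague's exact query:

    index=_audit action=login_attempt info=succeeded
    | stats count AS logins BY user
    | search [ | rest /services/authentication/users splunk_server=local | rename title AS user | fields user ]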
Hi, I am a beginner in Splunk and would like to ask if anyone can help me create a search or alert that triggers when a certain second condition is not seen. Example: the first condition is that a src_ip has event 1234 and event 2345 allowed in the WAF; the second condition is to check that the same src_ip does not have event 3456 in the IPS.
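One hedged way to express "seen in the WAF with both events but never in the IPS with event 3456" is to pull both sources into one search, count per src_ip, and keep only the rows where the IPS count is zero; a sketch, with the index names and the event-ID field name (signature_id) as assumptions:

    (index=waf (signature_id=1234 OR signature_id=2345)) OR (index=ips signature_id=3456)
    | eval waf_hit=if(index="waf", signature_id, null()), ips_hit=if(index="ips", 1, 0)
    | stats dc(waf_hit) AS waf_event_types sum(ips_hit) AS ips_events BY src_ip
    | where waf_event_types=2 AND ips_events=0

The alert can then fire whenever the number of results is greater than zero.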
Is there an Enterprise Security (ES) alternative to the use case 'New Cloud API Call Per Peer Group'?
I'm looking for help extracting the "allowedSourceAddressPrefix" field/value from a JSON event. This field is an escaped JSON string inside a nested JSON. The JSON tree is:

- properties (extracted by Splunk)
  - /subscription/..../.../ (dynamic field)
    - ports (escaped JSON)
      - allowedSourceAddressPrefix (nested JSON)

allowedSourceAddressPrefix takes the value of a single IP address, multiple IP addresses, or *. I have tried various rex patterns but failed to extract the required field; any help is appreciated. The following is the JSON that contains the required field:

properties: {
    "User": "johndoe@contoso.com",
    "/subscriptions/3483b2ca-02cf-4ff6-92af-99326c8fac7f/resourceGroups/apple-dev/providers/Microsoft.Compute/virtualMachines/gjappledev": "{\"id\":\"/subscriptions/3483b2ca-02cf-4ff6-92af-99326c8fac7f/resourceGroups/apple-dev/providers/Microsoft.Compute/virtualMachines/gjappledev\",\"ports\":[{\"number\":3389,\"allowedSourceAddressPrefix\":\"*\",\"endTimeUtc\":\"2022-03-21T1:50:39.1599446Z\"}]}",
    "Justification": null
}

TIA
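Because the value sits inside an escaped JSON string under a dynamic key, spath alone will not reach it easily; a rex over the raw event that skips the escaped quotes is often simpler. A sketch (the index is a placeholder, and the character class assumes the value is only ever IP addresses/CIDRs, commas, spaces, or *):

    index=your_azure_index
    | rex field=_raw max_match=0 "allowedSourceAddressPrefix\W+(?<allowedSourceAddressPrefix>[0-9a-fA-F:\.\*/, ]+)"
    | table _time allowedSourceAddressPrefix

The \W+ consumes the escaped \":\" sequence regardless of how many backslashes the event contains, and max_match=0 returns a multivalue field when an event has several ports entries.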
I ran into an issue after upgrading Splunk Enterprise to 8.2.5. I have a custom dashboard which loads SplunkJS stack libs via require([...]). It was working fine before 8.2.5. After we upgraded Splunk to 8.2.5 the dashboard broke due to an undefined 'require'. Even when I add require.js manually, it loads those libs one by one but returns 404. In 8.2.4 I noticed it was loading visualizationloader.js. Did a recent change in Splunk cause this issue? Thanks in advance.
Hi, I need to periodically poll a specific health rule to identify whether a violation has occurred. This needs to be done through a Java program in order to trigger some other events in our applications. I understand that AppDynamics offers several health-rule-related APIs, specified here: https://docs.appdynamics.com/21.3/en/extend-appdynamics/appdynamics-apis/alert-and-respond-api/health-rule-api However, I cannot find one that reports whether a health rule is being violated at a given time. My question is: does AppDynamics expose such an API, preferably an HTTP request, that I can poll periodically to detect these events?
Hi, I'm really new to Splunk and I'm having a problem where I need to build a dashboard from a txt health-sheets file. Could anyone help me? The data is read in like this:
How do I combine the below 2 searches into one?

1. * orderid | stats count by id

returns something like:

2022-03-21T00:10:16,999Z ...INFO [thread_id=12349, id=VU53ZQCTTMLPG, .....
2022-03-21T00:10:16,995Z ...INFO [thread_id=549, id=F2PAC6ITNX6O3,

2. Based on the above response, I need to query as below after fetching the "id". Note, the "id"s vary for different orderids, and the number of "id"s also varies:

id IN ("VU53ZQCTTMLPG","F2PAC6ITNX6O3")

Thank you
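This is normally done with a subsearch: the inner search returns the id values and Splunk expands them into an (id="..." OR id="...") filter for the outer search. A sketch, with the index/sourcetype and the orderid filter left as placeholders to adapt:

    index=your_index sourcetype=your_sourcetype
        [ search index=your_index sourcetype=your_sourcetype orderid=<your_orderid>
          | stats count BY id
          | fields id ]
    | ...

Note that subsearches are capped (roughly 10,000 results by default), so if the id list can grow very large a stats-based correlation may be safer.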
I am creating a new index and getting the below error. Please find the configurations below.

[splunk@ap2-cclabs658055-idx1 ~]$ /opt/splunk/bin/splunk start

Splunk> Another one.

Checking prerequisites...
	Checking http port [8000]: open
	Checking mgmt port [8089]: open
	Checking appserver port [127.0.0.1:8065]: open
	Checking kvstore port [8191]: open
	Checking configuration... Done.
	Checking critical directories... Done
	Checking indexes...
		Problem parsing indexes.conf: Cannot load IndexConfig: idx=_audit Configured path 'volume:primary/_audit/db' refers to non-existent volume 'primary'; 1 volumes in config
		Validating databases (splunkd validatedb) failed with code '1'.
If you cannot resolve the issue(s) above after consulting documentation, please file a case online at http://www.splunk.com/page/submit_issue
[splunk@ap2-cclabs658055-idx1 ~]$

indexes.conf:

# Parameters commonly leveraged here:
# maxTotalDataSizeMB - sets the maximum size of the index data, in MBytes,
# over all stages (hot, warm, cold). This is the *indexed* volume (actual
# disk space used) not the license volume. This is separate from volume-
# based retention and the lower of this and volumes will take effect.
# NOTE: THIS DEFAULTS TO 500GB - BE SURE TO RAISE FOR LARGE ENVIRONMENTS!
#
# maxDataSize - this constrains how large a *hot* bucket can grow; it is an
# upper bound. Buckets may be smaller than this (and indeed, larger, if
# the data source grows very rapidly--Splunk checks for the need to rotate
# every 60 seconds).
# "auto" means 750MB
# "auto_high_volume" means 10GB on 64-bit systems, and 1GB on 32-bit.
# Otherwise, the number is given in MB
# (Default: auto)
#
# maxHotBuckets - this defines the maximum number of simultaneously open hot
# buckets (actively being written to). For indexes that receive a lot of
# data, this should be 10, other indexes can safely keep the default
# value. (Default: 3)
#
# homePath - sets the directory containing hot and warm buckets. If it
# begins with a string like "volume:<name>", then volume-based retention is
# used. [required for new index]
#
# coldPath - sets the directory containing cold buckets. Like homePath, if
# it begins with a string like "volume:<name>", then volume-based retention
# will be used. The homePath and coldPath can use the same volume, but
# should have separate subpaths beneath it. [required for new index]
#
# thawedPath - sets the directory for data recovered from archived buckets
# (if saved, see coldToFrozenDir and coldToFrozenScript in the docs). It
# *cannot* reference a volume: specification. This parameter is required,
# even if thawed data is never used. [required for new index]
#
# frozenTimePeriodInSecs - sets the maximum age, in seconds, of data. Once
# *all* of the events in an index bucket are older than this age, the
# bucket will be frozen (default action: delete). The important thing
# here is that the age of a bucket is defined by the *newest* event in
# the bucket, and the *event time*, not the time at which the event
# was indexed.

# TSIDX MINIFICATION (version 6.4 or higher)
# Reduce the size of the tsidx files (the "index") within each bucket to
# a tiny one for space savings. This has a *notable* impact on search,
# particularly those which are looking for rare or sparse terms, so it
# should not be undertaken lightly. First enable the feature with the
# first option shown below, then set the age at which buckets become
# eligible.
# enableTsidxReduction = true / (false) - Enable the function to reduce the
# size of tsidx files within an index. Buckets older than the time period
# shown below.
# timePeriodInSecBeforeTsidxReduction - sets the minimum age for buckets
# before they are eligible for their tsidx files to be minified. The
# default value is 7 days (604800 seconds).

# Seconds Conversion Cheat Sheet
# 86400 = 1 day
# 604800 = 1 week
# 2592000 = 1 month
# 31536000 = 1 year

[default]
# Default for each index. Can be overridden per index based upon the volume of data received by that index.
#300GB
#homePath.maxDataSizeMB = 300000
# 200GB
#coldPath.maxDataSizeMB = 200000

# VOLUME SETTINGS
# In this example, the volume spec is not defined here, it lives within
# the org_(indexer|search)_volume_indexes app, see those apps for more
# detail.

One Volume for Hot and Cold
[volume:primary]
path = /opt/splunk/var/lib/splunk 500GB
maxVolumeDataSizeMB = 500000

# Two volumes for a "tiered storage" solution--fast and slow disk.
#[volume:home]
#path = /path/to/fast/disk
#maxVolumeDataSizeMB = 256000
#
# Longer term storage on slower disk.
#[volume:cold]
#path = /path/to/slower/disk
#5TB with some headroom leftover (data summaries, etc)
##maxVolumeDataSizeMB = 4600000

# SPLUNK INDEXES
# Note, many of these use historical directory names which don't match the
# name of the index. A common mistake is to automatically generate a new
# indexes.conf from the existing names, thereby "losing" (hiding from Splunk)
# the existing data.

[main]
homePath = volume:primary/defaultdb/db
coldPath = volume:primary/defaultdb/colddb
thawedPath = $SPLUNK_DB/defaultdb/thaweddb

[history]
homePath = volume:primary/historydb/db
coldPath = volume:primary/historydb/colddb
thawedPath = $SPLUNK_DB/historydb/thaweddb

[summary]
homePath = volume:primary/summarydb/db
coldPath = volume:primary/summarydb/colddb
thawedPath = $SPLUNK_DB/summarydb/thaweddb

[_internal]
homePath = volume:primary/_internaldb/db
coldPath = volume:primary/_internaldb/colddb
thawedPath = $SPLUNK_DB/_internaldb/thaweddb

# For version 6.1 and higher
[_introspection]
homePath = volume:primary/_introspection/db
coldPath = volume:primary/_introspection/colddb
thawedPath = $SPLUNK_DB/_introspection/thaweddb

# For version 6.5 and higher
[_telemetry]
homePath = volume:primary/_telemetry/db
coldPath = volume:primary/_telemetry/colddb
thawedPath = $SPLUNK_DB/_telemetry/thaweddb

[_audit]
homePath = volume:primary/_audit/db
coldPath = volume:primary/_audit/colddb
thawedPath = $SPLUNK_DB/_audit/thaweddb

[_thefishbucket]
homePath = volume:primary/fishbucket/db
coldPath = volume:primary/fishbucket/colddb
thawedPath = $SPLUNK_DB/fishbucket/thaweddb

# For version 8.0 and higher
[_metrics]
homePath = volume:primary/_metrics/db
coldPath = volume:primary/_metrics/colddb
thawedPath = $SPLUNK_DB/_metrics/thaweddb
datatype = metric

# For version 8.0.4 and higher
[_metrics_rollup]
homePath = volume:primary/_metrics_rollup/db
coldPath = volume:primary/_metrics_rollup/colddb
thawedPath = $SPLUNK_DB/_metrics_rollup/thaweddb
datatype = metric

# No longer supported in Splunk 6.3
# [_blocksignature]
# homePath = volume:primary/blockSignature/db
# coldPath = volume:primary/blockSignature/colddb
# thawedPath = $SPLUNK_DB/blockSignature/thaweddb

# SPLUNKBASE APP INDEXES
[os]
homePath = volume:primary/os/db
coldPath = volume:primary/os/colddb
thawedPath = $SPLUNK_DB/os/thaweddb
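One thing worth checking, assuming the file on disk matches what is pasted above: the "One Volume for Hot and Cold" line and the "500GB" on the path line look like comments that lost their leading #. If those characters really are missing in the file (and are not just a paste artifact), the [volume:primary] stanza will not parse cleanly, which would line up with "refers to non-existent volume 'primary'". A cleaned-up sketch of that stanza:

    # One Volume for Hot and Cold
    [volume:primary]
    path = /opt/splunk/var/lib/splunk
    # 500GB
    maxVolumeDataSizeMB = 500000

It is also worth confirming which merged copy of indexes.conf the indexer actually resolves, since the volume stanza and the _audit stanza must end up in the same effective config:

    /opt/splunk/bin/splunk btool indexes list volume:primary --debug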
Hello! I am attempting to take a variety of values for a single field and use another search from a different index to rename them to a more human-readable value. Both indexes have a field containing a 1:1 value that I could potentially use with |join, however I am having issues with the stats table output: the search either fails to pull up any data, or pulls up all data despite searching for a specific value in a field. I have tried |append as well but am not getting the results I expect.

Example:

index=index_ mac_address=* logical_vm=* state=online
| stats latest(physical_vm) as server latest(ip_address) as IP latest(logical_vm) as host by mac_address
| search server=z4c8h2 IP=* host=* name=*
| stats count by server

Output:

mac_address | server | IP | host
xx:xx:xx:xx:xx:xx | z4c8h2 | 10.0.0.0 | vm01.internet.io

index=translate box=z4c8h2
| table human_name

The translate index search shows the name that I would like to use in place of the server value in the index_ search, but I can't get the stats table to update correctly. Any suggestions on how to format a join/append, or some other method of getting the value to update in the stats output table?
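Since the translate index behaves like a lookup table (box to human_name), one hedged option is a left join on the shared value after renaming box to match the server field; a sketch that follows the field names in your example:

    index=index_ mac_address=* logical_vm=* state=online
    | stats latest(physical_vm) AS server latest(ip_address) AS IP latest(logical_vm) AS host BY mac_address
    | join type=left server
        [ search index=translate
          | stats latest(human_name) AS human_name BY box
          | rename box AS server ]
    | eval server=coalesce(human_name, server)
    | fields - human_name

If the translate data is fairly static, writing it out once with outputlookup and using the lookup command in the main search avoids the subsearch limits that come with join.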
Hi, I am trying to retrieve data from Cisco AMP for Endpoints using the "Cisco AMP for Endpoints Event Input" app. However, a "New Input" cannot be created (the wizard stops at the attachment screen), and data is not retrieved from AMP for Endpoints via the API. The API key is accurate. Has anyone succeeded in retrieving data using Splunk 8 and app version 2.0.2? I am trying with the following versions: Splunk version 8.1.0, Cisco AMP for Endpoints Event Input app 2.0.2.
Hello, thank you for taking the time to consider my question. I'm currently working on a solution that reports all outbound IPv4 connections from Windows workstations, but in order to reduce the volume of these logs I'd like to blacklist (or, conversely, whitelist) some of the normal internal sites that users visit often, so as not to consume our entire license. I have been closely reading the inputs.conf Splunk documentation, where it appears this functionality is possible using regex, but for some reason mine isn't working. I am using Analytics Market's IP range regular expression builder to find the correct syntax, and testing it using the very well known and common tool regex101. My inputs.conf (omitting other configs out of scope for this topic) is as follows:

[WinNetMon://OutboundMon]
disabled=0
addressFamily=ipv4;ipv6
direction=outbound
index=winnetmon
sourcetype=WinEventLog
packetType=connect;accept
protocol=tcp;udp
blacklist1 = ^10\.(([1-9]?\d|[12]\d\d)\.){2}([1-9]?\d|[12]\d\d)$
blacklist2 = ^192\.168\.([1-9]|[1-9]\d|[12]\d\d)\.([1-9]?\d|[12]\d\d)$

Essentially, as a test, I am trying to see if I can eliminate traffic logs for all internal (private) IP ranges, in this case 10.0.0.0/8 and 192.168.0.0/16. If I put these in regex101 and enter addresses within each of those ranges they are highlighted, but when I test internal connections and expect no logs to show up, they still populate for destination addresses within those ranges. So what gives? Many thanks in advance.
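I'm not certain the WinNetMon modular input honors blacklistN the way WinEventLog inputs do, so if the regexes validate in regex101 but events still arrive, a fallback that does work is index-time filtering on the indexer or heavy forwarder: send events whose remote address is in the private ranges to the null queue. A sketch, assuming the events keep the WinEventLog sourcetype and that the destination appears in the raw text as RemoteAddress=... (adjust the key to whatever WinNetMon actually emits):

props.conf:

    [WinEventLog]
    # Applies to everything with this sourcetype, so consider giving the
    # network-monitor input its own sourcetype first.
    TRANSFORMS-null_internal_dest = drop_internal_dest

transforms.conf:

    [drop_internal_dest]
    REGEX = RemoteAddress=(?:10\.|192\.168\.)
    DEST_KEY = queue
    FORMAT = nullQueue

These events are discarded before indexing, so this saves license just as an input-side blacklist would.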
Hey hey, I'm trying to turn telemetry into a graph. I have a CSV containing: PID, runtime, invoked, usecs, 5sec, 1min, 5min, tty, process. There are a bunch of processes, each with those fields. I want to turn the CSV into 3 column graphs, each showing the process name against the % CPU used (one graph each for 5sec, 1min, and 5min), and I'm confused as to how to accomplish that.
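Assuming the CSV has been uploaded as a lookup file (the name below is a placeholder), each graph can be its own dashboard panel: a column chart driven by one of the three CPU fields. A sketch for the 5-second view:

    | inputlookup cpu_telemetry.csv
    | rename "5sec" AS cpu_5sec
    | table process cpu_5sec
    | sort - cpu_5sec

Set the panel visualization to a column chart (process on the x-axis, cpu_5sec on the y-axis) and duplicate the panel for 1min and 5min. If the CSV is indexed rather than uploaded as a lookup, replace inputlookup with the matching index/sourcetype search and a stats latest(...) by process.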
Hello. I am using the Jamf Pro Add-on for Splunk (version 2.10.4) to import Jamf data. https://splunkbase.splunk.com/app/4729/ The imported data sometimes includes the following error records:

<Error><error>The XML was too long</error></Error>

Is there any way to resolve this error? The following is a detailed description. The input is set up as follows:

API Call Name: custom
Search Name: /JSSResource/mobiledevices

The number of records is about 60,000, and about 200 of them have the above error. According to the information on the following site, records with more than 10,000 characters seem to cause this error. https://community.jamf.com/t5/jamf-pro/splunk-jamfpro-api-getting-started/m-p/169054 There is also information that Splunk does not ingest data longer than 10,000 characters by default, but we have not configured that setting.
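If the truncation is happening on the Splunk side, the default TRUNCATE for a sourcetype is 10,000 bytes, so raising it in props.conf on the indexing tier for the Jamf sourcetype is worth trying; the stanza name below is an assumption, check which sourcetype the add-on actually assigns to the mobiledevices input:

    [jamf:pro:mobiledevices]
    TRUNCATE = 100000

If the "The XML was too long" text is written by the add-on itself before the data ever reaches Splunk, TRUNCATE will not help and the limit would have to be addressed in the add-on or the Jamf API call instead.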
Hi, how do I build a search to check, by running a query, whether an endpoint agent is installed on a Windows/Linux host?

Scenario: I have all the assets in a lookup.csv, and I want to run a search that compares the onboarded logs (symantec.exe) with the lookup file containing the asset names, to determine whether the Symantec agent is installed on each host. Thanks in advance.
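One hedged pattern: take the hosts that reported the agent process, append every asset from the lookup with a zero count, and flag the assets that never produced an event. The index, sourcetype, process field, and lookup column name are all assumptions to adapt:

    index=endpoint_index sourcetype=your_symantec_sourcetype process_name="symantec.exe"
    | stats count AS agent_events BY host
    | append
        [ | inputlookup lookup.csv
          | rename asset_name AS host
          | eval agent_events=0 ]
    | stats sum(agent_events) AS agent_events BY host
    | eval symantec_installed=if(agent_events > 0, "yes", "no")

If hostname formats differ between the logs and the CSV (FQDN vs. short name, case), normalize them with an eval lower()/replace() before the final stats.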
Hi, from these logs (single index):

2022-03-16 16:43:43.279 traceId="1234" svc="Service1" url="/customer/{customerGuid}" duration=132
2022-03-16 16:43:43.281 traceId="5678" svc="Service3" url="/customer/{customerGuid}" duration=219
2022-03-16 16:43:43.284 traceId="1234" svc="Service2" url="/user/{userGuid}" duration=320
2022-03-16 16:43:44.010 traceId="1234" svc="Service2" url="/shop/{userGuid}" duration=1023
2022-03-16 16:43:44.299 traceId="1234" svc="Service3" url="/shop/{userGuid}" duration=822
2022-03-16 16:43:44.579 traceId="5678" svc="Service2" url="/info/{userGuid}" duration=340
2022-03-16 16:43:44.928 traceId="9012" svc="Service1" url="/user/{userGuid}" duration=543

how do I extract the following information?

- target only traceIds which trigger at least one operation on 'Service2'
- for each traceId, get the first (txStart) and last (txEnd) event timestamps (including all logs for this traceId, not only those of Service2)
- build stats around 'Service2'

Given the example above, I would like to get the following report:

traceId | txStartTs | txEndTs | nbCallsService2 | avgDurationService2
1234 | 2022-03-16 16:43:43.279 | 2022-03-16 16:43:44.299 | 2 | 671.5
5678 | 2022-03-16 16:43:43.281 | 2022-03-16 16:43:44.579 | 1 | 340

Is it possible to achieve this in one query? I have tried appending and joining searches but it doesn't go anywhere. Ideally, I need something like (in broken terms):

index=idx
| stats earliest(_time), latest(_time) by traceId
| join traceId
    [ search index=idx svc="Service2"
      | stats count avg(duration) by traceId ]
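This can be done in one pass without join: eventstats stamps every event with the first/last timestamp of its traceId, then only the Service2 events are kept and aggregated. A sketch against the fields shown above:

    index=idx
    | eventstats min(_time) AS txStart max(_time) AS txEnd BY traceId
    | where svc="Service2"
    | stats min(txStart) AS txStartTs max(txEnd) AS txEndTs count AS nbCallsService2 avg(duration) AS avgDurationService2 BY traceId
    | eval txStartTs=strftime(txStartTs, "%Y-%m-%d %H:%M:%S.%3N"), txEndTs=strftime(txEndTs, "%Y-%m-%d %H:%M:%S.%3N")

traceIds with no Service2 call (9012 in the sample) drop out at the where clause, and with the sample events this yields 2 calls / 671.5 average for 1234 and 1 call / 340 for 5678.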
I want to use a regex/wildcard in the field names, such as "*Warning" or "*Danger", in the map code below:

<option name="mapping.fieldColors">{Warning:0xffd700,Danger:0xe60026}</option>
On search peer, error:

Error [00000010] Instance name "" Search head's authentication credentials rejected by peer. Try re-adding the peer. Last Connect Time: 2022-03-19T21:32:13.000+00:00; Failed 11 out of 11 times.
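This usually clears after removing and re-adding the peer from the search head so the distributed-search keys are re-exchanged; a rough sketch of the CLI (hosts and credentials are placeholders, and the exact flags should be checked against the Distributed Search docs for your version):

    splunk remove search-server https://<peer_host>:8089 -auth admin:<search_head_password>
    splunk add search-server https://<peer_host>:8089 -auth admin:<search_head_password> -remoteUsername admin -remotePassword <peer_admin_password>

If it keeps failing, compare the system time between the search head and the peer, and check $SPLUNK_HOME/etc/auth/distServerKeys on the peer for a stale copy of the search head's key.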