All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

How do I combine the two searches below into one?

1. * orderid | stats count by id
returns something like:
2022-03-21T00:10:16,999Z ...INFO [thread_id=12349, id=VU53ZQCTTMLPG, .....
2022-03-21T00:10:16,995Z ....INFO [thread_id=549, id=F2PAC6ITNX6O3, .....

2. Based on the above response, I need to run the query below after fetching the "id" values. Note that the "id" values vary for different orderid values, and the number of "id" values also varies.
id IN ("VU53ZQCTTMLPG","F2PAC6ITNX6O3")

Thank you
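A sketch of the usual pattern for this, assuming both searches run over the same index (my_index is a placeholder) and that id is an extracted field; Splunk evaluates the bracketed subsearch first and expands its id column into an (id="VU53ZQCTTMLPG" OR id="F2PAC6ITNX6O3") filter for the outer search, which is equivalent to the hand-built id IN (...) list:

index=my_index
    [ search index=my_index orderid
      | stats count by id
      | fields id ]

Whatever processing the second search needs can then be piped onto the end of the outer search.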
I am creating a new index and getting the error below. Please find my configuration underneath.

[splunk@ap2-cclabs658055-idx1 ~]$ /opt/splunk/bin/splunk start

Splunk> Another one.

Checking prerequisites...
Checking http port [8000]: open
Checking mgmt port [8089]: open
Checking appserver port [127.0.0.1:8065]: open
Checking kvstore port [8191]: open
Checking configuration... Done.
Checking critical directories... Done
Checking indexes...
Problem parsing indexes.conf: Cannot load IndexConfig: idx=_audit Configured path 'volume:primary/_audit/db' refers to non-existent volume 'primary'; 1 volumes in config
Validating databases (splunkd validatedb) failed with code '1'.
If you cannot resolve the issue(s) above after consulting documentation, please file a case online at http://www.splunk.com/page/submit_issue
[splunk@ap2-cclabs658055-idx1 ~]$

indexes.conf:

# Parameters commonly leveraged here:
# maxTotalDataSizeMB - sets the maximum size of the index data, in MBytes,
# over all stages (hot, warm, cold). This is the *indexed* volume (actual
# disk space used) not the license volume. This is separate from volume-
# based retention and the lower of this and volumes will take effect.
# NOTE: THIS DEFAULTS TO 500GB - BE SURE TO RAISE FOR LARGE ENVIRONMENTS!
#
# maxDataSize - this constrains how large a *hot* bucket can grow; it is an
# upper bound. Buckets may be smaller than this (and indeed, larger, if
# the data source grows very rapidly--Splunk checks for the need to rotate
# every 60 seconds).
# "auto" means 750MB
# "auto_high_volume" means 10GB on 64-bit systems, and 1GB on 32-bit.
# Otherwise, the number is given in MB
# (Default: auto)
#
# maxHotBuckets - this defines the maximum number of simultaneously open hot
# buckets (actively being written to). For indexes that receive a lot of
# data, this should be 10, other indexes can safely keep the default
# value. (Default: 3)
#
# homePath - sets the directory containing hot and warm buckets. If it
# begins with a string like "volume:<name>", then volume-based retention is
# used. [required for new index]
#
# coldPath - sets the directory containing cold buckets. Like homePath, if
# it begins with a string like "volume:<name>", then volume-based retention
# will be used. The homePath and coldPath can use the same volume, but
# should have separate subpaths beneath it. [required for new index]
#
# thawedPath - sets the directory for data recovered from archived buckets
# (if saved, see coldToFrozenDir and coldToFrozenScript in the docs). It
# *cannot* reference a volume: specification. This parameter is required,
# even if thawed data is never used. [required for new index]
#
# frozenTimePeriodInSecs - sets the maximum age, in seconds, of data. Once
# *all* of the events in an index bucket are older than this age, the
# bucket will be frozen (default action: delete). The important thing
# here is that the age of a bucket is defined by the *newest* event in
# the bucket, and the *event time*, not the time at which the event
# was indexed.
# TSIDX MINIFICATION (version 6.4 or higher)
# Reduce the size of the tsidx files (the "index") within each bucket to
# a tiny one for space savings. This has a *notable* impact on search,
# particularly those which are looking for rare or sparse terms, so it
# should not be undertaken lightly. First enable the feature with the
# first option shown below, then set the age at which buckets become
# eligible.
# enableTsidxReduction = true / (false) - Enable the function to reduce the
# size of tsidx files within an index. Buckets older than the time period
# shown below.
# timePeriodInSecBeforeTsidxReduction - sets the minimum age for buckets
# before they are eligible for their tsidx files to be minified. The
# default value is 7 days (604800 seconds).
# Seconds Conversion Cheat Sheet
# 86400 = 1 day
# 604800 = 1 week
# 2592000 = 1 month
# 31536000 = 1 year

[default]
# Default for each index. Can be overridden per index based upon the volume of data received by that index.
#300GB
#homePath.maxDataSizeMB = 300000
# 200GB
#coldPath.maxDataSizeMB = 200000

# VOLUME SETTINGS
# In this example, the volume spec is not defined here, it lives within
# the org_(indexer|search)_volume_indexes app, see those apps for more
# detail.

One Volume for Hot and Cold
[volume:primary]
path = /opt/splunk/var/lib/splunk
500GB
maxVolumeDataSizeMB = 500000

# Two volumes for a "tiered storage" solution--fast and slow disk.
#[volume:home]
#path = /path/to/fast/disk
#maxVolumeDataSizeMB = 256000
#
# Longer term storage on slower disk.
#[volume:cold]
#path = /path/to/slower/disk
#5TB with some headroom leftover (data summaries, etc)
##maxVolumeDataSizeMB = 4600000

# SPLUNK INDEXES
# Note, many of these use historical directory names which don't match the
# name of the index. A common mistake is to automatically generate a new
# indexes.conf from the existing names, thereby "losing" (hiding from Splunk)
# the existing data.

[main]
homePath = volume:primary/defaultdb/db
coldPath = volume:primary/defaultdb/colddb
thawedPath = $SPLUNK_DB/defaultdb/thaweddb

[history]
homePath = volume:primary/historydb/db
coldPath = volume:primary/historydb/colddb
thawedPath = $SPLUNK_DB/historydb/thaweddb

[summary]
homePath = volume:primary/summarydb/db
coldPath = volume:primary/summarydb/colddb
thawedPath = $SPLUNK_DB/summarydb/thaweddb

[_internal]
homePath = volume:primary/_internaldb/db
coldPath = volume:primary/_internaldb/colddb
thawedPath = $SPLUNK_DB/_internaldb/thaweddb

# For version 6.1 and higher
[_introspection]
homePath = volume:primary/_introspection/db
coldPath = volume:primary/_introspection/colddb
thawedPath = $SPLUNK_DB/_introspection/thaweddb

# For version 6.5 and higher
[_telemetry]
homePath = volume:primary/_telemetry/db
coldPath = volume:primary/_telemetry/colddb
thawedPath = $SPLUNK_DB/_telemetry/thaweddb

[_audit]
homePath = volume:primary/_audit/db
coldPath = volume:primary/_audit/colddb
thawedPath = $SPLUNK_DB/_audit/thaweddb

[_thefishbucket]
homePath = volume:primary/fishbucket/db
coldPath = volume:primary/fishbucket/colddb
thawedPath = $SPLUNK_DB/fishbucket/thaweddb

# For version 8.0 and higher
[_metrics]
homePath = volume:primary/_metrics/db
coldPath = volume:primary/_metrics/colddb
thawedPath = $SPLUNK_DB/_metrics/thaweddb
datatype = metric

# For version 8.0.4 and higher
[_metrics_rollup]
homePath = volume:primary/_metrics_rollup/db
coldPath = volume:primary/_metrics_rollup/colddb
thawedPath = $SPLUNK_DB/_metrics_rollup/thaweddb
datatype = metric

# No longer supported in Splunk 6.3
# [_blocksignature]
# homePath = volume:primary/blockSignature/db
# coldPath = volume:primary/blockSignature/colddb
# thawedPath = $SPLUNK_DB/blockSignature/thaweddb

# SPLUNKBASE APP INDEXES
[os]
homePath = volume:primary/os/db
coldPath = volume:primary/os/colddb
thawedPath = $SPLUNK_DB/os/thaweddb
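For reference, a minimal sketch of how the volume stanza looks in the widely shared example indexes.conf this file appears to be based on. In the configuration pasted above, the "One Volume for Hot and Cold" and "500GB" lines are not commented out; stray uncommented text inside or just above the [volume:primary] stanza is one plausible reason splunkd ends up without a usable 'primary' volume and fails exactly as in the startup output. This is an assumption about the cause, not a verified fix, and the path shown is simply the one from the post:

# One Volume for Hot and Cold
[volume:primary]
path = /opt/splunk/var/lib/splunk
# 500GB
maxVolumeDataSizeMB = 500000

If the volume definitions are instead meant to live in a separate app, as the VOLUME SETTINGS comment suggests, that app has to be deployed to the same indexer; otherwise every homePath/coldPath that references volume:primary will fail in the same way.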
Hello! I am attempting to take a variety of values for a single field and use another search, from a different index, to rename them to more human-readable values. Both indexes have a field with a 1:1 value that I could potentially use with |join; however, I am having issues with the stats table output, where the search either fails to pull up any data or pulls up all data despite searching for a specific value in a field. I have tried |append as well but am not getting the results I expect.

Example:

index=index_ mac_address=* logical_vm=* state=online
| stats latest(physical_vm) as server latest(ip_address) as IP latest(logical_vm) as host by mac_address
| search server=z4c8h2 IP=* host=* name=*
| stats count by server

Output:
mac_address | server | IP | host
xx:xx:xx:xx:xx:xx | z4c8h2 | 10.0.0.0 | vm01.internet.io

index=translate box=z4c8h2
| table human_name

The translate index search shows the name that I would like to substitute for server in the index_ search, but I can't get the stats table to update correctly. Any suggestions on how to format a join/append, or some other method of getting the value to update in the stats output table?
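One possible shape for this, sketched under the assumption that box in the translate index holds the same value as the server field; the index and field names are taken from the question and nothing here is verified against the real data:

index=index_ mac_address=* logical_vm=* state=online
| stats latest(physical_vm) as server latest(ip_address) as IP latest(logical_vm) as host by mac_address
| join type=left server
    [ search index=translate
      | rename box as server
      | table server human_name ]
| eval server=coalesce(human_name, server)
| stats count by server

A lookup populated from the translate index (outputlookup once, then lookup in the main search) tends to behave better than join when the subsearch can return many rows, so that is worth considering if join hits its row or time limits.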
Hi, I am trying to retrieve data from Cisco AMP for Endpoints using the "Cisco AMP for Endpoints Event Input" app. However, a "New Input" cannot be created (the setup stops at the attachment screen), and no data is retrieved from AMP for Endpoints via the API. The API key is correct. Has anyone succeeded in retrieving data using Splunk 8 and app version 2.0.2? I am using the following versions:
Splunk version 8.1.0
Cisco AMP for Endpoints Event Input app 2.0.2
Hello, thank you for taking the time to consider my question. I'm currently working on a solution that would report all outbound IPv4 connections from Windows workstations, but in order to reduce the volume of these logs I'd like to blacklist (or, in another sense, whitelist) some of the normal internal sites that users visit often, so as not to burn through our entire license.

I have been reading the inputs.conf documentation closely, and it's clear that this functionality is possible using regex, but for some reason mine isn't working. I am using Analytics Market's IP range regular expression builder to find the correct syntax, and testing it with the well-known tool regex101. My inputs.conf (with other settings out of scope of this topic removed) is as follows:

[WinNetMon://OutboundMon]
disabled=0
addressFamily=ipv4;ipv6
direction=outbound
index=winnetmon
sourcetype=WinEventLog
packetType=connect;accept
protocol=tcp;udp
blacklist1 = ^10\.(([1-9]?\d|[12]\d\d)\.){2}([1-9]?\d|[12]\d\d)$
blacklist2 = ^192\.168\.([1-9]|[1-9]\d|[12]\d\d)\.([1-9]?\d|[12]\d\d)$

Essentially, just as a test, I am trying to see whether I can eliminate traffic logs for all internal (private) IP ranges, in this case 10.0.0.0/8 and 192.168.0.0/16. If I paste these patterns into regex101 and enter addresses within each of those ranges, they are highlighted; but when I test internal connections and expect no logs to show up, they still populate for destination addresses within those ranges. What gives?

Many thanks in advance
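Two details worth checking, offered as assumptions rather than a verified fix: the 192.168 pattern above never allows an octet of 0 (so 192.168.0.x would not match), and the ^...$ anchors only work if whatever the blacklist is matched against is the bare dotted-quad address with nothing before or after it. A simpler pair of patterns that covers both private ranges, including .0 octets, would look like this:

# hypothetical simplification of the two blacklist patterns above;
# assumes the value being tested is just the remote IPv4 address
blacklist1 = ^10\.\d{1,3}\.\d{1,3}\.\d{1,3}$
blacklist2 = ^192\.168\.\d{1,3}\.\d{1,3}$

If the monitored events carry the address together with a port or other text, dropping the anchors is the first thing to try.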
Hey hey, I'm trying to turn telemetry into a graph. I have a CSV containing: PID,runtime,invoked,usecs,5sec,1min,5min,tty,process. There are a bunch of processes, each with those fields. I want to turn the CSV into three column graphs, each showing the process name against the % CPU used over 5sec, 1min, or 5min (one graph per interval), and I'm confused as to how to accomplish that.
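A minimal sketch of one of the three charts, assuming the CSV has been uploaded as a lookup named process_telemetry.csv (a made-up name) and that the 5sec column already holds the CPU percentage; the same pattern repeats with 1min and 5min for the other two panels:

| inputlookup process_telemetry.csv
| rename "5sec" as cpu_5sec
| table process cpu_5sec
| sort - cpu_5sec

Rendered as a column or bar chart visualization, this gives the process name on one axis and the 5-second CPU percentage on the other; renaming the digit-leading field first avoids having to quote it later in the search.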
Hello. I am using the Jamf Pro Add-on for Splunk (version 2.10.4) to import Jamf data: https://splunkbase.splunk.com/app/4729/

Some of the imported records contain the following error:
<Error><error>The XML was too long</error></Error>

Is there any way to resolve this error? Here is a more detailed description. The inputs are set up as follows:
API Call Name: custom
Search Name: /JSSResource/mobiledevices

There are about 60,000 records, and about 200 of them contain the above error. According to the information on the following site, records with more than 10,000 characters seem to cause this error: https://community.jamf.com/t5/jamf-pro/splunk-jamfpro-api-getting-started/m-p/169054
There is also information that Splunk does not capture data longer than 10,000 characters by default, but we have not changed that setting on the Splunk side.
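If the 10,000-character limit is in play on the Splunk side, it corresponds to the TRUNCATE setting in props.conf, which defaults to 10000 bytes per event. A hedged sketch of raising it for the Jamf data, with a placeholder sourcetype name; this only helps if Splunk-side truncation is the cause, and if the add-on or the Jamf API is producing the error text itself, the limit has to be addressed there instead:

# props.conf on the instance that parses the Jamf add-on data
# (the sourcetype name below is a placeholder - use the one the add-on actually assigns)
[jamf:mobiledevices]
TRUNCATE = 100000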
Hi, how do I build a search that checks whether an endpoint agent is installed on Windows/Linux hosts? Scenario: I have all the assets in a lookup.csv, and I want to run a search that compares the onboarded logs (symantec.exe) with the lookup file containing the asset names, to determine whether the Symantec agent is installed on each host. Thanks in advance
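A sketch of one common pattern for this kind of coverage check. The index, sourcetype, time range, process field, and the lookup column name (asset) are all assumptions standing in for whatever the real data uses:

| inputlookup lookup.csv
| rename asset as host
| eval agent_seen=0
| append
    [ search index=endpoint_logs sourcetype=symantec process="*symantec.exe*" earliest=-24h
      | stats count by host
      | eval agent_seen=1 ]
| stats max(agent_seen) as agent_seen by host
| eval status=if(agent_seen=1, "Symantec agent reporting", "no agent logs found")

Hosts that appear only in the lookup come out with agent_seen=0, which is the list of assets with no Symantec activity in the chosen time window.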
Hi,

From these logs (all in a single index):

2022-03-16 16:43:43.279 traceId="1234" svc="Service1" url="/customer/{customerGuid}" duration=132
2022-03-16 16:43:43.281 traceId="5678" svc="Service3" url="/customer/{customerGuid}" duration=219
2022-03-16 16:43:43.284 traceId="1234" svc="Service2" url="/user/{userGuid}" duration=320
2022-03-16 16:43:44.010 traceId="1234" svc="Service2" url="/shop/{userGuid}" duration=1023
2022-03-16 16:43:44.299 traceId="1234" svc="Service3" url="/shop/{userGuid}" duration=822
2022-03-16 16:43:44.579 traceId="5678" svc="Service2" url="/info/{userGuid}" duration=340
2022-03-16 16:43:44.928 traceId="9012" svc="Service1" url="/user/{userGuid}" duration=543

how do I extract the following information?
- target only traceIds which trigger at least one operation on 'Service2'
- for each traceId, get the first (txStart) and last (txEnd) event timestamps (including all logs for this traceId, not only those of Service2)
- build stats around 'Service2'

Given the example above, I would like to get the following report:

traceId | txStartTs | txEndTs | nbCallsService2 | avgDurationService2
1234 | 2022-03-16 16:43:43.279 | 2022-03-16 16:43:44.299 | 2 | 671.5
5678 | 2022-03-16 16:43:43.281 | 2022-03-16 16:43:44.579 | 1 | 340

Is it possible to achieve this in one query? I tried to append and join searches, but it does not go anywhere. Ideally, I need something like (in broken terms):
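A possible single-query shape, assuming traceId, svc, and duration are already extracted as fields: eventstats computes the per-trace time span over all events before the results are narrowed to Service2, and traces with no Service2 calls drop out at the final stats:

index=idx
| eventstats min(_time) as txStart max(_time) as txEnd by traceId
| search svc="Service2"
| stats min(txStart) as txStartTs max(txEnd) as txEndTs count as nbCallsService2 avg(duration) as avgDurationService2 by traceId
| eval txStartTs=strftime(txStartTs, "%Y-%m-%d %H:%M:%S.%3N"), txEndTs=strftime(txEndTs, "%Y-%m-%d %H:%M:%S.%3N")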
I want to use wildcard/regex field names, such as "*Warning" or "*Danger", in the map option below:
<option name="mapping.fieldColors">{Warning:0xffd700,Danger:0xe60026}</option>
On the search peer I am seeing this error:
Error [00000010] Instance name "" Search head's authentication credentials rejected by peer. Try re-adding the peer. Last Connect Time:2022-03-19T21:32:13.000+00:00; Failed 11 out of 11 times.
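The usual response to this message is what the error itself suggests: remove and re-add the peer from the search head so fresh distributed-search credentials are exchanged. A hedged CLI sketch, with the host and credentials as placeholders:

# on the search head
splunk remove search-server https://<peer-host>:8089 -auth admin:<search-head-password>
splunk add search-server https://<peer-host>:8089 -auth admin:<search-head-password> -remoteUsername admin -remotePassword <peer-admin-password>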
Hi, I have a dashboard with a number of panels. One of the panels needs to output all events for an index under certain conditions (a certain src, port, sourcetype, etc.). The other panels in the dashboard use base searches and output only counts; those panels work. However, the panel outputting the events uses a saved search and NEVER finishes, even when I change the time range to very small ranges like 30 seconds. I need the panel's search to complete, as the stakeholder wants to export the panel's results.
The following is the slow panel on the dashboard, and here is the respective saved search (screenshots in the original post).
Can you please help? Thank you, Patrick
Greetings, I am new to Splunk. I need to know whether it is possible to draw a diagram from the search results below:

Sourceip | Destinationip | Destination Port | Count
1.1.1.1 | 10.10.10.10 | 443 | 200
2.2.2.2 | 10.10.10.10 | 80 | 100
1.1.1.1 | 20.20.20.20 | 1521 | 90
1.1.1.1 | 10.10.10.10 | 445 | 80

I found the "Network Diagram Viz" application, which seems able to do something similar. Do you have any advice regarding this? Please advise.
Hi Team,

Below is my query:

index=os sourcetype=linux_mpio Firmware_Version="----------------------- DISK INFORMATION --------------------------*" host IN (r3ddclxp00003*)
| dedup host
| rex max_match=0 "(?ms)^DISK\=\"(?<DISK>[^\"]+)\"\s+NAME\=\"(?<NAME>[^\"]+)\"\s+HCTL\=\"(?<HCTL>[^\"]+)\"\s+TYPE\=\"(?<TYPE>[^\"]+)\"\s+VENDOR\=\"(?<VENDOR>[^\"]+)\"\s+SIZE\=\"(?<SIZE>[^\"]+)\"\s+SCSIHOST\=\"(?<SCSIHOST>[^\"]+)\"\s+CHANNEL\=\"(?<CHANNEL>[^\"]+)\"\s+ID\=\"(?<ID>[^\"]+)\"\s+LUN\=\"(?<LUN>[^\"]+)\"\s+BOOTDISK\=\"(?<BOOTDISK>[^\"]+)\""
| stats values(_time) AS TIME, values(NAME) as "DISK NAME", list(SIZE) AS SIZE, list(VENDOR) AS VENDOR, list(LUN) AS LUN, list(BOOTDISK) as BOOTDISK by host
| appendcols
    [ search index=os sourcetype=linux_mpio host IN (r3ddclxp00003*) Firmware_Version="------------------------- MULTIPATH STATUS ----------------------------*"
      | dedup host
      | rex max_match=0 "^(?<lines>.+)\n+"
      | eval first_line=mvindex(lines,2,15)
      | rex field=first_line "^(?<name>\w+)\s+(?<uuid>[^ ]+)"
      | stats list(name) AS "MPATH", LIST(uuid) AS UUID BY host ]
| table host, "DISK NAME", VENDOR SIZE, LUN, UUID, MPATH
| rename host as Host, SIZE AS Size, "DISK NAME" AS "Disk Name", VENDOR AS Vendor, LUN AS "LUN ID"

and below is my output (screenshot in the original post).

Would it be possible to add a line break for UUID and MPATH? For example, could we use an if/else condition where, IF VENDOR(1) = LSI, a line break is added for UUID so that the appropriate values are mapped correctly?

Thanks for the help,
Ranjitha N
Running CIM 5.0, I was looking to do some reporting on users/groups added to security groups (information provided by Windows Security Log event 4732), but when I look in the Change data model, I cannot see the target group of the add in any of the fields. We are using the out-of-the-box Splunk_TA_windows (the Splunk Add-on for Microsoft Windows), and I would have hoped that the data model would have been populated automatically with the relevant fields. Am I missing something obvious, or is there something I need to set up myself to get this working? Thanks, Simon
query
| bin _time span=30m
| chart avg(throughput) by _time server

Hi, I want only the avg(throughput) by _time server values that exceed a certain number to be shown. I tried multiple different ways and came up with broken queries, or queries that return empty results, like the following:

# broken query
| where avg(throughput) by _time server > 80

# no results found
| search avg(throughput) by _time server > 80

# broken query
| rename avg(throughput) by _time server as avgthroughput
| where avgthroughput > 80

Would appreciate suggestions! Thank you. P.S. I am a Splunk beginner.
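A sketch of one approach, assuming "query" stands for the base search: compute the averages first, flatten the chart back into rows so a plain where can filter on the value, then pivot back for charting. The 80 threshold is just the example number from the question:

query
| bin _time span=30m
| chart avg(throughput) as avg_throughput by _time server
| untable _time server avg_throughput
| where avg_throughput > 80
| xyseries _time server avg_throughput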
Hi folks, I am looking for a variable that can be used to pass the business transaction name in an email or HTTP request template, similar to the one below for the node name:
${latestEvent.node.name}
Regards, Mohit
Hi, if I have the sample message below, how can I extract the procedure names from it? I'm pretty new to Splunk, so any help or guidance would be great. I would like to extract the highlighted part (the EXEC'd procedure name).

...lib.service.exc.ServiceHTTPError: service=svysvc_v2 url=http://x-vip/v2/sys/505019269 http_status=500 error={"code":500,"status":500,"error":{"service":null,"reason":"Unhandled exception","type":"unhandled-exception","error":"OperationalError: (pymssql._pymssql.OperationalError) (10316, b'The app domain with specified version id (84271) was unloaded due to memory pressure and could not be found.DB-Lib error message 20018, severity 16:\\nGeneral SQL Server error: Check messages from the SQL Server\\n')\n[SQL: \n \n \n EXEC Database01.dbo.Procedure01\n @ids=%(ids)s,@den(den)s,@Visible=(visible)s;\n ...

I'm trying to extract all the failed procedures from the error logs and get the counts for each procedure. Thank you
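A hedged sketch of one way to pull the procedure name out with rex, assuming every failure log contains an "EXEC <database>.<schema>.<procedure>" fragment like the sample above; the index and search terms on the first line are placeholders:

index=app_logs "ServiceHTTPError" "EXEC"
| rex "EXEC\s+(?<procedure>[A-Za-z0-9_]+\.[A-Za-z0-9_]+\.[A-Za-z0-9_]+)"
| stats count by procedure
| sort - count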
The message format we chose uses a field called scope to control the level of aggregation you want (by request_type, site, zone, or cluster). The scope is set with a dropdown and passed in as a token. I wanted to use multisearch to coalesce the results of 4 different searches, so that if the scope was site, only the results from the site search would be shown.

Actual search:

index=cloud_aws namespace=cloudship lambda=SCScloudshipStepFunctionStats metric_type=*_v0.3
| spath input=message
| multisearch
    [search $request_type_token$ | where "$scope_token$" == "request_type" ]
    [search $request_type_token$ $site_token$ | where "$scope_token$" == "site"]
    [search $request_type_token$ $site_token$ $zone_token$ | where "$scope_token$" == "zone"]
    [search scope=$scope_token$ $request_type_token$ $site_token$ $zone_token$ $cluster_token$ | where "$scope_token$" == "cluster"]
| timechart cont=FALSE span=$span_token$ sum(success) by request_type

Search after token substitution with literal values:

index=cloud_aws namespace=cloudship lambda=SCScloudshipStepFunctionStats metric_type=*_v0.3
| spath input=message
| multisearch
    [search request_type="*" | where "site" == "request_type" ]
    [search request_type="*" site="RTP" | where "site" == "site"]
    [search request_type="*" site="RTP" zone="*" | where "site" == "zone"]
    [search scope=site request_type="*" site="RTP" zone="*" cluster="*" | where "site" == "cluster"]
| timechart cont=FALSE span=hour sum(success) by request_type

BUT ... the results of this query are equivalent to no search at all; I basically do not filter anything.

index=cloud_aws namespace=cloudship lambda=SCScloudshipStepFunctionStats metric_type=*_v0.3
| spath input=message
| timechart cont=FALSE span=hour sum(success) by request_type

This query and the one above give the same result. What am I missing here? When I execute each part of the multisearch separately, the results are correct: I get empty results for all but the 'where "site" == "site"' search. But when I run the whole query, I get no filtering at all. Help!
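One thing that stands out: multisearch is a generating command and is documented to be the first command in a search, with each bracketed subsearch being a self-contained streaming search. In the query above it sits in the middle of a pipeline, after index=... | spath, which could explain why the bracketed where clauses appear to have no effect. A hedged restructuring along those lines (same tokens, base search terms repeated inside each leg, spath moved after the multisearch), offered as a sketch rather than a tested fix:

| multisearch
    [ search index=cloud_aws namespace=cloudship lambda=SCScloudshipStepFunctionStats metric_type=*_v0.3 $request_type_token$
      | where "$scope_token$" == "request_type" ]
    [ search index=cloud_aws namespace=cloudship lambda=SCScloudshipStepFunctionStats metric_type=*_v0.3 $request_type_token$ $site_token$
      | where "$scope_token$" == "site" ]
    [ search index=cloud_aws namespace=cloudship lambda=SCScloudshipStepFunctionStats metric_type=*_v0.3 $request_type_token$ $site_token$ $zone_token$
      | where "$scope_token$" == "zone" ]
    [ search index=cloud_aws namespace=cloudship lambda=SCScloudshipStepFunctionStats metric_type=*_v0.3 scope=$scope_token$ $request_type_token$ $site_token$ $zone_token$ $cluster_token$
      | where "$scope_token$" == "cluster" ]
| spath input=message
| timechart cont=FALSE span=$span_token$ sum(success) by request_type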
Looking for some help with this one. I'm building a few charts that are meant to serve as vulnerability trending. Our data is uploaded to Splunk on a daily basis. However, what I did not account for is a manual push occurring during troubleshooting or when data is changing rapidly. What I had was a search that counts the number of times severity=critical appears in the uploaded data by _time; the problem is that a manual push sometimes gives a day extra data. In the table below, one day shows 86 records when it should be 60.

index="foobar"
| where severity="Critical"
| bucket _time span=1d as day
| eventstats latest(_time) as Last
| stats count(severity) by day, Last
| eval First=strftime(First,"%H:%M:%S")
| eval Last=strftime(Last,"%Y/%m/%d:%H:%M:%S")
| eval day=strftime(day,"%Y/%m/%d")

day        Last                  count(severity)
2022/02/16 2022/03/18:05:34:27   57
2022/02/17 2022/03/18:05:34:27   60
2022/02/18 2022/03/18:05:34:27   86

How can I set my search to only count the entries once per day, restricted to the latest h:m:s?
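A hedged sketch of one way to keep only the most recent push per day. It assumes that all events loaded in the same push land in Splunk within the same one-hour window of index time, which may not hold for every ingestion path; if each push carries its own scan or batch identifier, filtering on the latest value of that field per day would be the more robust variant:

index="foobar" severity="Critical"
| bucket _time span=1d as day
| eval push_window=floor(_indextime/3600)
| eventstats max(push_window) as last_push by day
| where push_window = last_push
| stats count as critical_count by day
| eval day=strftime(day, "%Y/%m/%d")

The one-hour window width is a guess; widen or narrow it to match how long a single upload actually takes.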