All Topics


I installed the Universal Forwarder and no inputs have been added yet, but I still see gradual memory growth. Why is there constant memory growth with the Universal Forwarder? More importantly, in a Kubernetes (K8s) cluster setting every extra MB of memory usage matters. This applies to all Splunk instances except Indexers/Heavy Forwarders.
I'm having difficulty ingesting log data from flat files into Splunk. I'm monitoring six different directories, each containing 100-1000 log files, some of which are historical and will require less ingestion in the future. However, I'm seeing inconsistent results and not all logs are being ingested properly. Here's an example of the issue: when all six monitors ([file-monitor1] through [file-monitor6]) are enabled, I don't see any data from [file-monitor5] or [file-monitor6]. If I disable monitors 1-3, I start seeing logs from [file-monitor5], but not [file-monitor6]. I have to disable monitors 1-5 to get logs from [file-monitor6]. I'm wondering whether Splunk doesn't monitor all inputs at the same time, or whether it ingests monitored files based on timestamp, picking up the earliest file in each folder. Here's my current config for the monitors (each stanza follows the same pattern):

[file-monitor1://C:\example]
whitelist=.log$|.LOG$
sourcetype=ex-type
queue=parsingQueue
index=test
disabled=false

Can anyone provide insight into what might be causing the inconsistent results and what I can do to improve the ingestion process?
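For reference, a minimal sketch of what one such stanza might look like in inputs.conf with the whitelist written as an anchored, escaped regex and an age cutoff for the historical files; the path, index, sourcetype, and the 30-day cutoff are assumptions, not a confirmed fix for the starvation described above:

# inputs.conf (sketch; path, index, sourcetype, and cutoff are assumptions)
[monitor://C:\example]
whitelist = \.(log|LOG)$
sourcetype = ex-type
index = test
ignoreOlderThan = 30d
disabled = false

ignoreOlderThan tells the monitor to skip files whose modification time is older than the given window, which can reduce the backlog the tailing processor has to work through when many historical files sit in the same directories.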
Sometimes I run a really complex query and accumulate results in a lookup table.  I recently tried doing this and including a sparkline, which gave me a field that looked like trend ##__SPARKLINE__##,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,63,55,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0   If I just run "|inputlookup" to visualize that data, I just get the raw data back.  Is there a command that turns the stored sparkline data back into a sparkline?
Hi, I have ingested NATS stream details in JSON format into Splunk, and the event looks like the one below. I want to extract key-value pairs from it. Any help is appreciated. Thanks in advance! I am looking to extract the values of the following keys:

messages
bytes
first_seq
first_ts
last_seq
last_ts
consumer_count

JSON format:

{
  "config": {
    "name": "test-validation-stream",
    "subjects": [
      "test.\u003e"
    ],
    "retention": "limits",
    "max_consumers": -1,
    "max_msgs_per_subject": -1,
    "max_msgs": 10000,
    "max_bytes": 104857600,
    "max_age": 3600000000000,
    "max_msg_size": 10485760,
    "storage": "file",
    "discard": "old",
    "num_replicas": 3,
    "duplicate_window": 120000000000,
    "sealed": false,
    "deny_delete": false,
    "deny_purge": false,
    "allow_rollup_hdrs": false,
    "allow_direct": false,
    "mirror_direct": false
  },
  "created": "2023-02-14T19:26:42.663470573Z",
  "state": {
    "messages": 0,
    "bytes": 0,
    "first_seq": 39482101,
    "first_ts": "1970-01-01T00:00:00Z",
    "last_seq": 39482100,
    "last_ts": "2023-03-18T03:10:35.6728279Z",
    "consumer_count": 105
  },
  "cluster": {
    "name": "cluster",
    "leader": "server0.mastercard.int",
    "replicas": [
      {
        "name": "server1",
        "current": true,
        "active": 387623412
      },
      {
        "name": "server2",
        "current": true,
        "active": 387434624
      }
    ]
  }
}
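Since the keys of interest live under the state object, a minimal search-time sketch using spath might look like the following; the index and sourcetype names are assumptions:

index=nats sourcetype=nats:stream
| spath
| rename "state.messages" AS messages, "state.bytes" AS bytes, "state.first_seq" AS first_seq, "state.first_ts" AS first_ts, "state.last_seq" AS last_seq, "state.last_ts" AS last_ts, "state.consumer_count" AS consumer_count
| table messages bytes first_seq first_ts last_seq last_ts consumer_count

spath with no arguments extracts every JSON path in the event, so the nested keys appear as state.messages, state.bytes, and so on before the rename.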
Hi, I have a particular service that we trigger occasionally, and I would like to know the earliest time of each run, i.e. every time it gets kicked off. For example, this is the data:

_time | service | message | Host
2022-07-08T05:47:22.029Z | abc | calling service 123 | host123.com
2022-07-08T05:49:17.029Z | abc | Talking to service 123 | host123.com
2022-10-11T01:00:39.029Z | abc | calling service 123 | host123.com
2022-10-11T01:02:46.029Z | abc | Talking to service 123 | host123.com

The expected outcome would be:

Host | starting_time
host123.com | 2022-07-08T05:47:22.029Z
host123.com | 2022-10-11T01:00:39.029Z

I am aware I have to use streamstats somewhere, but given that all the other fields are identical, taking the earliest time by host won't work. Also, I am backdating the data for 6 months, so I need something that is reasonably efficient. I only care about the starting_time of each run of the service.
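A minimal sketch of the streamstats approach, treating a gap between consecutive events on the same host as the boundary of a new run; the index name and the one-hour gap threshold are assumptions:

index=myindex service=abc
| sort 0 _time
| streamstats current=f last(_time) as prev_time by Host
| where isnull(prev_time) OR _time - prev_time > 3600
| eval starting_time=strftime(_time, "%Y-%m-%dT%H:%M:%S.%3N")
| table Host starting_time

The first event for each Host (prev_time is null) and any event that follows a gap larger than the threshold is kept as a run start.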
Hi, I am exporting logs from my SAS server, but one big event is being split into multiple small events with identical timestamps. I want to combine these small events into one event in Splunk (at index time or search time). Please refer to the _raw log below.

2021-09-16T14:56:13,979 INFO [00000003] :sas - NOTE: Unable to open SASUSER.PROFILE. WORK.PROFILE will be opened instead.
2021-09-16T14:56:13,980 INFO [00000003] :sas - NOTE: All profile changes will be lost at the end of the session.
2021-09-16T14:56:13,980 INFO [00000003] :sas -
2021-09-16T14:56:14,003 INFO [00000006] :sas -
2021-09-16T14:56:14,003 INFO [00000006] :sas - NOTE: Copyright (c) 2016 by SAS Institute Inc., Cary, NC, USA.
2021-09-16T14:56:14,003 INFO [00000006] :sas - NOTE: SAS (r) Proprietary Software 9.4 (TS1M7)
2021-09-16T14:56:14,003 INFO [00000006] :sas - Licensed to MSF -SI TECH DATA (DMA DEV), Site 70251144.
2021-09-16T14:56:14,003 INFO [00000006] :sas - NOTE: This session is executing on the Linux 3.10.0-1160.83.1.el7.x86_64 (LIN X64) platform.
2021-09-16T14:56:14,003 INFO [00000006] :sas -
2021-09-16T14:56:14,003 INFO [00000006] :sas -
2021-09-16T14:56:14,003 INFO [00000006] :sas -
2021-09-16T14:56:14,003 INFO [00000006] :sas - NOTE: Additional host information:
2021-09-16T14:56:14,003 INFO [00000006] :sas -
2021-09-16T14:56:14,003 INFO [00000006] :sas - Linux LIN X64 3.10.0-1160.83.1.el7.x86_64 #1 SMP Mon Dec 19 10:44:06 UTC 2022 x86_64 Red Hat Enterprise Linux Server release 7.9 (Maipo)
2021-09-16T14:56:14,003 INFO [00000006] :sas -
2021-09-16T14:56:14,006 INFO [00000006] :sas - You are running SAS 9. Some SAS 8 files will be automatically converted
2021-09-16T14:56:14,007 INFO [00000006] :sas - by the V9 engine; others are incompatible. Please see
2021-09-16T14:56:14,007 INFO [00000006] :sas - http://support.sas.com/rnd/migration/planning/platform/64bit.html
2021-09-16T14:56:14,007 INFO [00000006] :sas -
2021-09-16T14:56:14,007 INFO [00000006] :sas - PROC MIGRATE will preserve current SAS file attributes and is
2021-09-16T14:56:14,007 INFO [00000006] :sas - recommended for converting all your SAS libraries from any
2021-09-16T14:56:14,007 INFO [00000006] :sas - SAS 8 release to SAS 9. For details and examples, please see
2021-09-16T14:56:14,007 INFO [00000006] :sas - http://support.sas.com/rnd/migration/index.html
2021-09-16T14:56:14,007 INFO [00000006] :sas -
2021-09-16T14:56:14,007 INFO [00000006] :sas -
2021-09-16T14:56:14,007 INFO [00000006] :sas - This message is contained in the SAS news file, and is presented upon
2021-09-16T14:56:14,007 INFO [00000006] :sas - initialization. Edit the file "news" in the "misc/base" directory to
2021-09-16T14:56:14,007 INFO [00000006] :sas - display site-specific news and information in the program log.
2021-09-16T14:56:14,007 INFO [00000006] :sas - The command line option "-nonews" will prevent this display.
2021-09-16T14:56:14,007 INFO [00000006] :sas -
2021-09-16T14:56:14,007 INFO [00000006] :sas -
2021-09-16T14:56:14,007 INFO [00000006] :sas -
2021-09-16T14:56:14,007 INFO [00000006] :sas -
2021-09-16T14:56:14,008 INFO [00000006] :sas - NOTE: SAS initialization used:
2021-09-16T14:56:14,008 INFO [00000006] :sas - real time 0.19 seconds
2021-09-16T14:56:14,008 INFO [00000006] :sas - cpu time 0.08 seconds
2021-09-16T14:56:14,008 INFO [00000006] :sas -
2021-09-16T14:56:14,331 INFO [00000005] :sas - SAH011001I SAS Metadata Server (8561), State, starting
2021-09-16T14:56:14,362 INFO [00000009] :sas - The maximum number of cluster nodes was set to 8 as a result of the OMA.MAXIMUM_CLUSTER_NODES option.
2021-09-16T14:56:14,362 INFO [00000009] :sas - OMACONFIG option 1 found with value OMA.SASSEC_LOCAL_PW_SAVE and processed.
2021-09-16T14:56:15,160 INFO [00000009] :sas - Using AES with 64-bit salt and 10000 iterations for password storage.
2021-09-16T14:56:15,160 INFO [00000009] :sas - Using SASPROPRIETARY for password fetch.
2021-09-16T14:56:15,160 INFO [00000009] :sas - Using SHA-256 with 64-bit salt and 10000 iterations for password hash.
2021-09-16T14:56:15,169 INFO [00000009] :sas - SAS Metadata Authorization Facility Initialization.
2021-09-16T14:56:15,169 INFO [00000009] :sas - SAS is an adminUser.
2021-09-16T14:56:15,169 INFO [00000009] :sas - SASTRUST@SASPWI is a trustedUser.
2021-09-16T14:56:15,170 INFO [00000009] :sas - SASADM@SASPWI is an unrestricted adminUser.

Thanks in advance.
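One search-time approach (a sketch under assumptions, not a definitive fix; the index and sourcetype names and the one-second pause are assumptions) is to group the fragments back together with the transaction command:

index=sas sourcetype=sas:log
| transaction host maxpause=1s maxevents=500

This stitches consecutive events from the same host that arrive within one second of each other into a single combined event; an index-time alternative would be to adjust line breaking in props.conf on the forwarder or indexer.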
Hello, I have a CSV file with 2 fields (field1,field2). The file is monitored and its content is indexed; however, the content of the file is updated on a daily basis and I want to index only the changes to the file.

Example:

Day 1
abcd,100122
abde,100122
abcdf,100122

Day 2 (where the last 2 lines are new in the CSV file and need to be ingested)
abcd,100122
abde,100122
abcdf,100122
bcda,100222
bcdb,100222
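For what it's worth, a minimal sketch of a monitor stanza for such a file; the path, index, and sourcetype are assumptions:

# inputs.conf (sketch; path, index, and sourcetype are assumptions)
[monitor:///data/export/daily.csv]
index = test
sourcetype = csv_changes
disabled = false

A monitor input keeps a seek pointer per file, so when new lines are appended to the end of the CSV only the appended lines are read; if the file is rewritten from scratch each day, Splunk may instead re-read or skip it depending on how its checksum changes.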
I wrote a simple macro, a string builder for a full name, which is passed params for FirstName, MiddleName, and LastName; the first screenshot shows the macro definition. I can pass the values explicitly to the macro but not by reference from the query that invokes the macro; the second screenshot shows the behavior of the macro both when I explicitly pass the values to it and when I attempt to do so by reference, with "Use eval-based definition" NOT checked. If I DO check the box for "Use eval-based definition", I get the following error: "Error in 'SearchParser': The definition of macro 'CRE_getFullNameTEST(3)' is expected to be an eval expression that returns a string." What do I have to do to be able to pass the values contained within FirstName, MiddleName, and LastName to my macro? Thanks for any assistance with this.

Macro definition

SPL that invokes macro
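As a point of comparison, a minimal sketch of a non-eval macro that takes field names by reference; the macro name, argument names, and field names here are assumptions, not the poster's actual definition:

Macro: CRE_getFullName(3), arguments: FirstName, MiddleName, LastName, "Use eval-based definition" unchecked, definition:

eval FullName = $FirstName$." ".$MiddleName$." ".$LastName$

Invocation from a search:

... | `CRE_getFullName(first_name, middle_name, last_name)` | table FullName

Because a non-eval macro does plain text substitution, the field names passed in are dropped into the eval expression and resolved against each event, rather than being treated as literal strings.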
Our KV store, which uses wiredTiger, slowly grows until it consumes the entire cache and then eventually grows beyond the cache until restarted. It requires rolling restarts about once a week, and this has persisted for months. Has anyone else had this issue?
I have two sourcetypes from the same index, both in JSON format. One contains hosts and vulnerability scan data and the other contains hosts and host info. I ultimately want to tie the vulnerability data to the hosts in the other sourcetype and create an outputlookup. The matching field I would like to use is the IP, but the field names are different in each sourcetype: sourcetype1 has the IP field named ipv4s{} and sourcetype2's IP field is called asset.ipv4. I have tried combining them using eval and coalesce, but when I do, the literal string ipv4s{} comes up as the field value instead of the IPs from the two previously mentioned fields. Here is the search I've been trying:

index=index (sourcetype=sourcetype1 OR sourcetype=sourcetype2 | eval IP=coalesce("ipv4s{}","asset.ipv4")
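In eval, double-quoted names are treated as string literals, while single quotes refer to field values; so one sketch of the intended search (field and sourcetype names taken from the question, the closing parenthesis added) might be:

index=index (sourcetype=sourcetype1 OR sourcetype=sourcetype2)
| eval IP=coalesce('ipv4s{}', 'asset.ipv4')
| table IP sourcetype

The single quotes around ipv4s{} and asset.ipv4 are what make coalesce read the fields rather than return the literal text.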
Hello team. Is there an upgrade path for upgrading Splunk on my heavy forwarders, or is it just a matter of installing the new version of the Splunk RPM? I don't see any docs from Splunk about breaking changes or preparations for this upgrade.
Hi Team, I am trying to search for <string1> and <string2>, which appear on different lines of the same log event of about 100 lines. If both match, I want to show _time, String1, and String2 in the result. Please assist me. A sample log looks like the one below:

... 66 lines omitted ...
Linexx
Linexx ]: "<string1>"
Linexx
<string2>

The result should be like: _time, String1, String2
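A rough sketch of one way to do this, assuming the two strings are literal tokens and the index name is a placeholder; the regexes are illustrative only:

index=myindex "<string1>" "<string2>"
| rex "\]:\s+\"(?<String1>[^\"]+)\""
| rex "(?<String2><string2>)"
| table _time String1 String2

Putting both quoted strings in the base search keeps only events that contain both, and the rex commands then pull each value into its own field for the table.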
I have a very simple search, and when I add the sort command I lose almost 90% of my actual results.

index="features" application=kokoapp type=userStats | sort feature | dedup feature | table feature

Without the sort command I get 35 results, and with it included I only get 4 results. Is there something I am missing?
I can set up a query with a simple trendInterval single-value panel comparing against the same time period 24 hours prior. Add that panel to a new, clean Dashboard Studio dashboard and it displays different data for the trend. Add the same exact query to a classic dashboard and everything works as intended.

Dashboard Studio:

"type": "splunk.singlevalue",
"options": {
  "colorMode": "none",
  "drilldown": "none",
  "numberPrecision": 0,
  "sparklineDisplay": "off",
  "trendDisplay": "percent",
  "trellis.enabled": 0,
  "trellis.scales.shared": 1,
  "trellis.size": "medium",
  "trendInterval": "-24h",
  "unitPosition": "after",
  "shouldUseThousandSeparators": true,
  "majorColor": "> majorValue | rangeValue(majorColorEditorConfig)",
  "trendColor": "> trendValue | rangeValue(trendColorEditorConfig)",
  "majorFontSize": 100,
  "trendFontSize": 20

Classic:

"type": "splunk.singlevalue",
"options": {
  "colorMode": "block",
  "drilldown": "none",
  "numberPrecision": 0,
  "sparklineDisplay": "off",
  "trendDisplay": "percent",
  "trellis.enabled": 0,
  "trellis.scales.shared": 1,
  "trellis.size": "medium",
  "trendInterval": "-24h",
  "unitPosition": "after",
  "shouldUseThousandSeparators": true,
  "majorFontSize": 100,
  "majorColor": "> majorValue | rangeValue(majorColorEditorConfig)",
  "trendColor": "> trendValue | rangeValue(trendColorEditorConfig)",
  "trendFontSize": 20
We have some logs coming in the following format:

ERROR | 2023-03-16 01:27:14 EDT | field1=field1_value | field2=field2_value | field3=field3_value | field4=field4_value | field5=field5_value | field6=field6_value | field7={} | message=Message String with spaces.
java.stacktrace.Exception: Exception Details.
at ...
at ...
at ...
at ...

Splunk's default extraction works well in getting all key=value pairs, except for the field "message", where only the first word before the space is extracted and the rest is dropped. To get around this, I used the following inline regex:

| rex field=_raw "message=(?<message>.+)"

This works well in search and extracts the entire message string right up to the newline. But when I use the same regex in the configuration file, it seems to ignore the newline and continues to match everything else all the way to the end of the event. I have tried EXTRACT as well as REPORT (using transforms.conf), but with the same result. Do props.conf/transforms.conf interpret regex differently? To summarize, with default Splunk extraction:

message = Message

With inline rex:

message = Message String with spaces.

With regex in props/transforms:

message = Message String with spaces.
java.stacktrace.Exception: Exception Details.
at ...
at ...
at ...
at ...

Any suggestions on how to use this regex from the configuration? Thank you.
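One way to make the stop-at-newline behavior explicit, regardless of how each extraction path treats the dot, is to use a character class that excludes line breaks; a sketch, with the sourcetype name as an assumption:

# props.conf (sketch; the sourcetype name is an assumption)
[my:sourcetype]
EXTRACT-message = message=(?<message>[^\r\n]+)

Since [^\r\n] can never cross a newline, the extraction ends at the end of the first line whether or not the engine applies dot-matches-all to the configured regex.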
I have a single-value panel. Is it possible to display another panel only after clicking on the single-value one?
I am trying to create a drilldown from a pie chart to a table on the same dashboard. Is it possible? Thanks
I have a lookup table with a DNS blocklist. What query can I use to search for events containing any of the blocklisted domains? I had received advice to create a CSV file with two columns, "Domain" and "suspicious", with "suspicious" set to 1 for all the domains. Then I would search the dns sourcetype for suspicious=1. This did not work.
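A minimal sketch of that approach with the lookup command; the lookup file name and the name of the DNS query field in the events (query) are assumptions:

sourcetype=dns
| lookup dns_blocklist.csv Domain AS query OUTPUT suspicious
| where suspicious=1

The lookup only populates suspicious on events whose query value matches a Domain row in the CSV, so the where clause keeps just the blocklisted domains; note that the match is exact and case-sensitive by default, which is a common reason this pattern appears not to work.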
I am troubleshooting missing data from hosts. I have the name of the index that is missing the data, and I would like to track down the suspect forwarder that is not sending data to that index. I am not interested in the indexer server.
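A quick sketch of one way to see which hosts last reported into the index (the index name is a placeholder):

| tstats latest(_time) as last_seen where index=myindex by host
| eval last_seen=strftime(last_seen, "%Y-%m-%d %H:%M:%S")
| sort last_seen

Hosts whose last_seen value stops well before now are the likely candidates; comparing this list against the expected forwarder inventory points to the forwarder that has gone quiet.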
My GoogleFu is failing me. There are a lot of btool tutorials, but I can't find this solution... I'm on a Windows 10 system, trying to debug the effective config of its universal forwarder client. I get this same message with both Command Prompt (cmd.exe) and with PowerShell. I think it's apparent that it's a variable setting somewhere, but where?
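For reference, a sketch of how btool is typically invoked on a Windows Universal Forwarder; the install path shown is the default and an assumption:

cd "C:\Program Files\SplunkUniversalForwarder\bin"
.\splunk.exe btool inputs list --debug

Running the splunk.exe binary from the forwarder's own bin directory (rather than relying on PATH or SPLUNK_HOME resolution in the shell) avoids most environment-variable surprises in cmd.exe and PowerShell.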