All Topics

I'd like to add a filter for the "src_ip" field to the Traffic Size Analysis Dashboard. Currently, this dashboard doesn't allow you to search by one IP, and I think having that filter would be very helpful. What would be the best way to go about adding this?
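If the dashboard is Simple XML, a minimal sketch of such a filter looks like the following (the token name src_ip_tok is my own choice; this assumes the panels' searches expose a field literally named src_ip):

```
<fieldset submitButton="false">
  <input type="text" token="src_ip_tok" searchWhenChanged="true">
    <label>Source IP</label>
    <default>*</default>
  </input>
</fieldset>
```

Then append src_ip=$src_ip_tok$ to each panel's query. Defaulting the token to * keeps every panel working when no IP has been entered.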
Hi, how can I display the actual value of the difference in a new column? The value is "cts16k1sacc" (row 1 in the attached screenshot). I want to display the actual value of my cmtsID beside the Difference column, for example in a column named "Added" or "Removed", to reflect the numeric value in Difference.
Has anyone used the Splunk Quick Start CloudFormation templates, and what was your experience like? Also, how do these two templates differ?

splunk-enterprise.template.yaml (existing VPC?)
splunk-enterprise-master.template.yaml (new VPC?)

Resources:
https://aws-quickstart.s3.amazonaws.com/quickstart-splunk-enterprise/doc/splunk-enterprise-on-the-aws-cloud.pdf
https://github.com/aws-quickstart/quickstart-splunk-enterprise
Hi! How can I enable multiple databases in the db-agent for monitoring an Oracle database?
Hello! I need to calculate the percentage between the rows in my table. For example:

Search:
| bucket span=10m _time
| stats count by _time

Result:
_time                  count
2020-06-03 16:10:00    27656974
2020-06-03 16:20:00    68834318
2020-06-03 16:30:00    68160616
2020-06-03 16:40:00    67655028
2020-06-03 16:50:00    66023251
2020-06-03 17:00:00    65418711
2020-06-03 17:10:00    36918173

How can I calculate perc1=row2/row1, perc2=row3/row2, and so on?
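One way to compare each row to the previous one is autoregress, which copies a field's value from the prior row into the current row:

```
| bucket span=10m _time
| stats count by _time
| autoregress count AS prev_count
| eval perc = round(count / prev_count * 100, 2)
```

The first row has no prev_count, so its perc stays null; every subsequent row gets count divided by the preceding row's count.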
I am new to Splunk. The cluster command gives me the results I am looking for, and then some. I would like to filter the results of this command with a list of regular-expression patterns that I have stored in a KV store, but I am having a tough time getting the answers I'm looking for. When I run the map command below, it looks like $payload$ ends up with the field's value rather than the field name. The app_critical_warning KV store has a list of regex patterns, with one of the column names being regexp_pattern. Here's the search I have come up with:

index="someindex" msgtype::warning
| cluster t=0.9 showcount=true field=payload
| table cluster_count payload
| map [| inputlookup app_critical_warning | regex $payload$=regexp_pattern ] maxsearches=10

Does anybody have any suggestions on how to go about this task? I can compose the search with all the regex patterns inline, but I would like to maintain them in a KV store for logistical reasons. Thank you!
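A sketch of one possible approach, assuming the stored patterns are valid PCRE and can be combined with "|" alternation: pull all the patterns out of the lookup in a subsearch, join them into one quoted string with return (the quotes matter, because the subsearch's text is substituted literally into eval), and filter with match:

```
index="someindex" msgtype::warning
| cluster t=0.9 showcount=true field=payload
| table cluster_count payload
| eval pattern=[| inputlookup app_critical_warning
    | stats values(regexp_pattern) AS p
    | eval p="\"" . mvjoin(p, "|") . "\""
    | return $p]
| where match(payload, pattern)
```

This avoids map entirely, so there is no maxsearches limit to worry about, but it assumes the combined alternation stays within subsearch size limits.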
Hi, I have a dataset that contains IP addresses. The IP addresses come in variations, separated by backslash sequences according to the ranges they are assigned to. I need them extracted into multiple fields regardless of how many variations there are. See sample data below:

1.2.3.4\n4.5.6.7\n8.9.1.2
1.2.3.4\n4.5.6.7\n
1.2.3.4\n4.5.6.7
1.2.3.4\n4.5.6.7\n8.9.1.2

I need them like this; for example, for 1.2.3.4\n4.5.6.7\n8.9.1.2:

Value1: 1.2.3.4
Value2: 4.5.6.7
Value3: 8.9.1.2
and so on...

So basically, all the values between the backslashes need to be separated out into fields. Also, the letter "n" or any other letters attached to an IP need to go. Thanks in advance!
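Since the separators and stray letters should all be discarded anyway, one way is to skip splitting entirely and extract every IPv4-shaped token with rex max_match=0, which collects all matches into one multivalue field (field names below are my own):

```
| rex field=_raw max_match=0 "(?<ip>\d{1,3}(?:\.\d{1,3}){3})"
| eval Value1=mvindex(ip, 0), Value2=mvindex(ip, 1), Value3=mvindex(ip, 2)
```

Extend the mvindex line for as many positions as you expect; any position beyond the number of IPs found simply comes back null.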
I have created a search in order to:

1. Pull traffic logs from datamodel "DM_1".
2. Use src_ip and dest_ip as tokens to pass a map search into a different index, with src=$src_ip$ and answer=$dest_ip$, so that I can pull the domain (URL) name for dest_ip.
3. Use eval within the map search to pass in values for all the fields from the first search that don't exist in the second search.

The final results should include all the fields from the first search plus the domain field from the second search, via the map command.

| tstats count from datamodel= where earliest="05/28/2020:13:32:00" latest="05/28/2020:13:33:00" log.flags=decrypted log.log_subtype=deny by log.src_ip log.dest_ip log.transport log.app log.dest_port log.src_zone log.flags log.log_subtype _time log.session_end_reason log.rule log.action log.packets_in log.packets_out
| rename log.* AS *
| map search="search index= answer=$dest_ip$ src=$src_ip$
    | eval dest_ip=$dest_ip$
    | eval dest_port=$dest_port$
    | eval rule=$rule$
    | eval map.transport=$transport$
    | eval session_end_reason=$session_end_reason$
    | eval action=$action$
    | eval app=$app$
    | eval flags=$flags$
    | eval log_subtype=$log_subtype$
    | eval packets_in=$packets_in$
    | eval packets_out=$packets_out$
    | eval time=$_time$"
| eval time = strftime(_time, "%m/%d/%Y:%H:%M:%S")
| stats values(domain) as domain, values(dest_ip) as dest_ip, values(dest_port) as dest_port, values(rule) as rule, values(packets_out) as packets_out, values(packets_in) as packets_in, values(session_end_reason) as session_end_reason, values(time) as time, values(action) as action, values(flags) as flags, values(log_subtype) as log_subtype, values(transport) as transport, values(app) as app by src_ip

The problem I run into is that only some of the fields from the first search that do not exist in the second search's index (index_2) return a value (packets_out, packets_in, session_end_reason, rule, dest_port), while others return no value at all (flags, app, action, map.transport).
From the job log, I can see that all of those values were parsed out correctly:

...snip...
06-02-2020 16:31:26.054 INFO SearchParser - PARSING: | eval time='_time'
06-02-2020 16:31:26.054 INFO SearchParser - PARSING: | eval packets_out=8
06-02-2020 16:31:26.054 INFO SearchParser - PARSING: | eval packets_in=5
06-02-2020 16:31:26.054 INFO SearchParser - PARSING: | eval log_subtype=deny
06-02-2020 16:31:26.054 INFO SearchParser - PARSING: | eval flags=nat <<<<< value was parsed out correctly but was missing from output
06-02-2020 16:31:26.054 INFO SearchParser - PARSING: | eval app=ssl <<<<< value was parsed out correctly but was missing from output
06-02-2020 16:31:26.054 INFO SearchParser - PARSING: | eval action=allowed
06-02-2020 16:31:26.054 INFO SearchParser - PARSING: | eval session_end_reason="policy-deny"
06-02-2020 16:31:26.054 INFO SearchParser - PARSING: | eval "map.transport"=tcp
06-02-2020 16:31:26.054 INFO SearchParser - PARSING: | eval rule="PERMIT-WEB"
06-02-2020 16:31:26.054 INFO SearchParser - PARSING: | eval dest_port=443
06-02-2020 16:31:26.055 INFO SearchParser - PARSING: | eval dest_ip="y.y.y.y"
06-02-2020 16:31:26.055 INFO SearchParser - PARSING: | search (index=index_2 earliest=05/28/2020:13:32:00 latest=05/28/2020:13:33:00 answer="y.y.y.y" src="x.x.x.x")
...snip...

However, at the end there is a warning saying that some of the fields are missing from the results:

...snip...
06-02-2020 16:31:27.151 WARN StatsProcessor - Specified field(s) missing from results: 'action', 'app', 'flags', 'log_subtype', 'transport'
...snip...

I don't see the difference between the missing fields and the others, and wonder why the output is inconsistent. Thanks!
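Looking at the parsed evals in the job log, one likely explanation (my reading, not confirmed): the fields that lose their values are exactly the ones whose substituted values arrive as unquoted bare words (eval flags=nat, eval app=ssl, eval action=allowed, eval "map.transport"=tcp). In eval, an unquoted bare word on the right-hand side is a field name, and since no field called nat, ssl, allowed, or tcp exists in index_2, those evals produce null. Numeric values (packets_out=8, dest_port=443) and the values that happened to be substituted with quotes ("policy-deny", "PERMIT-WEB") are fine. A sketch of the fix is to quote the tokens explicitly inside the map search string (escaped quotes, since the whole thing is itself a quoted string):

```
| map search="search index=index_2 answer=\"$dest_ip$\" src=\"$src_ip$\"
    | eval flags=\"$flags$\"
    | eval app=\"$app$\"
    | eval action=\"$action$\"
    | eval transport=\"$transport$\"
    | eval log_subtype=\"$log_subtype$\"
    | eval session_end_reason=\"$session_end_reason$\"
    | eval rule=\"$rule$\""
```

The purely numeric fields can stay unquoted; apply the same quoting to any remaining string-valued evals.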
I recently left a company where I had taken some Splunk training through the Splunk account the company gave me. I now would like to move those course credits to my personal Splunk account where I have some other course credits. Is there a way to merge both accounts or transfer credits from one account to the other?
Hello, I am quite green at Splunk and have a problem I could use some help with. My data comes from a Postgres database via the Splunk DB Connect app, where each input (source) into Splunk is a Postgres table. I am trying to join two sources, which I can do in a regular search, but I'm trying to improve performance since my join search runs quite long, so I am looking at summary indexing. The two sources are as follows:

action_times: action_time, act_id
actions_table: act_id, operation

Here is the base search that returns the expected results:

source="action_times" | join type=inner act_id [search source="actions_table"] | stats count by operation

I have been able to set up a summary index and schedule a report which runs the search above, but actions_table really does not update often, so most subsequent runs of the scheduled report return no events, despite there being tens of thousands of events from action_times.

Sample input with expected output:

Input - action_times
Row 1: action_time = 2020-06-03 11:58:10.123, act_id = 1
Row 2: action_time = 2020-06-03 11:59:18.563, act_id = 2
Row 3: action_time = 2020-06-03 11:55:28.752, act_id = 1

Input - actions_table
Row 1: act_id = 1, operation = "read register"
Row 2: act_id = 2, operation = "write register"

Expected output
Row 1: "read register" - 2
Row 2: "write register" - 1

What I would like to do: use summary indexing to pull in the joined data, either with an actual join command or without one. If there is any other helpful information I can provide, please let me know. Thank you.
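Since actions_table rarely changes, one alternative worth sketching (the lookup name actions_lookup.csv is my own) is to materialize it as a lookup on its own schedule and replace the join with lookup, which sidesteps both the join cost and the timing mismatch between the two sources. A scheduled search refreshes the lookup:

```
source="actions_table"
| dedup act_id
| table act_id operation
| outputlookup actions_lookup.csv
```

Then the summary-indexed search becomes a plain enrichment:

```
source="action_times"
| lookup actions_lookup.csv act_id OUTPUT operation
| stats count by operation
```

lookup does not require matching events to arrive in the same time window, so runs that see only new action_times events still resolve their operations.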
I am trying to make an area chart which shows the average size of the parsing queue over time, and I would like to add a horizontal line as a threshold. I noticed that some logs have different values for max_size_kb, so I thought I could use max to get the value and set my threshold to that, but for some reason my search returns zero results. I don't know why it's not working. If I hardcode a number for zzz it works, but it doesn't work the way it is written now. The value changes between my hosts, so I don't want to hardcode it.

Current SPL:

index=_internal host=$hostToken$ group=queue name=parsingqueue
| stats max(max_size_kb) AS zzz
| timechart avg(current_size_kb) by ingest_pipe
| eval threshold = zzz
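The zero-result behavior is consistent with how stats works: stats max(max_size_kb) AS zzz collapses everything to a single row containing only zzz, so by the time timechart runs there are no _time or current_size_kb values left. A sketch of one workaround (the 600-odd specifics are unchanged; only the structure differs) is to keep the timechart as-is and bolt the threshold on afterwards:

```
index=_internal host=$hostToken$ group=queue name=parsingqueue
| timechart avg(current_size_kb) by ingest_pipe
| appendcols
    [ search index=_internal host=$hostToken$ group=queue name=parsingqueue
      | stats max(max_size_kb) AS threshold ]
| filldown threshold
```

appendcols drops the single-row max into the first row, and filldown copies it into every later row so the threshold renders as a flat overlay series.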
I am trying to create a dashboard that graphs the parsing queue size for a heavy forwarder by ingest_pipe. I noticed that most of these logs have that field, but some don't (I'm not sure why).

Sample logs:

06-03-2020 12:21:30.964 -0400 INFO Metrics - group=queue, name=parsingqueue, max_size_kb=512, current_size_kb=0, current_size=0, largest_size=2, smallest_size=0
06-03-2020 12:21:27.144 -0400 INFO Metrics - group=queue, ingest_pipe=3, name=parsingqueue, max_size_kb=6144, current_size_kb=0, current_size=0, largest_size=2, smallest_size=0
06-03-2020 12:21:27.142 -0400 INFO Metrics - group=queue, ingest_pipe=2, name=parsingqueue, max_size_kb=6144, current_size_kb=0, current_size=0, largest_size=11778, smallest_size=0

Current SPL:

index=_internal host=$hostToken$ group=queue name=parsingqueue
| timechart avg(current_size_kb) by ingest_pipe

I can't modify the search with ingest_pipe=* because I have tokenized the host field, and some of my HFs only have one ingest pipe. In that scenario there is no ingest_pipe field at all, so hardcoding that into the search yields zero results when the HF has only one pipeline. The solution I came up with is to count the events where ingest_pipe exists (yesPipe), count the events where it does not (noPipe), and split by whichever applies: if yesPipe is greater, count by ingest_pipe, else count by host. I don't have the query for these counts and checks. Alternatively, I thought I could use a lookup table with a "count by field" column, where per host I simply specify either ingest_pipe or host to count by. I feel like there is an easy solution and I'm overthinking it. Any ideas?
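There is indeed a simpler route: coalesce returns its first non-null argument, so you can synthesize a pipe label for events that lack ingest_pipe and split on that instead (the "single" label is my own choice):

```
index=_internal host=$hostToken$ group=queue name=parsingqueue
| eval pipe=coalesce(ingest_pipe, "single")
| timechart avg(current_size_kb) by pipe
```

Multi-pipeline HFs chart one series per pipe as before, and single-pipeline HFs chart under the "single" series, with no per-host lookup or conditional query needed.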
I've got about 10 or 12 REST API inputs set up in the add-on that all work fine with 1.5.3, but they stop working whenever I upgrade the add-on to 1.8.x. Is there anything I need to change to make it work? I'm currently on Splunk 7.3.1 with RHEL 7.4.
Hi Splunkers, a particular service stops generating logs in Splunk during downtime and resumes generating logs when the service is restored. I want to find the time difference between the last log generated before the downtime and the first log generated when the service was restored.

Example log times:

6/3/20 12:32:03.000 AM ....... (after the service is up)
6/3/20 11:41:33.000 AM ....... (last log before the service went down)
6/3/20 11:41:20.000 AM
6/3/20 11:41:15.000 AM
6/3/20 11:41:05.000 AM

Waiting to hear solutions from you! Thanks.
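One sketch of this, assuming the events can be sorted ascending and that any gap over 600 seconds counts as downtime (the index, sourcetype, and threshold below are placeholders): streamstats with current=f window=1 carries each event's predecessor's _time forward, so the gap is a simple subtraction.

```
index=my_index sourcetype=my_service_logs
| sort 0 _time
| streamstats current=f window=1 last(_time) AS prev_time
| eval gap_sec = _time - prev_time
| where gap_sec > 600
| eval downtime_min = round(gap_sec / 60, 1)
| table _time prev_time downtime_min
```

The surviving rows are the first events after each outage, with prev_time holding the last event before it.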
2020-05-12 14:34:52,060
2020-05-12 14:34:52,060
2020-05-12 14:34:52,060

I want to remove ####< from my events, so I used props.conf along with transforms.conf with the settings below, but ####< is still not removed from the events.

My props.conf:

[hast_sourcetype]
BREAK_ONLY_BEFORE_DATE =
CHARSET = UTF-8
DATETIME_CONFIG =
LINE_BREAKER = ([\r\n]+)
MAX_TIMESTAMP_LOOKAHEAD = 29
NO_BINARY_CHECK = true
SHOULD_LINEMERGE = false
TRANSFORMS-remove-hash = include-date-item
category = Custom
description = hash_sourcetype
pulldown_type = true

My transforms.conf:

[eliminate-hash-item]
DELIMS = ####<
DEST_KEY=queue
FORMAT=nullQueue

Please help me solve this issue.
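Two things stand out in the config as posted. First, props.conf references a transform named include-date-item while transforms.conf defines [eliminate-hash-item], so the transform never runs. Second, a DEST_KEY=queue / FORMAT=nullQueue transform discards whole events (and is driven by REGEX, not DELIMS); it cannot strip a substring. If the goal is only to remove the leading ####< from each event, SEDCMD in props.conf is the usual tool. A sketch, assuming ####< appears at the start of the event:

```
# props.conf
[hast_sourcetype]
SEDCMD-strip_hash = s/^####<//
```

Drop the ^ anchor (and add the g flag) if ####< can appear elsewhere in the event.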
I have a common file that appears on multiple servers in a server class, which results in duplicated events. I need all the servers in the server class because they have other files that are unique. Is there a way to restrict ingestion of the common file to only one of the servers? I hope this makes sense. Thanks in advance.
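Assuming the inputs are pushed from a deployment server, one way (a sketch; the class, app, and host names are hypothetical) is to move the common file's monitor stanza out of the shared app and into its own app, deployed through a second server class whose whitelist contains only one host:

```
# serverclass.conf on the deployment server
[serverClass:common_file_only]
whitelist.0 = server01.example.com

[serverClass:common_file_only:app:common_file_input_app]
restartSplunkd = true
```

The app common_file_input_app would contain just the inputs.conf monitor for the shared file, while the original server class keeps the inputs for the unique files on every host.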
We want to send custom logs from a WordPress website to our Splunk account, which we already have. We tried the PHP SDK, but it looks like it is deprecated. We would also like to know why the Splunk Software Development Kit for PHP is deprecated. We also tried to connect the WordPress website to Splunk using a plugin called MiragetConnector (https://wordpress.org/plugins/miragetconnector/), but this plugin is also giving an error. Our end goal is to send the logs to Splunk, either using PHP or any WordPress plugin. Please let us know how we can achieve this. If we can do it using a paid service, we are open to that as well.
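Independent of any SDK, Splunk accepts events over the HTTP Event Collector (HEC): a plain HTTPS POST to /services/collector/event with a token in the Authorization header, which any language (including PHP via curl) can do. A minimal sketch in Python to show the shape of the request; the URL, token, and sourcetype below are placeholders you would replace with your own:

```python
import json
import urllib.request

HEC_URL = "https://splunk.example.com:8088/services/collector/event"  # hypothetical host
HEC_TOKEN = "00000000-0000-0000-0000-000000000000"  # replace with your HEC token

def build_hec_payload(message, sourcetype="wordpress:log"):
    """Build the JSON body the HTTP Event Collector expects."""
    return json.dumps({"event": message, "sourcetype": sourcetype})

def send_to_splunk(message):
    """POST one event to HEC; raises urllib.error.HTTPError on failure."""
    req = urllib.request.Request(
        HEC_URL,
        data=build_hec_payload(message).encode("utf-8"),
        headers={"Authorization": "Splunk " + HEC_TOKEN},
    )
    return urllib.request.urlopen(req)
```

The same POST can be issued from PHP with curl, so a small WordPress hook that forwards log lines to HEC avoids depending on the deprecated SDK entirely.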
I have multiple inputs in the dashboard. The first input is for various environments (hardcoded), and the second input is for various accounts from the selected environment (it leverages a search string). I have the first input tokenized as "env"; however, passing it into the second input's search string as environment=$env$ doesn't yield the value from the first input.
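Referencing $env$ in a second input's populating search does work in Simple XML; the common failure mode is that the first input has no value yet (no default and no selection), so the token is unset and the populating search never runs. A sketch of the cascading pattern (the index and field names are placeholders):

```
<input type="dropdown" token="env" searchWhenChanged="true">
  <label>Environment</label>
  <choice value="prod">prod</choice>
  <choice value="dev">dev</choice>
  <default>prod</default>
</input>
<input type="dropdown" token="account">
  <label>Account</label>
  <search>
    <query>index=main environment=$env$ | stats count by account</query>
  </search>
  <fieldForLabel>account</fieldForLabel>
  <fieldForValue>account</fieldForValue>
</input>
```

Giving the first input a default (or selectFirstChoice="true") guarantees $env$ is populated before the account search fires.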
I want to schedule a Python file which pushes some data into a summary index. How can I schedule the Python file? Regards, Santosh
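One Splunk-native way is a scripted input, which runs the script on a fixed interval; anything it prints to stdout is indexed. A sketch, with hypothetical app, script, and index names:

```
# inputs.conf in a custom app
[script://$SPLUNK_HOME/etc/apps/my_app/bin/push_summary.py]
interval = 3600
sourcetype = summary_push
index = my_summary
disabled = 0
```

If the data can be produced by a search instead of a script, a scheduled saved search ending in | collect index=my_summary is the more idiomatic route to a summary index.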
ERROR ScriptRunner - stderr from '/opt/splunk/bin/python2.7 /opt/splunk/etc/apps/search/bin/sendemail.py "results_link=http://localhost.localdomain:8000/app/Splunk_CiscoISE/@go?sid=rt_scheduler__admin_U3BsdW5rX0Npc2NvSVNF__RMD5f595461cdff80ada_at_1591190776_4988.58" "ssname=CISE_Passed_Authentications" "graceful=True" "trigger_time=1591192208" results_file="/opt/splunk/var/run/splunk/dispatch/rt_scheduler__admin_U3BsdW5rX0Npc2NvSVNF__RMD5f595461cdff80ada_at_1591190776_4988.58/results.csv.gz" "is_stream_malert=False"':
ERROR:root:Connection unexpectedly closed while sending mail to xx@qq.com