All Posts


Where is that installed? By the sound of it, you've probably put it in the cloud if you are creating rules. Do you know if Splunk is running? If it's installed in a cloud instance, have you tried SSHing to the host to see if Splunk is running? Can you ping or traceroute to the host? Not being able to won't necessarily mean anything if there are firewall rules in place. Anyway, if you could originally log in to Splunk and now you can't, then it seems likely that a) Splunk is not running, or b) someone (if not you) has put some kind of firewall restriction in place between you and it.
I have a Splunk Dashboard table with data. This is the JSON below:

{
  "type": "splunk.table",
  "dataSources": { "primary": "ds_zn4Nlcdc" },
  "title": "Some title",
  "options": {
    "columnFormat": {
      "name": { "width": 109 },
      "team": { "width": 60 }
    },
    "headerVisibility": "fixed"
  },
  "description": "Some description.",
  "eventHandlers": [
    {
      "type": "drilldown.customUrl",
      "options": {
        "url": "$row.url.value$",
        "newTab": true
      }
    }
  ],
  "context": {},
  "containerOptions": {},
  "showProgressBar": false,
  "showLastUpdated": false
}

I have event handlers to reroute to the correct URL when drilling down, BUT the hyperlink is applied to the whole row. I want the hyperlink applied to only a specific column so I can have multiple hyperlinks for one row. At the moment I can click any value on the row and be routed to $row.url.value$, but I want to click on a specific column and then be routed to the hyperlink specific to that column.
Thanks, I usually rename those fields to remove the spaces. That way they're much easier to use.
So, you do  | stats count by user group "connection method" if those are the names of your fields. 
Hello and help. I've downloaded Splunk Enterprise and initially was able to connect to the dashboard, then all of a sudden I started to receive the message "This site can't be reached". I've deleted cache and cookies per support and was then nicely led to community support. Also, I deleted and re-added inbound rules for Splunk port 9997 and Splunk Web. Thanks
@isoutamo actually no, in stats that type of field name requires double quotes. It's eval that requires single quotes on the RHS of an expression.
If all you want is a single integer that is the total of all file_count values then stats is the way to go. | rex "..." ``` more query stuff ``` | stats sum(file_count) as Total_Count  
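Outside of SPL, the aggregation itself is simple; here is a minimal Python sketch of what stats sum(file_count) computes, using made-up sample counts (the real values would come from the rex extraction):

```python
# Hypothetical file_count values extracted from each "Stopping iteration" event
# (sample data for illustration only)
file_counts = [2000, 1500, 2000, 750]

# stats sum(file_count) as Total_Count collapses everything into one number
total_count = sum(file_counts)
print(total_count)  # 6250
```

The point is that stats discards the individual rows and returns only the aggregate, which is why it gives a single integer rather than a row per event.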
I just want to check an index for the following information and have it displayed in a chart. I'm looking for help with the structure of the search: I'd like the username, the group, and the connection method.
If you have a field called "connection method" you must surround it with quotes (single quotes on both sides of the field name). This tells Splunk that it is one field name, not two separate fields. |stats count by username, group, 'connection method'
What does that mean, what have you tried that you need help with, and what is not doing what you expect?
I need help with the structure of this search. I would like to display the username, the group and the connection method: index=indexname |stats count by username, group, connection method |sort -count
There are 3 apps I have used for network graphs - all good:
https://splunkbase.splunk.com/app/3767
https://splunkbase.splunk.com/app/4611
https://splunkbase.splunk.com/app/4438
The first two are good network graphs (one does 3D), and the last allows custom icons. I use all 3 for slightly different purposes. All are for classic dashboards.
Have you checked that this REGEX and also your TIME_PREFIX are working? Try them on e.g. regex101.com. I'm not sure what else you could have in TIME_PREFIX. This seems to be working at least on regex101.com, but check that it also works with the Splunk rex command. Btw, which HEC endpoint are you using? Some of those do not extract timestamps!
Hey Rich, that works and I get the total at the bottom, but it also shows every single row. For example, I had 98 events and the total was 157,000, but it shows every single event and all the columns.
One way is with addcoltotals: | rex "..." ``` more query stuff ``` | addcoltotals file_count
Thanks, this helps extract the number. How do I do the sum at the end? In 24 hours I could have 96 * 2000 file uploads.
It would help to know what you've tried so far, but getting the other field is just a matter of extending the regex. "Stopping\siteration[\s\-]+(?<stop_reg_id>[^:\s]+):\s*(?<file_count>\d+)"
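The extended regex can be sanity-checked outside Splunk against the sample event from the question. A minimal Python sketch (note that Python spells named groups (?P&lt;name&gt;) where rex accepts (?&lt;name&gt;)):

```python
import re

# Same pattern as the rex above, with Python-style named groups
pattern = re.compile(r"Stopping\siteration[\s\-]+(?P<stop_reg_id>[^:\s]+):\s*(?P<file_count>\d+)")

event = "Stopping iteration - 1900000000: 2000 Files accepted"
m = pattern.search(event)
print(m.group("stop_reg_id"))  # 1900000000
print(m.group("file_count"))   # 2000
```

The [^:\s]+ stops the ID capture at the colon, and \s*(?P&lt;file_count&gt;\d+) then picks up the file count that follows it.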
So basically I need the total number of files I uploaded in a 24-hour period, once I get that figure extracted.
So I have an index: index=xxxxxx "Stopping iteration". I have the rex for getting the unique ID. Event sample: Stopping iteration - 1900000000: 2000 Files accepted. So my current rex is rex "Stopping\siteration[\s\-]+(?<stop_reg_id>[^:\s]+)" and it extracts the 1900000000. I want to extract the 2000 number and then do a count for 24 hours. Any help would be great.
on HEC - I tried the following by moving the TIME definitions under the source (for all 3 sources) in props.conf and removed them from the sourcetype. Restarted Splunk, but it still did not work.

[source::http:aws-lblogs]
EXTRACT-elb = ^\s*(?P<type>\S+)(\s+(?P<timestamp>\S+))(\s+(?P<elb>\S+))(\s+(?P<client_ip>[\d.]+):(?P<client_port>\d+))(\s+(?P<target>\S+))(\s+(?P<request_processing_time>\S+))(\s+(?P<target_processing_time>\S+))(\s+(?P<response_processing_time>\S+))(\s+(?P<elb_status_code>\S+))(\s+(?P<target_status_code>\S+))(\s+(?P<received_bytes>\d+))(\s+(?P<sent_bytes>\d+))(\s+"(?P<request>[^"]+)")(\s+"(?P<user_agent>[^"]+)")(\s+(?P<ssl_cipher>\S+))(\s+(?P<ssl_protocol>\S+))(\s+(?P<target_group_arn>\S+))(\s+"(?P<trace_id>[^"]+)")(\s+"(?P<domain_name>[^"]+)")?(\s+"(?P<chosen_cert_arn>[^"]+)")?(\s+(?P<matched_rule_priority>\S+))?(\s+(?P<request_creation_time>\S+))?(\s+"(?P<actions_executed>[^"]+)")?(\s+"(?P<redirect_url>[^"]+)")?(\s+"(?P<error_reason>[^"]+)")?
EVAL-rtt = request_processing_time + target_processing_time + response_processing_time
priority = 1
SHOULD_LINEMERGE = false
TIME_PREFIX = ^.*?(?=20\d\d-\d\d)
TIME_FORMAT =
MAX_TIMESTAMP_LOOKAHEAD = 28

[aws:elb:accesslogs]
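The TIME_PREFIX from that stanza can be checked in isolation. A minimal Python sketch, using a made-up ALB access-log line (the line content is an assumption; only the prefix regex comes from the stanza):

```python
import re

# TIME_PREFIX from the stanza: lazily skip everything before the first "20YY-MM"
time_prefix = re.compile(r"^.*?(?=20\d\d-\d\d)")

# Hypothetical ALB access-log line: type field first, then the ISO8601 timestamp
line = 'http 2018-07-02T22:23:00.186641Z my-loadbalancer 192.168.131.39:2817 ...'

m = time_prefix.search(line)
# Splunk starts timestamp recognition right after the prefix match;
# the 27-character timestamp fits within MAX_TIMESTAMP_LOOKAHEAD = 28
rest = line[m.end():]
print(rest[:27])  # 2018-07-02T22:23:00.186641Z
```

If the prefix matches but the timestamp still isn't picked up, it's worth checking whether the HEC endpoint in use (e.g. /services/collector/raw vs /services/collector/event) performs timestamp extraction at all, as noted above.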