Activity Feed
- Got Karma for Re: How to display last connection status in a chart?. 06-05-2020 12:49 AM
- Karma Re: Performance issue with auto-refreshing dashboard for jeffland. 06-05-2020 12:47 AM
- Karma Re: How to generate multiple charts from one search result faster? for Raghav2384. 06-05-2020 12:47 AM
- Got Karma for What do backticks do in searches?. 06-05-2020 12:47 AM
- Got Karma for Using Search Tutorial data, how to search for the total number of items sold, the best selling item, and number of the best selling item sold by country?. 06-05-2020 12:47 AM
- Got Karma for Re: Why is rex failing to extract a field and getting error "Regex: unmatched parentheses"?. 06-05-2020 12:47 AM
- Got Karma for Re: Why is rex failing to extract a field and getting error "Regex: unmatched parentheses"?. 06-05-2020 12:47 AM
- Posted Re: Tear down entire cluster (search head and indexer) every week on Deployment Architecture. 06-25-2019 07:15 AM
- Posted Tear down entire cluster (search head and indexer) every week on Deployment Architecture. 06-24-2019 01:38 PM
- Posted Re: Can you help me create a regex expression that captures text with a comma? on Splunk Search. 09-21-2018 12:03 PM
- Posted Re: How to display last connection status in a chart? on All Apps and Add-ons. 09-21-2018 11:56 AM
- Posted Re: How do you set up a time range from 10 pm to 4am for a scheduled hourly report? on Reporting. 09-21-2018 11:41 AM
- Posted Re: How to use encrypted credentials (storage/passwords) in the REST API Modular Input? on All Apps and Add-ons. 08-21-2018 06:28 AM
- Posted Re: Palo Alto: Adaptive Response: Tag to Dynamic Address List requires commit? on All Apps and Add-ons. 03-15-2018 09:48 AM
- Posted Palo Alto: Adaptive Response: Tag to Dynamic Address List requires commit? on All Apps and Add-ons. 03-08-2018 07:11 AM
- Posted Splunk App for Enterprise Security: Why do the Threatintel lookup files not work unless used after the table command? on Splunk Enterprise Security. 10-15-2015 12:11 PM
- Tagged Splunk App for Enterprise Security: Why do the Threatintel lookup files not work unless used after the table command? on Splunk Enterprise Security. 10-15-2015 12:11 PM
- Posted Re: What is the source of indexing lag and how to fix it? on Getting Data In. 06-16-2015 04:52 PM
- Posted Re: What is the source of indexing lag and how to fix it? on Getting Data In. 06-16-2015 04:27 PM
- Posted Re: What is the source of indexing lag and how to fix it? on Getting Data In. 06-16-2015 04:24 PM
06-25-2019
07:15 AM
You mean Docker?
06-24-2019
01:38 PM
Hello,
I have a customer who wants to tear down the entire cluster every week.
Long story short, they do not want long-lasting VMs.
Does anyone know where I can find a reference document or a statement from Splunk on whether this is recommended?
I have done this from time to time when there were specific reasons (OS going out of support, moving to AWS or Azure),
but not as regular maintenance work.
Is anyone else doing this?
09-21-2018
12:03 PM
Try
| rex "MDM,(?<tmp>[^,]+),"
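A quick way to test this against a throwaway event (the sample _raw value below is made up purely for illustration):
| makeresults
| eval _raw="prefix MDM,some text here,suffix"
| rex "MDM,(?<tmp>[^,]+),"
| table tmp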
09-21-2018
11:56 AM
1 Karma
Try
Connecting OR "Connect or create consumer failed with exception" OR "Retry connecting in"
| timechart span=10min first(message) as message
09-21-2018
11:41 AM
Try the search below:
index=test test test=test earliest=-6h@h
| eval hour=strftime(_time,"%H")
| eval current_hour=strftime(now(),"%H")
| eval hour_filter=if(current_hour>4,21,0)
| where hour>hour_filter
| ...
At 11 PM, current_hour=23 and thus hour_filter=21.
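A variant of the same idea, in case the goal is simply to keep events whose own hour falls in the 10 PM to 4 AM window (tonumber is only there to make the comparison explicitly numeric):
index=test test test=test earliest=-6h@h
| eval hour=tonumber(strftime(_time,"%H"))
| where hour>=22 OR hour<4
| ...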
08-21-2018
06:28 AM
Hello Damien, avilandau,
Were you able to encrypt the password in inputs.conf with the above suggestion?
I have tried replacing the auth_user parts as shown above, but no luck.
Am I missing something?
I am using version 1.5.3 and, yes, I am not sure where encrypted_username and encrypted_password are actually being used...
Can anyone shed some light on rest.py?
03-15-2018
09:48 AM
Yes, we have created a separate account specifically for this feature, with the correct capabilities.
The IP is tagged correctly and is added to the group correctly, but the issue is that it requires a manual commit.
The only difference I see is the use of Panorama, which we do not have.
If I am reading your answer correctly, the dest_ip is added to this DAG as soon as the query completes, without any further action?
We have given the "commit" capability to the account as well, but we still need to commit the changes manually for the new IP to be added to the group.
03-08-2018
07:11 AM
Hello,
I am using Palo Alto App for Splunk and its adaptive response feature.
We have done some troubleshooting and testing, and based on what we have accomplished so far, I have a few questions:
Commit required
According to the documentation:
"The IP is tagged on the firewall immediately, however, it can take up to 60 seconds for the tagged IP addresses to show up in the corresponding Dynamic Address Group in the security policy. This delay is intentional to prevent accidental DoS scenarios."
We've waited a couple of minutes or more, but we found that an admin has to initiate a "commit" for the IP to be included in the group.
This is the command we tried:
index=pan_logs sourcetype=pan:threat host=$PA_FIREWALL$ category=malware vendor_action=allowed dest_zone=internal
| stats count by src_ip
| pantag device="$PA_FIREWALL$" action=add tag="SplunkBlock" ip_field="src_ip"
Change is not visible
We are getting Palo Alto logs from the device, and for config-type logs, the following custom format is used:
$receive_time $admin $host $client $cmd $result $path $before-change-detail $after-change-detail
Strangely, we do not see any log related to the IP being added to the tag or to the group.
Is this expected behaviour, or are we missing some field in the syslog settings?
Thanks!
10-15-2015
12:11 PM
Hello,
I am using the threat intelligence lookup files from the Splunk App for Enterprise Security, and the lookup file (e.g. threatintel_by_domain) gives an error when it is not used after table.
For example,
index=* sourcetype=bluecoat | table cs_host user | lookup threatintel_by_domain.csv domain as cs_host OUTPUT threat_collection | search threat_collection=*
works, but
index=* sourcetype=bluecoat | lookup threatintel_by_domain.csv domain as cs_host OUTPUT threat_collection | search threat_collection=* | table cs_host user
gives an error saying "The lookup table 'threatintel_by_domain.csv' does not exist or is not available."
All my custom lookup files work without table, but none of the threatintel lookups work without table. I've checked the permissions and they are all global, so it is not a permission issue.
Any suggestion?
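In case it helps with reproducing this, a direct read of the lookup file (same name as above) is a quick way to check whether the search head can see it at all:
| inputlookup threatintel_by_domain.csv
| head 5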
06-16-2015
04:52 PM
We are using Splunk 6.1.x and I've just installed SoS and finished setting it up, and... it doesn't look good.
I went into Indexing > Distributed Indexing Performance, checked the real-time measured indexing rate and latency per Type:index, and found the average latency of this index to be about 30,000 seconds, which is over 8 hours.
I ran the query I found at http://answers.splunk.com/answers/31151/index-performance-issue-high-latency.html, tried index=_internal source=*metrics.log blocked, and found that most of the hosts reporting to this troubled index are exceeding max_size... I guess I'll go change the setting on the forwarder.
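For anyone following along, a slightly more structured variant of that check (group, name, and blocked are fields extracted from metrics.log) might look like this:
index=_internal source=*metrics.log* group=queue blocked=true
| stats count by host, name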
06-16-2015
04:27 PM
This is what it looks like:
06-16-2015
04:24 PM
I've tried your query and found that lagSecs stays at 0, then increases rapidly during general working hours (yes, we are using Splunk for traffic monitoring), and decreases back to 0 after working hours. So it's the same question again: is the issue that the heavy forwarder cannot handle the amount of logs coming in, or is it something else?
06-16-2015
03:50 PM
Hello,
We've been having issues with license usage lately, where we see sudden spikes of EPS from multiple hosts.
Recently, I found the index time (eval indextime=_indextime) and used it to compare against the event timestamp (_time).
It is not shown in the image above, but the initial time for both graphs was exactly 00:00:00; there is clearly a lag at the end, where there is a spike in event count by index time.
So my question is: am I interpreting the graph correctly, i.e. that the forwarder is pushing logs at certain points instead of streaming them continuously?
If so, what could be causing this lag? We have more than 100 devices reporting to our heavy forwarder. Is that too much for a heavy forwarder to handle? Or is it an issue with the hardware (memory or CPU) of the heavy forwarder or the indexer? Again, a similar lag is observed across dozens of our hosts that generate large amounts of logs.
Thanks in advance!
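For reference, this is roughly the comparison I am describing; the index and sourcetype names are placeholders:
index=your_index sourcetype=your_sourcetype earliest=-24h@h
| eval lag_sec=_indextime-_time
| timechart span=10min avg(lag_sec) AS avg_lag max(lag_sec) AS max_lag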
05-21-2015
12:30 PM
Thanks! I'll see what I can find!
05-21-2015
11:11 AM
Also, what am I supposed to look for regarding performance issues in the internal index logs?
05-21-2015
11:10 AM
I haven't checked the internal indexes, but I think I found the issue. We are using Firefox as our default browser and it is consuming 80% of physical memory on average. If switching tabs is slow and the queries I type appear a minute later, it is an issue with the browser and RAM, right?
05-20-2015
09:43 AM
Hi,
We are sharing multiple dashboards with clients, and they automatically refresh every 5 minutes.
The problem is that, from time to time, a dashboard doesn't display its panels properly, as shown below.
Is there something wrong with the XML code, or is it simply a performance issue? We have added a new indexer, which should have improved Splunk's performance, but we are not seeing any difference. Is there a query to check for performance degradation before and after?
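In case it is relevant, one rough comparison I have considered is via the audit index (this assumes access to _audit and that total_run_time is populated for completed searches):
index=_audit action=search info=completed
| timechart span=1d avg(total_run_time) AS avg_run_time max(total_run_time) AS max_run_time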
04-07-2015
09:55 AM
Hello
I've been using the metadata command for many reports and alarms (new hosts added, EPS, reporting status), and now I wonder whether the results of the metadata command are, in fact, reliable. For other searches I can verify by looking at the raw logs, but not with metadata. Can anyone point me to a description of how the metadata command works? The Search Reference PDF doesn't describe where it fetches firstTime, lastTime, and totalCount from. I just want to confirm that what I hope is true is actually true before getting myself in trouble by blindly trusting a command I don't fully understand.
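For context, this is the kind of metadata search I rely on (the index name is a placeholder):
| metadata type=hosts index=your_index
| eval firstTime=strftime(firstTime,"%F %T"), lastTime=strftime(lastTime,"%F %T")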
- Tags:
- metadata
02-26-2015
04:47 AM
Actually, I found an easier way!
I tried dedup 10 sourcetype and it worked like magic.
Thanks for the answers, too!
02-25-2015
06:19 AM
Hello, I have a question about limiting the number of events in a search to reduce the search time.
Currently, I'm trying to get a per-sourcetype summary of EPS (events per second) and log stoppage.
Here is the query I'm currently using:
sourcetype=firewall:web1
| head 10
| stats sparkline count, first(_time) AS LastTime last(_time) AS FirstTime values(index) AS Index values(sourcetype) AS SourceType
| eval timediff=now()-LastTime
| eval duration=LastTime-FirstTime
| eval eps=10/duration
| fields Index SourceType FirstTime sparkline LastTime duration eps timediff
| convert ctime(FirstTime) ctime(LastTime)
The above query gives me the details of the logging activity, and I have tried to reduce the search time by applying | head 10.
The problem is that I have dozens of sourcetypes and would like to get the summary for all of them.
However, when I use sourcetype=*, I only get the first few sourcetypes, because I have limited the search to the first 10 events overall, not to 10 events per sourcetype.
Is there a way to limit the number of events in a search per field value, such as per index or per sourcetype?
I've made a dashboard with one table per sourcetype applying the above query, but it's taking forever.
Any suggestion?
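(As it turned out, dedup did exactly this; see my follow-up above. A rough sketch of that approach, reusing the fields from the query above:)
sourcetype=*
| dedup 10 sourcetype
| stats sparkline count, first(_time) AS LastTime last(_time) AS FirstTime by index sourcetype
| eval duration=LastTime-FirstTime
| eval eps=10/duration
| convert ctime(FirstTime) ctime(LastTime)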
02-12-2015
07:43 AM
works like magic! thanks!
02-10-2015
07:24 PM
Hello,
I have a quick question regarding dashboards. I would like to know whether the search queries I have put in the dashboard panels can be applied after I enter a value such as a user name. What I mean is that, based on the username I provide on the search head (or wherever), the panels would return results; in other words, these panels would give different results depending on the username. Basically, I want to create a dashboard that summarizes user activity.
The search queries I'm using are:
%Windows failed logins (count and source)
Keywords="Audit Failure" (EventCode=4625 OR EventCode=4771) |stats count by user dst src|fields user src dest count|rename dst as System|sort -count
%Windows failed logins (times)
Keywords="Audit Failure" (EventCode=4625 OR EventCode=4771) | bucket _time span=1h | stats count by _time user
%Windows successful logins
EventCode=4624 Logon_Type=10 | stats count by _time host user src | sort -count
If all else fails, my plan is to mimic the time picker to create a user picker... am I going in the right direction?
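For example, if the dashboard had a text input that sets a token (I am calling it user_tok here; the name is arbitrary), a panel search could reference it like this:
Keywords="Audit Failure" (EventCode=4625 OR EventCode=4771) user=$user_tok$
| stats count by user dst src
| rename dst AS System
| sort -count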
01-29-2015
04:12 PM
Thanks again, but the whole point of not using timechart is that I have more than 200 hosts (in this case, series). As a result, when the provided query is run, only the results (avg, max, etc.) for a few hosts are shown and the rest seem to be aggregated as OTHER. Unless there is a way to swap columns and rows, I need to find a better way. The original question was how to add another field (column) to the chart that is already generated, which is possible for a table generated by the stats command. What I meant by adding another field is not limited to avg or max; it also includes fields generated by the eval command, which would let us do all kinds of things.
01-29-2015
03:10 PM
Thanks, but that is the same as the query I put at the end. I need the sum for each day, as shown above, as well as the extra fields (Description 1, 2, etc.).
01-29-2015
03:01 PM
Hello,
I've been using the query provided at http://wiki.splunk.com/Community:TroubleshootingIndexedDataVolume to get the indexed volume by host, and I modified it a little to look for patterns such as log stoppage.
My query is
index=_internal group="per_host_thruput" earliest=-3d@d latest=@d
| chart eval(round(sum(kb), 2)/1024) over series by date_mday
which works fine, but I would like to add another field (another column in the chart) using the eval or eventstats command, such as the average (avg) or peak volume (max). If I were gathering this information with the stats command it would be easy, but the problem is that I need at least 3 days in the time range to see the pattern. Yes, I can use timechart, which works, but I have over 200 devices, which cannot fit in a row (and this must be shown in a dashboard panel).
Currently, my output looks like this:
series 26 27 28
10.0.0.1 50 48 24
10.0.0.2 4 8 2
10.0.0.3 1 1
...
which I would like to turn into this:
series 26 27 28 Description(avg) Description(max) Description
10.0.0.1 50 48 24 xx xx
10.0.0.2 4 8 2 x x
10.0.0.3 1 1 x x Alert
...
Can this be done? Or can it be done using the stats command? The closest I got with stats is this:
index=_internal group="per_host_thruput" earliest=-3d@d latest=@d| eval D1 = max(date_mday) | eval D2 = D1-1 | eval D3 = D2-1 |stats sum(kb) by series D1 D2 D3
but it's not giving sum(kb) by date.
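For completeness, a rough, untested sketch of one way the extra columns might be attached without losing the per-day layout: pivot with a dynamic field name ({date_mday}) instead of chart, so that the eventstats results survive as ordinary columns. The Description names simply mirror the desired layout above:
index=_internal group="per_host_thruput" earliest=-3d@d latest=@d
| stats sum(kb) AS mb by series date_mday
| eval mb=round(mb/1024,2)
| eventstats avg(mb) AS avg_mb max(mb) AS max_mb by series
| eval avg_mb=round(avg_mb,2)
| eval {date_mday}=mb
| fields - date_mday mb
| stats values(*) AS * by series
| rename avg_mb AS "Description(avg)", max_mb AS "Description(max)"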