Activity Feed
- Got Karma for Re: Why is cluster master stuck at "Bundle validation is in progress" indefinitely after configuration-bundle update?. 06-17-2020 04:56 PM
- Karma Re: Why is my timezone configuration in the app directory for my search head cluster being ignored? for ddrillic. 06-05-2020 12:48 AM
- Karma Re: How to compare search results from 2 dates without using subsearch? for sundareshr. 06-05-2020 12:48 AM
- Karma Re: How to compare search results from 2 dates without using subsearch? for cmerriman. 06-05-2020 12:48 AM
- Karma Re: Question about the TA for Microsoft AD for gcusello. 06-05-2020 12:48 AM
- Got Karma for Re: Can Eventgen run on a Universal Forwarder?. 06-05-2020 12:48 AM
- Got Karma for Re: Lookup File Editor App for Splunk Enterprise: Why do other users get error "Client is not authorized" trying to open my CSV Lookup?. 06-05-2020 12:48 AM
- Got Karma for Re: Lookup File Editor App for Splunk Enterprise: Why do other users get error "Client is not authorized" trying to open my CSV Lookup?. 06-05-2020 12:48 AM
- Got Karma for Re: Lookup File Editor App for Splunk Enterprise: Why do other users get error "Client is not authorized" trying to open my CSV Lookup?. 06-05-2020 12:48 AM
- Got Karma for Lookup File Editor App for Splunk Enterprise: Why do other users get error "Client is not authorized" trying to open my CSV Lookup?. 06-05-2020 12:48 AM
- Got Karma for Re: Slack Notification Setup Problems. 06-05-2020 12:48 AM
- Got Karma for Re: Slack Notification Setup Problems. 06-05-2020 12:48 AM
- Got Karma for Re: Can Eventgen run on a Universal Forwarder?. 06-05-2020 12:48 AM
- Got Karma for Re: Can Eventgen run on a Universal Forwarder?. 06-05-2020 12:48 AM
- Karma Re: Dashboards slideshow for martin_mueller. 06-05-2020 12:47 AM
- Karma Re: Where should I check for python.log error messages about generating pdf of scheduled reports? for ronogle. 06-05-2020 12:47 AM
- Karma Re: How to compare a field to another field in a CSV file? for sundareshr. 06-05-2020 12:47 AM
- Got Karma for Dashboards slideshow. 06-05-2020 12:47 AM
- Got Karma for Re: Why is cluster master stuck at "Bundle validation is in progress" indefinitely after configuration-bundle update?. 06-05-2020 12:47 AM
- Got Karma for Import non native python libraries into Splunk. 06-05-2020 12:47 AM
05-22-2019
09:00 AM
Cancelling the bundle push didn't actually work for me. I had to restart the indexer peers (./splunk restart) one at a time. A rolling restart from the CM didn't work either.
05-22-2019
08:59 AM
2 Karma
Still works 4.5 years later
08-01-2018
09:48 AM
I'm not sure if this still works. This was from 2014 and I'm not in that environment anymore.
07-31-2018
02:22 PM
Hello,
I'm looking to enrich my search results with WHOIS data from an API call. I'm trying to create an external search command that takes the domain_name from an event, calls the WHOIS service, and adds the JSON fields it returns to the existing search results. What I have now replaces all of the search results instead of just adding the fields, and it doesn't currently work. I really don't know much about how external search commands work. Can anyone give me pointers, or share existing scripts that I can modify? I'll also add that I need to be able to enrich the data at the indexer tier. Is that possible?
import urllib
import json
import sys

from splunklib.searchcommands import dispatch, StreamingCommand, Configuration

@Configuration()
class ExStreamCommand(StreamingCommand):
    def stream(self, records):
        for record in records:
            domain_name = record.get('domain_name')
            url = "http://whoisserver:8080/whois/%s" % domain_name
            response = urllib.urlopen(url)
            data = json.loads(response.read())
            yield data

if __name__ == "__main__":
    dispatch(ExStreamCommand, sys.argv, sys.stdin, sys.stdout, __name__)
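For what it's worth, a minimal sketch of the change I think I need — merging the returned fields into each incoming record and yielding the record, rather than yielding the bare JSON dict (untested; same imports and WHOIS endpoint as above):

@Configuration()
class ExStreamCommand(StreamingCommand):
    def stream(self, records):
        for record in records:
            domain_name = record.get('domain_name')
            if domain_name:
                url = "http://whoisserver:8080/whois/%s" % domain_name
                response = urllib.urlopen(url)
                data = json.loads(response.read())
                # Copy the WHOIS fields onto the existing record so the
                # original search results are enriched, not replaced.
                for key, value in data.items():
                    record[key] = value
            yield record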
TIA,
Todd
07-24-2017
07:55 AM
Perfect, thank you! I couldn't for the life of me remember where I read that. I did decide to roll out the stanzas that collect the AD logs to the other AD servers, but the inputs that grab topology and replication information only run on one.
07-24-2017
07:07 AM
I thought I read somewhere that the TA should only be installed on one of the AD servers in a forest, but I can't find that statement anymore. Is this correct, or should it be installed on all AD servers?
TIA
06-27-2017
09:19 AM
Are they both supposed to be Europe/Paris? You can run the btool command on the one not acting correctly and see if the config is there.
06-27-2017
08:16 AM
Yeah, add the tz setting to the local/user-prefs.conf that you have. I see no reason why that shouldn't work.
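If it helps, roughly what I mean — assuming the usual [general] stanza of user-prefs.conf (adjust the timezone to yours):

[general]
tz = Europe/Paris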
06-27-2017
07:53 AM
Can you do a "/opt/splunk/bin/splunk btool --debug user-prefs list | less" and search for tz?
06-27-2017
07:51 AM
I believe the first one worked for me.
02-10-2017
07:15 AM
I believe this approach would account for systems where applications are being decommissioned. Here I'm only looking for a port count of 1 where the last recorded time is within the last hour of the search window. Does this look right?
| pivot SecOps__Listening_Ports Unix_Listening_Ports SPLITROW _time, SPLITROW host, SPLITROW dest_port
| eventstats last(_time) AS maxtime
| bucket span=1h _time
| dedup host,dest_port,_time,maxtime
| stats values(maxtime) AS maxtime last(_time) AS lasttime dc(_time) AS hostcount count as portcount by host,dest_port
| where hostcount >= 24 AND portcount = 1 AND lasttime >= relative_time(maxtime, "-1h")
Sorry, also switched it to a pivot table.
02-09-2017
01:19 PM
Ugh, good point. I'll have to think that through.
Thanks again!!!
02-09-2017
01:08 PM
Have been monitored for at least 24 hours.
02-09-2017
01:00 PM
I wanted to avoid false positives for systems being turned up in an active environment.
02-09-2017
10:47 AM
so simple!!
02-09-2017
09:21 AM
Hi,
I'm trying to find new ports that are opened up on a system where I have 24 hours of existing data.
sourcetype=Unix:ListeningPorts
| join host [search sourcetype=Unix:ListeningPorts | bucket span=1h _time | dedup _time,host | stats count AS hostcount by host | table host,hostcount]
| bucket span=1h _time
| dedup host,dest_port,_time
| stats values(hostcount) AS hostcount count as num_data_samples by host,dest_port
| where hostcount = 24 AND num_data_samples = 1
The search takes a few minutes to complete, and I'm trying to get the time down so it can run as an alert. I believe the subsearch is causing the delay, but I'm not sure how else to get the number of times a host reported in a 24-hour period. If I don't use the subsearch, then I can't do the dedup to remove the multiple entries per hour (multiple dest_port values) for a host.
Does anyone have any suggestions on how to make this better?
TIA
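One direction I'm considering — a subsearch-free sketch that uses eventstats to get the per-host hour count instead of the join (untested; same field names as above):

sourcetype=Unix:ListeningPorts
| bucket span=1h _time
| dedup host,dest_port,_time
| eventstats dc(_time) AS hostcount by host
| stats values(hostcount) AS hostcount count AS num_data_samples by host,dest_port
| where hostcount = 24 AND num_data_samples = 1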
01-17-2017
01:55 PM
I'd like to know this as well.
12-14-2016
12:37 PM
Sorry, the answer is in service-now.conf:
[snow_default]
record_count = 50000
12-12-2016
02:08 PM
Hi,
I have a fairly large ServiceNow instance and I want to pull more records than the 1k set by default. I tried increasing the DEFAULT_RECORD_LIMIT in bin/snow_consts.py, but it still only pulls 1000. Any suggestions?
TIA
12-12-2016
12:25 PM
It appears that there is stuff in _internal that is absolutely necessary for this app to work properly. I had not set it to forward to the indexers yet.
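For anyone else who hits this, what I needed was a basic tcpout group in outputs.conf on the search head — a minimal sketch with hypothetical indexer names (_internal is included in the default forwarded indexes once tcpout is configured):

[tcpout]
defaultGroup = my_indexers

[tcpout:my_indexers]
server = indexer1.example.com:9997,indexer2.example.com:9997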
Thanks
12-08-2016
01:43 PM
Hi,
I'm about to pull out what little hair I have left. I have a SH and indexer cluster running 6.5.1. My cluster uses our own SSL certs for server.conf, web.conf, and inputs.conf, which appear to be working fine. I've installed Splunk Stream (splunk_app_stream and Splunk_TA_stream) on my deployment/admin server. I've installed Splunk_TA_stream on my indexers and a heavy forwarder, and I set the location of the server running splunk_app_stream in the inputs.conf of the Splunk_TA_stream on the heavy forwarder. My problem is that the heavy forwarder still does not show up in the Distributed Forwarder Manager, even though I see two-way traffic via tcpdump. Can anyone who has set this up before help me? What information do you need?
Thank you so much in advance,
Todd
10-21-2016
10:26 AM
Heh, I think I found a solution.
| eval timea = if(len("$timeRangeOld.latest$") < 10, relative_time(now(), "$timeRangeOld.latest$"), "$timeRangeOld.latest$") | eval when=if(_time<=timea, "old", "new")
10-21-2016
09:12 AM
One last question on this topic.
Works
eval when=if(_time<=if(isnum(1476860400),1476860400,relative_time(now(),"1476860400")), "old", "new")
eval when=if(_time<=if(isnum("-3d@d"),"-3d@d",relative_time(now(),"-3d@d")), "old", "new")
Doesn't Work
eval when=if(_time<=if(isnum("1476860400"),"1476860400",relative_time(now(),"1476860400")), "old", "new")
eval when=if(_time<=if(isnum(-3d@d),-3d@d,relative_time(now(),"-3d@d")), "old", "new")
My problem is that I can't figure out how to write the query so it can take both calendar and relative times from the time picker. Quotes are needed for the relative times but not for the calendar (epoch) values.
index=f5 instance=test (earliest=1476774000 latest=1476860400) OR (earliest=1476946800 latest=1477033200)
| eval when=if(_time<=if(isnum(1476860400),1476860400,relative_time(now(),"1476860400")), "old", "new")
| eval urlall=split(uri,"?")
| eval url=mvindex(urlall,0)+"*"
| chart count avg(reqtime) as avgtime over url by when
| rename "count: old" as countold "count: new" as countnew "avgtime: old" as avgtimeold "avgtime: new" as avgtimenew
| sort - avgtimeold
| where countold > 100
| head 30
| eval avgtimediff=avgtimenew - avgtimeold
| eval avgtimediffpercent=tostring(floor(avgtimediff*100/avgtimeold))+"%"
| eval countdiff=countnew-countold
| eval countdiffpercent=tostring(floor(countdiff*100/countold))+"%"
| table url,countold,countnew,countdiff,countdiffpercent,avgtimeold,avgtimenew,avgtimediff,avgtimediffpercent
10-19-2016
01:33 PM
Nevermind, I used the rename from cmerriman's response and it seems to work. Thank you both for the great help. This query is nice!
10-19-2016
01:27 PM
index=f5 instance=test (earliest=-4d@d latest=-3d@d) OR (earliest=-2d@d latest=-1d@d)
| eval when=if(_time<=relative_time(now(), "-3d"), "Old", "New")
| eval urlall=split(uri,"?")
| eval url=mvindex(urlall,0)+"*"
| chart count avg(reqtime) as avgtime over url by when
| sort - avgtimeOld
| where countOld > 100
This query doesn't produce results. When I take out the where clause, it does. Am I screwing up the field names? They show in Splunk as "count: Old".
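What fixed it for me, per cmerriman's rename suggestion, was renaming the split-by columns before filtering on them — a sketch of the tail of the query:

| rename "count: Old" AS countOld "count: New" AS countNew "avgtime: Old" AS avgtimeOld "avgtime: New" AS avgtimeNew
| sort - avgtimeOld
| where countOld > 100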