Activity Feed
- Got Karma for Re: How can I set a token to the current logged-in username in SimpleXML dashboards without using Javascript?. 01-24-2022 01:20 AM
- Got Karma for Re: Looking for .conf17 FFIEC Cat initiative paper. 06-05-2020 12:50 AM
- Gave Karma to David for Re: Adjust ES Urgency based on Risk. 06-05-2020 12:48 AM
- Got Karma for Re: Searching DNS queries into reports from Splunk Stream. 06-05-2020 12:48 AM
- Got Karma for Re: Searching the _introspection index, why are PerProcess events missing?. 06-05-2020 12:48 AM
- Got Karma for Re: Searching DNS queries into reports from Splunk Stream. 06-05-2020 12:48 AM
- Got Karma for Re: Searching DNS queries into reports from Splunk Stream. 06-05-2020 12:48 AM
- Got Karma for Re: How to prevent users from writing to indexes?. 06-05-2020 12:48 AM
- Got Karma for Re: How to prevent users from writing to indexes?. 06-05-2020 12:48 AM
- Got Karma for Re: How to prevent users from writing to indexes?. 06-05-2020 12:48 AM
- Got Karma for Re: Adjust ES Urgency based on Risk. 06-05-2020 12:48 AM
- Got Karma for Re: When will Cisco Security Suite support Cisco IPS devices on Splunk 6.x?. 06-05-2020 12:47 AM
- Got Karma for Re: How can I set a token to the current logged-in username in SimpleXML dashboards without using Javascript?. 06-05-2020 12:47 AM
- Got Karma for Re: How can I set a token to the current logged-in username in SimpleXML dashboards without using Javascript?. 06-05-2020 12:47 AM
- Got Karma for How can I set a token to the current logged-in username in SimpleXML dashboards without using Javascript?. 06-05-2020 12:47 AM
- Got Karma for Re: Configuring "additional fields" for a notable event in Enterprise Security (ES). 06-05-2020 12:47 AM
- Got Karma for Re: Configuring "additional fields" for a notable event in Enterprise Security (ES). 06-05-2020 12:47 AM
- Got Karma for Re: Configuring "additional fields" for a notable event in Enterprise Security (ES). 06-05-2020 12:47 AM
- Got Karma for Re: Table cell Highlighting for several field values. 06-05-2020 12:47 AM
- Got Karma for Re: Table cell Highlighting for several field values. 06-05-2020 12:47 AM
Topics I've Started
06-25-2019
08:37 PM
1 Karma
This document can be found here: https://drive.google.com/file/d/1uMmGAW8Vf8lCoZ2arK3Ly2uN8Jx7UXUN/view?usp=sharing
Sorry for the missing links - have moved this from Box to Drive. Thanks!
07-19-2018
10:54 AM
The error "ERROR SSLCommon - Can't read key file /opt/splunk/etc/certs/cert.pem errno=151441516 error:0906D06C:PEM routines:PEM_read_bio:no start line." can be caused if you mistakenly swap the certificate path with the root CA path in the .conf file.
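For example, in a server.conf SSL stanza (paths illustrative, and attribute names vary by Splunk version — older releases used different key names), the two values that commonly get swapped are:

```ini
# server.conf -- illustrative sketch, not a drop-in config.
# The point: serverCert must point at the server certificate (PEM
# containing cert + key), and sslRootCAPath at the CA bundle --
# not the other way around.
[sslConfig]
serverCert = /opt/splunk/etc/certs/cert.pem
sslRootCAPath = /opt/splunk/etc/certs/ca.pem
```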
08-30-2017
03:13 PM
Hello again @wuming79. Your issue in the first screenshot above is that the "top" command is finding the "most prolific command line length values in the dataset" and NOT the "largest command line length value in the dataset."
To demonstrate this point, try this:
sourcetype=xmlwineventlog:microsoft-windows-sysmon/operational EventCode=1
| eval cmdlen=len(CommandLine)
| fields cmdlen
| sort -cmdlen
| table cmdlen
| dedup cmdlen
| head 20
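And if all you want is the single largest value rather than a ranked list, a simpler variant of the same search (same dataset assumptions as above) is:

```
sourcetype=xmlwineventlog:microsoft-windows-sysmon/operational EventCode=1
| eval cmdlen=len(CommandLine)
| stats max(cmdlen) AS longest_cmdline
```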
Or maybe this link.
08-30-2017
02:49 PM
In general, @wuming79, a lot of the searches in the ransomware app are designed for detection and early containment. Whether or not you can use them to "stop" the ransomware depends entirely on the variant of ransomware. I would argue that the best defense against ransomware is user education, combined with a behavior-detecting technology on the endpoint that can observe what's going on and actually take action.
From a Splunk perspective, we have found time and again that many variants of ransomware do not "immediately" take action. Take, for example, the NotPetya wiper, which puts into place a scheduled task that kicks off a reboot routine an hour after infection. If you regularly search for unusual scheduled tasks that shouldn't be there on your endpoints, or for the Windows event that tells you a scheduled task has been added (a further search, which could be automated, can also tell you whether an unusual executable created it), then you can take action. Actions might be taken via a Splunk modular alert, an old-style alert script, Adaptive Response, or manual intervention. Actions might include shutting down the host and notifying a SOC, modifying a network config to isolate the endpoint and protect against lateral movement or network share encryption, etc.
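As a sketch of that scheduled-task hunt (index, sourcetype, and field names here are assumptions — adjust to your own Windows TA deployment), a search over Windows Security logs for event 4698, "A scheduled task was created", might look like:

```
index=wineventlog sourcetype=WinEventLog:Security EventCode=4698
| table _time, host, user, Message
```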
Certainly some ransomware immediately does damage seconds after it is executed. While Splunk is a great platform to find these executions after the fact and to help you bolster your defenses to protect against that variant in the future, it isn't going to stop the infection in those cases.
04-05-2017
12:18 PM
1 Karma
Thank you David. I had a customer that asked this very same question earlier this morning! Also, lovely use of "amalgamation" above.
09-20-2016
09:05 PM
1 Karma
Hi - sorry for the delay here. I think my understanding is that you're talking about a Windows Universal Forwarder, and you don't see the PerProcess component in the _introspection index. I checked a Windows forwarder in my lab (6.4.3, Windows 7 64 bit) and sure enough, even though the introspection app was enabled, I did NOT see PerProcess.
I did get this working. Here's what I did:
1. Copied server.conf within the introspection app from default to local.
2. Edited server.conf and set acquireExtra_i_data = true in two stanzas: [introspection:generator:disk_objects] and [introspection:generator:resource_usage].
3. Because I'm super impatient, I set collectionPeriodInSecs = 60 in both stanzas.
4. Restarted the forwarder.
A few minutes later, the PerProcess component was reporting data where it never had before.
Try something like that and let us know? By the way, this is documented here:
https://docs.splunk.com/Documentation/Splunk/6.4.3/Troubleshooting/ConfigurePIF#Populate_.22Extra.22_fields
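Put together, the local server.conf described above (inside the introspection app) would contain:

```ini
# local/server.conf inside the introspection app
[introspection:generator:disk_objects]
acquireExtra_i_data = true
collectionPeriodInSecs = 60

[introspection:generator:resource_usage]
acquireExtra_i_data = true
collectionPeriodInSecs = 60
```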
09-06-2016
09:46 AM
3 Karma
Hello @brian1_tate, thanks for checking out Stream.
I think Splunk Stream + core Splunk is probably the best way forward for your particular use case of finding malicious DNS communications on your internal network. ES is not going to really add much more functionality here.
We have developed several bits of material that will be directly relevant to what you want to do. They all revolve around using Stream to capture the data. (Query logging from your DNS servers will accomplish the same thing at the end of the day, but it will likely greatly annoy your server administrators.)
Here's what I would look at:
"Hunting the Known Unknowns with DNS" by Ryan Kovar and Steve Brant from our security practice:
https://conf.splunk.com/session/2015/conf2015_SBrant_RKovar_Splunk_SecurityCompliance_HuntingKnownUnkownsWith.pdf
"Random Words on Entropy and DNS" also by Ryan Kovar
http://blogs.splunk.com/2015/10/01/random-words-on-entropy-and-dns/
The recent SplunkLive security hands-on sessions, delivered by me and others, which have a section on DNS exfil:
http://www.slideshare.net/Splunk/splunk-enterprise-for-infosec-handson-breakout-session-65534395
On that last one, there's a complementary document that you can review that has some good example searches to find malicious DNS comms - see the text on the bottom of slide 61. Hope that helps.
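To illustrate the entropy idea from the "Random Words on Entropy and DNS" post above — this is a toy Python sketch, not Splunk code or anything from those materials — Shannon entropy helps flag algorithmically generated domain names because random-looking strings score higher than dictionary words:

```python
import math
from collections import Counter

def shannon_entropy(s: str) -> float:
    """Shannon entropy of s, in bits per character."""
    if not s:
        return 0.0
    n = len(s)
    # Sum -p * log2(p) over the frequency of each distinct character.
    return -sum((c / n) * math.log2(c / n) for c in Counter(s).values())

# DGA-like names tend to score noticeably higher than dictionary words:
print(shannon_entropy("google"))        # lower
print(shannon_entropy("x7f3qz9kd2vw"))  # higher
```

In practice you would compute this per DNS query name and alert on outliers, rather than on a fixed threshold.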
In my humble opinion - this part NOT speaking for Splunk - it makes sense for us to roll some of this functionality into ES in the "Advanced Threat" section of that product. I'll bring this up with our product managers.
06-15-2016
10:00 AM
3 Karma
In Splunk 6.4.0, we added a security check: if users do not have "read" capability on a summary index, they cannot write to that summary index via collect. If this is not working for you (after upgrading to 6.4.x), log a support case and have them reference internal SPL-50063.
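For context, the write path being restricted here is the collect command; the index and sourcetype names below are hypothetical:

```
index=web sourcetype=access_combined
| stats count BY status
| collect index=my_summary
```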
04-30-2016
07:30 AM
@joshd I believe you are correct. I also believe that your responses and @yannK 's responses are providing some needed clarification here. Yes - on Splunk Cloud we provide an outputs.conf, a specific custom SSL cert, and we turn on SSL forwarding with this cert, and we verify the server cert via the "sslVerifyServerCert=true" parameter. We don't do anything for Deployment Server (DS) because as you state, that's on prem.
In Splunk Cloud we do not run DS - you run DS on-prem. If you are using the default certs for this communication, then cert validation is not done by default, and communication between UF and on-prem DS should not be affected. I can also confirm that on a 6.2.3 forwarder here in my lab with completely stock configs, it communicates with my DS (6.3.3) just fine even though I set the date on it to 8/5/16 for testing. This is because there's no default enforcement of cert validation for UF to DS communication.
TL;DR: Splunk Cloud customers do not have default certs for forwarding into Splunk Cloud and should not be affected. IMHO: Splunk Cloud customers running an on-prem DS to configure their UFs who have not changed their default SSL configs should not be affected, because certificate validation is not enforced. I am not the ultimate authority on this matter, however, and would like confirmation from others at Splunk.
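For reference, the validation switch discussed above is set per output group in outputs.conf (the group name here is hypothetical):

```ini
# outputs.conf -- certificate validation is opt-in; with stock
# configs this attribute is effectively off, which is why expired
# default certs don't break UF-to-DS communication.
[tcpout:splunkcloud]
sslVerifyServerCert = true
```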
04-29-2016
09:48 PM
No, @Jscordo. By design, each Splunk Cloud customer is issued certificates specific and unique to their organization. This is what comes in the forwarder app that you are given when you become a Splunk Cloud customer. Splunk Cloud does NOT use the default root certificates that ship with Splunk Enterprise, Light, and Hunk.
11-19-2015
07:11 PM
Thanks, Steve. Note that the best way to do this is to put the changes in a "local" version of commands.conf. See this post here:
https://answers.splunk.com/answers/118670/error-code-255-on-sentiment-app.html
11-19-2015
07:05 PM
This has been seen in distributed environments. The fix is to copy commands.conf from default to local, then set "local = true" for each of the four custom commands. See below.
[sentiment]
filename = sentiment.py
retainsevents = true
streaming = true
# add this line:
local = true
supports_getinfo = false
run_in_preview = true
enableheader = false
#changes_colorder = true
#overrides_timeorder = false
supports_rawargs = true
Do the same for language, tokens, and heat commands in the same file. A restart should not be required.
10-03-2015
09:14 PM
3 Karma
Here is some example code. You need to be running Splunk 6.3 or greater, and you use the "finalized" handler to set a token to the first search results from a REST search that returns the current username (in the field 'title'). The example also shows a "depends" on the table that never gets set ($neverdisplay$), so the table that runs the search remains hidden, although the search runs and the token gets populated.
With the example below you will have a new token called "loggedinuser" that you can then use throughout the rest of the dashboard.
<dashboard>
<label>Token Tester</label>
<row>
<panel>
<table depends="$neverdisplay$">
<title>get a token</title>
<search>
<query>|rest /services/authentication/users splunk_server=local | search [| rest /services/authentication/current-context splunk_server=local | rename username as title | fields title]</query>
<earliest>-60m</earliest>
<latest>now</latest>
<finalized>
<set token="loggedinuser">$result.title$</set>
</finalized>
</search>
</table>
</panel>
<panel>
<title>Token Display</title>
<html>
<h3>Logged In User Token is...</h3>
<div class="custom-result-value">$loggedinuser$</div>
</html>
</panel>
</row>
</dashboard>
10-03-2015
09:11 PM
1 Karma
How can I retrieve the current username of a SplunkWeb user, and use that value in a token so that I can automatically customize subsequent searches on the dashboard to that username? I don't want to use any Javascript and I don't want to have to convert my dashboard to HTML to do this.
09-05-2015
03:50 PM
The Tripwire Enterprise app runs via a scripted input that in turn requires Python. Therefore, the component that retrieves data from the TE console needs to be on either a Heavy Forwarder or a full Splunk instance like a Search Head. The Python scripted input pulls back data and writes it in CSV format to a flat file, and then a standard Splunk monitor input picks it up. My suggestion, to keep things simple and avoid maintaining monitor inputs on all of the search heads in a cluster, is to put the TA portions of the app on a Heavy Forwarder. There is no reason you can't run the rest of the app on a Search Head Cluster (disable the monitor inputs in the app).
08-21-2015
12:38 PM
So Matt, I'm late to the game, but you mention that changes to log_review.conf are not making any difference. Can you go through the more detailed example given by @ekost and let us know what the results are? I'm curious as to the output of btool...
08-03-2015
07:25 AM
3 Karma
The answer that mentions editing of notable2.html is no longer valid in recent versions (3.x) of ES. Instead, copy to local and edit log_review.conf, under $SPLUNK_HOME/etc/apps/SA-ThreatIntelligence/. Place your new field in the log_review.conf file, which should now reside in $SPLUNK_HOME/etc/apps/SA-ThreatIntelligence/local. A restart is not needed.
07-24-2015
09:21 AM
2 Karma
And to access the file in your myapp app, it would be:
http://localhost:8000/en-US/splunkd/__raw/servicesNS/admin/myapp/static/myfile.png
07-13-2015
03:37 PM
3 Karma
You may notice that the Splunk App for Stream has a number of dashboards to allow you to monitor your Stream environment. Also, in version 6.3 we added the "distributed forwarder management" feature that allows different Stream forwarders to monitor different protocols.
In order for these two features to work, your remote Universal Forwarders (UFs) must have a modification to their outputs.conf so that they can write directly to the _internal index within your Splunk environment. This is documented here:
http://docs.splunk.com/Documentation/StreamApp/6.3.0/DeployStreamApp/Deploymentrequirements#Splunk_component_requirements
When you do this, the stream:stats sourcetype will populate data into _internal, and these two features will work.
07-13-2015
03:36 PM
2 Karma
Why don't the default dashboards in the Splunk App for Stream get populated with data? Why can't I assign a forwarder to a group within the Distributed Management capability in the Stream app?
06-02-2015
07:41 PM
Great diagram. Is there an updated one to include Search Head Clustering? New ports required are 8191 for the KV store, and a replication port chosen at implementation time (I have seen 8989 used) for search head cluster members to replicate data.
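For anyone wiring this up, the search head cluster replication port is declared in server.conf on each cluster member; 8989 below is only an example, since the port is chosen at implementation time:

```ini
# server.conf on each SHC member
[replication_port://8989]
```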
05-11-2015
02:57 PM
Interesting question, brodsky.
To fix, in the app, copy commands.conf from default to local. Then add the line "overrides_timeorder = true" at the end. No restart is necessary.
[faroo]
filename = faroo.py
streaming = true
passauth = true
overrides_timeorder = true
05-11-2015
02:55 PM
After installing SA-Faroo and supplying the appropriate API key, why does the following message appear in the Splunk search window?
"The external search command 'faroo' did not return events in descending time order, as expected."
- Tags:
- SA-Faroo
03-18-2015
09:50 AM
Thank you Stefan and Kamermans - I had a customer running into this same issue today and this Answers post allowed me to avoid a ton of testing.
02-25-2015
09:42 PM
Thanks, Raghav!