Activity Feed
- Posted Re: Add Notable Event to an ES Investigation using the API on Splunk Enterprise Security. 05-13-2024 11:37 PM
- Posted Add Notable Event to an ES Investigation using the API on Splunk Enterprise Security. 05-13-2024 10:16 PM
- Posted Re: ERROR: Failed to migrate to storage engine wiredTiger, reason=[blank] on Installation. 12-29-2021 11:00 PM
- Karma Re: SEP 14.2 RU1 log format change for jtwind_2. 06-05-2020 12:50 AM
- Karma Re: DBConnect: OSError: [Errno 2] No such file or directory validate java command: /usr/java/latest/bin/java. for teunlaan. 06-05-2020 12:50 AM
- Got Karma for Splunk ES Duplicate of Notable Events displaying in Incident Review Dashboard. 06-05-2020 12:50 AM
- Karma BucketMover error for pil321. 06-05-2020 12:49 AM
- Got Karma for API Modular input Response Handle JSON. 06-05-2020 12:49 AM
- Got Karma for How to edit my search to track the amount of data being ingested to a specific index, measured in MB/per minute?. 06-05-2020 12:48 AM
- Got Karma for How to edit my search to track the amount of data being ingested to a specific index, measured in MB/per minute?. 06-05-2020 12:48 AM
- Karma Re: Add Picture to Dashboard for Michael. 06-05-2020 12:46 AM
- Posted Re: SEP 14.2 RU1 log format change on All Apps and Add-ons. 08-27-2019 10:49 PM
- Posted Re: Log Event Alert Action not visible when creating alert on Security. 08-15-2019 03:10 AM
- Posted Log Event Alert Action not visible when creating alert on Security. 08-15-2019 01:53 AM
- Tagged Log Event Alert Action not visible when creating alert on Security. 08-15-2019 01:53 AM
- Posted Splunk ES Duplicate of Notable Events displaying in Incident Review Dashboard on Splunk Enterprise Security. 02-03-2019 04:58 AM
- Tagged Splunk ES Duplicate of Notable Events displaying in Incident Review Dashboard on Splunk Enterprise Security. 02-03-2019 04:58 AM
- Posted Re: Datamodel Acceleration questions on Knowledge Management. 01-16-2019 10:53 PM
- Posted Datamodel Acceleration questions on Knowledge Management. 01-16-2019 03:46 AM
- Posted API Modular input Response Handle JSON on All Apps and Add-ons. 01-31-2018 03:00 PM
05-13-2024
11:37 PM
Yeah, that's the document I've been following. I've tried many different combinations and so far nothing has worked. Are you able to share the correct API query to use?
05-13-2024
10:16 PM
I would like to create an investigation with a notable event recorded in it, using the API. So far I have been able to create an investigation and then add an artifact to it via the API. The next step is to insert a notable event into an ES investigation using the API. Alternatively, if it's possible to create an investigation from a notable using the API, I would be happy with that option too.
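For reference, this is roughly the shape of the call I've been experimenting with. It's only a sketch: the endpoint path and payload fields are my assumptions, not a confirmed ES API.
import requests
BASE = "https://es-searchhead:8089"  # management port; placeholder host
AUTH = ("admin", "changeme")  # placeholder credentials
# Create an investigation (the endpoint path here is an assumption)
resp = requests.post(
    BASE + "/services/storage/investigation",
    auth=AUTH,
    verify=False,
    data={"title": "Test investigation", "description": "Created via API"},
)
print(resp.status_code, resp.text)
What I still can't work out is the equivalent call for attaching a notable event rather than a plain artifact.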
12-29-2021
11:00 PM
I had the same issue. For me it was because I didn't have any data in my KV store to migrate. What worked for me was creating a KV store collection and entering some dummy data. When I tried the migration again it was successful.
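If it helps anyone, this is roughly what I did; the collection name and credentials are placeholders, and the endpoints are the standard KV store REST ones:
import requests
BASE = "https://localhost:8089"
AUTH = ("admin", "changeme")  # placeholder credentials
# Define a collection in the search app (the name is arbitrary)
requests.post(
    BASE + "/servicesNS/nobody/search/storage/collections/config",
    auth=AUTH, verify=False, data={"name": "dummy_collection"},
)
# Insert one dummy record so the migration has something to move
requests.post(
    BASE + "/servicesNS/nobody/search/storage/collections/data/dummy_collection",
    auth=AUTH, verify=False, json={"note": "placeholder"},
)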
08-15-2019
03:10 AM
I've found the solution.
To fix this I edited my app's metadata/default.meta, adding alert_logevent to the global import list:
[]
import = app1, app2, alert_logevent
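For anyone who hits the same thing: as I understand it, the global [] stanza in an app's metadata/default.meta controls which other apps' knowledge objects the app imports, so adding alert_logevent to that import list is what makes the Log Event alert action visible from inside the app. (app1 and app2 above are placeholders for whatever your app already imports.)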
08-15-2019
01:53 AM
Hi All,
I am creating an alert in an app I built using the Add-on Builder; my app's name starts with SA-. As part of the alert I would like to use the Log Event trigger action. For some reason, when I am in the context of my app, I cannot see this trigger action option. In the context of other apps, such as Search and other Splunk apps downloaded from Splunkbase, I can see the Log Event trigger action.
Under Settings > Alert actions I have confirmed the Log Event alert action is shared globally.
Confirmed metadata/default.meta in the alert_logevent app contains:
[alert_actions]
export = system
Confirmed my app is also shared globally.
I've made the alert_logevent app visible, which did not work.
I also tried renaming the app to remove the SA- prefix.
If I go to Settings > Searches, reports, and alerts > New Alert and create the alert from the context of my app, I can now see the alert action, but when it runs I get the following error:
ERROR SearchScheduler - Error in 'sendalert' command: Alert action "logevent" not found., search='sendalert logevent results_file="/opt/splunk/var/run/splunk/dispatch/scheduler__nobody_U0EtZGFya21hdHRlci10aHJlYXQtZGV0ZWN0aW9u__6005_at_1565846400_1262_27223330-DB35-4A3A-8767-873F2404D37B/per_result_alert/tmp_5.csv.gz" results_link="https://splunkserver:8000/app/app_name/app_name?q=|loadjob scheduler__nobody_U0EtZGFya21hdHRlci10aHJlYXQtZGV0ZWN0aW9u__6005_at_1565846400_1262_27223330-DB35-4A3A-8767-873F2404D37B | head 6 | tail 1&earliest=0&latest=now"'
08-15-2019 09:20:02.390 +0400 INFO sendmodalert - Invoking modular alert action=logevent for search="6005"
I feel like it is a permission issue, but I am not sure what else to change.
Splunk Enterprise v7.0 and also v7.1.3.
02-03-2019
04:58 AM
1 Karma
Hi Everyone,
I have an issue where I am seeing duplicate notable events for a single event.
Here are the details:
- First off, this is not occurring for every notable; it's random.
- The cron schedule is every 20 minutes, with a time range of -60min.
- There is only one trigger event, and only one event in the notable index.
- The duplicate and the original appear at the same time.
I've also had other issues where filtering out one urgency can impact the counts of the other urgencies. For example, I have 50 highs and 100 lows; I filter out the lows and the high count drops to 25. I'm thinking this randomness may be related to my duplicate notable event issue.
ES version 5.2.0
Splunk Enterprise version 7.1.3.1
01-16-2019
10:53 PM
It's not the answer I was hoping for, but thank you for clarifying this.
01-16-2019
03:46 AM
Hi Everyone,
I have a few questions which I haven't been able to find answers to.
I have more than one search head cluster searching across the same indexer cluster. I would like to use the SA_CIM app's accelerated datamodels on all my search head clusters so that I can use tstats and have wicked fast searches, plus all the benefits of apps that use the CIM datamodels. From what I can tell, I'll have to run the SA_CIM app with accelerated datamodels on each of the search head clusters, and each search head cluster will generate its own copy of the accelerated data on the indexer cluster. I would like to avoid the extra storage consumption and the extra load on the indexers from generating the accelerated data.
Is there a way to reuse one copy of the accelerated datamodel data across all of the search head clusters?
Or is there a better way to do what I'm trying to do?
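One setting I've since come across but have not yet tested, so treat this as a sketch: datamodels.conf has an acceleration.source_guid option that is supposed to let a search head cluster reuse the summaries built under another search head cluster's GUID instead of building its own, e.g.:
[Network_Traffic]
acceleration = true
# GUID of the search head cluster that actually builds the summaries
acceleration.source_guid = <guid-of-primary-shc>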
01-31-2018
03:00 PM
1 Karma
Hi Everyone,
I am using the API modular input to collect data from an API. The problem is getting the data in correctly. Without a response handler I get everything in one event and cannot break it into individual events; with a response handler I cannot get the complete data.
Sample data looks like this:
{
"Meta Data": {
"1. Information": "Intraday Prices and Volumes for Digital Currency",
"2. Digital Currency Code": "BTC",
"3. Digital Currency Name": "Bitcoin",
"4. Market Code": "AUD",
"5. Market Name": "Australian Dollar",
"6. Interval": "5min",
"7. Last Refreshed": "2018-01-31 22:20:00",
"8. Time Zone": "UTC"
},
"Time Series (Digital Currency Intraday)": {
"2018-01-31 22:20:00": {
"1a. price (AUD)": "12474.57250163",
"1b. price (USD)": "10049.09321579",
"2. volume": "1947.32278014",
"3. market cap (USD)": "19568828.13886100"
},
"2018-01-31 22:15:00": {
"1a. price (AUD)": "12477.02432481",
"1b. price (USD)": "10051.06832152",
"2. volume": "1946.46090562",
"3. market cap (USD)": "19564011.54755900"
},
"2018-01-31 22:10:00": {
"1a. price (AUD)": "12482.11267077",
"1b. price (USD)": "10055.16732073",
"2. volume": "1947.86047516",
"3. market cap (USD)": "19586062.99517900"
},
"2018-01-31 22:05:00": {
"1a. price (AUD)": "12469.69385098",
"1b. price (USD)": "10045.16314001",
"2. volume": "1957.84301556",
"3. market cap (USD)": "19666852.49383900"
},
"2018-01-31 22:00:00": {
"1a. price (AUD)": "12465.88588954",
"1b. price (USD)": "10045.88292284",
"2. volume": "1967.23063556",
"3. market cap (USD)": "19762568.64706900"
},
"2018-01-31 21:55:00": {
"1a. price (AUD)": "12449.19799086",
"1b. price (USD)": "10032.43464665",
"2. volume": "2016.41002232",
"3. market cap (USD)": "20229501.76977700"
},
"2018-01-31 21:50:00": {
"1a. price (AUD)": "12459.09475997",
"1b. price (USD)": "10040.41015555",
"2. volume": "2016.23520360",
"3. market cap (USD)": "20243828.41419600"
},
};
I am also using a custom response handler, see below:
class BitcoinHandle:
    def __init__(self, **args):
        pass

    def __call__(self, response_object, raw_response_output, response_type, req_args, endpoint):
        if response_type == "json":
            output = json.loads(raw_response_output)
            for price in output["Meta Data"]:
                print_xml_stream(json.dumps(price))
        else:
            print_xml_stream(raw_response_output)
The issue I'm having is that it's not looping through the nested JSON data. I am only getting the headings; I'm not even getting the key-value pairs. I have no idea what I'm doing wrong. I would like to have all the data coming in and breaking correctly. Please help!
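For comparison, here is my best guess at what the handler should look like instead (untested; it assumes print_xml_stream comes from the add-on's responsehandlers.py, as in my code above): iterate the time-series dict rather than "Meta Data", emitting one JSON event per timestamp.
class BitcoinHandle:
    def __init__(self, **args):
        pass

    def __call__(self, response_object, raw_response_output, response_type, req_args, endpoint):
        if response_type == "json":
            output = json.loads(raw_response_output)
            # each key is a timestamp, each value a dict of prices/volume
            series = output.get("Time Series (Digital Currency Intraday)", {})
            for timestamp, values in series.items():
                event = dict(values)
                event["timestamp"] = timestamp
                print_xml_stream(json.dumps(event))
        else:
            print_xml_stream(raw_response_output)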
01-30-2018
09:50 PM
Hi Damien,
Using the config you gave me, I was able to get Splunk to index the following. (This is part of the "Meta Data" heading, and only the first part, not the key-value pairs. I would like to index "Time Series (Digital Currency Daily)".)
"4. Market Code"
"2. Digital Currency Code"
"1. Information"
"7. Last Refreshed"
"6. Interval"
"3. Digital Currency Name"
"5. Market Name"
"8. Time Zone"
"2. Digital Currency Code"
I've been trying lots of different things, including replacing "Meta Data" with "Time Series (Digital Currency Daily)", and it stopped working completely.
Any idea what I'm missing?
01-30-2018
07:05 PM
Oh OK, I've tried the code you've given me but it did not work (I restarted splunkd to be sure the config had loaded). I'm not familiar with Python. Any chance you'd be able to knock up a config for me to put into my responsehandlers.py?
01-30-2018
06:44 PM
Thanks Damien for your response. I've had a look at the link and I'm using the TIME_PREFIX setting: TIME_PREFIX = \s+"
Looks like I need to drop the metadata header section i.e.
"Meta Data": {
"1. Information": "Daily Prices and Volumes for Digital Currency",
"2. Digital Currency Code": "BTC",
"3. Digital Currency Name": "Bitcoin",
"4. Market Code": "AUD",
"5. Market Name": "Australian Dollar",
"6. Last Refreshed": "2018-01-29 (end of day)",
"7. Time Zone": "UTC"
},
Also I need to break after },
Any ideas how to do this?
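The best I've come up with so far is the props.conf below; both regexes are untested guesses on my part:
[btc:json]
# drop the "Meta Data" block at index time
SEDCMD-drop_meta = s/"Meta Data":\s*\{[^}]*\},//
# start a new event before each timestamped entry; the whitespace in the
# first capture group is discarded at the break
SHOULD_LINEMERGE = false
LINE_BREAKER = \},(\s*)"\d{4}-\d{2}-\d{2}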
01-30-2018
05:27 PM
Hi Everyone.
I am using the API data input with Splunk to collect the following data. The format I'm using is JSON.
SAMPLE:
{
"Meta Data": {
"1. Information": "Daily Prices and Volumes for Digital Currency",
"2. Digital Currency Code": "BTC",
"3. Digital Currency Name": "Bitcoin",
"4. Market Code": "AUD",
"5. Market Name": "Australian Dollar",
"6. Last Refreshed": "2018-01-29 (end of day)",
"7. Time Zone": "UTC"
},
"Time Series (Digital Currency Daily)": {
"2018-01-29": {
"1a. open (AUD)": "14557.05214175",
"1b. open (USD)": "11804.12866653",
"2a. high (AUD)": "14582.48830689",
"2b. high (USD)": "11835.94535201",
"3a. low (AUD)": "13861.27196591",
"3b. low (USD)": "11216.74489015",
"4a. close (AUD)": "13919.77783987",
"4b. close (USD)": "11271.81445898",
"5. volume": "997.57467196",
"6. market cap (USD)": "11244476.61130983"
},
"2018-01-28": {
"1a. open (AUD)": "14229.70171702",
"1b. open (USD)": "11539.57330826",
"2a. high (AUD)": "14683.13628361",
"2b. high (USD)": "11907.28596490",
"3a. low (AUD)": "14202.27268193",
"3b. low (USD)": "11517.32973861",
"4a. close (AUD)": "14590.69388649",
"4b. close (USD)": "11831.40832999",
"5. volume": "874.38330435",
"6. market cap (USD)": "10345185.91069148"
},
"2018-01-27": {
"1a. open (AUD)": "13905.65975789",
"1b. open (USD)": "11276.79155663",
"2a. high (AUD)": "14362.62591110",
"2b. high (USD)": "11647.36815262",
"3a. low (AUD)": "13734.89429681",
"3b. low (USD)": "11138.30934556",
"4a. close (AUD)": "14229.60710190",
"4b. close (USD)": "11539.49658014",
"5. volume": "584.99890477",
"6. market cap (USD)": "6750592.86098091"
},
},
So far I've only been able to bring in the entire feed as one event. I would like to break the feed into single events but cannot figure out how to achieve this.
Here is my props.conf:
[btc:json]
CHARSET =
DATETIME_CONFIG =
EVENT_BREAKER = .*\},
INDEXED_EXTRACTIONS = json
KV_MODE = none
MAX_TIMESTAMP_LOOKAHEAD = 550
TIME_FORMAT = %Y-%M-%d %H:%M:%S
TIME_PREFIX = \s+"
TRUNCATE = 500000
TZ = UTC
disabled = false
pulldown_type = true
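Two things I've since spotted in my own config and plan to test, flagging them here in case they matter: %M is minutes rather than month, and EVENT_BREAKER (plus EVENT_BREAKER_ENABLE) only applies on universal forwarders, so LINE_BREAKER may be what I actually need:
[btc:json]
INDEXED_EXTRACTIONS = json
KV_MODE = none
SHOULD_LINEMERGE = false
# %m (month), not %M (minutes)
TIME_FORMAT = %Y-%m-%d %H:%M:%S
TIME_PREFIX = \s+"
MAX_TIMESTAMP_LOOKAHEAD = 550
TRUNCATE = 500000
TZ = UTC
# indexer-side event breaking; the regex is my untested guess
LINE_BREAKER = \},(\s*)"\d{4}-\d{2}-\d{2}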
01-29-2018
07:33 PM
Hi Everyone,
I am using the Splunk Add-on Builder and trying to create an API input.
The URL is a website using TLS. The string I'm using works when put into a browser, but when clicking Test I get the following error:
HTTPError: HTTP Error [SSL: TLSV1_ALERT_INTERNAL_ERROR] tlsv1 alert internal error (_ssl.c:676)
Not sure how to fix this.
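A probe I'm planning to run from the server's Python to see which TLS version gets negotiated; the hostname is a placeholder for the real API host:
import socket
import ssl

HOST = "api.example.com"  # placeholder for the real API host

ctx = ssl.create_default_context()
with socket.create_connection((HOST, 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
        # prints e.g. "TLSv1.2"; an old client-side TLS stack that can't
        # negotiate this is one common cause of tlsv1 alert internal error
        print(tls.version())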
08-26-2017
06:27 PM
Thanks for sharing, but that will only remove the null value when performing a search. I need this to happen at index time.
08-26-2017
06:10 PM
I am building a TA.
The issue I am having is that the log file has a field error="". Even though it is null, the error field is still present, causing CIM to tag the logs as errors. I am hoping you can help me return the error field only when it has a value other than null. Also note, I am looking for a way to do this without writing a regex for each field, as I have the same issue across a bunch of other sourcetypes.
<30>2017:08:27-10:30:12 sophos httpproxy[19742]: id="0001" severity="info" sys="SecureWeb" sub="http" name="http access" action="pass" method="CONNECT" srcip="1.1.1.1" dstip="1.1.1.1" user="" group="" ad_domain="" statuscode="200" cached="0" profile="REF_DefaultHTTPProfile (Default Web Filter Profile)" filteraction="REF_DefaultHTTPCFFAction (Default content filter action)" size="855" request="0xdffdb" url="https://www.google.com.au/" referer="" error="" authtime="0" dnstime="579003" cattime="288" avscantime="0" fullreqtime="109809548" device="0" auth="0" ua="" exceptions="" category="145" reputation="trusted" categoryname="Search Engines" application="google" app-id="182"
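The closest I've found so far is an index-time SEDCMD in props.conf. It's still a regex, but a single generic one I could copy to each affected sourcetype rather than per-field extractions; the pattern is my untested guess:
[sophos:utm]
# placeholder sourcetype name; strips any key="" pair at index time
SEDCMD-drop_empty_fields = s/\s\w+=""//g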
05-11-2017
01:53 AM
Hi,
I would like to replace the "action" field so it conforms with the CIM datamodel.
At present, action will always equal either "Successful" or "error".
I would like to replace "Successful" with "success" and "error" with "failure".
For example
Current fields
action=Successful
action=error
After field replacement
action=success
action=failure
Thank you
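The approach I'm leaning towards is a calculated field in props.conf; the sourcetype name is a placeholder:
[my:sourcetype]
EVAL-action = case(action=="Successful", "success", action=="error", "failure", true(), action)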
11-10-2016
08:21 PM
Hi Everyone,
I cannot figure out what I am doing wrong.
I am using pfSense, and I am receiving the logs into Splunk, but the logs are not being parsed.
This is the event I am receiving in Splunk:
Nov 10 20:12:12 FWPFS001.localdomain Nov 11 15:12:12 filterlog: 84,16777216,,1000003811,igb0,match,pass,out,4,0x0,,127,2445,0,none,17,udp,1378,192.168.0.100,216.58.199.46,28180,443,1358
As you can see, it is not being extracted as configured in my props.conf and transforms.conf.
Here are my config files.
props.conf:
[syslog]
SHOULD_LINEMERGE = true
TRUNCATE = 0
MUST_NOT_BREAK_AFTER = pf: .* rule ([-\d]+\/\d+)(.?):
MUST_BREAK_AFTER = pf: . (<|>) +(\d+.\d+.\d+.\d+).?(\d*):
REPORT-pf2 = pf2
transforms.conf:
[pf2]
REGEX = .* (?<action>pass|block) .* (?<transport>TCP|UDP|IGMP|ICMP) .* (?<src_ip>(\d+\.\d+\.\d+\.\d+))\.?(?<src_port>(\d*)) <|>\.?(?<dest_port>(\d*)): (.*)
Any help is greatly appreciated.
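One thing I noticed while re-reading my own event: it's pfSense's newer filterlog CSV format, not the old "pf:" text format my regexes target, which would explain why nothing matches. A delimiter-based extraction might fit better; the field names below follow the filterlog documentation as best I understand it (unverified):
transforms.conf:
[pfsense_filterlog]
DELIMS = ","
FIELDS = rule_number, sub_rule_number, anchor, tracker, interface, reason, action, direction, ip_version
props.conf:
[syslog]
REPORT-pfsense = pfsense_filterlog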
11-02-2016
09:25 PM
I would like to change the name of an index without losing any data.
Is it possible to modify an index name in the indexes.conf file on the cluster master? After changing the name, I'll push the config to my indexers using the cluster master.
e.g. changing the name in indexes.conf from:
[index-for-indexing]
homePath = $SPLUNK_DB/index-for-indexing/db
coldPath = $SPLUNK_DB/index-for-indexing/colddb
thawedPath = $SPLUNK_DB/index-for-indexing/thaweddb
repFactor = auto
to
[no-indexing-here]
homePath = $SPLUNK_DB/no-indexing-here/db
coldPath = $SPLUNK_DB/no-indexing-here/colddb
thawedPath = $SPLUNK_DB/no-indexing-here/thaweddb
repFactor = auto
Am I heading down the right track, or am I completely wrong?
P.S. I am using a distributed deployment.
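(From what I've read since posting, and treat this as my understanding rather than verified fact: changing the stanza name makes Splunk treat it as a brand-new index, so the existing buckets under the old name would not be searchable under the new name; the data would need to be moved, or the old index kept around.)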
10-28-2016
09:24 PM
2 Karma
I'm trying to write a search to track the amount of data being ingested into a specific index, measured in MB per minute.
This is what I have so far:
index=my_index_name metrics name=index_thruput sourcetype=splunkd | timechart span=1m sum(eval(kb/1024)) as "MB/min"
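A variant I'm also trying, based on the per-index throughput group that splunkd writes to metrics.log (the index name is a placeholder):
index=_internal source=*metrics.log group=per_index_thruput series=my_index_name | timechart span=1m sum(eval(kb/1024)) AS "MB/min"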