Activity Feed
- Karma Re: Why receiving an ERROR when updating mmapv1 storage engine to wiredTiger? for TassiloM. 12-16-2024 01:04 AM
- Got Karma for Re: search is very slow,no result found yet. 08-12-2024 07:52 PM
- Posted Re: search is very slow,no result found yet on Splunk Enterprise. 08-12-2024 06:36 PM
- Posted Re: search is very slow,no result found yet on Splunk Enterprise. 08-12-2024 06:33 PM
- Posted Re: search is very slow,no result found yet on Splunk Enterprise. 08-12-2024 06:29 PM
- Posted Re: search is very slow,no result found yet on Splunk Enterprise. 08-12-2024 06:28 PM
- Posted Re: search is very slow,no result found yet on Splunk Enterprise. 08-12-2024 06:27 PM
- Karma Re: How to get Splunk Webhook Alert actions to send entire search results as JSON payload? for Mathanjey. 07-24-2024 03:02 AM
- Posted search is very slow,no result found yet on Splunk Enterprise. 07-24-2024 01:02 AM
- Posted Re: splunk webhook alert how to send entire search result payload and send an email with entire search results on Alerting. 05-26-2024 08:23 PM
- Posted How to find high-frequency behavioral events using Splunk? on Splunk Search. 06-12-2023 01:02 AM
- Got Karma for Re: Can I call the static resources from an app that has global permissions?. 01-19-2022 12:23 PM
- Posted how to use splunk mobile? on Splunk Enterprise. 09-16-2021 06:57 PM
- Posted how to use splunk monitor a cron job add action on Splunk Enterprise. 08-06-2020 07:03 PM
- Karma Re: How to associate dbxquery results with search results? for sduff_splunk. 06-05-2020 12:50 AM
- Karma Re: Can I call the static resources from an app that has global permissions? for deepashri_123. 06-05-2020 12:50 AM
- Karma Re: Can I convert an indexer cluster into a single indexer without losing any data? for tiagofbmm. 06-05-2020 12:50 AM
- Karma Re: How does the search header cluster change to a single search header instance? for chrisyounger. 06-05-2020 12:50 AM
- Got Karma for Error migrating deprecated review status transitions. 06-05-2020 12:50 AM
- Karma Re: how to list all hosts and sourcetypes of all indexes quickly for niketn. 06-05-2020 12:49 AM
09-11-2019
05:39 PM
Because the title of the web page was "Nessus Scanner (SC)", I mistakenly thought it was Tenable.sc.
09-11-2019
02:39 AM
Hello everyone, and please forgive my English. I'm a novice with both Splunk and Nessus.
I am trying to ingest Tenable.sc vulnerability data into my Splunk indexer. I have read the official documentation for the Tenable Add-On for Splunk (link: https://docs.tenable.com/integrations/splunk/Content/Splunk%20Add%20On.htm). I think my Tenable type is Tenable.sc, not Tenable.io. Please see the screenshot below:
I am trying to configure the Tenable Add-On for Splunk, but it returns the error message "Please enter valid Address, Username and Password.", as shown below:
I have tried deleting the port number from the address (192.168.20.129) and checking "Verify SSL Certificate", but it still does not work; I have tested every combination of settings.
If I change the "Tenable Account Type" to Tenable.io and use an access key and secret key, the configuration is created successfully, but after I create the input, no vulnerability data is indexed into Splunk, and I can find the following error in the log file (/opt/splunk/var/log/splunk/ta_tenable_tenable_io.log):
2019-09-11 16:09:56,668 INFO pid=10770 tid=MainThread file=base_modinput.py:log_info:293 | Tenable.io vulnerability data collection started
2019-09-11 16:09:56,669 INFO pid=10770 tid=MainThread file=splunk_rest_client.py:_request_handler:100 | Use HTTP connection pooling
2019-09-11 16:09:56,670 INFO pid=10770 tid=MainThread file=connectionpool.py:_new_conn:758 | Starting new HTTPS connection (1): 127.0.0.1
2019-09-11 16:09:56,693 ERROR pid=10770 tid=MainThread file=io_connect.py:__check_response:80 | Tenable Error: response: {"error":"The requested file was not found"}
I also tried forcing the creation of the Tenable.sc account configuration (ta_tenable_account.conf) from the command line, then selecting it under Global Account when configuring the input. When I click the "Add" button, the following error still occurs:
2019-09-11 16:34:12,528 ERROR pid=19830 tid=MainThread file=sc_connect.py:_check_response:98 | Tenable SC Error: URL: https://192.168.20.129:8834/rest/system, HTTP status code: 404, error code: 1
So I re-read how the Tenable Add-On works: it calls the Tenable API to extract data from the Tenable platform (link: https://docs.tenable.com/integrations/splunk/Content/Splunk%20Add%20On.htm):
The Tenable Add-On for Splunk pulls data from Tenable platforms and normalizes it in Splunk.
The current Tenable Add-On uses the following endpoints.
Tenable.io
Request Export: /vulns/export
Vulnerability Export: /vulns/export
Asset Export: /assets/export
Tenable.sc
Vulnerability and assets details: /rest/analysis
Plugin details: /rest/plugins
Repository details: /rest/repository
The reason for the error is that my Nessus does not expose these APIs at all: when I try to access the API links in a browser, it returns 404 Not Found.
For example:
The Tenable.io Vulnerability Export calls /vulns/export. When I try to access https://192.168.20.129:8834/vulns/export, the browser returns status code 404 with the body: {"error":"The requested file was not found"}
The Tenable.sc vulnerability and asset details call /rest/analysis. When I try to access https://192.168.20.129:8834/rest/analysis, the browser likewise returns status code 404 with the body: {"error":"The requested file was not found"}
Questions:
Why doesn't my Nessus provide an API interface?
Do I need to configure Nessus to enable the API?
Is there a Nessus expert who can tell me whether my deployment is Tenable.io or Tenable.sc?
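For anyone hitting the same errors: the add-on writes its logs under /opt/splunk/var/log/splunk/, and Splunk indexes that directory into _internal by default, so the add-on's own errors can be watched with a simple search. A minimal SPL sketch (the source wildcard is an assumption based on the log file name above):

index=_internal source=*ta_tenable* ERROR
| table _time, source, _raw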
Labels: troubleshooting
08-26-2019
06:13 PM
@Sukisen1981 Thank you for your reply, but that doesn't seem like an easy approach. I have more than 100 alerts; would I need to merge every alert's results into a dummy JSON? Besides, that format looks very ugly in mail messages.
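For reference, this is roughly what the suggested merge looks like with the sample 404 search from this thread; a sketch only, since the exact JSON shape is up to whoever builds it:

index=xxxx status=404
| table host, source, status
| eval row="{\"host\":\"".host."\",\"source\":\"".source."\",\"status\":\"".status."\"}"
| stats list(row) AS results

Collapsing the table into a single result makes the webhook payload carry every row, at the cost of exactly the ugly formatting mentioned above.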
08-22-2019
12:53 AM
@Sukisen1981 I added pictures to the post and updated it.
08-21-2019
11:44 PM
@Sukisen1981 I have updated the picture link; you should be able to see it now.
Thank you. So how should I solve this problem?
08-21-2019
06:13 PM
OK,
if I set the Trigger Condition to "Once", the webhook alert sends only the first row of events in the table (User unknown: ccc). The following screenshot is the payload sent by the webhook alert; you can see it captures only the first event:
The next screenshot is my e-mail content; my mailbox received an e-mail containing all four events.
So I want to solve this problem: I would like the webhook alert to send a payload containing the entire search result (all 4 events in the table), but I don't want to merge them into one line with stats or any other command.
In addition, I can't set the Trigger Condition to "For each result"; otherwise it becomes an email bomb that keeps sending me emails, each containing a single event, i.e., when the alert triggers, sendemail is split into four separate emails.
Do custom alert actions require development skills? Would I need to do a lot of development work?
08-21-2019
12:58 AM
Yes, I am sure. I may not be expressing it clearly (sorry, my English is not good). What I mean is: when an alert is triggered and the result contains multiple events (e.g., three results), the webhook sends only the first event, but my email receives the complete list (i.e., a mail containing all three results).
index=xxxx status=404 | table host,source,status

| host | source | status |
|---|---|---|
| 1.1.1.1 | /www/logs/nginx.log | 404 |
| 2.2.2.2 | /www/logs/nginx.log | 404 |
| 3.3.3.3 | /www/logs/nginx.log | 404 |

The webhook sends only the first row (1.1.1.1 /www/logs/nginx.log 404), but the alert email contains all three event records.
08-20-2019
11:46 PM
I have an alert with 2 actions: send email and webhook.
If I set the Trigger Condition to "Once", the webhook alert sends only the first row of events in the table (User unknown: ccc). The following screenshot is the payload sent by the webhook alert; you can see it captures only the first event:
The next screenshot is my e-mail content; my mailbox received an e-mail containing all four events.
This is not the result I want.
If I set the Trigger Condition to "For each result", then when the alert returns multiple events, the webhook sends each search result in turn and I get a flood of emails, like an email bomb. I wish I could get a single email containing all the search results. This is also not what I want.
So: I would like the webhook to send the entire search results, and to receive one email with all the search results.
My alert:
Enabled: Yes
App: search
Permissions: Shared in App
Alert Type: Scheduled (Run on cron schedule)
Time Range : Last 5 minutes
Cron Expression: */5 * * * *
Trigger Condition: Number of Results is > 0
Actions: Send Email + Webhook
To summarize, I want the webhook alert to send a payload containing the entire search result (all 4 events in the table), but I don't want to merge them into one line with stats or any other command.
In addition, I can't set the Trigger Condition to "For each result"; otherwise it becomes an email bomb that keeps sending me emails, each containing a single event, i.e., when the alert triggers, sendemail is split into four separate emails.
Do custom alert actions require development skills? Would I need to do a lot of development work?
Labels: alert action
07-14-2019
06:15 PM
@jkat54 How can I fix this error? _time>=24hrAgo AND _time<=25hrAgo
The field _time has minute precision (%Y/%m/%d %H:%M:%S), but 24hrAgo and 25hrAgo use an hourly time format (%Y/%m/%d %H).
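For what it's worth, a minimal sketch of one fix: rename the fields so they don't start with a digit (eval reads a leading digit as a number, which is what breaks the expression) and compute them with relative_time. The names hrAgo24/hrAgo25 are my own:

| eval hrAgo24=relative_time(now(), "-24h"), hrAgo25=relative_time(now(), "-25h")
| stats count(eval(_time>=hrAgo25 AND _time<=hrAgo24)) AS prev_hour_count

Alternatively, wrapping the original names in single quotes ('24hrAgo') also tells eval they are field names rather than numbers.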
07-12-2019
01:22 AM
@jkat54 Thank you for your reply.
When I tried the first step of your search, Splunk reported an error:
Error in 'stats' command: The eval expression for dynamic field 'eval(_time>=24hrAgo AND _time<=25hrAgo)' is invalid. Error='The operator at 'hrAgo AND _time<=25hrAgo' is invalid.'
The error seems to be caused by field names that begin with digits.
In addition, my program counts the number of URI requests per minute, so perhaps I need to add | bin _time span=1h to the search you provided?
07-11-2019
06:41 AM
Hello everyone!
I have a program that counts the number of requests to our website APIs per minute. The log format is as follows, where the field time is the time of the measurement:

| request_domain | time | uri | min_count |
|---|---|---|---|
| www.test.com | 11/Jul/2019 15:51 | /api/test | 19 |
| www.test.com | 11/Jul/2019 15:51 | /api/exmple | 208 |
| m.test.com | 11/Jul/2019 15:52 | /api/search | 80 |
| www.test.com | 11/Jul/2019 15:52 | /api/test | 31 |
| www.test.com | 11/Jul/2019 15:52 | /api/exmple | 253 |
| m.test.com | 11/Jul/2019 15:52 | /api/search | 62 |

I want to create an alert based on the following requirements, but I don't know how to do it.
If the number of requests for a URI is greater than 100 per hour:
- compared with the previous hour, alert if the growth rate is greater than 80%;
- compared with the same period yesterday, alert if the growth rate is greater than 80%.
If the number of requests for a URI is less than 100 per hour:
- compared with the previous hour, alert if the growth is more than 50 requests;
- compared with the same period yesterday, alert if the growth is more than 50 requests.
I think this needs to be split into at least 2 alerts (see the rough sketch below).
All help will be greatly appreciated!
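To make this concrete, here is a rough SPL sketch of the hour-over-hour alert; index=web and sourcetype=api_stats are placeholder names for wherever this log is indexed, and min_count is the field from the table above:

index=web sourcetype=api_stats
| bin _time span=1h
| stats sum(min_count) AS hourly_count BY _time, uri
| streamstats current=f window=1 last(hourly_count) AS prev_hour BY uri
| eval growth_pct=round((hourly_count-prev_hour)/prev_hour*100,1), growth_abs=hourly_count-prev_hour
| where (hourly_count>100 AND growth_pct>80) OR (hourly_count<=100 AND growth_abs>50)

The day-over-day comparison would be a second, similar alert that compares each hour with the same hour 24 hours earlier (for example via timewrap), which matches the instinct that at least two alerts are needed.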
- Tags:
- splunk-enterprise
07-11-2019
01:03 AM
This issue has been solved: I was using an old version of the driver, so you need to download the new version of the MongoDB driver from http://unityjdbc.com/mongojdbc/mongo_jdbc.php
07-09-2019
06:23 PM
This issue has been solved: I was using an old version of the driver, so you need to download the new version of the MongoDB driver from http://unityjdbc.com/mongojdbc/mongo_jdbc.php
07-02-2019
01:05 AM
Hi everyone,
My website has a number of API endpoints. Sometimes malicious clients request these APIs continuously, and the spike is obvious on a timechart. How do I detect this and alert on it?
For example:
I currently have a search like this, which shows the number of requests per hour for these URIs:
index=web sourcetype=nginx_access uri=/api/getuserInfo OR uri=/api/featchData OR uri=/login OR uri=/home
| timechart span=1h count by uri
Under normal conditions /api/getuserInfo receives about 1,000~5,000 requests per hour; during a malicious attack it can be requested 50,000 times in a period, which I consider an anomaly. What is a smarter way to detect such abnormal peaks and raise alerts?
One crude approach I thought of: write the hourly request counts per API to a CSV or the KV store, then compare today against yesterday and treat a large rise as an abnormal peak.
But I suspect there are more efficient methods, such as machine learning. Can someone share a use case with me? Thank you.
Note: I have a lot of API endpoints, about 20, and I want to monitor abnormal peaks for each of them.
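Before reaching for machine learning, a simple statistical baseline may already do the job: flag any hour whose count is well above that URI's own average. A minimal sketch, assuming the search runs over a long enough window (say 30 days) to form a baseline:

index=web sourcetype=nginx_access uri=/api/*
| bin _time span=1h
| stats count BY _time, uri
| eventstats avg(count) AS avg_count, stdev(count) AS stdev_count BY uri
| where count > avg_count + 3*stdev_count

If the Machine Learning Toolkit is installed, its DensityFunction algorithm automates roughly the same idea with less hand-tuning.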
06-25-2019
11:05 PM
@jnudell_2 I'm sorry, forgive my English; I didn't express it clearly enough. I have an indexer cluster with 3 peer nodes (peer IPs: 172.25.105.158/159/160). The Linux secure log (/var/log/secure) lives on the 3 peer nodes; by default, with only inputs.conf configured, the secure log is indexed in the cluster and I can search it from the search head.
Now I want to forward it to another standalone instance (10.10.20.100). How do I forward the secure logs of these 3 peers to a standalone Splunk instance?
06-25-2019
08:50 PM
What host is the secure log located on?
The secure log is on the peer nodes.
When I followed your approach, one of the peer nodes showed the following error messages:
peer node: 172.25.105.159
connect to 172.25.105.159:9997 failed
Forwarding to indexer group default blocked for 370 seconds
I suspect this happens because the peers forwarded the data to their own port.
06-25-2019
08:27 PM
By default there is no tcpout stanza on the peers. If I need to forward the peers' secure log to another Splunk instance, do I add a tcpout group for it to outputs.conf and set it as the default group, is that right?
06-25-2019
06:59 PM
Hello everyone (please forgive my English).
I have a Splunk indexer cluster (3 peers + a master node + 1 search head). I don't want the secure log on the peer nodes forwarded to the indexer cluster; I want to forward it to another Splunk Enterprise instance (a standalone instance). I tried the following method; please point out my mistake:
1. Point all peer nodes at the deployment server and use the deployment server to distribute the apps.
2. Use the deployment server to push the following app to all peer nodes:
path on the DS: /opt/splunk/etc/deployment-apps/linux/local/inputs.conf
[monitor:///var/log/secure]
index = linux
sourcetype = linux_secure
path on the DS: /opt/splunk/etc/deployment-apps/linux/local/outputs.conf
[tcpout:test1]
server = 10.10.20.100:9997
3. Push the app to all peer nodes via the deployment server.
Then something unexpected happened:
All logs originally forwarded to the indexer cluster changed their forwarding route: the peer nodes forwarded everything to the standalone Splunk instance (10.10.20.100).
I don't know why this happened. I expected that, apart from the secure logs on the peer nodes themselves, logs from other hosts would still be forwarded to the indexer cluster. But that is not the case; all data arriving at the peer nodes is routed to the standalone instance, so the wrong configuration changed the peers' routing.
Does anyone know how to solve this problem? All help would be greatly appreciated.
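For later readers, here is an untested sketch of one way to scope the routing: pin only the secure-log monitor to the new output group with _TCP_ROUTING, and stop the new group from becoming the default. The group and index names come from the configs above; please check the outputs.conf documentation (especially indexAndForward behavior on cluster peers) before trying this:

# inputs.conf: route only this monitor to the test1 group
[monitor:///var/log/secure]
index = linux
sourcetype = linux_secure
_TCP_ROUTING = test1

# outputs.conf: keep local indexing; don't forward anything by default
[tcpout]
defaultGroup = nothing_to_forward    # a nonexistent group, so no default forwarding
indexAndForward = true               # peers keep indexing their other data locally

[tcpout:test1]
server = 10.10.20.100:9997

The original symptom (everything rerouted) is consistent with test1 being the only tcpout group and therefore receiving all output by default.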
- Tags:
- splunk-enterprise
06-16-2019
06:09 PM
Thank you. I solved this problem with the KV store: first I query the results from MongoDB, then search for them in the KV store; if a result does not yet exist in the KV store, an alert is triggered; finally all results are written back to the KV store. (outputlookup rewrites the whole collection each run, which is fine here because dbxquery returns the full table every time.)
|dbxquery connection="testmongodb" query="select * from Result"
|search NOT [|inputlookup resultcollections]
|outputlookup resultcollections
06-13-2019
02:48 AM
@DavidHourani
What should I do? Is there a link to the documentation? This data is vulnerability information, meaning a host has a newly discovered vulnerability, so I want to alert whenever the result collection gains a new record.
06-12-2019
11:53 PM
Hello. I use Splunk DB Connect 3.1.3 to connect to a MongoDB database. It is working now, and I can query data from MongoDB with a SQL statement:
|dbxquery connection="testmongodb" query="select * from result"
As we all know, MongoDB does not have an auto-incrementing column, so MongoDB's data looks like the following:

| info | ip | port | task_date | task_id | time | vul_info |
|---|---|---|---|---|---|---|
| Unauthorized Access | 172.16.10.9 | 6379 | 2019-6-6 | d40617172258939a57fdb5617724fc55 | 2019-6-6 | {"vul_type":"Weak password","vul_name":"Redis Weak password","vul_level":"High"} |
| SMB Remote Overflow | 10.10.2.8 | 445 | 2019-6-6 | cfab842aa0e8166cabb2f4548477756b | 2019-6-6 | {"vul_type":"Remote Overflow","vul_name":"SMB Remote Overflow","vul_level":"High"} |
| MySQL Weak password | 10.10.2.7 | 3306 | 2019-6-13 | 2389ccda6788fc124d1cec7a951f7089 | 2019-6-13 | {"vul_type":"Weak password","vul_name":"MySQL Weak password","vul_level":"High"} |

Firstly, it has no rising column (an id, for example); secondly, it has no usable timestamp.
If I use a batch input to index this data into Splunk, there will be a lot of duplicate data, so I would like every new document in the result collection to be indexed into Splunk automatically.
If this Mongo collection had a rising column, this requirement would be easy to implement; unfortunately it does not.
So: is there a clever way to index only the new data from MongoDB into Splunk?
06-10-2019
06:03 PM
@kc64645 Please download mongodb_unityjdbc_full.jar: https://raw.githubusercontent.com/michaelloliveira/traccar-mongodb/master/lib/mongodb_unityjdbc_full.jar
06-10-2019
02:56 AM
Thank you for your reply.
I downloaded mongodb_unityjdbc_full.jar, copied the jar into the directory $SPLUNK_HOME/etc/apps/splunk_app_db_connect/drivers, and added the following to db_connection_types.conf:
[mongodb]
displayName = MongoDB
serviceClass = com.splunk.dbx2.DefaultDBX2JDBC
jdbcDriverClass = mongodb.jdbc.MongoDriver
jdbcUrlFormat = jdbc:mongo://host:port/database
port = 27017
ui_default_catalog = $database$
But it is not working.
Customizing the API seems complicated, and I don't want to spend too much time on it.
I believe DB Connect 3 can connect to MongoDB, and I suspect my JDBC URL configuration is wrong.
06-10-2019
01:13 AM
I am trying to index MongoDB data into Splunk.
The Hunk App for MongoDB seems to be obsolete. Is there a better way to import MongoDB data into Splunk, similar to DB Connect's input function? I don't want to import it just once; when MongoDB has new data, I would like it to be indexed into Splunk automatically.
I followed the documentation below, but when I connect to a MongoDB database that requires authentication, it reports the error not authorized for query on xunfeng._schema, and I am sure I have created the identity XunfengMongodb.
http://www.unityjdbc.com/mongojdbc/setup/mongodb_jdbc_splunk_dbconnect_v2.pdf
Splunk version: 7.2.3
DB Connect version: 3.1.3
- Tags:
- splunk-enterprise
04-21-2019
01:05 AM
@niketnilay Thank you! Please convert your comment to an answer and I will accept it.