Activity Feed
- Posted What is the capability needed to view Adaptive responses section in incident review page on ES? on Splunk Enterprise Security. 12-14-2022 03:00 AM
- Posted Re: large number of inputs maxing out HF on Knowledge Management. 07-01-2022 01:21 AM
- Posted How to stop large number of inputs maxing out HF? on Knowledge Management. 07-01-2022 12:46 AM
- Tagged How to stop large number of inputs maxing out HF? on Knowledge Management. 07-01-2022 12:46 AM
- Got Karma for Re: Phantom App for Splunk not respecting Global field mappings. 06-27-2022 09:44 AM
- Posted Re: Phantom App for Splunk not respecting Global field mappings on Splunk SOAR. 06-27-2022 01:47 AM
- Posted Phantom App for Splunk not respecting Global field mappings on Splunk SOAR. 02-03-2022 06:40 AM
- Posted delete top notable event from the "Top Notable Events" panel in ES Security Posture page on Splunk Enterprise Security. 11-29-2021 02:55 AM
- Posted Re: Search Scheduler Searches Skipped on Splunk Search. 11-11-2021 07:41 AM
- Posted Re: Splunk search query on Splunk Search. 11-11-2021 07:30 AM
- Got Karma for Re: Search Scheduler Searches Skipped. 11-10-2021 08:17 AM
- Posted Re: Search Scheduler Searches Skipped on Splunk Search. 11-10-2021 08:11 AM
- Posted Re: Table colouring by overall range on Dashboards & Visualizations. 11-10-2021 07:31 AM
- Posted Migrate ES correlation rules to a custom app on Splunk Enterprise Security. 09-15-2021 03:10 AM
- Posted REST API Modular Input add on stopped reading data on Getting Data In. 05-19-2021 12:58 AM
- Posted heavy forwarder of higher version than indexers on Splunk Enterprise. 03-18-2021 08:41 AM
- Got Karma for Re: No message was deserialized prior to calling the DispatchChannelSink. Parameter name: requestMsg. 01-08-2021 04:47 AM
- Posted Re: Splunk Indexers sending too much of data to search heads on Splunk Enterprise. 11-12-2020 12:31 AM
- Posted Splunk Indexers sending too much of data to search heads on Splunk Enterprise. 11-03-2020 04:34 AM
- Got Karma for Re: Email alert changes. 06-05-2020 12:51 AM
12-14-2022
03:00 AM
Can someone point me to the capability that needs to be granted for ES users to be able to view the Adaptive Responses section on the ES Incident Review page? A few of my ES users from the security team have access to ES but can't see this section; admin users, however, can.
Non-admin users get an error message saying:
Adaptive Responses:
Error: No adaptive response actions found.
Admin users see the section populated as expected.
Tags: es
07-01-2022
01:21 AM
Thanks, I too am of the same opinion. I will implement option 1 as a quick fix and scale the HF.
07-01-2022
12:46 AM
I have close to 200 inputs configured in the Splunk TA for MS cloud services on a HF, along with other TAs that also pull data from other sources, but the TA for MS cloud services makes up the majority of the inputs on this HF.
The issue I am facing at the moment is that, owing to the huge number of pull-based data ingestion inputs, my HF CPU is frequently maxing out, leading to ingestion delays for all associated data sources. The Splunk docs here suggest: "Increase intervals in proportion to the number of inputs you have configured in your deployment."
My guess is that since all the inputs are configured with interval=300, all of them "might" be trying to fetch data at the same time.
The options available for a fix are (see the sketch after this post):
1. Change the interval setting on each stanza in inputs.conf by 1-10 secs so that the inputs run at different times.
2. Reduce the number of inputs on this overutilised HF and move them to another HF.
Is there any other option that I can implement to remediate this situation?
Also, for option 1, is there a way I can timechart (or plot) the count of inputs over time to see the trend of inputs being triggered over, say, 24 hrs?
Regarding option 2, this might result in double ingestion of data; I would be interested to know how I might go forward so as to minimize dual ingestion.
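A minimal sketch of the staggering idea from option 1, assuming hypothetical stanza names (your actual input stanzas will differ); the point is simply that slightly offset intervals stop all 200 inputs from firing in the same second:
# inputs.conf on the HF - stagger intervals instead of using 300 everywhere
[mscs_azure_audit://subscription_01]
interval = 300
[mscs_azure_audit://subscription_02]
interval = 307
[mscs_azure_audit://subscription_03]
interval = 313
And a hedged way to plot input activity over 24 hrs, assuming the TA logs its runs to _internal with a source name containing "mscs" (adjust the source filter to whatever log file your TA actually writes):
index=_internal host=<hf_host> source=*mscs* earliest=-24h
| timechart span=5m count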
06-27-2022
01:47 AM
1 Karma
Nope, I would suggest working with Splunk support on this one. As an immediate way out I followed the steps below:
1. Create one event forwarding and map all necessary fields locally in that event forwarding.
2. Clone the event forwarding from step 1. Make sure you only clone it and do not make any changes to the cloned event forwarding from the GUI.
3. Make additional clones as required. For example, if you initially had 7 event forwardings, make 6 clones (plus the one created with all settings and mappings in step 1).
4. Log in to the Splunk SH CLI, go to the add-on's local directory, and make the necessary changes in the phantom.conf file. This file will have all the event forwardings; you just have to rename the search names of the clones and modify the other settings via the CLI in phantom.conf, then push the file to the SH cluster via the deployer.
For me these steps remediated the situation, but do make sure no one makes changes via the GUI to the event forwarding config page in the future. I think changes to field mappings are still possible (after the above steps are done), but make sure you test it. I suggest creating a test event forwarding on which you can try additional field mappings via the GUI. I have not tested it though. Hope this helps.
02-03-2022
06:40 AM
I have multiple event forwardings enabled on my Phantom App for Splunk that use saved searches to trigger notable events to Phantom. We recently upgraded the app from version 4.0.35 to 4.1.73. With this upgrade, all field mappings that were saved in the event forwardings (locally) were erased, and now there are 0 fields mapped in the event forwardings. Since almost all the mapped fields in each of the event forwardings were the same, I re-mapped them manually on one of the event forwardings and, while saving it, checked the "Save Mappings" option, which saves those fields in the global mappings. But the mappings do not work for all the event forwardings as they should (due to global mapping); they only work for the single event forwarding where the mapping is saved locally.
Troubleshooting done:
1. Tried to restore the phantom.conf file - did not work; no mapping was detected after the restore.
2. Tried to clone the single event forwarding with locally mapped fields - did not work; as soon as I change the "Saved search" setting (as shown in the screenshot) and save, the mapped fields drop to 0 in the cloned event forwarding, as shown in the 2nd screenshot.
I really don't want to manually map the fields locally, since that would result in close to 2000 fields in total (300 fields on each of the 7 event forwardings). Any help on this issue is really appreciated. This is the sample event forwarding config page; the field mappings get reset after saving the event forwarding. I am on Splunk 8.1.5.
11-29-2021
02:55 AM
I have disabled a few of the correlation searches and would like to delete them from the "Top Notable Events" panel on the ES Security Posture page. There are some recommendations on this, but the answers are quite old. Is there a good way to achieve this with minimal impact? As I understand it, I would have to modify the KV store lookup for it.
11-11-2021
07:41 AM
I think (I might be wrong though) that you have a multi-tenant Splunk environment, and there is a chance that some other team is using those ITSI searches. My suggestion would be to reach out to Splunk support; they will surely assist you.
11-11-2021
07:30 AM
Check this page to see if it helps. If not, could you please share some sample data? Please make sure to mock/anonymize any sensitive information in your data/code before posting it on Splunk Community.
11-10-2021
08:11 AM
1 Karma
So your SHs are running real-time searches, which are really resource intensive. As a quick fix, try to convert them into scheduled searches; in "most" cases a real-time search is never required. Regarding the skipped concurrent searches, you can follow the steps below:
1. Detect which searches are being skipped:
index=_internal earliest=-24h status=skipped sourcetype=scheduler
2. Find which apps the skipped searches are coming from:
index=_internal earliest=-24h status=skipped sourcetype=scheduler
| stats count by host app | sort - count
3. Identify the bad searches (change the timeframe and host as per your needs):
index=_internal sourcetype=scheduler status!=queued earliest=-30d@d host=<server_name>
| eval is_realtime=if(searchmatch("sid=rt* OR concurrency_category=real-time_scheduled"),"yes","no")
| fields run_time is_realtime savedsearch_name user
| stats avg(run_time) as average_run_time max(run_time) as max_run_time min(run_time) as min_run_time max(is_realtime) as is_realtime by savedsearch_name user
| eval average_run_time = average_run_time/60 | eval min_run_time = min_run_time/60
| eval max_run_time = max_run_time/60 | sort - average_run_time
| join savedsearch_name [| rest /servicesNS/-/-/saved/searches splunk_server=<server_name>
| search is_scheduled=1 | rename title AS savedsearch_name | rename eai:acl.app as title
| table splunk_server title savedsearch_name cron_schedule search ]
| rename title as "App Name" | fields splunk_server savedsearch_name user average_run_time max_run_time min_run_time cron_schedule is_realtime search "App Name"
| sort splunk_server, -average_run_time
4. Modify scheduler limits.
In case the above does not remediate your situation, the last option is to increase limits on the system by modifying limits.conf (make sure you do it in a custom app and not the one in $SPLUNK_HOME/etc/default).
Out of the box, on a 16-core system, Splunk can run 22 searches at any one time. That is calculated using the following formula: max_hist_searches = max_searches_per_cpu (default of 1) x number_of_cpus (16) + base_max_searches (default of 6). Of those 22 searches, the scheduler is allocated 50 percent by default (so 11 searches), according to the setting max_searches_perc. Of those 11 searches that the scheduler can run, auto summarization (things like data models) is allowed 50% of the scheduled-search count, according to the setting auto_summary_perc; that leaves about 6 searches that can run at a time for your data models.
All that being said, if your data models are taking an extremely long time to run, or not completing and consistently skipping, increasing base_max_searches from 6 to 12 will only get you to 7 auto-summary searches at one time. You may want to alter auto_summary_perc to dedicate more searches to your data models. If you have the CPU resources, you may want to raise max_searches_per_cpu to something like 2; that would allow you to run 10 data model summarizations at any one time and give an overall maximum of 38 historical searches. The above info is derived from here.
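As a hedged illustration of step 4, the settings would look something like this in a custom app's limits.conf (stanza placement of the percentage settings can vary by Splunk version, so verify against the limits.conf spec for your release):
# limits.conf in a custom app - never edit $SPLUNK_HOME/etc/system/default
[search]
# 2 x 16 cores + 6 = 38 maximum historical searches on a 16-core box
max_searches_per_cpu = 2
base_max_searches = 6
[scheduler]
# share of total searches the scheduler may use (default 50)
max_searches_perc = 50
# share of scheduler searches auto-summarization may use (default 50)
auto_summary_perc = 50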
11-10-2021
07:31 AM
I am guessing you have used the Color settings with mode "scale" set to one of the divergent presets, something like this. That applies the settings dynamically, so as you move to later pages (page 1 vs page 13 in your screenshots), the color scale re-maps the colors to the values on each page. What you could do is:
Option 1: Set some static ranges and an associated color for each of them.
Option 2: Define a "custom" scale (as you did already), but give it a static range by specifying min, midpoint, and max values for the logcount number.
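For option 2, a minimal Simple XML sketch of a static min/mid/max color scale on a table column (the field name logcount is taken from your description; the colors and range values are assumptions to adjust):
<format type="color" field="logcount">
  <colorPalette type="minMidMax" minColor="#53A051" midColor="#F8BE34" maxColor="#DC4E41"></colorPalette>
  <scale type="minMidMax" minValue="0" midValue="500" maxValue="1000"></scale>
</format>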
09-15-2021
03:10 AM
I have to move my custom correlation rules to a custom TA-foo app. My correlation searches comprise: custom rules created from scratch (all across the app estate - yup, it's a mess) and a few of the OOB correlation searches from the DA-ESS-, SA-, TA-, Splunk_SA_, Splunk_TA_, and Splunk_DA-ESS_ apps that were modified as per my requirements. Are there any best practices/recommendations I need to consider other than the two below (sketched after this post)?
1. Add import = TA-foo in local.meta in <Splunk_HOME>/etc/apps/SplunkEnterpriseSecuritySuite/metadata.
2. Add request.ui_dispatch_app = SplunkEnterpriseSecuritySuite in savedsearches.conf for each of the correlation searches that I migrate.
PS: I will also migrate the dependent KOs (macros/lookups etc.) in a similar fashion to the TA-foo add-on. Is there any better way to go about this, just to be future-safe for upgrades, so that I have a single touchpoint rather than chasing optimisations in each app after any activity such as a version upgrade?
Splunk version 7.3.0
ES version 5.3.1
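A minimal sketch of the two changes described above (TA-foo is the placeholder app name from the question; the stanza name is a hypothetical correlation search):
# <Splunk_HOME>/etc/apps/SplunkEnterpriseSecuritySuite/metadata/local.meta
[]
import = TA-foo
# <Splunk_HOME>/etc/apps/TA-foo/local/savedsearches.conf
[My Custom - Suspicious Activity - Rule]
request.ui_dispatch_app = SplunkEnterpriseSecuritySuite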
05-19-2021
12:58 AM
I am using the REST API Modular Input add-on to ingest data from PRTG in JSON format. It was working fine until yesterday, when I tried to add some more configs to better process the data (props.conf and inputs.conf - a minor change in the endpoint). Strangely, the data ingestion stopped after the change. The endpoint query itself is fine, since it works in the browser. I checked the splunkd logs as well, and there is nothing much I can see from the time the data stopped coming in. Did anyone face similar issues earlier?
This is my inputs.conf:
[rest://PRTG_<IP>]
activation_key = <key>
auth_type = basic
auth_user = <username>
auth_password = <password>
endpoint = https://<IP>/api/table.json?content=sensors&output=json&columns=objid,probe,group,device,host,sensor,status,message,lastvalue,priority&count=100&username=<user>&passhash=<pass>
host = <ip>
http_method = GET
index = prtg
index_error_response_codes = 0
log_level = INFO
response_type = json
sequential_mode = 0
sourcetype = mysourcetype
streaming_request = 0
disabled = 0
This is the props.conf:
[mysourcetype]
KV_MODE = JSON
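A hedged way to look for the input's own error output, assuming the modular input script reports through ExecProcessor in splunkd.log as most scripted/modular inputs do:
index=_internal sourcetype=splunkd component=ExecProcessor "rest://PRTG"
| table _time log_level _raw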
Labels: JSON, modular input
03-18-2021
08:41 AM
I have my Splunk indexer cluster on 7.3.0 and I am planning to add a new heavy forwarder. Can the new HF be the latest version of Splunk (8.1.2), or does it need to be the same version as the indexer cluster? Based on what I know, there is not much functional difference between an HF and an indexer, and since indexers should always be of the exact same version, I am a bit confused about whether there will be a version mismatch issue. I do have an upgrade plan in the pipeline, which will eventually happen in 1 or 2 months; I just wanted to know if a higher version on the HF and a lower version on the indexer cluster would work as an interim solution.
Note: I use DB Connect intensively on the HF.
Did anyone try something like this? Were there any issues?
Labels: upgrade
11-12-2020
12:31 AM
Thanks @richgalloway. I could not find any issues with any search in particular (yes, there were users with badly written searches, but those should not have had so much impact). As a test I disabled the real-time metadata search that populates the search summary page (disabled it globally so that no app has that search running), and it looks like that solved the issue.
11-03-2020
04:34 AM
My indexers are sending way too much data to my search heads (close to 500 GB a day), which is having an impact on bandwidth utilisation. From the initial investigation it seemed like some of the dashboards were running long-running searches, which I killed manually, but that only helped partially. Are there any other aspects I need to look into?
Labels: administration, troubleshooting
05-18-2020
03:16 AM
I am trying to connect to my 2nd LDAP instance using the SA-LDAPSearch app (Splunk Supporting Add-on for Active Directory 3.0.1) and am getting the error below:
External search command 'ldaptestconnection' returned error code 1. First 1000 (of 1921) bytes of script output: "error_message= # host: <hostname>: Could not access the directory service at ldaps://<hostname>:<ldaps_port>: ('unable to open socket', [(datetime.datetime(2020, 5, 18, 10, 57, 16, 524688), , LDAPSocketOpenError('socket connection error while opening: [Errno 110] Connection timed out',), ('<ip_address>', <ldaps_port>)), (datetime.datetime(2020, 5, 18, 10, 57, 31, 532624), , LDAPSocketOpenError('socket ssl wrapping error: [Errno 104] Connection reset by peer',), ('<ip_address>', <ldaps_port>)), (datetime.datetime(2020, 5, 18, 10, 59, 38, 860630), , LDAPSocketOpenError('socket connection error while opening: [Errno 110] Connection timed out',), ('<ip_address>', <ldaps_port>)), (datetime.datetime(2020, 5, 18, 10, 59, 53, 851213), , LDAPSocketOpenError('socket ssl wrapping error: [Errno 104] Connection reset by peer',), ('".
I have a working LDAP connection on a different domain that works fine and does not throw any errors.
Are there any configs I am missing, or is it an issue with connectivity from my Splunk server to the LDAP server?
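Given the socket timeouts and TLS resets in that error, a quick hedged connectivity check from the Splunk server's shell (plain openssl, nothing Splunk-specific) would be:
# test raw TCP reachability and the TLS handshake to the LDAPS port
openssl s_client -connect <hostname>:<ldaps_port> -showcerts
If this also times out or resets, the problem is network/firewall rather than SA-LDAPSearch configuration.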
04-28-2020
03:52 AM
I recently upgraded Splunk from 7.1.1 to 7.3.4 and ES from 5.2.2 to 5.3.1, but after the upgrade the Threat Activity dashboard does not show any data (data is only available up to the time Splunk was upgraded).
I can see that the Threat Intelligence data model is enabled and accelerated, but "Data Model Audit" still shows the date and time of the last point for which the dashboard shows data.
Also, I can see that the Web and Network Traffic data models do report data (these are the dependent data models for the Threat Intelligence data model).
Since this is my prod system, I have not yet tried to rebuild the data model; I wanted to know if there is any other possible fix for this.
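One hedged way to confirm whether the acceleration has stalled (assuming the data model carries the usual CIM name Threat_Intelligence) is to chart the accelerated event counts by day:
| tstats summariesonly=true count from datamodel=Threat_Intelligence by _time span=1d
If the counts stop at the upgrade date while the same query with summariesonly=false returns newer events, the summaries themselves are the problem rather than the feed.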
03-05-2020
07:08 AM
AWS has announced that it will deprecate the path-based access model that is used to specify the address of an object in an S3 bucket, and this kicks in from 30th Sept 2020.
Example:
Current format:
https://s3.amazonaws.com/jbarr-public/images/ritchie_and_thompson_pdp11.jpeg
New format:
https://jbarr-public.s3.amazonaws.com/images/ritchie_and_thompson_pdp11.jpeg
More info on this can be found here
My question is: what changes need to be made on the Splunk config side so that we continue to receive data (in my case CloudTrail) from S3 buckets with the new naming convention?
From the config files, I can only see that the only places where the "bucket_name" and "hostname" are referenced are in inputs.conf in Splunk_TA_aws.
Do I need an upgrade of the Splunk TA to support this? I am currently on Splunk TA version 4.5.0 and Splunk version 7.1.1.
01-27-2020
06:52 AM
Hi @zooky92, try something like the query below (I fixed the undefined field in the original is_spike eval by emitting a 1/0 flag instead).
index=myindex host=myhost source="*access_log" duration!="" NOT "status" earliest=-24h latest=-1h (date_hour > 00 AND date_hour < 14)
| eval apacheDuration_today=apacheDuration/1000
| stats avg(apacheDuration_today) as avg_apacheDuration_today by host
| appendcols
    [ search index=myindex host=myhost source="*access_log" duration!="" NOT "status" earliest=-8d latest=-1d (date_hour > 00 AND date_hour < 14)
    | eval apacheDuration_week=apacheDuration/1000
    | stats avg(apacheDuration_week) as avg_apacheDuration_week by host ]
| eval is_spike=if(avg_apacheDuration_today > 1.2 * avg_apacheDuration_week, 1, 0)
| table host avg_apacheDuration_today avg_apacheDuration_week is_spike
You can run it as a scheduled search at 15:00.
01-22-2020
06:32 AM
As far as I know, you need to have purchased a valid license for the Data Stream Processor before you can use Splunk DSP. Take a look here.
01-22-2020
06:22 AM
On your question about the performance impact: you might end up running into OOM issues, and your search performance might also degrade. As suggested by @to4kawa, there might be alternatives available; if you can post your query, some mock-up data, and the desired output, we can try to help you out.
01-22-2020
06:03 AM
Check web.conf in $SPLUNK_HOME/etc/system/default (and any local overrides) for the port that is set for Splunk Web - see the httpport attribute.
So if it is set to port 8002, you use http://localhost:8002.
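For reference, the setting lives under the [settings] stanza; a minimal sketch with the port from this example:
# web.conf
[settings]
httpport = 8002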
01-21-2020
03:46 AM
When you say you have configured rebalancing, did you configure your UF outputs.conf with target groups?
Do you have indexer discovery configured?
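For context, a minimal sketch of a UF outputs.conf using indexer discovery (the group name, discovery name, and manager host are placeholders):
# outputs.conf on the UF
[indexer_discovery:discovery1]
master_uri = https://<cluster_manager>:8089
pass4SymmKey = <key>
[tcpout:group1]
indexerDiscovery = discovery1
[tcpout]
defaultGroup = group1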
01-20-2020
07:43 AM
You can create a DELIMS-based extraction (transforms.conf) to extract the subfields:
[your_transform_rule]
SOURCE_KEY = _raw
DELIMS = "|"
FIELDS = Extracted_time, Error, ..., General_Agency
Then, you'd call that rule from the props.conf of your sourcetype, like this:
[your_sourcetype]
REPORT-extracted_fields = your_transform_rule
01-20-2020
07:31 AM
It is quite difficult to tell the exact message that Splunk will throw when an indexer goes down, since it might go down for a variety of reasons (maybe the disk/memory/CPU utilization spiked), but you should be able to figure it out from the splunkd logs; just look into the error logs (index=_internal source=*splunkd.log log_level=ERROR host=<indexer_host>).