Activity Feed
- Karma Re: Do we need to install the "admin's little helper" app on the indexers, SHs, and HFs? for livehybrid. Friday
- Karma Re: Do we need to install the "admin's little helper" app on the indexers, SHs, and HFs? for gcusello. Friday
- Posted Re: How do I set up a KV Store lookup? on Getting Data In. Friday
- Posted Do we need to install the "admin's little helper" app on the indexers, SHs, and HFs? on All Apps and Add-ons. Friday
- Karma Re: How do I set up a KV Store lookup? for PickleRick. Thursday
- Karma Re: How do I set up a KV Store lookup? for livehybrid. Thursday
- Karma Re: How do I set up a KV Store lookup? for isoutamo. Thursday
- Posted How do I set up a KV Store lookup? on Getting Data In. Wednesday
- Posted Re: Do we need to run an indexer rolling restart when getting new HEC data stream? on Getting Data In. 4 weeks ago
- Posted Re: How to configure the load balancer to handle HEC data? on Getting Data In. 4 weeks ago
- Karma Re: Is saturation level fine as a preparation for additional HEC data stream? for livehybrid. 4 weeks ago
- Posted Do we need to run an indexer rolling restart when getting new HEC data stream? on Getting Data In. 4 weeks ago
- Posted How to configure the load balancer to handle HEC data? on Getting Data In. 4 weeks ago
- Posted Is saturation level fine as a preparation for additional HEC data stream? on Splunk Enterprise. 4 weeks ago
- Posted Integration between Confluent and Splunk on Getting Data In. 02-26-2025 08:15 AM
- Karma Re: What app can execute the btool command on the cloud? for livehybrid. 02-26-2025 08:11 AM
- Posted What app can execute the btool command on the cloud? on Splunk Cloud Platform. 02-25-2025 10:07 AM
- Tagged What app can execute the btool command on the cloud? on Splunk Cloud Platform. 02-25-2025 10:07 AM
- Posted Re: Why don't I see my three Splunk servers via the rest call? on Splunk Search. 02-17-2025 08:42 AM
- Karma Re: Why don't I see my three Splunk servers via the rest call? for richgalloway. 02-16-2025 11:03 AM
Topics I've Started
05-17-2024 01:43 PM
We want to add a TA (app) to our indexers at the path /opt/splunk/etc/master-apps by running the command /opt/splunk/bin/splunk apply cluster-bundle. My question is: can we deploy an indexer app without restarting the indexers? The TA we want to deploy is an extension to the nix TA, and all it does is run some simple bash scripted inputs.
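If the question is whether the bundle push forced a restart, a hedged check: splunkd writes a start-up message to _internal, so the last start time per indexer can be compared with the time of the apply (the message text and component are assumptions to verify on your version):

```
index=_internal sourcetype=splunkd component=loader "Splunkd starting"
| stats latest(_time) as last_start by host
| eval last_start = strftime(last_start, "%F %T")
```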
Labels:
- installation
04-29-2024 12:16 PM
We are in the midst of a migration from physical servers to virtual servers, and we wonder if stopping Splunk is mandatory in order to perform the cold data migration, or if there is a workaround so this can be safely done without stopping Splunk.
Labels:
- administration
04-07-2024 11:19 AM
Since we are in the early stages of using Splunk Cloud, we don't define props.conf as part of the onboarding process; we introduce props for a sourcetype only when parsing goes wrong. For one such sourcetype, line breaking is broken, yet when looking for it on the indexers (via support) and on the on-prem HF where the data is ingested, I cannot find any props for this sourcetype. So I wonder: is it possible for a particular sourcetype to have no props defined anywhere?
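A hedged way to hunt for the props wherever search-time visibility allows, using the configs REST endpoint (the sourcetype name is a placeholder; on Splunk Cloud, splunk_server=* only reaches the instances your search head can see):

```
| rest /services/configs/conf-props splunk_server=*
| search title="my_sourcetype"
| table splunk_server title LINE_BREAKER SHOULD_LINEMERGE TRUNCATE
```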
Labels:
- props.conf
03-04-2024 10:21 AM
It comes down to whether Kafka can transport UDP and TCP reliably from all the devices to the various Splunk clusters.
03-03-2024 01:10 PM
With syslog-ng we hit all kinds of limitations: the inability to support TCP, the inability to write to disk fast enough (and therefore losing vast amounts of UDP data), and struggling to get the various F5 LBs to distribute the data evenly to the syslog servers behind them... and all of this led to Kafka as a potential solution. Does anybody use Kafka effectively instead of syslog-ng? By the way, we did look at Splunk Connect for Syslog (SC4S) without much luck.
Labels:
- using Splunk Enterprise
02-21-2024 10:05 AM
Thank you @richgalloway, you are helpful as always.
02-21-2024 07:29 AM
I just realized that the NIX TA is being deployed to our forwarders via the deployment apps, to the indexers via the master apps, and to the SHs via the SH apps. It was a surprise to realize that the TA is not being deployed to the deployment server, the deployer, the license master, or the cluster master. So how can the TA be deployed to all Splunk servers?
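For inventorying where the TA actually is, a hedged sketch (assuming the app ID is Splunk_TA_nix; note that splunk_server=* only reaches the search head and its search peers, which is exactly why the management-tier servers need their own distribution path, e.g. being deployment clients themselves):

```
| rest /services/apps/local splunk_server=*
| search title="Splunk_TA_nix"
| table splunk_server title version
```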
Labels:
- installation
01-29-2024 01:10 PM
Sorry for not being clear. The process of building these VMs is ongoing, and during this process we would like to be absolutely sure that these VMs are ready to host all the Splunk components. For example, our SE suggested checking the following:
- THP turned off
- All internal logs forwarded to the indexer tier
- MC set up
- Splunk running as the correct user
- Splunk restart enabled
So I wonder what would be the best way to ensure that these VMs are built correctly? I'm thinking about a scripted input and a dedicated dashboard to monitor and verify all the settings (for the THP item, see the sketch below). Do you have any other suggestions, by any chance?
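For the THP item, a hedged check: splunkd records the transparent-hugepage state in splunkd.log at startup, so it can be pulled onto a dashboard (the component and message wording are assumptions to verify on your version):

```
index=_internal sourcetype=splunkd component=ulimit hugepage
| stats latest(_raw) as thp_status by host
```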
01-26-2024 11:11 AM
We are in the midst of a virtualization project, and we are looking for a way to sanity check all the different components. I know that the MC does some of it, but I’m not sure if it covers all aspects. I’m thinking about scripted input, and a dedicated dashboard to monitor and verify all the settings. Do you have any other suggestions, by any chance?
Tags:
- validations
Labels:
- administration
- configuration
- installation
12-18-2023 10:50 AM
We use the free version of syslog-ng, and recently we had a requirement to run TLS on top of TCP, which we don't have the knowledge to implement. Therefore we wonder: has anybody migrated from syslog-ng to SC4S to handle more advanced requirements such as TCP/TLS? https://splunkbase.splunk.com/app/4740 If so, what were the lessons learned and the motivations for doing so?
Labels:
- administration
12-12-2023 07:13 AM
Thank you all. So, we have the concept of regions, and our Splunk architecture revolves around it. Take the European one: it has all the Splunk data of Europe in its European indexer cluster, and because of that I asked whether each region should have its own cluster master or whether they can share one. If they share, how can I figure out how many buckets the cluster handles, so that we won't reach the one million?
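A hedged sketch for sizing that from a search head connected to the cluster: dbinspect enumerates buckets per peer, and bucketId identifies the origin bucket, so distinct bucketId values approximate the cluster's bucket count (verify against the CM's own numbers):

```
| dbinspect index=*
| stats dc(bucketId) as unique_buckets count as total_copies
```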
12-11-2023 01:08 PM
We are in the process of virtualizing our environments, and we are facing the question of whether to use multiple cluster masters or fewer cluster masters that each serve more indexers. However, we don't know how to go about it. Hence the question: what are the scalability rules for a cluster master?
Labels:
- installation
11-29-2023 12:42 PM
We are planning a migration of the hot drives to faster disks for all indexers. The current plan is below, and I wonder if it makes sense, because the plan applies maintenance mode per indexer, and I grew up with keeping maintenance mode on for the entire operation. Any thoughts? The current plan:

Event monitoring:
- Suspend monitoring of alerts on the indexers

(Repeat the following for each indexer, one at a time)

Splunk Ops:
- Put the Splunk cluster in maintenance mode
- Stop the Splunk service on one indexer

VM Ops:
- vMotion the existing 2.5TB disk to any Unity datastore
- Provision a new 2.5TB VM disk from the VSAN datastore

Linux Ops:
- Rename the existing hot data logical volume "/opt/splunk/hot-data" to "/opt/splunk/hot-data-old"
- Create a new volume group and mount the new 2.5TB disk as "/opt/splunk/hot-data"

Splunk Ops:
- Restart the Splunk service on the indexer
- Take the indexer cluster out of maintenance mode
- Review the cluster master to confirm the indexer is processing and rebalancing has started as expected
- Wait a few minutes to allow Splunk to rebalance across all indexers

(Return to the top and repeat the steps for the next indexer)

Splunk Ops:
- Validate service and perform test searches
- Check the CM panel -> Resources/usage/machine (bottom panel, IOWait Times) and monitor changes in IOWait

Event monitoring:
- Re-enable monitoring of alerts on the indexers

In addition, Splunk PS suggested using splunk offline --enforce-counts. I'm not sure it's the right way, since it might need to migrate the ~40TB of cold data, which would slow the entire operation.
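For the maintenance-mode steps, a hedged way to confirm the state from a search rather than the CM UI, assuming the standard clustering endpoint is reachable (run it where the cluster master answers; field names worth verifying on your version):

```
| rest /services/cluster/master/info
| table splunk_server maintenance_mode rolling_restart_flag
```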
Tags:
- Maintenance mode
Labels:
- administration
- upgrade
09-29-2023 11:24 AM
We ran into the known issue of AD servers having indexing delays of a couple of days when enabling evt_resolve_ad_obj. What confuses us is that a UF restart backfills the days of missing security data, and after the restart we can go a week with no delays at all. Why does the restart manage to do this backfill?
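A hedged sketch for quantifying that lag by comparing event time to index time (the index name follows the wineventlog_security naming used elsewhere in these posts; adjust as needed):

```
index=wineventlog_security
| eval lag_minutes = (_indextime - _time) / 60
| timechart span=1h max(lag_minutes) as max_lag avg(lag_minutes) as avg_lag
```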
09-24-2023 12:54 PM
With MLTK, when looking at accumulated runtime, the outliers are detected cleanly (three out of three spikes), whereas with the anomaly detection app, only two of the three spikes are detected (along with one false positive, even at medium sensitivity). The code generated by the MLTK is as follows:

index=_audit host=XXXXXXXX action=search info=completed
| table _time host total_run_time savedsearch_name
| eval total_run_time_mins=total_run_time/60
| convert ctime(search_*)
| eval savedsearch_name=if(savedsearch_name="","Ad-hoc",savedsearch_name)
| search savedsearch_name!="_ACCEL*" AND savedsearch_name!="Ad-hoc"
| timechart span=30m median(total_run_time_mins)
| eval "atf_hour_of_day"=strftime(_time, "%H"), "atf_day_of_week"=strftime(_time, "%w-%A"), "atf_day_of_month"=strftime(_time, "%e"), "atf_month"=strftime(_time, "%m-%B")
| eventstats dc("atf_hour_of_day"), dc("atf_day_of_week"), dc("atf_day_of_month"), dc("atf_month")
| eval "atf_hour_of_day"=if('dc(atf_hour_of_day)'<2, null(), 'atf_hour_of_day'), "atf_day_of_week"=if('dc(atf_day_of_week)'<2, null(), 'atf_day_of_week'), "atf_day_of_month"=if('dc(atf_day_of_month)'<2, null(), 'atf_day_of_month'), "atf_month"=if('dc(atf_month)'<2, null(), 'atf_month')
| fields - "dc(atf_hour_of_day)", "dc(atf_day_of_week)", "dc(atf_day_of_month)", "dc(atf_month)"
| eval "_atf_hour_of_day_copy"=atf_hour_of_day, "_atf_day_of_week_copy"=atf_day_of_week, "_atf_day_of_month_copy"=atf_day_of_month, "_atf_month_copy"=atf_month
| fields - "atf_hour_of_day", "atf_day_of_week", "atf_day_of_month", "atf_month"
| rename "_atf_hour_of_day_copy" as "atf_hour_of_day", "_atf_day_of_week_copy" as "atf_day_of_week", "_atf_day_of_month_copy" as "atf_day_of_month", "_atf_month_copy" as "atf_month"
| fit DensityFunction "median(total_run_time_mins)" by "atf_hour_of_day" dist=expon threshold=0.01 show_density=true show_options="feature_variables,split_by,params" into "_exp_draft_ca4283816029483bb0ebe68319e5c3e7"

And the code generated by the anomaly detection app (same base search as above):

| dedup _time
| sort 0 _time
| table _time XXXX
| interpolatemissingvalues value_field="XXXX"
| fit AutoAnomalyDetection XXXX job_name=test sensitivity=1
| table _time, XXXX, isOutlier, anomConf

The major code difference is that with MLTK we use:

| fit DensityFunction "median(total_run_time_mins)" by "atf_hour_of_day" dist=expon threshold=0.01 show_density=true show_options="feature_variables,split_by,params" into "_exp_draft_ca4283816029483bb0ebe68319e5c3e7"

whereas with the anomaly detection app we use:

| fit AutoAnomalyDetection XXXX job_name=test sensitivity=1
| table _time, XXXX, isOutlier, anomConf

Any ideas why one fit uses DensityFunction and the other AutoAnomalyDetection, with different parameters, and why the results are different?
Labels:
- dashboard
09-24-2023 11:56 AM
@ljvc Thank you for the direction
09-17-2023 01:58 PM
We have a case of an hour-long delay for a certain index that happened last week, while indexing delays are normally up to half a minute. I'm struggling with the MLTK parameters to capture these specific cases as outliers. Any ideas how to set it up correctly? It's the tolerance that seems to be affected by the spike itself.
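A hedged starting point, assuming the goal is to flag indexing-lag spikes with the MLTK's DensityFunction (the index name, span, and threshold are placeholders; a lower threshold marks only the rarest values as outliers, which helps when the spike itself inflates the fitted distribution):

```
index=your_index
| eval lag_minutes = (_indextime - _time) / 60
| timechart span=10m max(lag_minutes) as max_lag
| fit DensityFunction max_lag dist=auto threshold=0.001
```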
Labels:
- dashboard
09-06-2023 11:22 AM
@VatsalJagani I looked in a couple of environments and I don't see it as an automatic lookup. Any ideas?
09-05-2023 12:53 PM
In Step 2 "Add the Dataset" of "Create Anomaly Job" within the Splunk App for Anomaly Detection, when running the following SPL, we get a warning:

index=wineventlog_security
| timechart count

"Could not load lookup=LOOKUP-HTTP_STATUS No matching fields exist."

What can it be? We use the following versions:
- Splunk App for Anomaly Detection: 1.1.0
- Python for Scientific Computing: 4.1.2
- Splunk Machine Learning Toolkit: 5.4.0
Labels:
- lookup
07-20-2023 11:46 AM
We have a large (~500 line) report being used to calculate CVE scores and fill a summary index daily, with vulnerabilities from Qualys as the initial input that gets enriched.
One of the fields in the output is called ICT_ID, and it is supposed to match the vulnerabilities from Qualys against a lookup CSV file. If the vulnerability matches, it gets a corresponding ICT_ID; otherwise this field is NULL.
The issue is that our lookup file has no unique primary keys. QID (Qualys vuln ID) is the closest thing to a PK in the lookup, but there are multiple rows with the same QID where other fields like IP and host differ.
The requirement for matching a vulnerability to the ICT list is two-fold: 1) the QID must match, and 2) *any* of the following must also match (host, IP, app), *in that order of precedence*.
-----------------
The following code was implemented, but it seems to only match on the first matching instance of QID in the lookup, which usually breaks the rest of the logic for the other fields that should match.
``` precedence: (host -> ip -> app -> assetType) && QID ```
| join type=left QID
[
| inputlookup ICT_LOOKUP.csv
| rename "Exp Date" as Exp_Date
| rename * as ICT_LOOKUP-*
| rename ICT_LOOKUP-QID as QID
]
```| rename fields from lookup as VM_ICT_Tracker-<field>```
| eval ICTNumber = case(
like(entity,'ICT_LOOKUP-host'), 'ICT_LOOKUP-ICTNumber',
like(IP,'ICT_LOOKUP-ip'), 'ICT_LOOKUP-ICTNumber',
like(app,'ICT_LOOKUP-app'), 'ICT_LOOKUP-ICTNumber',
like(assetType,'ICT_LOOKUP-assetType'), 'ICT_LOOKUP-ICTNumber',
1=1,"NULL"
)
| rename ICTNumber as ICT_ID
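A hedged alternative that avoids join entirely, assuming the CSV columns are QID, host, ip, app, assetType, and ICTNumber (as the renames above suggest) and that ICT_LOOKUP.csv is usable by the lookup command: lookup only returns the output field when all of its input fields match a row, which encodes the "QID AND one of the others" requirement, and coalesce then applies the precedence order:

```
| lookup ICT_LOOKUP.csv QID host as entity OUTPUT ICTNumber as ict_by_host
| lookup ICT_LOOKUP.csv QID ip as IP OUTPUT ICTNumber as ict_by_ip
| lookup ICT_LOOKUP.csv QID app OUTPUT ICTNumber as ict_by_app
| lookup ICT_LOOKUP.csv QID assetType OUTPUT ICTNumber as ict_by_asset
| eval ICT_ID = coalesce(ict_by_host, ict_by_ip, ict_by_app, ict_by_asset, "NULL")
```

If several lookup rows share the same QID plus match field, the output becomes multivalued, which may need an extra mvdedup or mvindex step.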
06-12-2023 08:43 AM
We need to call a search via the API and return a link to a report produced by this call. Is it doable? I have something like the following, which returns the result set as JSON, and the requirement is to return it as a link to a report instead:

curl -k -u 'moogsoft_smart_triage_user:xxxxxx' https://<host>:8089/servicesNS/moogsoft_smart_triage_user/search/search/jobs/export -d search="| savedsearch smart_triage_api_test INC=INCxxxx DeviceType=TestDeviceType" -d output_mode=json -d preview=false
Tags:
- link to a report
06-06-2023 11:46 AM
We don't like to give temporary Splunk contractors blind access to the MC. They need something like the following:
- Search performance and distributed search framework
- Indexing performance
- Operating system resource usage
- Splunk KV store performance
- Index and volume usage
- Forwarder connections and network performance
Is there a way to access this information from outside the MC, via the API or any other way? (See the sketch below.)
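Much of what the MC displays is driven by searches over _internal and _introspection, so similar panels can be rebuilt on a regular dashboard with tighter role-based access. A hedged sketch for the operating-system item, assuming the standard Hostwide resource-usage introspection data (field names worth verifying on your version):

```
index=_introspection sourcetype=splunk_resource_usage component=Hostwide
| timechart span=5m avg(data.cpu_system_pct) as avg_cpu_system_pct by host
```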
Tags:
- MC data
Labels:
- troubleshooting
06-02-2023 12:05 PM
We have indexes with a retention of a year each, and when looking at _audit, it's pretty obvious that the queries against these indexes are mostly either daily or weekly. We would like to present to management the "waste" of storage (and potentially make a case for using cheaper storage for older data, i.e., older than 3 months). Is anybody aware of a visualization that does this? We tried to combine the information from _audit with the retention information we got via REST, but we don't have a cohesive view that would be appealing to management. Any ideas?
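A hedged building block for the storage side of that view: dbinspect reports per-bucket size and time span, so disk usage can be banded by data age (the 90-day cut-off mirrors the 3-month idea above; this is a sketch, not the full management view):

```
| dbinspect index=*
| eval age_days = (now() - endEpoch) / 86400
| eval age_band = case(age_days <= 90, "0-90d", age_days <= 180, "90-180d", 1=1, "180d+")
| stats sum(sizeOnDiskMB) as size_mb by index age_band
```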
Labels:
- dashboard
06-01-2023 06:26 AM
I need a query that will provide the earliest date for data within an index, as well as the indexer it is stored on; specifically, I'm looking for which indexes are storing data for one year or more, and on which indexers. Any ideas?
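A hedged sketch along the lines of this post's tstats label (tstats can split by splunk_server, and relative_time sets the one-year threshold):

```
| tstats min(_time) as earliest_epoch where index=* by index splunk_server
| where earliest_epoch <= relative_time(now(), "-1y")
| eval earliest = strftime(earliest_epoch, "%F")
| sort earliest_epoch
```

dbinspect is an alternative if bucket-level detail (e.g., per-bucket size) is also needed.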
Tags:
- data storage
Labels:
- tstats
05-09-2023 11:35 AM
A colleague of mine uses the following dedup version:

| strcat entity "-" IP "-" QID "-" Port "-" Tracking_Method "-" Last_Detected Key
| dedup Key

And I grew up with:

| dedup entity IP QID Port Tracking_Method Last_Detected

One caveat is that Tracking_Method doesn't always exist. So which version is better?
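One hedged observation on that caveat: dedup by default drops events where any listed field is null, while strcat (with its default allrequired=f) treats a missing field as an empty string, so the two versions can keep different rows when Tracking_Method is absent. dedup's keepempty option brings the first behavior closer to the second:

```
| dedup keepempty=true entity IP QID Port Tracking_Method Last_Detected
```

Note that keepempty=true retains every event with a null field rather than deduplicating them against each other, so the results are still not guaranteed to be identical.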
Tags:
- version of dedup