All Posts

Hmm, is your ES rule looking at All Time? If so, does it need to? This could chew up quite a lot of resources.
This one actually fixed the issue. I'd been working on this for over a day without a solution.
Hi Peers. How is the compatibility between the OTel Collector and AppDynamics? Is it efficient and recommendable? If we use the OTel Collector to export data to AppD, are AppD agents still required? How does licensing work if we use OTel to export data to AppD? Is the OTel Collector compatible with both the on-premises and SaaS environments of AppD? Thanks
Hi @Praz_123 , if you're speaking of the ulimit of Splunk servers, you can use the Monitoring Console health check. If you're speaking of forwarders (Universal or Heavy, it's the same), there's no direct solution and you should use the solution from @livehybrid: a shell script input (to insert in a custom add-on) that extracts this value and sends it to the indexers. Ciao. Giuseppe
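A minimal sketch of such a scripted input, assuming a hypothetical add-on called TA-ulimit (the script name, interval, and sourcetype are all placeholders):

TA-ulimit/bin/ulimit_check.sh:

#!/bin/sh
# Emit the current open-files and max-processes limits as a key=value event
echo "open_files=$(ulimit -n) max_user_processes=$(ulimit -u)"

TA-ulimit/default/inputs.conf:

[script://./bin/ulimit_check.sh]
interval = 3600
sourcetype = ulimit
disabled = 0

Deploy the add-on to the forwarders and the values arrive at the indexers like any other event.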
Hi @raleighj , I suppose that you're using Enterprise Security; if yes, see the Security Posture dashboard to find this information. Ciao. Giuseppe
Hi @kiran_panchavat    Thanks for your response. Which server contains the `passwords.conf` file for Qualys TA (TA-QualysCloudPlatform)? I couldn't find it on the Heavy Forwarder (HF).
I'm having some issues populating the Traffic Center dashboard in Splunk ES. It's showing "Cannot read properties of undefined (reading 'map')". Does anyone have any solutions?
Hi @anglewwb35 , adding only a few pieces of information to those from @livehybrid and @kiran_panchavat: The limit for a non-dedicated Deployment Server is 50 clients: if it has to manage more than 50 clients, it must be dedicated. In addition, even with fewer than 50 clients, the load on the HF is still relevant, because the DS role is a heavy job for the machine and it could compromise the parsing activities done by the HF. Then, the Deployment Server needs a license, so you should connect it to your License Manager (not a Trial or a Free license); anyway, it doesn't consume license because it doesn't index anything locally; in fact, it's a best practice to forward all internal logs of all the machines in the Splunk infrastructure to the indexers (a minimal sketch follows). On the Heavy Forwarder, you can use a Forwarder license (not a Free license!), but only if you perform normal forwarding; if you need e.g. to use DB Connect, you need an Enterprise license, so you have to connect the HFs to the License Manager too. Ciao. Giuseppe
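As a sketch, forwarding the Deployment Server's own internal logs only requires an outputs.conf pointing at the indexers (the hostnames and ports here are placeholders):

outputs.conf on the Deployment Server:

[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997

With this in place the DS forwards its _internal data to the indexers instead of indexing it locally.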
Hi @rahulkumar , the configurations seem to be correct, but the only effective confirmation is yours: do they run? If your search gives you the events without JSON and with the correct metadata host, they are correct.

Only one additional piece of information: I see that you still have the original sourcetype httpevent, which is not useful for your parsing rules, so I suggest adding another rule to assign the correct sourcetype, always starting from the JSON fields. E.g. if you have a field called path.file.name and, when it contains the value "/var/log", those are Linux secure logs, you could use configurations like these:

props.conf

[source::http:LogStash]
sourcetype = httpevent
TRANSFORMS-00 = securelog_set_default_metadata
TRANSFORMS-01 = securelog_override_sourcetype
TRANSFORMS-02 = securelog_override_raw

transforms.conf

[securelog_set_default_metadata]
INGEST_EVAL = host = json_extract(_raw, "host.name")

[securelog_override_sourcetype]
INGEST_EVAL = sourcetype = case(json_extract(_raw, "path.file.name")=="/var/log", "linux_secure", true(), sourcetype)

[securelog_override_raw]
INGEST_EVAL = _raw = json_extract(_raw, "message")

In this way you assign the correct sourcetype to your logs. Obviously, you have to analyze your logs, identifying all the different types of logs and the rules to recognize each of them; then you can insert these rules in the case() of the second transformation. It's important that all the transformations that use JSON fields are applied before the final transformation of _raw. Ciao. Giuseppe
That is fine, just set host=<yourIndexer> after index=_introspection and you should get this.
The definition of "large" in the context of data typically depends on the specific environment and use case you're considering. In Splunk, large datasets can be assessed by various metrics, including total bytes ingested, the number of events, or records processed.

I did a talk in 2020 about scaling to 7.5TB; imagine how much it has scaled since then. There are many Splunk users running much, much bigger instances than we had too: https://conf.splunk.com/files/2020/slides/PLA1180C.pdf

Total Bytes: In many scenarios, a dataset exceeding several terabytes can be considered large. However, this threshold can vary depending on your Splunk architecture and the capabilities of your infrastructure (e.g., indexers, storage, etc.).

Number of Records: Similarly, datasets with millions to billions of records can also be categorized as large. The exact limit often depends on the performance characteristics of your Splunk deployment, such as your hardware capacity and the intended use of the data.

Performance Considerations: When assessing whether a dataset is large, consider the impact on performance. Large datasets may affect indexing speed, search performance, and dashboard loading times. It's essential to monitor how your infrastructure handles data volume and adjust your architecture as necessary to ensure efficiency.

Ultimately, defining "large" is subjective and should be based on specific business requirements, performance metrics, and the context of your Splunk implementation. For best practices in handling large datasets, review Splunk's documentation on scaling and optimizing your deployment.
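If you want to put a number on your own daily ingest, a common sketch is to sum license usage from the _internal index (run this where license_usage.log is available, typically the license manager):

index=_internal source=*license_usage.log type="Usage"
| timechart span=1d sum(b) as bytes
| eval GB = round(bytes/1024/1024/1024, 2)
| fields _time GB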
So I want total CPU usage for the indexer only.
Assuming the data runs from January 10, 2025 to January 22, N is 10 days and M is 4 times. If a user has accessed the same account every day in the first 4 days within the past 10 days, the following will be returned, so the latest alarm output will be on January 15th:

The start time is January 6th, the end time is January 15th, the accessed account is xxxxxx01, and the number of visits is 5.
The start time is January 7th, the end time is January 16th, the accessed account is xxxxxx01, and the number of visits is 5.
The start time is January 8th, the end time is January 17th, the accessed account is xxxxxx01, and the number of visits is 5.
The start time is January 9th, the end time is January 18th, the accessed account is xxxxxx01, and the number of visits is 5.
The start time is January 10th, the end time is January 19th, the accessed account is xxxxxx01, and the number of visits is 5.

Why didn't it raise an alarm afterwards? Because the condition of M greater than 4 is not met on January 11th, so the record is filtered directly and no alert is generated:

The start time is January 11th, the end time is January 20th, the accessed account is xxxxxx01, and the number of visits is 4.
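One way to sketch this sliding-window logic in SPL (the index name and the user/account field names are assumptions, N=10 days and M=4 are hard-coded, and streamstats with time_window expects events in descending time order):

index=your_access_index
| bin _time span=1d
| stats count by _time, user, account
| sort 0 - _time
| streamstats time_window=10d count as days_accessed by user, account
| where days_accessed > 4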
Hi @SN1  You can achieve this with the following search; please see the screenshot below for actual example output too.

index="_introspection" component=Hostwide earliest=-4h host=<yourHostname>
| eval cpu_usage = 'data.cpu_system_pct' + 'data.cpu_user_pct'
| timechart span=5m avg(cpu_usage) as avg_cpu_usage

I don't think the other answer provided would work because the REST endpoint does not output a time series; it's a one-time view of this data. Please let me know how you get on, and consider accepting this answer or adding karma to it if it has helped. Regards Will
Hi @anglewwb35  You aren't able to use the free license on a server that you plan to use as a Deployment Server, as deployment management capabilities are not available on the free license. There is nothing technically stopping you from using your Heavy Forwarder (HF) as a Deployment Server (DS); in fact the docs state "If the deployment server has less than 50 clients, you can co-locate the deployment server on an indexer or search head, including a distributed management console." which would also cover your use case of running the DS on your HF. So, if you are looking to run the DS on your HF and you have fewer than 50 clients then you should* be okay; however, you should read those pages of the docs to understand any caveats etc. Note: the number 50 isn't a hard limit as such (it won't just stop) but could introduce unknown issues if pushed further. Please let me know how you get on, and consider accepting this answer or adding karma to it if it has helped. Regards Will
@splunkermack  In Splunk, "large" can refer to total data ingestion (typically 100-150 GB per indexer per day), number of events (millions per day, but volume matters more), or individual event size (Splunk handles up to 100,000 bytes per event with limits on segments). High ingestion rates, oversized events, and excessive indexing can impact performance. Regular monitoring and optimization are essential for efficient data management.
@SN1  You can modify your search to aggregate cpu_usage over 4-hour intervals and visualize it.   
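For example, a minimal sketch reusing the fields from the search above (the host name is a placeholder):

index="_introspection" component=Hostwide host=<yourIndexer>
| eval cpu_usage = 'data.cpu_system_pct' + 'data.cpu_user_pct'
| timechart span=4h avg(cpu_usage) as avg_cpu_usage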
@anglewwb35  The deployment server must be on a dedicated server.  In other words, you have to use it only to manage clients, it isn't relevant that you disable the other roles (dedicated server me... See more...
@anglewwb35  The deployment server must be on a dedicated server. In other words, you have to use it only to manage clients; it isn't enough to disable the other roles (dedicated means exactly this requirement: don't use it for any additional role, not even forwarding!)
@anglewwb35  For a heavy forwarder (HF), you should set up one of the following options:

1) Make the HF a slave of a license master. This will give the HF all of the enterprise capabilities, and the HF will consume no license as long as it does not index data.

2) Install the forwarder license. This will give the HF many enterprise capabilities, but not all. The HF will be able to parse and forward data. However, it will not be permitted to index and it will not be able to act as a deployment server (as an example). This is the option I would usually choose. (Note that the Universal Forwarder has the forwarder license pre-installed.)

I strongly discourage using either the trial license or the free license on a production forwarder. Licenses and distributed deployments - Splunk Documentation
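For option 1, a sketch of the server.conf stanza on the HF (the hostname is a placeholder; recent Splunk versions use manager_uri, older ones master_uri):

server.conf on the HF:

[license]
manager_uri = https://license-manager.example.com:8089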
I would like to know if it is possible to use the same machine for both a Deployment Server and a Heavy Forwarder. If so, would I need two different licenses for this machine? Or can I simply use the Forwarder license while utilizing the functions of both the Deployment Server and Heavy Forwarder? Thank you so much