All Posts


Hi @anglewwb35 , adding just a little information to what @livehybrid and @kiran_panchavat have already said: the threshold for a dedicated Deployment Server is 50 clients to manage: if it has to manage more than 50 clients it must be dedicated. In addition, even with fewer than 50 clients, the load on the HF is still relevant, because the DS role is a heavy job for the machine and you could compromise the parsing activities done by the HF. Then, the Deployment Server needs a license, so you should connect it to your License Manager (not using a Trial or a Free License); anyway it doesn't consume license because it doesn't index anything locally; in fact it's a best practice to forward all internal logs of every machine in the Splunk infrastructure to the Indexers. On the Heavy Forwarder, you can use a Forwarder License (not a Free License!), but only if you perform plain forwarding; if you need e.g. to use DB Connect, you need a full license, so you have to connect the HFs to the License Manager as well. Ciao. Giuseppe
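If you want to check that the internal logs of your DS and HF actually reach the indexers, a quick search from a search head can confirm it. This is only a minimal sketch; the host names in angle brackets are placeholders you would replace with your own machines:

index=_internal (host=<yourDeploymentServer> OR host=<yourHeavyForwarder>) earliest=-1h
| stats count latest(_time) as last_seen by host, sourcetype
| convert ctime(last_seen)

If a host is missing from the results, its outputs.conf or internal log forwarding probably needs another look.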
Hi @rahulkumar , the configurations seem to be correct, but the only effective confirmation is yours: do they run? If your search gives you the events without the JSON wrapper and with the correct host metadata, they are correct. Only one additional piece of information: I see that you still have the original sourcetype httpevent, which is not useful for your parsing rules, so I suggest adding another rule to assign the correct sourcetype, again starting from the JSON fields. E.g. if you have a field called path.file.name and, when its value is "/var/log", the events are Linux secure logs, you could use these configurations (the sourcetype name linux_secure is just an example):

props.conf

[source::http:LogStash]
sourcetype = httpevent
TRANSFORMS-00 = securelog_set_default_metadata
TRANSFORMS-01 = securelog_override_sourcetype
TRANSFORMS-02 = securelog_override_raw

transforms.conf

[securelog_set_default_metadata]
INGEST_EVAL = host = json_extract(_raw, "host.name")

[securelog_override_sourcetype]
INGEST_EVAL = sourcetype = case(json_extract(_raw, "path.file.name")=="/var/log", "linux_secure", true(), sourcetype)

[securelog_override_raw]
INGEST_EVAL = _raw = json_extract(_raw, "message")

In this way you assign the correct sourcetype to your logs. Obviously, you have to analyze your logs, identify all the different types and the rule that identifies each of them, and then insert those rules in the case() of the second transformation. It's important that all the transformations that use JSON fields are applied before the final transformation of _raw. Ciao. Giuseppe
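As a quick way to check the JSON paths before deploying the ingest-time rules, you can test json_extract() at search time with a synthetic event. This is a minimal sketch, assuming a Splunk version recent enough to have the json_* eval functions; the sample JSON below is invented purely for illustration:

| makeresults
| eval _raw="{\"host\":{\"name\":\"web01\"},\"path\":{\"file\":{\"name\":\"/var/log\"}},\"message\":\"Jan 22 10:15:01 web01 sshd[123]: Accepted password\"}"
| eval host=json_extract(_raw, "host.name"), file=json_extract(_raw, "path.file.name"), new_raw=json_extract(_raw, "message")
| table host file new_raw

If the extracted values look right here, the same paths should behave the same way in INGEST_EVAL.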
That is fine, just set host=<yourIndexer> after index=_introspection and you should get this.
The definition of "large" in the context of data typically depends on the specific environment and use case you're considering. In Splunk, large datasets can be assessed by various metrics, including... See more...
The definition of "large" in the context of data typically depends on the specific environment and use case you're considering. In Splunk, large datasets can be assessed by various metrics, including total bytes ingested, the number of events, or records processed. I did a talk in 2020 about scaling to 7.5TB, imagine how much it has scaled since then There are many Splunk users running much much bigger instances than we had too.. https://conf.splunk.com/files/2020/slides/PLA1180C.pdf Total Bytes: In many scenarios, a dataset exceeding several terabytes can be considered large. However, this threshold can vary depending on your Splunk architecture and the capabilities of your infrastructure (e.g., indexers, storage, etc.). Number of Records: Similarly, datasets with millions to billions of records can also be categorized as large. The exact limit often depends on the performance characteristics of your Splunk deployment, such as your hardware capacity and the intended use of the data. Performance Considerations: When assessing whether a dataset is large, consider the impact on performance. Large datasets may affect indexing speed, search performance, and dashboard loading times. It's essential to monitor how your infrastructure handles data volume and adjust your architecture as necessary to ensure efficiency. Ultimately, defining "large" is subjective and should be based on specific business requirements, performance metrics, and the context of your Splunk implementation. For best practices in handling large datasets, review Splunk's documentation on scaling and optimizing your deployment.
So I want the total CPU usage for the indexer only.
Assuming the data runs from January 10, 2025 to January 22, N is 10 days and M is 4 times. If a user has accessed the same account every day in the first 4 days within the past 10 days, the following will be returned, so the latest alarm output will be on January 15th:

The start time is January 6th, the end time is January 15th, the accessed account is xxxxxx01, and the number of visits is 5.
The start time is January 7th, the end time is January 16th, the accessed account is xxxxxx01, and the number of visits is 5.
The start time is January 8th, the end time is January 17th, the accessed account is xxxxxx01, and the number of visits is 5.
The start time is January 9th, the end time is January 18th, the accessed account is xxxxxx01, and the number of visits is 5.
The start time is January 10th, the end time is January 10th, the accessed account is xxxxxx01, and the number of visits is 5.

Why didn't it raise an alarm afterwards? Because the condition "M greater than 4" is not met on January 11th, that row is filtered out directly and no record is generated:

The start time is January 11th, the end time is January 10th, the accessed account is xxxxxx01, and the number of visits is 4.
Hi @SN1  You can achieve this with the following search; please see the screenshot below for actual example output too.

index="_introspection" component=Hostwide earliest=-4h host=<yourHostname>
| eval cpu_usage = 'data.cpu_system_pct' + 'data.cpu_user_pct'
| timechart span=5m avg(cpu_usage) as avg_cpu_usage

I don't think the other answer provided would work, because the REST endpoint does not output a time series; it's a one-time view of this data. Please let me know how you get on, and consider accepting this answer or adding karma to it if it has helped. Regards Will
Hi @anglewwb35  You aren't able to use the free license on a server that you plan to use as a Deployment Server, as deployment management capabilities are not available on the free license. There is nothing technically stopping you from using your Heavy Forwarder (HF) as a Deployment Server (DS); in fact the docs state "If the deployment server has less than 50 clients, you can co-locate the deployment server on an indexer or search head, including a distributed management console.", which would also cover your use case of running the DS on your HF. So, if you are looking to run the DS on your HF and you have fewer than 50 clients, then you should* be okay; however, you should read those pages of the docs to understand any caveats etc. Note: the number 50 isn't a hard limit as such (it won't just stop), but going beyond it could introduce unknown issues. Please let me know how you get on, and consider accepting this answer or adding karma to it if it has helped. Regards Will
@splunkermack  In Splunk, "large" can refer to total data ingestion (typically 100-150 GB per indexer per day), number of events (millions per day, but volume matters more), or individual event size... See more...
@splunkermack  In Splunk, "large" can refer to total data ingestion (typically 100-150 GB per indexer per day), number of events (millions per day, but volume matters more), or individual event size (Splunk handles up to 100,000 bytes per event with limits on segments). High ingestion rates, oversized events, and excessive indexing can impact performance. Regular monitoring and optimization are essential for efficient data management.
@SN1  You can modify your search to aggregate cpu_usage over 4-hour intervals and visualize it.   
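For example, building on the _introspection approach shown above, a 4-hour aggregation could look like the following. This is a minimal sketch; the host name is a placeholder and the time range is only illustrative:

index="_introspection" component=Hostwide host=<yourIndexer> earliest=-7d
| eval cpu_usage = 'data.cpu_system_pct' + 'data.cpu_user_pct'
| timechart span=4h avg(cpu_usage) as avg_cpu_usage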
@anglewwb35  The deployment server must be on a dedicated server. In other words, you have to use it only to manage clients; it isn't enough to simply disable the other roles (dedicated means exactly that requirement: don't use it for any additional role, forwarding included!)
@anglewwb35  For a heavy forwarder (HF), you should set up one of the following options:

1) Make the HF a slave of a license master. This will give the HF all of the enterprise capabilities, and the HF will consume no license as long as it does not index data.

2) Install the forwarder license. This will give the HF many enterprise capabilities, but not all. The HF will be able to parse and forward data. However, it will not be permitted to index and it will not be able to act as a deployment server (as an example). This is the option I would usually choose. (Note that the Universal Forwarder has the forwarder license pre-installed.)

I strongly discourage using either the trial license or the free license on a production forwarder. Licenses and distributed deployments - Splunk Documentation
I would like to know if it is possible to use the same machine for both a Deployment Server and a Heavy Forwarder. If so, would I need two different licenses for this machine? Or can I simply use the Forwarder license while utilizing the functions of both the Deployment Server and the Heavy Forwarder? Thank you so much.
Hello, I have this search:

| rest splunk_server=MSE-SVSPLUNKI01 /services/server/status/resource-usage/hostwide
| eval cpu_usage = cpu_system_pct + cpu_user_pct
| where cpu_usage > 10

I want this search to give a graph visualization of total cpu_usage every 4 hours.
Try using eventstats instead of join to keep both sent and received transactions; coalesce helps handle null values. This approach avoids the lookup and maintains full data visibility while ensuring the correct filtering of accounts.
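As an illustration of that pattern, a sketch could look like the following. The index, the field names (sender_account, receiver_account, amount), and the filter are all invented for this example and would need to be replaced with your actual fields:

index=<your_transactions_index>
| eval account=coalesce(sender_account, receiver_account)
| eventstats sum(amount) as total_amount count as txn_count by account
| where txn_count > 1
| table _time account amount total_amount txn_count

Because eventstats adds the aggregates to every event instead of collapsing them, both the sent and the received side of each transaction stay visible for later filtering.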
Hi there, I finally found the solution! To hide the Splunk bar in the React app, you just need to pass some parameters. In my case, I added them in index.jsx (where I render all my components), and it worked for me:

{
  hideChrome: true,
  pageTitle: "Splunk React app",
  theme,
  hideSplunkBar: true
}
It added a table like this:

info_max_time = +Infinity
info_min_time = 0.000
info_search_time = 17398492392.991
info_sid = 123123412132323

Is it because min_time = 0 and max_time = +Infinity? And what would be the solution?
What is the definition of large? Is it measured in total bytes? Number of records? And in either case how much?
Thanks for your reply. Since I don't have the privilege to see that, I will follow up on this issue first. If it is solved, I will give you upvote/karma points. Thanks, Zake
@zksvc  Verify that the new user is replicated across all search heads in the cluster. You can use the splunk show shcluster-status command to check the status of your search head cluster and ensure all members are in sync. Use the Monitoring Console to view the status of your search head cluster and identify any issues with job execution. Please check these:

Solved: Why is a Search Head Cluster Member not replicatin... - Splunk Community
Use the monitoring console to view search head cluster status and troubleshoot issues - Splunk Documentation
Solved: Trying to run a search, why are we getting a "Queu... - Splunk Community
limits.conf - Splunk Documentation