All Posts

Split, yes, sorry, it was 3am (one of those "argh" moments). As part of the automation I need to build to grow/shrink the disk, this piece is key. In an ideal world Splunk would tell the automation that it needs to grow/shrink the volume on the cluster nodes; the automation would then update the Splunk .conf files to set maxTotalDataSizeMB to less than the total disk now available on each cluster node, and then adjust the .conf for each index. Key to this is scanning all indexes: get the daily compression ratio of the TSIDX files, the compression ratio of the raw data, and the daily data throughput per index. For me I need 90 days of data, so I'll build a safety factor into this.
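A possible starting point for those per-index numbers, sketched below under the assumption that dbinspect and license_usage.log are available in your environment (field names are the documented ones, but treat this as a sketch to adapt, not a finished solution).

On-disk vs. raw size per index:

| dbinspect index=*
| stats sum(rawSize) AS raw_bytes sum(sizeOnDiskMB) AS disk_mb BY index
| eval raw_mb = round(raw_bytes/1024/1024, 2)
| eval disk_to_raw_ratio = round(disk_mb / raw_mb, 2)

Daily ingest per index:

index=_internal source=*license_usage.log type=Usage
| bin _time span=1d
| stats sum(b) AS bytes BY _time, idx
| eval ingest_mb_per_day = round(bytes/1024/1024, 2)

Average daily ingest multiplied by 90 days and by the observed disk-to-raw ratio, plus your safety factor, gives a first estimate for maxTotalDataSizeMB per index.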
Please share some actual (anonymised) events so we can see what you are actually dealing with. Also, provide an example of the type of output you are looking for.
Hello everyone! I'm trying to create a dashboard and set some tokens through JavaScript. I have some HTML text inputs and I want them, on the click of a button, to set the corresponding tokens to the entered values. However, when I click the button again, the click event doesn't trigger. Can you help me?

require([
    'underscore',
    'jquery',
    'splunkjs/mvc',
    'splunkjs/mvc/simplexml/ready!'
], function (_, $, mvc) {
    function setToken(name, value) {
        mvc.Components.get("default").set(name, value);
        mvc.Components.get('submitted', {create: true}).set(name, value);
    }

    /* ----------------------- */

    let prefix = mvc.Components.get("default").get('personal_input_prefix') ?? "personal_";

    // Setting tokens for Inputs with prefix ${prefix}
    $('#personal_submit').on('click', function(e){
        e.preventDefault();
        console.log("CLICKED");
        let input_text = $("input[type=text]");
        for (let element of input_text) {
            let id = element.id;
            if (id !== undefined && id.startsWith(prefix)){
                let value = element.value;
                setToken(`${id}_token`, value); // <---
                document.getElementById(`${id}_token_id`).innerHTML = value; // Set token ${id}_token to value ${value}
            }
        }
    });
});

DASHBOARD EXAMPLE:

<form version="1.1" theme="light" script="test.js">
  <label>Dashboard test</label>
  <row>
    <panel>
      <html>
        <input id="personal_valueA" type="text"/>
        <input id="personal_valueB" type="text"/>
        <button id="personal_submit" class="primary-btn">Click</button>
        <br/>
        Show:
        <p id="personal_valueA_token_id">$personal_valueA_token$</p>
        <p id="personal_valueA_token_id">$personal_valueB_token$</p>
      </html>
    </panel>
  </row>
</form>
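One likely cause, though it is an assumption about how this particular panel behaves: the <html> panel also displays $personal_valueA_token$-style token references, so Simple XML re-renders the panel when the token changes, which recreates the button element and drops the directly bound click handler. A delegated handler bound to the document survives the re-render; a minimal sketch of just the binding part:

// Sketch: delegate the click through the document so the handler still fires
// after the <html> panel (and the #personal_submit button inside it) is re-rendered.
$(document).on('click', '#personal_submit', function (e) {
    e.preventDefault();
    // ... same token-setting loop as in the original handler ...
});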
Hi, there are several reasons that can cause Splunk to roll to a new bucket before its max size is reached. When you are checking how your configuration is applied, you should always use btool instead of reading the .conf files directly; btool tells you how Splunk sees the configuration, since it is usually combined from several files. You both should use splunk btool indexes list --debug lotte to see the actual configuration for index lotte. One reason for small buckets can be sources containing events with timestamps from the past and the future, i.e. timestamps that are not continuously increasing. When I look at those smaller buckets, there seems to be this kind of behavior, based on the epoch times in the bucket names. r. Ismo
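To check the timestamp-spread theory without decoding bucket names by hand, something along these lines could work (a sketch; the index name lotte is taken from the thread, and the field names are the standard dbinspect ones):

| dbinspect index=lotte
| eval span_days = round((endEpoch - startEpoch) / 86400, 1)
| table bucketId state sizeOnDiskMB eventCount startEpoch endEpoch span_days
| sort - span_days

Small buckets that nevertheless cover a very wide span_days are a good hint that the source is delivering out-of-order timestamps.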
Thank you very much for your feedback! I apologize for the anonymized query; I realize some parts were trimmed incorrectly. Regarding Point 3: I aim to have both the main search and the subsearch use the same earliest and latest time fields. The idea is that the tstats command serves as a pre-filter, while the base search is used to retrieve the raw events. The query I wrote generally works as expected, but sometimes it fails to correctly use the specified earliest and latest. For instance, during one test, it returned the correct time range, but when tested an hour later, it didn’t align with the specified time. Interestingly, I noticed that tweaking the search command sometimes resolves this issue and ensures it searches within the correct time range. 
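For reference, the pattern being described usually looks roughly like the sketch below (the index, sourcetype, field names, and time window are all hypothetical, since the original query was anonymised). Keeping the same explicit earliest/latest in both the subsearch and the outer search makes both independent of the time picker, which is often where the "an hour later it no longer matches" behaviour comes from:

index=my_index sourcetype=my_sourcetype earliest=-4h@h latest=@h
    [| tstats count WHERE index=my_index sourcetype=my_sourcetype earliest=-4h@h latest=@h BY host
     | fields host ]
| stats count BY host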
As usual, how you should do it depends on your environment. Personally I prefer a separate LM if you have a distributed environment, and especially if you have several clusters etc. I also avoid using any CM as the LM. You can easily run the LM as a virtual node; it needs almost no resources (2-4 vCPU, 2-4 GB memory, etc.), although it should be bigger if you have a lot of indexers. Just configure it as an individual node and/or a node that sends its internal logs to some indexers/cluster (I prefer this). Usually I do this with conf files, not with those commands. There is no need to configure it as a search head in the cluster, especially if you have the MC on another node where you check license status. If you haven't, and you have forwarded internal logs into the cluster, then just add it as an individual SH installed into the cluster environment. r. Ismo Probably the most important thing is that its [general] stanza's pass4SymmKey is the same as on all the nodes that connect to it (or at least that was required earlier).
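As a rough illustration of the conf-file approach (a sketch, not a verified config; the hostname and key are placeholders), each node that should consume the license points at the LM in server.conf, and the shared key sits in the [general] stanza:

# server.conf on each license peer (IDX, SH, CM, MC, ...)
[general]
# must match the [general] pass4SymmKey on the license manager
pass4SymmKey = <same_key_as_on_the_license_manager>

[license]
# newer Splunk versions; older versions use master_uri instead
manager_uri = https://lm.example.com:8089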
Hi @Lockie, the LM is a Splunk instance and all the Splunk servers that need a license point to it. The servers that need to point to the LM are IDXs, SHs, CM, MC, SHC-D, and HFs; UFs don't need a connection. You can configure this link manually in the GUI, or create an add-on to deploy using the CM for the IDXs, the SHC-D for the SHs, and the DS for the other roles. Ciao. Giuseppe
Thank you for your reply. I understand. I tried to do this today but couldn't find a way. Is there a way to separate the license manager? If the software itself does not support it, I won't bother with it. In addition, please tell me how to separate the MC and how to configure it.
Hi @jmartens, good for you, see you next time! Ciao and happy splunking. Giuseppe P.S.: Karma Points are appreciated
Hi @Lockie, first of all you don't need a dedicated server for the License Manager; you can use the Cluster Manager (better) or the Monitoring Console. Anyway, there's no relation between the cluster roles and the License Manager: you only have to configure the cluster components to use the LM. In other words, on each Search Peer and on the CM you have to configure the License Manager, either manually or by deploying an add-on (on the Search Peers by the CM, and on the CM by the Deployment Server). Ciao. Giuseppe
Thanks @gcusello. I am aware that I need to escape stuff; the problem is I do not see where I might have missed one. I already escaped a lot, at least what was required on regex101. It seems your solution works, so I will continue with that. Thanks!
When I upgraded the AppDynamics Controller from 24.7.3 to 24.10 (on-prem), which garbage collector does it use: CMS or G1GC?
Hi @jmartens, this is a bug that I reported to Splunk Support, but they said it's OK! Anyway, when you need to escape a backslash in Splunk in a regex that runs on regex101, you have to add one or two additional backslashes in Splunk every time you have a backslash. So try:

User\[(?:(?<SignOffDomain>[^\\\]+)(?:\\\))?(?<SignOffUsername>[^\]]+)[^\[]+\[\"(?<SignOffComment>[^\:]+)\:\s+(?<SignOffPrivilege>[^\"]+)

Ciao. Giuseppe
Thank you. Currently, suppose I have set the total index size to 500 GB, with 140 GB actually used, the configured archive period is 200 days, and the hot/warm/cold bucket maximum size is set to auto_high_volume, yet the data has already been retained for 4 years and still has not been archived.
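For reference, settings like the ones described would typically be expressed in indexes.conf roughly as below (a sketch; the index name, archive path, and exact values are assumptions based on the description, not taken from the poster's config):

[my_index]
# roughly 500 GB cap for the whole index
maxTotalDataSizeMB = 512000
# 200 days * 86400 seconds
frozenTimePeriodInSecs = 17280000
# hot bucket maximum size
maxDataSize = auto_high_volume
# where buckets go when they are frozen/archived
coldToFrozenDir = /path/to/archive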
Hello everyone, I have a question for you. In a single-site cluster, how can I configure the license manager to be a separate node (this node will not have any other cluster role except license manager)? I see that cluster-config does not have a corresponding mode: ==> edit cluster-config -mode manager|peer|searchhead -<parameter_name> <parameter_value> If it is the MC, how should I configure it? It would be even better if best practices could be provided.
I have the following regex that I (currently) use at search time (it will be a field extraction once I get it ironed out):

User\[(?:(?<SignOffDomain>[^\\]+)(?:\\))?(?<SignOffUsername>[^\]]+)[^\[]+\[\"(?<SignOffComment>[^\:]+)\:\s+(?<SignOffPrivilege>[^\"]+)

It seems to work OK on regex101: https://regex101.com/r/nGdKxQ/5 but fails when trying to parse in Splunk with the following error:

Error in 'rex' command: Encountered the following error while compiling the regex 'User\[(?:(?<SignOffDomain>[^\]+)(?:\))?(?<SignOffUsername>[^\]]+)[^\[]+\["(?<SignOffComment>[^\:]+)\:\s+(?<SignOffPrivilege>[^\"]+)': Regex: missing closing parenthesis.

Any clue on what I need to escape additionally perhaps? For testing I created the following sample:

| makeresults count=2
| streamstats count
| eval _raw=if((count%2) == 1, "2025-01-20 08:43:11 Local0 Info 08:43:11:347 HAL-TRT-SN1701 DOMAIN\firstname0.lastname0|4832|TXA HIPAA [1m]HIPAALogging: User[DOMAIN\firstname0.lastname0], Comment[\"Successfully authenticated user with privilege: A_Dummy_Privilege\"], PatientId[PatientIdX], PlanUID[PlanLabel:PlabnLabelX,PlanInstanceUID:PlanInstanceUIDX", "2025-01-20 07:54:42 Local0 Info 07:54:41:911 HAL-TRT-SN1701 domain\firstanme2.lastname2|4832|TXA HIPAA [1m]HIPAALogging: User[firstname1.lastname1], Comment[\"Successfully authenticated user with privilege: AnotherPrivilege\"], PatientId[], PlanUID[], Right[True]")
| rex field="_raw" "User\[(?:(?<SignOffDomain>[^\\]+)(?:\\))?(?<SignOffUsername>[^\]]+)[^\[]+\[\"(?<SignOffComment>[^\:]+)\:\s+(?<SignOffPrivilege>[^\"]+)"
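For what it's worth, a common culprit with rex patterns inside double quotes is that SPL consumes one level of backslash escaping before PCRE ever sees the pattern, so a literal backslash often has to be written as four backslashes. A hedged sketch of just the rex line with that one change (not tested against the sample above):

| rex field=_raw "User\[(?:(?<SignOffDomain>[^\\\\]+)(?:\\\\))?(?<SignOffUsername>[^\]]+)[^\[]+\[\"(?<SignOffComment>[^\:]+)\:\s+(?<SignOffPrivilege>[^\"]+)"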
Not exactly that way. You must remember that all time-based calculations are done against the newest event in a bucket! And one bucket could contain events spanning several months or an even longer period (e.g. when there is some reindexing of old data). See more in those links which I posted.
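One way to see whether this is what is holding buckets back (a sketch; the index name is a placeholder) is to compare the age of each bucket's newest event with the configured frozen time period:

| dbinspect index=my_index
| eval newest_event_age_days = round((now() - endEpoch) / 86400, 1)
| table bucketId state startEpoch endEpoch newest_event_age_days
| sort - newest_event_age_days

A bucket only becomes eligible to freeze/archive once newest_event_age_days exceeds the retention period, no matter how old its oldest events are.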
@neerajdhiman Yes, you can use it for free. Download the add-on and install it on a Heavy Forwarder to ingest data. While Splunk Enterprise has a free trial, its trial license typically includes limited ingestion capacity (500 MB/day). The AWS Add-on facilitates data ingestion from AWS services like CloudWatch, CloudTrail, S3, etc., and the volume of ingested data could exceed the free trial limit quickly.
You must also remember that all time-based activities are calculated against the newest event in a bucket. This is usually the reason why you have a lot of old events that should already have been archived by time. More about this in the links I added in another post.