Hi @jmartens , good for you, see you next time! Ciao and happy splunking. Giuseppe P.S.: Karma points are appreciated
Hi @Lockie , first of all, you don't need a dedicated server for the License Manager: you can use the Cluster Manager (better) or the Monitoring Console. Anyway, there's no relation between the cluster roles and the License Manager: you only have to configure the cluster components to use the LM. In other words, on each Search Peer and on the CM, you have to configure the License Manager, either manually or by deploying an Add-On (on the Search Peers by the CM, and on the CM by the Deployment Server). Ciao. Giuseppe
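As a sketch of what "configure the cluster components to use the LM" means in practice, this is the one-stanza change in server.conf on each member (the hostname and port here are placeholders, not from the thread; on older Splunk releases the setting is spelled master_uri instead of manager_uri):

```ini
# server.conf on each Search Peer and on the CM
# (lm.example.com is a hypothetical License Manager host)
[license]
manager_uri = https://lm.example.com:8089
```

A restart of the instance is needed after the change; deploying this stanza inside a small add-on, as described above, keeps it consistent across all members.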
Thanks @gcusello. I am aware that I need to escape stuff; the problem is I do not see where I might have missed one. I have already escaped a lot, at least what was required on regex101. It seems your solution works, so I will continue with that. Thanks!
When I upgraded the on-prem AppDynamics Controller from 24.7.3 to 24.10, which garbage collector does it use: CMS or G1GC?
Hi @jmartens , this is a bug that I reported to Splunk Support, but they said that it's OK! Anyway, when you need to escape a backslash in Splunk in a regex that runs on regex101, you have to add one or two additional backslashes in Splunk every time you have a backslash. So try <pre>User\[(?:(?<SignOffDomain>[^\\\]+)(?:\\\))?(?<SignOffUsername>[^\]]+)[^\[]+\[\"(?<SignOffComment>[^\:]+)\:\s+(?<SignOffPrivilege>[^\"]+)</pre> Ciao. Giuseppe
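To see why the extra backslashes are needed, here is a minimal Python sketch of the same two-layer quoting problem (the sample event and group names are modeled on the question; Python's re module is standing in for Splunk's PCRE engine, and the pattern is a simplified fragment of the full regex):

```python
import re

# Hypothetical sample, modeled on the event in the question.
sample = r'User[DOMAIN\first.last], Comment["Successfully authenticated: SomePrivilege"]'

# What the regex ENGINE must see -- this is exactly what you type on
# regex101 (a raw string, so backslashes pass through untouched):
engine_level = r'User\[(?:(?P<SignOffDomain>[^\\]+)\\)?(?P<SignOffUsername>[^\]]+)'

m = re.search(engine_level, sample)
print(m.group('SignOffDomain'), m.group('SignOffUsername'))  # → DOMAIN first.last

# A quoting layer (SPL's double-quoted rex string, or a non-raw Python
# string) strips one level of backslashes BEFORE the engine parses the
# pattern, so every backslash meant for the engine must be doubled again:
quoted_level = 'User\\[(?:(?P<SignOffDomain>[^\\\\]+)\\\\)?(?P<SignOffUsername>[^\\]]+)'
assert quoted_level == engine_level  # both layers collapse to the same pattern
```

The Splunk error in the question ("missing closing parenthesis") is exactly what happens when one stripping layer eats a backslash: `[^\\]+` becomes `[^\]+`, the `]` is swallowed into the character class, and a later `)` goes missing from the engine's point of view.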
Thank you. Currently, suppose I set the total index size to 500 GB, with 140 GB actually used; the configured retention period is 200 days, and the maximum hot/warm/cold bucket size is set to auto-highvolume. Yet the data has already been retained for 4 years and still has not been archived.
Hello everyone, I have a question for you. In a single-site cluster, how can I configure the License Manager as a separate node (this node will have no cluster roles other than License Manager)? I see that cluster-config does not have a corresponding mode: ==>edit cluster-config -mode manager|peer|searchhead -<parameter_name> <parameter_value> If it should be the MC, how should I configure it? It would be even better if best practices could be provided.
I have the following regex that I (currently) use at search time (it will be a field extraction once I get it ironed out): User\[(?:(?<SignOffDomain>[^\\]+)(?:\\))?(?<SignOffUsername>[^\]]+)[^\[]+\[\"(?<SignOffComment>[^\:]+)\:\s+(?<SignOffPrivilege>[^\"]+) It seems to work OK on regex101: https://regex101.com/r/nGdKxQ/5 but fails when trying to parse in Splunk with the following error: Error in 'rex' command: Encountered the following error while compiling the regex 'User\[(?:(?<SignOffDomain>[^\]+)(?:\))?(?<SignOffUsername>[^\]]+)[^\[]+\["(?<SignOffComment>[^\:]+)\:\s+(?<SignOffPrivilege>[^\"]+)': Regex: missing closing parenthesis. Any clue on what I need to escape additionally perhaps? For testing I created the following sample: | makeresults count=2 | streamstats count | eval _raw=if((count%2) == 1, "2025-01-20 08:43:11 Local0 Info 08:43:11:347 HAL-TRT-SN1701 DOMAIN\firstname0.lastname0|4832|TXA HIPAA [1m]HIPAALogging: User[DOMAIN\firstname0.lastname0], Comment[\"Successfully authenticated user with privilege: A_Dummy_Privilege\"], PatientId[PatientIdX], PlanUID[PlanLabel:PlabnLabelX,PlanInstanceUID:PlanInstanceUIDX", "2025-01-20 07:54:42 Local0 Info 07:54:41:911 HAL-TRT-SN1701 domain\firstanme2.lastname2|4832|TXA HIPAA [1m]HIPAALogging: User[firstname1.lastname1], Comment[\"Successfully authenticated user with privilege: AnotherPrivilege\"], PatientId[], PlanUID[], Right[True]") | rex field="_raw" "User\[(?:(?<SignOffDomain>[^\\]+)(?:\\))?(?<SignOffUsername>[^\]]+)[^\[]+\[\"(?<SignOffComment>[^\:]+)\:\s+(?<SignOffPrivilege>[^\"]+)"  
Not exactly that way. You must remember that all time-based calculations are done based on the newest event in the bucket! And you could have events spanning e.g. several months or an even longer period in one bucket (e.g. when there is some reindexing of old data). See more in those links which I posted.
@neerajdhiman Yes, you can use it for free. Download the add-on and install it on a Heavy Forwarder to ingest data. While Splunk Enterprise has a free trial, its trial license typically includes limited ingestion capacity (500 MB/day). The AWS Add-on facilitates data ingestion from AWS services like CloudWatch, CloudTrail, S3, etc., and the volume of ingested data could quickly exceed the free trial limit.
You must also remember that all time-based activities are calculated based on the newest event in the bucket. This is usually the reason why you have a lot of old events which should already have been archived by time. More about this in those links which I added in another post.
Can we use the Splunk Add-on for AWS for free, or does it require a license to be used with the Splunk Enterprise free trial?
Thanks for the explanation, it was educational. I have corrected the plus operators where appropriate. On the other hand, I have now triple-checked, and indeed, multiple leading whitespaces are ignored in the FORMAT string. But yes, it would seem that Splunk, or whoever wrote the SC4S config, assumed that they would be honored.
I have the same issue, have you fixed it?
I solved the problem using rex: (\"|)Rule:\s*(?P<Rule>.*?)(?:(\"))?\
Hi, here are a couple of links to old answers where we discussed this:
https://community.splunk.com/t5/Deployment-Architecture/Right-number-and-size-of-hot-warm-cold-buckets/m-p/681358
https://community.splunk.com/t5/Deployment-Architecture/Hot-Warm-Cold-bucket-sizing-How-do-I-set-up-my-index-conf-with/m-p/634696
https://community.splunk.com/t5/Deployment-Architecture/Index-rolling-off-data-before-retention-age/m-p/684799
https://community.splunk.com/t5/Splunk-Enterprise/Why-do-we-have-warm-buckets/m-p/700835
Some of those are a little bit outside the direct scope of your question, but they still give you a better understanding of how this works. r. Ismo
My friend and I have the same indexes.conf, but why are the bucket sizes being created differently? Mine are around 1 MB, but my friend's are created in 5.x MB units.

indexes.conf:

[volume:hot]
path = /data/HOT
maxVolumeDataSizeMB = 100

[volume:cold]
path = /data/COLD
maxVolumeDataSizeMB = 100

[lotte]
homePath = volume:hot/lotte/db
coldPath = volume:cold/lotte/colddb
maxDataSize = 1
maxTotalDataSizeMB = 200
thawedPath = $SPLUNK_DB/lotte/thaweddb
One comment: you shouldn't ever use # at the end of an attribute line in any *.conf file. Splunk cannot handle those correctly! All comments must be on their own lines!
Thank you for your reply. The core requirement I want to achieve: for the same user or department accessing the same account within 7 consecutive days, each day with at least one visit counts as 1. Finally, filter out those that have been visited on more than 2 days. For example, the first query window starts on January 1, 2019 at 00:00:00 and ends on January 8 at 00:00:00; the second starts on January 2, 2019 at 00:00:00 and ends on January 9 at 00:00:00; and so on. The SPL submitted above is based on the core calculation in the map: the earliest time calculated in the map follows this 7-day logic to generate the periodic data.
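The sliding 7-day windows described above can be sketched outside SPL. Here is a minimal Python illustration of the counting logic (the event tuples, names, and thresholds are hypothetical, not from the actual data):

```python
from datetime import date, timedelta

# Hypothetical access events: (user, account, day of the visit).
events = [
    ("alice", "acct1", date(2019, 1, 1)),
    ("alice", "acct1", date(2019, 1, 1)),  # same-day repeat counts once
    ("alice", "acct1", date(2019, 1, 3)),
    ("alice", "acct1", date(2019, 1, 5)),
    ("bob",   "acct2", date(2019, 1, 2)),
]

def busy_pairs(events, start, end, window=7, min_days=2):
    """Slide a window-day span forward one day at a time; inside each
    window count the distinct days on which each (user, account) pair
    appears, and report pairs seen on more than min_days days."""
    hits = []
    day = start
    while day + timedelta(days=window) <= end:
        lo, hi = day, day + timedelta(days=window)  # [lo, hi) half-open
        per_pair = {}
        for user, acct, d in events:
            if lo <= d < hi:
                per_pair.setdefault((user, acct), set()).add(d)
        for pair, days in per_pair.items():
            if len(days) > min_days:
                hits.append((lo, pair, len(days)))
        day += timedelta(days=1)
    return hits

# Window 2019-01-01 00:00 .. 2019-01-08 00:00 sees alice/acct1 on 3 distinct days.
print(busy_pairs(events, date(2019, 1, 1), date(2019, 1, 9)))
```

The half-open [lo, hi) window matches the "00:00:00 to 00:00:00 seven days later" boundaries in the example; each subsequent window shifts by exactly one day, so the same visit can contribute to up to seven windows.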
Can you explain what you are trying to do in plain English, not with SPL?