All Posts

How can I mask the verification code using props/transforms? {"body": " Verification Code: 123456",   I want to mask the code using props and transforms with the format below, but I'm not sure how the search SPL regex differs from the regex used in props/transforms.
props.conf
[source::abc]
TRANSFORMS-anonymize = abc-anonymizer
transforms.conf
[abc-anonymizer]
DEST_KEY = _raw
REGEX =
FORMAT = $1######$2
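(For reference, a minimal sketch of what the REGEX line might look like, assuming the raw event really contains "Verification Code: " followed by six digits; the two capture groups feed the existing FORMAT = $1######$2. Adjust to the actual event layout.)
transforms.conf
[abc-anonymizer]
DEST_KEY = _raw
REGEX = (.*Verification Code: )\d{6}(.*)
FORMAT = $1######$2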
This seems to technically work; however, I am left with an unwanted "count" column at the end that I don't know how to remove. As an example of what I'm after, I've included the "Target Output" below.
Actual Output: (screenshot)
Target Output: (screenshot)
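(A generic way to drop an unwanted column, assuming the extra field really is named count; the preceding search is illustrative only:)
... your existing search ...
| fields - count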
Hello, are there any recommendations on installing or configuring the "Add-on for SharePoint API with AWS Integration"? Any help will be highly appreciated.  Add-on for SharePoint API with AWS Integration | Splunkbase
Adding to @richgalloway 's answer - every cluster has exactly one active CM (even a multisite cluster). I can never recall the exact numbers, but it scales to a range of millions of buckets in your cluster (combined across all your indexes). The main question is why you are asking this particular thing. What issue are you trying to resolve?
1. Did you check splunk list monitor and splunk list inputstatus? (Example invocations below.)
2. This might not be related, but the batch input does not have a crcSalt parameter (it makes no sense in the batch input context at all).
3. OK, so you have two separate file inputs covering the same path? That might be the problem.
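(For point 1, the checks look roughly like this from the forwarder's CLI; on Windows, substitute your install path and splunk.exe:)
$SPLUNK_HOME/bin/splunk list monitor
$SPLUNK_HOME/bin/splunk list inputstatus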
That is indeed interesting: the device supposedly keeps track of the timezone, but in the end it sends the timestamp in local time while explicitly saying it's UTC. That's not even a mistake, it's almost a crime. What ingenious piece of equipment is that, if you can share this with us?
Use the filter token to classify your data (set a synthetic field to either 1 or 0, or true/false, green/red or whatever you want) and then do a "where" command depending on the option token - match either the 0s or the 1s of your synthetic field. A sketch of the idea is below.
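(A minimal sketch, assuming $user_option_tk$ is set up to emit 1 when matches should be excluded and 0 when they should be kept; the index and the field name matches_filter are made up for illustration:)
index=your_index
| eval matches_filter=if(like(error_string, "%$filter_tk$%"), 1, 0)
| where matches_filter != $user_option_tk$
(With the token encoded as 0/1, the single where clause covers both the LIKE and NOT LIKE cases.)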
Is there a way of creating a search where we can have both LIKE and NOT LIKE, based on a user-selected option? i.e.
if $user_option_tk$ == True:
        | where NOT (error_string LIKE "%$filter_tk$%")
else:
        | where error_string LIKE "%$filter_tk$%"
I use the move_policy. I have tried the following, and it acts the same way.
To monitor for log files I have this in inputs.conf:
[monitor://C:\Oracle\config\domains\csel\servers\...\logs\*.logs]
I have tried both of the following to batch the archived files.
1st try:
[batch://C:\Oracle\config\domains\csel\servers\...\DefaultAuditRecorder\[0-9]*.log]
move_policy = sinkhole
crcSalt = <SOURCE>
2nd try:
[batch://C:\Oracle\config\domains\csel\servers\...\]
whitelist = /DefaultAuditRecorder\.[0-9]+\.log$
move_policy = sinkhole
crcSalt = <SOURCE>
I even tried to blacklist the monitor stanza for the files I whitelist in the batch:
[monitor://C:\Oracle\config\domains\csel\servers\...\]
blacklist = /DefaultAuditRecorder\.[0-9]+\.log$
Splunk still seems to try and monitor these files, and not batch them.
Give these settings a go:
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%dT%H:%M:%S%:z
MAX_TIMESTAMP_LOOKAHEAD = 30
TRUNCATE = 10000
EVENT_BREAKER_ENABLE = true
EVENT_BREAKER = ([\r\n]+)
Every indexer cluster must have at least one Cluster Manager (CM).  You can opt to have one or more redundant CMs for availability.  Note that this is optional as the indexer cluster will continue to function normally if the CM is unavailable.  CMs do not scale based on the number of indexers in the cluster. Configuring redundant CMs is not trivial.  See https://docs.splunk.com/Documentation/Splunk/9.1.2/Indexer/CMredundancy for more information.
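(Purely from memory, a rough sketch of what the linked CMredundancy setup looks like in server.conf; host names and stanza names are made up, and the exact setting names should be verified against the documentation:)
On each cluster manager:
[clustering]
mode = manager
manager_switchover_mode = auto
On each peer node and search head:
[clustering]
manager_uri = clustermanager:cm1,clustermanager:cm2
[clustermanager:cm1]
manager_uri = https://cm1.example.com:8089
[clustermanager:cm2]
manager_uri = https://cm2.example.com:8089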
Dynamic notifications based on severity look for severity in the root of the payload. The PagerDuty add-on inserts the custom_details JSON object into the payload, so it will not get recognized. However, you can create an event orchestration that looks for severity in the custom_details object and sets the severity based on the content of the severity field.
{
  "client": "Splunk",
  "client_url": "<<splunkurl>>",
  "contexts": null,
  "description": "<<incident_descr>>",
  "event_type": "trigger",
  "incident_key": "<<incident_key>>",
  "service_key": "<<service_key>>",
  "details": {
    "LastSuccessfulCall": "Friday Dec 08, 2023 04:41:58PM",
    "active": "true",
    "custom_details": {
      "severity": "info"
    },
    "field1": "value1",
    "field2": "value2"
  }
}
We are in the process of virtualizing our environments, and we are now facing the question of whether to use multiple cluster masters or to have fewer cluster masters that each serve more indexers. However, we don't know how to go about it. Hence the question: what are the scalability rules for a cluster master?
So, interestingly enough, in the router's CLI:
xxxx@router:~ # date
Mon Dec 11 15:55:57 EST 2023
It is also showing correctly in the GUI. I think a props.conf might be the route, as it doesn't know how to translate it? Would anyone be able to help craft an example stanza for it? I just don't want to mess up the logging further.
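(Purely as a hypothetical starting point, and only if the events really are local time mislabelled as UTC as discussed elsewhere in this thread: one workaround is to parse the timestamp without the zone designator and pin the zone yourself. The sourcetype name and zone below are made up for illustration; fixing the router's own clock/zone configuration is the cleaner solution.)
props.conf
[router_syslog]
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%dT%H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 30
TZ = America/New_York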
Read the inputs.conf.spec carefully:
* This stanza must include the 'move_policy = sinkhole' setting.
* This input reads and indexes the files, then DELETES THEM IMMEDIATELY.
1. Generally, the JREs should be interchangeable (unless you're using some extensions specific to a given JRE and not really part of the JRE standard). So while it's technically possible that the Azul JDK might work as well, probably nobody tested the add-on with it and it's not on the recommended list, so if something breaks, you're on your own.
2. I'm not familiar with this particular add-on, but you typically only need the JRE on the component where you run your modular inputs. Just as with DBConnect, I'd expect it to be just the HF.
Well, yes, I tried to install it on my Splunk and I get the same error. When I searched for this error, I found this: https://community.splunk.com/t5/All-Apps-and-Add-ons/Error-when-installing-Python-for-Scientific-Computing/m-p/638569
Could you please try this from the CLI:
cd $SPLUNK_HOME/etc/apps
tar xf /tmp/python-for-scientific-computing-for-windows-64-bit_410.tgz
OK. You're overcomplicating the issue. If you're gonna do stats and throw all the other stuff away, there's no point in doing streamstats, which is a much "heavier" command.
So what I'd do would be simply
index=foo
| stats count as domaincount by Domain User Workstation
Now you'll get your count and you can do with it whatever you please - sorting, filtering, aggregating. You name it. For example, if you want just the three most often used domains per each user/workstation, just
| sort User Workstation - domaincount
| streamstats count as rank by User Workstation
| where rank<=3
(writing from the top of my head so the sorting might be a bit off).
While Splunk can sometimes guess the proper settings for the sourcetype (and sometimes - as shown in this case - does it quite well), as @richgalloway mentioned - it's good to have the so-called "great eight" defined for each sourcetype to make it work consistently and efficiently. Having said that - in this particular case your main issue is the wrong time in your events! You have your router's time set to a wrong value. Configure it properly. Whether it's reported as UTC or your local timezone is secondary, as long as the proper timezone information is supplied (and in your case it is).
Every sourcetype should have a stanza in props.conf.  Create a props.conf file if there isn't a local copy already. The stanza should contain these settings, at a minimum (an illustrative stanza is sketched below):
SHOULD_LINEMERGE
LINE_BREAKER
TIME_PREFIX
TIME_FORMAT
MAX_TIMESTAMP_LOOKAHEAD
TRUNCATE
EVENT_BREAKER_ENABLE
EVENT_BREAKER
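(A hypothetical example only - the sourcetype name and values below assume line-oriented events with a leading ISO 8601 timestamp, matching the settings suggested earlier in the thread; adjust to your data:)
props.conf
[my_router_sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%dT%H:%M:%S%:z
MAX_TIMESTAMP_LOOKAHEAD = 30
TRUNCATE = 10000
EVENT_BREAKER_ENABLE = true
EVENT_BREAKER = ([\r\n]+)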