All Posts

That is indeed interesting, because supposedly keeping track of the timezone but in the end sending the timestamp in local time while explicitly labeling it as UTC is not even a mistake. It's almost a crime. What ingenious piece of equipment is that, if you can share it with us?
Use the filter token to classify your data (set a synthetic field to either 1 or 0, or true/false, green/red, or whatever you want) and then do a "where" command depending on the option token - match either the 0s or the 1s of your synthetic field.
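A minimal sketch of that idea in SPL (the synthetic field name `is_match` is made up for illustration; `$filter_tk$` and `$user_option_tk$` are the tokens from the question below):

```
... your base search ...
| eval is_match=if(like(error_string, "%$filter_tk$%"), 1, 0)
| where is_match = $user_option_tk$
```

With `$user_option_tk$` set to 1 this keeps the LIKE matches, and with 0 it keeps the NOT LIKE ones, so a single search covers both cases.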
Is there a way of creating a search where we can have both LIKE and NOT LIKE, based on a user-selected option?

i.e.

if $user_option_tk$ == True:
        | where NOT (error_string LIKE "%$filter_tk$%")
else:
        | where error_string LIKE "%$filter_tk$%"
I use the move_policy. I have tried the following, and it acts the same way.

To monitor for log files I have this in inputs.conf:

[monitor://C:\Oracle\config\domains\csel\servers\...\logs\*.logs]

I have tried both of the following to batch the archived files.

1st try:

[batch://C:\Oracle\config\domains\csel\servers\...\DefaultAuditRecorder\[0-9]*.log]
move_policy = sinkhole
crcSalt = <SOURCE>

2nd try:

[batch://C:\Oracle\config\domains\csel\servers\...\]
whitelist = /DefaultAuditRecorder\.[0-9]+\.log$
move_policy = sinkhole
crcSalt = <SOURCE>

I even tried to blacklist, in the monitor stanza, the files I whitelist in the batch:

[monitor://C:\Oracle\config\domains\csel\servers\...\]
blacklist = /DefaultAuditRecorder\.[0-9]+\.log$

Splunk still seems to try and monitor these files, and not batch them.
Give these settings a go:

SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%dT%H:%M:%S%:z
MAX_TIMESTAMP_LOOKAHEAD = 30
TRUNCATE = 10000
EVENT_BREAKER_ENABLE = true
EVENT_BREAKER = ([\r\n]+)

(Note: %D is month/day/year in strptime notation; for seconds you want %S, as above.)
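Those settings assume each event begins with an ISO-8601 timestamp carrying a numeric UTC offset, which %Y-%m-%dT%H:%M:%S%:z matches within the first 30 characters. For example (the message text after the timestamp is made up):

```
2023-12-11T15:55:57-05:00 interface ge-0/0/1 link state changed to up
```

If your events use a different timestamp shape, adjust TIME_FORMAT and MAX_TIMESTAMP_LOOKAHEAD accordingly.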
Every indexer cluster must have at least one Cluster Manager (CM).  You can opt to have one or more redundant CMs for availability.  Note that redundancy is optional, as the indexer cluster will continue to function (albeit without bucket-fixing and other CM-coordinated activities) while the CM is temporarily unavailable.  CMs do not need to scale with the number of indexers in the cluster.

Configuring redundant CMs is not trivial.  See https://docs.splunk.com/Documentation/Splunk/9.1.2/Indexer/CMredundancy for more information.
Dynamic notifications based on severity look for severity in the root of the payload. The PagerDuty add-on inserts the custom_details JSON object deeper into the payload, so it will not get recognized.  However, you can create an event orchestration that looks for severity in the custom_details object and sets the incident severity based on the content of that field.

{
  "client": "Splunk",
  "client_url": "<<splunkurl>>",
  "contexts": null,
  "description": "<<incident_descr>>",
  "event_type": "trigger",
  "incident_key": "<<incident_key>>",
  "service_key": "<<service_key>>",
  "details": {
    "LastSuccessfulCall": "Friday Dec 08, 2023 04:41:58PM",
    "active": "true",
    "custom_details": {
      "severity": "info"
    },
    "field1": "value1",
    "field2": "value2"
  }
}
We are in the process of virtualizing our environments, and we are facing the question of whether to use multiple cluster masters or to have fewer cluster masters that each serve more indexers. However, we don't know how to decide. Hence the question: what are the scalability rules for a cluster master?
So, interestingly enough, in the router's CLI:

xxxx@router:~ # date
Mon Dec 11 15:55:57 EST 2023

The GUI is also showing it correctly. I think a props.conf might be the route, as Splunk doesn't know how to translate it? Would anyone be able to help craft an example stanza for it? I just don't want to mess up the logging further.
Read the inputs.conf.spec carefully:

* This stanza must include the 'move_policy = sinkhole' setting.
* This input reads and indexes the files, then DELETES THEM IMMEDIATELY.
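For reference, a minimal sinkhole batch stanza looks like this (the path and sourcetype are illustrative only; remember the source files are deleted as soon as they are indexed, so never point this at originals you want to keep):

```
[batch://C:\archive\audit]
move_policy = sinkhole
sourcetype = audit_archive
disabled = false
```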
1. Generally, the JREs should be interchangeable (unless you're using some extensions specific to a given JRE that are not really part of the JRE standard). So while technically the Azul JDK might well work too, probably nobody tested the add-on with it and it's not on the recommended list, so if something breaks, you're on your own.

2. I'm not familiar with this particular add-on, but you typically only need a JRE on the component where you run your modular inputs. Just as with DBConnect, I'd expect it to be just the HF.
Well, yes, I tried to install it on my Splunk and I get the same error. When I searched for this error, I found this: https://community.splunk.com/t5/All-Apps-and-Add-ons/Error-when-installing-Python-for-Scientific-Computing/m-p/638569

Could you please try this from the CLI:

cd $SPLUNK_HOME/etc/apps
tar xf /tmp/python-for-scientific-computing-for-windows-64-bit_410.tgz
OK. You're overcomplicating the issue. If you're going to do stats and throw all the other stuff away, there's no point in doing streamstats, which is a much "heavier" command.

So what I'd do would be a simple

index=foo
| stats count as domaincount by Domain User Workstation

Now you'll get your count and you can do with it whatever you please - sorting, filtering, aggregating. You name it.

For example, if you want just the three most often used domains per each user/workstation:

| sort User Workstation - domaincount
| streamstats count as rank by User Workstation
| where rank<=3

(writing from the top of my head, so the details might be a bit off).
While Splunk can sometimes guess the proper settings for the sourcetype (and sometimes - as shown in this case - does it quite well), as @richgalloway mentioned, it's good to have the so-called "great eight" defined for each sourcetype to make it work consistently and efficiently.

Having said that - in this particular case your main issue is the wrong time in your events! You have your router's time set to a wrong value. Configure it properly. Whether it's reported as UTC or your local timezone is secondary, as long as the proper timezone information is supplied (and in your case it is).
Every sourcetype should have a stanza in props.conf.  Create a props.conf file if there isn't a local copy already. The stanza should contain these settings, at a minimum:

SHOULD_LINEMERGE
LINE_BREAKER
TIME_PREFIX
TIME_FORMAT
MAX_TIMESTAMP_LOOKAHEAD
TRUNCATE
EVENT_BREAKER_ENABLE
EVENT_BREAKER
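For illustration, a props.conf stanza with all eight set might look like this (the sourcetype name and values are made up; adjust the timestamp settings to match your actual data):

```
[my:router:logs]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%dT%H:%M:%S%:z
MAX_TIMESTAMP_LOOKAHEAD = 30
TRUNCATE = 10000
EVENT_BREAKER_ENABLE = true
EVENT_BREAKER = ([\r\n]+)
```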
Hello Splunkers, is there anyone on the community that would be willing to talk with me about Splunk use cases for measuring, analyzing and communicating large amounts of data for carbon impact similar to the SAP/NHL Venue Metrics Platform? Mick11
So this is a new install and a new source.  On the Splunk server there is no props.conf file. I assume I have to create it?
At first glance it looks relatively OK. You have your inputs matching your outputs. Check your splunkd.log on the sending UF and the receiving HF. There should be hints as to the reason for lack of connectivity. If nothing else helps - try to tcpdump the traffic and see what's going on there. EDIT: OK, your initial post says that you get "Connection reset by peer" but it's a bit unclear which side this is from.
Please share the props.conf stanza for that sourcetype.  It looks like the TIME_FORMAT string may be incorrect.
Not sure if this is all of them, but I think it should cover any command defined in a searchbnf.conf file.

| rest splunk_server=local /servicesNS/-/-/configs/conf-searchbnf
| fields + title, shortdesc, description, eai:acl.app, eai:acl.sharing, usage
| search title="*-command"
| eval command=replace(title, "-command$", "")
| fields + command, shortdesc, description