All Posts


You said S3, but is this AWS S3 or some other S3-compatible storage from another vendor?
We have S3. Currently, we are using an NFS bridge to mount it on the server and send the cold buckets there. The plan is to change to SmartStore.
Hi @Mallika1217, as @inventsekar also said, you don't need a LinkedIn account to access Splunk downloads; you only have to register an account on the Splunk site, and then you can download all the Splunk updates and apps (except Premium Apps) you want (Get a Splunk.com Account | Splunk). Ciao. Giuseppe
Hi @fabiyogo, why should you use different sourcetypes for different severity levels? Splunk parsing rules are usually tied to the sourcetype, which means that with three sourcetypes you have to create multiple sets of parsing rules for the same data. Instead, you could use a single sourcetype (so you only create one set of parsing rules) and tag events of different severities using eventtypes and tags. Ciao. Giuseppe
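As a minimal sketch of that idea (the sourcetype my:app:log and the extracted field severity are hypothetical names, not anything from this thread):

eventtypes.conf
# one eventtype per severity level, all against the same sourcetype
[my_app_critical]
search = sourcetype=my:app:log severity=critical

[my_app_error]
search = sourcetype=my:app:log severity=error

tags.conf
# attach a tag to each eventtype so searches can use tag=critical / tag=error
[eventtype=my_app_critical]
critical = enabled

[eventtype=my_app_error]
error = enabled

Searches and dashboards can then filter on tag=critical or tag=error while keeping one set of parsing rules for the sourcetype.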
Hi All, we have created a table viz containing two levels of dropdowns which use the same index and sourcetype. While implementing the Row Expansion JScript in the dashboard, we get results at both levels; however, the second-level expansion exits abruptly. We also noticed that pagination only works in the first-level table (inner child table row expansion) for the first row we select, and only once. If we select a second row/entry in the same parent table, the inner child table's pagination freezes. We need to reload the dashboard every time to fix this.
Wait a second. "We have our virtual environment and S3 as well" - does that mean you're using SmartStore, or is this S3 unrelated to Splunk?
Why do you want to put those HFs between the sources and the indexers? Usually it's better without them. Almost the only reason you need them is a security policy that requires isolated security zones, where you must use intermediate HFs as gateways/proxies between those zones. Or is it that you currently have some modular inputs or similar on this standalone instance? In that case your plan is correct: set up the needed number of HFs, but use them only to manage those inputs. Please remember that almost all inputs are not HA aware, and you cannot run them in parallel on several HFs at the same time. Are those buckets just frozen storage that you thaw into use when needed, or are they already used as SmartStore storage? If I understand correctly, the first option is currently in use? If so, then just keep them as they are or move them to some other storage. If I recall correctly, you cannot restore (thaw) them into a SmartStore-enabled clustered index. Anyhow, since those are standalone buckets, I'd propose using an individual all-in-one box or indexer to restore them if/when needed; the rest of the time that box can be down. And with SmartStore, especially on-prem, you must ensure and test that you have enough throughput between the nodes and the S3 storage!
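For reference, a minimal SmartStore sketch in indexes.conf; the volume name, bucket name, endpoint and index name below are placeholders for this illustration, not values from this thread:

[volume:s3_remote]
storageType = remote
path = s3://my-splunk-s3-bucket
remote.s3.endpoint = https://s3.example.internal

[my_index]
homePath = $SPLUNK_DB/my_index/db
coldPath = $SPLUNK_DB/my_index/colddb
thawedPath = $SPLUNK_DB/my_index/thaweddb
# point the index at the remote volume; buckets are uploaded to S3 and cached locally
remotePath = volume:s3_remote/$_index_name

This is exactly the path where throughput to the S3 endpoint matters, since warm buckets are fetched back into the local cache on search.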
Hello, I am looking to configure a POST request using a webhook as an alert action, but I can't see any authentication option. How do I add authentication to a webhook?
There are two possible approaches:
1. Migrate the existing installation to a new OS and then "upgrade" it to a cluster.
2. Build the cluster first, adding new components with the new OS.
Both have their pros and cons. Theoretically, Splunk advises that all components run "the same OS and version", whatever that means. Practically, of course, it's impossible to keep that requirement throughout the whole system lifecycle, if only because mid-upgrade some parts of your cluster are on an earlier version and some on a later one. Also, since Splunk doesn't really rely on many components of the OS itself, it shouldn't matter that much as long as you're in the supported OS range (but yes, it can cause issues with support involvement should you need their help). OTOH, if you try to fiddle with your only existing installation (as in "migrate first, clusterize later"), you of course take on additional risk from manipulating your only instance. If you feel confident in your backups, that might be better from the supportability point of view, but it may involve longer downtimes.
Here is one old post discussing this issue: https://community.splunk.com/t5/Splunk-Search/How-to-find-computers-which-stopped-sending-logs/m-p/694544. It contains one example and several links to other resources and apps for handling this.
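As a minimal sketch of the general idea (the index filter and the 24-hour threshold are placeholders; adjust to your environment):

| tstats latest(_time) as lastSeen where index=* by host
| eval hoursSinceLast = round((now() - lastSeen) / 3600, 1)
| where hoursSinceLast > 24
| sort - hoursSinceLast

This lists hosts whose most recent event is older than the threshold; the linked post covers more robust variants, such as comparing against a lookup of expected hosts.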
Good morning ITWhisperer, Many thanks for the suggestion. This might actually work. I could first evaluate the number of fields and then use mvindex. Will try that. Again, many thanks. Kind Regards, Mike.
Field names are case sensitive - try using productId rather than productid
Hey guys, I have an input that is monitoring a log from syslog. The file contains data of multiple severities, which is bad, but I was thinking I could use a transform to set the sourcetype in props so that I could format the data. So I did this in inputs.conf:

[udp://x.x.x.x:5514]
index=cisco_asa
sourcetype=cisco_firewall
disabled=false

and these logs come from the Cisco ASA:

Sep 20 15:36:41 10.10.108.122 %ASA-4-106023: Deny tcp src inside:x.x.x.x/xxxx dst outside:x.x.x.x/xxxx by access-group "Inside_access_in" [0x51fd3ce2, 0x0]
Sep 20 15:36:37 10.10.108.122 %ASA-5-746015: user-identity: [FQDN] go.microsoft.com resolved x.x.x.x
Sep 20 15:36:37 10.10.108.122 %ASA-6-302021: Teardown ICMP connection for faddr x.x.x.x/x gaddr x.x.x.x/x laddr x.x.x.x/x type 8 code 0

Then I created a transforms.conf:

[set_log_type_critical]
source_key = _raw
regex = .*%ASA-4
dest_key=MetaData:Sourcetype
format=sourcetype::cisco:firewall:alert

[set_log_type_error]
source_key = _raw
regex = .*%ASA-5
dest_key=MetaData:Sourcetype
format=sourcetype::cisco:firewall:critical

[set_log_type_warnig]
source_key = _raw
regex = .*%ASA-6
dest_key=MetaData:Sourcetype
format=sourcetype::cisco:firewall:error

I also have a props.conf that looks like:

[cisco:firewall]
TRANSFORMS-setlogtype_alert=set_log_tyoe_critical
TRANSFORMS-setlogtype_critical=set_log_tyoe_error
TRANSFORMS-setlogtype_error=set_log_tyoe_warning

My question is this: after I configured all that, the sourcetype separation still isn't happening. Do the transforms and props look correct? I'm testing locally, so I can break things all day long. Thanks for the assistance.
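For comparison, the documented index-time sourcetype override pattern uses uppercase attribute names (SOURCE_KEY, REGEX, DEST_KEY, FORMAT) and a props.conf stanza that matches the sourcetype assigned by the input (cisco_firewall in the inputs.conf above); the transform name and target sourcetype below are only illustrative:

transforms.conf
[set_asa_sourcetype_alert]
SOURCE_KEY = _raw
REGEX = %ASA-4-
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::cisco:firewall:alert

props.conf
# stanza matches the sourcetype set in inputs.conf, not the rewritten one
[cisco_firewall]
TRANSFORMS-set_asa_sourcetype = set_asa_sourcetype_alert

These index-time transforms also have to live on the first full Splunk instance that parses the data (indexer or heavy forwarder), not on a universal forwarder.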
mvcount will give you the number of values in a multivalue field - does that help?
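For example, something along these lines, with myfield standing in for your multivalue field:

| eval n = mvcount(myfield)
| eval last_value = mvindex(myfield, n - 1)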
Yes, I also agree with @PickleRick, but sadly I need to cook with what I currently have... We have an on-prem standalone. The OS must be replaced (CentOS 7), and even the hardware warranty will expire at the end of this year. We have our virtual environment and S3 as well. (I have system architect colleagues, but they are not "Splunk-related" ones.) I have similar plans to what you describe. There is only one major difference: I plan to set up 2-3 heavy forwarders and migrate the current inputs. I can do this one by one and quickly, without a huge outage. I will set up the new deployment in parallel, and when everything looks okay, I will redirect the HFs to the new deployment. Only the cold buckets are "problematic" now. But we can still keep the old environment without new input and search historical data there if needed; once it expires, we stop the standalone... Thank you for the insights!
Thank you!
Do you have any code example based on your explanation? That would really help me.
Hi @Mallika1217, LinkedIn profile? I'm not able to understand your question. Please share some more details. Do you want to get notified about a particular Splunk app/add-on?
Hi @mayurr98,
Is it ingesting exactly 10,000 events per hour, or is that figure rounded?
The query is set to run four times per hour. Does it return 2,500 events per run, or does the first run return 10,000 and the rest 0?
Are there any error messages in the internal logs? index=_internal sourcetype=dbx_*
Also, how much more than 10,000 events are we talking about here?
There could be a few different causes depending on the answers to the above questions. First, check that there are no other stray db_inputs.conf files overriding your settings above; perhaps use btool to check this. Next, as DB Connect v3 and above uses HEC under the hood, use btool to check limits.conf to see whether the max_content_length setting is too restrictive. Depending on what you find in the logs, you might also consider reducing the max_rows and/or fetch_size settings and/or decreasing the interval. Counterintuitive as it sounds, this might actually improve the number of events returned if a performance bottleneck is preventing the query from completing. Regards, K
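A quick sketch of those btool checks from the command line (assuming a default Linux install path; adjust as needed):

# show every db_inputs.conf layered into the final config, with the file each setting comes from
$SPLUNK_HOME/bin/splunk btool db_inputs list --debug
# check the effective HEC payload limit
$SPLUNK_HOME/bin/splunk btool limits list --debug | grep -i max_content_length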
Hi @JWGOV, As per the documentation here, for Oracle database inputs you may use any of varchar, int, float, real, bigint, number, timestamp, datetime, or date for the rising column. You shouldn't really have to do any other formatting of the data as long as the rising column is one of those types, but... Without seeing the actual data causing the ORA-01843 errors it's hard to say where the root cause lies, but consider using TO_NUMBER to convert the EVENT_TIMESTAMP field from a timestamp to a number, which would at least avoid this specific error. Although, you might still have problems if some bad data lands in the EVENT_TIMESTAMP field and can't be converted to a number, or if it is converted to a number from a different timestamp format, which might result in a checksum error. Are you able to share an example from when one of those errors occurred?
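A rough sketch of that idea in the input's SQL; the schema and table names are placeholders, and note that Oracle needs an intermediate TO_CHAR since TO_NUMBER won't take a TIMESTAMP directly:

SELECT t.*,
       TO_NUMBER(TO_CHAR(t.EVENT_TIMESTAMP, 'YYYYMMDDHH24MISS')) AS RISING_TS
FROM   MY_SCHEMA.MY_EVENTS t
WHERE  TO_NUMBER(TO_CHAR(t.EVENT_TIMESTAMP, 'YYYYMMDDHH24MISS')) > ?
ORDER  BY RISING_TS ASC

Here the ? is DB Connect's rising-column checkpoint placeholder, and RISING_TS would be selected as the rising column in the input configuration.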