All Posts


Hello @Gustavo.Marconi, sure, I'm checking on running the curl command to diagnose the connectivity issue. I remember asking you to update the controller URL to https instead of http. Have you made that change, and are you still facing the same issue? Also, can you please confirm whether you have any proxy in between? Best Regards, Rajesh Ganapavarapu
Use the eval command to replace the Error_Code value with the desired text. Note that the named capture group in rex needs the form (?<name>...):

index=testindex source=application.logs
| rex "ErrorCode:\[(?<Error_Code>\d+)\]"
| search Error_Code IN (200, 400, 500, 505)
| stats count by Error_Code
| eval Error_Code = "Application received with errorcode " . Error_Code
| where count > 5
Could you post the sanitized search from that panel? It likely has a broken reference to a lookup and/or an eventtype.
Ok, as for why your joins wouldn't work: you tried to join on the field host, but your subsearch returned the fields Host_Name and IP. There was no way to match that result set because it didn't have a host field.
Sorry, I am too used to Linux. I believe the equivalent btool command on Windows is:

%SPLUNK_HOME%\bin\splunk.exe btool web list settings | findstr max_upload
Let's start with 3: we don't know your data and, frankly, since there are better ways to do it, people aren't very eager to troubleshoot join searches. Ok. With that out of the way...

1. There is a huge difference between lookup and doing a join on an inputlookup-generated search. lookup is a distributable streaming command, which means that in a bigger environment, as long as it's early enough in your search pipeline, it can be performed by each indexer separately, parallelizing the work among search peers. If you do a join, Splunk has to first run the subsearch completely, return its results to the search head running your main search, and then perform the join on the whole result set returned by your main search up to that point. It might not be a big difference in this particular case, but in general there is a huge conceptual difference in how those two commands work, and that can have a very big performance impact on your search.

2. If you want to do a lookup on either of two fields, you simply do two lookups:

| lookup whatever.csv field1 output something as something1
| lookup whatever.csv field2 output something as something2

This way you'll get two fields with which you can do whatever you want. You can coalesce them, or append one to the other in case both return results. You can also use this technique to filter events so that you only get those with matching entries in the lookup:

<your base search>
| lookup yourlookup.csv IP output IP
| lookup yourlookup.csv Host_Name output Host_Name
| search IP=* OR Host_Name=*

This search also uses a neat trick of overwriting a field with its own contents matched from the lookup. If a match is found in the lookup, the field's value is retained; otherwise the field is cleared. This lets you filter easily in the last step to retain only those events which have a value in either of those fields. If you don't want this field clearing to happen, return the fields under different names.
For example:

| lookup yourlookup.csv IP output IP as matched_IP

And adjust the last search command accordingly.
index=testindex source=application.logs
| rex "ErrorCode:\[(?<Error_Code>\d+)\]"
| search Error_Code IN (200, 400, 500, 505)
| stats count by Error_Code
| where count > 5

Output:

Error_Code  count
200         20
500         100
400         40
505         45
500         32

Instead of the error codes we want to display custom text, as shown below. How can we do this?

Expected output:

Error_Code                               count
Application received with errorcode 200  20
Application received with errorcode 500  100
Application received with errorcode 400  40
Application received with errorcode 505  45
Application received with errorcode 500  32
To make your app CIM-compliant, you should do the following: 1. Use eventtypes to apply tags to the events so they end up in the correct data model, e.g. the tags "network" and "communicate" to put them in the Network Traffic data model. 2. Add field extractions, calculated fields, lookups, etc., to get values for the fields listed in the CIM model. The Vladiator app is useful for this purpose: https://splunkbase.splunk.com/app/2968 Ref: https://docs.splunk.com/Documentation/CIM/5.3.2/User/NetworkTraffic
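A minimal sketch of step 1, assuming a hypothetical sourcetype my:firewall and eventtype name (swap in your app's actual names):

```ini
# eventtypes.conf - define an eventtype matching your events
[my_firewall_traffic]
search = sourcetype=my:firewall

# tags.conf - tag that eventtype so CIM routes it to Network Traffic
[eventtype=my_firewall_traffic]
network = enabled
communicate = enabled
```

Both files go in your app's default/ (or local/) directory; the data model then picks up the events via the tags.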
A small note to add: since v9.x, password complexity is enforced for the user-seed.conf file as well, so be sure the new password is at least 8 characters long, or whatever your complexity requirements are. If the new etc/passwd file is not created, check the splunkd.log file for the failure reason.
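For reference, a minimal user-seed.conf sketch (placed in $SPLUNK_HOME/etc/system/local/ before Splunk first starts; the password below is just a placeholder that meets the default 8-character minimum):

```ini
# user-seed.conf - seeds the initial admin credentials on first start
[user_info]
USERNAME = admin
PASSWORD = Chang3me!
```

Splunk consumes this file on startup and writes the resulting hash into etc/passwd.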
That sounds like a good recipe for indexer confusion, as both indexers assume that the buckets on the share are managed only by themselves. On the off chance that your SAN supports an S3 API, you may be able to set up SmartStore, which would put the data redundancy and availability responsibilities on the SAN instead of the indexers. https://docs.splunk.com/Documentation/Splunk/9.2.2/Indexer/AboutSmartStore
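If the SAN does expose S3, the indexes.conf side of SmartStore looks roughly like this sketch (bucket name and endpoint are placeholders for your environment):

```ini
# indexes.conf - a remote volume backed by the SAN's S3 endpoint
[volume:remote_store]
storageType = remote
path = s3://your-smartstore-bucket
remote.s3.endpoint = https://san.example.com

# point indexes at the remote volume
[default]
remotePath = volume:remote_store/$_index_name
```

Local storage then acts as a cache, and the authoritative copies of warm buckets live on the remote store.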
It would be nice if you wrote what solved your problem. My suspicion is that either you had your setting on the wrong component, or you referenced the wrong source (with WEF-forwarded events, the add-on rewrites the source from the ForwardedEvents event log to the original event log the event was forwarded from).
Thanks for the response and the suggestion. I want to match on either "Host_Name" or "IP" from the lookup table. The Splunk "host" field in our configuration (misconfiguration) returns the DNS name for some hosts and the IP address for others. (Long story.) The lookup table has maybe 150 rows, so the subsearch isn't that resource-intensive. Using lookup instead of inputlookup seems like it would be clunky, but at this point I just want to make something work, so I'm all ears. To respond to your suggestions/comments:

1. "Join" seems the intuitive, clean option for my use case. The lookup table is small.

2. I don't want to "enrich" my events/logs; I want to filter them to return only logs where the Splunk host field matches the Host_Name OR IP field from the lookup. (But at this point, I will take matching just one of the fields from the lookup table.) Can we construct a query that accomplishes this filter without using join and inputlookup?

3. Can anyone tell me why my query won't work?
1. Don't use the join command unless there is absolutely no other way. Usually there is. Yes, there are some valid use cases for join, but typically the same result can be achieved in a much "splunkier" way.

2. If you have a lookup, use the lookup command instead of doing strange tricks with join:

<your initial search>
| lookup System_Hosts.csv Host_Name IP

And you're good to go: you'll get your events enriched with the lookup contents. One thing though: are you sure you want to match on both IP and hostname? Unless you have some strange environment with duplicate network addresses, IP alone should identify the host.
I've been debugging my inner join query for hours, and that's why I'm here with my first question for this community.

We have a CSV lookup table with fields "Host_Name", "IP", and others, based on our known hosts that should be reporting. Note: in our Splunk logs, for some hosts the Splunk "host" field matches the lookup table's "Host_Name" field, and some hosts match the "IP" field. For this reason, when we add a new host, we add 2 rows to the lookup, and place the host name and the IP in both fields of the lookup. (Long story.) Our lookup ("System_Hosts.csv") looks like this:

Host_Name    IP
Foo          Bar
ServerA      123.45.6.7
xyz          abc
123.45.6.7   ServerA
def          ghi
ServerB
...and so on

Queries that don't work (this is a very oversimplified stub of the query, but I'm debugging and brought it down to the smallest code that doesn't function):

index=myindex | join type=inner host [| inputlookup System_Hosts.csv | fields Host_Name, IP] | table host

(Removing one of the fields from the lookup, just in case I don't understand inner join and the Splunk host has to match both "Host_Name" and "IP" lookup fields to return results):

index=myindex | join type=inner host [| inputlookup System_Hosts.csv | fields Host_Name]

(Removing the "type=inner" optional parameter also doesn't work as expected; inner is the default type.)

Queries that DO work:

(To verify logs and hosts exist, and visually match the hosts to the lookup table:)

index=myindex | table host

(To verify the lookup is accessible, and the fields and syntax are accurate:)

| inputlookup System_Hosts.csv | fields Host_Name, IP | table Host_Name, IP

(To make me crazy? Outer join works. But this just returns all hosts from every log.)

index=myindex | join type=outer host [| inputlookup System_Hosts.csv | fields Host_Name, IP] | table host

So these have been verified:
spelling of the lookup
spelling of the lookup fields
permission to access the lookup
syntax of the entire query without the "type=inner" optional argument

From my understanding, when this works, the query will return a table with hosts that match entries in the "Host_Name" OR "IP" fields from the lookup. If I don't understand inner join, please tell me, but this is secondary to making inner join work at all, because as you can see above, I try to match only the "Host_Name" field with no success. I'm pulling my hair out! Please help!
https://docs.splunk.com/Documentation/VersionCompatibility/current/Matrix/Compatibilitybetweenforwardersandindexers Generally, unless you're using httpout, you're good with whatever combination of versions from not-so-ancient releases you choose (I've actually run forwarders as far back as 6.6 with 9.0 indexers, but that's not officially supported). The only annoyance is with 9.x UFs and pre-9.0 indexers, because 9.x UFs generate internal events for an index that doesn't exist on pre-9.0 indexers, so you end up with either warnings in the Splunk logs or events in your last-resort index.
Check the usage by sourcetype, index... Then check what kind of logs these are. We don't know your environment, we don't know your data.
I'm currently running Universal Forwarders on versions 9.0.0 and 9.0.1. These UFs were flagged for vulnerabilities associated with SVD-2023-0809. The fix is to upgrade to 9.0.6. I'm running Splunk Enterprise 9.0.5. I need to know if UF 9.0.6 will be compatible with Splunk Enterprise 9.0.5.
Yes. The license is generally version-agnostic.
@richgalloway, thank you so much, it worked!
We have it set to f5:bigip:syslog