All Posts


Previously I had set up a single-site cluster with the following layout: an indexer cluster with 3 machines, a search head cluster with 3 machines, and 1 machine hosting the manager, Monitoring Console, and SH deployer. Now I need to set up a multisite cluster with these requirements. Site 1: 3 indexers, 2 search heads, 1 manager. Site 2: 3 indexers, 2 search heads.
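For reference, the multisite part of this maps to a handful of server.conf settings on the manager node; a minimal sketch with placeholder values (the site names and replication/search factors below are illustrative and would need to match the layout above):

[general]
site = site1

[clustering]
mode = manager
# older Splunk versions use: mode = master
multisite = true
available_sites = site1,site2
site_replication_factor = origin:2,total:3
site_search_factor = origin:1,total:2

Each indexer and search head then sets its own site under [general] and points at the manager as usual.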
While trying to download the extension manager, I am unable to access the page or download it. Can you please help me download the appdynamics-extensionmanager.zip file for Windows?
Hi @Somesh, could you better describe your requirement? The SHC deployer is a management instance that must be configured for a Search Head Cluster in one of the two sites. It doesn't need a secondary copy in the secondary site, because the Search Head Cluster keeps working even without the deployer; the only limitation is that you cannot deploy a new app until the deployer is available again. Likewise, you can have one Monitoring Console, which you configure following the documentation at https://docs.splunk.com/Documentation/Splunk/9.3.1/DMC/DMCoverview You could also create a secondary server in the secondary site, but it isn't required for this activity. Ciao. Giuseppe
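If it helps, the SHC members find the deployer through a single setting in server.conf on each member, so re-pointing them later (for example to a rebuilt deployer in the other site) is straightforward; a minimal sketch with a placeholder hostname:

[shclustering]
conf_deploy_fetch_url = https://sh-deployer.example.com:8089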
I have seen the Splunk documentation for setting up a multisite cluster, but I have not seen anything related to the Monitoring Console and SH deployer. Can someone suggest how to set up these two?
Hi, I am having a hard time extracting multivalue fields from an event using transforms with MV_ADD = true. It seems to be only partially working: it extracts the first and third values in the event but skips the second and fourth. The regex I am using matches all the values in regex101, but I'm not sure why Splunk is unable to capture them all. Below are the sample event and the regex I am using.

Event -
postreport=test_west_policy\;passed\;(first_post:status:passed:pass_condition[clear]:fail_condition[]:skip_condition[]\;second_post:status:skipped:pass_condition[clear]:fail_condition[]:skip_condition[timed_out]\;third_post:status:failed:pass_condition[]:fail_condition[error]:skip_condition[]\;fourth_post:status:passed:pass_condition[clear]:fail_condition[]:skip_condition[])

Regex - https://regex101.com/r/r66eOz/1
(?<=\(|]\\;)(?<post>[^:]+):status:(?<status>[^:]*):pass_condition\[(?<passed_condition>[^\]]*)\]:fail_condition\[(?<failed_condition>[^\]]*)\]:skip_condition\[(?<skipped_condition>[^\]]*)\]

So Splunk matches all the values for first_post and third_post in the event above but skips second_post and fourth_post. I tried the same regex with the rex command, and there it only matches the first_post field values:

|rex field=raw_msg max_match=0 "(?<=\(|]\\;)(?<post>[^:]+):status:(?<status>[^:]*):pass_condition\[(?<passed_condition>[^\]]*)\]:fail_condition\[(?<failed_condition>[^\]]*)\]:skip_condition\[(?<skipped_condition>[^\]]*)\]"

Can someone please help me figure out if I am missing something here? Thanks.
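In case it is useful as a starting point, here is a rough transforms.conf / props.conf sketch of the same extraction with the separator matched and consumed instead of asserted with a lookbehind (stanza and sourcetype names are placeholders; this is only a variation to test, not a confirmed fix):

# transforms.conf
[extract_postreport]
REGEX = (?:\(|\\;)(?<post>[^:(\\]+):status:(?<status>[^:]*):pass_condition\[(?<passed_condition>[^\]]*)\]:fail_condition\[(?<failed_condition>[^\]]*)\]:skip_condition\[(?<skipped_condition>[^\]]*)\]
MV_ADD = true

# props.conf
[your_sourcetype]
REPORT-postreport = extract_postreport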
Hi All, I am using mvzip while working with a JSON file. In the new Splunk dashboards it seems the mvzip command is deprecated. Is there any way to extract values from nested JSON apart from mvzip?
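One mvzip-free pattern that sometimes works for nested JSON, depending on its structure, is to keep each array element intact with spath and then expand it (the path items{} is a placeholder for whatever your array is actually called):

| spath input=_raw path=items{} output=item
| mvexpand item
| spath input=item

Each row then carries one element of the array, and the second spath extracts that element's own fields, so there is no need to zip parallel multivalue fields together.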
Hi @Dabbsy, good for you, see you next time! Ciao and happy splunking. Giuseppe P.S.: Karma points are appreciated by all the contributors.
Is it possible that the token you use is created for a user that does not have permission to list other people's jobs?
I agree with @PickleRick that using lookup might be more performant if the lookup file is not very large and there are not many matches. If the lookup is very large, you can eliminate one subsearch because there is only one lookup.

index=network_traffic NOT [inputlookup devices.csv | stats values(ip_address) AS src_ip | eval dest_ip = src_ip]
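For comparison, a lookup-based version in the spirit of what @PickleRick suggested might look roughly like this (assuming devices.csv can be referenced directly as a lookup and dest_ip is the event field to test; adjust names to your data):

index=network_traffic
| lookup devices.csv ip_address AS dest_ip OUTPUT ip_address AS matched_ip
| where isnull(matched_ip)

This keeps only events whose dest_ip does not appear in the lookup, without any subsearch.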
You can try to re-install the app; that should resolve this issue.
Facing a similar issue with the AlienVault threat feed. I increased the max size, but it still fails with errors such as "Exception when polling TAXII feed. Any saved documents will be discarded" and "The downloaded taxii intelligence has a size that exceeds the configured max_size and will be discarded". Has anyone been able to resolve this?
LINE_BREAKER does not extract timestamps. That's what TIME_FORMAT does. There seem to be two issues here: lines not breaking correctly when the date is a single digit, and timestamps not extracted correctly when the date is a single digit. There's a separate setting for each.

The line breaker is telling Splunk the next event must have a two-digit day, so that is what Splunk does. To make the day 1 or 2 digits, modify the regex:

([\r\n])\[\w{3}\s\w{3}\s\d{1,2}\s\d{2}:\d{2}:

or keep it simple with

([\r\n])\[\w{3}\s\w{3}

The latter regex has the advantage of handling any time zone.

The %d formatting variable accepts both one- and two-digit dates. I believe the problem with the formatting string is the extra space after "%d". Here's the final set of props:

[sql:logs]
SHOULD_LINEMERGE=false
LINE_BREAKER=([\r\n])\[\w{3}\s\w{3}\s\d{1,2}\s\d{2}:\d{2}:\d{2}\s\w{3}\s\d{4}\]
TIME_PREFIX=\{
TIME_FORMAT=%a %b %d %H:%M:%S %Z %Y
Props used:

[sql:logs]
SHOULD_LINEMERGE=false
LINE_BREAKER=([\r\n])\[\w{3}\s\w{3}\s\d{2}\s\d{2}:\d{2}:\d{2}\sEDT\s\d{4}\]
TIME_PREFIX=\{
TIME_FORMAT=%a %b %d %H:%M:%S EDT %Y

With the above props, only two-digit dates (Aug 25, Aug 28) are getting extracted, but not single-digit dates (Aug 2, Aug 5). How can I modify the line breaker so that it handles both kinds of logs? Any help would be appreciated.

[Mon Aug 5 12:18:04 EDT 2024] - Sql error code and message returned from store procideure: No SQL error found.
[Mon Aug 2 12:18:04 EDT 2024] - Sql error code and message returned from store procideure: No SQL error found.
[Mon Aug 25 12:18:04 EDT 2024] - Sql error code and message returned from store procideure: No SQL error found.
[Mon Aug 28 12:18:04 EDT 2024] - Sql error code and message returned from store procideure: No SQL error found.
Thanks! While all of your other suggestions were valid and useful, it was your first suggestion about using the final cert that allowed me to create the connection.  I appreciate your help.
Try something along these lines:

| eval keep_xyz=if(searchmatch("A") AND field1="xyz","true",null())
| eval keep_abc=if(field1="abc","true",null())
| eventstats values(keep_abc) as keep_abc values(keep_xyz) as keep_xyz by field2
| where keep_xyz="true" AND keep_abc="true"
Hi, Unfortunately, the error message isn't very informative. I see that you pasted your custom certificate authority cert. Can you try pasting the final cert that you signed using that CA cert/key? (This is the one that you can view/export in your web browser at https://<ip-of-splunk-enterprise-instance>:8089.)

Here is a quick list of other sanity checks that might help:
- Were you careful to specify the public IP (not a private one) of your EC2 instance wherever you see <ip-of-splunk-enterprise-instance>?
- Does your AWS security policy allow incoming traffic on port 8089 from the O11y realm you're using (e.g. us1, us0, eu0)?
- If you load https://<ip-of-splunk-enterprise-instance>:8089 in your browser and export the certificate, is it the same one you pasted into the connection? (It should be.)
- Can you log in to the Splunk Enterprise instance with your service account to verify the username/password is valid?
- Be sure the target in your LOC connection uses https and port 8089.
- The service account must have a role that includes the capabilities "search" and "edit_tokens_own".
- The service account role should have a reasonable limit for searches (a multiple of 4, such as 40).
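If it is easier than the browser export, a quick way to see exactly which certificate the management port is presenting (assuming openssl is available on your workstation) is:

openssl s_client -connect <ip-of-splunk-enterprise-instance>:8089 -showcerts </dev/null | openssl x509 -noout -subject -issuer -dates

Comparing that output against the cert you pasted into the connection should confirm whether they match.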
inputs.conf is used for the incoming data. It's the chain inputs, props, and transforms to syslog described here. It would be great to transform to RFC5424 this way, but I doubt it's possible. Or is it?
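For what it's worth, the routing chain itself usually looks something like this sketch (group, transform, and sourcetype names are placeholders, and whether the resulting output can be shaped into strict RFC5424 is a separate question):

# outputs.conf
[syslog:my_syslog_group]
server = syslog-host.example.com:514
type = udp

# transforms.conf
[route_to_syslog]
REGEX = .
DEST_KEY = _SYSLOG_ROUTING
FORMAT = my_syslog_group

# props.conf
[your_sourcetype]
TRANSFORMS-routing = route_to_syslog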
Hello, I'm trying to write a query where I provide a key identifier (say "A"), and the query both finds matching results and uses a field from those results as a filter for another query that provides additional data that's needed.

Obfuscating some things, this is the idea, and the closest I've gotten:

index=ind1 earliest=-1d field1=abc
| append [search index=ind1 earliest=-1d "A" field1=xyz | rename field2 as f2]
| where field2=f2 OR field1="xyz"

The idea is that results where field1=xyz and which contain "A" have another field, field2, that is present and has a matching value whether field1=xyz or field1=abc. So I want to be able to search based on "A" and get back results where field1=xyz or field1=abc, with field2 matching between those two sets.

I do think a join would probably work here, but I've heard there can be performance issues with that, so I was trying to avoid it. It seems that I can't use "where field2=f2", and it also seems the parent search is pulling in a lot of data because of the generally broad terms (I suppose because the piped where command is applied after the fact). Any ideas of how to write this performantly?
I used this:

echo -e "# ssl-extensions-x509.cnf\n[v3_ca]\nbasicConstraints = CA:FALSE\nkeyUsage = digitalSignature, keyEncipherment\nsubjectAltName = IP:<ip-of-splunk-enterprise-instance>" > ssl-extensions-x509.cnf
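For context, an extensions file like this is typically referenced when signing the server certificate, along these lines (the CSR and CA file names here are placeholders, not necessarily the ones used in this setup):

openssl x509 -req -in splunk.csr -CA ca.pem -CAkey ca.key -CAcreateserial \
  -days 365 -extensions v3_ca -extfile ssl-extensions-x509.cnf -out splunk-server.pem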