All Posts


I agree with @PickleRick that using a lookup might be more performant if the lookup file is not very large and there are not many matches. If the lookup is very large, you can eliminate one subsearch, because there is only one lookup:

index=network_traffic NOT [inputlookup devices.csv | stats values(ip_address) AS src_ip | eval dest_ip = src_ip]
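One plausible reading of that filter, sketched in Python with made-up IPs (the multivalue-expansion comment reflects my understanding of how Splunk renders the subsearch, not a documented guarantee):

```python
# Illustrative sketch of the NOT [inputlookup ...] filter; all values are made up.
lookup_ips = {"10.0.0.1", "10.0.0.2"}  # values(ip_address) from devices.csv

events = [
    {"src_ip": "10.0.0.1", "dest_ip": "10.0.0.2"},  # both in lookup -> excluded
    {"src_ip": "10.0.0.1", "dest_ip": "8.8.8.8"},   # only src in lookup -> kept
    {"src_ip": "1.1.1.1",  "dest_ip": "9.9.9.9"},   # neither -> kept
]

# The subsearch yields one row with multivalue src_ip and dest_ip, which
# expands to (src_ip IN set) AND (dest_ip IN set); the NOT negates that.
kept = [e for e in events
        if not (e["src_ip"] in lookup_ips and e["dest_ip"] in lookup_ips)]
print(len(kept))  # 2
```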
You can try to re-install the app; that should resolve this issue.
Facing a similar issue with the AlienVault threat feed. I increased the max size, but it still fails with the errors "Exception when polling TAXII feed. Any saved documents will be discarded" and "The downloaded taxii intelligence has a size that exceeds the configured max_size and will be discarded". Has anyone been able to resolve this?
LINE_BREAKER does not extract timestamps; that's what TIME_FORMAT does. There seem to be two issues here:

1. Lines not breaking correctly when the date is a single digit.
2. Timestamps not extracted correctly when the date is a single digit.

There's a separate setting for each. The line breaker is telling Splunk the next event must have a two-digit day, so that is what Splunk does. To allow the day to be 1 or 2 digits, modify the regex:

([\r\n])\[\w{3}\s\w{3}\s\d{1,2}\s\d{2}:\d{2}:

or keep it simple with:

([\r\n])\[\w{3}\s\w{3}

The latter regex has the advantage of handling any time zone. The %d formatting variable accepts both one- and two-digit days. I believe the problem with the formatting string is the extra space after "%d". Here's the final set of props:

[sql:logs]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n])\[\w{3}\s\w{3}\s\d{1,2}\s\d{2}:\d{2}:\d{2}\s\w{3}\s\d{4}\]
TIME_PREFIX = \{
TIME_FORMAT = %a %b %d %H:%M:%S %Z %Y
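As a quick sanity check of the relaxed pattern, here is a small Python sketch comparing the original two-digit-only regex with the \d{1,2} variant against the sample lines from the question (the event text after the bracket is trimmed for brevity):

```python
import re

# Relaxed event-start pattern: day may be one or two digits.
event_start = re.compile(r"\[\w{3}\s\w{3}\s\d{1,2}\s\d{2}:\d{2}:")
# Original pattern: day must be exactly two digits.
strict_start = re.compile(r"\[\w{3}\s\w{3}\s\d{2}\s\d{2}:\d{2}:")

one_digit = "[Mon Aug 5 12:18:04 EDT 2024] - ..."
two_digit = "[Mon Aug 25 12:18:04 EDT 2024] - ..."

print(bool(event_start.match(one_digit)))   # True  - relaxed regex accepts Aug 5
print(bool(event_start.match(two_digit)))   # True  - and still accepts Aug 25
print(bool(strict_start.match(one_digit)))  # False - original regex rejects Aug 5
```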
Props used:

[sql:logs]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n])\[\w{3}\s\w{3}\s\d{2}\s\d{2}:\d{2}:\d{2}\sEDT\s\d{4}\]
TIME_PREFIX = \{
TIME_FORMAT = %a %b %d %H:%M:%S EDT %Y

With the above props, only two-digit dates (Aug 25, Aug 28) are getting extracted, but not single-digit dates (Aug 2, Aug 5). How can I modify the line breaker so that it handles both? Any help would be appreciated.

[Mon Aug 5 12:18:04 EDT 2024] - Sql error code and message returned from store procideure: No SQL error found.
[Mon Aug 2 12:18:04 EDT 2024] - Sql error code and message returned from store procideure: No SQL error found.
[Mon Aug 25 12:18:04 EDT 2024] - Sql error code and message returned from store procideure: No SQL error found.
[Mon Aug 28 12:18:04 EDT 2024] - Sql error code and message returned from store procideure: No SQL error found.
Thanks! While all of your other suggestions were valid and useful, it was your first suggestion about using the final cert that allowed me to create the connection.  I appreciate your help.
Try something along these lines:

| eval keep_xyz=if(searchmatch("A") AND field1="xyz","true",null())
| eval keep_abc=if(field1="abc","true",null())
| eventstats values(keep_abc) as keep_abc values(keep_xyz) as keep_xyz by field2
| where keep_xyz="true" AND keep_abc="true"
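The pattern above (flag each event, spread the flags across every field2 group with eventstats, then keep only groups carrying both flags) can be sketched in plain Python; the event data here is made up for illustration:

```python
from collections import defaultdict

# Made-up events: each has raw text, field1 and field2.
events = [
    {"_raw": "A happened", "field1": "xyz", "field2": "g1"},
    {"_raw": "details",    "field1": "abc", "field2": "g1"},
    {"_raw": "A happened", "field1": "xyz", "field2": "g2"},  # g2 has no abc row
    {"_raw": "other",      "field1": "abc", "field2": "g3"},  # g3 has no xyz+"A" row
]

# The two eval/if lines: per-event flags.
for e in events:
    e["keep_xyz"] = "A" in e["_raw"] and e["field1"] == "xyz"
    e["keep_abc"] = e["field1"] == "abc"

# eventstats-style propagation: flags become group-wide within each field2.
group_flags = defaultdict(lambda: {"xyz": False, "abc": False})
for e in events:
    group_flags[e["field2"]]["xyz"] |= e["keep_xyz"]
    group_flags[e["field2"]]["abc"] |= e["keep_abc"]

# The where clause: keep events only in groups where both flags are set.
kept = [e for e in events
        if group_flags[e["field2"]]["xyz"] and group_flags[e["field2"]]["abc"]]
print([e["field2"] for e in kept])  # ['g1', 'g1']
```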
Hi,
Unfortunately, the error message isn't very informative. I see that you pasted your custom certificate authority cert. Can you try pasting the final cert that you signed using that CA cert/key? (This is the one you can view/export in your web browser at https://<ip-of-splunk-enterprise-instance>:8089.)
Here is a quick list of other sanity checks that might help:
- Were you certain to specify the public IP (not a private one) of your EC2 instance where you see <ip-of-splunk-enterprise-instance>?
- Does your AWS security policy allow incoming traffic on port 8089 from the O11y realm you're using (e.g. us1, us0, eu0...)?
- If you load https://<ip-of-splunk-enterprise-instance>:8089 in your browser and export the certificate, is it the same one you pasted into the connection? (It should be.)
- Can you log in to the Splunk Enterprise instance with your service account to verify the username/password is valid?
- Be sure the target in your LOC connection uses https and port 8089.
- The service account must have a role that includes the capabilities "search" and "edit_tokens_own".
- The service account role should have a reasonable limit for searches (a multiple of 4, such as 40).
inputs.conf is used for the incoming data. It's the chain of inputs, props, and transforms to syslog described here. It would be great to transform to RFC5424 this way, but I doubt it's possible. Or is it?
Hello, I'm trying to write a query where I provide a key identifier (say "A"), and the query both finds matching results and uses a field from those results as a filter for another query that provides additional data. Obfuscating some things, this is the idea, and the closest I've gotten:

index=ind1 earliest=-1d field1=abc
| append [search index=ind1 earliest=-1d "A" field1=xyz | rename field2 as f2]
| where field2=f2 OR field1="xyz"

The idea is that results where field1=xyz and contain "A" have another field, "field2", that is present and has a matching value when field1=xyz or field1=abc. So I want to be able to search based on "A" and get back results where field1=xyz or field1=abc and field2 matches between those two sets. I do think a join would probably work here, but I've heard there can be performance issues with that, so I was trying to avoid it. It seems that I can't use "where field2=f2", and the parent search is pulling in a lot of data because of the generally broad terms (I suppose because the piped where command is applied after the fact). Any ideas on how to write this performantly?
I used this: echo -e "# ssl-extensions-x509.cnf\n[v3_ca]\nbasicConstraints = CA:FALSE\nkeyUsage = digitalSignature, keyEncipherment\nsubjectAltName = IP:<ip-of-splunk-enterprise-instance>" > ssl-extensions-x509.cnf
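For readability, here is the same extensions file written with a heredoc, plus a sketch of the signing step such a file is normally passed to (the commented file names are assumptions based on the Splunk self-signed-cert docs, not your exact setup):

```shell
# Write the extensions file (identical content to the echo above).
cat > ssl-extensions-x509.cnf <<'EOF'
# ssl-extensions-x509.cnf
[v3_ca]
basicConstraints = CA:FALSE
keyUsage = digitalSignature, keyEncipherment
subjectAltName = IP:<ip-of-splunk-enterprise-instance>
EOF

# Hypothetical signing step using the extensions file:
# openssl x509 -req -in myServerCertificate.csr -CA myCACertificate.pem \
#   -CAkey myCAPrivateKey.key -CAcreateserial -out myServerCertificate.pem \
#   -days 1095 -sha256 -extensions v3_ca -extfile ssl-extensions-x509.cnf
```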
Hi, Since you’re using an IP address for the common name, did you specify IP instead of DNS in this step?
Looking for help with Log Observer Connect. I've got Splunk on AWS EC2. In O11y, I've followed the Set Up Service Account instructions and have secured the web and management ports with a self-signed cert. This part is working. I have confirmed the details regarding the indexes, capabilities (including indexes_list_all), resources, and tokens. When I attempt the Set Up Observability Cloud step, I fill in the account details and upload or paste the first cert in the chain (myCACertificate.pem, based on the instructions at https://docs.splunk.com/Documentation/Splunk/9.3.1/Security/ConfigureandinstallcertificatesforLogObserver?ref=hk), but I get: "Unable to create Splunk Enterprise client." The Splunk Enterprise _internal index doesn't appear to show any errors related to this. I've been very specific about the details; the only odd aspect is the IP address as the CN for the cert. Any ideas would be greatly appreciated.
Exactly what I needed and all I had to do was substitute my field names.  Worked like a charm.  Karma for you.  Thanks !
The developer guide does not provide any evidence of loading different icons based on the selected theme. The documentation states the order in which the icons are loaded, which is based on display quality and location in the app. https://dev.splunk.com/enterprise/docs/developapps/createapps#Add-icons-to-your-app

File name | Image size (pixels) | Placement
appIcon_2x.png | 72 x 72 | App list on the Splunk Web home page for high-resolution displays
appIcon.png | 36 x 36 | App list on the Splunk Web home page for standard-resolution displays
appIconAlt_2x.png | 72 x 72 | App menus for Splunk bar and search bar for high-resolution displays
appIconAlt.png | 36 x 36 | App menus for Splunk bar and search bar for standard-resolution displays
appLogo.png | 160 x 40 (maximum size) | App bar on standard-resolution displays
appLogo_2x.png | 320 x 80 (maximum size) | App bar on high-resolution displays
Hello AppDynamics Community Team. I'm trying to disable the AppD agent to free up more licenses. Our team's strategy is to remove the APPDYNAMICS_AGENT_ACCOUNT_ACCESS_KEY so that the agent cannot connect to the AppD UI, avoiding license consumption. Is that a good approach? Will it work? The idea is to free up licenses by disabling AppD on applications that don't need it for now; once those licenses are free, we use them to enable AppD for the applications we do need instrumented. Thanks!
The documentation for outputs.conf.spec specifically mentions RFC3164, so I don't believe Splunk has the configuration to support RFC5424 the way you want. Honestly, I would compare a packet capture at the destination from a good source against one relayed via a Splunk HF to confirm. However, I wouldn't get your hopes up that Splunk will relay the way you want.
How are you getting the original data? Splunk has many great features, but it's not a full-blown syslog receiver/processor. If you're receiving the events via syslog as well, it would probably be better to use rsyslog/syslog-ng to receive them and forward them from there (and send another copy to Splunk over HEC).
Which version were you upgrading from?