All Posts

The savedsearch command has a method for passing variables to the search.  That should make it possible to pass different values for earliest and latest.  See the Search Reference manual for details.
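For example, a minimal sketch, assuming a saved search named my_report whose definition contains the tokens $et$ and $lt$ (e.g. index=web earliest=$et$ latest=$lt$):

| savedsearch my_report et="-7d@d" lt="now"

The key=value arguments replace the matching $...$ tokens in the saved search string before it runs.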
The Splunk dev team is not here. This is a Splunk community (user) site. The term 'search.log' is correct. These files are not indexed, but are accessible via the Job Inspector. The cited docs link says that searches.log is no longer used.
OK. Please describe your ingestion process. Where do the events come from? How are they received/pulled? On which component? Where does the event stream go to from there? What components are involved and in what order? Where are you putting your settings?
There are generally two methods: either add metadata during ingestion, or dynamically classify the sources at search time. The latter approach is usually easiest done with a lookup, as @PaulPanther wrote. This way you don't have to touch your sources at all; you just have to keep the lookup contents up to date. The downside is that... you have to keep the lookup contents up to date, and it's desirable that you have a 1-1 mapping between an existing field (like host) and the information you're looking up. Otherwise you need to do some calculated fields evaluated conditionally... and it gets messy and hard to maintain.

Another approach is to add an indexed field directly at the source or at an intermediate forwarder. The IF approach of course requires you to have that intermediate HF to add a field with a different value for each group. (You can also do it directly at the receiving indexers, but then you'd have a hard time differentiating between sources - ingest-time lookup? Ugh.)

You might as well add the _meta setting to a particular input on the source forwarder. For example:

_meta = source_env::my_lab

This will give each event ingested via an input with this setting an additional indexed field called source_env with a value of "my_lab". It's convenient and very useful if you want to do some summaries with tstats, but it also has downsides: you need to define it separately at each source (or at least in the [default] stanza for inputs as well as in the general [WinEventLog] stanza - default settings are not inherited by wineventlog inputs!), and it must have static value(s) defined. There is no way to define this setting dynamically like "source_hostname::$hostname" or something like that - you must define it explicitly.

Another thing is that you can only have one _meta entry. Since there are no multiple _meta-something settings, multiple _meta definitions will overwrite each other according to normal precedence rules. So while you can define multiple indexed fields with

_meta = app_name::testapp1 env::test_env

you can't define _meta = app_name::testapp1 in one config file and _meta = env::test_env in another. Only one of those fields would be defined (depending on the files' precedence).
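A minimal sketch of the forwarder side (the monitor path and field names are hypothetical):

inputs.conf:

[monitor:///var/log/myapp]
index = main
_meta = app_name::testapp1 env::test_env

Once ingested, the indexed fields are available to tstats, e.g.:

| tstats count where index=main by app_name, env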
Thank you for the fast reply! This is rather unfortunate. Are there any plans to allow the use of variables in other actions in the future?
Unfortunately, for browser tests you can only use the global variables to fill fields on a web page with the "Fill in Field" action. Feel free to create an idea as a feature enhancement request at https://ideas.splunk.com/
Yes. The web interface is the only "standard" component (not counting any unpredictable things done by add-on developers) which behaves differently. While all other "areas of activity" (inputs, outputs, inter-splunkd connections) require certs in single-file form (from the top: subject cert, private key, certificate chain), the web interface requires two separate files - one with the private key and another with the chained subject certificate. And TLS-protecting your web interface, while desirable as a general rule, has nothing to do with inputs and outputs.
Thank you for the idea; this is a pretty easy solution. Obviously less complicated than what I came up with =D
The documentation is not correct. You have to create two separate certificate files because the Splunk Web certificate must not contain the private key.

web certificate format:

-----BEGIN CERTIFICATE-----
... (certificate for your server) ...
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
... (the intermediate certificate) ...
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
... (the root certificate for the CA) ...
-----END CERTIFICATE-----

server certificate format:

-----BEGIN CERTIFICATE-----
... (certificate for your server) ...
-----END CERTIFICATE-----
-----BEGIN RSA PRIVATE KEY-----
... <Server Private Key - Passphrase protected>
-----END RSA PRIVATE KEY-----
-----BEGIN CERTIFICATE-----
... (certificate for your server) ...
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
... (the intermediate certificate) ...
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
... (the certificate authority certificate) ...
-----END CERTIFICATE-----

Check out: Configure and install certificates in Splunk Enterprise for Splunk Log Observer Connect - Splunk Documentation

Final configuration must look like:

web.conf:

[settings]
enableSplunkWebSSL = true
privKeyPath = /opt/splunk/etc/auth/mycerts/mySplunkWebPrivateKey.key
serverCert = /opt/splunk/etc/auth/mycerts/mySplunkWebCertificate.pem
sslPassword = <priv_key_passwd>

server.conf:

[sslConfig]
serverCert = /opt/splunk/etc/auth/sloccerts/myFinalCert.pem
requireClientCert = false
sslPassword = <priv_key_passwd>
Hi @isoutamo,

I followed a similar strategy, which I'll list below:

1. Set up the new CM and turn on indexer clustering.
2. Turn off the instance and copy 'etc/master-apps' and 'etc/manager-apps'.
3. Set up server.conf on the new instance with the relevant changes (pass4SymmKey in string format).
4. Start the new CM.
5. Change the URL on the indexer cluster members one by one.
6. Finally, change the URL on the search head after the indexers are migrated.

This had worked in the test environment, but when it was time for the production setup, the indexers failed to connect and kept stopping after changing to the new CM.

Regards,
Pravin
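For reference, the peer-side settings involved look roughly like this (a sketch; the hostname is a placeholder, and older versions use master_uri instead of manager_uri):

server.conf on each indexer:

[clustering]
mode = peer
manager_uri = https://new-cm.example.com:8089
pass4SymmKey = <same key as configured on the new CM>

If the peers keep stopping, comparing this stanza against the new CM's [clustering] stanza (and the pass4SymmKey on both sides) is a reasonable first check.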
Have you already checked the following docs? Troubleshoot the Splunk OpenTelemetry Collector — Splunk Observability Cloud documentation
You could create a lookup where you group each instance and add information about the responsible colleagues and their mail addresses. Then you configure it as an automatic lookup, and you would have the benefit that you only need to create one alert, because you can iterate over the mail addresses (e.g. $result.recipient$) in your alert. Define an automatic lookup in Splunk Web - Splunk Documentation
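A minimal sketch, assuming a lookup file instance_owners.csv that maps host to recipient (all names here are hypothetical):

instance_owners.csv:

host,recipient
web01,alice@example.com
db01,bob@example.com

transforms.conf:

[instance_owners]
filename = instance_owners.csv

props.conf:

[host::*]
LOOKUP-owner = instance_owners host OUTPUT recipient

The alert's "To" field can then reference $result.recipient$.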
This shouldn't be the case. With such a simple pattern there is not much backtracking. It would matter if there were wildcards, alternations and such. With a pretty straightforward match, that's not the issue.
Parameter DEPTH_LIMIT must be set in transforms.conf:

DEPTH_LIMIT = <integer>
* Only set in transforms.conf for REPORT and TRANSFORMS field extractions. For EXTRACT type field extractions, set this in props.conf.
* Optional. Limits the amount of resources that are spent by PCRE when running patterns that do not match.
* Use this to limit the depth of nested backtracking in an internal PCRE function, match(). If set too low, PCRE might fail to correctly match a pattern.
* Default: 1000

transforms.conf - Splunk Documentation
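As a sketch, it goes into the stanza of the extraction that needs the extra headroom (the stanza name and value here are only illustrative):

transforms.conf:

[my_field_extraction]
REGEX = ...
DEPTH_LIMIT = 10000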
Remove the collect command from your search query. The enabled summary indexing is enough to populate the summary index.
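In other words, if summary indexing is enabled in the saved search's settings, a trailing collect like this (a hypothetical example) would write the results a second time:

... | stats count by host | collect index=my_summary

Dropping the | collect part leaves the summary indexing action to populate the index on its own.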
It depends on what information you have ingested into your Splunk environment. Splunk is "just" a data processing tool; you have to feed it with data. If you have your AD logs in Splunk, you can search them, but while there might be some people around here who have more experience with MS systems, it's generally more of an AD-related question how to find that info than it is a Splunk question. You must know what to look for.

If your data is properly onboarded and CIM-compliant, you can look through the Change datamodel (if I remember the syntax correctly):

| datamodel Change Account_Management.Locked_Accounts | search user="whatever"

I'm not sure though whether it will only find the lockout event as such or whether it will contain the reason as well.
OK, so what information do you have in Splunk?
Hello all,

I have the following case: Splunk is accessible on https://dh2.mydomain.com/sendemail931 with "enable_spotlight_search = true" in web-features.conf. If I search for anything and click on a shown result/match, I get "The requested URL was not found on this server.", because the root_endpoint is being removed from the URL. Splunk is behind a reverse proxy (httpd) and an application load balancer. So upon clicking on the result, I'm being redirected to https://dh2.mydomain.com/manager/launcher/admin/alert_actions/email?action=edit, but it should be https://dh2.mydomain.com/sendemail931/en-US/manager/launcher/admin/alert_actions/email?action=edit

I'm pretty sure that the redirect is happening internally, because I cannot see any relevant logs on the Apache. I've tried to add the following to web.conf, but the result is the same:

tools.proxy.base = https://dh2.mydomain.com/sendemail931/
tools.proxy.on = true

This is the only case where root_endpoint is not preserved. I've tried to reverse-engineer why this could happen and found that the request is handled by common.min.js, I guess somewhere here:

{title:(0,r._)("Alert actions"),id:"/servicesNS/nobody/search/data/ui/manager/alert_actions",description:(0,r._)("Review and manage available alert actions"),url:"/manager/".concat(e,"/alert_actions"),keywords:[]}

and here:

{var o=m.default.getSettingById(r);if(void 0===o)return;return(0,b.default)(o.title,o.url,O.length),n=o.url,void(window.location.href=n)
Hi,

I believe the problem is not on the AppDynamics side. I just tested and inserted an email address into the string field, and it shows correctly.

Here is my sample curl populating the data, which works. Can you try to manually post to Analytics and see if it works? It might be Power Automate that doesn't format the email address correctly.

curl -X POST "https://xxxxxxx/events/publish/TEST" -H "X-Events-API-AccountName:xxxxxxxx" -H "X-Events-API-Key:xxxxxx" -H "Content-type: application/vnd.appd.events+json;v=2" -d '[{"expirationDateTime": 1597135561333, "appleId": "test@test.com", "DaysLeft": "176"}]'
It's null. Even if I click on the event, it shows a null value.