All Posts

The documentation is not correct. You have to create two separate certificate files, because the Splunk Web certificate must not contain the private key.

Web certificate format:

-----BEGIN CERTIFICATE-----
... (certificate for your server) ...
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
... (the intermediate certificate) ...
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
... (the root certificate for the CA) ...
-----END CERTIFICATE-----

Server certificate format:

-----BEGIN CERTIFICATE-----
... (certificate for your server) ...
-----END CERTIFICATE-----
-----BEGIN RSA PRIVATE KEY-----
... <server private key, passphrase protected> ...
-----END RSA PRIVATE KEY-----
-----BEGIN CERTIFICATE-----
... (certificate for your server) ...
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
... (the intermediate certificate) ...
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
... (the certificate authority certificate) ...
-----END CERTIFICATE-----

Check out: Configure and install certificates in Splunk Enterprise for Splunk Log Observer Connect - Splunk Documentation

The final configuration must look like this:

web.conf

[settings]
enableSplunkWebSSL = true
privKeyPath = /opt/splunk/etc/auth/mycerts/mySplunkWebPrivateKey.key
serverCert = /opt/splunk/etc/auth/mycerts/mySplunkWebCertificate.pem
sslPassword = <priv_key_passwd>

server.conf

[sslConfig]
serverCert = /opt/splunk/etc/auth/sloccerts/myFinalCert.pem
requireClientCert = false
sslPassword = <priv_key_passwd>
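The two files described above can be assembled with plain openssl and cat. A minimal sketch: it generates a throwaway self-signed certificate instead of a CA-issued chain, uses -nodes (no passphrase) purely for the demo, and all file names are placeholders, not the paths from the post.

```shell
# Demo only: generate a throwaway self-signed cert + key.
# In practice, use the certificate and chain issued by your CA,
# and a passphrase-protected key instead of -nodes.
openssl req -x509 -newkey rsa:2048 -keyout myPrivateKey.key \
  -out myServerCert.pem -days 1 -nodes \
  -subj "/CN=splunk.example.com" 2>/dev/null

# Splunk Web certificate: certificates only, NO private key.
# With a real chain you would append the intermediate and root certs too:
#   cat myServerCert.pem intermediate.pem root.pem > mySplunkWebCertificate.pem
cat myServerCert.pem > mySplunkWebCertificate.pem

# server.conf certificate: certificate plus private key concatenated.
cat myServerCert.pem myPrivateKey.key > myFinalCert.pem
```

The key point is that the web certificate file never contains a PRIVATE KEY block, while the server.conf certificate file does.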
Hi @isoutamo,

I followed a similar strategy, which I'll list below:

1. Set up the new CM and turn on indexer clustering.
2. Turn off the instance and copy 'etc/master-apps' and 'etc/manager-apps'.
3. Set up server.conf on the new instance with the relevant changes (pass4SymmKey in string format).
4. Start the new CM.
5. Change the CM URL on the indexers one by one.
6. Finally, change the URL on the search head after the indexers are migrated.

This had worked in the test environment, but when it was time for the production setup, the indexers failed to connect and kept stopping after changing to the new CM.

Regards,
Pravin
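For reference, the peer-side change in step 5 above looks like the following sketch. It assumes current setting names (manager_uri; older versions use master_uri), and the hostname and key are placeholders, not values from the post:

```
# server.conf on each indexer (peer); placeholder values
[clustering]
mode = peer
manager_uri = https://new-cm.example.com:8089
pass4SymmKey = <same key as on the CM, plain text before first start>
```

The pass4SymmKey is entered in plain text and hashed by Splunk on startup, which is why it must match on the CM and every peer before they reconnect.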
Have you already checked the following docs? Troubleshoot the Splunk OpenTelemetry Collector — Splunk Observability Cloud documentation
You could create a lookup where you group each instance and add information about the responsible colleagues and their mail addresses. Then you configure it as an automatic lookup, with the benefit that you only need to create one alert, because you can iterate over the mail addresses (e.g. $result.recipient$) in your alert. Define an automatic lookup in Splunk Web - Splunk Documentation
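A minimal sketch of such a setup, assuming a CSV-backed lookup; the lookup name, sourcetype, and field names (instance_owners, your_sourcetype, owner_mail) are hypothetical:

```
# transforms.conf -- hypothetical lookup backed by a CSV with
# columns: instance, owner_mail
[instance_owners]
filename = instance_owners.csv

# props.conf -- apply the lookup automatically to the relevant sourcetype
[your_sourcetype]
LOOKUP-instance_owners = instance_owners instance OUTPUT owner_mail AS recipient
```

In the alert's email action, the "To" field can then reference $result.recipient$ so each result is routed to the responsible colleague.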
This shouldn't be the case. With such a simple pattern there is not much backtracking. Backtracking would matter if there were wildcards, alternations and the like; with a pretty straightforward match, it's not the problem.
The parameter DEPTH_LIMIT must be set in transforms.conf:

DEPTH_LIMIT = <integer>
* Only set in transforms.conf for REPORT and TRANSFORMS field extractions.
  For EXTRACT type field extractions, set this in props.conf.
* Optional. Limits the amount of resources that are spent by PCRE when running patterns that do not match.
* Use this to limit the depth of nested backtracking in an internal PCRE function, match().
  If set too low, PCRE might fail to correctly match a pattern.
* Default: 1000

transforms.conf - Splunk Documentation
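As a sketch, a REPORT-style extraction with a raised limit might look like this; the stanza name, regex, and value are illustrative, not taken from the post:

```
# transforms.conf -- hypothetical extraction with a raised backtracking limit
[extract_user_field]
REGEX = user=(?<user>\S+)
DEPTH_LIMIT = 10000
```

Raising DEPTH_LIMIT trades CPU for match completeness on non-matching events, so increase it only as far as the pattern actually needs.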
Remove the collect command from your search query. The enabled summary indexing is enough to fill the summary index.
It depends on what information you have ingested into your Splunk environment. Splunk is "just" a data processing tool; you have to feed it with data. If you have your AD logs in Splunk, you can search them, but while there might be some people around here who have more experience with MS systems, it's generally more of an AD-related question how to find that info than it is a Splunk question. You must know what to look for.

If your data is properly onboarded and CIM-compliant, you can look through the Change datamodel (if I remember the syntax correctly):

| datamodel Change Account_Management.Locked_Accounts | search user="whatever"

I'm not sure, though, whether it will only find the lockout event as such or whether it will contain the reason as well.
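If the data is not CIM-mapped, a raw-event sketch against Windows Security logs could look like the one below. The index name is an assumption, the exact field names depend on your Windows add-on, and Event ID 4740 is the Windows "a user account was locked out" event:

```
index=wineventlog EventCode=4740 user="whatever"
| table _time, user, src_nt_host, ComputerName
```

The caller computer recorded in the 4740 event is usually the starting point for finding why the lockouts happen (stale credentials, a mapped drive, a scheduled task, etc.).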
OK, so what information do you have in Splunk?
Hello all,

I have the following case: Splunk is accessible on https://dh2.mydomain.com/sendemail931 with "enable_spotlight_search = true" in web-features.conf. If I search for anything and click on a shown result/match, I get "The requested URL was not found on this server.", because the root_endpoint is being removed from the URL. Splunk is behind a reverse proxy (httpd) and an application load balancer.

So upon clicking on the result, I'm being redirected to
https://dh2.mydomain.com/manager/launcher/admin/alert_actions/email?action=edit
but it should be
https://dh2.mydomain.com/sendemail931/en-US/manager/launcher/admin/alert_actions/email?action=edit

I'm pretty sure that the redirect is happening internally, because I cannot see any relevant logs on the Apache. I've tried to add the following to web.conf, but the result is the same:

tools.proxy.base = https://dh2.mydomain.com/sendemail931/
tools.proxy.on = true

This is the only case where root_endpoint is not preserved. I've tried to reverse-engineer why this could happen and found that the request is handled by common.min.js, I guess somewhere here:

{title:(0,r._)("Alert actions"),id:"/servicesNS/nobody/search/data/ui/manager/alert_actions",description:(0,r._)("Review and manage available alert actions"),url:"/manager/".concat(e,"/alert_actions"),keywords:[]}

and here:

{var o=m.default.getSettingById(r);if(void 0===o)return;return(0,b.default)(o.title,o.url,O.length),n=o.url,void(window.location.href=n)
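For completeness, the web.conf settings this setup implies would be a sketch like the following; the root_endpoint value is inferred from the URL in the post, and the proxy lines mirror what was already tried:

```
# web.conf -- sketch of the settings implied by the post
[settings]
root_endpoint = /sendemail931
tools.proxy.on = true
tools.proxy.base = https://dh2.mydomain.com/sendemail931/
```

With root_endpoint set, every URL Splunk Web emits should carry the /sendemail931 prefix, which is why a client-side redirect built from a hard-coded "/manager/..." path, as in the common.min.js snippet above, would bypass it.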
Hi,

I believe the problem is not on the AppDynamics side. I just tested and inserted an email address into the string field, and it shows correctly.

Here is my sample curl populating the data, which works. Can you try to manually post to analytics and see if it works? It might be Power Automate that doesn't format the email address correctly, perhaps.

curl -X POST "https://xxxxxxx/events/publish/TEST" \
  -H "X-Events-API-AccountName:xxxxxxxx" \
  -H "X-Events-API-Key:xxxxxx" \
  -H "Content-type: application/vnd.appd.events+json;v=2" \
  -d '[{"expirationDateTime": 1597135561333, "appleId": "test@test.com", "DaysLeft": "176"}]'
It's null. Even if I click on the event, it shows a null value.
His AD account, on a Windows system.
That's strange. Can you run the query "select appleid from intune_vpp1"; does it show null values as well? Also, double-click on any of the events and check whether the email value is shown in the popup screen, or whether they are all null there too. I know some characters cause the main view to show null even though there is a value in them. I have not populated it with an email address before; I will do a test on my side as well.
What information do you have in Splunk? Which system is the user locked out of?
Please share your two searches (in code blocks)
I am pushing values from Power Automate to the AppD schema. All values are getting captured under the AppD schema except appleId, which is an email ID that I defined as a string. The appleId value is null. Why is it not capturing the value?
There is a user who says his account is locked. I want to check the cause using Splunk. How can I do that?
Hi Uma,

What do you mean by the value not getting updated? If you run the query in the query browser, does it return the correct value? Is it only in the calculated metric that it doesn't return the correct value?
What information do you have available to you to help you determine this?