All Posts

  It looks like the certificate is good for either client or server authentication.      
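A quick way to confirm that is to inspect the certificate's Extended Key Usage with openssl (a sketch; the .pem path is a placeholder for your certificate file):

openssl x509 -in yourcert.pem -noout -text | grep -A1 "Extended Key Usage"
# a cert valid for both roles typically lists
# "TLS Web Server Authentication, TLS Web Client Authentication" here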
Thanks for the explanations. I have made an updated version where there is no need for special characters. I also cleaned up the code a bit.
Found out why: Release v5.1.0 · splunk/contentctl · GitHub. The latest release gives an Error instead of a Warning for a bad DataSource. Since it was just released, the latest version of Splunk ESCU was simply built with an older version and had a pile of non-blocking Warnings.
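If you need the old warning-only behaviour until the content is fixed, one workaround (a sketch, assuming you install contentctl via pip) is to pin the previous release:

pip install 'contentctl<5.1.0'  # stay on a release that only warns on a bad DataSource
contentctl validate             # re-run validation with the older behaviour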
I haven't used the Slack alert action, so I can only give general hints. Usually alert actions write a log of what happened into the _internal index; you should try to find something there that is related to it.
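For example, modular alert actions generally log through sendmodalert, so a search along these lines is a reasonable starting point (a sketch; the action name "slack" is an assumption, check the name your alert actually uses):

index=_internal sourcetype=splunkd component=sendmodalert action="slack"

Any ERROR or WARN events it returns should point toward why the Slack delivery fails.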
We are using the Splunk Add-On for GWS Version 3.0.3 for Splunk Cloud and receiving this error when attempting to pull in the (user) identities portion. I have tried both 'admin_view' and 'domain_public' in the Inputs config with the same error. All other functions are working fine. I need to bring in this sourcetype "gws_users_identity" to populate our identities lookup. Has anyone else encountered this? Maybe you found a "fix"?

ERROR pid=<redacted> tid=MainThread file=log.py:log_exception:351 | exc_l="User Identity Error" Exception raised while ingesting data for users: <HttpError 400 when requesting https[:]//admin.googleapis.com/admin/directory/v1/users?customer=<redacted>&orderBy=email&maxResults=500&viewType=domain_public&alt=json returned "Bad Request". Details: "[{'message': 'Bad Request', 'domain': 'global', 'reason': 'badRequest'}]">.
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/Splunk_TA_Google_Workspace/bin/gws_user_identity.py", line 139, in stream_events
    service.users()
@marnall After replacing the package, the files mentioned in the error message are deleted. So as expected, no mentions of the older package version appear in our code after the older version is deleted & replaced. We're not sure why we're still facing this issue, and I'm wondering if this issue can be attributed to AppInspect in some way?
It's the calling shell that does the file expansion first, so disabling globbing inside the function (which runs in a subshell) will not work. Here's an example that hopefully demonstrates this more clearly...

$ mkdir empty
$ mv test.func empty/.test.func
$ cd empty
$ ls # no files
$ ls -a # globbing ignores hidden files
. .. .test.func
$ . .test.func
$ test * 2 3 # no files so no globbing and * works
opt=x**x#x file=* stansa=2 search=3
$ touch newfile
$ ls
newfile
$ test * 2 3
opt=x**x#x file=newfile stansa=2 search=3
$ test \* 2 3
opt=x**x#x file=* stansa=2 search=3
$ set -f
$ test * 2 3
opt=x**x#x file=* stansa=2 search=3
$

Agreed, using the -a switch may be a cleaner way to represent all files, though.
Sorry to be a bother, but what if there is a special char like = involved? I can't add the equal sign into my search query.

| eval msxxxt="*Action=GexxxxdledxxxxReport Duration=853*"
| rex "Duration (?<Duration>\d+)"
| timechart span=1h avg(Duration) AS avg_response by msxxxt

Thanks again for your help
What is the best practice to have a Splunk heavy forwarder call out to a third-party API and pull logs into Splunk? Most of the solutions I use have apps on Splunkbase, but this one does not. Do I have to build a custom add-on using something like the Add-on Builder?
I took a look at our existing servercert .pem file in vi. It did not contain the private key; it did include the root and intermediate certs.

I copied the contents of our private key .pem file to the location you suggested: mainCert/private key/intermediate cert/root cert. I saved the new .pem file with a new name, put it in a new location under /opt/splunk/etc/auth/newssl, and updated the inputs.conf file (below) at system/local.

disabled = false
connection_host = ip
index = main

[tcp:514]
disabled = false
connection_host = ip
index = main

[udp://514]
index = main
sourcetype = syslog
disabled = no

[tcp-ssl:6514]
sourcetype = syslog
index = syslog
disabled = 0

[sslConfig]
sslPassword = $7$pZd1k8bLJzFgGDno3jU7PQ4lAIFBoUbdhOAaFDZojyT1H6DGb5RdRA==
serverCert = /opt/splunk/etc/auth/newssl/prcertkey.pem
requireClientCert = false

However, when testing the connection with openssl, I get the same behavior: a tcp connection is made, but no certificate activity. I get a CONNECTED(00000148) message which hasn't led me to anything specific. I'm still missing something.

peter
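For reference, this is the kind of openssl check that shows whether the listener is actually negotiating TLS (a sketch; the hostname is a placeholder):

openssl s_client -connect splunk-host:6514 -showcerts
# a TLS listener prints the server certificate chain right after CONNECTED(...);
# CONNECTED with no certificate output usually means the port is speaking plain TCP

One thing worth checking: in the .conf specs, [sslConfig] is a server.conf stanza, while certificate settings for a [tcp-ssl] input normally go in an [SSL] stanza in inputs.conf, so the listener may simply be ignoring the settings where they are now.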
Thank you for the edit, I got it to work after adding a : after usage, as without it nothing was generating. Thank you for your assistance.

index="main" source="C:\\Admin\StorageLogs\storage_usage.log"
| rex "usage: (?<usage>[^%]+)% used"
| where usage >= 75
Hello Ismo, I am able to create an alert, but it does not send the alerts to Slack. I did check that the Slack Alert Setup has an updated "Slack App OAuth Token". Are there any steps I am missing? (By the way, if I choose email instead of Slack, the alerts go through.)
It depends heavily on what your servers are like, but here is a Google search that might help you install Java on various systems for Splunk: https://www.google.com/search?q=site%3Asplunk.com+install+java&sca_esv=2e83ef3dd22d1d30&sxsrf=AHTn8zrseyxi7n8sOS4aReBlluQXY9Begg%3A1741116613013&source=hp&ei=xFTHZ_O3O8fJkPIPl5aKkAM&iflsig=ACkRmUkAAAAAZ8di1WdaKSpaLHxDPruYGfC6ofRv9ytT&ved=0ahUKEwjzqerplPGLAxXHJEQIHReLAjIQ4dUDCBo&uact=5&oq=site%3Asplunk.com+install+java&gs_lp=Egdnd3Mtd2l6IhxzaXRlOnNwbHVuay5jb20gaW5zdGFsbCBqYXZhSM1uUL0LWMo0cAF4AJABAJgBVqABtwyqAQIyOLgBA8gBAPgBAZgCBqACiAOoAgrCAgcQIxgnGOoCwgINECMY8AUYJxjJAhjqAsICChAjGIAEGCcYigXCAgQQIxgnwgIREC4YgAQYsQMY0QMYgwEYxwHCAg4QABiABBixAxiDARiKBcICCxAAGIAEGLEDGIMBwgIOEC4YgAQYsQMYgwEY1ALCAggQABiABBixA8ICCxAuGIAEGLEDGNQCwgIFEC4YgATCAg4QLhiABBixAxjRAxjHAcICDhAuGIAEGMcBGI4FGK8BwgIIEC4YgAQYsQPCAgsQLhiABBjHARivAcICCxAuGIAEGNEDGMcBwgIFEAAYgATCAgQQABgDmAMF8QWTnhX-_AYE35IHATagB6tQ&sclient=gws-wiz
I have a file I'm monitoring that changes several times a day. It is likely that sometimes the file contents will be the same as a previous iteration, but not guaranteed (the file name does not change). The file is in text format and is a few dozen lines long. I want to process the file every time the modtime changes, even if the content is 100% the same, and I want to create a single event with the contents each time.

props.conf:

[my_sourcetype]
DATETIME_CONFIG = current
BREAK_ONLY_AFTER = nevereverbreak

[source::/path/to/file-to-be-read]
CHECK_METHOD = modtime
sourcetype = my_sourcetype

inputs.conf:

[monitor:///path/to/file-to-be-read]
disabled = 0
sourcetype = my_sourcetype
crcSalt = some_random_value_to_try_to_make_it_always_read

If I update file-to-be-read manually by adding new lines to the end, it gets read in immediately and I get an event just like I want. But when the automated process creates the file (with an updated modtime), Splunk seems not to be interested in it. Perms are correct and splunkd.log reflects that the modtime is different and it's re-reading the file... but it doesn't create a new event. I'm sure I'm missing something obvious, but I'd appreciate any advice. Cheers.
@coreyCLI, thank you for this. Adding "flex-basis" resolved the issue for me.
@kiran_panchavat In the Akamai docs, it says the Akamai Splunk Connector requires Java 8 (JRE 1.8) or above. But here you have given JDK. Is it fine to install JDK instead of JRE? Is it the same?
@kiran_panchavat Thank you. On the EC2 instance, in which path do I need to run all these commands?
Hi @livehybrid  Thanks for your response, below are sample log file names:

server.log.20250303.1
server.log.20250303.10
server.log.20250303.11
server.log.20250303.12
server.log.20250303.13
server.log.20250303.14
server.log.20250303.15
Are you using the API to dispatch and retrieve the results of a search? If so, does the search take roughly the same amount of time on its own?
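If you are, the usual pattern is a two-step dispatch-then-fetch against the search jobs REST endpoints, which also makes it easy to time each phase separately (a sketch; host, credentials, and the search string are placeholders):

curl -k -u admin:changeme https://localhost:8089/services/search/jobs -d search="search index=_internal | head 100" -d output_mode=json
# note the sid in the response, then fetch the results:
curl -k -u admin:changeme "https://localhost:8089/services/search/jobs/<sid>/results?output_mode=json"

Comparing the job's own runDuration (from the job status endpoint) with the end-to-end API time should show where the extra time goes.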
Hello, I'll ask around, but I imagine looking at Splunk/AppDynamics pages on LinkedIn should show you open jobs. https://www.splunk.com/en_us/careers.html