Alerting

Splunk Enterprise 8.0.4.1 on Ubuntu 20.04 LTS single-user instance: Emails fail with 'ResourceNotFound' error on alerts.

dkozinn
Explorer
I'm running 8.0.4.1 on Ubuntu 20.04 LTS. I am using an Enterprise dev/test license (single user) for this instance. Any attempt to send email results in the following in python.log:

2020-07-07 21:45:15,136 +0000 ERROR     sendemail:1435 - [HTTP 404] https://127.0.0.1:8089/servicesNS/admin/search/saved/searches/_new?output_mode=json
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/search/bin/sendemail.py", line 1428, in <module>
    results = sendEmail(results, settings, keywords, argvals)
  File "/opt/splunk/etc/apps/search/bin/sendemail.py", line 261, in sendEmail
    responseHeaders, responseBody = simpleRequest(uri, method='GET', getargs={'output_mode':'json'}, sessionKey=sessionKey)
  File "/opt/splunk/lib/python2.7/site-packages/splunk/rest/__init__.py", line 577, in simpleRequest
    raise splunk.ResourceNotFound(uri)
ResourceNotFound: [HTTP 404] https://127.0.0.1:8089/servicesNS/admin/search/saved/searches/_new?output_mode=json
The mail server is on the local host, and TLS isn't in use. I've verified that the Splunk user can send email both via the mail command and by calling sendmail directly. The error happens whether I use an alert with an email action or | sendemail from a search.
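In case it helps, these are roughly the checks I ran to confirm local delivery works (a sketch; the recipient address is a placeholder):

echo "test body" | mail -s "test from splunk host" someone@example.com
printf "Subject: test\n\ntest body\n" | sendmail someone@example.com

Both go through when run as the same user that Splunk runs as.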
 
As this instance is limited to a single user, the user it runs under has the admin role. Under logging preferences, I set EmailSender to DEBUG, but I'm not seeing anything useful.

jorjiana88
Path Finder

Hi,

This is likely a local issue, so I would start with the most basic checks.

First, try telnet 127.0.0.1 8089 and telnet localhost 8089 to see if the Splunk management port is accessible, then check the IP binding settings and adjust them if necessary.

https://docs.splunk.com/Documentation/Splunk/8.0.6/Admin/BindSplunktoanIP

You can also try to access that resource with curl, using the proper server IP, to see if you still get a 404. Something like this, probably:
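A rough sketch (assuming the default management port and the endpoint from your error; -k just skips verification of the self-signed management certificate):

curl -k -u "<username>:<password>" https://127.0.0.1:8089/servicesNS/admin/search/saved/searches/_new?output_mode=json

If the endpoint exists you should get JSON back (or an authentication/authorization error), not a 404.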

 

dkozinn
Explorer

Hi @jorjiana88 ,

The Splunk management port is accessible on 8089. Using curl, I get this:

{"messages":[{"type":"ERROR","text":"Action forbidden."}]}

 To verify that the credentials were correct, I intentionally used the wrong password and then got this:

{"messages":[{"type":"ERROR","text":"Unauthorized"}]}

 

jorjiana88
Path Finder

Were you able to reproduce the 404 with curl in any way? For example, when using the 127.0.0.1 address?

The issue is that Splunk is not able to access that URL, but with curl you were able to. How? Did you use a different IP, or a different user (with different permissions) than the one Splunk uses?

The 'Action forbidden' and 'Unauthorized' errors when using curl are normal; we just wanted to find out whether you get a 404.

dkozinn
Explorer

I was not able to get the 404 error. I used the exact curl command you suggested, going to 127.0.0.1 with the same user ID/password that I use to log in. Splunk itself runs as root on my machine, so I can't use that for credentials, but the account that I did use does have full admin rights.

I know very little about how things work via the API. Should there always be something there to retrieve, or does that only happen for some period after the search/scheduled report runs?

 

thambisetty
Super Champion

What I understand from the debug logs is that the results file, which Splunk creates after the search completes, is not available; that's why you are getting the error. Splunk moved its commands to Python 3 as of Splunk Enterprise version 8, but I still see your sendemail command using Python 2.7.

Let me know if you still have the problem. I can try to simulate the issue.

————————————
If this helps, give a like below.

dkozinn
Explorer

I do still have the issue, @thambisetty.

@peterm30 mentioned that it might be a permissions issue, so I checked the directory where Splunk is installed. After a little digging, it looks like there are directories with results under $HOME/var/run/splunk/dispatch that are owned by root with permissions of 700, meaning the splunk user wouldn't be able to see the contents. I noticed files named results.csv.gz and, looking at them (as root), these seem to be the output of the searches.

If this is the issue, what is causing those to be written by root instead of splunk? Note that $HOME/var/run/splunk/dispatch is owned by splunk (not root).
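For reference, this is roughly how I checked (a sketch; adjust the path if your installation directory differs):

ls -ld /opt/splunk/var/run/splunk/dispatch/*
find /opt/splunk/var/run/splunk/dispatch -maxdepth 1 -type d -user root

The first command shows the per-search dispatch directories and their owners; the second lists only the ones owned by root.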

Thanks to both of you for getting back to me.

isoutamo
SplunkTrust
Usually this means that Splunk has at some point been started as root, or it was updated by the package manager (at least this happens regularly on RedHat; I don't know if it's the same on Debian).
Just stop Splunk, chown -R splunk:splunk /opt/splunk (or whatever your installation directory is), and then start Splunk as the user splunk, unless you are using systemd.
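Roughly like this, run as root (a sketch; adjust the installation directory to match yours, and with systemd the service is usually called Splunkd.service instead):

/opt/splunk/bin/splunk stop
chown -R splunk:splunk /opt/splunk
sudo -u splunk /opt/splunk/bin/splunk start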
r. Ismo

peterm30
Path Finder

I've tried this as a solution without any success.


dkozinn
Explorer

There are a large number of files under $SPLUNK_HOME owned by root, though not all. 

On my machine, splunkd is running through systemd as root. Would I still need to change the ownership of those files?


peterm30
Path Finder

My understanding is that Splunk 8 still uses Python 2 for some features, including those involved in this error (specifically, alert actions). https://docs.splunk.com/Documentation/Splunk/8.0.5/Python3Migration/ChangesEnterprise

That said, I'm no Splunk developer and might be interpreting that incorrectly. In the interest of troubleshooting, I did change the default interpreter from 2.7 to 3.7 by setting "python.version = python3" in /opt/splunk/etc/system/local/server.conf and restarting Splunk. Re-running the same command resulted in nearly the same error, only with python3.7 being referenced.
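For reference, the change itself was just this stanza (as I understand it, python.version belongs under [general]; back up server.conf before editing):

[general]
python.version = python3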

 

2020-08-01 08:51:01,165 -0400 ERROR	sendemail:1454 - [HTTP 404] https://127.0.0.1:8089/servicesNS/admin/search/saved/searches/_new?output_mode=json
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/search/bin/sendemail.py", line 1447, in <module>
    results = sendEmail(results, settings, keywords, argvals)
  File "/opt/splunk/etc/apps/search/bin/sendemail.py", line 262, in sendEmail
    responseHeaders, responseBody = simpleRequest(uri, method='GET', getargs={'output_mode':'json'}, sessionKey=sessionKey)
  File "/opt/splunk/lib/python3.7/site-packages/splunk/rest/__init__.py", line 577, in simpleRequest
    raise splunk.ResourceNotFound(uri)
splunk.ResourceNotFound: [HTTP 404] https://127.0.0.1:8089/servicesNS/admin/search/saved/searches/_new?output_mode=json

 

I do see that it's looking for a results file that is not available. However, I don't understand how this could occur or where to start looking for the issue.

If I had to make a guess, my gut feeling is that it's a Linux file permissions issue somewhere: Splunk is trying to create a file in a location that it does not have write access to, the file is never created, and a subsequent attempt to read the file fails. I have absolutely nothing to base that on, though; it's just the only reason I can think of why Splunk would be unable to find a file that it should be creating.

peterm30
Path Finder

I don't know if you've figured this out yet, but I'm running into the exact same problem, and your thread seems to be the only one with the exact same error message. I'm also using a dev/test license; however, I'm trying to send through a third-party mail provider (I tried two, in fact, in case that was the issue). I'm running Splunk 8.0.5 on Ubuntu 18.04.4.

 

 

2020-07-31 22:38:49,185 -0400 ERROR	sendemail:1454 - [HTTP 404] https://127.0.0.1:8089/servicesNS/admin/search/saved/searches/_new?output_mode=json
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/search/bin/sendemail.py", line 1447, in <module>
    results = sendEmail(results, settings, keywords, argvals)
  File "/opt/splunk/etc/apps/search/bin/sendemail.py", line 262, in sendEmail
    responseHeaders, responseBody = simpleRequest(uri, method='GET', getargs={'output_mode':'json'}, sessionKey=sessionKey)
  File "/opt/splunk/lib/python2.7/site-packages/splunk/rest/__init__.py", line 577, in simpleRequest
    raise splunk.ResourceNotFound(uri)
ResourceNotFound: [HTTP 404] https://127.0.0.1:8089/servicesNS/admin/search/saved/searches/_new?output_mode=json

 

 

benhooper
Communicator

Yes, I have a similar scenario/environment (Ubuntu Server 20.04, Splunk Enterprise 8.0.6, dev/test license, single user, etc.) and the same problem too.


benhooper
Communicator

This is what I've found so far.

https://docs.splunk.com/Documentation/Splunk/8.0.6/Alert/Emailnotification repeatedly says that alerts should be created "in the Search and Reporting app", so I tried recreating mine there, but doing so didn't make a difference.

According to https://docs.splunk.com/Documentation/Splunk/8.0.6/RESTUM/RESTusing#Namespace, the URI format is https://<host>:<mPort>/servicesNS/<user>/<app>/* and supports wildcards ("-").

In my case:

  1. I confirmed that command curl -k -u "<username>:<password>" https://127.0.0.1:8089/servicesNS/-/TA-<app name>/saved/searches/<alert name>?output_mode=json completed successfully, due to the wildcard.
  2. That explains the error: I don't have a user named admin, so the URI literally doesn't exist. But why is admin being used in the first place?

I read through /opt/splunk/etc/apps/search/bin/sendemail.py and, from lines 161 to 187, found that the URI is effectively built by splunk.entity.buildEndpoint(entityClass, namespace=settings['namespace'], owner=settings['owner']), for which I can't really find any documentation.
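To illustrate, this is roughly what that call appears to produce for my alert (a sketch based on the error URI rather than on any documentation; the values are the ones I saw in the settings dictionary):

import splunk.entity

# With the owner/namespace that sendemail.py receives for this alert:
uri = splunk.entity.buildEndpoint('saved/searches',
                                  namespace='search',  # settings['namespace'] (from the error URI)
                                  owner='admin')       # settings['owner']
# -> something like '/servicesNS/admin/search/saved/searches',
#    which splunkd then rejects with a 404 because no user named admin exists here.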

Modifying the Python script is far from ideal because (1) it causes file integrity check failures and (2) it'll probably get overwritten by updates in the future. However, for short-term diagnostic purposes, I added some logging to the script and found that, on line 1437, splunk.Intersplunk.getOrganizedResults() is executed, which sets the value of settings (the alert data), including 'owner': 'admin'. This explains the URI but is strange because the actual owner of the alert is me@mydomain.com.

I tried affecting this behaviour by modifying the owner value under the [savedsearches/<alert name>] stanza in /opt/splunk/etc/apps/TA-<app name>/metadata/local.meta:

  1. From me@mydomain.com to admin. This caused the alert to become orphaned.
  2. Removing it completely. This caused the owner to become nobody but did not change settings or, therefore, the generated URI.

It occurred to me that this could be a limitation of the dev/test license, considering Splunk's default username is admin and the license only allows one user, but https://www.splunk.com/en_us/resources/personalized-dev-test-licenses/faq.html → "What are the software feature and deployment limitations in the personalized Dev/Test License?" doesn't mention anything about alerts or emailing.

I'll be continuing my investigation tomorrow and I'll post back here with my findings.

benhooper
Communicator

I have found that these errors can be resolved by modifying file /opt/splunk/etc/apps/search/bin/sendemail.py and adding line settings["owner"] = "-" between lines 1437 (results, dummyresults, settings = splunk.Intersplunk.getOrganizedResults()) and 1438 (try:). This is good to know but, as previously mentioned, isn't a long-term fix.
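In other words, the patched section ends up looking roughly like this (a diagnostic hack rather than a supported fix; line numbers refer to my 8.0.6 copy of the script):

results, dummyresults, settings = splunk.Intersplunk.getOrganizedResults()  # original line 1437
settings["owner"] = "-"  # added: wildcard the owner so the /servicesNS/ URI resolves
try:  # original line 1438
    # ... rest of the original try block unchanged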

Given that the value of settings is set by splunk.Intersplunk.getOrganizedResults(), I looked for its file and found /opt/splunk/lib/python3.7/site-packages/splunk/Intersplunk.py but the function getOrganizedResults literally contains the line settings = {} and a comment saying that "settings" is always an empty dict:

(screenshot: getOrganizedResults() in /opt/splunk/lib/python3.7/site-packages/splunk/Intersplunk.py, showing settings = {})

Using the command grep --include=\*.* --exclude=\*.{log,js} -RsnI /opt/splunk/etc/apps/TA-<app name>/ -e "owner\|admin", I found that file /opt/splunk/etc/apps/TA-<app name>/local/restmap.conf contained the line [admin:TA_<app name>], so I modified it, replacing admin with me@mydomain.com, but that just seemed to break things even more.

So, I'm still no closer to understanding where the values in settings, and the owner of admin, come from.


benhooper
Communicator

I tried upgrading to Splunk v8.1.0 (latest as of writing) and found that:

  1. It did overwrite file /opt/splunk/etc/apps/search/bin/sendemail.py, thereby removing the workaround.
  2. It didn't resolve the problem either.

dkozinn
Explorer

Regarding your earlier comments about the hard-coded username in the code: on my system I've changed the default name of the administrative user. To be honest, I don't recall whether I was prompted to do that or did it for some other reason.

Could that be somehow involved?


benhooper
Communicator

It could very well be. I'm currently setting up a new virtual instance to test that theory.

benhooper
Communicator

FYI: there seems to be no default username; you're required to enter one during the initial set-up.

(screenshot: Splunk initial set-up prompting for an administrator username)

Anyway, I set up the following new environment:

  • OS: Ubuntu Server 20.04
  • Splunk: Enterprise (full 2-month trial) 8.1.0
  • Username: admin
  • Alert owner: nobody
  • Email server: smtp.office365.com:587

I have not yet seen the aforementioned errors.

I re-added the logging, which reported a URI of /servicesNS/splunk-system-user/TA-<app name>/saved/searches/<alert name>.

Why this is different and works, I'm not sure at this point but I'll look into it further tomorrow.

isoutamo
SplunkTrust
That's true. Splunk removed the default admin user in 7.2(?); since then, you create it when you install Splunk for the first time.
@dkozinn, can you still remember whether you have an admin user or not, or can you reconstruct this?
r. Ismo

dkozinn
Explorer

Thanks for digging into this, @benhooper. I'd started looking into this myself and got stuck due to the same lack of documentation that you mentioned.
