All Posts



Sorry, I jumped the gun a bit there... This seems to be a problem only with the manual execution of house_keeping.py: there is a shebang there which should select the correct version, #!/usr/bin/env python3.7. So that is most likely just a problem with running the script by hand. Then we're back to checking the "permissions to execute scripts". That may still be the issue.
Same problem. But on the other server it has the same permissions, and it works without any problems.

-rwxr--r-- 1 root root  5743 Jul 30 14:26 alertqueue_consumer.py
-rwxr--r-- 1 root root 10515 Jul 30 14:26 command_ameenrich.py
-rwxr--r-- 1 root root  4633 Jul 30 14:26 command_ameevents.py
-rwxr--r-- 1 root root 11458 Jul 30 14:26 create_alert.py
-rwxr--r-- 1 root root  1207 Jul 30 14:26 _env.py
-rwxr--r-- 1 root root  6027 Jul 30 14:26 handler_license.py
-rwxr--r-- 1 root root  4405 Jul 30 14:26 handler_logging.py
-rwxr--r-- 1 root root  3192 Jul 30 14:26 handler_minit.py
-rwxr--r-- 1 root root  3384 Jul 30 14:26 handler_proxy.py
-rwxr--r-- 1 root root   441 Jul 30 14:26 handler.py
-rwxr--r-- 1 root root  3578 Jul 30 14:26 handler_role_utils.py
-rwxr--r-- 1 root root  5497 Jul 30 14:26 house_keeping.py
-rwxr--r-- 1 root root   273 Jul 30 14:26 __init__.py
-rwxr--r-- 1 root root  3832 Jul 30 14:26 notificationqueue_consumer.py
drwxr-xr-x 2 root root  4096 Jul 30 14:26 persistconn
drwx--x--- 2 root root  4096 Jul 30 14:26 __pycache__
-rwxr--r-- 1 root root  3615 Jul 30 14:26 tag_keeping.py

We will restore a backup, as we want the system to be up again. I will try an upgrade in a few weeks. If anything changes, I will update the post. Thank you for your help!
So I think I figured it out.

/opt/splunk/bin/splunk cmd python3 house_keeping.py --scheme
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/alert_manager_enterprise/bin/house_keeping.py", line 21, in <module>
    from datapunctum.factory_logger import Logger
  File "/opt/splunk/etc/apps/alert_manager_enterprise/bin/../lib/datapunctum/factory_logger.py", line 10, in <module>
    from pydantic import BaseModel
  File "/opt/splunk/etc/apps/alert_manager_enterprise/bin/../lib/pydantic/__init__.py", line 372, in __getattr__
    module = import_module(module_name, package=package)
  File "/opt/splunk/lib/python3.9/importlib/__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "/opt/splunk/etc/apps/alert_manager_enterprise/bin/../lib/pydantic/main.py", line 11, in <module>
    import pydantic_core
  File "/opt/splunk/etc/apps/alert_manager_enterprise/bin/../lib/pydantic_core/__init__.py", line 6, in <module>
    from ._pydantic_core import (
ModuleNotFoundError: No module named 'pydantic_core._pydantic_core'

However:

/opt/splunk/bin/splunk cmd python3.7 house_keeping.py --scheme
<scheme><title>AME house keeping</title><description>This task does everything, that has to be done in the background, like mapping users and checking the events time to live.</description><use_external_validation>true</use_external_validation><use_single_instance>false</use_single_instance><streaming_mode>xml</streaming_mode><endpoint><args><arg name="default"><description>Unused Default Argument</description><data_type>string</data_type><required_on_edit>false</required_on_edit><required_on_create>true</required_on_create></arg></args></endpoint></scheme>

And then there is this: https://docs.splunk.com/Documentation/Splunk/9.3.0/ReleaseNotes/MeetSplunk "In this release, the default Python interpreter is set to Python version 3.9."
The python.version setting has been updated so that the parameter is set to the value force_python3; this forces all Python extension points to use Python 3.9, overriding any application-specified settings. "This is designed to be secure-by-default for new customers. If the value is set to python3.9, the default interpreter is set to Python 3.9 but applications can choose to use a different value. Python 3.7 continues to be available in the build for customers' private apps."

So the problem is a change in Python version handling when upgrading to Splunk 9.3.0. Either you can try to figure out how to force the app to revert to Python 3.7, or you can get hold of the developers and inform them the app may not be working with 9.3.0 anymore. Sorry, it's not fun to have to be the bearer of bad news.
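Whether the app can be forced back to 3.7 depends on how its scripts are declared. As a hedged sketch only (the python.version setting exists in Splunk, but the stanza below is illustrative and not taken from the app), the interpreter is controlled globally in server.conf and per script stanza in files such as inputs.conf or commands.conf:

```ini
# $SPLUNK_HOME/etc/system/local/server.conf -- global default.
# Per the release notes, force_python3 overrides per-app choices,
# while python3.9 leaves apps free to pick their own interpreter.
[general]
python.version = python3.9
```

```ini
# Hypothetical per-script override in the app's local/inputs.conf
# (stanza name is illustrative; check the app's default/inputs.conf).
[script://./bin/house_keeping.py]
python.version = python3.7
```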
Hold on a second, there is no permission to execute any of the python scripts, right? Might be something I'm missing, but I suspect that might be a problem at least:

-rwxrw-rw- 1 splunk splunk 5,7K jun 28 11:32 alertqueue_consumer.py
-rwxrw-rw- 1 splunk splunk  11K jun 28 11:32 command_ameenrich.py
-rwxrw-rw- 1 splunk splunk 4,6K jun 28 11:32 command_ameevents.py
-rwxrw-rw- 1 splunk splunk  12K jun 28 11:32 create_alert.py
-rwxrw-rw- 1 splunk splunk 1,2K jun 28 11:32 _env.py
-rwxrw-rw- 1 splunk splunk 5,9K jun 28 11:32 handler_license.py
-rwxrw-rw- 1 splunk splunk 4,4K jun 28 11:32 handler_logging.py
-rwxrw-rw- 1 splunk splunk 3,2K jun 28 11:32 handler_minit.py
-rwxrw-rw- 1 splunk splunk 3,4K jun 28 11:32 handler_proxy.py
-rwxrw-rw- 1 splunk splunk  441 jun 28 11:32 handler.py
-rwxrw-rw- 1 splunk splunk 3,5K jun 28 11:32 handler_role_utils.py
-rwxrw-rw- 1 splunk splunk 5,4K jun 28 11:32 house_keeping.py
-rwxrw-rw- 1 splunk splunk  273 jun 28 11:32 __init__.py
-rwxrw-rw- 1 splunk splunk 3,8K jun 28 11:32 notificationqueue_consumer.py
drwxrwxrwx 2 splunk splunk 4,0K jun 28 11:32 persistconn
-rwxrw-rw- 1 splunk splunk 3,6K jun 28 11:32 tag_keeping.py

So maybe a chmod u+x may solve your problems?
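For reference, the chmod u+x suggestion can be illustrated in Python; this is a self-contained sketch on a throwaway temp file, not the app's actual scripts:

```python
import os
import stat
import tempfile

# Create a throwaway file to stand in for e.g. house_keeping.py.
fd, path = tempfile.mkstemp(suffix=".py")
os.close(fd)
os.chmod(path, 0o644)                  # rw-r--r--: no execute bit set

mode = os.stat(path).st_mode
os.chmod(path, mode | stat.S_IXUSR)    # equivalent of `chmod u+x`

print(os.access(path, os.X_OK))        # True: owner can now execute it
os.remove(path)
```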
I see. So for Splunk, 0.3 - 0.1 equals 0.19999999999 instead of 0.2. Do you know how I can work around this in my example?
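For context, this is standard binary floating-point behaviour rather than anything Splunk-specific: 0.1 and 0.3 have no exact binary representation, so the subtraction carries a tiny error. A Python sketch of the usual workarounds (SPL's eval round() behaves like the round() below):

```python
from decimal import Decimal

diff = 0.3 - 0.1
print(diff)                              # 0.19999999999999998

# Workaround 1: round to the precision you actually need.
print(round(diff, 2))                    # 0.2

# Workaround 2: do the arithmetic in decimal from the start.
print(Decimal("0.3") - Decimal("0.1"))   # 0.2
```

In SPL the analogous fix would be something like `| eval x = round(0.3 - 0.1, 2)`.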
OK, assuming you are running Splunk as root, that seems to check out. Splunk's Python cannot itself import pydantic, though it seems to be bundled with the app, so maybe that is just how the command is executed (which is pretty bad, but still). I see a bunch of "from pathlib import Path", so I'm assuming this is to import from locally bundled versions. Not sure what the issue is then; I'll see if I can recreate the error on a local box.
I think it is working but unfortunately I get: The URL you clicked cannot open as it is invalid and might contain malicious code. Change the URL to a relative or absolute URL, such as /app/search/datasets or https://www.splunk.com.
Anyone? Computer\HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\USBSTOR. I want to monitor when the Start value changes from a 3 or 4, and what machine, user, etc. made the change. I am working on the Splunk_TA_Windows: I go to the inputs.conf file and put in the input, but I am not sure if that is the correct place, or why it doesn't report the change in search in Splunk. Windows does create an event code for the change, so I think it is a matter of getting the input worded correctly or something.
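For what it's worth, registry monitoring on a Windows host is typically configured with a WinRegMon stanza in inputs.conf rather than an event-log input. A hedged sketch (the stanza name and attribute values are illustrative; verify the exact regex syntax against the WinRegMon documentation for your Splunk version):

```ini
# Hypothetical local/inputs.conf on the monitored Windows host.
[WinRegMon://usbstor_start]
hive = \\REGISTRY\\MACHINE\\SYSTEM\\CurrentControlSet\\Services\\USBSTOR\\.*
proc = .*
type = set|create|delete|rename
disabled = 0
```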
We have 2 servers; this is how it looks on the not-yet-upgraded server.

-rw-r--r-- 1 root root 5743 Jul 30 14:26 alertqueue_consumer.py
-rw-r--r-- 1 root root 10515 Jul 30 14:26 command_ameenrich.py
-rw-r--r-- 1 root root 4633 Jul 30 14:26 command_ameevents.py
-rw-r--r-- 1 root root 11458 Jul 30 14:26 create_alert.py
-rw-r--r-- 1 root root 1207 Jul 30 14:26 _env.py
-rw-r--r-- 1 root root 6027 Jul 30 14:26 handler_license.py
-rw-r--r-- 1 root root 4405 Jul 30 14:26 handler_logging.py
-rw-r--r-- 1 root root 3192 Jul 30 14:26 handler_minit.py
-rw-r--r-- 1 root root 3384 Jul 30 14:26 handler_proxy.py
-rw-r--r-- 1 root root 441 Jul 30 14:26 handler.py
-rw-r--r-- 1 root root 3578 Jul 30 14:26 handler_role_utils.py
-rw-r--r-- 1 root root 5497 Jul 30 14:26 house_keeping.py
-rw-r--r-- 1 root root 273 Jul 30 14:26 __init__.py
-rw-r--r-- 1 root root 3832 Jul 30 14:26 notificationqueue_consumer.py
drwxr-xr-x 2 root root 4096 Jul 30 14:26 persistconn
drwx--x--- 2 root root 4096 Jul 30 14:26 __pycache__
-rw-r--r-- 1 root root 3615 Jul 30 14:26 tag_keeping.py
I have a saved search which is scheduled for every 17 minutes with a time range of the last 7 days. Instead of getting results for the last 7 days of data, can I get only the last 15 or 20 minutes of data, without changing the time range in the saved search from last 7 days?
Hm, so it seems that pydantic (or at least pydantic_core) is not available for import. Not sure that house_keeping.py suffers the same problems as tag_keeping.py, but it is a problem for sure. Did the file permissions look OK?
Hi @kp_pl,
as @ITWhisperer said, you must include _time in the stats command, so you can use it in timechart:

(index="o_a_p") OR (index="o_d_p")
| eval ca = substr(c_u,2,length(c_u)) ``` transformation of oap index ```
| eval e_d = mvindex(split(ed, ","), 0) ``` transformation of odp index ```
| eval cd = mvindex(split(Rr, "/"), 0)
| eval AAA=c_e.":".ca
| eval DDD=e_d.":".cd
| eval join=if(index="o_a_p",AAA,DDD) ``` join field ```
| stats dc(index) AS count_index values(Op) AS OP values(t_t) AS TT earliest(_time) AS _time BY join
| where count_index=2
| timechart count

Ciao.
Giuseppe
This solution still works for most cases; however, if you need an alert where the number of events is 0, then this solution will not work, at least not "as is". A search for, let's say, a problem with log shipping should alert on 0 returned events, and then there is no way to hitch a hidden field onto anything, as there are no results. So to fill "all my needs" here, I would have to come up with something completely different: it would either need to become a feature in Splunk, or I would have to sort out and manage a number of group recipients in Exchange.
root@dc-splunk01://opt/splunk/etc/apps/alert_manager_enterprise/bin# /opt/splunk/bin/splunk cmd python3 house_keeping.py --scheme
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/alert_manager_enterprise/bin/house_keeping.py", line 21, in <module>
    from datapunctum.factory_logger import Logger
  File "/opt/splunk/etc/apps/alert_manager_enterprise/bin/../lib/datapunctum/factory_logger.py", line 10, in <module>
    from pydantic import BaseModel
  File "/opt/splunk/etc/apps/alert_manager_enterprise/bin/../lib/pydantic/__init__.py", line 372, in __getattr__
    module = import_module(module_name, package=package)
  File "/opt/splunk/lib/python3.9/importlib/__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "/opt/splunk/etc/apps/alert_manager_enterprise/bin/../lib/pydantic/main.py", line 11, in <module>
    import pydantic_core
  File "/opt/splunk/etc/apps/alert_manager_enterprise/bin/../lib/pydantic_core/__init__.py", line 6, in <module>
    from ._pydantic_core import (
ModuleNotFoundError: No module named 'pydantic_core._pydantic_core'

New install of the app gave the same result.
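A plausible reading of that traceback (an assumption, not confirmed from the traceback alone): pydantic_core._pydantic_core is a compiled C extension, and compiled extensions are built for one specific CPython ABI, so a copy vendored for Python 3.7 is simply not importable under 3.9. A small Python illustration of the version tag the interpreter looks for:

```python
import sys
import sysconfig

# Compiled extension modules carry an ABI tag in their filename, e.g.
# _pydantic_core.cpython-37m-x86_64-linux-gnu.so for CPython 3.7.
# An interpreter only imports extensions matching its own tag, so a
# lib/ directory vendored for 3.7 yields ModuleNotFoundError under 3.9.
ext_suffix = sysconfig.get_config_var("EXT_SUFFIX")
this_version = f"{sys.version_info.major}{sys.version_info.minor}"

print(ext_suffix)                  # e.g. .cpython-39-x86_64-linux-gnu.so
print(this_version in ext_suffix)  # the suffix embeds the running version
```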
While every sourcetype should have props defined, this may be beyond what transforms can do.  Timestamp extraction happens before transforms are applied, which is why I suggested an input script do the work.
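A scripted input of that sort might look something like this (a minimal sketch; the timestamp format is an assumption, and a real input would read the actual source rather than stdin):

```python
#!/usr/bin/env python3
# Minimal scripted-input sketch: emit each raw line prefixed with a
# normalized ISO-8601 timestamp, so Splunk's timestamp extraction
# (which runs before transforms) sees a trivial, unambiguous format.
import sys
from datetime import datetime, timezone

def emit(line, now=None):
    """Return the raw line prefixed with an ISO-8601 UTC timestamp."""
    now = now or datetime.now(timezone.utc)
    return f"{now.strftime('%Y-%m-%dT%H:%M:%S%z')} {line.rstrip()}"

if __name__ == "__main__" and not sys.stdin.isatty():
    for raw in sys.stdin:  # a real input might tail the original log instead
        print(emit(raw))
```

In inputs.conf this would be wired up as a [script://...] stanza pointing at wherever the script lives.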
That is a really great explanation, thank you! In other words, there would be little to no gain from using my suggested base search, as it would retain a lot of excess data from entire events. What I could, in theory, do would be to run a base search keeping only the 3-4 fields all subsequent panels would use. This would however put a strain on the SH cluster. So for any real gain here, I would need to rewrite all panels that could use an effective base search to work with something like calculated daily averages, and process these for each panel.

However, and I'm sorry for being a stickler, this does not really answer the question regarding using a base search with subsearches. I can run the base search and have a panel use that base with a query. But can you reference a base search within the query using the base search? The example below is pretty crappy, but hopefully a bit clearer than in my initial post:

<search id="base_search">
  <query>
    index="_internal" | stats count by <something>
  </query>
</search>
...
<search base="base_search">
  <query>
    search <field>=<value>
    | join type=outer _time
      [ <search referencing the same base_search>
        | stats count something ]
  </query>
</search>

In a search which uses a base search (an effective one), can I reference the same (or another) base search inside a "subsearch"/"nested search"?
You need to include _time in your by clause of the stats, perhaps doing a bin command on it first to put it into buckets. It might be more profitable if you describe what it is you are trying to achieve (in non-Splunk terms), and provide some sample (anonymised) representative events, and an example of your expected output.
Hi, I have a group field "bin" and a query that takes index=myindex response_code!=00. I'm not sure how to create an alert to warn when there is an x percent increase from day to day in any of the bins. I tried something along these lines, but could not get the prev_error_count to populate:

index=myindex sourcetype=trans response_code!=00
| bin _time span=1d as day
| stats count as error_count by day, bin
| streamstats current=f window=2 last(error_count) as prev_error_count by bin
| eval perc_increase = round((error_count / prev_error_count) * 100, 2)
| table perc_increase
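The day-over-day logic being attempted (pair each day's count with the previous day's count per bin, then compute the change) can be sketched outside SPL. A Python illustration with made-up counts; note it computes the actual percentage increase, (current - previous) / previous, rather than the plain ratio:

```python
# Day-over-day percentage increase per "bin", mirroring the
# streamstats current=f window=2 pattern (illustrative data only).
daily_counts = {
    "bin_a": [100, 150, 120],  # error_count per consecutive day
    "bin_b": [40, 40, 80],
}

def pct_increase(counts):
    """Pair each day with the previous day and compute the % change."""
    out = []
    for prev, cur in zip(counts, counts[1:]):
        out.append(round((cur - prev) / prev * 100, 2))
    return out

for name, counts in daily_counts.items():
    print(name, pct_increase(counts))
# bin_a [50.0, -20.0]
# bin_b [0.0, 100.0]
```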
I have a set of data which comes from two indexes. It looks more or less like below:

(index="o_a_p") OR (index="o_d_p")
| eval ca = substr(c_u,2,length(c_u)) ``` transformation of oap index ```
| eval e_d = mvindex(split(ed, ","), 0) ``` transformation of odp index ```
| eval cd = mvindex(split(Rr, "/"), 0)
| eval AAA=c_e.":".ca
| eval DDD=e_d.":".cd
| eval join=if(index="o_a_p",AAA,DDD) ``` join field ```
| stats dc(index) AS count_index values(Op) as OP values(t_t) as TT BY join
| where count_index=2

So now, how do I create a timechart based on fields which come from stats? There is no _time field there.
K.
Have just done a fresh install of Splunk 9.3.0 with Security Essentials. I'm getting the following message:

Error in 'sseidenrichment' command: (AttributeError) module 'time' has no attribute 'clock'

Can you help?
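That AttributeError is most likely a Python version issue rather than a broken install: time.clock was deprecated in Python 3.3 and removed in 3.8, so any code still calling it fails under Splunk 9.3's bundled Python 3.9. A minimal illustration of the usual replacement:

```python
import time

# time.clock was removed in Python 3.8; on 3.8+ the attribute is gone
# entirely, which is exactly the AttributeError seen above.
print(hasattr(time, "clock"))  # False on Python 3.8+

# The usual drop-in replacement for timing code:
start = time.perf_counter()
elapsed = time.perf_counter() - start
print(elapsed >= 0)            # True: perf_counter is monotonic
```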