All Posts

Please confirm the "bin" field is present in the index; it is not created by the bin command. If the "bin" field is null or not present, the stats command will return no results and the streamstats command will have nothing to evaluate.
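As a quick check, a minimal sketch (the index name and time range are placeholders) that shows whether events actually carry the field:

    index=your_index earliest=-24h
    | eval has_bin=if(isnotnull(bin), "present", "missing")
    | stats count BY has_bin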
You can use the savedsearch command to run a saved search inside your query. If you use the time picker to specify a time range other than All Time, the saved search will use your selected time range; otherwise, the time range stored with the saved search will be used.
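For example (a sketch; "my_saved_search" is a placeholder name), running this with the time picker set to Last 20 minutes evaluates the saved search over that 20-minute window rather than the range stored with it:

    | savedsearch my_saved_search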
@nivets - Your question contains a paradox. Do you want to change the time range or not?
Sorry I could not be of more assistance. If file permissions are not the culprit, I still think there might be some issue with the Python version handling. Just can't figure out what that would be, though, sorry.
I'm running RHEL 8 on the latest Splunk version. We've been down the long road with Splunk Support and have confirmed exhaustively that systemd is hanging on processes that aren't there. Until systemd times out (360 seconds by default), it won't actually return control to you, and when Splunk does report as "stopped", it hasn't actually stopped; the command just timed out (you can watch this with journalctl -f --unit <Splunk service file>). We're working with our Linux teams and likely Red Hat Support to figure out why.
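If you just want the stop command to give up faster while you investigate, a minimal sketch (assuming the unit is named Splunkd.service; adjust the path and value to taste) is a systemd drop-in that shortens the stop timeout:

    # /etc/systemd/system/Splunkd.service.d/override.conf  (assumed unit name)
    [Service]
    TimeoutStopSec=60

Then run systemctl daemon-reload for it to take effect.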
We will continue the restore for now, as our security team is pushing me. I will check your information when I try another upgrade. I really appreciate your help! Thank you!
Well, my question concerned the general idea of using timechart when joining indexes; I wasn't ready to prepare a ready-to-analyze example. Anyway, your hint was valuable as well. In particular, using the bin command and buckets could be very useful in my queries. I am going to read more about it, and I will probably ask more questions about bin soon. Thank you @ITWhisperer
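For reference, a minimal sketch of bin with stats (the index and span are placeholders) that groups events into fixed time buckets before aggregating:

    index=your_index
    | bin _time span=5m
    | stats count BY _time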
Sorry, I jumped the gun a bit there... This seems to be a problem with the manual execution of house_keeping.py; there is a shebang there which should use the correct version: #!/usr/bin/env python3.7. So that is most likely just a problem with the manual execution of the script. Then we're back to checking the permissions to execute scripts. That may still be the issue.
Same problem. But on the other server it has the same permissions and it works without any problems.

-rwxr--r-- 1 root root  5743 Jul 30 14:26 alertqueue_consumer.py
-rwxr--r-- 1 root root 10515 Jul 30 14:26 command_ameenrich.py
-rwxr--r-- 1 root root  4633 Jul 30 14:26 command_ameevents.py
-rwxr--r-- 1 root root 11458 Jul 30 14:26 create_alert.py
-rwxr--r-- 1 root root  1207 Jul 30 14:26 _env.py
-rwxr--r-- 1 root root  6027 Jul 30 14:26 handler_license.py
-rwxr--r-- 1 root root  4405 Jul 30 14:26 handler_logging.py
-rwxr--r-- 1 root root  3192 Jul 30 14:26 handler_minit.py
-rwxr--r-- 1 root root  3384 Jul 30 14:26 handler_proxy.py
-rwxr--r-- 1 root root   441 Jul 30 14:26 handler.py
-rwxr--r-- 1 root root  3578 Jul 30 14:26 handler_role_utils.py
-rwxr--r-- 1 root root  5497 Jul 30 14:26 house_keeping.py
-rwxr--r-- 1 root root   273 Jul 30 14:26 __init__.py
-rwxr--r-- 1 root root  3832 Jul 30 14:26 notificationqueue_consumer.py
drwxr-xr-x 2 root root  4096 Jul 30 14:26 persistconn
drwx--x--- 2 root root  4096 Jul 30 14:26 __pycache__
-rwxr--r-- 1 root root  3615 Jul 30 14:26 tag_keeping.py

We will restore a backup as we want the system to be up again. I will try an upgrade in a few weeks. If anything changes, I will update the post. Thank you for your help!
So I think I figured it out.

/opt/splunk/bin/splunk cmd python3 house_keeping.py --scheme
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/alert_manager_enterprise/bin/house_keeping.py", line 21, in <module>
    from datapunctum.factory_logger import Logger
  File "/opt/splunk/etc/apps/alert_manager_enterprise/bin/../lib/datapunctum/factory_logger.py", line 10, in <module>
    from pydantic import BaseModel
  File "/opt/splunk/etc/apps/alert_manager_enterprise/bin/../lib/pydantic/__init__.py", line 372, in __getattr__
    module = import_module(module_name, package=package)
  File "/opt/splunk/lib/python3.9/importlib/__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "/opt/splunk/etc/apps/alert_manager_enterprise/bin/../lib/pydantic/main.py", line 11, in <module>
    import pydantic_core
  File "/opt/splunk/etc/apps/alert_manager_enterprise/bin/../lib/pydantic_core/__init__.py", line 6, in <module>
    from ._pydantic_core import (
ModuleNotFoundError: No module named 'pydantic_core._pydantic_core'

However:

/opt/splunk/bin/splunk cmd python3.7 house_keeping.py --scheme
<scheme><title>AME house keeping</title><description>This task does everything, that has to be done in the background, like mapping users and checking the events time to live.</description><use_external_validation>true</use_external_validation><use_single_instance>false</use_single_instance><streaming_mode>xml</streaming_mode><endpoint><args><arg name="default"><description>Unused Default Argument</description><data_type>string</data_type><required_on_edit>false</required_on_edit><required_on_create>true</required_on_create></arg></args></endpoint></scheme>

And then there is this, from https://docs.splunk.com/Documentation/Splunk/9.3.0/ReleaseNotes/MeetSplunk:

"In this release, the default Python interpreter is set to Python version 3.9. The Python.Version setting has been updated so that the parameter is set to the value force_python3; this forces all Python extension points to use Python 3.9, including overriding any application-specified settings. This is designed to be secure-by-default for new customers. If the value is set to python3.9, the default interpreter is set to Python 3.9 but applications can choose to use a different value. Python 3.7 continues to be available in the build for customers' private apps."

So the problem is a change in the Python version handling when upgrading to Splunk 9.3.0. Either you can try to figure out how to force the app to revert to Python 3.7, or you can get hold of the developers and inform them that the app may no longer work with 9.3.0. Sorry, not fun to have to be the bearer of bad news.
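If you want to experiment with reverting the interpreter behaviour, a hedged sketch based only on that release note (the exact setting location and accepted values should be verified against the 9.3.0 documentation) would be to relax the global setting so apps are no longer forced onto 3.9:

    # $SPLUNK_HOME/etc/system/local/server.conf
    # Assumption: the release note's Python.Version setting lives here and currently reads force_python3.
    [general]
    python.version = python3.9

Per the quoted note, with python3.9 the default interpreter stays 3.9 but applications can choose a different value, so the app (or its developers) could then pin its own scripts back to Python 3.7.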
Hold on a second, there is no permission to execute any of the python scripts, right? Might be something I'm missing but I suspect that might be a problem at least:

-rwxrw-rw- 1 splunk splunk 5,7K jun 28 11:32 alertqueue_consumer.py
-rwxrw-rw- 1 splunk splunk 11K jun 28 11:32 command_ameenrich.py
-rwxrw-rw- 1 splunk splunk 4,6K jun 28 11:32 command_ameevents.py
-rwxrw-rw- 1 splunk splunk 12K jun 28 11:32 create_alert.py
-rwxrw-rw- 1 splunk splunk 1,2K jun 28 11:32 _env.py
-rwxrw-rw- 1 splunk splunk 5,9K jun 28 11:32 handler_license.py
-rwxrw-rw- 1 splunk splunk 4,4K jun 28 11:32 handler_logging.py
-rwxrw-rw- 1 splunk splunk 3,2K jun 28 11:32 handler_minit.py
-rwxrw-rw- 1 splunk splunk 3,4K jun 28 11:32 handler_proxy.py
-rwxrw-rw- 1 splunk splunk 441 jun 28 11:32 handler.py
-rwxrw-rw- 1 splunk splunk 3,5K jun 28 11:32 handler_role_utils.py
-rwxrw-rw- 1 splunk splunk 5,4K jun 28 11:32 house_keeping.py
-rwxrw-rw- 1 splunk splunk 273 jun 28 11:32 __init__.py
-rwxrw-rw- 1 splunk splunk 3,8K jun 28 11:32 notificationqueue_consumer.py
drwxrwxrwx 2 splunk splunk 4,0K jun 28 11:32 persistconn
-rwxrw-rw- 1 splunk splunk 3,6K jun 28 11:32 tag_keeping.py

So maybe a chmod u+x may solve your problems?
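A minimal sketch of that fix (the path is taken from the traceback above; adjust it to your install):

    chmod u+x /opt/splunk/etc/apps/alert_manager_enterprise/bin/*.py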
I see. So for Splunk, 0.3 - 0.1 equals 0.19999999999 instead of 0.2. Do you know how I can work around this in my example?
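One common workaround (a sketch; the field names are placeholders) is to round the result to the precision you actually need:

    | eval diff = round(field_a - field_b, 1)

For instance, round(0.3 - 0.1, 1) gives 0.2.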
OK, assuming you are running Splunk as root, that seems to check out. Splunk's Python cannot itself import pydantic, though it seems to be bundled with the app, so maybe that is just how the command is executed (which is pretty bad, but still). I see a bunch of "from pathlib import Path", so I'm assuming this is to import from locally bundled versions. Not sure what the issue is then; I'll see if I can recreate the error on a local box.
I think it is working, but unfortunately I get: "The URL you clicked cannot open as it is invalid and might contain malicious code. Change the URL to a relative or absolute URL, such as /app/search/datasets or https://www.splunk.com."
Anyone? Computer\HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\USBSTOR. I want to monitor when the Start value changes from a 3 or 4, and which machine, user, etc. made the change. I am working in Splunk_TA_Windows: I go to the inputs.conf file and put in the input, but I am not sure whether that is the correct place or why it doesn't report the change in search in Splunk. Windows does create an event code for the change, so I think it is a matter of getting the input worded correctly or something.
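For reference, a hedged sketch of what such a registry-monitoring stanza can look like (the stanza name and regexes are assumptions; check inputs.conf.spec in your version of Splunk_TA_Windows for the exact attributes, and note the input has to run on the monitored Windows host):

    [WinRegMon://usbstor_start]
    hive = \\REGISTRY\\MACHINE\\SYSTEM\\CurrentControlSet\\Services\\USBSTOR\\.*
    proc = .*
    type = set|create|delete|rename
    disabled = 0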
We have two servers; the permissions look the same on the not-yet-upgraded server.

-rw-r--r-- 1 root root 5743 Jul 30 14:26 alertqueue_consumer.py
-rw-r--r-- 1 root root 10515 Jul 30 14:26 command_ameenrich.py
-rw-r--r-- 1 root root 4633 Jul 30 14:26 command_ameevents.py
-rw-r--r-- 1 root root 11458 Jul 30 14:26 create_alert.py
-rw-r--r-- 1 root root 1207 Jul 30 14:26 _env.py
-rw-r--r-- 1 root root 6027 Jul 30 14:26 handler_license.py
-rw-r--r-- 1 root root 4405 Jul 30 14:26 handler_logging.py
-rw-r--r-- 1 root root 3192 Jul 30 14:26 handler_minit.py
-rw-r--r-- 1 root root 3384 Jul 30 14:26 handler_proxy.py
-rw-r--r-- 1 root root 441 Jul 30 14:26 handler.py
-rw-r--r-- 1 root root 3578 Jul 30 14:26 handler_role_utils.py
-rw-r--r-- 1 root root 5497 Jul 30 14:26 house_keeping.py
-rw-r--r-- 1 root root 273 Jul 30 14:26 __init__.py
-rw-r--r-- 1 root root 3832 Jul 30 14:26 notificationqueue_consumer.py
drwxr-xr-x 2 root root 4096 Jul 30 14:26 persistconn
drwx--x--- 2 root root 4096 Jul 30 14:26 __pycache__
-rw-r--r-- 1 root root 3615 Jul 30 14:26 tag_keeping.py
I have a saved search which is scheduled to run every 17 minutes with a time range of the last 7 days. Instead of getting results over the last 7 days of data, can I get only the last 15 or 20 minutes of data, without changing the time range in the saved search from the last 7 days?
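Building on the answers above, a sketch of the filtering variant (assuming the saved search is named my_saved_search, a placeholder, and its results keep _time) that trims the stored 7-day range down to the last 20 minutes:

    | savedsearch my_saved_search
    | where _time >= relative_time(now(), "-20m")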
Hm, so it seems that pydantic (or at least pydantic_core) is not available for import. Not sure that house_keeping.py suffers the same problems as tag_keeping.py, but it is a problem for sure. Did the file permissions look OK?
Hi @kp_pl, as @ITWhisperer said, you must include _time in the stats command so you can use it in timechart:

    (index="o_a_p") OR (index="o_d_p")
    | eval ca = substr(c_u, 2, length(c_u)) ``` transformation of oap index ```
    | eval e_d = mvindex(split(ed, ","), 0) ``` transformation of odp index ```
    | eval cd = mvindex(split(Rr, "/"), 0)
    | eval AAA = c_e.":".ca
    | eval DDD = e_d.":".cd
    | eval join = if(index="o_a_p", AAA, DDD) ``` join field ```
    | stats dc(index) AS count_index values(Op) AS OP values(t_t) AS TT earliest(_time) AS _time BY join
    | where count_index=2
    | timechart count

Ciao.
Giuseppe
This solution still works for most cases; however, if you need an alert where the number of events is 0, then this solution will not work, not "as is" at least. A search for, let's say, a problem with log shipping should alert on 0 returned events, and then there is no way to hitch a hidden field onto anything because there are no results. So to fill "all my needs" here, I would have to come up with something completely different: it would either need to become a feature in Splunk, or I would have to sort out and manage a number of group recipients in Exchange.
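One pattern that can at least produce a row to hang a field on (a hedged sketch, not part of the original solution; the index, sourcetype, and address are placeholders) is to force a summary result even when nothing matched:

    index=your_index sourcetype=logshipping
    | stats count
    | where count=0
    | eval notify_group="logshipping-oncall@example.com"

stats count returns a single row with count=0 when there are no matching events, so the alert still has a result to attach the hidden recipient field to.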