root@dc-splunk01://opt/splunk/etc/apps/alert_manager_enterprise/bin# /opt/splunk/bin/splunk cmd python3 house_keeping.py --scheme

Traceback (most recent call last):
  File "/opt/splunk/etc/apps/alert_manager_enterprise/bin/house_keeping.py", line 21, in <module>
    from datapunctum.factory_logger import Logger
  File "/opt/splunk/etc/apps/alert_manager_enterprise/bin/../lib/datapunctum/factory_logger.py", line 10, in <module>
    from pydantic import BaseModel
  File "/opt/splunk/etc/apps/alert_manager_enterprise/bin/../lib/pydantic/__init__.py", line 372, in __getattr__
    module = import_module(module_name, package=package)
  File "/opt/splunk/lib/python3.9/importlib/__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "/opt/splunk/etc/apps/alert_manager_enterprise/bin/../lib/pydantic/main.py", line 11, in <module>
    import pydantic_core
  File "/opt/splunk/etc/apps/alert_manager_enterprise/bin/../lib/pydantic_core/__init__.py", line 6, in <module>
    from ._pydantic_core import (
ModuleNotFoundError: No module named 'pydantic_core._pydantic_core'

A new install of the app gave the same result.
While every sourcetype should have props defined, this may be beyond what transforms can do.  Timestamp extraction happens before transforms are applied, which is why I suggested an input script do the work.
That is a really great explanation, thank you! In other words, there would be little to no gain from using my suggested base search, as it would retain a lot of excess data from entire events. What I could, in theory, do is run a base search keeping only the 3-4 fields all subsequent panels would use. This would however put a strain on the SH cluster. So for any real gain here, I would need to rewrite all panels that could use an effective base search to work with something like calculated daily averages, and process those for each panel. However, and I'm sorry for being a stickler, this does not really answer the question regarding using a base search with subsearches. I can run the base search and have a panel use that base with a query. But can you reference a base search within the query that uses the base search? The example below is pretty crappy but hopefully a bit clearer than in my initial post:

<search id="base_search">
  <query>
    index="_internal" | stats count by <something>
  </query>
</search>
...
...
<search base="base_search">
  <query>
    search <field>=<value>
    | join type=outer _time
      [ <search referencing the same base_search>
        | stats count something ]
  </query>
</search>

In a search which uses a base search (an effective one), can I reference the same (or another) base search inside a "subsearch"/"nested search"?
You need to include _time in your by clause of the stats, perhaps doing a bin command on it first to put it into buckets. It might be more profitable if you describe what it is you are trying to achieve (in non-Splunk terms), and provide some sample (anonymised) representative events, and an example of your expected output.
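For illustration, here is a minimal run-anywhere sketch of that pattern (the 1h span and the sourcetype grouping are just placeholders):

index=_internal
| bin _time span=1h
| stats count by _time, sourcetype

With _time in the by clause, each row becomes a time bucket per group, which charting commands and alerts can then work with.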
Hi, I have a group field "bin" and a query that takes index=myindex response_code!=00. I'm not sure how to create an alert to warn when there is an x percentage increase from day to day on any of the bins. I tried something along these lines, but could not get the prev_error_count to populate:

index=myindex sourcetype=trans response_code!=00
| bin _time span=1d as day
| stats count as error_count by day, bin
| streamstats current=f window=2 last(error_count) as prev_error_count by bin
| eval perc_increase = round((error_count / prev_error_count)*100, 2)
| table perc_increase
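For reference, a hedged sketch of one way this kind of day-over-day comparison is often structured (field names kept from the post; the sort step and the 10% threshold are assumptions, since streamstats compares rows in whatever order they arrive):

index=myindex sourcetype=trans response_code!=00
| bin _time span=1d
| stats count as error_count by _time, bin
| sort 0 bin _time
| streamstats current=f window=1 last(error_count) as prev_error_count by bin
| eval perc_increase = round(((error_count - prev_error_count) / prev_error_count) * 100, 2)
| where perc_increase > 10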
I have a set of data which comes from two indexes. It looks more or less like this:

(index="o_a_p") OR (index="o_d_p")
``` a ```
| eval ca = substr(c_u, 2, length(c_u))  ``` transformation of oap index ```
``` d ```
| eval e_d = mvindex(split(ed, ","), 0)  ``` transformation of odp index ```
| eval cd = mvindex(split(Rr, "/"), 0)
| eval AAA = c_e.":".ca
| eval DDD = e_d.":".cd
| eval join = if(index="o_a_p", AAA, DDD)  ``` join field ```
| stats dc(index) AS count_index values(Op) as OP values(t_t) as TT BY join
| where count_index=2

So now, how do I create a timechart based on the fields that come from stats? There is no _time field there. K.
Have just done a fresh install of Splunk 9.3.0 with Security Essentials. I'm getting the following message:

Error in 'sseidenrichment' command: (AttributeError) module 'time' has no attribute 'clock'

Can you help?
There seems to be a Python script, "alert_manager_enterprise/bin/tag_keeping.py", which fails to run/execute, causing the error. The script offers a way to test output through:

/opt/splunk/bin/splunk cmd python3 house_keeping.py --scheme

I don't use this app, hence this is more "in general" and "best guess". Initially I would check that the script has the correct permissions set to be run. Then (assuming it is safe to run the "test command" above) see if you can manually execute the command. It's possible that the update caused some problem with permissions for script execution, or (I have not checked) there was an update to the Python version which is now incompatible with the script bundled with the alert manager app (probably less likely though).
As a rule of thumb, the base search should be a transforming search (i.e. containing a stats or timechart command). You can get away with a non-transforming search, but you should explicitly list the fields which you want to retain from your base search for later use by post-process searches. And you definitely don't want too much data returned from the base search (a SH will have to keep this result set around for post-processing, after all). So it kinda depends on your whole picture, because it's not always about the common denominator. For example, if you have one search

index=a | stats count by fieldb

and another one

index=a | stats count by fieldc

the best base search would not be

index=a | fields fieldb fieldc

but rather

index=a | stats count by fieldb fieldc

and your post-process searches would just do

| stats sum(count) by fieldb

and

| stats sum(count) by fieldc

respectively.
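To see why the two-field stats loses nothing, here is a hedged run-anywhere sketch (the data and field names are made up) showing a base-search result being re-aggregated the way a post-process search would:

| makeresults count=100
| eval fieldb = mvindex(split("x,y,z", ","), random() % 3)
| eval fieldc = mvindex(split("p,q", ","), random() % 2)
| stats count by fieldb fieldc
| stats sum(count) as count by fieldb

The last line stands in for the post-process search; its output matches what | stats count by fieldb would have produced directly on the raw rows.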
OK. Where are you exporting this from? Splunk should enclose the values in double quotes (and use double double quotes for the double quotes within a field value) when exporting search results. So where are you exporting from/to?
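For reference, a hypothetical CSV row illustrating that convention (the embedded double quotes are escaped by doubling them inside a quoted field):

host,message
"web01","user typed ""hello"" at login"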
Hi all, today I've updated Splunk from version 9.2.2 to 9.3.0. All seems to be good except Alert Manager Enterprise 3.0.8, which is not working anymore. I'm kinda new to Splunk, so I don't know where to start. The error we got is:

Unable to initialize modular input "tag_keeping" defined in the app "alert_manager_enterprise": Introspecting scheme=tag_keeping: script running failed (PID 4085525 exited with code 1)

Please help me. Kind regards, Glenn
Because... it's Splunk math (I suppose it has something to do with float handling underneath). See this run-anywhere example:

| makeresults count=10
| streamstats count
| map search="|makeresults count=$count$| streamstats count as count2 | eval count=$count$"
| eval count=count/10, count2=count2/10
| eval diff=count-count2
| table count count2 diff
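A common workaround for such comparisons, sketched here against the run-anywhere example from the related question (the 0.0001 tolerance is an arbitrary choice), is to test with a small epsilon instead of exact equality:

| makeresults
| eval amount = 10.6
| eval fraction = amount - floor(amount)
| eval compare = if(abs(fraction - 0.6) < 0.0001, "T", "F")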
Hello, I have an app and this is its default XML:

<nav search_view="search" color="#65A637">
  <view name="search" default="true"/>
  <view name="securehr"/>
  <view name="secure_group_members"/>
  <view name="changes_on_defined_secure_groups"/>
  <view name="group_membership"/>
  <view name="group_membership_v2"/>
  <collection label="Reports">
    <view name="SecureFolder-Report for APT">Report for APT</view>
    <view name="SecureFolder-Report for Betriebsrat" default="true">Report for Betriebsrat</view>
    <view name="SecureFolder-Report for CMA" default="true">Report for CMA</view>
    <view name="SecureFolder-Report for HR" default="true">Report for HR</view>
    <view name="SecureFolder-Report for IT" default="true">Report for IT</view>
    <view name="SecureFolder-Report for QUE" default="true">Report for QUE</view>
    <view name="SecureFolder-Report for Vorstand" default="true">Report for Vorstand</view>
  </collection>
</nav>

From this XML it should show 7 sections on the navigation panel, but it does not show the Reports section. Can anyone help?
Hi,

This thing is getting me crazy. I am running Splunk 9.2.1 and I have the following table:

amount  compare  frac_type  fraction  integer
0.41    F        Number     0.41      0
4.18    F        Number     0.18      4
0.26    F        Number     0.26      0
0.34    F        Number     0.34      0
10.60   F        Number     0.60      10
0.11    F        Number     0.11      0
2.00    F        Number     0.00      2
3.49    F        Number     0.49      3
10.58   F        Number     0.58      10
2.00    F        Number     0.00      2
1.02    F        Number     0.02      1
15.43   F        Number     0.43      15
1.17    F        Number     0.17      1

And these are the evals I used to calculate the fields:

| eval integer = floor(amount)
| eval fraction = amount - floor(amount)
| eval frac_type = typeof(fraction)
| eval compare = if(fraction = 0.6, "T", "F")

Now, I really can't understand how the "compare" field is always false. I was expecting it to output TRUE on row 5 with amount = 10.60, which means fraction = 0.6, but it does not. What am I doing wrong here? Why does "compare" evaluate to FALSE on row 5? I tried to change 0.6 to 0.60 (you never know), but no luck.

If you want, you can try this run-anywhere search, which gives me the same result:

| makeresults
| eval amount = 10.6
| eval integer = floor(amount)
| eval fraction = amount - floor(amount)
| eval frac_type = typeof(fraction)
| eval compare = if(fraction = 0.6, "T", "F")

Can you help me?

Thank you in advance, Tommaso
OK. Let me quote from the OpenSSL vulnerability description: "Impact summary: A buffer overread can have a range of potential consequences such as unexpected application behaviour or a crash. In particular this issue could result in up to 255 bytes of arbitrary private data from memory being sent to the peer leading to a loss of confidentiality. However, only applications that directly call the SSL_select_next_proto function with a 0 length list of supported client protocols are affected by this issue. This would normally never be a valid scenario and is typically not under attacker control but may occur by accident in the case of a configuration or programming error in the calling application." Read the last sentence. Over and over again. If unsure, verify whether you can actually exploit this potential vulnerability. Otherwise, stop worrying about it.
Splunk 9.3.0 has the fix.
Hi, based on a multiselect reading from

index="pm-azlm_internal_prod_events" sourcetype="azlm"

I define a token with the name

opc_t

This token can be used without any problems to filter further down in the dashboard data read from the same index (top 3 lines in the code below):

<query>index="pm-azlm_internal_prod_events" sourcetype="azlm" $opc_t$ $framenum$
| strcat opc "_" frame_num UNIQUE_ID
| dedup _time UNIQUE_ID
| append [ search index="pm-azlm_internal_dev_events" sourcetype="azlm-dev" ocp=$opc_t|s$
  | strcat ocp "-j_" fr as UNIQUE_ID
  | dedup UNIQUE_ID]
| timechart span=12h aligntime=@d limit=0 count by UNIQUE_ID
| sort by _time DESC
</query>

BUT, and here's my problem: using the same token on a different index (used in the append above) will provide no results at all. One (nasty) detail: the field names in the two indexes are slightly different. In

index="pm-azlm_internal_prod_events"

the field name I need to filter on is called

opc

In the second index

pm-azlm_internal_dev_events

the field name is

ocp

Dear experts: what do I need to change in the 2nd query to be able to use the same token for filtering?
I see this behaviour too, also for another process coming from the ITSI app:

/opt/splunk/etc/apps/SA-ITOA/bin/command_health_monitor.py

Besides killing processes or restarting Splunk as a workaround, do you know whether there are efforts to finally resolve this bug? Thanks, Jan
We were also flagged for this patch vulnerability by our Tenable scanning results on the compliance portal. We were under the impression that the Splunk Universal Forwarder 9.2.2 release would have this fix incorporated, but apparently that is not the case. Any idea when we could expect a fix, given that the due date for this exposure has already passed (July 28th, 2024)? Thanks, Vishwa