All Topics

How can I configure a cron expression so that an alert is sent every 2 hours, every day of the week? Many thanks!
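For that schedule, a standard five-field cron expression (minute, hour, day-of-month, month, day-of-week) like the following should work — this is a sketch, not taken from the original post:

```
0 */2 * * *
```

This fires at minute 0 of every second hour (00:00, 02:00, …, 22:00) on every day of the week, so no day restriction is needed.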
For fun, to learn Go, I made a utility that executes a query from a text file. It authenticates with credentials or an auth token. Results are simply written to a file as unmodified JSON. It's not an SDK, but it is serviceable code: https://github.com/georgestarcher/querysplunk
Hi, I need to use a result value as a filter. In the table below, the second value in the RecipientDomain column should be used as a search filter, producing this query: index=sec_office365_dlp sourcetype=sec_office365_dlp RecipientDomain=@yahoo.com | stats count by xxx     Help please..
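One common pattern for this (a sketch, reusing the index/sourcetype and field names from the question) is to let a subsearch return the RecipientDomain values; Splunk expands the subsearch result into `(RecipientDomain="..." OR ...)` filters for the outer search:

```
index=sec_office365_dlp sourcetype=sec_office365_dlp
    [ search index=sec_office365_dlp sourcetype=sec_office365_dlp
      | stats count by RecipientDomain
      | fields RecipientDomain ]
| stats count by RecipientDomain
```

To filter on only one specific row (e.g. the second value), add a `where` or `search` clause inside the subsearch to keep just the domain(s) you want.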
I have been using this query but haven't been able to remove the null rows, please help me: index=Window_wash | rex field=_raw "TIME.TAKEN.FOR.(?<Vendor>\w+)" | rex field=_raw (?<Time_MS>\d+).ms | timechart span=1m max(Time_MS) as Time_MS | outlier Time_MS
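As a sketch (assuming the second rex was meant to capture the number before a literal ".ms"): quoting the regex and filtering out events where the capture failed, both before and after the timechart, usually removes the null rows:

```
index=Window_wash
| rex field=_raw "TIME.TAKEN.FOR.(?<Vendor>\w+)"
| rex field=_raw "(?<Time_MS>\d+)\.ms"
| where isnotnull(Time_MS)
| timechart span=1m max(Time_MS) as Time_MS
| where isnotnull(Time_MS)
| outlier Time_MS
```

The second `where` drops the one-minute buckets in which no event matched, which timechart would otherwise emit as null rows.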
Hi, I have a Splunk event "Application -> start of the log". When I try to search for this log using the exact text, I do not get any results. I see that the greater-than symbol is causing the problem. When I split my search into "Application -" AND "start of the log", Splunk was able to find the event. So how do I escape the greater-than symbol in a Splunk search command? I tried &gt;, which didn't work. (I also want to use this search text in a "transaction startswith=" command, so I cannot use the AND condition.)
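One possible workaround (a sketch; `host` as the transaction field is an assumption): `transaction startswith=` accepts an `eval()` expression, which side-steps the tokenizer's handling of "->" by matching the raw text directly:

```
index=myindex "Application -" "start of the log"
| transaction host startswith=eval(like(_raw, "%Application -> start of the log%"))
```

The base search still uses the split terms to retrieve candidate events; the eval-based `startswith` then matches the exact string including the arrow.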
Hi, I collect the Cisco device logs with the "Cisco Networks Add-on for Splunk Enterprise" and installed the "Cisco Networks App for Splunk Enterprise" on my search heads. Now I want to know which user executed which commands at what time. In the App, under Audit, "Configuration change transactions" shows me the time and user, but the "cmd" (command) field just shows "!exec: enable" — only the main command that lets the user enter other commands. Is something wrong with my Splunk configs, or is it about the log level on the Cisco devices? Thanks in advance
Tried a lot of things but no joy — recommendations would be helpful: v9 agent upgrade automation in silent mode is not working. msiexec.exe /i splunkforwarder-9.0.2-17e00c557dc1-x64-release.msi DEPLOYMENT_SERVER="10.X.X.67:8089" LAUNCHSPLUNK=1 SPLUNKUSERNAME=Admin SPLUNKPASSWORD=S@v3MY@55!! AGREETOLICENSE=Yes /quiet   This appears to be an upgrade of Splunk. Splunk has detected an older version of Splunk installed on this machine. To finish upgrading to the new version, Splunk's installer will automatically update and alter your current configuration files. Deprecated configuration files will be renamed with a .deprecated extension. You can choose to preview the changes that will be made to your configuration files before proceeding with the migration and upgrade: If you want to migrate and upgrade without previewing the changes that will be made to your existing configuration files, choose 'y'. If you want to see what changes will be made before you proceed with the upgrade, choose 'n'. Perform migration and upgrade without previewing configuration changes? [y/n]
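One approach that is often suggested (a sketch, not verified against this environment; the install path is the default and may differ) is to keep the MSI itself fully silent, not launch Splunk from the installer, and answer the migration prompt at first start using the CLI flags instead:

```
msiexec.exe /i splunkforwarder-9.0.2-17e00c557dc1-x64-release.msi AGREETOLICENSE=Yes LAUNCHSPLUNK=0 /quiet
"C:\Program Files\SplunkUniversalForwarder\bin\splunk.exe" start --accept-license --answer-yes --no-prompt
```

The `--answer-yes --no-prompt` flags answer interactive questions (including the migration [y/n] prompt) automatically, which is what blocks a silent upgrade.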
Hi All, We are working in a Splunk Cloud environment. I want to deploy a custom TIME_PREFIX configuration for one of the log sources. We tested the configuration on a standalone Splunk box and it works fine; we deployed the same configuration on the Splunk Cloud indexers and the IDM node, but it's not working. Below is my props configuration. TIME_PREFIX = "time":\s*" MAX_TIMESTAMP_LOOKAHEAD = 128  Sample value: , "time": "2022-12-24T02:36:55.9183013Z", My question is: do I need to deploy this configuration on the search heads as well?   Thanks, Bhaskar
Hi, if anyone can help it'll be greatly appreciated and I'll pass on the karma. I keep getting this error when trying to add a search peer in the GUI: {"status": "ERROR", "msg": "Encountered the following error while trying to save: Peer with server name 192.168.118.128 conflicts with this server's name."}   When I add the search peer via the CLI, it doesn't appear in the Search Peers section of the GUI either. Any help is gladly appreciated.
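That error usually means the peer's serverName is identical to the search head's own (common when both instances were cloned or left at a default). A minimal sketch of the fix — the name idx1 is just an example — is to give the conflicting instance a unique name in server.conf and restart it:

```
# $SPLUNK_HOME/etc/system/local/server.conf on the conflicting instance
[general]
serverName = idx1
```

After a restart of that instance, re-adding the search peer should no longer report a name conflict.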
After I perform a search and click the "Format" Icon above the search results, there is an option for "Wrap Results". I check this and it does an attempt at wrapping results, but I still have to scroll incredibly far to the right to see each result. It seems to just pick some arbitrary length and wraps it there. I should note, in the events there are plenty of spaces that could be used as clean wrap points but the system just seems to not use them. Is there some sort of config setting that controls wrap behaviour that I could tweak? Here is the event: {"cf_app_id":"uuid","cf_app_name":"app-name","deployment":"cf","event_type":"LogMessage","info_splunk_index":"splunk-index","ip":"ipaddr","message_type":"OUT","msg":"2022-12-22 19:11:30.242 DEBUG [app-name,02c11142eee3be456dc30ddb1b234d5f,f20222ba46461ea9] 28 --- [nio-8080-exec-1] classname : {\"data\":{\"fields\":[{\"__typename\":\"name\",\"field\":\"value\",\"field2\":\"value2\",\"field3\":\"value 3\",\"field4\":\"value4\",\"field5\":\"value5\",\"field6\":\"value6\",\"field7\":\"value7\",\"field8\":null,\"field9\":\"value9\",\"field10\":null,\"field11\":111059.0,\"field12\":111059.0,\"field13\":null,\"field14\":\"value14\",\"field15\":\"2018-10-01\",\"field16\":null,\"field17\":false,\"field18\":{\"field19\":\"value19\",\"fieldl20\":\"value20\",\"field21\":2.6,\"field22\":\"2031-10-31\",\"field23\":\"2017-11-06\"},\"field24\":{\"field25\":\"\",\"field26\":\"\"},\"field27\":{\"field28\":{\"field29\":0.0,\"field30\":0.0,\"field31\":240.63,\"field32\":\"2022-12-31\",\"field33\":0.0,\"field34\":\"9999-10-31\"}},\"field35\":[{\"field36\":{\"field37\":\"value37\"}},{\"field38\":{\"field39\":\"value39\"}}],\"field40\":{\"__typename\":\"value40\",\"field41\":\"value41\",\"field42\":\"value 
42\",\"field43\":111059.0,\"field44\":\"2031-04-01\",\"field45\":65204.67,\"field46\":null,\"field47\":\"value47\",\"field48\":\"value48\",\"field49\":null,\"field50\":\"value50\",\"field51\":null,\"field52\":null}},{\"__typename\":\"value53\",\"field54\":\"value54\",\"field55\":\"value55\",\"field56\":\"value56\",\"field57\":\"value57\",\"field58\":\"value58\",\"field59\":\"9\",\"field60\":\"value60\",\"field61\":null,\"field62\":\"value62\",\"field63\":null,\"field64\":88841.0,\"field65\":38841.0,\"field66\":null,\"field67\":\"value67\",\"field68\":\"2018-10-01\",\"field69\":null,\"field70\":false,\"field71\":{\"field72\":\"value72\",\"field73\":\"value73\",\"field74\":2.6,\"field75\":\"2031-10-31\",\"field76\":\"2017-11-06\"},\"field77\":{\"field78\":\"\",\"field79\":\"\"},\"field80\":{\"field81\":{\"field82\":0.0,\"field83\":0.0,\"field84\":84.16,\"field85\":\"2022-12-31\",\"field86\":0.0,\"field87\":\"9999-10-31\"}},\"field88\":[{\"field89\":{\"field90\":\"value90\"}},{\"field91\":{\"field92\":\"value92\"}}],\"field93\":null},{\"__typename\":\"value94\",\"field95\":\"value95\",\"field96\":\"value96\",\"field97\":\"value97\",\"field98\":\"value98\",\"field99\":\"value99\",\"field100\":\"1\",\"field101\":\"value101\",\"field102\":null,\"field103\":\"value103\",\"field104\":\"359\",\"field105\":88025.0,\"field106\":79316.87,\"field107\":\"309\",\"field108\":\"value108\",\"field109\":\"2018-10-01\",\"field110\":\"2048-09-30\",\"field111\":false,\"field112\":{\"field113\":\"value113\",\"field114\":\"value114\",\"field115\":2.35,\"field116\":\"2031-10-31\",\"field117\":\"2017-11-06\"},\"field118\":{\"field119\":\"\",\"field120\":\"\"},\"field121\":{\"field122\":{\"field123\":341.58,\"field124\":0.0,\"field125\":155.33,\"field126\":\"2022-12-31\",\"field127\":186.25,\"field128\":\"2022-12-31\"}},\"field129\":[{\"field130\":{\"field131\":\"value131\"}},{\"field132\":{\"field133\":\"value133\"}}],\"field134\":null}]}}","origin":"rep","source_instance":"0","source_type":
"APP/PROC/WEB","timestamp":1671732690243306564} Here is a screenshot: the wrap toggle is enabled, but wrapping is not working, as you can see from the horizontal scrollbar.
I'd like to create an alert for agent failure. The alert should be triggered when any splunk-Otel agent fails to run on any host for a particular time grain. I also need to create dashboards showing lists of running and non-running host names and agent versions.
I am using the Python SDK to add the allow_skew setting to saved searches. See the generalised code snippet below:

import splunklib.client as client
splunk_svc = client.connect(host="localhost", port=8089, username="admin", password="******")
savedsearch = splunk_svc.saved_searches["alert-splnk-test_email_v1"]
new_skew = "5m"
kwargs = {"allow_skew": new_skew}
savedsearch.update(**kwargs).refresh()

This code works and adds 'allow_skew = 5m' to the specific savedsearch stanza in {app/local OR system/local}/savedsearches.conf under [alert-splnk-test_email_v1]. The code can also be extended to more, or all, saved searches on the platform. It also replicates the changes in a SH cluster, as expected. I want a reliable way to remove/erase the allow_skew setting from specific saved searches, preferably using the Python SDK. The setting needs to be removed from the stanza so that the allow_skew setting from system/local/savedsearches.conf [default] is picked up. The only other ways I could think of are: using the class splunklib.client.Stanza(service, path, **kwargs) somehow — any directions on how to use it? Or recreating the savedsearch without allow_skew, but that would mean a lot of work for a bunch of saved searches. Any help is appreciated.
Hi, I'm trying to calculate the weekly average of events by severity and compare the daily amount with the weekly average. I created a multivalue field, but the values in the field get reordered and no longer match the rest of the data (the severity multivalue field). I tried using mvsort() but it did not work — what did I do wrong? Thank you for any help. Query, results and expected results below:

index=myindex earliest=-7d@d latest=now()     | bin _time span=1d     | fields _time, severity     | stats count by _time, severity     | eventstats avg(count) as average by severity     | eval change_percent=round(((count-average)*100)/count,0)     | eval average=round(average,2)     | eval change_percent=change_percent+"%"     | table _time severity count average change_percent     | stats values(severity) as severity, values(count) as AlertCount, values(average) as average, values(change_percent) as change_percent by _time     | sort - _time     | eval average=mvsort(average)     | eval change_percent=mvsort(change_percent)     | eval AlertCount=mvsort(AlertCount)     | eval severity=mvsort(severity)

results:
_time severity AlertCount average change_percent
2022-12-23 High Informational 3 8 3.25 3.67 -22% 59%
2022-12-22 High 1 3.25 -225%
2022-12-21 High Informational 3 3.25 3.67 -22% -8%
2022-12-20 High 4 3.25 19%
2022-12-19 High Informational Medium 1 2 5 2.00 3.25 3.67 -100% -62% 27%

expected results:
_time severity AlertCount average change_percent
2022-12-23 High Informational 3 8 3.25 3.67 -22% 59%
2022-12-22 High 1 3.25 -225%
2022-12-21 High Informational 3 3.25 3.67 -8% -22%
2022-12-20 High 4 3.25 19%
2022-12-19 High Informational Medium 1 2 5 3.25 3.67 2.00 -225% -83,5% 60%
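The root of the misalignment is that mvsort() sorts each multivalue field independently, and values() additionally sorts and deduplicates its input. A sketch of an alternative using the same fields as the question: sort the rows once before collapsing them, and use list() instead of values(), since list() preserves input order across all columns:

```
index=myindex earliest=-7d@d latest=now()
| bin _time span=1d
| stats count by _time, severity
| eventstats avg(count) as average by severity
| eval change_percent=round(((count-average)*100)/count,0)."%"
| eval average=round(average,2)
| sort _time, severity
| stats list(severity) as severity, list(count) as AlertCount, list(average) as average, list(change_percent) as change_percent by _time
| sort - _time
```

Because every row is sorted by severity before the final stats, the n-th value of each multivalue column belongs to the same severity, so no per-field mvsort() is needed afterwards.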
I have set up the ServiceNow-to-Splunk integration and, for the inputs, I have turned on the Splunk sys user group input as well as the Splunk sys user input. Currently I am getting the assignment group name. Does anyone know whether we can also get assignment group member details from ServiceNow into Splunk? I'm having a hard time finding it — please share your thoughts!
I want to run a query that would return results of users added to a group. This would assist me in creating an alert that notifies me when a new user is added to this group
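Assuming Windows Security event logs are indexed (the index name and field names below are assumptions and depend on your Windows add-on's extractions), member-added events use EventCodes 4728 (global group), 4732 (local group) and 4756 (universal group); a sketch:

```
index=wineventlog (EventCode=4728 OR EventCode=4732 OR EventCode=4756)
| table _time, EventCode, user, src_user, Group_Name
| rename user as added_member, src_user as added_by
```

Narrowing `Group_Name` to the specific group and saving this as an alert would give the notification described.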
Hello everyone, I need to create BTs from the value of an attribute. For example, if codigo == 1 it must be one BT; else if codigo == 3 it must be another BT. The source code is written in Java and I'm using POJO rules. Thanks.
Hi all, in Enterprise Security I'm trying to customize a Suppression Rule by inserting a lookup containing the IP addresses to whitelist in one Correlation Search, using this search:

`get_notable_index` source="Network - Vulnerability Scanner Detection (by targets) - Rule" [ | inputlookup suppression_ip.csv | fields src ]

and I get the following error message:

Error saving suppression. Error parsing search.

I also tried replacing the subsearch with a macro, but with the same result. Does anyone know if there's a limitation in Suppression Rule searches (which are, e.g., eventtypes) or what else it could be? Ciao. Giuseppe
Hi, I'm struggling to calculate an hourly or daily average and display results when there are no events at all; those spans should in theory count as 0 and be included in the average calculation. Currently my query calculates the average in a given timespan only if there are events for a specific severity; if not, the row stays blank and is not included in the average. Query, results and expected results:

index=myindex earliest=-7d@d latest=now()     | bin _time span=1h     | fields _time, severity     | stats count by _time, severity     | eval previous=if(_time<relative_time(now(),"@d"),count,null())     | eventstats avg(previous) as average by severity     | eval change_percent=round(((count-average)*100)/count,0)."%"     | table _time severity count average change_percent

_time severity count average change_percent
2022-12-16 10:00 High 2 2.25 -12%
2022-12-16 12:00 Low 2 2 0%
2022-12-16 14:00 Medium 3 2 33%

I'd like to show something like this:

_time severity count average change_percent
2022-12-16 10:00-11:00 High 2 0.5 x%
2022-12-16 10:00-11:00 Medium 0 1 -x%
2022-12-16 10:00-11:00 Low 0 1 x%
2022-12-16 11:00-12:00 High 0 0.5 x%
2022-12-16 11:00-12:00 Medium 0 1 x%
2022-12-16 11:00-12:00 Low 0 1 x%
2022-12-16 12:00-13:00 High 0 0.5 x%
2022-12-16 12:00-13:00 Medium 0 1 x%
2022-12-16 12:00-13:00 Low 2 1 x%

Thank you for any help.
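One way to get the zero rows (a sketch using the same index and fields as the question): let timechart create the empty buckets — it fills missing spans with 0 for every severity it has seen in the time range — then untable back to rows; the division also needs a guard for count=0:

```
index=myindex earliest=-7d@d latest=now()
| timechart span=1h count by severity
| untable _time severity count
| eventstats avg(count) as average by severity
| eval average=round(average,2)
| eval change_percent=if(count=0, "N/A", round(((count-average)*100)/count,0)."%")
| table _time severity count average change_percent
```

Note that a severity with no events at all in the whole window still won't appear, since timechart can only zero-fill series it has observed.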
Hello, I am trying to add a data input to an app I created using Splunk Add-on Builder. I enabled checkpointing and specified a checkpoint parameter name (last_updated), but the data input UI gives the following error: "The following required arguments are missing: last_updated." There is no section in the UI that lets me enter a 'last_updated' argument. Thanks
Search query for including only non-business hours and weekends, i.e., excluding Monday to Friday 9am to 5pm.
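A sketch (the index name is a placeholder, and business hours are assumed to be in the indexer's local time): keep only events that fall on a weekend or outside 09:00–17:00 on a weekday:

```
index=myindex
| eval dow=tonumber(strftime(_time, "%w")), hr=tonumber(strftime(_time, "%H"))
| where dow==0 OR dow==6 OR hr<9 OR hr>=17
```

strftime "%w" returns 0 for Sunday through 6 for Saturday, so dow==0 OR dow==6 selects weekends and the hour test excludes the 9am–5pm weekday window.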