All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi, I am trying to run Splunk on Docker for my research project. After connecting to Splunk in the browser, I wanted to get into the Docker shell to do some configuration, so I ran this command: docker start -i <CONTAINER ID>. But I got no response for many minutes, until I had to close my cmd window. Please tell me why? By the way, I tried the whole process (from downloading the image to running Docker) twice and got this result both times. Could it be because the image's system prerequisites list a Linux-based operating system (Debian, CentOS, etc.) and I am trying to use it on Windows 10? Thanks for your help.
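A minimal sketch of the usual way to get a shell inside a container that is already running, assuming the container is up (check with docker ps) and has bash available. docker start -i only attaches your terminal to the container's entrypoint process, which never prints a prompt, so it looks like a hang:

# Open an interactive bash shell as a new process inside the running container
docker exec -it <CONTAINER ID> /bin/bash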
Hello Team, I am getting a timeout error while adding data to a Splunk Cloud index from the REST API. I am using the endpoint below. (Or, alternatively, help me understand how I can add data to a Splunk Cloud index through the REST APIs.) URL: http://*********:8088/services/collector Thanks, Venkata.
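A minimal sketch of an HTTP Event Collector request, where <stack>, <HEC_TOKEN>, and the index name are hypothetical placeholders. Note that Splunk Cloud HEC endpoints are HTTPS, and the documented hostname pattern for most Splunk Cloud stacks is http-inputs-<stack>.splunkcloud.com on port 443, so a plain http:// URL on port 8088 is one likely cause of the timeout:

# Send one test event to HEC; -k skips certificate verification (testing only)
curl -k "https://http-inputs-<stack>.splunkcloud.com:443/services/collector" \
  -H "Authorization: Splunk <HEC_TOKEN>" \
  -d '{"event": "hello world", "index": "my_index"}'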
Hi! I have 3 multivalue fields (max. 3 values per field) and I want to expand/extract them into single values. Data looks like this: [screenshot] When I use | mvexpand, Splunk combines all skills with all skill levels and all skill hours: [screenshot] How can I tell Splunk to expand only line by line? The result should look like:

Skill | SkillLevel | Hours
Hardware-Techniker | 3 Advanced | 10
Software-Entwickler Sonderprogramme (C, C++) | 3 Advanced | 15

Query (without | mvexpand):

| eval Skills = mvappend(customfield_26202_child_value, customfield_26204_child_value, customfield_26205_child_value)
| eval SkillLevel = mvappend(customfield_26206_value, customfield_26207_value, customfield_26208_value)
| eval Hours = mvappend(customfield_26300, customfield_26301, customfield_26302)
| table Skills,SkillLevel,Hours

Thank you very much!
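A minimal sketch of the usual mvzip workaround, assuming the three source fields are positionally aligned (the first skill belongs with the first skill level and the first hours value). Expanding a single zipped field keeps the rows paired instead of producing the cross product that a plain mvexpand of each field gives:

| eval Skills = mvappend(customfield_26202_child_value, customfield_26204_child_value, customfield_26205_child_value)
| eval SkillLevel = mvappend(customfield_26206_value, customfield_26207_value, customfield_26208_value)
| eval Hours = mvappend(customfield_26300, customfield_26301, customfield_26302)
| eval zipped = mvzip(mvzip(Skills, SkillLevel, "|"), Hours, "|")
| mvexpand zipped
| eval Skill = mvindex(split(zipped, "|"), 0), SkillLevel = mvindex(split(zipped, "|"), 1), Hours = mvindex(split(zipped, "|"), 2)
| table Skill, SkillLevel, Hours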
Hi! We are trying to push alerts into Swimlane using the Swimlane add-on, but we are getting the errors below:

06-28-2022 04:45:08.234 -0500 ERROR SearchScheduler [4094 AlertNotifierWorker-0] - Error in 'sendalert' command: Alert script returned error code 1., search='sendalert push_alerts_to_swimlane results_file="/opt/splunk/var/run/splunk/dispatch/scheduler_Xghedjhwqklahd"
06-28-2022 04:45:08.234 -0500 WARN sendmodalert [4094 AlertNotifierWorker-0] - action=push_alerts_to_swimlane - Alert action script returned error code=1

Swimlane app link: https://splunkbase.splunk.com/app/3708/

Any help with this is much appreciated. Thanks
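A minimal sketch of a search that may surface the script's own error output, assuming the add-on logs through the standard custom alert action mechanism, where a script's stderr ends up in splunkd.log under the sendmodalert component (the action name is taken from the WARN line above):

index=_internal sourcetype=splunkd component=sendmodalert action="push_alerts_to_swimlane"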
Hello, for context: I created a dashboard in the Splunk Cloud app where a pie chart is displayed. The purpose of the pie chart is to display the different types of events and their associated percentages. However, the separation between the value and its percentage is quite confusing, because it is two numbers separated by a comma. Is there a way to format the values displayed, or to change the separator? Thanks in advance
Hi All, I need help with a regex.

{"CreationTime": "2022-06-28T01:55:52", "ExchangeMetaData": {"BCC": [], "CC": ["cat@gmail.com", "ant@gmail.com", "sat@gmail.com", "mat@gmail.com"]

I need to capture the values under CC: 4 different values should be captured. I tried a regex which captures only the first value:

\"CC\"\:\s\[\"?(?<exchangeCc>(\w?\@?\.?)+)

With a different regex, it captures all 4 values as one single value:

CC\"\:\s+\[(?<CC>[^\]]+)

Is it possible to capture them as 4 different values?
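A minimal sketch building on the second regex, assuming the events look like the JSON above: capture the whole bracketed list once, then split it into a multivalue field, so exchangeCc ends up with one value per address (spath would be an alternative if the full event is valid JSON):

| rex "\"CC\":\s*\[(?<cc_list>[^\]]*)\]"
| eval exchangeCc = split(replace(cc_list, "\"|\s", ""), ",")

Appending | mvexpand exchangeCc would then turn each address into its own result row, if that is the desired shape.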
Hello, we are running an indexer cluster: 2 indexers, 1 cluster master, a deployment server & license master, 2 HEC instances, and 1 search head. I have created tokens on one of my HEC instances and I can see logs coming into HEC1, but we need to search that data from the SH, and the same token should also be reflected on the other instance, HEC2. Note: the two HEC instances are added as deployment clients to the DS. Please help me with this.
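A minimal sketch of an inputs.conf that could be packaged into an app on the deployment server and pushed to both HEC instances, so the token is identical on HEC1 and HEC2 (the app name, token GUID, and index below are hypothetical placeholders). The data is then searchable from the SH as long as both HEC instances forward to the indexer cluster:

# deployment-apps/hec_shared_inputs/local/inputs.conf on the deployment server
# HEC itself must also be enabled on each instance ([http] stanza or the UI)
[http://my_shared_token]
disabled = 0
token = 11111111-2222-3333-4444-555555555555
index = main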
Hello Splunkers, I configured a new notable suppression in ES for a repeated notable, based on source IP. I can see the suppression entry is created under eventtypes, but the notable is still coming into the Incident Review console. I suspect an issue with my search configuration under the suppression settings. My search config is as below:

index=network dest_port IN(389,636) src_ip=10.x.x.x

This was meant to suppress the notables triggered by my recent LDAP traffic search. Thank you.
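A minimal sketch of what the suppression search may need to look like instead, assuming standard ES behavior: suppression eventtypes are evaluated against the notable events shown in Incident Review, not against the raw data the correlation search reads, so a search over index=network will never match a notable. The rule name below is a hypothetical placeholder for the correlation search's name, and src is the field a notable typically carries rather than src_ip:

source="Network - LDAP Traffic Detected - Rule" src="10.x.x.x"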
Hi All, I'm trying to use Splunk to produce a table which will show the duration between the RUNNING event of one Autosys job and the SUCCESS event of another. In this case the start job for each environment is denoted by a prefix ending *START_COMP_0 and the last job is *OSMPCONTROL_0. If I only compare a single environment (ENV1...) then the search works fine; however, I would like to grab the duration between the two events for multiple environments (ENV1, ENV2...). Using the SPL below or something similar, is it possible to group the ENVs together based on a matching string?

Multiple envs:

index=_* OR index=* sourcetype=autosys_vm1_event_demon AND (JobName=DWP_VME_DLACS_*_START_COMP_0 Status=RUNNING) OR (JobName=DWP_VME_DLACS_*_OSMPCONTROL_0 Status=SUCCESS)
| transaction maxevents=2
| table JobName duration _time

I have attached two screenshots: one of a single env, which gives me my desired output, and one where I put a * in the search, showing how the search is grouping multiple events. [Screenshots: "multiple" and "working single"] TIA, Cameron.
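A minimal sketch that extracts the environment string into its own field and groups the transaction by it, assuming the environment is the token between DWP_VME_DLACS_ and the job suffix (adjust the rex if the naming differs):

index=_* OR index=* sourcetype=autosys_vm1_event_demon ((JobName=DWP_VME_DLACS_*_START_COMP_0 Status=RUNNING) OR (JobName=DWP_VME_DLACS_*_OSMPCONTROL_0 Status=SUCCESS))
| rex field=JobName "^DWP_VME_DLACS_(?<env>.+?)_(START_COMP|OSMPCONTROL)_0$"
| transaction env maxevents=2 startswith=eval(Status="RUNNING") endswith=eval(Status="SUCCESS")
| table env, JobName, duration, _time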
I made a column chart like the one in these images. I want to change the color of a particular column, specified by the field "No.", which is set as a token by another graph. My ideal result is the third image. [images] I'm sorry if my English is wrong.
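A minimal sketch of one common workaround, assuming a hypothetical token $selected_no$ that holds the chosen "No." value: split the measure into two series with eval, so the selected column is drawn as its own series and can be given its own fixed color:

| chart count by "No."
| eval selected = if('No.' = "$selected_no$", count, null()), others = if('No.' != "$selected_no$", count, null())
| fields "No.", selected, others

In the dashboard XML, a charting.fieldColors option such as {"selected": 0xFF0000, "others": 0x1E93C6} would then pin a distinct color to each series.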
Hello Team, I am trying to create TCP and UDP ports through the Splunk Cloud REST APIs, but I am getting the permission error below. Please suggest what permissions I need to change.

<msg type="ERROR">You (user=*****) do not have permission to perform this operation (requires capability: edit_tcp).</msg>

How can I add the edit_tcp capability to my account's role? Thanks, Venkata.
For example, if my data is platform = "operational", task="draft||draft-published", jobstart="2021-06-27T15:46:08.34666||2021-06-27T18:46:08.70000", jobend="2021-06-28T12:86:08.37836||2021-06-28T18:46:08.70990", I need it in the format below. I tried makemv delim="||" task, but that works for only one field. Is there any other option available?

platform | task | jobstart | jobend
operational | draft | 2021-06-27T15:46:08.34666 | 2021-06-28T12:86:08.37836
operational | draft-published | 2021-06-27T18:46:08.70000 | 2021-06-28T18:46:08.70990
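A minimal sketch using mvzip so the values stay aligned through a single mvexpand, assuming each field has the same number of ||-separated entries in the same order:

| makemv delim="||" task
| makemv delim="||" jobstart
| makemv delim="||" jobend
| eval zipped = mvzip(mvzip(task, jobstart, "##"), jobend, "##")
| mvexpand zipped
| eval task = mvindex(split(zipped, "##"), 0), jobstart = mvindex(split(zipped, "##"), 1), jobend = mvindex(split(zipped, "##"), 2)
| table platform, task, jobstart, jobend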
My published add-on was built using Add-on Builder 3.0.1. I recently updated the add-on app by exporting it from Add-on Builder version 4.1.0. I am facing an issue when we install the add-on on a fresh system: when I open the input page, it returns a 500 error and shows a page like the one attached, with the message below. The following is the exception from splunkd.log:

-------------------------------------------------------
06-26-2022 13:36:02.878 +0000 ERROR AdminManagerExternal - Stack trace from python handler:\nTraceback (most recent call last):\n File "/opt/splunk/etc/apps/<APP_NAME>/bin/<APP_NAME>/aob_py3/splunktaucclib/rest_handler/handler.py", line 124, in wrapper\n for name, data, acl in meth(self, *args, **kwargs):\n File "/opt/splunk/etc/apps/<APP_NAME>/bin/<APP_NAME>/aob_py3/splunktaucclib/rest_handler/handler.py", line 179, in all\n **query\n File "/opt/splunk/etc/apps/<APP_NAME>/bin/<APP_NAME>/aob_py3/splunklib/binding.py", line 290, in wrapper\n return request_fun(self, *args, **kwargs)\n File "/opt/splunk/etc/apps/<APP_NAME>/bin/<APP_NAME>/aob_py3/splunklib/binding.py", line 71, in new_f\n val = f(*args, **kwargs)\n File "/opt/splunk/etc/apps/<APP_NAME>/bin/<APP_NAME>/aob_py3/splunklib/binding.py", line 686, in get\n response = self.http.get(path, all_headers, **query)\n File "/opt/splunk/etc/apps/<APP_NAME>/bin/<APP_NAME>/aob_py3/splunklib/binding.py", line 1199, in get\n return self.request(url, { 'method': "GET", 'headers': headers })\n File "/opt/splunk/etc/apps/<APP_NAME>/bin/<APP_NAME>/aob_py3/splunklib/binding.py", line 1262, in request\n raise HTTPError(response)\nsplunklib.binding.HTTPError: HTTP 404 Not Found -- b'{"messages":[{"type":"ERROR","text":"Not Found"}]}'\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File "/opt/splunk/lib/python3.7/site-packages/splunk/admin.py", line 113, in init_persistent\n hand.execute(info)\n File "/opt/splunk/lib/python3.7/site-packages/splunk/admin.py", line 636, in execute\n if self.requestedAction == ACTION_LIST: self.handleList(confInfo)\n File "/opt/splunk/etc/apps/<APP_NAME>/bin/<APP_NAME>/aob_py3/splunk_aoblib/rest_migration.py", line 39, in handleList\n AdminExternalHandler.handleList(self, confInfo)\n File "/opt/splunk/etc/apps/<APP_NAME>/bin/<APP_NAME>/aob_py3/splunktaucclib/rest_handler/admin_external.py", line 63, in wrapper\n for entity in result:\n File "/opt/splunk/etc/apps/<APP_NAME>/bin/<APP_NAME>/aob_py3/splunktaucclib/rest_handler/handler.py", line 129, in wrapper\n raise RestError(exc.status, str(exc))\nsplunktaucclib.rest_handler.error.RestError: REST Error [404]: Not Found -- HTTP 404 Not Found -- b'{"messages":[{"type":"ERROR","text":"Not Found"}]}'\n
06-26-2022 13:36:02.878 +0000 ERROR AdminManagerExternal - Unexpected error "<class 'splunktaucclib.rest_handler.error.RestError'>" from python handler: "REST Error [404]: Not Found -- HTTP 404 Not Found -- b'{"messages":[{"type":"ERROR","text":"Not Found"}]}'". See splunkd.log for more details.
-------------------------------------------------------

This behavior does not reproduce when we upgrade the add-on from the previous version. Please help me resolve this.
Hi @guilmxm, I have installed and configured the NMON monitoring app in my distributed Splunk environment; all features are working and responding perfectly. But the Inventory summary is not showing the list of Linux hosts. I have gone through the docs as well, but I am not getting what needs to be done for this. Can anyone please let me know what needs to be done on my end? Thanks in advance!
Hi, thanks in advance. The events below are in Splunk. We have two repos in git: 1. AAA and 2. BBB. Whenever the repos replicate, both repos should end up with the same files. But in my case, after replication, files are missing from one of the repos, so I need to compare the files, find which files are missing, and send an alert about the difference between the repos.

INTERESTING FIELDS: CODE.Modified_files{} TOOLKIT.Modified_files{}

Expected output after comparing:

CODE.Modified_files{}
"a/D:\\\\splunk_code_replication\\\\AAA_CODE/.git/logs/refs/heads/master",
"a/D:\\\\splunk_code_replication\\\\AAA_CODE/.git/version.json"

TOOLKIT.Modified_files{}
"a/D:\\\\splunk_code_replication\\\\AAA_TOOLKIT/.git/logs/refs/heads/master"

These files are only present in the AAA repo, not in BBB, so we need to compare AAA and BBB per the event and show the difference (the missing files).

{
  "CODE": {
    "modified_files": [
      "a/D:\\\\splunk_code_replication\\\\AAA_CODE/.git/HEAD",
      "a/D:\\\\splunk_code_replication\\\\AAA_CODE/.git/config",
      "a/D:\\\\splunk_code_replication\\\\BBB_CODE/.git/HEAD",
      "a/D:\\\\splunk_code_replication\\\\BBB_CODE/.git/config",
      "a/D:\\\\splunk_code_replication\\\\AAA_CODE/.git/logs/refs/heads/master",
      "a/D:\\\\splunk_code_replication\\\\AAA_CODE/.git/version.json"
    ]
  }
}
{
  "TOOlKIT": {
    "modified_files": [
      "a/D:\\\\splunk_code_replication\\\\AAA_TOOLKIT/.git/HEAD",
      "a/D:\\\\splunk_code_replication\\\\AAA_TOOLKIT/.git/config",
      "a/D:\\\\splunk_code_replication\\\\BBB_TOOLKIT/.git/HEAD",
      "a/D:\\\\splunk_code_replication\\\\BBB_TOOLKIT/.git/config",
      "a/D:\\\\splunk_code_replication\\\\AAA_TOOLKIT/.git/logs/refs/heads/master",
    ]
  }
}
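A minimal sketch of one way to compute the difference, assuming the events look like the JSON above and that a path counts as "the same file" in both repos once the AAA_/BBB_ repo prefix is normalized away; any file whose normalized path appears under only one repo is a missing file:

| rex field=_raw max_match=0 "\"(?<file>a/D:[^\"]+)\""
| mvexpand file
| rex field=file "(?<repo>AAA|BBB)_(?<component>CODE|TOOLKIT)"
| eval relpath = replace(file, "(AAA|BBB)_(CODE|TOOLKIT)", "X")
| stats values(repo) AS repos, values(file) AS files by component, relpath
| where mvcount(repos) = 1

Each surviving row is a file present in one repo but not the other, so the alert condition can simply be "number of results > 0".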
Hi, I am a beginner. I have a correlation rule that:
- searches for IP addresses that are doing port scans
- searches the lookup table to check whether each IP address is already listed
- if an IP address is not in the lookup table: raises an alert in ES
- adds this IP to the lookup table (to avoid duplicates)

I have two lookup tables:
- scan_port.csv
- network_provider.csv

Now I would like to filter the IP addresses by a lookup table (a list of CIDR ranges: "network_provider.csv"). If possible, this filter would come first in the correlation rule, to avoid adding a filtered IP to the lookup table "scan_port.csv". The priority is to:
- find the port scans by IP
- filter the IPs (by the lookup table "network_provider")
- check for duplicates (by the lookup table "scan_port")
- raise an alert
- add the IP to the lookup table (port scan)

As I said, I have a correlation rule for port scans that has been working for years, and I would like to add the filter by CIDR range. I have the command (cidrmatch) that works for the filter, but I can't get it to work between the port scan search and the two lookup tables; I can't find a solution. Any ideas? Thanks in advance
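A minimal sketch of the filter stage, assuming a lookup definition over network_provider.csv whose match_type is set to CIDR(cidr_range) in transforms.conf, and assuming hypothetical column names cidr_range and provider (plus an src_ip column in scan_port.csv). IPs that fall inside any listed range get a provider value and are dropped before the dedup, alert, and outputlookup steps:

... existing port-scan detection search producing src_ip ...
| lookup network_provider cidr_range AS src_ip OUTPUT provider
| where isnull(provider)
| lookup scan_port src_ip OUTPUT src_ip AS already_seen
| where isnull(already_seen)
| outputlookup append=true scan_port.csv

The rows that survive both where clauses are the new, unfiltered scanners: they trigger the alert and are appended to scan_port.csv in the same pass.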
On my replication bundle I have a whole list of unwanted files that come from a particular app, "XYZ", as shown below:

apps/XYZ/bin/suds/mx/typer.pyc
apps/XYZ/bin/suds/mx/encoded.py
apps/XYZ/bin/suds/mx/__init__.pyc
apps/XYZ/bin/suds/mx/literal.py
apps/XYZ/bin/suds/mx/__init__.py
apps/XYZ/bin/suds/options.py
apps/XYZ/bin/suds/sudsobject.py

Now, how can I apply a replication blacklist to anything that is under the app "XYZ"?

distsearch.conf

[replicationBlacklist]
....
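A minimal sketch of the stanza, assuming the goal is to exclude the entire app directory from the knowledge bundle (the entry name noXYZ is an arbitrary label, and ... is the recursive wildcard; on newer Splunk versions the equivalent stanza is named replicationDenylist):

[replicationBlacklist]
noXYZ = apps/XYZ/...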
All, I've noticed that by default the Splunk forwarder gives itself /bin/bash in /etc/passwd, e.g.: splunk:x:1001:1001:Splunk Server:/opt/splunkforwarder:/bin/bash I changed it to the below and restarted: splunk:x:1001:1001:Splunk Server:/opt/splunkforwarder:/sbin/nologin Best I can tell, there was no impact: scripted inputs are working, as are the monitor stanzas. Is there any reason I should leave the Splunk user with a shell? Thanks!
Hello, it's possible that I've had too long of a day, but I can't wrap my head around nesting many ifs. Is anyone willing to help me out? I am really bad at writing SPL queries in a way that makes the parentheses and commas visually understandable; does anyone have additional tips on that which would be useful for these nested scenarios? For example:

| eval new_field = if(pass_fail="fail",
    if(importance="0" OR importance="1",
        case(
            Days<7 OR State="Online","Gold",
            Days>=7 AND Days<14,"Orange",
            Days>=14,"Red"),
        if(importance="2",
            case(
                Days<30 OR State="Online","Gold",
                Days>=30 AND Days<60,"Orange",
                Days>=60,"Red"),
            if(importance="3",
                case(
                    Days<60 OR State="Online","Gold",
                    Days>=60 AND Days<120,"Orange",
                    Days>=120,"Red"),
                "importance_3_false"),
            "importance_2_false"),
        "importance_1_0_false"),
    "main_if_fail")

The idea is to break events out into a new field by first looking at only the "fail" items, and then further breaking down the "fail" items by their importance (which can be 0, 1, 2, or 3), where 0&1, 2, and 3 each have their own case statements. All the case statements and ifs should be evaluated, and values like "importance_3_false" are more for debugging and should never actually show in my output. I appreciate any help, and thank you.

Error in 'eval' command: The arguments to the 'if' function are invalid.
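A minimal sketch of one way to flatten this, assuming the intent described above. The root cause of the error is that eval's if() takes exactly three arguments (condition, true-value, false-value), while two of the nested if() calls above receive four. A single case() with compound conditions avoids the nesting entirely, and case() evaluates its condition/value pairs in order, so the first match wins:

| eval new_field = case(
    pass_fail != "fail", "main_if_fail",
    (importance="0" OR importance="1") AND (Days < 7 OR State="Online"), "Gold",
    (importance="0" OR importance="1") AND Days >= 7 AND Days < 14, "Orange",
    (importance="0" OR importance="1") AND Days >= 14, "Red",
    importance="2" AND (Days < 30 OR State="Online"), "Gold",
    importance="2" AND Days >= 30 AND Days < 60, "Orange",
    importance="2" AND Days >= 60, "Red",
    importance="3" AND (Days < 60 OR State="Online"), "Gold",
    importance="3" AND Days >= 60 AND Days < 120, "Orange",
    importance="3" AND Days >= 120, "Red",
    true(), "unmatched")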
I have voltage data and want to get the average voltage value per day for the last 7 days. This is where I left off; I have gone from using bin to multiple timecharts and still get nothing.

source="/root/Scripts/VoltageLogger/Voltage.json" host="util" sourcetype="_json"
| timechart span=1d count by avg(voltage)
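A minimal sketch of the fix: avg(voltage) is the aggregation itself, not a split-by field, so it belongs directly after timechart; the by clause only splits the result into series by a category field and cannot take a function:

source="/root/Scripts/VoltageLogger/Voltage.json" host="util" sourcetype="_json" earliest=-7d
| timechart span=1d avg(voltage) AS avg_voltage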