All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hello Team, I am trying to create TCP and UDP ports via the Splunk Cloud REST APIs, but I am getting the permission error below. Please suggest what permissions I need to change. <msg type="ERROR">You (user=*****) do not have permission to perform this operation (requires capability: edit_tcp).</msg> How can I grant the edit_tcp capability to my account? Thanks, venkata.
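For reference, capabilities are attached to roles rather than to individual users. On a self-managed instance this would be an authorize.conf change; a sketch, where the role name is just an example:

    [role_my_api_role]
    importRoles = user
    edit_tcp = enabled
    edit_udp = enabled

In Splunk Cloud you cannot edit authorize.conf directly, so the equivalent change is made under Settings > Roles (or via a support ticket if the capability is not exposed to you), and the role is then assigned to your account.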
For example, if my data is platform="operational", task="draft||draft-published", jobstart="2021-06-27T15:46:08.34666||2021-06-27T18:46:08.70000", jobend="2021-06-28T12:86:08.37836||2021-06-28T18:46:08.70990", I need it in the format below. I tried makemv delim="||" task, but that works for only one field. Is there any other option available?

platform      task              jobstart                    jobend
operational   draft             2021-06-27T15:46:08.34666   2021-06-28T12:86:08.37836
operational   draft-published   2021-06-27T18:46:08.70000   2021-06-28T18:46:08.70990
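One common approach is to split each delimited field, zip the pieces together positionally, and then expand; a sketch, assuming the fields are already extracted as shown above:

    | eval zipped = mvzip(mvzip(split(task, "||"), split(jobstart, "||")), split(jobend, "||"))
    | mvexpand zipped
    | eval task = mvindex(split(zipped, ","), 0),
           jobstart = mvindex(split(zipped, ","), 1),
           jobend = mvindex(split(zipped, ","), 2)
    | table platform task jobstart jobend

mvzip joins the multivalues with a comma delimiter, mvexpand turns each combined string into its own event, and the final eval splits the string back into separate fields.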
My published add-on was built using Add-on Builder 3.0.1. I recently updated the app by exporting it from Add-on Builder 4.1.0. I am facing an issue when we install the add-on on a fresh system: when I open the input page it gives a 500 error and shows a page like the one attached, with this message. The following is the exception from splunkd.log:

-------------------------------------------------------
06-26-2022 13:36:02.878 +0000 ERROR AdminManagerExternal - Stack trace from python handler:
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/<APP_NAME>/bin/<APP_NAME>/aob_py3/splunktaucclib/rest_handler/handler.py", line 124, in wrapper
    for name, data, acl in meth(self, *args, **kwargs):
  File "/opt/splunk/etc/apps/<APP_NAME>/bin/<APP_NAME>/aob_py3/splunktaucclib/rest_handler/handler.py", line 179, in all
    **query
  File "/opt/splunk/etc/apps/<APP_NAME>/bin/<APP_NAME>/aob_py3/splunklib/binding.py", line 290, in wrapper
    return request_fun(self, *args, **kwargs)
  File "/opt/splunk/etc/apps/<APP_NAME>/bin/<APP_NAME>/aob_py3/splunklib/binding.py", line 71, in new_f
    val = f(*args, **kwargs)
  File "/opt/splunk/etc/apps/<APP_NAME>/bin/<APP_NAME>/aob_py3/splunklib/binding.py", line 686, in get
    response = self.http.get(path, all_headers, **query)
  File "/opt/splunk/etc/apps/<APP_NAME>/bin/<APP_NAME>/aob_py3/splunklib/binding.py", line 1199, in get
    return self.request(url, { 'method': "GET", 'headers': headers })
  File "/opt/splunk/etc/apps/<APP_NAME>/bin/<APP_NAME>/aob_py3/splunklib/binding.py", line 1262, in request
    raise HTTPError(response)
splunklib.binding.HTTPError: HTTP 404 Not Found -- b'{"messages":[{"type":"ERROR","text":"Not Found"}]}'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/opt/splunk/lib/python3.7/site-packages/splunk/admin.py", line 113, in init_persistent
    hand.execute(info)
  File "/opt/splunk/lib/python3.7/site-packages/splunk/admin.py", line 636, in execute
    if self.requestedAction == ACTION_LIST: self.handleList(confInfo)
  File "/opt/splunk/etc/apps/<APP_NAME>/bin/<APP_NAME>/aob_py3/splunk_aoblib/rest_migration.py", line 39, in handleList
    AdminExternalHandler.handleList(self, confInfo)
  File "/opt/splunk/etc/apps/<APP_NAME>/bin/<APP_NAME>/aob_py3/splunktaucclib/rest_handler/admin_external.py", line 63, in wrapper
    for entity in result:
  File "/opt/splunk/etc/apps/<APP_NAME>/bin/<APP_NAME>/aob_py3/splunktaucclib/rest_handler/handler.py", line 129, in wrapper
    raise RestError(exc.status, str(exc))
splunktaucclib.rest_handler.error.RestError: REST Error [404]: Not Found -- HTTP 404 Not Found -- b'{"messages":[{"type":"ERROR","text":"Not Found"}]}'

06-26-2022 13:36:02.878 +0000 ERROR AdminManagerExternal - Unexpected error "<class 'splunktaucclib.rest_handler.error.RestError'>" from python handler: "REST Error [404]: Not Found -- HTTP 404 Not Found -- b'{"messages":[{"type":"ERROR","text":"Not Found"}]}'". See splunkd.log for more details.
-------------------------------------------------------

This behavior does not reproduce when we upgrade the add-on from the previous version. Please help me resolve this.
Hi @guilmxm , I have installed and configured the NMON monitoring app in my distributed Splunk environment, and all features are working and responding perfectly. But the Inventory summary is not showing the list of Linux hosts. I have gone through the doc as well but am not getting what needs to be done for this. Can anyone please let me know what needs to be done on my end? Thanks in advance!
Hi, thanks in advance. The events below are in Splunk. We have two repos in git: 1. AAA, 2. BBB. Whenever the repos replicate, both repos should contain the same files. But in my case, after replication, files are missing from one of the repos, so I need to compare the files, find which files are missing, and send an alert with the difference between the repos.

INTERESTING FIELDS:
CODE.Modified_files{}
TOOLKIT.Modified_files{}

Expected output after comparing:

CODE.Modified_files{}
"a/D:\\\\splunk_code_replication\\\\AAA_CODE/.git/logs/refs/heads/master",
"a/D:\\\\splunk_code_replication\\\\AAA_CODE/.git/version.json"

TOOLKIT.Modified_files{}
"a/D:\\\\splunk_code_replication\\\\AAA_TOOLKIT/.git/logs/refs/heads/master"

These files are only present in the AAA repo but not in BBB, so we need to compare AAA and BBB for missing files, per the event, and show the difference.

{
  "CODE": {
    "modified_files": [
      "a/D:\\\\splunk_code_replication\\\\AAA_CODE/.git/HEAD",
      "a/D:\\\\splunk_code_replication\\\\AAA_CODE/.git/config",
      "a/D:\\\\splunk_code_replication\\\\BBB_CODE/.git/HEAD",
      "a/D:\\\\splunk_code_replication\\\\BBB_CODE/.git/config",
      "a/D:\\\\splunk_code_replication\\\\AAA_CODE/.git/logs/refs/heads/master",
      "a/D:\\\\splunk_code_replication\\\\AAA_CODE/.git/version.json"
    ]
  }
}
{
  "TOOlKIT": {
    "modified_files": [
      "a/D:\\\\splunk_code_replication\\\\AAA_TOOLKIT/.git/HEAD",
      "a/D:\\\\splunk_code_replication\\\\AAA_TOOLKIT/.git/config",
      "a/D:\\\\splunk_code_replication\\\\BBB_TOOLKIT/.git/HEAD",
      "a/D:\\\\splunk_code_replication\\\\BBB_TOOLKIT/.git/config",
      "a/D:\\\\splunk_code_replication\\\\AAA_TOOLKIT/.git/logs/refs/heads/master",
    ]
  }
}
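One way to surface the difference is to expand the file list, normalize away the repo name, and keep only paths seen in a single repo. A sketch, where the index and sourcetype are placeholders, and the field path assumes the lowercase modified_files spelling from the raw JSON (match whichever your extraction actually produces):

    index=your_index sourcetype=your_sourcetype
    | spath path=CODE.modified_files{} output=file
    | mvexpand file
    | eval repo = case(like(file, "%AAA_CODE%"), "AAA", like(file, "%BBB_CODE%"), "BBB")
    | eval relpath = replace(file, "(AAA|BBB)_CODE", "X_CODE")
    | stats values(repo) AS repos BY relpath
    | where mvcount(repos) = 1

Each relpath that survives the final where exists in only one of the two repos; the same pattern applies to TOOLKIT.modified_files{} with the _TOOLKIT suffix.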
Hi, I am a beginner. I have a correlation rule that:
- searches for IP addresses that are port scanning
- checks the lookup table to see whether each IP address is listed
- if an IP address is not in the lookup table: raises an alert in ES
- adds that IP to the lookup table (to avoid duplicates)

I have two lookup tables:
- scan_port.csv
- network_provider.csv

Now I would like to filter the IP addresses by a lookup table (a list of CIDR ranges: "network_provider.csv"). If possible, this filter would come first in the correlation rule, to avoid adding a filtered IP to the lookup table "scan_port.csv". The priority is to:
- find the port-scanning IPs
- filter the IPs (by the lookup table "network_provider")
- check for duplicates (by the lookup table "scan_port")
- raise an alert
- add the IP to the port scan lookup table

As I said, I have a correlation rule for port scans that has been working for years; I would like to add the filter by CIDR range. I have the command (cidrmatch) that works for the filter, but I can't get it to work between the port scan search and the two lookup tables, and I can't find a solution. Any ideas? Thanks in advance
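One common pattern is to declare network_provider.csv as a CIDR lookup and filter on a missing match before the dedupe step. A sketch, assuming network_provider.csv has a cidr column, scan_port.csv keys on an ip column, and the scanning source is in src_ip:

transforms.conf:

    [network_provider]
    filename = network_provider.csv
    match_type = CIDR(cidr)

Search, placed right after the port scan detection:

    <your port scan search>
    | lookup network_provider cidr AS src_ip OUTPUT cidr AS provider_range
    | where isnull(provider_range)
    | lookup scan_port ip AS src_ip OUTPUT ip AS already_alerted
    | where isnull(already_alerted)

Because the CIDR filter runs before anything touches scan_port.csv, filtered IPs are never added there; the surviving events can then fire the alert and be appended to scan_port via outputlookup as they are today.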
On my replication bundle I have a whole list of unwanted files from a particular app "XYZ", as shown below:

apps/XYZ/bin/suds/mx/typer.pyc
apps/XYZ/bin/suds/mx/encoded.py
apps/XYZ/bin/suds/mx/__init__.pyc
apps/XYZ/bin/suds/mx/literal.py
apps/XYZ/bin/suds/mx/__init__.py
apps/XYZ/bin/suds/options.py
apps/XYZ/bin/suds/sudsobject.py

Now, how can I apply a replicationBlacklist to anything under the app "XYZ"?

distsearch.conf:

[replicationBlacklist]
....
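A sketch of what that stanza might look like, using Splunk's '...' wildcard (which recurses through directories; the setting name on the left is arbitrary):

    [replicationBlacklist]
    excludeXYZ = apps/XYZ/...

My understanding is that these patterns are matched against paths relative to $SPLUNK_HOME/etc, so apps/XYZ/... should cover everything under that app; verify by pushing a new bundle and confirming the files are gone from it.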
All, I've noticed that by default the Splunk Forwarder gives itself /bin/bash in /etc/passwd, e.g.:

splunk:x:1001:1001:Splunk Server:/opt/splunkforwarder:/bin/bash

I changed it to the below and restarted:

splunk:x:1001:1001:Splunk Server:/opt/splunkforwarder:/sbin/nologin

Best I can tell there was no impact: scripted inputs are working, as are the monitor stanzas. Is there any reason I should leave the Splunk user with a shell? Thanks!
Hello, it's possible that I've had too long of a day, but I can't wrap my head around nesting many ifs. Is anyone willing to help me out? I am really bad at writing SPL queries so that the parentheses and commas stay visually understandable. Does anyone have some additional tips on that as well that would be useful for these nested scenarios?

For example:

| eval new_field = if(pass_fail="fail",
    if(importance="0" OR importance="1",
        case(Days<7 OR State="Online", "Gold",
             Days>=7 AND Days<14, "Orange",
             Days>=14, "Red"),
        if(importance="2",
            case(Days<30 OR State="Online", "Gold",
                 Days>=30 AND Days<60, "Orange",
                 Days>=60, "Red"),
            if(importance="3",
                case(Days<60 OR State="Online", "Gold",
                     Days>=60 AND Days<120, "Orange",
                     Days>=120, "Red"),
                "importance_3_false"),
            "importance_2_false"),
        "importance_1_0_false"),
    "main_if_fail")

The idea is to break out into a new field by first looking at only the "fail" items, and then further breaking down the "fail" items by their importance (which can be 0, 1, 2, 3), where 0&1, 2, and 3 each have their own case statement. All the case statements and ifs should be true, and the "importance_3_false" values (for example) are more for debugging and should never actually show in my output. I appreciate any help and thank you.

Error in 'eval' command: The arguments to the 'if' function are invalid.
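A note on why the error fires, and one way around it: eval's if() takes exactly three arguments (condition, true-value, false-value), and a couple of the nested ifs above have four. Rather than repairing the nesting, the whole thing can be flattened into a single case(), which evaluates conditions top to bottom and stops at the first match. A sketch with the same thresholds:

    | eval new_field = case(
        pass_fail!="fail", "main_if_fail",
        importance IN ("0","1") AND (Days<7 OR State="Online"), "Gold",
        importance IN ("0","1") AND Days>=7 AND Days<14, "Orange",
        importance IN ("0","1") AND Days>=14, "Red",
        importance="2" AND (Days<30 OR State="Online"), "Gold",
        importance="2" AND Days>=30 AND Days<60, "Orange",
        importance="2" AND Days>=60, "Red",
        importance="3" AND (Days<60 OR State="Online"), "Gold",
        importance="3" AND Days>=60 AND Days<120, "Orange",
        importance="3" AND Days>=120, "Red",
        true(), "no_match_debug")

Because case() short-circuits, the original ordering (State="Online" wins before the day buckets) is preserved, and the final true() branch plays the same debugging role as the *_false strings.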
I have voltage data and want to get the average voltage value per day for the last 7 days. This is where I left off after going from bin to multiple timecharts, still with nothing:

source="/root/Scripts/VoltageLogger/Voltage.json" host="util" sourcetype="_json" | timechart span=1d count by avg(voltage)
Simple setup: two different sites with a single clustered indexer in each, a local Heavy Forwarder that is also the deployment server for the UFs, and an SH in each site. I've deployed the TA_docker_simple app in both sites, installed on both HFs and the intended docker servers at each site. It works great in one site, but I get no data indexed in the other. All UFs send in the data from the .sh scripts that the app contains (I can see event counts in their metrics.log), but on the problem site's HF I'm seeing messages like this:

06-27-2022 21:00:50.057 +0000 WARN DateParserVerbose - Accepted time (Fri Apr 1 18:31:29 2022) is suspiciously far away from the previous event's time (Fri Apr 1 19:46:38 2022), but still accepted because it was extracted by the same pattern. Context: source=docker_simple_ps|host=XXXXXX|docker:ps|6581
host = XXXXXXX
source = /opt/splunk/var/log/splunk/splunkd.log
sourcetype = splunkd
splunk_server = XXXXXXXX

It looks like it's trying to use a string date that is in the script output but isn't the timestamp (it's the container-creation timestamp). The actual timestamp is an epoch integer at the beginning of each event. Even if the data were getting indexed with invalid timestamps, I would see it with a real-time search, but I see nothing coming in. I'm not sure how to resolve this. Both sites are using the same copy of the app on the HF (minus the inputs.conf) and on the UFs. It works perfectly in one site but not the other. I've used btool to verify the props and transforms on the HFs are exactly the same. It's probably something obvious, but I can't figure this one out.
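If the epoch integer at the start of each event is what should win, one way to pin it down is an explicit timestamp rule on the parsing tier (the HF here). A sketch, assuming the sourcetype is docker:ps as the Context string suggests:

props.conf:

    [docker:ps]
    TIME_PREFIX = ^
    TIME_FORMAT = %s
    MAX_TIMESTAMP_LOOKAHEAD = 11

%s reads a UNIX epoch in seconds, and the short lookahead keeps the parser from wandering into the container-creation dates later in the event. This does not explain why no data arrives at all, but it removes the timestamp ambiguity from the picture.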
Hello everyone, we found that the VMs have more RAM available: currently we have 12GB of RAM for each Search Head, but that can be increased to 48GB. I have been reading, and increasing the capacity of the Search Heads can affect the indexer nodes:

"An increase in search tier capacity corresponds to increased search load on the indexing tier, requiring scaling of the indexer nodes. Scaling either tier can be done vertically by increasing per-instance hardware resources, or horizontally by increasing the total node count."

And that makes sense; currently the environment has more search heads than indexers, and I think increasing the capacity of the SHs could overwhelm the indexers. Current environment: 3 SHs (clustered) and 2 indexers (clustered). I would appreciate any recommendations on how to do this as well as possible and make use of the allocated memory. Kind regards.
Splunkers, I want to get Microsoft-Windows-PowerShell/Operational logs into Splunk. There is no default setting for it in the default/inputs.conf file. I think this is the answer:

[WinEventLog://Microsoft-Windows-PowerShell/Operational]
disabled = 0
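That stanza has the right shape for a Windows event log input. A slightly fuller sketch (the index name is an example; point it at whatever index holds your Windows events):

    [WinEventLog://Microsoft-Windows-PowerShell/Operational]
    disabled = 0
    renderXml = false
    index = wineventlog

As with any Windows event log input, the stanza belongs in an inputs.conf delivered to the Windows host's forwarder (typically via a deployed app's local directory) rather than edited into default/.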
Hello Splunkers, I have an issue with my event time configuration: it has an incorrect timestamp. Below are my props settings; they don't seem to be working. Please advise.

TIME_FORMAT = %Y-%m-%d %H:%M:%S
TIME_PREFIX = ^
TZ = UTC-4
MAX_TIMESTAMP_LOOKAHEAD = 20

Sample log format:

Time                      Event
6/27/21 8:30:56.000 PM    #Software: banana Internet Information Services 19.0
                          #Version: 10.0
                          #Date: 2021-06-27 20:32:46
                          #Fields: Sacramento is the capital of California
6/27/21 8:30:56.000 PM    #Software: pineapple Internet Information Services 39.0
                          #Version: 12.0
                          #Date: 2021-06-27 20:32:46
                          #Fields: Austin is the capital of Texas
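Looking at the sample, the events begin with "#Software:" while the timestamp lives on the "#Date:" line, so TIME_PREFIX = ^ points the parser at text that never matches TIME_FORMAT. A sketch of settings aimed at that layout (the sourcetype name is a placeholder):

    [your:iis:sourcetype]
    TIME_PREFIX = \#Date:\s
    TIME_FORMAT = %Y-%m-%d %H:%M:%S
    MAX_TIMESTAMP_LOOKAHEAD = 19
    TZ = Etc/GMT+4

Two hedges worth noting: the # is escaped in the regex purely as a precaution around conf-file comment handling, and Etc/GMT+4 is suggested in place of UTC-4 because the Etc/GMT zone names invert the sign (Etc/GMT+4 means four hours behind UTC); double-check which zone your source actually logs in.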
Hello, when I plot a chart between two columns, I am not able to see all the values on the x-axis; instead, only the name of the column is visible. I have around 180 values in the x-axis column. How do I solve this problem?
Hello everyone, I have been reading about how Splunk can audit changes to the configuration files, and I found this as a possibility: https://docs.splunk.com/Documentation/Splunk/8.2.2/Troubleshooting/WhatSplunklogsaboutitself

But even though the documentation says it is enabled by default, my Splunk instance is not logging anything into that log. Do you know what I should be doing to track it?

Current version: 8.2.2
Clustered environment
Linux

Thank you in advance.
I am trying to create a dynamic input using the dynamic search option. Can this search use a token in it? So far I've tried and haven't had any success.
I am trying to monitor license warnings, but I am not sure when the Splunk license day ends and rolls over to the next day. Are all Splunk licenses set to UTC by default? Please advise. Thank you
I have rows in the form:

ID Field1 Field2 Field3

And I would like to create a histogram that shows the values of all three fields. I can make one for Field1 by doing stats count by Field1 span=1000, but I can't seem to figure out how I would get the other values into the same table. Do I need to do multiple searches and join them? How would I go about doing that?
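One way to avoid multiple searches is to rotate the three columns into rows with untable, then bucket and chart. A sketch, keeping the field names above:

    <your base search>
    | table ID Field1 Field2 Field3
    | untable ID field value
    | bin value span=1000
    | chart count over value by field

untable turns each FieldN column into a (field, value) row keyed by ID, bin groups the values into 1000-wide buckets, and chart produces one count column per original field, so all three histograms share the same table.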
Hi, I installed the latest versions of both the Spycloud add-on and app in Splunk Cloud. When I tried the setup, it does not seem to be working. Are this app and add-on supported in Splunk Cloud? Thanks