All Topics

I created a custom command to generate events from a REST API.

[cmdb]
filename = cmdb.py
generating = true
chunked = true
supports_multivalues = true

The command runs. The problem is that there are different field sets per host that I'm looping through, but I only get the fields where all the hosts have an entry.

Example:

Host     Field a   Field b
foo      Value     Value
foobar   Value
bar      Value     Value
fbar     Value

Field b will not show up in the Splunk results list. Code sample; I add the patch report only to those hosts which have one:

if len(sorted_patch_report) > 0:
    # take the first report and prefix its keys with "Patching_"
    sorted_patch_report = sorted_patch_report[0]
    sorted_patch_report_renamed = {"Patching_" + str(key): val for key, val in sorted_patch_report.items()}
    i.update(sorted_patch_report_renamed)
    yield dict(i)
else:
    self.logger.info("No patch report for " + i['fullQualifiedDomainName'])
    yield dict(i)

If I print the dict to the logger I see all the fields. Any idea?
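A hedged way to check whether the sparse fields actually reach Splunk, assuming the command is invoked as | cmdb (the field names are taken from the code above): force them into the results table instead of relying on the fields sidebar, which hides fields that appear in too few events.

| cmdb
| table fullQualifiedDomainName Patching_*

If the Patching_* columns show up here, the command is emitting them and only field discovery was hiding them.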
I'm working with a Google Super Admin and I'm trying to get Google DLP logs into Splunk Cloud. There is a HEC set up and the majority of the logs are flowing into Splunk via the HTTP Event Collector. However, the problem I'm running into is that I can see and search the DLP logs from the Google Admin Console, but when I search in Splunk those logs are not there. Google Workspace logs are coming in, and the Super Admin states that he is sending everything on their side into Splunk.
I need to understand how to integrate Oracle NetSuite logs with Splunk. I tried searching but was unable to find a proper method for this. Please help.
Where can I find more information about Nutanix to Splunk Cloud integration? I know there's an app, the Nutanix Flow Central (FSC) Splunk application, but it was marked as not available for Splunk Cloud, and that was two years ago. Does Splunk have a version that works for Splunk Cloud?
Hello!

I'm trying to build out a lookup of services on specific servers, so that I know when they've stopped. I wanted to use wildcards for the servers so I didn't need to type out a long list of them.

Here is some sample data and the base of the search I've been playing with:

host         Name       severity   failuresAllowed
server1234   service1   low        3
server1*     service2   high       1
server2*     service3   medium     2

index=windows source=service earliest=-20m [ inputlookup Windows_App_Services.csv | table host Name ]
| stats count(eval(if(State!="Running",1,null()))) as failureCount by host Name
| join host Name type=outer [ inputlookup Windows_App_Services.csv ]

The first inputlookup pulls in just the server name and service we're looking at, so that I search only those events. Then I count how many of those events have a State other than "Running", so I know how many times they weren't running during the 20-minute lookback period. Finally, I'd like to pull in severity and failuresAllowed so that I can use them to calculate severity in ITSI, but the join does not work because the host doesn't match what's in the lookup, since the lookup entries are wildcarded.

I've tried creating a wildcard match_type on that lookup, but that doesn't seem to help. Anyone have any ideas?

Thanks for your help!
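For what it's worth, a minimal sketch of the wildcard-lookup approach, assuming the CSV is wrapped in a lookup definition (hypothetically named windows_app_services), since match_type only takes effect on a definition, not on the raw file:

# transforms.conf
[windows_app_services]
filename = Windows_App_Services.csv
match_type = WILDCARD(host)

The join can then be replaced with a lookup, which honors the wildcards in the table:

index=windows source=service earliest=-20m [ inputlookup Windows_App_Services.csv | table host Name ]
| stats count(eval(if(State!="Running",1,null()))) as failureCount by host Name
| lookup windows_app_services host, Name OUTPUT severity, failuresAllowed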
I'm currently attempting to set up an environment using https://github.com/splunk/splunk-ansible. When I run the playbook it creates the appropriate user-seed.conf, but /opt/splunk/etc/passwd has no user. Therefore, when it tries to run the commands to turn a server into the cluster manager or a peer, it isn't able to do so. I'm running Rocky Linux 8.
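For reference, a minimal sketch of what the generated seed file should look like (the credential values are placeholders). Splunk only consumes this stanza during the first complete start, and it is at that point that the hashed user is written into /opt/splunk/etc/passwd, so an empty passwd suggests the first start never finished successfully:

# /opt/splunk/etc/system/local/user-seed.conf
[user_info]
USERNAME = admin
PASSWORD = placeholder-change-me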
I work in the IT department at Santander. Where do I start learning Splunk? My interest is in learning how to get data into Splunk, do all the monitoring, and so on. We work with real-time monitoring dashboards, cloud, and AI, and I want to learn how to do all of this monitoring from start to finish, delivering information in real time to the departments. I'm interested in taking the most complete course, and perhaps a certification, for this kind of role. I have 10 years of experience with SAP and 2 years with Ariba. Can this course be taken in Portuguese? In practice, how much do junior, mid-level, and senior Splunk analysts earn?
Looking to build a report that would display/identify those hosts that are reporting into Forwarder Management but are not sending logs, so I know which hosts I need to troubleshoot.
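A minimal sketch of one way to approach this, run on the deployment server: list the deployment clients over REST, then drop every host that has indexed data recently (the endpoint and field names are to the best of my knowledge, so verify them in your environment):

| rest /services/deployment/server/clients splunk_server=local
| fields hostname
| search NOT [| tstats count where index=* earliest=-24h by host
    | rename host as hostname
    | fields hostname ]

Whatever remains is phoning home to Forwarder Management but has sent nothing in the last 24 hours.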
I'm trying to rename the IPs of our servers to Splunk node names:

host_ip            host_name
ip-111-11-1-11     Searchhead
ip-111-11-1-12     Searchhead
ip-111-11-1-10     Masternode
ip-111-11-2-11     Indexer
ip-111-11-2-12     Indexer
ip-111-11-2-10     Deploymentserver

How do I get it to count the duplicates, like this?

host_ip            host_name
ip-111-11-1-11     Searchhead1
ip-111-11-1-12     Searchhead2
ip-111-11-1-10     Masternode
ip-111-11-2-11     Indexer1
ip-111-11-2-12     Indexer2
ip-111-11-2-10     Deploymentserver

Thanks in advance!
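A minimal sketch of one way to do the numbering, assuming the host_ip/host_name pairs above are already in the result set: number each repeated name with streamstats, then append the sequence number only where a name occurs more than once.

| streamstats count as seq by host_name
| eventstats max(seq) as total by host_name
| eval host_name = if(total > 1, host_name . seq, host_name)
| fields - seq total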
Hi All,

I need help with a multiselect input. When the user starts typing a value in the multiselect input, the suggestions should auto-populate. E.g. when I type "Splunk", the results below should populate:

Splunk multiselect spl
Splunk enterprise

Kindly provide your inputs/suggestions.

Thanks,
Neha
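A minimal sketch in Simple XML of a multiselect whose choices come from a search; a dynamically populated multiselect filters its suggestion list as the user types (the lookup name app_names.csv and the field name are hypothetical):

<input type="multiselect" token="app_tok">
  <label>Application</label>
  <fieldForLabel>name</fieldForLabel>
  <fieldForValue>name</fieldForValue>
  <search>
    <query>| inputlookup app_names.csv | fields name | dedup name | sort name</query>
  </search>
</input>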
Given a field containing a "userId", I want a count per day of unique userIds, split by "new" vs "returning". E.g. something that ends up with this output:

| timechart span=1d dc(userId) by newOrReturning

or, I suppose:

| bin _time span=1d
| stats dc(userId) by _time, newOrReturning

A userId is "new" on a given day if there is no record of it from any previous day, going as far back as a known fixed date. The bit I'm having trouble with is how to set newOrReturning, since (I assume) I need to somehow search back in time for a matching userId from previous days.
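A minimal sketch of one approach, assuming the search's earliest time is pinned to the known fixed date and using a placeholder index: reduce events to one row per user per day, then flag the first day each userId appears.

index=web earliest=-90d@d
| bin _time span=1d
| stats count by _time userId
| streamstats count as dayNum by userId
| eval newOrReturning = if(dayNum == 1, "new", "returning")
| timechart span=1d dc(userId) by newOrReturning

The stats output is sorted by _time, so streamstats sees each user's days in order and dayNum=1 marks the first day that userId was ever seen within the window.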

This post has been reviewed and site admins believe the most appropriate place for it is in the Getting Data In board. This post has been moved to that board.

A few DC servers are not showing source=wineventlog:security. Can someone provide the troubleshooting steps to find out what changed in the configuration files, or whether any additional change is required?
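As a hedged starting point, the forwarder's own internal logs usually say why an event log input stopped; host=dc01 is a placeholder for an affected domain controller:

index=_internal source=*splunkd.log* host=dc01 log_level IN (ERROR, WARN) WinEventLog

If nothing stands out there, running splunk btool inputs list --debug on the affected host shows which inputs.conf stanzas are actually in effect and which app they come from.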
Hello All,

After adding the Remedy add-on on our Splunk search head cluster, I am adding the SOAP and REST account details in the Splunk UI. While configuring, I get the following error message for the SOAP account:

"Unable to reach server at https://A.B.C.X.D:8443/arsys/WSDL/public/X.X.X.X/HPD_IncidentInterface_WS. Check configurations and network settings."

Whereas for the REST account I get this error message:

"Unable to reach BMC Remedy server at 'https://X.X.X.X:8443/api/jwt/login'. Check configurations and network settings."

Notes:
1. https://A.B.C.X.D:8443 is the IP address of my Remedy mid tier server.
2. https://X.X.X.X:8443 is the IP address of my Remedy AR server.
3. Ports are open from Remedy to Splunk and from Splunk to Remedy on TCP port 8443.

As far as I have checked, this is an issue with my AR server. Could you please assist here as soon as possible? Thanks!
I have a table with the following data:

08     09     10
data   data   data
data   data   data
data   data   data

I want to rename 08, 09, and 10 to Aug21, Sep21, and Oct21. The field names can be anything from 01 to 12 (01 = Jan21, 02 = Feb21, 03 = Mar21, etc.). How can I rename them in this case?
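A minimal sketch of one straightforward way: list all twelve possible renames, since rename quietly skips any field that isn't present in the results (the numeric names are quoted so they aren't mistaken for values):

| rename "01" as Jan21, "02" as Feb21, "03" as Mar21, "04" as Apr21, "05" as May21, "06" as Jun21, "07" as Jul21, "08" as Aug21, "09" as Sep21, "10" as Oct21, "11" as Nov21, "12" as Dec21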
Hi. I have a search as below:

index=myindex sourcetype=mytype field1=* field2=*
| stats count(eval(condition1)) as count1 count(eval(condition2)) as count2 by field1 field2

Now, field1 and field2 have more than 10k values, so I need to find the top 100 values of field1 & field2 and use only those in my stats. I tried something like this:

index=myindex sourcetype=mytype field1=* field2=*
    [ search index=myindex sourcetype=mytype field1=* field2=*
    | top 100 field1 field2
    | fields field1 field2
    | format ]
| stats count(eval(condition1)) as count1 count(eval(condition2)) as count2 by field1 field2

but it didn't work as expected.
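A minimal sketch of an equivalent subsearch, offered only as a variant to try: it builds the top-100 list with stats and sort instead of top, so there are no extra count/percent fields to strip, and it relies on the subsearch's default result formatting (condition1/condition2 stand in for the real expressions):

index=myindex sourcetype=mytype field1=* field2=*
    [ search index=myindex sourcetype=mytype field1=* field2=*
    | stats count by field1 field2
    | sort 100 -count
    | fields field1 field2 ]
| stats count(eval(condition1)) as count1 count(eval(condition2)) as count2 by field1 field2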
Hi everybody. We currently have a task which involves translating QRadar correlation rules to Splunk ones. The Splunk rules will be used in a Splunk Enterprise Security environment. The big issue we are facing is the following: for some elements in QRadar, it is not clear whether we have a corresponding element in Splunk. One of these is the event category; the QRadar definition of this element is here: https://www.ibm.com/docs/en/qsip/7.4?topic=administration-event-categories

In a nutshell, this mechanism sorts events into high-level categories which contain lower/more specific categories. For example, the macro category Malware contains Backdoor, Spyware, and so on. So my question is: do we have a similar mechanism in Splunk? For example, in a QRadar rule I may have, among the filters, "when the event category for the event is one of the following: Potential Exploit.Potential Botnet Connection"; how can I check this in Splunk? If there is no mechanism to automate this and we have to implement the check manually, what would be the best way to get the category?
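For context, the closest built-in analogue in an Enterprise Security environment is the Common Information Model, where eventtypes and tags assign events to categories such as malware or attack; a minimal, hedged sketch of filtering on those tags (the index is a placeholder):

index=security tag=malware
| stats count by eventtype, sourcetype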
I tried to get data using the Google Workspace Add-on, but the following error occurs. Could you please tell me how to resolve this error?

[error message]
message from "/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/Splunk_TA_Google_Workspace/bin/activity_report.py" Extra data: line 1 column 6 (char 5)
Hello! I'm trying to onboard several Trend Micro cloud applications, like Apex One as a Service, but it just doesn't work.

On the Apex One cloud platform I can get the URL, Application ID, and API Key necessary to connect, but it doesn't seem to work. I get the following errors in apex_one_as_a_service_api.log:

2021-11-12 09:56:08,859 DEBUG pid=105063 tid=MainThread file=connectionpool.py:_make_request:437 | https://xj7qb2.manage.trendmicro.com:443 "GET /WebApp/api/v1/Logs/officescan_virus?output_format=CEF&page_token=0&since_time=1636707248 HTTP/1.1" 404 1245

and:

2021-11-12 10:00:08,804 ERROR pid=122037 tid=MainThread file=base_modinput.py:log_error:309 | Get error when collecting events.
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/Apex-One-as-a-Service/bin/apex_one_as_a_service/aob_py3/modinput_wrapper/base_modinput.py", line 128, in stream_events
    self.collect_events(ew)
  File "/opt/splunk/etc/apps/Apex-One-as-a-Service/bin/apex_one_as_a_service_api.py", line 64, in collect_events
    input_module.collect_events(self, ew)
  File "/opt/splunk/etc/apps/Apex-One-as-a-Service/bin/input_module_apex_one_as_a_service_api.py", line 91, in collect_events
    r_json = response.json()
  File "/opt/splunk/etc/apps/Apex-One-as-a-Service/bin/apex_one_as_a_service/aob_py3/requests/models.py", line 897, in json
    return complexjson.loads(self.text, **kwargs)
  File "/opt/splunk/lib/python3.7/json/__init__.py", line 348, in loads
    return _default_decoder.decode(s)
  File "/opt/splunk/lib/python3.7/json/decoder.py", line 337, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "/opt/splunk/lib/python3.7/json/decoder.py", line 355, in raw_decode
    raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)

splunkd.log itself says the same:

11-12-2021 10:02:08.931 +0100 ERROR ExecProcessor - message from "/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/Apex-One-as-a-Service/bin/apex_one_as_a_service_api.py" ERRORExpecting value: line 1 column 1 (char 0)

I'm trying to use the following app for it: https://splunkbase.splunk.com/app/5431/

What is wrong? Does anyone know how to make this work?

PS: I'm sorry I can't use the "insert code" function here since it throws errors when I try.
Hello All,

I have a search that uses the stats command and displays the results as follows. Note: I have stripped out some columns.

index=index1 sourceType=xxxx
| eventstats count(action) as Per_User_failures by user
| stats latest(_time) as _time, values(host), values(src_ip), dc(src_ip) as srcIpCount, values(user), values(Failure_Reason), dc(user) as userCount, values(Per_User_failures) as Per_User_failures by Workstation_Name

Now, if I further add a | where Per_User_failures > 10 condition, the search shows "No Results Found":

index=index1 sourceType=xxxx
| eventstats count(action) as Per_User_failures by user
| stats latest(_time) as _time, values(host), values(src_ip), dc(src_ip) as srcIpCount, values(user), values(Failure_Reason), dc(user) as userCount, values(Per_User_failures) as Per_User_failures by Workstation_Name
| where Per_User_failures > 10

This is incorrect, as there are some values where Per_User_failures is greater than 10, such as 11, 12, 13, 1037, etc. How can I make the where clause check any of the values under the Per_User_failures column?
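Since values(Per_User_failures) produces a multivalue field, the > comparison fails as soon as a Workstation_Name has more than one value. A minimal sketch of one fix: also compute the per-workstation maximum inside the stats and filter on that single-value field.

index=index1 sourceType=xxxx
| eventstats count(action) as Per_User_failures by user
| stats latest(_time) as _time, values(host), values(src_ip), dc(src_ip) as srcIpCount, values(user), values(Failure_Reason), dc(user) as userCount, values(Per_User_failures) as Per_User_failures, max(Per_User_failures) as maxPerUserFailures by Workstation_Name
| where maxPerUserFailures > 10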