All Topics



Hi All, I need help with the multiselect input type. When the user starts typing a value in the multiselect input, the suggestions should auto-populate. E.g. when I type "Splunk", the suggestions below should populate: Splunk multiselect spl, Splunk enterprise. Kindly provide your inputs/suggestions. Thanks, Neha
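One common approach (a sketch, assuming a Simple XML dashboard; the index and field names are placeholders) is to populate the multiselect dynamically from a search, since Splunk's multiselect already filters the loaded choices as you type:

```
<input type="multiselect" token="product_tokens">
  <label>Product</label>
  <fieldForLabel>suggestion</fieldForLabel>
  <fieldForValue>suggestion</fieldForValue>
  <!-- populating search; replace index and field with your own -->
  <search>
    <query>index=my_index | stats count by suggestion | fields suggestion</query>
  </search>
</input>
```

Note that the type-ahead filtering happens against options already returned by the populating search; true server-side search-as-you-type generally requires custom JavaScript.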
Given a field containing a "userId", I want a count per day of unique userIds by "new" vs "returning". E.g. Ends up with this output... | timechart span=1d dc(userId) by newOrReturning or I suppose... | bin _time span=1d | stats dc(userId) by _time, newOrReturning A userId is "new" on a given day if there is no record of it from any previous day, going as far back as a known fixed date. The bit I'm having trouble with is how to set newOrReturning, since (I assume) I need to somehow search back in time for a matching userId from previous days.
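One way to sketch this, assuming the search window starts at the known fixed date so that eventstats can see every userId's first appearance:

```
index=my_index earliest=<known fixed date>
| bin _time span=1d
| eventstats min(_time) as firstSeen by userId
| eval newOrReturning=if(_time=firstSeen, "new", "returning")
| stats dc(userId) by _time, newOrReturning
```

Here a userId counts as "new" only on the first day it appears inside the window; the index name is a placeholder.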

This post has been reviewed and site admins believe the most appropriate place for it is in the Getting Data In board. This post has been moved to that board.

A few DC servers are not showing source=wineventlog:security. Can someone provide troubleshooting steps to find what change happened in the configuration files, or whether any additional change is required?
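As a first check (a sketch; substitute your own hosts), tstats can show when each host last sent Security events, which narrows down whether the input stopped on specific DCs:

```
| tstats latest(_time) as lastSeen where source="WinEventLog:Security" by host
| eval lastSeen=strftime(lastSeen, "%F %T")
| sort lastSeen
```

From there, compare inputs.conf on a working vs. a non-working DC (e.g. with `splunk btool inputs list --debug` on each forwarder) to spot the configuration difference.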
Hello All, After adding the Remedy add-on on the Splunk search head cluster, I am adding the SOAP and REST account details in the Splunk UI. While configuring, I get the below error message for the SOAP account:
"Unable to reach server at https://A.B.C.X.D:8443/arsys/WSDL/public/X.X.X.X/HPD_IncidentInterface_WS. Check configurations and network settings."
Whereas for the REST account I get the below error message:
"Unable to reach BMC Remedy server at 'https://X.X.X.X:8443/api/jwt/login'. Check configurations and network settings."
Note:
1. https://A.B.C.X.D:8443 is the IP address of my Remedy mid-tier server.
2. https://X.X.X.X:8443 is the IP address of my Remedy AR server.
3. Ports are open from Remedy to Splunk and Splunk to Remedy on TCP port 8443.
As far as I have checked, this is an issue with my AR server. Could you please assist me here ASAP? Thanks!
I have a table with the following data:
08 09 10
data data data
data data data
data data data
I want to rename 08, 09, 10 to Aug21, Sep21 and Oct21. The field name can be anything from 01 to 12: 01 = Jan21, 02 = Feb21, 03 = Mar21, etc. How can I rename them in this case?
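Since `rename` should simply skip fields that are not present in the results, one sketch is to list all twelve mappings and let Splunk apply whichever columns exist (the Mon21 labels are assumed from the 2021 data shown; numeric field names are quoted):

```
| rename "01" as Jan21, "02" as Feb21, "03" as Mar21, "04" as Apr21,
         "05" as May21, "06" as Jun21, "07" as Jul21, "08" as Aug21,
         "09" as Sep21, "10" as Oct21, "11" as Nov21, "12" as Dec21
```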
Hi. I have a search as below:
index=myindex sourcetype=mytype field1=* field2=* | stats count(eval(condition1)) as count1 count(eval(condition2)) as count2 by field1 field2
Now, field1 and field2 have more than 10k values, so I need to find the top 100 values of field1 & field2 and use only those in my | stats. I tried something like this:
index=myindex sourcetype=mytype field1=* field2=* [|search index=myindex sourcetype=mytype field1=* field2=* |top 100 field1 field2 |fields field1 field2 |format] |stats count(eval(condition1)) as count1 count(eval(condition2)) as count2 by field1 field2
but it didn't work as expected.
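One sketch of the subsearch approach (note that `top 100 field1 field2` returns the top 100 *combinations* of the two fields, and the `count`/`percent` columns that `top` adds must be dropped before `format`):

```
index=myindex sourcetype=mytype field1=* field2=*
    [ search index=myindex sourcetype=mytype field1=* field2=*
      | top limit=100 field1, field2
      | fields field1, field2
      | format ]
| stats count(eval(condition1)) as count1, count(eval(condition2)) as count2 by field1, field2
```

If the intent is instead the top 100 values of each field independently, use two subsearches, one per field.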
Hi everybody. Currently we have a task which involves translating QRadar correlation rules into Splunk ones. The Splunk rules will be used in a Splunk Enterprise Security environment. The big issue we are facing is the following: we have some elements in QRadar for which it is not clear whether there is a corresponding element in Splunk. One of these is the event category; the QRadar definition of this element is here: https://www.ibm.com/docs/en/qsip/7.4?topic=administration-event-categories In a nutshell, this mechanism categorizes events into high-level categories which contain lower/more specific categories. For example, we have the macro category Malware, which contains Backdoor, Spyware and so on. So, my question is: do we have a similar mechanism in Splunk? For example, in a QRadar rule I may have, among the filters, "when the event category for the event is one of the following: Potential Exploit.Potential Botnet Connection"; how can I check this in Splunk? If there is no mechanism to automate this and we have to set this check manually, what would be the best way to get the category?
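Splunk's closest equivalents are event types and tags (plus the CIM data models that Enterprise Security correlation searches build on). A sketch, with assumed index, sourcetype and search terms:

```
# eventtypes.conf -- define a named class of events
[malware_backdoor]
search = index=security sourcetype=ids signature="*backdoor*"

# tags.conf -- attach one or more category tags to that event type
[eventtype=malware_backdoor]
malware = enabled
backdoor = enabled
```

A correlation search can then filter with `tag=malware`, much like a QRadar category test; the mapping from raw events to tags has to be maintained manually or inherited from CIM-compliant add-ons.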
I tried to get data using the Google Workspace Add-on, but the following error occurs. Could you please tell me how to resolve it?
[error message]
message from "/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/Splunk_TA_Google_Workspace/bin/activity_report.py" Extra data: line 1 column 6 (char 5)
Hello! I am trying to onboard several Trend Micro cloud applications, like Apex One as a Service, but it just doesn't work. On the Apex One cloud platform I can get the URL, Application ID and API key necessary to connect, but it doesn't seem to work. I get the following errors in apex_one_as_a_service_api.log:

2021-11-12 09:56:08,859 DEBUG pid=105063 tid=MainThread file=connectionpool.py:_make_request:437 | https://xj7qb2.manage.trendmicro.com:443 "GET /WebApp/api/v1/Logs/officescan_virus?output_format=CEF&page_token=0&since_time=1636707248 HTTP/1.1" 404 1245

and:

2021-11-12 10:00:08,804 ERROR pid=122037 tid=MainThread file=base_modinput.py:log_error:309 | Get error when collecting events.
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/Apex-One-as-a-Service/bin/apex_one_as_a_service/aob_py3/modinput_wrapper/base_modinput.py", line 128, in stream_events
    self.collect_events(ew)
  File "/opt/splunk/etc/apps/Apex-One-as-a-Service/bin/apex_one_as_a_service_api.py", line 64, in collect_events
    input_module.collect_events(self, ew)
  File "/opt/splunk/etc/apps/Apex-One-as-a-Service/bin/input_module_apex_one_as_a_service_api.py", line 91, in collect_events
    r_json = response.json()
  File "/opt/splunk/etc/apps/Apex-One-as-a-Service/bin/apex_one_as_a_service/aob_py3/requests/models.py", line 897, in json
    return complexjson.loads(self.text, **kwargs)
  File "/opt/splunk/lib/python3.7/json/__init__.py", line 348, in loads
    return _default_decoder.decode(s)
  File "/opt/splunk/lib/python3.7/json/decoder.py", line 337, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "/opt/splunk/lib/python3.7/json/decoder.py", line 355, in raw_decode
    raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)

splunkd.log itself says the same:

11-12-2021 10:02:08.931 +0100 ERROR ExecProcessor - message from "/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/Apex-One-as-a-Service/bin/apex_one_as_a_service_api.py" ERROR Expecting value: line 1 column 1 (char 0)

I'm trying to use the following app: https://splunkbase.splunk.com/app/5431/ What is wrong? Does anyone know how to make this work? PS: I'm sorry I can't use the "insert code" function here since it throws errors when I try.
Hello All, I have a search that uses the stats command and displays the results as follows. Note: I have stripped out some columns.
index=index1 sourceType=xxxx | eventstats count(action) as Per_User_failures by user | stats latest(_time) as _time, values(host), values(src_ip), dc(src_ip) as srcIpCount, values(user), values(Failure_Reason), dc(user) as userCount, values(Per_User_failures) as Per_User_failures by Workstation_Name
Now, if I further add a | where Per_User_failures > 10 condition, the search shows "No results found".
index=index1 sourceType=xxxx | eventstats count(action) as Per_User_failures by user | stats latest(_time) as _time, values(host), values(src_ip), dc(src_ip) as srcIpCount, values(user), values(Failure_Reason), dc(user) as userCount, values(Per_User_failures) as Per_User_failures by Workstation_Name | where Per_User_failures > 10
This is incorrect, as there are some values where Per_User_failures is greater than 10, such as 11, 12, 13, 1037 etc. How can I make the where clause check any of the values under the "Per_User_failures" column?
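The likely cause is that values(Per_User_failures) produces a multivalue field, and `where` does not compare each value of a multivalue field numerically. One sketch is to compute the maximum alongside it and filter on that instead:

```
index=index1 sourceType=xxxx
| eventstats count(action) as Per_User_failures by user
| stats latest(_time) as _time, values(host), values(src_ip), dc(src_ip) as srcIpCount,
        values(user), values(Failure_Reason), dc(user) as userCount,
        values(Per_User_failures) as Per_User_failures,
        max(Per_User_failures) as Max_failures by Workstation_Name
| where Max_failures > 10
```

This keeps a Workstation_Name row whenever any of its per-user failure counts exceeds 10.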
Hello experts, I have Extreme Networks VSP 7000 and VSP 8000 switches that I want to send syslog from to our Splunk. When I look for an add-on or app for those devices, there is none. Is it possible to collect syslog in Splunk without an add-on or app? Is there any documentation?
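Yes; Splunk can receive syslog natively on a network input, and a device-specific app mainly adds field extractions rather than being required for collection. A sketch for inputs.conf on a heavy forwarder or indexer, where the port, index and sourcetype are all placeholders:

```
# inputs.conf (port/index/sourcetype are placeholders)
[udp://514]
sourcetype = extreme:syslog
index = network
connection_host = ip
```

For production, the usual recommendation is a syslog server (rsyslog/syslog-ng) writing to files monitored by a universal forwarder, since that buffers data across Splunk restarts.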
I have a problem where an admin role user cannot see another analyst user to assign specific notable events to. However, I do not have any problems when I as another admin user try to assign the analyst user that the other admin role cannot see. I have checked notable_owners_lookup and it was filled correctly with the expected users. What could be the issue here and where should we check?
The link below provides the following paragraph: "...HEC responds with the status information to the client. The body of the reply contains the status of each of the requests that the client queried. A true status indicates that the event that corresponds to that ackID was replicated at the desired replication factor. A true status does not guarantee that the event was indexed, because the parsing pipeline might drop events that can't be parsed. A false status indicates that there is no status information for that ackID, or that the corresponding event has not been indexed." Reference: https://docs.splunk.com/Documentation/Splunk/8.2.3/Data/AboutHECIDXAck This seems contradictory. How can the event for the ackID be replicated at the desired replication factor if it does not guarantee that the event was indexed? However, I noticed that earlier in the documentation, with indexer acknowledgement turned off, it states: "By default, when HEC receives an event successfully, it immediately sends an HTTP Status 200 code to the sender of the data. However, this only means that the event data appears to be valid, and HEC sends the status message before the event data enters the processing pipeline." Does the lack of guarantee only refer to when acknowledgement is NOT enabled? I.e. does an ackID value of "True" guarantee that the data has been indexed (and replicated) successfully?
hi, I have a local server on my network and would like to send data from this local host to the cloud instance. I have followed the instructions here, https://docs.splunk.com/Documentation/Forwarder/8.2.3/Forwarder/ConfigSCUFCredentials and installed the splunkclouduf.spl obtained from my cloud instance profile. However I seem to be getting the following errors: 11-12-2021 13:56:53.874 +0800 WARN X509Verify [30879 HTTPDispatch] - X509 certificate (O=SplunkUser,CN=SplunkServerDefaultCert) should not be used, as it is issued by Splunk's own default Certificate Authority (CA). This puts your Splunk instance at very high-risk of the MITM attack. Either commercial-CA-signed or self-CA-signed certificates must be used; see: <http://docs.splunk.com/Documentation/Splunk/latest/Security/Howtoself-signcertificates> 11-12-2021 13:56:53.901 +0800 INFO UiHttpListener [30942 WebuiStartup] - Web UI disabled in web.conf [settings]; not starting 11-12-2021 13:56:54.039 +0800 INFO TcpOutputProc [30923 parsing] - _isHttpOutConfigured=NOT_CONFIGURED 11-12-2021 13:56:54.040 +0800 ERROR TcpOutputProc [30923 parsing] - LightWeightForwarder/UniversalForwarder not configured. Please configure outputs.conf. 11-12-2021 13:56:58.961 +0800 WARN TailReader [30932 tailreader0] - Could not send data to output queue (parsingQueue), retrying...   I thought that once we deploy via the splunkclouduf.spl, we need not configure any outputs.conf file?   Any assistance is greatly appreciated.
I have two VIP names, and I would like to know the number of hits to them. I am new to Splunk and not sure how to write a query. Could anyone help?
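A minimal sketch, assuming the traffic is already indexed and that some field (here `dest`) holds the VIP name; the index, field and VIP names below are all placeholders:

```
index=web_logs dest IN ("vip1.example.com", "vip2.example.com")
| stats count as hits by dest
```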
Hi All, I have been using wildcards in inputs.conf for a very long time, but recently, when I give the below path with wildcards, Splunk is not able to capture all the files:
[monitor://C:\logdir\*\*\Katre\log\*.log]
Around 178 files should get selected with the above monitor stanza, but the Splunk forwarder is only sending logs for 20-30 files. Am I hitting any limit, or is there some other limitation?
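One way to narrow this down (the hostname is a placeholder) is to check the forwarder's tailing logs for skipped files or limit warnings:

```
index=_internal host=<your_forwarder> sourcetype=splunkd
    (component=TailReader OR component=TailingProcessor)
    (log_level=WARN OR log_level=ERROR)
| stats count by component, log_level
```

It may also be worth checking `ignoreOlderThan` on the stanza and the `max_fd` setting under `[inputproc]` in limits.conf, which caps how many files the tailing processor holds open at once.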
Newly upgraded Splunk to 8.1.5 from 7.3.x and seeing the below error message on the DMC Search Activity: Instance dashboard:
Multiple renames to field 'Type' detected. Only the last one will appear, and previous 'from' fields will be dropped.
Any ideas or suggestions on how to fix this?
Hello There, I'm a bit rusty when it comes to the syntax and am trying to get a better grasp. I have an if/else function: if, let's say, ABC is greater than 3600, add 21600 seconds; else don't add any time. I have 3 of these types of conditions, but they are all under the same field name. The struggle for me is combining these if/else functions into one multi-conditional function. I have spent a while looking at how to do this, but I didn't run into any examples that included strftime or strptime. Any guidance on this type of syntax is appreciated.

| eval SLA_Breach=case(ABC>3600, strftime(strptime(releaseToCarsTime, "%Y-%m-%d %H:%M:%S.%6N") +21600, "%Y-%m-%d %H:%M:%S.%6N"),"none")
| eval SLA_Breach=if(DEF>2800,strftime(strptime(releaseToCarsTime, "%Y-%m-%d %H:%M:%S.%6N") +172800, "%Y-%m-%d %H:%M:%S.%6N"),"none")
| eval SLA_Breach=if(GHI>1400,strftime(strptime(releaseToCarsTime, "%Y-%m-%d %H:%M:%S.%6N") +86400, "%Y-%m-%d %H:%M:%S.%6N"),"none")
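Since consecutive `eval`s of the same field overwrite each other, only the last condition survives. `case()` already accepts multiple condition/value pairs evaluated in order, so the three can be folded into one expression (a sketch using the field names from the question; `true()` supplies the default branch):

```
| eval SLA_Breach=case(
    ABC>3600, strftime(strptime(releaseToCarsTime, "%Y-%m-%d %H:%M:%S.%6N") + 21600,  "%Y-%m-%d %H:%M:%S.%6N"),
    DEF>2800, strftime(strptime(releaseToCarsTime, "%Y-%m-%d %H:%M:%S.%6N") + 172800, "%Y-%m-%d %H:%M:%S.%6N"),
    GHI>1400, strftime(strptime(releaseToCarsTime, "%Y-%m-%d %H:%M:%S.%6N") + 86400,  "%Y-%m-%d %H:%M:%S.%6N"),
    true(), "none")
```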
I need help extracting the fields below. Can someone help?
reference = 205, \"sample\":12345678, \"logic\":\"AB000012\", \"status\":0, \"result_message\":null, \"end_time\":null
Expected fields:
sample=12345678
logic=AB000012
status=0
result_message=null
end_time=null
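A loose sketch with `rex`: the `\W+` pattern skips the escaped quotes and colon between each key and its value, so the exact backslash escaping in the raw event does not matter:

```
| rex "sample\W+(?<sample>\d+)"
| rex "logic\W+(?<logic>\w+)"
| rex "status\W+(?<status>\d+)"
| rex "result_message\W+(?<result_message>\w+)"
| rex "end_time\W+(?<end_time>\w+)"
```

This captures null-or-word values; if result_message can contain spaces or commas in real data, the last two patterns would need tightening against actual events.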