All Topics


Currently I am trying to extract the crossReferenceId value using the rex query below. It works and I can extract data, but the rex does not appear to extract all of the values from the logs. For example, if I search for an individual "agentname" I cannot find it in the results (however, I can find the same agentname without the rex). It seems the rex below is not extracting the complete set of values; some may be missing.

index=xyz "crossReferenceId" | rex "\{\"crossReferenceId\"\:\"(?<agentname>\w*)\"\,\"providerInstanceId\"\:\"(?<providerInstanceId>\w*............................)\"\,\"userId\"\:\"(?<userid>\w*............................)\"\,\"dateModified\"\:\"(?<modifieddate>\d*................)\"\}" | search agentname="*" providerInstanceId="*" userid="*" modifieddate="*" | stats count by agentname, providerInstanceId, userid, modifieddate | table agentname, providerInstanceId, userid, modifieddate

Sample event:

2022-09-21 21:18:23.046 TRACE 5028 --- [pool-3-thread-2] i.e.p.c.p.OAuthAuthenticationInterceptor : Host-Client Response: GET | 200 from https://xyz.com.com/api/crossReferences?$filter=p: Payload: {"@odata.context":"$metadata#crossReferences","value":[{"crossReferenceId":"asdfdsf","providerInstanceId":"c8d1a13b-2ebc-4762-acd0-c788bdd79125","userId":"336d6a6f-3124-4c7c-b57a-692fa5114c2e","dateModified":"2022-08-09T12:17:06Z"},{"crossReferenceId":"dgsgdf","providerInstanceId":"c8d1a13b-2ebc-4762-acd0-c788bdd79125","userId":"79729cc5-d454-44dc-ad60-0a9caadef580","dateModified":"2022-07-23T11:35:32Z"},{"crossReferenceId":"wqruytuere","providerInstanceId":"c8d1a13b-2ebc-4762-acd0-c788bdd79125","userId":"6fe5f478-fbcb-460f-99b8-af1757c03bc5","dateModified":"2021-06-27T11:07:43Z"},{"crossReferenceId":"yuiyiyui","providerInstanceId":"c8d1a13b-2ebc-4762-acd0-c788bdd79125","userId":"511da6bf-c21f-40bf-a18a-23c9ad472a9d","dateModified":"2022-05-26T11:49:18Z"},{"crossReferenceId":"ttttttt","providerInstanceId":"c8d1a13b-2ebc-4762-acd0-c788bdd79125","userId":"251a6976-1460-49b8-a3cc-5126cb2caa00","dateModified":"2022-08-23T11:11:47Z"},{"crossReferenceId":"ytujty","providerInstanceId":"c8d1a13b-2ebc-4762-acd0-c788bdd79125","userId":"7c17da4f-2181-4392-abe9-0e8ea8290234","dateModified":"2020-10-24T11:25:46Z"},{"crossReferenceId":"iljkljlhl","providerInstanceId":"c8d1a13b-2ebc-4762-acd0-c788bdd79125","userId":"54e850d8-e69e-4749-8244-f2700eec4d0f","dateModified":"2022-03-26T11:33:12Z"},{"crossReferenceId":"xcvxcvvcvx","providerInstanceId":"c8d1a13b-2ebc-4762-acd0-c788bdd79125","userId":"6465cce8-2d40-4661-bc9a-6473e4a09597","dateModified":"2022-04-09T11:27:12Z"},{"crossReferenceId":"ertwetret","providerInstanceId":"c8d1a13b-2ebc-4762-acd0-c788bdd79125","userId":"c679dbe2-e803-4057-92ca-106ed48370b8","dateModified":"2022-09-08T11:23:50Z"},{"crossReferenceId":"tyutyutu","providerInstanceId":"c8d1a13b-2ebc-4762-acd0-c788bdd79125","userId":"8e63a413-f4e4-46cd-aa10-bf86206079de","dateModified":"2021-11-22T12:17:43Z"},{"crossReferenceId":"aaaaaaa","providerInstanceId":"c8d1a13b-2ebc-4762-acd0-c788bdd79125","userId":"71255798-366e-4d1e-8654-c7adcbeb7473","dateModified":"2022-06-23T11:36:02Z"},{"crossReferenceId":"erererere","providerInstanceId":"c8d1a13b-2ebc-4762-acd0-c788bdd79125","userId":"20e39e30-d31b-4ad2-8993-b087104e34fa","dateModified":"2021-09-13T11:10:05Z"},{"crossReferenceId":"yutyuyutyu","providerInstanceId":"c8d1a13b-2ebc-4762-acd0-c788bdd79125","userId":"6735fd0b-1148-4193-8971-f7a3afadb807","dateModified":"2022-07-25T11:20:29Z"},{"crossReferenceId":"ertrtrttr","providerInstanceId":"c8d1a13b-2ebc
-4762-acd0-c788bdd79125","userId":"bf3ffa03-83e8-4973-a292-817d0fd9a412","dateModified":"2022-08-23T11:11:47Z"},{"crossReferenceId":"tyuyuyuyu","providerInstanceId":"c8d1a13b-2ebc-4762-acd0-c788bdd79125","userId":"5e622f17-7dce-4f2b-a264-1224fc709469","dateModified":"2022-08-30T21:07:02Z"},{"crossReferenceId":"wewewewewe","providerInstanceId":"c8d1a13b-2ebc-4762-acd0-c788bdd79125","userId":"b46acff6-aedf-45ab-b353-2ce699c0c454","dateModified":"2022-08-23T11:35:20Z"}]}
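
One possible reason for the missing values: by default, rex only captures the first match in an event, and this payload contains many crossReferenceId objects per event. A minimal sketch of an alternative, assuming the index and field names above; max_match=0 keeps every match, and mvzip/mvexpand pairs the values back up per object:

index=xyz "crossReferenceId"
| rex max_match=0 "\"crossReferenceId\":\"(?<agentname>[^\"]+)\",\"providerInstanceId\":\"(?<providerInstanceId>[^\"]+)\",\"userId\":\"(?<userid>[^\"]+)\",\"dateModified\":\"(?<modifieddate>[^\"]+)\""
| eval combined=mvzip(mvzip(mvzip(agentname, providerInstanceId, "|"), userid, "|"), modifieddate, "|")
| mvexpand combined
| eval combined=split(combined, "|"), agentname=mvindex(combined, 0), providerInstanceId=mvindex(combined, 1), userid=mvindex(combined, 2), modifieddate=mvindex(combined, 3)
| stats count by agentname, providerInstanceId, userid, modifieddate

Alternatively, since the payload is JSON, spath may be simpler if the JSON portion of the event can be isolated first.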
A notable event triggered 30,000 notables. How can I delete them all?
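
A minimal sketch of one way this is sometimes handled, assuming the notables live in the default notable index and the role running the search has the can_delete capability; the search_name value is hypothetical, and | delete only makes events unsearchable rather than reclaiming disk space:

index=notable search_name="My Noisy Correlation Search"
| delete

Bulk-closing or suppressing the notables from Incident Review is usually the less destructive option.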
Hi, I have a field X with values similar to the following, as two examples of possible values: "device-group APCC1_Core_Controller pre-rulebase application-override rules NFS-bypass UDP-1" and "device-group APCC1_Core_Controller pre-rulebase application-override rules". I need to extract the value between "device-group" and "pre-rulebase ..." and assign it to Y. So, if X = "device-group APCC1_Core_Controller pre-rulebase application-override rules NFS-bypass UDP-1" then Y = "APCC1_Core_Controller", and if X = "device-group APCC1_Core_Controller pre-rulebase application-override rules" then Y = "APCC1_Core_Controller". What would the rex command be? Thanks,
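
A minimal sketch, assuming Y is always a single whitespace-free token sitting between the literal words device-group and pre-rulebase:

| rex field=X "device-group\s+(?<Y>\S+)\s+pre-rulebase"

If the value could itself contain spaces, a lazy capture such as (?<Y>.+?) in place of (?<Y>\S+) would be the safer variant.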
Hi, I have a scenario where I receive multiple requests that contain the same field value, basically OrderNumber. So the backend is receiving duplicate orders from the front end. It only happens every now and then, and I'd like to plot a graph that shows exactly when it happens. I think just the count of duplicates vs _time would be enough. I've tried the query below, but it gives me a distorted graph. Is there a better way to achieve this?

index=myapp OrderService "HttpMethod=POST" | rex field=_raw "orderNumber\"\:\s\"(?<orderNumber>[^\"]+)" | bin span=15m _time | stats count by _time orderNumber | where count > 1 | table _time count

Let me know if anyone has suggestions on how to do this in a better way.
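
A minimal sketch of one alternative, reusing the same base search and extraction: after finding order numbers that repeat within a 15-minute bucket, roll them up to one row per bucket so the chart gets a single series, and let timechart fill the quiet buckets:

index=myapp OrderService "HttpMethod=POST"
| rex field=_raw "orderNumber\"\:\s\"(?<orderNumber>[^\"]+)"
| bin span=15m _time
| stats count by _time orderNumber
| where count > 1
| stats dc(orderNumber) as duplicated_orders by _time
| timechart span=15m sum(duplicated_orders) as duplicated_orders

dc(orderNumber) counts how many distinct order numbers were duplicated in each bucket; swapping the second stats to sum(count) would instead give the total number of duplicate requests per bucket.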
Hello, I recently migrated a few of my indexes to SmartStore indexes using Azure. After the migration, when I go to the Indexes page in the Splunk Web UI, the "New Index" button is disabled and it says "Disabled new index for Smart store Indexes". Does this mean I can no longer create new Splunk indexes directly via Splunk Web, or is it a bug? How can I re-enable the "New Index" button? (I am using Splunk Enterprise version 9.0.1 on a single instance.)
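
Whatever the state of the button, indexes can still be declared in configuration. A minimal sketch of an indexes.conf stanza for a new SmartStore index, assuming a remote volume (here called remote_store) already exists from the migration; the index name and paths are placeholders:

[my_new_index]
homePath = $SPLUNK_DB/my_new_index/db
coldPath = $SPLUNK_DB/my_new_index/colddb
thawedPath = $SPLUNK_DB/my_new_index/thaweddb
remotePath = volume:remote_store/$_index_name

A restart is typically needed on a single instance for the new stanza to take effect.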
Hello Splunkers, I need your help to understand and solve an issue we discovered with Splunk. It seems to be a limitation or a bug of Splunk Enterprise. We work with Microsoft Sysmon data, and we sometimes have events containing the value of a command executed at the prompt. Splunk reports the exact value of the executed command in the raw event, and the value extracted by Splunk for the CommandLine field is also correct. However, when I display the CommandLine field in a table or a stats table, Splunk replaces my quotes with HTML-encoded characters (see the last row of the table for our CommandLine example). The strange thing is that Splunk does not replace special characters with HTML-encoded characters every time; it only does so for some executed commands. Depending on whether the command contains certain texts that Splunk seems not to like, Splunk will or will not encode my special characters in the table. The texts in the executed command that trigger the HTML encoding in table or stats output are the following:

<script> or vbsscript: or javascript&colon;

Otherwise, if I put another text in the command, like "blablascript:" or "script:", I do not have the issue. Could someone please help us understand where this issue may come from? Is it a Splunk limitation/bug, or just something we need to configure somewhere? Many thanks in advance.
index=sap source=P* (EVENT_TYPE=abc)
| fields FDATE FTIME LDATE LTIME QDEEP QNAME FIRSTTID QSTATE EVENT_TYPE source
| eval earliestCT = strptime(strftime(now() + `utcdiff("America/Chicago")`,"00:00:00 %m/%d/%Y America/Chicago"),"%H:%M:%S %m/%d/%Y %Z"), latestCT = strptime(strftime(now() + `utcdiff("America/Chicago")`,"23:59:59 %m/%d/%Y America/Chicago"),"%H:%M:%S %m/%d/%Y %Z"), DateCT = strftime(now() + `utcdiff("America/Chicago")`,"%m/%d/%Y"), Created = strptime(FDATE." ".FTIME,"%Y%m%d %H%M%S"), lastupdate=strptime(LDATE." ".LTIME,"%Y%m%d %H%M%S")
| where Created >= earliestCT AND Created <= latestCT
| dedup source EVENT_TYPE QNAME FIRSTTID
| stats sum(QDEEP) as TotalEntries values(DateCT) as DateCT by source EVENT_TYPE
| lookup Lookup_SAP_PERF_EntryThresholds.csv source EVENT_TYPE OUTPUTNEW Threshold LastAlertedDate
| where (tostring(DateCT) != tostring(LastAlertedDate)) AND match(Threshold,".+") AND (TotalEntries >= Threshold)

New requirement to add to this existing alert: it should only trigger when the entry count is greater than the threshold, stays above it for more than 10 minutes, and is not reducing.
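
A minimal sketch of one way to express the "above threshold for more than 10 minutes" condition, assuming the queue depth can be summed per one-minute bucket and reusing the same lookup; the streamstats window of ten one-minute samples only lets a row through when every sample in the window stayed at or above the threshold (the dedup and LastAlertedDate guard from the original search would still need to be folded back in):

index=sap source=P* (EVENT_TYPE=abc)
| bin _time span=1m
| stats sum(QDEEP) as TotalEntries by _time source EVENT_TYPE
| lookup Lookup_SAP_PERF_EntryThresholds.csv source EVENT_TYPE OUTPUTNEW Threshold
| sort 0 source EVENT_TYPE _time
| streamstats window=10 min(TotalEntries) as min_last_10m count as samples by source EVENT_TYPE
| where samples >= 10 AND min_last_10m >= Threshold

The alert itself would then run over at least the last 10-15 minutes of data on a frequent schedule.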
We have implemented a real-time search in Splunk [Alerts] that sends out an email when a matching search result is produced. When multiple logs (error logs) arrive in Splunk at the same time, multiple emails are sent, and we want the emails to be received in the order in which the logs were output. However, the order in which the emails arrive differs from the order in which the logs were output; they arrive scattered. ※Splunk search results are output in the order in which the logs were generated. Example:

================
■Splunk side
01/01 00:00 Real-time search runs & alert is triggered because alert condition (1) is met (Alert 1)
01/01 00:00 Real-time search runs & alert is triggered because alert condition (2) is met (Alert 2)
01/01 00:00 Real-time search runs & alert is triggered because alert condition (3) is met (Alert 3)

■Mail receiving side
01/01 00:01 Mail received (Alert 2)
01/01 00:02 Mail received (Alert 3)
01/01 00:03 Mail received (Alert 1)
================
※Mail is received in a scattered order.

How can we receive emails in the same order as the alerts were triggered?
Hi Splunkers, we have a customer with a Splunk Cloud environment. Every tenant has one HF, managed by us, that sends data to the cloud platform, and we must deal with the HA problem. Due to a Splunk recommendation, we do not have HA implemented in the "usual" form, so we cannot add more HFs and manage them with a Deployment Server to implement HA. Our first solution is a scheduled snapshot that runs every day; in case the HF server crashes, we restore the last working snapshot. This solution has a big problem: suppose a crash occurs in the early afternoon and the restore happens the following morning. That raises the following questions: What happens to data sent from the sources to the HF during the window in which the HF is down? Is it lost, or is it processed once the HF comes back up and running? If the data is recovered after the forwarder is restored, I suppose it is held in the forwarder queue. What limits does this queue have? What is its size? Will it be able to ingest all the data, or will some be lost? Assuming the queue can hold all the data, does the processing speed depend only on hardware, or does the forwarder have its own limits? One more question: if this solution does not save us from data loss, and considering we cannot have multiple HFs, what could be a feasible solution for HA?
Hi, we are getting data with the host name set to the FQDN for a few Linux hosts. How can we get just the short hostname, so that all events come in with the hostname only? Please let us know where we can update the config. Thanks
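
A minimal sketch of one common approach, assuming an index-time rewrite of the host metadata is acceptable; the transform strips everything from the first dot onward, and both stanza names below are placeholders (the props stanza could equally be scoped by source or host pattern). It must live on the first heavyweight Splunk instance that parses the data (HF or indexer), not on a universal forwarder:

transforms.conf
[strip_domain_from_host]
SOURCE_KEY = MetaData:Host
DEST_KEY = MetaData:Host
REGEX = ^host::([^\.]+)\.
FORMAT = host::$1

props.conf
[your_linux_sourcetype]
TRANSFORMS-strip_host_domain = strip_domain_from_host

Hosts that already arrive without a domain are left untouched, because the regex requires a dot to match.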
I want to create a search that gets information from a lookup file when an event field contains data from either of two fields in the lookup file. The log event has a field "machineUserName" whose value is either an "employeeNumber" or an "Email-ID". The lookup "workdayData.csv" has two separate fields for "employeeNumber" and "Email-ID". I want to create a lookup query that checks whether the "machineUserName" value from the log event matches either of those fields in the lookup and then returns the other information from the lookup table.

Lookup table: WorkdayData.csv
Sample data header: empId,empNum,name,email,country,loc,locDesc,OCGRP,OCSGRP,deptName,jobTitle,empStatus,bu,l1MgrEmail
Sample data: X0134567,AMAT-0134567,"Jose numo --CNTR","Jose_numo@contractor.amat.com","United States of America",CASCL,"Santa Clara,CA",AGS,OCE,"NACDC NAmer Entity","Logistics Operations - Supplie",Active,"AGS GPS&T, Operations & Central Engineering","Carmy_Hyden@amat.com"
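
A minimal sketch, assuming the CSV header above is what the lookup exposes, that empNum is the employee-number column (it could equally be empId), and that the lookup is available to the search app; the second lookup uses OUTPUTNEW so it only fills fields the first match did not populate, and index=your_index is a placeholder:

index=your_index machineUserName=*
| lookup workdayData.csv empNum as machineUserName OUTPUT name email deptName jobTitle empStatus l1MgrEmail
| lookup workdayData.csv email as machineUserName OUTPUTNEW name email deptName jobTitle empStatus l1MgrEmail
| table machineUserName name email deptName jobTitle empStatus l1MgrEmail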
Hi, I am trying to search for a list of users who have not logged into Azure AD in the past 30 days. Can you please help?
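
A minimal sketch, assuming Azure AD sign-in events are already being ingested; the index and sourcetype below are hypothetical and depend on the add-on in use. Note that this only finds users who have signed in at least once during the search window but not in the last 30 days; accounts with no sign-ins at all would need a lookup of all known users to compare against:

index=azure_ad sourcetype="azure:aad:signin" earliest=-90d
| stats max(_time) as last_signin by user
| where last_signin < relative_time(now(), "-30d")
| convert ctime(last_signin)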
Hi, everyone. I need some help with a detection exclusion setting. I want to exclude detections of files that match the file path below:

c:\users\01234567\downloads\1234567890123xx.exe

To prevent the alerts, I would like to use both the "13-digit number" and "xx.exe" as indicators. For now, I have found it can be excluded only by "xx.exe", e.g. file_path="*xx.exe". However, when I use a regex like the one below, it doesn't work: file_path="*\d{13}xx.exe". Could you please let me know how to set both the "13-digit number" and "xx.exe" as indicators for excluding detections?
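
One likely reason the second filter fails: the field=value syntax only supports * wildcards, not regular expressions. A minimal sketch of a regex-based exclusion, assuming the exclusion can be expressed inside the search pipeline itself (if it has to go into a product UI that only accepts wildcard patterns, this would need to live in the underlying search instead):

... | regex file_path!="\d{13}xx\.exe$"

The same condition can be written as | where NOT match(file_path, "\d{13}xx\.exe$"). Requiring a literal backslash immediately before the 13 digits is also possible, but SPL string escaping for backslashes needs extra care.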
Hello, I have an output list like this one:

{
  "10.10.10.15": {
    "High": [
      {
        "name": "vu1",
        "nvt_id": "123",
        "port": "",
        "protocol": ""
      }
    ],
    "Medium": [],
    "Low": [],
    "Log": [],
    "False Positive": []
  },
  "10.10.10.24": {
    "High": [
      {
        "name": "vul",
        "nvt_id": "123",
        "port": "",
        "protocol": ""
      }
    ],
    "Medium": [],
    "Low": [],
    "Log": [],
    "False Positive": []
  }
}

I want to get all the IP addresses and extract the fields in each object.
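
A minimal sketch of one way to pull this apart with rex, assuming the JSON is the raw event (add field=<your_field> otherwise) and that each IP has exactly one entry under "High", as in the sample, so the multivalue fields line up positionally; with a variable number of findings per IP, spath-based parsing would be the more robust route:

| rex max_match=0 "\"(?<ip>(?:\d{1,3}\.){3}\d{1,3})\"\s*:\s*\{"
| rex max_match=0 "\"name\"\s*:\s*\"(?<vuln_name>[^\"]*)\"\s*,\s*\"nvt_id\"\s*:\s*\"(?<nvt_id>[^\"]*)\""
| eval pair=mvzip(ip, mvzip(vuln_name, nvt_id, "|"), "|")
| mvexpand pair
| eval pair=split(pair, "|"), ip=mvindex(pair, 0), vuln_name=mvindex(pair, 1), nvt_id=mvindex(pair, 2)
| table ip vuln_name nvt_id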
Does Splunk support a push mechanism? How can I push the available application logs to an API endpoint?
When creating a REST API data input with the Add-on Builder and testing the REST API call, I receive the following error:

Traceback (most recent call last):
  File "/Applications/Splunk/etc/apps/TA-test/bin/testtest_1663829580_707.py", line 14, in <module>
    import input_module_testtest_1663829580_707 as input_module
  File "/Applications/Splunk/etc/apps/TA-test/bin/input_module_testtest_1663829580_707.py", line 29, in <module>
    from cloudconnectlib.client import CloudConnectClient
  File "/Applications/Splunk/etc/apps/TA-test/bin/ta_test/aob_py3/cloudconnectlib/client.py", line 8, in <module>
    from .configuration import get_loader_by_version
  File "/Applications/Splunk/etc/apps/TA-test/bin/ta_test/aob_py3/cloudconnectlib/configuration/__init__.py", line 1, in <module>
    from .loader import get_loader_by_version
  File "/Applications/Splunk/etc/apps/TA-test/bin/ta_test/aob_py3/cloudconnectlib/configuration/loader.py", line 15, in <module>
    from ..core.exceptions import ConfigException
  File "/Applications/Splunk/etc/apps/TA-test/bin/ta_test/aob_py3/cloudconnectlib/core/__init__.py", line 1, in <module>
    from .engine import CloudConnectEngine
  File "/Applications/Splunk/etc/apps/TA-test/bin/ta_test/aob_py3/cloudconnectlib/core/engine.py", line 6, in <module>
    from .http import HttpClient
  File "/Applications/Splunk/etc/apps/TA-test/bin/ta_test/aob_py3/cloudconnectlib/core/http.py", line 26, in <module>
    'http_no_tunnel': socks.PROXY_TYPE_HTTP_NO_TUNNEL,
AttributeError: module 'socks' has no attribute 'PROXY_TYPE_HTTP_NO_TUNNEL'

Splunk version: 8.2.2.1
Add-on Builder version: 4.1.1
OS: Mac

I found a post about a similar issue, but moving socks.py one directory up did not fix it: https://community.splunk.com/t5/All-Apps-and-Add-ons/Splunk-TA-New-Relic-Insight-not-ingesting-data/m-p/528756 Creating a new add-on still produced the issue, as did downloading Add-on Builder version 3.0.1.
Hi, I am creating single value panels, each with a different search query. I want to combine all these values into a table; it should look like an Excel table in the Splunk dashboard. My individual queries for each single value panel look like the ones below, and I want to combine them to form one table of values.

1. index=abcd laas_appID=xyz OSBUILD=Linux3.1 | where OSVendor="Redhat" | stats count by OSBUILD
2. index=abcd laas_appID=xyz OSBUILD=Linux3.2 | where OSVendor="Redhat" | stats count by OSBUILD
3. index=abcd laas_appID=xyz OSBUILD=Linux3.3 | where OSVendor="Redhat" | stats count by OSBUILD
4. index=abcd laas_appID=xyz OSBUILD=Linux3.1 | where OSVendor="Ubuntu" | stats count by OSBUILD etc.
5. index=abcd laas_appID=xyz OSBUILD=Linux3.1 | where OSVendor="Solaries" | stats count by OSBUILD etc.

The table should look like the one below in the dashboard:

OS Type     Redhat   Ubuntu   Solaris
Linux 3.1   12       84       54
Linux 3.2   13       45       123
Linux 3.3   56       658      678
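
A minimal sketch of a single search that builds the whole matrix, assuming the field values line up with the queries above (adjust the vendor spellings, e.g. "Solaries" vs "Solaris", to whatever the data actually contains):

index=abcd laas_appID=xyz OSBUILD=Linux3.* OSVendor IN ("Redhat", "Ubuntu", "Solaris")
| chart count over OSBUILD by OSVendor
| rename OSBUILD as "OS Type"

chart count over X by Y yields one row per OSBUILD and one column per OSVendor, which matches the desired layout.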
We would like to know how to onboard AIX wtmp logs to Splunk. Can it be done via a Universal Forwarder? If so, can you please point us to the documentation for onboarding AIX logs?
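
One thing worth noting: wtmp is a binary file, so a plain file monitor will not produce readable events; the usual pattern is a scripted input that renders it with the AIX last command (or the equivalent scripts from the Splunk Add-on for Unix and Linux, if that is deployed). A minimal sketch of such an input on a Universal Forwarder, where the script name, sourcetype, and index are placeholders:

inputs.conf
[script://./bin/wtmp_last.sh]
interval = 300
sourcetype = aix:wtmp
index = os
disabled = 0

The wtmp_last.sh script would simply run the AIX last command (which reads /var/adm/wtmp by default) and print its output to stdout.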
Is there a way to reduce memory usage for the Splunk forwarder? I have two directories with 57k files each (120 MB each), and after restarting the service I checked Task Manager and memory usage reached 2 GB. I can't reduce the number of files because I've already done that and can't reduce it further. I'm sure the issue is with those directories, because after I remove them from inputs.conf memory only reaches about 100 MB.
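
Forwarder memory grows with the number of files each monitor stanza has to track, so the usual lever is to shrink that tracked set. A minimal sketch of an inputs.conf monitor stanza using ignoreOlderThan, assuming files older than the threshold no longer receive writes; the path, whitelist, and threshold are placeholders:

inputs.conf
[monitor:///data/app/logs]
ignoreOlderThan = 7d
whitelist = \.log$

Files whose modification time is older than the threshold are dropped from tracking entirely; rotating or archiving closed files out of the monitored directories has a similar effect.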
Is there a way to reduce memory usage for splunk Forwarder? I have two directories with 57k files each (120Mb each) and After restarting service I checked Task manager and memory usage reach 2Gb. I can't reduce the number of files because I've already done that and can't reduce more. I'm sure that the issue is with those directories because after I delete them from inputs.conf memory reach only 100Mb. 
Hi, I have the following output:

[txn_key] field2 field3 status thread [time1] time2 time3 status2
[IDMS-TJ_TJG022092200005GN00017] 332950 311551 OK 2 [133369] 342 29 OK
[ZVKK_R1000001-235CDC24E191DBCE4906CCD0ND0000001] 498728 488378 OK 1 [133564] 509 9 OK
[PE_CZ_R19.6_2226500012123062] 342295 331477 OK 2 [133365] 353 49 OK
[BAFIROPC_R1.1_186951760] 289068 282128 OK 1 [133392] 295 5 OK
[GALILEO_R19.4_MTA_03FH220922110216] 394234 383672 OK 2 [133537] 405 11 OK
[DBINTERNET_R19.4_HU_RE02209223-06008] 187797 168329 OK 2 [133526] 201 7 OK
[IDMS_1-I0781_944e2c3cafc0487db56f6b8d3a6a6e231] 193581 178804 OK 2 [133576] 206 4 OK
[....]

I need to create a search string that counts the number of occurrences of each prefix of [txn_key]. The output should therefore look similar to:

txn_key                count of txns
IDMS-TJ                1
ZVKK                   543
PE_CZ_R19.6            0
BAFIROPC_R1.1          231
GALILEO_R19.4          12
DBINTERNET_R19.4_HU    212312
[...]

So far I have tried the following logic:

| stats count(eval(tnx_key=="ZVKK")) as ZVKK, count(eval(tnx_key=="GALAPAC")) as GALAPAC by tnx_key

but it doesn't produce the desired output. A bit of help please?
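
A minimal sketch, assuming the prefix wanted is everything before the final underscore-delimited identifier; that rule matches most of the desired rows above (IDMS-TJ, ZVKK, PE_CZ_R19.6, BAFIROPC_R1.1, DBINTERNET_R19.4_HU), though a case like GALILEO_R19.4_MTA_... would come out as GALILEO_R19.4_MTA and might need its own rule. txn_key is assumed to already hold the value without the surrounding brackets:

... | rex field=txn_key "^(?<txn_prefix>.+)_[^_]+$"
| stats count as "count of txns" by txn_prefix

If the brackets are still present, stripping them first with | eval txn_key=trim(txn_key, "[]") keeps the rex simple.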