All Topics

Hi, we are using multiple heavy forwarders, and whenever we change inputs.conf for log collection we currently do it manually on every heavy forwarder. Is there any way to update and push the configuration to all of them at once? We already use a deployment server to manage our universal forwarders/clients. How can we use the deployment server to manage the HFs as well?
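
A heavy forwarder can be enrolled as a deployment client exactly like a universal forwarder. A minimal sketch, assuming your deployment server is reachable at deploymentserver.example.com:8089 (a placeholder hostname):

# $SPLUNK_HOME/etc/system/local/deploymentclient.conf on each heavy forwarder
[deployment-client]

[target-broker:deploymentServer]
targetUri = deploymentserver.example.com:8089

After a restart the HF shows up under Forwarder Management, and you can push apps containing inputs.conf to it from a server class, the same way as for your UFs.
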
I am trying to create a dashboard that has a dropdown input, e.g.:

<input type="dropdown" token="HWStat" searchWhenChanged="true">
  <label>HW Status</label>
  <choice value="*">All</choice>
  <choice value="Installed">Installed</choice>
  <default>*</default>
  <initialValue>*</initialValue>
</input>

That was the easy part. Using the token $HWStat$ works fine for passing the "Installed" option to the search. However, when the asterisk "*" option for All is passed, only results with values are displayed. I know there are a lot of null values in there as well that I would like the output to include, and there appear to be some blanks too (I've tried to set all blank fields to null using fillnull, and some appear to be either a white space or empty). The basic search I was testing ends with:

| stats ... | search hw_stat_column=$HWStat$

What is a better option to pass than "*" to capture everything? I've scoured the internet, and this is not an easy thing to search for. Thanks in advance!
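
One common workaround, sketched below, is to have the "All" choice inject an empty filter instead of "*", so null and blank values are never filtered out. The second token name, $HWStatFilter$, is my own addition, not from the original dashboard:

<input type="dropdown" token="HWStat" searchWhenChanged="true">
  <label>HW Status</label>
  <choice value="*">All</choice>
  <choice value="Installed">Installed</choice>
  <default>*</default>
  <change>
    <condition value="*">
      <!-- "All" selected: apply no filter at all, so null/blank rows survive -->
      <set token="HWStatFilter"></set>
    </condition>
    <condition>
      <!-- a concrete choice: filter on its value -->
      <set token="HWStatFilter">| search hw_stat_column="$value$"</set>
    </condition>
  </change>
</input>

Then end the panel search with ... | stats ... $HWStatFilter$ instead of | search hw_stat_column=$HWStat$.
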
Hello! I have a log file with the following pattern:

13:06:03 CRITICAL  [app] An error happened while processing message active/mastercard/event/secondpresentmentcreateevent/v1/2022/08/30/afae9068-8dc2-5e3a-9e4a-83081925238f ["message" => "[{"requestid":"49120180-f64d-863d-f7f5-c2f58b180587","source":"SYSTEM","reasoncode":"INVALID_REQUEST","description":" [CreateCR2] usecase is not applicable in this context.","recoverable":false,"details":[{"name":"ErrorDetailCode","value":"100001"}]}]","status" => 400,"trace" => [["file" => "/var/www/drm-scheme/vendor/react/event-loop/src/Timer/Timers.php","line" => 101,"function" => "App\Command\{closure}","class" => "App\Command\AbstractQueueProcessor","type" => "->"],["file" => "/var/www/drm-scheme/vendor/react/event-loop/src/StreamSelectLoop.php","line" => 185,"function" => "tick","class" => "React\EventLoop\Timer\Timers","type" => "->"],["file" => "/var/www/drm-scheme/src/AppBundle/Command/AbstractQueueProcessor.php","line" => 311,"function" => "run","class" => "React\EventLoop\StreamSelectLoop","type" => "->"],["file" => "/var/www/drm-scheme/vendor/symfony/console/Command/Command.php","line" => 255,"function" => "execute","class" => "App\Command\AbstractQueueProcessor","type" => "->"],["file" => "/var/www/drm-scheme/vendor/symfony/console/Application.php","line" => 929,"function" => "run","class" => "Symfony\Component\Console\Command\Command","type" => "->"],["file" => "/var/www/drm-scheme/vendor/symfony/framework-bundle/Console/Application.php","line" => 96,"function" => "doRunCommand","class" => "Symfony\Component\Console\Application","type" => "->"],["file" => "/var/www/drm-scheme/vendor/symfony/console/Application.php","line" => 264,"function" => "doRunCommand","class" => "Symfony\Bundle\FrameworkBundle\Console\Application","type" => "->"],["file" => "/var/www/drm-scheme/vendor/symfony/framework-bundle/Console/Application.php","line" => 82,"function" => "doRun","class" => "Symfony\Component\Console\Application","type" => "->"],["file" => "/var/www/drm-scheme/vendor/symfony/console/Application.php","line" => 140,"function" => "doRun","class" => "Symfony\Bundle\FrameworkBundle\Console\Application","type" => "->"],["file" => "/var/www/drm-scheme/bin/console","line" => 42,"function" => "run","class" => "Symfony\Component\Console\Application","type" => "->"]],"line" => 261,"class" => "App\Command\AbstractQueueProcessor","request" => "active/mastercard/request/secondpresentmentrequest/v1/2022/08/29/204618304273/cb905322-b2ab-4742-8acd-a7915b9be744","caseId" => "204618304273"] ["uid" => "31473ed"]

I need to understand how to set this up under Settings -> Source types -> (create a sourcetype) so that this event is recognized and highlighted correctly. I tried to set it up as shown in my screenshot, but it didn't work.
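
If the goal is to have each entry recognized as one event with the leading time as its timestamp, here is a minimal props.conf sketch; the sourcetype name app_critical is a placeholder, and the Settings -> Source types UI writes these same keys for you:

# props.conf — a sketch, assuming each event starts with HH:MM:SS
[app_critical]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)(?=\d{2}:\d{2}:\d{2}\s)
TIME_PREFIX = ^
TIME_FORMAT = %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 8

Note the entries carry no date, so Splunk will have to infer the date from the file's modification time or the current day.
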
Hi, can anybody help, please? I'm using a classic forwarder to index a regular CSV file. The modification time of the CSV log file changes whenever a new entry arrives, and each event has a TIME attribute. If I choose the time interval TODAY, 100 events have been indexed, but the indexed time _time is always the same (similar to the time of the first event), even though the TIME attribute of each event changes. Does anybody have an idea where the problem is? If I restart the forwarder, the problem appears again on the next day.
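
That symptom usually means the timestamp is not being parsed from each row, so every event inherits the same clock time. A sketch, assuming the sourcetype is named my_csv and the timestamp column is literally called TIME (both are placeholders; adjust TIME_FORMAT to your actual format):

# props.conf on the parsing instance (the forwarder itself if it is a heavy
# forwarder; for a universal forwarder, INDEXED_EXTRACTIONS props belong on the UF)
[my_csv]
INDEXED_EXTRACTIONS = csv
TIMESTAMP_FIELDS = TIME
# TIME_FORMAT = %H:%M:%S   (uncomment and adapt if auto-detection gets it wrong)
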
My rex search is returning all the rows instead of only the ones being searched. What am I doing wrong?

index=cloudwatchlogs loggroup="/aws-glue/jobs/xxxxx/*" meta_region="us-east-1" meta_env="TEST" meta_type="aws:jobs"
| rex field="message.message" max_match=0 "Total rows from Raw Call meta:\s(?<msg1>\d+)\s"
| rex field="message.message" max_match=0 "Total Meta rows written to S3 bucket:\s(?<msg2>\d+)\s"
| rex field="message.message" max_match=0 "Total QCI Raw Data rows read from S3 bucket:\s(?<msg3>\d+)\s"
| rex field="message.message" max_match=0 "Total root rows written to S3 bucket:\s(?<msg4>\d+)\s"

Sample data:
INFO:__main__:Total rows from Raw Call meta: 3995
INFO:__main__:Deleting duplicate rows
INFO:__main__:Total rows before Deleting duplicate rows: 3995
INFO:__main__:Listing duplicates, if any
INFO:__main__:Total Meta rows written to S3 bucket: 3995
INFO:__main__:Processing RAW QCI Data.
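
rex only extracts fields; it never filters events, so every row matching the base search comes back regardless of whether any pattern matched. A sketch of a trailing filter that keeps only events where at least one pattern actually extracted a value:

... the four rex commands above ...
| where isnotnull(msg1) OR isnotnull(msg2) OR isnotnull(msg3) OR isnotnull(msg4)
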
Hi, I have several searches where I perform renaming. Some of them are done on fields which look like xxx.yyy{}.aaa, xxx.yyy{}.bbb, zzz{}.ccc. In the search I do:

| rename xxx.yyy{}.aaa as newname1, xxx.yyy{}.bbb as newname2, zzz{}.ccc as newname3

I tried to implement the same thing with a field alias configuration, but it doesn't work. Is this possible? I can't find any documentation about this case. PS: my field aliases work properly on fields without curly braces.
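
One thing worth trying, sketched below with placeholder stanza and class names: in props.conf, a FIELDALIAS source field containing special characters such as {} generally has to be double-quoted, which may be why the plain form fails:

# props.conf — a sketch, assuming a recent Splunk version that accepts quoted field names
[your_sourcetype]
FIELDALIAS-curly = "xxx.yyy{}.aaa" AS newname1 "xxx.yyy{}.bbb" AS newname2 "zzz{}.ccc" AS newname3
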
I am also facing a similar issue. I have installed and configured the Commvault Splunk plugin available at https://splunkbase.splunk.com/app/5718/ and I am trying to get the respective data from Commvault version 11.24.48. I have checked the permissions for the Commcell user (not a service account); they are the same as in your shared screenshot, but still no luck. Also, I am using port 81 in the format <webserver_url>:<IIS port number> for setting up the Commcell page. This port is open from Splunk to Commvault, but it is not open from Commvault to Splunk. Please help me understand the possible reasons for the issue: 1) could it be a connectivity issue due to the port only being open in one direction?

Hi Community! I am trying to find a good example of setting a background image on a classic dashboard. This process seems very simple in Dashboard Studio but near impossible in a classic dashboard. Can someone please point me in the right direction? Thanks
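
One approach that works on classic (Simple XML) dashboards, sketched with placeholder names: attach a custom stylesheet to the dashboard and set the background in CSS. This assumes an app named my_app and an image uploaded to $SPLUNK_HOME/etc/apps/my_app/appserver/static/background.png:

<dashboard stylesheet="background.css">
  ...
</dashboard>

/* appserver/static/background.css in the same app */
.dashboard-body {
    background-image: url("/static/app/my_app/background.png");
    background-size: cover;
    background-repeat: no-repeat;
}

The .dashboard-body selector matches the classic dashboard canvas in current versions, but these class names are not a documented API, so inspect your page to confirm before relying on it.
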
Currently I am trying to extract the crossReferenceId value using the rex query below. It works and I can extract data, but it does not seem to extract all the values from the logs. For example, if I search for an individual agentname, I cannot find it in the results (however, I can find the same agentname without the rex). It seems the rex below is not extracting the complete set of values; some may be missing.

index=xyz "crossReferenceId"
| rex "\{\"crossReferenceId\"\:\"(?<agentname>\w*)\"\,\"providerInstanceId\"\:\"(?<providerInstanceId>\w*............................)\"\,\"userId\"\:\"(?<userid>\w*............................)\"\,\"dateModified\"\:\"(?<modifieddate>\d*................)\"\}"
| search agentname="*" providerInstanceId="*" userid="*" modifieddate="*"
| stats count by agentname, providerInstanceId, userid, modifieddate
| table agentname, providerInstanceId, userid, modifieddate

Sample event:

2022-09-21 21:18:23.046 TRACE 5028 --- [pool-3-thread-2] i.e.p.c.p.OAuthAuthenticationInterceptor : Host-Client Response: GET | 200 from https://xyz.com.com/api/crossReferences?$filter=p: Payload: {"@odata.context":"$metadata#crossReferences","value":[{"crossReferenceId":"asdfdsf","providerInstanceId":"c8d1a13b-2ebc-4762-acd0-c788bdd79125","userId":"336d6a6f-3124-4c7c-b57a-692fa5114c2e","dateModified":"2022-08-09T12:17:06Z"},{"crossReferenceId":"dgsgdf","providerInstanceId":"c8d1a13b-2ebc-4762-acd0-c788bdd79125","userId":"79729cc5-d454-44dc-ad60-0a9caadef580","dateModified":"2022-07-23T11:35:32Z"},{"crossReferenceId":"wqruytuere","providerInstanceId":"c8d1a13b-2ebc-4762-acd0-c788bdd79125","userId":"6fe5f478-fbcb-460f-99b8-af1757c03bc5","dateModified":"2021-06-27T11:07:43Z"},{"crossReferenceId":"yuiyiyui","providerInstanceId":"c8d1a13b-2ebc-4762-acd0-c788bdd79125","userId":"511da6bf-c21f-40bf-a18a-23c9ad472a9d","dateModified":"2022-05-26T11:49:18Z"},{"crossReferenceId":"ttttttt","providerInstanceId":"c8d1a13b-2ebc-4762-acd0-c788bdd79125","userId":"251a6976-1460-49b8-a3cc-5126cb2caa00","dateModified":"2022-08-23T11:11:47Z"},{"crossReferenceId":"ytujty","providerInstanceId":"c8d1a13b-2ebc-4762-acd0-c788bdd79125","userId":"7c17da4f-2181-4392-abe9-0e8ea8290234","dateModified":"2020-10-24T11:25:46Z"},{"crossReferenceId":"iljkljlhl","providerInstanceId":"c8d1a13b-2ebc-4762-acd0-c788bdd79125","userId":"54e850d8-e69e-4749-8244-f2700eec4d0f","dateModified":"2022-03-26T11:33:12Z"},{"crossReferenceId":"xcvxcvvcvx","providerInstanceId":"c8d1a13b-2ebc-4762-acd0-c788bdd79125","userId":"6465cce8-2d40-4661-bc9a-6473e4a09597","dateModified":"2022-04-09T11:27:12Z"},{"crossReferenceId":"ertwetret","providerInstanceId":"c8d1a13b-2ebc-4762-acd0-c788bdd79125","userId":"c679dbe2-e803-4057-92ca-106ed48370b8","dateModified":"2022-09-08T11:23:50Z"},{"crossReferenceId":"tyutyutu","providerInstanceId":"c8d1a13b-2ebc-4762-acd0-c788bdd79125","userId":"8e63a413-f4e4-46cd-aa10-bf86206079de","dateModified":"2021-11-22T12:17:43Z"},{"crossReferenceId":"aaaaaaa","providerInstanceId":"c8d1a13b-2ebc-4762-acd0-c788bdd79125","userId":"71255798-366e-4d1e-8654-c7adcbeb7473","dateModified":"2022-06-23T11:36:02Z"},{"crossReferenceId":"erererere","providerInstanceId":"c8d1a13b-2ebc-4762-acd0-c788bdd79125","userId":"20e39e30-d31b-4ad2-8993-b087104e34fa","dateModified":"2021-09-13T11:10:05Z"},{"crossReferenceId":"yutyuyutyu","providerInstanceId":"c8d1a13b-2ebc-4762-acd0-c788bdd79125","userId":"6735fd0b-1148-4193-8971-f7a3afadb807","dateModified":"2022-07-25T11:20:29Z"},{"crossReferenceId":"ertrtrttr","providerInstanceId":"c8d1a13b-2ebc-4762-acd0-c788bdd79125","userId":"bf3ffa03-83e8-4973-a292-817d0fd9a412","dateModified":"2022-08-23T11:11:47Z"},{"crossReferenceId":"tyuyuyuyu","providerInstanceId":"c8d1a13b-2ebc-4762-acd0-c788bdd79125","userId":"5e622f17-7dce-4f2b-a264-1224fc709469","dateModified":"2022-08-30T21:07:02Z"},{"crossReferenceId":"wewewewewe","providerInstanceId":"c8d1a13b-2ebc-4762-acd0-c788bdd79125","userId":"b46acff6-aedf-45ab-b353-2ce699c0c454","dateModified":"2022-08-23T11:35:20Z"}]}
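
Two things in that rex would explain the missing values, and a sketch of a fix follows. First, without max_match=0 rex stops after the first {...} object in each event, so only one crossReferenceId per log line is ever extracted. Second, \w* cannot match values containing hyphens. Something like this (the "|" separator in mvzip is arbitrary) extracts every object and expands them into rows:

index=xyz "crossReferenceId"
| rex max_match=0 "\"crossReferenceId\":\"(?<agentname>[^\"]+)\",\"providerInstanceId\":\"(?<providerInstanceId>[^\"]+)\",\"userId\":\"(?<userid>[^\"]+)\",\"dateModified\":\"(?<modifieddate>[^\"]+)\""
| eval zipped=mvzip(mvzip(mvzip(agentname, providerInstanceId, "|"), userid, "|"), modifieddate, "|")
| mvexpand zipped
| eval agentname=mvindex(split(zipped,"|"),0),
       providerInstanceId=mvindex(split(zipped,"|"),1),
       userid=mvindex(split(zipped,"|"),2),
       modifieddate=mvindex(split(zipped,"|"),3)
| stats count by agentname, providerInstanceId, userid, modifieddate
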
A correlation search triggered 30000 notable events. How can I delete them all?
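
Notable events live in the notable index, so one hedged option is to hide them with the delete command, assuming your role has the can_delete capability and you have first narrowed the search to exactly the 30000 offending events (the search_name value below is a placeholder for your correlation search's name):

index=notable search_name="My Correlation Search" | delete

Keep in mind delete does not reclaim disk space; it only makes events unsearchable. Bulk-changing their status to Closed in Incident Review is the alternative that preserves an audit trail.
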
Hi, I have a field X with values similar to "device-group APCC1_Core_Controller pre-rulebase application-override rules NFS-bypass UDP-1" and "device-group APCC1_Core_Controller pre-rulebase application-override rules" as two examples of possible values. I need to extract the value between "device-group" and "pre-rulebase" and assign it as Y. So, if X = "device-group APCC1_Core_Controller pre-rulebase application-override rules NFS-bypass UDP-1" => Y = "APCC1_Core_Controller", and if X = "device-group APCC1_Core_Controller pre-rulebase application-override rules" => Y = "APCC1_Core_Controller". What would the rex command be? Thanks,
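
A sketch that covers both example values, assuming the group name never contains whitespace:

| rex field=X "device-group\s+(?<Y>\S+)\s+pre-rulebase"
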
Hi, I have a scenario where I receive multiple requests containing the same field value, basically OrderNumber; the backend is receiving duplicate orders from the front end. It only happens every now and then, and I'd like to plot a graph showing me exactly when it happens. Just the count of duplicates vs _time would be enough. I've tried a query, but it gives me a distorted graph. Is there a better way to achieve this? The query I have:

index=myapp OrderService "HttpMethod=POST"
| rex field=_raw "orderNumber\"\:\s\"(?<orderNumber>[^\"]+)"
| bin span=15m _time
| stats count by _time orderNumber
| where count > 1
| table _time count

Let me know if anyone has any suggestions for doing this in a better way.
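
The distortion likely comes from the final table: after stats count by _time orderNumber there can be several rows per time bucket (one per duplicated order number), so the chart draws multiple points per timestamp. A sketch that collapses each bucket into a single duplicate count:

index=myapp OrderService "HttpMethod=POST"
| rex field=_raw "orderNumber\"\:\s\"(?<orderNumber>[^\"]+)"
| bin span=15m _time
| stats count by _time orderNumber
| where count > 1
| eval duplicates = count - 1
| timechart span=15m sum(duplicates) as duplicate_orders

The count - 1 step counts only the surplus requests per order number; drop it if you prefer to count every request involved in a duplicate.
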
Hello, I recently migrated a few of my indexes to SmartStore indexes using Azure. After the migration, when I go to the Indexes page in Splunk Web, the "New Index" button is disabled and it says "Disabled new index for SmartStore Indexes". Does this mean I can no longer create new Splunk indexes directly via Splunk Web, or is it a bug? How can I re-enable the "New Index" button? (I am using Splunk Enterprise version 9.0.1 on a single instance.)
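
Even where the UI refuses, new indexes can still be created directly in indexes.conf. A sketch, assuming your SmartStore remote storage is defined as a volume named remote_store (substitute your real volume stanza and index name):

# indexes.conf
[my_new_index]
homePath   = $SPLUNK_DB/my_new_index/db
coldPath   = $SPLUNK_DB/my_new_index/colddb
thawedPath = $SPLUNK_DB/my_new_index/thaweddb
remotePath = volume:remote_store/my_new_index

Restart Splunk afterwards for the new index to appear.
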
Hello Splunkers, I need your help to understand and solve an issue we discovered with Splunk. It looks like a limitation or a bug of Splunk Enterprise.

We work with Microsoft Sysmon data, and sometimes we have events containing the value of a command executed at the prompt. Splunk reports the exact value of the executed command in the raw event, and the value extracted for the CommandLine field is also correct (screenshots omitted). However, when I display the CommandLine field in a table or a stats table, Splunk replaces my quotes with HTML-encoded characters in the table.

The strange thing is that Splunk does not replace special characters with HTML entities every time; it only does so for some executed commands. Depending on whether the command contains certain texts that Splunk seems not to like, the special characters get encoded in the table or not. The texts that trigger the HTML encoding in table or stats output are the following: <script> or vbscript: or javascript: Otherwise, if I put another text in the command, like "blablascript:" or "script:", I do not have the issue.

Could someone please help us understand where this issue may come from? Is it a Splunk limitation/bug, or just something we need to configure somewhere? Many thanks in advance.

index=sap source=P* (EVENT_TYPE=abc)
| fields FDATE FTIME LDATE LTIME QDEEP QNAME FIRSTTID QSTATE EVENT_TYPE source
| eval earliestCT = strptime(strftime(now() + `utcdiff("America/Chicago")`,"00:00:00 %m/%d/%Y America/Chicago"),"%H:%M:%S %m/%d/%Y %Z"),
       latestCT = strptime(strftime(now() + `utcdiff("America/Chicago")`,"23:59:59 %m/%d/%Y America/Chicago"),"%H:%M:%S %m/%d/%Y %Z"),
       DateCT = strftime(now() + `utcdiff("America/Chicago")`,"%m/%d/%Y"),
       Created = strptime(FDATE." ".FTIME,"%Y%m%d %H%M%S"),
       lastupdate = strptime(LDATE." ".LTIME,"%Y%m%d %H%M%S")
| where Created >= earliestCT AND Created <= latestCT
| dedup source EVENT_TYPE QNAME FIRSTTID
| stats sum(QDEEP) as TotalEntries values(DateCT) as DateCT by source EVENT_TYPE
| lookup Lookup_SAP_PERF_EntryThresholds.csv source EVENT_TYPE OUTPUTNEW Threshold LastAlertedDate
| where (tostring(DateCT) != tostring(LastAlertedDate)) AND match(Threshold,".+") AND (TotalEntries >= Threshold)

I need to add a new requirement to this existing alert: it should only trigger when the entry count has stayed above the threshold for more than 10 minutes without dropping back below it.
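
A hedged sketch of one way to express "above threshold for 10+ minutes and not reducing", assuming the alert is scheduled at least every 5 minutes and each event carries a current QDEEP snapshot: bin the last 10 minutes, take the queue depth per bin, and require every bin to sit at or above the threshold with the latest bin no lower than the earlier ones. The field names and the 5-minute bin are illustrative, not from the original search:

... base search over the last 10 minutes ...
| bin _time span=5m
| stats max(QDEEP) as depth by _time source EVENT_TYPE
| lookup Lookup_SAP_PERF_EntryThresholds.csv source EVENT_TYPE OUTPUTNEW Threshold
| stats min(depth) as minDepth max(depth) as maxDepth latest(depth) as lastDepth values(Threshold) as Threshold by source EVENT_TYPE
| where minDepth >= Threshold AND lastDepth >= maxDepth

Here minDepth >= Threshold means the depth never dipped below the threshold inside the window, and lastDepth >= maxDepth means it is not reducing.
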
We have implemented a real-time search in Splunk Alerts that sends out an email when a matching search result appears. When multiple error logs arrive in Splunk at the same time, multiple e-mails are sent. We want the e-mails to be received in the order in which the logs were output, but the order in which they arrive differs from the log order and is scattered. (Splunk search results themselves are output in the order in which the logs were generated.)

Example:
================
■ Splunk side
01/01 00:00 Real-time search runs & alert condition (1) is met, triggering Alert (1)
01/01 00:00 Real-time search runs & alert condition (2) is met, triggering Alert (2)
01/01 00:00 Real-time search runs & alert condition (3) is met, triggering Alert (3)
■ Mail receiving side
01/01 00:01 Mail received (Alert 2)
01/01 00:02 Mail received (Alert 3)
01/01 00:03 Mail received (Alert 1)
================
※ Mail is received in a scattered order.

How can we receive the emails in the same order as the alerts were triggered?
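
SMTP delivery order is decided by the mail servers in between, so Splunk cannot guarantee arrival order. A common mitigation is to make the order visible instead, by embedding the triggering event's own timestamp in the subject so the mails sort correctly in the client. A sketch using the standard email-action result tokens (this assumes the first search result carries _time):

# savedsearches.conf, in the alert's stanza
action.email.subject = Splunk Alert: $name$ (event time $result._time$)
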
Hi Splunkers, we have a customer with a Splunk Cloud environment. Each tenant has one HF managed by us that sends data to the cloud platform, and we must address the HA problem. Following a Splunk recommendation, we do not have HA implemented in the "usual" form, so we cannot run additional HFs and manage them through a deployment server. Our first solution is a scheduled snapshot that runs every day; in case of a crash of the HF server, we restore the last working snapshot. This solution has a big problem. Suppose a crash occurs in the early afternoon and the restore happens the following morning. That raises the following questions: What happens to data sent from sources to the HF during the period the HF is down? Is it lost, or is it processed once the HF comes back up and running? If data is recovered after the forwarder restore, I suppose it is stored in the forwarder queue. What limits does this queue have? What is its size? Will it be able to ingest all the data, or will some be lost? Supposing the queue can hold all the data, does the processing speed depend only on hardware, or does the forwarder have limits of its own? Finally: if this solution does not save us from data loss, and given we cannot have multiple HFs, what could be a feasible HA solution?
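
On the data-loss question: sources that are themselves Splunk forwarders reading files simply pause and resume from their last read position when the HF returns, so that path survives an outage; it is push-style inputs (e.g. syslog sent straight to the HF) that lose data while the HF is down. In-flight events between forwarder and HF can additionally be protected with acknowledgements and a larger output queue. A sketch, with placeholder stanza name and host:

# outputs.conf on the sending forwarders
[tcpout:to_hf]
server = hf.example.com:9997
useACK = true
maxQueueSize = 100MB
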
Hi, we are getting data where the host name is the FQDN for a few Linux hosts. How can we get the short hostname instead, so that all events come in with the hostname only? Let us know where we can update the config. Thanks
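
One index-time approach, sketched with placeholder stanza names: rewrite the host metadata with a transform that keeps only the part before the first dot. This only affects newly indexed events, not data already on disk.

# props.conf on the parsing tier (HF or indexer); your_sourcetype is a placeholder
[your_sourcetype]
TRANSFORMS-strip_fqdn = strip_fqdn

# transforms.conf
[strip_fqdn]
SOURCE_KEY = MetaData:Host
DEST_KEY = MetaData:Host
REGEX = ^host::([^\.]+)
FORMAT = host::$1
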
I want to create a search that gets info from a lookup file when an event field contains data from either of two fields in the lookup file. The log events have a field "machineUserName" whose value is either an "employeeNumber" or an "Email-ID". I want to look up against "workdayData.csv", which has two separate fields for "employeeNumber" and "Email-ID", and build a lookup query that checks the "machineUserName" field from the log event against the respective field in the lookup and returns the other information from the lookup table.

Lookup table: workdayData.csv
Header: empId,empNum,name,email,country,loc,locDesc,OCGRP,OCSGRP,deptName,jobTitle,empStatus,bu,l1MgrEmail
Sample data: X0134567,AMAT-0134567,"Jose numo --CNTR","Jose_numo@contractor.amat.com","United States of America",CASCL,"Santa Clara,CA",AGS,OCE,"NACDC NAmer Entity","Logistics Operations - Supplie",Active,"AGS GPS&T, Operations & Central Engineering","Carmy_Hyden@amat.com"
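
A sketch using two lookup passes against the header shown above (empNum for the employee number, email for the Email-ID); the second pass uses OUTPUTNEW so it only fills in rows the first pass missed. index=your_index is a placeholder, and this assumes workdayData.csv is available as a lookup:

index=your_index machineUserName=*
| lookup workdayData.csv empNum as machineUserName OUTPUT name email deptName jobTitle l1MgrEmail
| lookup workdayData.csv email as machineUserName OUTPUTNEW name deptName jobTitle l1MgrEmail
| table machineUserName name email deptName jobTitle l1MgrEmail
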
Hi, I am trying to search for a list of users who have not logged into Azure AD in the past 30 days. Can you please help?
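
A sketch, assuming Azure AD sign-in events are already being ingested (the index and sourcetype below are placeholders for whatever your Azure AD add-on writes): look back further than 30 days, find each user's last sign-in, and keep those older than 30 days.

index=azuread sourcetype="azure:aad:signin" earliest=-90d
| stats latest(_time) as lastLogin by user
| where lastLogin < relative_time(now(), "-30d")
| eval lastLogin = strftime(lastLogin, "%Y-%m-%d %H:%M:%S")

Note this can only report users who appear somewhere in the searched window; users with no events at all require comparing against a full user list (e.g. an identity lookup).
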