All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


I have the following event, which contains an array of records:

ProcessName: TestFlow270
message: TestMessage1
records: [
  {"Username": "138perf_test1@netgear.com.org", "Email": "tmckinnon@netgear.com.invalid", "Id": "00530000000drllAAA"}
  {"Username": "clau(smtest145)@netgear.com.org", "Email": "clau@netgear.com.invalid", "Id": "0050M00000DtmxIQAR"}
  {"Username": "d.mitra@netgear.com.test1", "Email": "d.mitratest1@netgear.com", "Id": "0052g000003DSbTAAW"}
  {"Username": "demoalias+test1@guest.netgear.com.org", "Email": "demoalias+test1@gmail.com.invalid", "Id": "0050M00000CyZJYQA3"}
  {"Username": "dlohith+eventstest1@netgear.com.org", "Email": "sfdcapp_gacks@netgear.com.invalid", "Id": "0050M00000CzJvYQAV"}
  {"Username": "juan.gimenez+test1@netgear.com.apsqa2", "Email": "juan.gimenez+test1@netgear.com", "Id": "005D10000043gVxIAI"}
  {"Username": "kulbir.singh+test1@netgear.com.org", "Email": "sfdcapp_gacks@netgear.com.invalid", "Id": "0050M00000CzJvaQAF"}
  {"Username": "rktest1028@guest.netgear.com.org", "Email": "rktest1028@gmail.com.invalid", "Id": "0053y00000G0UmxAAF"}
  {"Username": "test123test2207@test.com", "Email": "kkhatri@netgear.com", "Id": "005D10000042Mi1IAE"}
  {"Username": "test123test@test.com", "Email": "test123test@test.com", "Id": "0052g000003EUIUAA4"}
]
severity: DEBUG

I tried this query:

  index=abc | spath input=records{} | mvexpand records{} | table ProcessName, message, severity, Username, Email, Id

It returns 10 records, but all 10 records have the same values; every row repeats the first record. Is there a way to parse this array with all the key-value pairs? @gcusello @yuanliu
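A common pattern for this (a sketch, assuming the raw event is valid JSON and the array sits under the records key; the field name record is just an illustrative choice) is to extract the array as a multivalue field, expand it, and then run spath again over each element:

  index=abc
  | spath path=records{} output=record
  | mvexpand record
  | spath input=record
  | table ProcessName, message, severity, Username, Email, Id

mvexpand gives each array element its own row, and the second spath extracts Username, Email, and Id from that single element rather than from the whole array, so the rows no longer all repeat the first record.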
Hello! I recently upgraded my Splunk Enterprise servers from 9.1.2 to 9.2.1 and noticed the following web behaviors on the deployment server:
1. When searching for a hostname, the page takes a long time to load.
2. The server class and apps for (any) host are not reflected correctly. This was cross-checked against serverclass.conf on the CLI.
Wondering if anyone has faced this issue and whether it is a GUI bug.
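For comparing what the GUI shows against the on-disk configuration, a quick sketch (paths assume a default $SPLUNK_HOME):

  # Show the merged server class configuration and where each setting comes from
  $SPLUNK_HOME/bin/splunk btool serverclass list --debug

  # List the clients the deployment server currently knows about
  $SPLUNK_HOME/bin/splunk list deploy-clients

If the CLI output is correct while the GUI lags or disagrees, that points at the web layer rather than the configuration itself.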
This one didn't work:

  <done>
    <condition match="$job.resultCount$==0">
      <set token="Tokent">0</set>
    </condition>
    <condition>
      <set token="Tokent">$row.device_ip_address.value$</set>
    </condition>
  </done>

The one below only gives the first value of the field; I need to show the rest of the values of device_ip_address:

  <done>
    <condition match="$job.resultCount$==0">
      <set token="Tokent">0</set>
    </condition>
    <condition>
      <set token="Tokent">$result.device_ip_address$</set>
    </condition>
  </done>
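$result.fieldname$ only ever carries the field from the first result row, so one workaround sketch (assuming the panel's search can be changed) is to collapse all values into a single row before the token is read:

  <search>
    <query>... | stats values(device_ip_address) as device_ip_address
            | eval device_ip_address=mvjoin(device_ip_address, ", ")</query>
    <done>
      <condition match="$job.resultCount$==0">
        <set token="Tokent">0</set>
      </condition>
      <condition>
        <set token="Tokent">$result.device_ip_address$</set>
      </condition>
    </done>
  </search>

With every IP joined into one comma-separated string on row 1, $result.device_ip_address$ then contains the full list.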
I am using ingest actions to filter log messages before they are indexed in Splunk. I want to keep only the messages that match the keywords :ERROR: and :FATAL:; all other messages should not be indexed. However, the ingest actions filter in Splunk only excludes matching messages; it does not include them.
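One workaround sketch is to invert the condition: drop everything that does not contain either keyword, using a negative lookahead in the filter regex (the sourcetype name below is hypothetical):

  props.conf
  [my_sourcetype]
  TRANSFORMS-keep_errors_only = keep_errors_only

  transforms.conf
  [keep_errors_only]
  # Matches (and routes to nullQueue) any event containing neither :ERROR: nor :FATAL:
  REGEX = ^(?!.*(:ERROR:|:FATAL:)).*$
  DEST_KEY = queue
  FORMAT = nullQueue

The same negative-lookahead pattern should also work as the drop expression in an ingest actions ruleset, turning an exclude-only filter into an effective include.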
I want to show a custom message when the panel shows count=0, meaning the search currently returns no results but might in the future.
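A sketch in Simple XML (token name and message text are illustrative): set a token when the job finishes with zero results, and hang an <html> element off it:

  <search>
    <query>index=main ...</query>
    <done>
      <condition match="$job.resultCount$ == 0">
        <set token="show_empty_msg">true</set>
      </condition>
      <condition>
        <unset token="show_empty_msg"/>
      </condition>
    </done>
  </search>
  <html depends="$show_empty_msg$">
    <p>No results for the selected time range - data may appear later.</p>
  </html>

Putting depends="$show_empty_msg$" on the message and rejects="$show_empty_msg$" on the normal visualization ensures only one shows at a time.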
Hi All, my props and transforms are not working. I placed the props and transforms on the heavy forwarder. Can anyone please assist? I want to drop the lines below from being ingested into Splunk, but they still come through.

  #Date: 2024-05-03 00:00:01
  #Fields: date time s-sitename s-computername s-ip cs-method cs-uri-stem cs-uri-query s-port cs-username c-ip cs-version cs(User-Agent) cs(Cookie) cs(Referer) cs-host sc-status sc-substatus sc-win32-status sc-bytes cs-bytes time-taken https

props.conf:
  [mysourcetype]
  TRANSFORMS-drop_header = drop_header

transforms.conf:
  [drop_header]
  REGEX = ^#Date.+\n#Fields.+
  DEST_KEY = queue
  FORMAT = nullQueue
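One likely culprit (a guess, assuming these IIS-style logs are line-broken so each header line becomes its own event): the transform's REGEX is applied per event, so a pattern containing \n that spans two lines never matches. A sketch that drops each comment line independently:

  transforms.conf
  [drop_header]
  # Any event beginning with # (the IIS header lines) goes to the nullQueue
  REGEX = ^#
  DEST_KEY = queue
  FORMAT = nullQueue

Also worth checking that the stanza name in props.conf exactly matches the events' sourcetype and that the heavy forwarder was restarted after the change.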
I want to be shown when the percentage of errors 69 and 10001 is greater than 10, but the following search doesn't work. Can you help me?

  index="cdr"
  | search "Tipo_Trafico"="*" "Codigo_error"="*"
  | stats count(eval(Tipo_Trafico="MT")) AS Total_MT, count(eval(Codigo_error="69")) AS Error_69
  | eval P_Error_69=((Error_69*100/Total_MT))
  | stats count(eval(Tipo_Trafico="MT")) AS Total_MT, count(eval(Codigo_error="10001")) AS Error_10001
  | eval P_Error_10001=((Error_10001*100/Total_MT))
  | stats count by P_Error_69, P_Error_10001
  | where count>10
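A sketch of one possible fix (field names taken from the posted search): each stats discards the fields computed before it, so Error_69 and P_Error_69 are gone by the second stats. Computing both counts in a single stats and filtering on the percentages themselves may be closer to the intent:

  index="cdr" Tipo_Trafico="*" Codigo_error="*"
  | stats count(eval(Tipo_Trafico="MT")) AS Total_MT
          count(eval(Codigo_error="69")) AS Error_69
          count(eval(Codigo_error="10001")) AS Error_10001
  | eval P_Error_69=round(Error_69*100/Total_MT, 2)
  | eval P_Error_10001=round(Error_10001*100/Total_MT, 2)
  | where P_Error_69 > 10 OR P_Error_10001 > 10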
Hi All, I'm working hard to create a SIEM dashboard that has the AH list. Higher priority: 1) ab 2) CD 3) if 4) GH. Rest of the AH: 5) IJ 6) kl 7) MN. For each of these systems, I need a list of hosts associated with the AH and what is currently being ingested from the AH.
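If each AH maps to an index or sourcetype (an assumption - the post doesn't say how the systems are onboarded), a sketch for listing hosts and what is being ingested per system:

  | tstats count latest(_time) as last_seen where index=* by index, sourcetype, host
  | eval last_seen=strftime(last_seen, "%Y-%m-%d %H:%M:%S")
  | sort index, sourcetype, host

Filtering the index/sourcetype column against the priority list would then split the dashboard into the two groups.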
Here is my example search to start...

  index=data | timechart span=1d count by user

Now I am trying to build it out so that, over the last 30 days, I can get a count of new users that have not been seen on previous days. I tried some bin options and something like this, but no joy:

  index=data
  | stats min(_time) as firstTime by user
  | eval isNew=if(strftime(firstTime, "%Y-%m-%d") == strftime(_time, "%Y-%m-%d"), 1, 0)
  | where isNew=1

Any help?
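A sketch of one approach (assuming the search's time range reaches back far enough to establish each user's true first appearance): _time no longer exists after the stats, so derive the first-seen day from firstTime itself and keep only users whose first day falls inside the last 30 days:

  index=data
  | stats min(_time) as firstTime by user
  | where firstTime >= relative_time(now(), "-30d@d")
  | eval firstDay=strftime(firstTime, "%Y-%m-%d")
  | stats count as newUsers by firstDay
  | sort firstDay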
How do I fetch the fieldForLabel value using the token (option)? I have to pass the fieldForLabel to a query.

  <input type="dropdown" token="option">
    <label>Choose from options</label>
    <fieldForLabel>TEST</fieldForLabel>
    <fieldForValue>aaa</fieldForValue>
    <search>
      <query>| inputlookup keyvalue_pair.csv | dedup TEST | sort TEST | table TEST aaa</query>
    </search>
  </input>
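A sketch using the dropdown's <change> handler, where $label$ and $value$ are available, to capture the label into its own token:

  <input type="dropdown" token="option">
    <label>Choose from options</label>
    <fieldForLabel>TEST</fieldForLabel>
    <fieldForValue>aaa</fieldForValue>
    <search>
      <query>| inputlookup keyvalue_pair.csv | dedup TEST | sort TEST | table TEST aaa</query>
    </search>
    <change>
      <set token="option_label">$label$</set>
    </change>
  </input>

Panels can then reference $option_label$ (the TEST column) alongside $option$ (the aaa column) in their queries.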
index=_internal source=*splunkd.log* host=<all indexer hosts> bucketreplicator full earliest=-15m
| stats count dc(host) as num_indexer_blocked_by_peer by peer
| where num_indexer_blocked_by_peer > 0 AND count > 0
| join type=left peer
    [ search index=_introspection host=<all indexer hosts> hostwide earliest=-10m
      | stats values(data.instance_guid) as peer by host ]
index=hum_stg_app "msg.OM_MsgType"=REQUEST msg.OM_Body.header.transactionId=* "msg.service_name"="fai-np-notification" "msg.OM_Body.header.templateType"=vsf_device_auth_otp_template "msg.OM_Body.header.channelType{}"=sms "msg.OM_Body.header.organization"=VSF
| rename msg.OM_Body.header.transactionId as transactionId
| eval lenth=len(transactionId)
| sort 1000000 _time
| dedup transactionId _time
| search lenth=40
| rename _time as Time1
| eval Request_time=strftime(Time1,"%y-%m-%d %H:%M:%S")
| stats count by Time1 transactionId Request_time
| appendcols
    [| search index=hum_stg_app earliest=-30d fcr-np-sms-gateway "msg.service_name"="fcr-np-sms-gateway" "msg.TransactionId"=* "msg.NowSMSResponse"="{*Success\"}"
     | rename "msg.TransactionId" as transactionId_request
     | sort 1000000 _time
     | dedup transactionId_request _time
     | eval Time=case(like(_raw,"%fcr-np-sms-gateway%"),_time)
     | eval lenth=len(transactionId_request)
     | search lenth=40
     | dedup transactionId_request
     | stats count by transactionId_request Time ]
| eval Transaction_Completed_time=strftime(Time,"%y-%m-%d %H:%M:%S")
| eval Time_dif=Time-Time1
| eval Time_diff=(Time_dif)/3600
| fields transactionId transactionId_request Request_time Transaction_Completed_time count Time_diff Time Time1

I am getting the wrong value in Transaction_Completed_time.
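A likely cause (a sketch, not a confirmed diagnosis): appendcols pastes the subsearch's rows next to the main results purely by position, so row N's Transaction_Completed_time belongs to whatever transaction happened to land at row N of the subsearch, not to the transactionId on that row. One subsearch-free sketch that correlates by ID instead searches both event types at once and aggregates on the shared transaction ID:

  (index=hum_stg_app "msg.service_name"="fai-np-notification" "msg.OM_MsgType"=REQUEST)
      OR (index=hum_stg_app "msg.service_name"="fcr-np-sms-gateway" "msg.NowSMSResponse"="{*Success\"}")
  | eval txid=coalesce('msg.OM_Body.header.transactionId', 'msg.TransactionId')
  | where len(txid)=40
  | stats min(eval(if('msg.service_name'="fai-np-notification", _time, null()))) as Time1
          min(eval(if('msg.service_name'="fcr-np-sms-gateway", _time, null()))) as Time
          by txid
  | eval Request_time=strftime(Time1, "%y-%m-%d %H:%M:%S")
  | eval Transaction_Completed_time=strftime(Time, "%y-%m-%d %H:%M:%S")
  | eval Time_diff=(Time-Time1)/3600

This keeps the request and completion times on the same row for each transaction, so the difference is computed between matching events.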
Hi Team, when the status is H, it has to complete within the same day. The expected output for the sample data below is a count of 2 completed overall within the day. Thanks in advance! Sample data:

_time                            OVERAL  DT        NUM  STAT  FM  WLM  CS  OB  EM  RC  ER  ST
2024-03-07T01:50:00.000-05:00    X       20240307  5    C     C   C    H   X   X   X   X   X
2024-03-07T03:30:10.000-05:00    X       20240307  5    C     C   C    P   X   X   X   X   X
2024-03-07T03:40:07.000-05:00    X       20240307  5    C     C   H    H   H   H   H   H   H
2024-03-07T06:10:14.000-05:00    X       20240307  5    C     C   C    I   X   X   X   X   X
2024-03-07T07:10:16.000-05:00    X       20240307  5    C     C   C    H   X   X   X   X   X
2024-03-07T07:30:17.000-05:00    X       20240307  5    C     C   C    I   X   X   X   X   X
2024-03-07T08:20:18.000-05:00    X       20240307  5    C     C   C    C   I   C   I   C   C
2024-03-07T08:30:22.000-05:00    C       20240307  5    C     C   C    C   C   C   C   C   C
2024-03-07T02:20:01.000-05:00    X       20240307  5    C     C   C    X   X   X   X   X   X
2024-03-07T03:30:10.000-05:00    X       20240307  5    C     C   C    P   X   X   X   X   X
2024-03-07T03:40:07.000-05:00    X       20240307  5    C     C   H    H   H   H   H   H   H
2024-03-07T07:10:16.000-05:00    X       20240307  5    C     C   C    H   X   X   X   X   X
2024-03-07T07:30:17.000-05:00    X       20240307  5    C     C   C    I   X   X   X   X   X
2024-03-07T08:20:18.000-05:00    X       20240307  5    C     C   C    C   I   C   I   C   C
2024-03-07T08:30:22.000-05:00    C       20240307  5    C     C   C    C   C   C   C   C   C
2024-03-07T010:30:10.000-05:00   X       20240307  5    C     C   C    P   X   X   X   X   X
2024-03-07T22:40:07.000-05:00    X       20240307  5    C     C   H    H   H   H   H   H   H
2024-03-07T22:10:16.000-05:00    X       20240307  5    C     C   C    H   X   X   X   X   X
2024-03-07T23:30:17.000-05:00    X       20240308  5    C     C   C    I   X   X   X   X   X
2024-03-07T00:20:18.000-05:00    X       20240308  5    C     C   C    C   I   C   I   C   C
2024-03-08T08:30:22.000-05:00    C       20240308  5    C     C   C    C   C   C   C   C   C
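A rough sketch under heavy assumptions (that "completed within the day" means an OVERAL=C event on a calendar day that also saw an H in any status column; the base search is a placeholder):

  index=...
  | eval day=strftime(_time, "%Y-%m-%d")
  | eval had_H=if(STAT="H" OR FM="H" OR WLM="H" OR CS="H" OR OB="H" OR EM="H" OR RC="H" OR ER="H" OR ST="H", 1, 0)
  | eventstats max(had_H) as day_had_H by day
  | where OVERAL="C" AND day_had_H=1
  | stats count as completed_within_day

On the sample data, the two OVERAL=C rows on 2024-03-07 follow H rows on the same day, while 2024-03-08 has a C but no H, giving the expected count of 2.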
Hi All, I have created a lookup table Status.csv that contains all ticket statuses and whether they are SLA-relevant. However, because of incorrect data when the table was created, the values for all the statuses are wrong. I want to update the data for these statuses and add a few more status values to the lookup table. How do I do that? Please suggest.
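A sketch using inputlookup/outputlookup from the search bar (the column names Status and SLA_Relevant and the status values are illustrative - substitute the real headers and values of Status.csv):

  | inputlookup Status.csv
  | eval SLA_Relevant=case(Status="Resolved", "Yes", Status="Pending Vendor", "No", true(), SLA_Relevant)
  | append
      [| makeresults
       | eval Status=split("On Hold,Reopened", ","), SLA_Relevant="Yes"
       | mvexpand Status
       | fields Status SLA_Relevant ]
  | outputlookup Status.csv

The eval/case corrects existing rows, the append adds the new statuses, and outputlookup overwrites the file. Alternatively, the Splunk App for Lookup File Editing lets you edit the CSV cell-by-cell in the UI.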
We are writing log statements in Java and then reviewing the info and exception alerts. Our team then runs a Splunk search to count log statements by category. Many of our log statements can share multiple categories. We are using this reference for key-value pairs: https://dev.splunk.com/enterprise/docs/developapps/addsupport/logging/loggingbestpractices/ So in our log statements, we are doing

  LOG.info("CategoryA=true , CategoryG=true");

Of course, we aren't going to write "Category=false" in any logger, since it's inherent in the statement. Is this overall a good method to count values in Splunk by category, or do you recommend a better practice?
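With that format, each category becomes its own auto-extracted field, so one counting sketch (the index name is hypothetical) folds them back into a single multivalue field and tallies it:

  index=app_logs
  | foreach Category* [ eval categories=mvappend(categories, if('<<FIELD>>'="true", "<<FIELD>>", null())) ]
  | mvexpand categories
  | stats count by categories

An alternative design worth weighing: log one field with multiple values (e.g. categories="A,G") and split it at search time, which keeps the extracted field list stable as new categories are added.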
Hello, I have a problem: I can't see the Windows logs in Splunk Cloud. My architecture is as follows: UF -> HF -> Splunk Cloud. The logs reach the HF - I can see them with packet inspection using tcpdump, so port 9997 is open - but they are not being forwarded to the cloud. These are my inputs.conf files:

/opt/splunk/etc/apps/Splunk_TA_windows/local/inputs.conf

  ###### OS Logs ######
  [WinEventLog://Application]
  disabled = 0
  index = mx_windows
  start_from = oldest
  current_only = 0
  checkpointInterval = 5
  renderXml = true

  [WinEventLog://Security]
  disabled = 0
  index = mx_windows
  start_from = oldest
  current_only = 0
  evt_resolve_ad_obj = 1
  checkpointInterval = 5
  blacklist1 = EventCode="4662" Message="Object Type:(?!\s*groupPolicyContainer)"
  blacklist2 = EventCode="566" Message="Object Type:(?!\s*groupPolicyContainer)"
  renderXml = true

  [WinEventLog://System]
  disabled = 0
  index = mx_windows
  start_from = oldest
  current_only = 0
  checkpointInterval = 5
  renderXml = true

  ###### Forwarded WinEventLogs (WEF) ######
  [WinEventLog://ForwardedEvents]
  disabled = 0
  start_from = oldest
  current_only = 0
  checkpointInterval = 5
  ## The addon supports only XML format for the collection of WinEventLogs using WEF, hence do not change the below renderXml parameter to false.
  renderXml = true
  host = WinEventLogForwardHost
  index = mx_windows

/opt/splunk/etc/system/local/inputs.conf

  [splunktcp://9997]
  index = mx_windows
  disabled = 0

  [WinEventLog://ForwardedEvents]
  index = mx_windows
  disabled = 0
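Since the inputs look fine, the gap is most likely on the HF's output side. A troubleshooting sketch (paths assume a default install; Splunk Cloud normally supplies a forwarder credentials app that carries the correct outputs.conf and certificates):

  # Confirm the HF actually has an output group pointing at the cloud stack
  $SPLUNK_HOME/bin/splunk btool outputs list --debug

  # Look for forwarding errors such as blocked queues or TLS failures
  grep -iE "TcpOutputProc|blocked|sslv" $SPLUNK_HOME/var/log/splunk/splunkd.log | tail -50

Also confirm the mx_windows index exists in Splunk Cloud and that the outbound port used by the credentials app (typically 9997) is allowed from the HF.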
Hi all, I'd like to plot the difference between two values on a timechart.

Example data:

  _time   a    b
  t       10   1
  t+1s    11   1.5
  t+2s    12   2

Expected resulting data:

  _time   a    b    c
  t       10   1    9
  t+1s    11   1.5  9.5
  t+2s    12   2    10

I'm using the query

  index=indx sourcetype=src (Instrument="a" OR Instrument="b")
  | eval c = a - b
  | timechart values(a) values(b) values(c) span=1s

Any ideas where I'm going wrong?
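A sketch of the likely fix: since a and b come from different events (Instrument="a" vs Instrument="b"), eval c = a - b runs per event where one of the two fields is always null. Let timechart align them into the same time bucket first, then subtract:

  index=indx sourcetype=src (Instrument="a" OR Instrument="b")
  | timechart span=1s latest(a) as a latest(b) as b
  | eval c = a - b

(latest() is one choice of per-bucket aggregate; avg() would work too if several readings can land in the same second.)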
Hi, I set up the add-on on a cloud instance but am not seeing any data come in via HEC. Any ideas on how to troubleshoot? Thanks
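A first-step sketch: send a test event straight to the HEC endpoint and see what comes back (the stack name and token are placeholders; on Splunk Cloud the HEC endpoint is typically https://http-inputs-<stack>.splunkcloud.com on port 443):

  curl -k "https://http-inputs-<stack>.splunkcloud.com:443/services/collector/event" \
    -H "Authorization: Splunk <hec_token>" \
    -d '{"event": "hec smoke test", "sourcetype": "manual_test"}'

A {"text":"Success","code":0} reply means the token and network path are fine and the problem sits in the add-on's configuration; an error instead narrows it to the token, index permissions, or URL.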
How should I refine this query so that I can get all the fields in one table without using join, append, or any other subsearch?

  (index=whcrm OR index=whcrm_int) sourcetype="bmw-sl-gcdm-int-api" ("Sending POST consents to *" OR "Create / Update Consents done" OR "Error in sync-consent-dataFlow:*")
  | stats count(eval(match(_raw, "Sending POST consents to *"))) as Total,
          count(eval(match(_raw, "Create / Update Consents done"))) as Success,
          count(eval(match(_raw, "Error in sync-consent-dataFlow:*"))) as Error
  | eval ErrorRate = round((Error / TotalReceived) * 100, 2)
  | table Total, Success, Error, ErrorRate
  | append
      [ search (index=whcrm OR index=whcrm_int) (sourcetype="bmw-sl-gcdm-int-api" ("Sending POST consents to *" OR "Create / Update Consents done" OR "Error in sync-consent-dataFlow:*"))
        | rex field=message ": (?<json>\{[\w\W]*\})$"
        | rename properties.correlationId as correlationId
        | rename properties.gcid as GCID
        | rename properties.gcid as errorcode
        | rename properties.entity as entity
        | rename properties.country as country
        | rename properties.targetSystem as target_system
        | table correlationId GCID errorcode entity country target_system ]
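One subsearch-free sketch uses eventstats, which attaches the aggregate counts to every event without collapsing the rows. (Two apparent bugs in the posted query are left out of the sketch: ErrorRate divides by TotalReceived, which is never defined - Total from the stats is presumably intended - and properties.gcid is renamed twice, so the second rename to errorcode finds nothing.)

  (index=whcrm OR index=whcrm_int) sourcetype="bmw-sl-gcdm-int-api" ("Sending POST consents to *" OR "Create / Update Consents done" OR "Error in sync-consent-dataFlow:*")
  | eventstats count(eval(match(_raw, "Sending POST consents to"))) as Total
               count(eval(match(_raw, "Create / Update Consents done"))) as Success
               count(eval(match(_raw, "Error in sync-consent-dataFlow:"))) as Error
  | eval ErrorRate = round((Error / Total) * 100, 2)
  | rename properties.correlationId as correlationId, properties.gcid as GCID, properties.entity as entity, properties.country as country, properties.targetSystem as target_system
  | table correlationId GCID entity country target_system Total Success Error ErrorRate

Each row then carries its own correlation fields plus the overall totals, all from a single pass over the data.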
I am trying to forward data from a UF to a few indexers, but the indexers have dynamic IPs that keep changing. How does the UF know where to forward the data, and how can I tackle this problem? Also, can someone explain what SmartStore is and how it works?
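Two common approaches, sketched: point outputs.conf at DNS names instead of IPs, or, if the indexers are clustered, use indexer discovery so the UF asks the cluster manager for the current peer list (the hostname and key below are placeholders):

  outputs.conf on the UF

  [indexer_discovery:cluster1]
  pass4SymmKey = <key shared with the cluster manager>
  master_uri = https://cm.example.com:8089

  [tcpout:discovered_indexers]
  indexerDiscovery = cluster1

  [tcpout]
  defaultGroup = discovered_indexers

With indexer discovery, IP changes on the peers don't matter because the UF refreshes the list from the manager. SmartStore is a separate topic: it is an indexer storage model where warm buckets live in remote object storage (e.g. S3) and the indexers keep only a local cache, decoupling storage capacity from compute; it does not affect how forwarders find indexers.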