All Topics

Hello, good day! I have the values in the field Data as shown below:

2022-05-31 10:18:09   emea   2022-05-31
2022-05-31 10:18:14   apac   2022-05-31
2022-05-31 10:18:20   us

I want to show the time zone as well: if emea comes after the time, it should show CST. The output should be as follows:

2022-05-31 10:18:09 CST   emea   2022-05-31
2022-05-31 10:18:14 HKT   apac   2022-05-31
2022-05-31 10:18:20 EDT   us

Please help me with this. Thank you in advance, Veeru
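A minimal sketch of one possible SPL approach, assuming the region name sits inside the Data field; the field names tz, ts, and rest are illustrative:

| eval tz=case(match(Data, "emea"), "CST", match(Data, "apac"), "HKT", match(Data, "\bus\b"), "EDT")
| rex field=Data "^(?<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})(?<rest>.*)"
| eval Data=ts." ".tz.rest

case() maps each region to its zone label, and rex splits the timestamp from the remainder so the label can be spliced in between.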
Hi Splunkers, I have a request from a customer that I have never faced before. For one particular data model, the Email one, it is required that certain fields are always populated, even if the logs have these fields empty or missing. So, for example, it is required that the field subject is always filled; of course, if subject is not present in the events, we have to fill it with a token, like the fillnull command does. The particular part is that the customer requires that this filling is performed not at search time, with a fillnull command in the search, but by the data model itself; so, for example, if a log from the mail server arrives and it does not contain the subject field, or the field is not populated, the data model must fill it with a token value, so that when a search is executed, subject is already filled with this token. My question is: is this possible?
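A minimal sketch of one way this is commonly handled: a calculated field in props.conf on the search head, so the attribute is populated before the data model (and any acceleration) sees it. The sourcetype name mail_server and the token value "unknown" are assumptions:

# props.conf
[mail_server]
EVAL-subject = coalesce(subject, "unknown")

The same coalesce(subject, "unknown") expression could instead be defined as an eval-expression attribute inside the data model itself, mapped to the subject attribute.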
There are two queries: `query 1` will give the ID and TIME fields; `query 2` will give the list of SPECIAL_ID. I want to create a table with TIME, ID, IS_SPECIAL_ID, where IS_SPECIAL_ID is evaluated to true/false based on whether ID is part of the SPECIAL_ID list.
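A minimal sketch of one possible approach, running query 2 as a subsearch; the placeholders and the true/false strings are illustrative:

<query 1>
| join type=left ID
    [ search <query 2>
    | rename SPECIAL_ID AS ID
    | eval IS_SPECIAL_ID="true"
    | fields ID IS_SPECIAL_ID ]
| fillnull value="false" IS_SPECIAL_ID
| table TIME ID IS_SPECIAL_ID

The left join keeps every row from query 1; rows whose ID appears in the SPECIAL_ID list pick up the "true" flag, and fillnull marks the rest "false".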
Hi All, does the Splunk Security Essentials app also map our custom (user-defined) correlation searches to different MITRE tactics and techniques? Based on what I see, if we run the setup wizard it will do so for the predefined ones that come with ES or with the Security Essentials app itself. There is nothing mentioned about custom correlation searches that one sets up in ES.
Hi guys, I'm using ipinfo to check the IPs of my system.

<base search> | stats sum(Download) as Download by DestIP | sort 5 -Download | ipinfo DestIP

The problem is that it doesn't wait for the final result before calling the ipinfo command; it makes more than 5 requests, depending on how many DestIP values there are. Are there any solutions for this case?
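A minimal sketch of one possible mitigation: finalize the top-5 set with an explicit head before the external command, so ipinfo is handed at most five rows; if search previews are what trigger the extra calls, running the search with preview disabled is another thing to try:

<base search>
| stats sum(Download) AS Download BY DestIP
| sort - Download
| head 5
| ipinfo DestIP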
Hi all, I am using React and the Splunk JS SDK to create and manage secrets, which works fine, as long as you stay in JS. The secret is correctly written to passwords.conf. Now I am trying to list the secrets using the Python SDK and I cannot find the secrets that were created by the JS SDK. Am I missing anything? I thought the JS SDK was pretty straightforward when using the built-in StoragePasswords functions, and there are not many options for passing parameters to it (realm, username, secret). Can anyone help?
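Storage passwords are scoped to an owner/app namespace, so a likely culprit is that the Python client is connected under a different app context than the one the JS SDK wrote into. A minimal sketch of listing them with the Python SDK; the host, credentials, and app name my_app are assumptions:

# list_secrets.py: Splunk Python SDK (splunklib)
import splunklib.client as client

service = client.connect(
    host="localhost", port=8089,
    username="admin", password="changeme",
    owner="nobody", app="my_app",  # must match the namespace the JS SDK used
)

# Each storage password entry exposes its realm and username
for sp in service.storage_passwords:
    print(sp.realm, sp.username)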
Hello, I have a grouped pie chart that shows the number of occurrences based on some parameter falling into several predefined ranges (group1: 0-20, group2: 20-50, group3: 50-80, ...). I want to update the entire dashboard if some group on this chart is selected, filtering to only the data whose parameter is in the selected range. Thus I need to get 2 tokens with the limits of the range that was selected. $click.value$ seems to be not set properly, and things like this:

<drilldown>
  <eval token="BI_groupLow">case(match($click.value$, "less than 20"), 0)</eval>
  <eval token="BI_groupHigh">case(match($click.value$, "less than 20"), 20)</eval>
</drilldown>

don't seem to work either. Does anyone have an idea how to get the range limits in this case? Thanks in advance!
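A minimal sketch of one possible fix: inside an <eval> element the token needs quotes so match() receives a string, and each slice label gets its own case() branch. The slice labels below are assumptions about how the groups are named:

<drilldown>
  <eval token="BI_groupLow">case(match("$click.value$", "less than 20"), 0, match("$click.value$", "20-50"), 20, match("$click.value$", "50-80"), 50)</eval>
  <eval token="BI_groupHigh">case(match("$click.value$", "less than 20"), 20, match("$click.value$", "20-50"), 50, match("$click.value$", "50-80"), 80)</eval>
</drilldown>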
Hi Team, could you please help me with the below issue? I have created a Java custom business transaction. I am trying to set the business transaction to "Mark as permanent", but the option is always disabled (screenshot attached below). I have all the admin rights, and I do not understand how to enable the option. Could you please help me fix this issue? Thanks & Regards, Srinivas
Hi All, I have 3 events in Splunk that share one common field. Here is the example:

[2022-05-10 23:17:23,049] [INFO ] [] [c.c.n.t.e.i.T.JmsMessageEventData] [] - channel="NPP_MPIR.CHANNEL", productVersion="1.3.1-0-1-404089bc7", uuid="3c78031b-12b3-4694-ab88-3a265bf8499e", eventDateTime="2022-05-10T23:17:23.049Z", severity="INFO", code="JmsMessageEventData", component="mq.listener", category="default", serviceName="Mandated Payment Initiation", eventName="MANDATED_PAYMENT_INITIATION.SERVICE_START", message="Mandated Payment Initiation Event", entityType="MSG", start="1652188643002", messageIdentification="CTBAAUSNXXX20220510020220510131721", queueManagerName="PGT201", queueManagerHostname="10.39.9.38",

Initial:
[2022-05-10 23:17:24,425] [INFO ] [] [c.c.n.t.e.i.T.JmsMessageEventData] [] - eventDateTime="2022-05-10T23:17:24.425Z", severity="INFO", code="JmsMessageEventData", component="submission.sent", category="default", serviceName="Submission Service", eventName="PAYMENT_STATUS_REPORT.SENT", message="Customer initial status report sent to PAG", entityType="INSTR", externalSystem="PAG", start="1652188644418", stop="1652188644425", elapsed="7", exceptionInfo="null", messageIdentification="CTBAAUSNXXX20220510020220510131721", firstMessageTraceIdentification="2TDyn8AlRMud1mfUA49o6A"

Final:
[2022-05-10 23:17:30,528] [INFO ] [] [c.c.n.t.e.i.T.JmsMessageEventData] [] - eventDateTime="2022-05-10T23:17:30.528Z", severity="INFO", code="JmsMessageEventData", component="submission.sent", category="default", serviceName="Submission Service", eventName="PAYMENT_STATUS_REPORT.SENT", message="Customer final status report sent to PAG", entityType="INSTR", externalSystem="PAG", start="1652188650520", stop="1652188650528", elapsed="8", exceptionInfo="null", messageIdentification="CTBAAUSNXXX20220510020220510131721", firstMessageTraceIdentification="2TDyn8AlRMud1mfUA49o6A",

These are the 3 events, correlated by the common field "messageIdentification". I need to pair events 1 and 2, and also events 1 and 3, get the time difference between each pair, and calculate what percentage of events complete in less than 15 seconds and less than 30 seconds. I tried using the transaction command but was not able to get the result; I think I am using it wrong. Can anyone help me with this? Thanks in advance.
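A minimal sketch of one stats-based approach that avoids transaction entirely; the stage classification below is an assumption based on the eventName and message values in the sample:

<base search>
| eval stage=case(eventName=="MANDATED_PAYMENT_INITIATION.SERVICE_START", "start",
                  match(message, "initial"), "initial",
                  match(message, "final"), "final")
| stats min(eval(if(stage=="start", _time, null()))) AS start_time
        min(eval(if(stage=="initial", _time, null()))) AS initial_time
        min(eval(if(stage=="final", _time, null()))) AS final_time
        BY messageIdentification
| eval initial_delay=initial_time-start_time, final_delay=final_time-start_time
| stats count AS total
        sum(eval(if(final_delay<15, 1, 0))) AS under15
        sum(eval(if(final_delay<30, 1, 0))) AS under30
| eval pct_under15=round(100*under15/total, 2), pct_under30=round(100*under30/total, 2)

Grouping by messageIdentification yields both the start-to-initial and start-to-final deltas in one pass; the final stats turns the 15 s and 30 s thresholds into percentages.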
This looks easy but I couldn't figure it out; any help is appreciated. How do I extract the user email from the raw message and assign it to a field? For example, here is my event message:

message: SpeciaService: Received Status for xxxxxxx Message=xxx(timeStamp=xxxx, job=1234(super=xxxx(id=1376, userId=xxxxx@xxxx.com , status = success)

I want to generate a table with userId and status fields from the event logs that match 'SpeciaService' events. I tried the below, and it didn't work:

index=xxxx-* SERVICE="xxx-service" | rex field=SpeciaService: Exception "\S* (?<userId>\S*)" | eval status = if(exception, error:success ) | table userId, status
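A minimal sketch of one possible extraction; rex runs against _raw by default, and the patterns below assume the key=value layout shown in the sample:

index=xxxx-* SERVICE="xxx-service" "SpeciaService"
| rex "userId=(?<userId>[^\s,]+)"
| rex "status\s*=\s*(?<status>\w+)"
| table userId status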
Hi, I have an index called "myindex" and several questions about it:

1. How can I remove a specific date range from a specific index and force it to be reindexed? (CLI or web?)
2. How can I view the percentage/status of the current indexing job? (CLI or web?)
3. How can I force a reindex of a specific directory? (CLI or web?)
4. I have 2 separate indexes (1: daily, 2: ondemand). The first indexes the path /opt/daily, the second indexes /opt/ondemand. Every night a script syncs the daily path, and it is indexed correctly. The issue is that when I put today's log in the ondemand path, it is indexed correctly, but the next day, when the daily script runs, the daily index does not update correctly and Splunk only shows the log entries from after that point!

E.g.:
1. I update the ondemand path, and it contains today's log from 00:00 to 11:00.
2. The next day, after the script runs and the daily path updates, Splunk only shows 11:00 to 23:59.

Any idea? Thanks,
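For question 1, a minimal sketch of the search-time route; note that delete only hides events from searches (it does not reclaim disk space), requires a role with the can_delete capability, and is irreversible. The date range here is illustrative:

index=myindex earliest=05/01/2022:00:00:00 latest=05/02/2022:00:00:00 | delete

Reindexing the same files afterwards generally also requires making the forwarder forget it has already read them, since the fishbucket checkpoint otherwise prevents unchanged files from being picked up again; that checkpoint behavior may be what is biting the daily/ondemand setup in question 4 as well.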
Hi All, I am trying to deploy Splunk under a different web context. By default, when I run this command:

/opt/splunk/bin/splunk start --accept-license

it starts the web interface with "/" as the deployment web context, and I can access it at http://localhost:8000/. Now I would like to access Splunk like this: http://localhost:8000/splunk-local, so that all the redirect URLs would look like http://localhost:8000/splunk-local/en-US/app/launcher/home, http://localhost:8000/splunk-local/en-US/account/login?return_to=%2Fen-US%2Fapp%2Flauncher%2Fhome

Reason for asking this question: in a Kubernetes environment, it's a bit challenging to configure networking for applications using the default "/" context.

Thanks, Vikas @jho-splunk @daniel333 @48tfhd86gv @gcusello @dstromberg
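A minimal sketch of the web.conf setting usually used for this; the path value is your choice:

# $SPLUNK_HOME/etc/system/local/web.conf
[settings]
root_endpoint = /splunk-local

After a restart, Splunk Web should serve under http://localhost:8000/splunk-local and prefix its redirect URLs accordingly.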
Version: Splunk Enterprise 8.1.3. I have a data source with a field that is an IP address (the following IP addresses are examples). If I search for one IP, the response time is quite good:

earliest=05/30/2022:00:00:00 latest=05/30/2022:04:00:00 index=firewall src_ip=1.1.1.1

But if I search for another IP, the response time is slow:

earliest=05/30/2022:00:00:00 latest=05/30/2022:04:00:00 index=firewall src_ip=2.2.2.2

Is there a reason for the difference in search speed depending on the IP?
Hi, I'm rather new to this community, but trying to figure this out. I have table 1 with two fields (src_ip and dest_ip) and another table 2 with an IP field. I would like to highlight any IPs in table 2 that match an IP in either field of table 1. Is there an easy way to accomplish this? Thanks in advance.
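A minimal sketch of one way to flag the matches; the placeholder searches and the yes/no flag values are illustrative:

<table 2 search>
| join type=left IP
    [ search <table 1 search>
    | eval IP=mvappend(src_ip, dest_ip)
    | mvexpand IP
    | eval is_match="yes"
    | fields IP is_match ]
| fillnull value="no" is_match

The resulting is_match column can then drive color-based highlighting through the table's Format options in a dashboard.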
Hi Experts, I'm new to Splunk. I have created a dashboard into which logs are ingested every minute; it shows how many logs were ingested as a percentage, i.e., on an hourly basis it calculates the value as total ingested logs / 60. However, in the drill-down part, I would like to show the actual times when the logs were not ingested. Let me know if there is a mechanism to achieve this. Regards, Karthikeyan
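A minimal sketch of a drilldown search that lists the empty minutes; the base search is a placeholder:

<base search>
| timechart span=1m count
| where count=0
| table _time

timechart emits a zero-count row for every minute in the range, so filtering on count=0 leaves exactly the minutes with no ingestion.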
Similar to https://community.splunk.com/t5/Splunk-Search/How-do-I-extract-all-fields-from-userdata/m-p/596078#M207501. Could you please help me with this? I use:

source=http:splunk_ecp_IPC2_kafka_logs sourcetype=yo_kafka_logs properties YoRouterLoggingInterceptor | rex "properties=(?<properties>\{.*\})" | table chatOriginUrl, firstName, lastName, conversationId, clientSourceId, engagedHandler

The string is:

30 May 2022 08:38:20,741 log_level='DEBUG' thread_name='yoRouterExecutor-9' hostName=yo-router-b-deployment-39-gb2hf class_name='com.al.wsgcat.ngsp.yo.logging.YoRouterLoggingInterceptor' app=NGSPYO event_name=YOROUTER correlationId=BLiLDEyd-24052022-070434975 URI=https://yo.al.com/yo/gateway/v1/handleRouting,Method=POST,Headers=[Accept:"application/json", Content-Type:"application/json", Content-Length:"2388"],Request body={"yoMessage":{"messageText":"Representative has disconnected","from":null,"to":"mglueck@ngspchattims.al.com","properties":{"lineOfBusiness":"MYCA","messageCategory":"returningasync","messageCount":"","yoId":"svc.yo7@ngspchattims.al.com/Smack","transferIntentCode":"","experience":"platinum","checkoutStatus":"","customerMemberConnectionId":"44f4d6263627d8267385ea64d8bfc057","requestHandler":"","messageType":"ccpdisconnected","browserVersion":"Chrome 101.0.4951.61","action":"","workGroupName":"Social_Media_Team","chatType":null,"aao_locale":"en-US","microBotIntent":null,"deviceType":"mobile","applicationVersion":"1.0","interactionId":"159MS6U2J6NFHGP4","clientSourceId":"smrt","deviceOS":"Android 12","chatOriginUrl":"https://online.al.com/myca/mycaassist/us/startChat.do?request_type=authreg_home","messageId":"f3b5c925-2ac9-41a5-9917-41b0edb9e065","chatSessionId":"s_675f1a75-94b7-4e02-a240-94ef07b25c6e","masterBotIntent":null,"messageOrigin":"ccp","firstName":"J","userGroups":"","intentCode":"offers_generic","alSession":"","bbv":"6cf84eea-a1270454-e62fd5be-273cb071","smallCustomerArt":"","escalationIndicator":"","customerNumber":"CRPXMSYRO9UK7P3","riskflag":"","queuedTimeStamp":"","toId":"svc.yo24@ngspchattims.al.com/Smack","lastName":"","conversationHeader":"","customerProduct":"137","correlation-id":"f3b5c925-2ac9-41a5-9917-41b0edb9e065","channel-user-id":"44f4d6263627d8267385ea64d8bfc057","locale":"en-US","gatekeeper":"DF25AD3025E28FFB6B6C8701A1DA0DEEF8DA561973401A20FDC35FBFDB68118DEF63E653045C3B52BCDADCE57398C054AEA7B99DCD0FA2B1628E31E96AFE7BC0EC16F04DF6BA0CF2406C14EF3BFC6ECD73F4F8CC155AAD568EB6F44816A8C576667749FA70F9B9F48A99EC3723D2AEABEF11BBC65DB47E317B99BB95CC71D8D03B394999B87CC149618E59061DD0AD06A","historicalChat":"","confidenceScore":"","creditFlag":"N","engagedHandler":"mglueck","botId":"","channelId":"web","productCreatedDate":"","conversationId":"","conversationTopic":null,"languageId":"US","customerMemberId":"","ccpId":"mglueck","sessionId":"itc_9d9907d7-e64d-475f-b9ea-21b26e6b2797","globalCustomerMemberId":"","pegaMessageId":null,"createdDate":"2022-05-30T15:38:18.481Z","customerMemberIPAddress":"192.16.1","waitTime":"1358"}},"routeCode":"CCP","xmppId":"mglueck"}
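A minimal sketch of one possible approach: extract the whole JSON request body and let spath pull the nested keys, since the wanted fields live under yoMessage.properties in the sample. The field name body is illustrative:

source=http:splunk_ecp_IPC2_kafka_logs sourcetype=yo_kafka_logs properties YoRouterLoggingInterceptor
| rex "Request body=(?<body>\{.*\})"
| spath input=body path=yoMessage.properties.chatOriginUrl output=chatOriginUrl
| spath input=body path=yoMessage.properties.firstName output=firstName
| spath input=body path=yoMessage.properties.lastName output=lastName
| spath input=body path=yoMessage.properties.conversationId output=conversationId
| spath input=body path=yoMessage.properties.clientSourceId output=clientSourceId
| spath input=body path=yoMessage.properties.engagedHandler output=engagedHandler
| table chatOriginUrl firstName lastName conversationId clientSourceId engagedHandler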
I am encountering a strange issue: when I use transaction and at the end sort by duration, the highest duration shown is 15000, but when I remove transaction, the highest duration shown is 17000!!! FYI 1: the correct value is 17000, and there is no special filter here. FYI 2: duration is printed directly in the log; I just use transaction to aggregate the two lines.

Here it is with the transaction command:

| rex "actionName.*\.(?<actionName>\w+\.\w+)\]"
| rex "duration\[(?<duration>\d+)"
| rex "transactionId\[(?<transactionId>\w+-\w+-\w+-\w+-\w+)"
| transaction transactionId
| sort - duration
| table duration actionName username

Here it is without transaction:

| rex "actionName.*\.(?<actionName>\w+\.\w+)\]"
| rex "duration\[(?<duration>\d+)"
| rex "transactionId\[(?<transactionId>\w+-\w+-\w+-\w+-\w+)"
| sort - duration
| table duration actionName username

Here is the log:

2022-05-30 12:39:34,262 INFO  [APP] [Act] actionName[us.st.zxc.asda.app.session.protector.QueryOnData.Allow] parameters[] transactionId[8d135d45-c117-4781-a3ed-9a6a9db7ce4d] username[ABC] startTime[1653898174262]
2022-05-30 12:42:26,109 INFO  [APP] [Act] actionName[us.st.zxc.asda.app.session.protector.QueryOnData.Allow] transactionId[8d135d45-c117-4781-a3ed-9a6a9db7ce4d] duration[171847] status[done]

Any idea? Thanks
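One detail worth knowing, shown in the stats-based sketch below: transaction generates its own duration field (the time span between the first and last event of each transaction), which replaces the duration extracted by rex, so the sorted values are no longer the logged ones. Aggregating with stats avoids that name collision, and sort 0 lifts sort's default 10000-row limit:

| rex "actionName.*\.(?<actionName>\w+\.\w+)\]"
| rex "duration\[(?<duration>\d+)"
| rex "transactionId\[(?<transactionId>\w+-\w+-\w+-\w+-\w+)"
| stats max(duration) AS duration values(actionName) AS actionName values(username) AS username BY transactionId
| sort 0 - duration
| table duration actionName username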
Hi, we have implemented a custom command which queries an external REST API and pulls the data into the Splunk search page. The challenge we are facing is that when the response data is huge, the Splunk search page waits a couple of minutes (more than 5) without showing any data. The results of the API come in the form of partitions. Let's say we have 100k records in the API result; all those 100k rows are split into 100 partitions, and we need to iterate 100 times to get all 100k records. If we could send each partition's data to Splunk and have the results appended to the search page as each partition arrives, the end user would see the data as soon as possible instead of waiting for minutes. My custom command is a generating command. I would like to know if there is any way to send the data in chunks to the Splunk page instead of waiting to pull all 100k records. We tried a couple of things, like yield (our code is in Python, using the Splunk Python SDK) and enabling the streaming attribute. Please help me figure out a way to send the data in chunks from a generating custom command. Thank you.
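A minimal sketch of a generating command that yields records as each partition arrives rather than buffering everything first; fetch_partition() is a hypothetical stand-in for your REST client logic, and the partition count is illustrative:

# partition_fetch.py: Splunk Python SDK (splunklib) generating command
import sys
import time
from splunklib.searchcommands import dispatch, GeneratingCommand, Configuration

def fetch_partition(p):
    # Hypothetical stand-in for one paginated REST call
    return [{"partition": p, "value": p * 10}]

@Configuration()
class PartitionFetchCommand(GeneratingCommand):
    def generate(self):
        for p in range(100):               # one REST call per partition
            for row in fetch_partition(p):
                # Yield each record immediately instead of collecting a list
                yield {"_time": time.time(), "_raw": str(row)}

dispatch(PartitionFetchCommand, sys.argv, sys.stdin, sys.stdout, __name__)

Even with per-record yields, output travels back to splunkd in protocol chunks, so the UI may still refresh in batches rather than strictly row by row.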
Hi, I am trying to use the transaction command, but actionName comes out empty!

Here is my SPL:

| rex "actionName.*\.(?<actionName>\w+\.\w+)\]"
| rex "duration\[(?<duration>\d+)"
| rex "transactionId\[(?<transactionId>\w+-\w+-\w+-\w+-\w+)"
| transaction transactionId
| table duration actionName username

Here is the current result:

duration    actionName           username
171847                           ABC

Here is the expected result:

duration    actionName           username
171847      QueryOnData.Allow    ABC

Here is the log:

2022-05-30 12:39:34,262 INFO  [APP] [Act] actionName[us.st.zxc.asda.app.session.protector.QueryOnData.Allow] parameters[] transactionId[8d135d45-c117-4781-a3ed-9a6a9db7ce4d] username[ABC] startTime[1653898174262]
2022-05-30 12:42:26,109 INFO  [APP] [Act] actionName[us.st.zxc.asda.app.session.protector.QueryOnData.Allow] transactionId[8d135d45-c117-4781-a3ed-9a6a9db7ce4d] duration[171847] status[done]
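A minimal sketch of a stats-based alternative that merges the fields explicitly instead of relying on how transaction combines them (transaction also computes its own duration field, which would otherwise shadow the extracted one):

| rex "actionName.*\.(?<actionName>\w+\.\w+)\]"
| rex "duration\[(?<duration>\d+)"
| rex "transactionId\[(?<transactionId>\w+-\w+-\w+-\w+-\w+)"
| stats values(actionName) AS actionName values(duration) AS duration values(username) AS username BY transactionId
| table duration actionName username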
I received this image from support, and I would like to create a panel in my dashboard to mimic this information. How would I go about doing that? I was trying with the current query, but am not having luck.

index=_introspection data.normalized_pct_cpu=* sourcetype=splunk_resource_usage host=idx* | stats avg(data.normalized_pct_cpu) AS cpu_usage BY host | table host cpu_usage

I am using data.normalized_pct_cpu, as the docs state that it is the "percentage of CPU usage across all cores; 100% is equivalent to all CPU resources on the machine", which seems to be what I want, but I'm not sure if that is the best way to go about this.
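If the support image shows CPU over time per host, a minimal sketch of a timechart variant of the same search; the span is an assumption:

index=_introspection sourcetype=splunk_resource_usage host=idx* data.normalized_pct_cpu=*
| timechart span=10m avg(data.normalized_pct_cpu) AS cpu_usage BY host

Rendered as a line chart panel, this yields one series per indexer, whereas the stats version produces a single averaged value per host over the whole time range.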