All Topics



I have successfully set up the Java app agent to connect to the SaaS controller. I can view the BTs, tiers, and node info in the controller. However, when I check the errors in my application, it continually reports UnknownHostException: risktest.saas.appdynamics.com: Name or service not known. I have checked the Spring Boot log but cannot find these errors, so I'm not sure where the exceptions come from. I know the message means the host cannot be resolved; however, I configured proxyHost and proxyPort at startup. Here is the full stack trace for this exception:

java.net.UnknownHostException: java.net.UnknownHostException
    java.net.Inet4AddressImpl.lookupAllHostAddr(Inet4AddressImpl.java:-2)
    java.net.InetAddress$PlatformNameService.lookupAllHostAddr(InetAddress.java:929)
    java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1519)
    java.net.InetAddress$NameServiceAddresses.get(InetAddress.java:848)
    java.net.InetAddress.getAllByName0(InetAddress.java:1509)
    java.net.InetAddress.getAllByName(InetAddress.java:1368)
    java.net.InetAddress.getAllByName(InetAddress.java:1302)
    okhttp3.Dns$Companion$DnsSystem.lookup(Dns.kt:49)
    okhttp3.internal.connection.RouteSelector.resetNextInetSocketAddress(RouteSelector.kt:164)
    okhttp3.internal.connection.RouteSelector.nextProxy(RouteSelector.kt:129)
    okhttp3.internal.connection.RouteSelector.next(RouteSelector.kt:71)
    okhttp3.internal.connection.ExchangeFinder.findConnection(ExchangeFinder.kt:205)
    okhttp3.internal.connection.ExchangeFinder.findHealthyConnection(ExchangeFinder.kt:106)
    okhttp3.internal.connection.ExchangeFinder.find(ExchangeFinder.kt:74)
    okhttp3.internal.connection.RealCall.initExchange$okhttp(RealCall.kt:255)
    okhttp3.internal.connection.ConnectInterceptor.intercept(ConnectInterceptor.kt:32)
    okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.kt:109)
    okhttp3.internal.cache.CacheInterceptor.intercept(CacheInterceptor.kt:95)
    okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.kt:109)
    okhttp3.internal.http.BridgeInterceptor.intercept(BridgeInterceptor.kt:83)
    okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.kt:109)
    okhttp3.internal.http.RetryAndFollowUpInterceptor.intercept(RetryAndFollowUpInterceptor.kt:76)
    okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.kt:109)
    okhttp3.internal.connection.RealCall.getResponseWithInterceptorChain$okhttp(RealCall.kt:201)
    okhttp3.internal.connection.RealCall.execute(RealCall.kt:154)
    com.cisco.mtagent.utils.NetworkUtils.executeCall(NetworkUtils.java:199)
    com.cisco.mtagent.tenant.MTAgentTenantAPI.executeCall(MTAgentTenantAPI.java:906)
    com.cisco.argento.transport.NetworkUtilities.genericHTTPRequest(NetworkUtilities.java:223)
    com.cisco.argento.transport.NetworkUtilities.genericHTTPRequest(NetworkUtilities.java:173)
    com.cisco.argento.transport.AuthUtilities.sendAuthService(AuthUtilities.java:170)
    com.cisco.argento.transport.AuthUtilities.sendAuthService(AuthUtilities.java:153)
    com.cisco.argento.transport.AuthUtilities._refreshToken(AuthUtilities.java:123)
    com.cisco.argento.transport.AuthUtilities.refreshToken(AuthUtilities.java:96)
    com.cisco.argento.management.HeartbeatThread.heartbeatAndCheckPolicyFile(HeartbeatThread.java:143)
    com.cisco.argento.management.HeartbeatThread.initialRegistration(HeartbeatThread.java:131)
    com.cisco.argento.management.HeartbeatThread.launchManagerServerHeartbeatThread(HeartbeatThread.java:87)
    com.cisco.argento.loadhandlers.BootstrapLoadHandler.bootstrapArgento(BootstrapLoadHandler.java:292)
    com.cisco.argento.loadhandlers.BootstrapLoadHandler.<init>(BootstrapLoadHandler.java:99)
    jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(NativeConstructorAccessorImpl.java:-2)
    jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    java.lang.reflect.Constructor.newInstance(Constructor.java:490)
    org.picocontainer.injectors.AbstractInjector.newInstance(AbstractInjector.java:145)
    org.picocontainer.injectors.ConstructorInjector$1.run(ConstructorInjector.java:342)
    org.picocontainer.injectors.AbstractInjector$ThreadLocalCyclicDependencyGuard.observe(AbstractInjector.java:270)
    org.picocontainer.injectors.ConstructorInjector.getComponentInstance(ConstructorInjector.java:364)
    org.picocontainer.injectors.AbstractInjectionFactory$LifecycleAdapter.getComponentInstance(AbstractInjectionFactory.java:56)
    org.picocontainer.behaviors.AbstractBehavior.getComponentInstance(AbstractBehavior.java:64)
    org.picocontainer.behaviors.Stored.getComponentInstance(Stored.java:91)
    org.picocontainer.DefaultPicoContainer.getInstance(DefaultPicoContainer.java:699)
    org.picocontainer.DefaultPicoContainer.getComponent(DefaultPicoContainer.java:647)
    org.picocontainer.DefaultPicoContainer.getComponent(DefaultPicoContainer.java:678)
    com.cisco.argento.core.ArgentoPicoContainer.customMethodHandlerBuilder(ArgentoPicoContainer.java:148)
    com.cisco.mtagent.boot.registry.MethodHandlerRegistry.createMethodHandler(MethodHandlerRegistry.java:66)
    com.cisco.mtagent.boot.registry.MethodHandlerRegistry.createMethodHandlersOnRuleCreation(MethodHandlerRegistry.java:45)
    com.cisco.mtagent.instrumentation.InstrumentationRule.<init>(InstrumentationRule.java:115)
    com.cisco.mtagent.config.AgentConfiguration.addInstrumentationRule(AgentConfiguration.java:324)
    com.cisco.mtagent.config.AgentConfiguration.processInstrumentationSection(AgentConfiguration.java:315)
    com.cisco.mtagent.config.AgentConfiguration.configureTenant(AgentConfiguration.java:280)
    com.cisco.mtagent.config.AgentConfiguration.configureAgent(AgentConfiguration.java:195)
    com.cisco.mtagent.instrumentation.InstrumentationBootstrap.initializeMultiTenantAgent(InstrumentationBootstrap.java:80)
    com.cisco.mtagent.core.AgentPicoContainer.bootstrapMultiTenantAgent(AgentPicoContainer.java:60)
    jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(NativeMethodAccessorImpl.java:-2)
    jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    java.lang.reflect.Method.invoke(Method.java:566)
    com.cisco.mtagent.entry.MTAgent$2.run(MTAgent.java:324)
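For reference, a minimal sketch of how proxy settings are typically passed to the AppDynamics Java agent as JVM system properties at startup. The proxy host/port, jar paths, and controller values below are placeholders, not taken from the post:

```
java -javaagent:/opt/appdynamics/javaagent.jar \
     -Dappdynamics.controller.hostName=risktest.saas.appdynamics.com \
     -Dappdynamics.controller.port=443 \
     -Dappdynamics.controller.ssl.enabled=true \
     -Dappdynamics.http.proxyHost=proxy.example.com \
     -Dappdynamics.http.proxyPort=3128 \
     -jar app.jar
```

Note that the trace above originates in the agent's own heartbeat thread (com.cisco.argento), so the resolution failure happens inside the agent's HTTP client rather than in the Spring Boot application, which would explain why the application log shows nothing.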
There is a saved search that has been orphaned. When I attempted to reassign it to another user, such as admin or nobody, it didn't let me and showed the warning message below: cannot change user and/or app context of a report that is embedded
I'm not sure where to address the problem, but let's try here: the documentation says that Splunk sets the locale based on the browser language. That's fine; however, if the browser is set not to English (GB) but to some other language (e.g. Polish), Splunk sets the locale to English (US). This is pretty uncomfortable, as most of the world uses a different time format than the US. Is it possible to change this behaviour in Splunk settings? And could future Splunk releases use English (GB) by default?
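As a possible workaround (not a settings change), Splunk Web carries the locale as the first path segment of the URL, so a GB locale can be forced explicitly by changing that segment; the hostname and port below are placeholders:

```
# Browser-negotiated default
http://splunk.example.com:8000/en-US/app/search/search

# Forcing the en-GB locale (and its DD/MM/YYYY time format) explicitly
http://splunk.example.com:8000/en-GB/app/search/search
```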
Hello, I have some XML files coming in, which is working fine. However, despite setting TIME_FORMAT to %d/%m/%Y %H:%M:%S, it is still putting some events into indexes with MM/DD/YYYY. The time format is set in a props.conf file for my input, but it appears to be ignoring it. I've also noticed that, despite me telling it to use a particular sourcetype, it is making up its own that isn't in my instance. Could that be the reason? If so, why? Any ideas? Thanks in advance.
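For reference, a minimal props.conf sketch of the kind of stanza involved; the sourcetype name and TIME_PREFIX are placeholders. Note that props.conf timestamp settings are keyed by sourcetype, so if Splunk falls back to an auto-learned sourcetype (e.g. one with a numeric suffix), this stanza never matches and TIME_FORMAT is silently ignored:

```
# props.conf (must be applied where parsing happens, usually the indexer or heavy forwarder)
[my_xml_sourcetype]
TIME_PREFIX = <timestamp>
TIME_FORMAT = %d/%m/%Y %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 30
```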
I get lots of these messages from my index servers on a Splunk Enterprise deployment:

05-06-2021 12:18:08.218 +0200 WARN MetaData::string - Received metadata string exceeding maxLength -- it will be truncated as an indexed extraction. Only searches scanning _raw will be able to find it as a whole word.

source: /opt/splunk/var/log/splunk/splunkd.log, splunk_server: all my index servers. Google does not give me anything.

Edit: I turned on MetaData::string debugging and got more detailed info:

Received metadata string exceeding maxLength length=1642 maxLength=1000

But I am still not able to find where to change maxLength for metadata.
Hi, I'm new to Splunk. How do I have to set the props.conf in the indexer so that my JSON reads correctly? I would like to have the fields: LOGTIME, CMDBID, RunID and Msg per event. [{"LOGTIME" : "06-05-2021 11:27:12","CMDBID" : "TEST","RunID" : "356166995","Msg" : "************************************************************************ ** ucxja64m version 12.2.0+build.2108 changelist 1530016094 ** ** JOB 356166995 (ProcID:0012059010) START AT 04.05.2021 / 15:48:42 ** ** UTC TIME 04.05.2021 / 13:48:42 ** ** TEXT= Job started ** ************************************************************************ total 72 drwxr-xr-x 11 utpuc4 utpuc4 8192 May 04 12:15 . drwxr-xr-x 3 utxuc4 utxuc4 256 Aug 04 2020 .. -rw------- 1 utpuc4 utpuc4 8546 Apr 27 13:02 .bash_history drwxr-x--- 2 utpuc4 utpuc4 4096 Apr 27 10:38 .histfiles -rw-r--r-- 1 utpuc4 utpuc4 0 Nov 06 2019 .lesshsQ -rw------- 1 utpuc4 utpuc4 47 Mar 14 2019 .lesshst -rw-r----- 1 utpuc4 utpuc4 1727 Aug 19 2019 .profile drwx------ 2 utpuc4 utpuc4 256 Dec 23 2019 .ssh drwxr-x--- 3 utpuc4 utpuc4 256 Aug 31 2018 CAPKI drwxr-xr-x 6 utpuc4 utpuc4 256 Aug 20 2020 V12.3.3_2020-08-18 drwxr-x--- 6 utpuc4 utpuc4 256 Aug 30 2018 V122_HF1 drwxr-xr-x 2 utpuc4 utpuc4 256 Sep 04 2018 archive lrwxrwxrwx 1 utpuc4 utpuc4 23 Jan 31 2019 global -> /var/uc4/uc4global/prod lrwxrwxrwx 1 utpuc4 utpuc4 28 Sep 13 2018 java8_64 -> /opt/java/java8_64_8.0.0.521 lrwxrwxrwx 1 utpuc4 utpuc4 11 Sep 04 2018 latest -> ./V122_HF1/ drwxr-xr-x 2 utpuc4 utpuc4 256 Jul 13 2018 lost+found drwxr-xr-x 3 utpuc4 utpuc4 256 Sep 03 2018 network -rw------- 1 utpuc4 utpuc4 0 Aug 03 2020 nohup.out drwxr-xr-x 7 utpuc4 utpuc4 256 Feb 17 2020 share ************************************************************************ ** ucxja64m version 12.2.0+build.2108 changelist 1530016094 ** ** JOB 356166995 (ProcID:0012059010) ENDED AT 04.05.2021 / 15:48:42 ** ** UTC TIME 04.05.2021 / 13:48:42 ** ** TEXT= Job ended RETCODE=00 ** 
************************************************************************ "}, {"LOGTIME" : "06-05-2021 11:27:12","CMDBID" : "TEST","RunID" : "356178213","Msg" : "************************************************************************ ** ucxja64m version 12.2.0+build.2108 changelist 1530016094 ** ** JOB 356178213 (ProcID:0023593346) START AT 04.05.2021 / 15:49:53 ** ** UTC TIME 04.05.2021 / 13:49:53 ** ** TEXT= Job started ** ************************************************************************ total 72 drwxr-xr-x 11 utpuc4 utpuc4 8192 May 04 12:15 . drwxr-xr-x 3 utxuc4 utxuc4 256 Aug 04 2020 .. -rw------- 1 utpuc4 utpuc4 8546 Apr 27 13:02 .bash_history drwxr-x--- 2 utpuc4 utpuc4 4096 Apr 27 10:38 .histfiles -rw-r--r-- 1 utpuc4 utpuc4 0 Nov 06 2019 .lesshsQ -rw------- 1 utpuc4 utpuc4 47 Mar 14 2019 .lesshst -rw-r----- 1 utpuc4 utpuc4 1727 Aug 19 2019 .profile drwx------ 2 utpuc4 utpuc4 256 Dec 23 2019 .ssh drwxr-x--- 3 utpuc4 utpuc4 256 Aug 31 2018 CAPKI drwxr-xr-x 6 utpuc4 utpuc4 256 Aug 20 2020 V12.3.3_2020-08-18 drwxr-x--- 6 utpuc4 utpuc4 256 Aug 30 2018 V122_HF1 drwxr-xr-x 2 utpuc4 utpuc4 256 Sep 04 2018 archive lrwxrwxrwx 1 utpuc4 utpuc4 23 Jan 31 2019 global -> /var/uc4/uc4global/prod lrwxrwxrwx 1 utpuc4 utpuc4 28 Sep 13 2018 java8_64 -> /opt/java/java8_64_8.0.0.521 lrwxrwxrwx 1 utpuc4 utpuc4 11 Sep 04 2018 latest -> ./V122_HF1/ drwxr-xr-x 2 utpuc4 utpuc4 256 Jul 13 2018 lost+found drwxr-xr-x 3 utpuc4 utpuc4 256 Sep 03 2018 network -rw------- 1 utpuc4 utpuc4 0 Aug 03 2020 nohup.out drwxr-xr-x 7 utpuc4 utpuc4 256 Feb 17 2020 share ************************************************************************ ** ucxja64m version 12.2.0+build.2108 changelist 1530016094 ** ** JOB 356178213 (ProcID:0023593346) ENDED AT 04.05.2021 / 15:49:53 ** ** UTC TIME 04.05.2021 / 13:49:53 ** ** TEXT= Job ended RETCODE=00 ** ************************************************************************ "}]  
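A hedged props.conf sketch for this shape of data; the sourcetype name is a placeholder, and it assumes each {...} object in the array should become one event with LOGTIME (DD-MM-YYYY) as the event time. The LINE_BREAKER capture group consumes the comma between objects; the leading "[" of the first event and trailing "]" of the last may still need a SEDCMD cleanup:

```
# props.conf on the indexer (or heavy forwarder)
[uc4_json]
SHOULD_LINEMERGE = false
LINE_BREAKER = \}(\s*,\s*)\{
KV_MODE = json
TIME_PREFIX = "LOGTIME"\s*:\s*"
TIME_FORMAT = %d-%m-%Y %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 20
```

With KV_MODE = json, the LOGTIME, CMDBID, RunID, and Msg fields would then be extracted automatically at search time for each event.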
Greetings all! Hope this finds you well.
- Kindly help me understand how, in a distributed environment, the Splunk license is measured and consumed.
- I want to know whether it is measured on the raw data we receive from syslog senders/data sources on the syslog collector/management instance, or on all the data ingested into the Splunk indexers. Kindly help me understand this.
- What components or apps also count toward license consumption?
- What query can I use to check the license usage over the previous 6 months?
Thank you in advance for your help.
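For the last point, license usage is recorded in _internal on the license master; a common starting query for daily indexed volume over the last 6 months looks roughly like this (a sketch, assuming it is run on or against the license master):

```
index=_internal source=*license_usage.log type=Usage earliest=-6mon@mon
| eval GB=b/1024/1024/1024
| timechart span=1d sum(GB) AS daily_GB_indexed
```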
Hey, is there a way to set the indexer hostname via an environment variable? We plan to deploy this env variable with the deployment server in order to set all indexer hostnames. We have tried this:

[default]
host = $HOSTNAME

without any changes (the hostname of the indexers just changed to the literal string "$HOSTNAME"...). Has anyone tried this before? Is there a more efficient way to set indexer hostnames from the deployment server? Thanks, Tankwell
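One documented alternative to an environment variable: inputs.conf supports the special value $decideOnStartup, which resolves to the machine's own hostname when splunkd starts, so a single deployed app can give every indexer its correct host value:

```
# inputs.conf, deployed to all indexers via the deployment server
[default]
host = $decideOnStartup
```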
Hi Community, we are thinking about how to transfer log information from Azure to Splunk. We would like to forward all relevant log information, such as Activity Logs, Diagnostic Logs, and Security Center logs, to Event Hub. Can the Splunk add-on read all log types, or only a few of them? Will the log data be stored as raw JSON or fully parsed?
Hi everyone, I'm trying to get the following search to work, but for some reason I'm doing something wrong:

inputlookup events_lookup | eval key = _key | search key in [| inputlookup notable_events_lookup search name="tobedeleted" | fields - _time | fields event_id] | table key

I'm basically trying to import event_id from one lookup (notable_events_lookup) and match it against another lookup (events_lookup), in order to remove the matching events from events_lookup. I hope what I'm trying to explain makes sense. Thanks everyone.
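A hedged sketch of one way this could work, assuming the goal is to list the rows of events_lookup whose _key matches an event_id of a notable_events_lookup row named "tobedeleted". The subsearch has to return its values under the outer field name, hence the rename:

```
| inputlookup events_lookup
| eval key=_key
| search
    [| inputlookup notable_events_lookup
     | search name="tobedeleted"
     | fields event_id
     | rename event_id AS key]
| table key
```

To actually delete those rows, the usual pattern is to invert the match (keep only non-matching rows) and write the result back with | outputlookup events_lookup.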
Hello members, I am new to Splunk and able to produce simple stats using stats count by, but I'm looking for direction on how to combine indexes and generate stats. We need to generate monthly stats from 3 systems residing in different indexes. The events across systems/indexes can be joined using TRAN_ID. Transactions are passed from System 1 to System 2, and from System 2 to System 3. The purpose is to do a recon every month and identify the number of transactions initiated from System 1 that didn't make it to System 2 or System 3. Listed below are the fields present in the different systems/indexes. The transaction codes and statuses are different across systems, and they need to be shown on the report.

System 1 (index=ind1): Timestamp; Month_&_Year (to be extracted from the timestamp field); TRAN_ID; Tran_Name; Sys1_Transaction_Status_Code
System 2 (index=ind2): Timestamp; TRAN_ID; Sys2_Transaction_Code; Sys2_Transaction_Status
System 3 (index=ind3): Timestamp; TRAN_ID; Sys3_Transaction_Code; Sys3_Transaction_Status

Stats layout: transaction counts of 10 and 8 indicate transactions initiated from System 1 that made it to System 2 but didn't make it to System 3, so the corresponding cells are highlighted just to indicate they contain no values. Transaction counts of 5 and 3 indicate transactions initiated from System 1 that failed and so didn't make it to System 2; again, those cells are highlighted to indicate they contain no values.

In the SQL world, we achieve the desired results by using LEFT JOIN and a GROUP BY clause. I know Splunk is not a relational database, but it is still powerful; in the community I also read that joins are not recommended. Please advise how I can generate similar stats in Splunk.
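The usual join-free pattern for this kind of recon is a single stats grouped by the shared key across all three indexes. A hedged sketch, using the field names from the post (the month bucketing and the systems_seen filter are illustrative choices, not the only way to express "didn't make it to the next system"):

```
index=ind1 OR index=ind2 OR index=ind3
| eval month=strftime(_time, "%Y-%m")
| stats values(Tran_Name)                     AS Tran_Name
        values(Sys1_Transaction_Status_Code)  AS Sys1_Status
        values(Sys2_Transaction_Status)       AS Sys2_Status
        values(Sys3_Transaction_Status)       AS Sys3_Status
        dc(index)                             AS systems_seen
        BY TRAN_ID month
| where systems_seen < 3
```

Rows where systems_seen is 1 never reached System 2; rows where it is 2 reached System 2 but not System 3, which mirrors the LEFT JOIN + GROUP BY result without an actual join.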
I want to extract from the Message field in the Windows event log just the first few words, up to the period. An example would be: Message=A user account was unlocked. Subject: Security ID: xxxxxxxxxxxxxxxx Account Name: xxxxxxxxxx Account Domain: xxxxxxxxx Logon ID: xxxxxxxxxx Target Account: Security ID: xxxxxxxxxxxxxx-xxxxxxxx Account Name: xxxxxxxxxx Account Domain: xxxxxxxxxx
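A minimal rex sketch for this: capture everything from the start of the Message field up to (but not including) the first period. The destination field name `action` is arbitrary:

```
... | rex field=Message "^(?<action>[^\.]+)\."
```

On the sample above, `action` would hold "A user account was unlocked".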
Hi, as described in the following manual, I should add many application permissions to my Azure app registration: https://prd-p-8fw51.splunkcloud.com/de-DE/app/microsoft_cloud_app/data_onboarding_guide?form.field1=azure In my two Azure environments I do not have these options. Any idea why? Thanks for your response, Christian
I am new to Splunk, learning with the Enterprise edition. I created a new host with a JSON sourcetype. When I search something using host="my_new_host", none of my searches are saved in the search history. This only occurs for that host; searches against the other hosts are recorded. Any assistance is greatly appreciated.
I am currently cleaning up the backlog of open Investigations and would like to close all investigations opened before a certain date. Is there a way to bulk modify investigations?
Hi guys, I have two stats: index | Exception | count and index | Error | count. What I want is something like index | Exception | Error | count, where count is shown for the exceptions present for an index, and the same for errors. Something like this:

index      Exception            Error         count
index_1    ArithmeticException                50
index_2                         loginerror    2
index_1                         inputerror    5

The counts shouldn't get mixed up because of the same index. Please help!
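One hedged sketch of combining the two stats while keeping the counts separate per row: run the exception stats as the base search and append the error stats, so each row carries either an Exception or an Error but never both. The `<your exception search>` / `<your error search>` parts are placeholders for the two existing searches:

```
search <your exception search>
| stats count BY index Exception
| append
    [search <your error search>
     | stats count BY index Error]
| table index Exception Error count
```

Because append stacks the result sets instead of joining them, the same index can appear on multiple rows without the counts interfering.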
I want to install the new IT Essentials Work app: https://splunkbase.splunk.com/app/5403/ But when installing from the UI, the .spl upload returns an error. This is because the app is actually a package containing several apps, which have to be installed manually on the file system, like ITSI.
Hi. So I've upgraded the Alert Manager app to version 3.0.7 and enabled the logging of alerts into an index called "alerts". I have an indexer cluster and a search head cluster. The index has been created on the indexer cluster, and the search head is aware of it, as you can see it listed when updating roles. I've given the "user" role access to the index and set it as a default, but I'm getting the error below:

ERROR sendmodalert - action=alert_manager STDERR - splunk.RESTException: [HTTP 400] ["message type=WARN code=None text=supplied index 'alerts' missing;"]

The Alert Manager page http://docs.alertmanager.info/en/latest/installation_manual/ has the text below, but I'm not sure what it means by creating the index on a search head when we have an indexer cluster (what do I need to do to make the REST API aware of the index?): "Be sure to create the index on all Indexers and Search Heads (the Rest API requires the index to know on Search Heads too)." Anyone any ideas? I'm sure it's something simple. Cheers, Andy
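For reference, a hedged sketch of the kind of indexes.conf stanza the docs seem to be asking for on the search heads. The paths are the usual defaults; on a search head the index receives no data, it just needs to be defined so the local REST layer recognises the name:

```
# indexes.conf on the search head(s), e.g. in a local app or via the SHC deployer
[alerts]
homePath   = $SPLUNK_DB/alerts/db
coldPath   = $SPLUNK_DB/alerts/colddb
thawedPath = $SPLUNK_DB/alerts/thaweddb
```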
Hi, I am trying to get a number of Google G Suite / Workspace logs, GCP logs, etc. into Splunk for security monitoring. I have been trying the various apps by Kyle Smith, but they don't get me everything I am required to collect. Currently we use Python scripts to call the Google API and pull in our logs with a UF, but I was hoping there is a better way by now. I was hoping there is a way to redirect/store my G Suite logs in Google Storage and then pull them into Splunk with the (Splunk-supported) GCP app, like using the AWS TA and an S3 bucket. I have not been able to test my theory yet, but if anyone can advise how to collect MDM and Alert Center logs with the GCP app, that would be great. TY!
I am trying to measure our success rate on our platform. There are two individual events we need to see in order to consider a transaction 'successful'. The two event_names we are looking for are "TRANSACTIONA" and "TRANSACTIONB". The tricky part is that I don't only want to know the overall success rate; I want each individual success rate grouped by a field I am creating in my query using rex. The field here is 'app_id'. I already have the overall success rate working in the query below, but getting these same 3 stats grouped for each individual app_id found is where I am having issues. Below is my current working query, which I'm looking to extend into groups for each app_id:

index=jj3 "TRANSACTIONA" OR "TRANSACTIONB"
| rex field=log "\"app_id\": \W(?<app_id>\w+)\W"
| rex field=log "\"event_name\": \W(?<event_name>[a-zA-Z-|_|:]+)\W"
| eval firstTransaction=if(event_name=="TRANSACTIONA", 1, 0)
| eval secondTransaction=if(event_name=="TRANSACTIONB", 1, 0)
| stats sum(firstTransaction) as TotalfirstTransaction sum(secondTransaction) as TotalsecondTransaction
| eval successRate=round(TotalsecondTransaction/TotalfirstTransaction*100, 1)."%"

So essentially, for each app_id found, I want to stats the following into the output: app_id, TotalfirstTransaction, TotalsecondTransaction, successRate.
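A minimal sketch of the per-app_id variant: the same extractions and evals as the working query, with the stats split BY app_id so every group gets its own totals and success rate:

```
index=jj3 "TRANSACTIONA" OR "TRANSACTIONB"
| rex field=log "\"app_id\": \W(?<app_id>\w+)\W"
| rex field=log "\"event_name\": \W(?<event_name>[a-zA-Z-|_|:]+)\W"
| eval firstTransaction=if(event_name=="TRANSACTIONA", 1, 0)
| eval secondTransaction=if(event_name=="TRANSACTIONB", 1, 0)
| stats sum(firstTransaction)  AS TotalfirstTransaction
        sum(secondTransaction) AS TotalsecondTransaction
        BY app_id
| eval successRate=round(TotalsecondTransaction/TotalfirstTransaction*100, 1)."%"
| table app_id TotalfirstTransaction TotalsecondTransaction successRate
```

The eval after the stats runs once per app_id row, so each group gets its own successRate rather than one overall figure.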