All Posts

Hi @kare.peng, That link seems to be broken and also links to an old version of AppDynamics. Can you confirm what Controller version number you are using? Here is the most recent documentation for this feature: https://docs.appdynamics.com/appd/24.x/24.7/en/end-user-monitoring/mobile-real-user-monitoring
Hi @Anees Ur.Rahman, Did you get a chance to see the reply from @Xiangning.Mao? If the reply helped, please click the 'Accept as Solution' button on their reply. If not, keep the conversation going by replying to this thread. 
Yeah, I was starting to consider that afterwards.  I appreciate the assistance. 
So there you have it - if you run the first instance, you'll overwrite the earlier gathered data. True, the subsequent three runs will append to your lookup, but only after the fourth run will you have the full 24h-long result set. I'd rather consider summary indexing instead of building a lookup.
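As a rough sketch of the summary-indexing route (index=summary, the base search, and the field names below are placeholders for your environment): each 6-hour chunk writes its aggregated results into a summary index,

index=myindex my_base_search earliest=-6h@h latest=@h
| stats count as event_count by host
| collect index=summary source="chunked_24h_search"

and the final alert then reads the whole day back in one cheap search:

index=summary source="chunked_24h_search" earliest=-24h@h
| stats sum(event_count) as total_events by host

Unlike a lookup, nothing has to be overwritten to reset the cycle - the 24h time range on the final search takes care of that.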
While it should technically be possible to do with @gcusello 's way of chaining subsearches, it's a very bad idea. Subsearches have their limitations (result-count and runtime caps), so your results can be silently truncated and completely wrong. Unfortunately, if you really need to do a full-text search, it's not possible to use the techniques typically used in similar cases, since they rely on common fields. Be aware, though, that regardless of subsearch use, searching through unparsed data can be very performance-intensive.
Each query would be offset in its scheduling:
queryA would run at midnight, looking back from the previous midnight to the previous 0600
queryB would run a bit later, looking back from the previous 0600 to the previous 1200
queryC would run a bit later, looking back from the previous 1200 to the previous 1800
queryD would run a bit later, looking back from the previous 1800 to the previous 0000
The purpose is to avoid creating so much resource utilization at once. I essentially want to piecemeal the 4 outputs into 1 lookup, read that lookup, enrich it, and schedule that as the alert itself - roughly as sketched below. Then I want it to do it all over again, but I do not want the lookup to keep appending after a 24hr cycle. TL;DR I want a solution to break up a 24hr alert into chunks and bring it back together.
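Something like this is what I have in mind (a rough sketch - chunked_results.csv, the base search, and the enrichment lookup are placeholder names):

queryA, scheduled at 00:00, overwrites the lookup and so resets the 24hr cycle:

index=myindex my_base_search earliest=-24h@h latest=-18h@h
| outputlookup chunked_results.csv

queryB/C/D, scheduled a bit later, each cover the next 6-hour window and append:

index=myindex my_base_search earliest=-18h@h latest=-12h@h
| outputlookup append=true chunked_results.csv

and the alert itself, scheduled after queryD, reads it all back and enriches it:

| inputlookup chunked_results.csv
| lookup my_enrichment_lookup host OUTPUT owner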
It does not... I gave it a shot, but thank you for the idea!
Hi Community, We have the "Splunk Add-on for Microsoft Office 365" installed. We've created "Inputs" for "Audit.AzureActiveDirectory", "Audit.Exchange", and "Audit.SharePoint". As a result, we are getting all the Azure, Exchange, and SharePoint audit log events loaded into Splunk! Perfect! Now we want to add the "Teams" audit log events as well. But we don't see an "Audit.Teams" entry in the "Content Type" picklist on the "Add Management Activity" screen. We only see the entries listed above. The only option we see relative to Teams is on the "Create New Input" list, and that only loads aggregate Usage Report data on calling. Unfortunately, that is useless for us. Has anyone figured out how to load/ingest all the Teams-related Azure audit log events the way the above AzureAD, Exchange, and SharePoint events are loaded? Thanks in advance for any advice!!
Make two separate correlation searches. These are two separate conditions.
That's probably one of the quirks of MSI - sometimes it calls for an installation package even when you're uninstalling a program.
Do your strings have spaces? Try using the trim function:

| eval match = tonumber(trim(SequenceNumber_Comment)) - tonumber(trim(SequenceNumber_Withdrawal))
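You can reproduce the symptom and verify the fix with a quick emulation (the padded values below are made up for illustration):

| makeresults
| eval SequenceNumber_Comment=" 211 ", SequenceNumber_Withdrawal="210"
| eval without_trim = tonumber(SequenceNumber_Comment) - tonumber(SequenceNumber_Withdrawal)
| eval with_trim = tonumber(trim(SequenceNumber_Comment)) - tonumber(trim(SequenceNumber_Withdrawal))

If the padding is indeed the culprit, without_trim should come out empty while with_trim returns 1.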
You can either start from the beginning, adding subsequent commands to see when your results stop being what you wanted them to be, or from the end, removing commands one by one until your intermediate results start making sense.
So I have the fields that I want to subtract. One is SequenceNumber_Comment (e.g. 211) and the other is SequenceNumber_Withdrawal (e.g. 210). I want to subtract the values and put them in the variable match. Below is the SPL I have, but I get an empty value.

| eval match = tonumber(SequenceNumber_Comment) - tonumber(SequenceNumber_Withdrawal)

What do I do??? Thank you!!!
No value is getting displayed in the TotalTrans field when I run the given query.
Ok. This is a Windows event. The normal approach to this kind of event would be to ingest them as XML, using the renderXml=true setting in your input(s), and use TA_windows to parse them.
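For reference, that setting goes in inputs.conf on the forwarder - something like this (a sketch; the Security channel stanza is just an example):

[WinEventLog://Security]
renderXml = true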
Yes. The data is organized in KV pairs. What is different is that it uses two different connectors, "=" and ":". It also does not quote the values. So, I am not sure if automatic extraction is feasible. But at search time, you can simply do

| kv pairdelim=" " kvdelim="=:"

Your sample data will give the following fields:

ComputerName = sacreblue
Domaine_du_compte = AUTORITE NT
EventCode = 4672
EventType = 0
ID de sécurité = AUTORITE NT\Système
ID_d_ouverture_de_session = 0x3e7
Keywords = Succès de l'audit
LogName = Security
Message = Privilèges spéciaux attribués à la nouvelle ouverture de session.
Nom_du_compte = Système
OpCode = Informations
Privilèges = SeAssignPrimaryTokenPrivilege
RecordNumber = 2746
SourceName = Microsoft Windows security auditing.
TaskCategory = Ouverture de session spéciale
Type = Information

Here is an emulation that you can play with and compare with real data:

| makeresults
| eval _raw = "04/29/2014 02:50:23 PM LogName=Security SourceName=Microsoft Windows security auditing. EventCode=4672 EventType=0 Type=Information ComputerName=sacreblue TaskCategory=Ouverture de session spéciale OpCode=Informations RecordNumber=2746 Keywords=Succès de l'audit Message=Privilèges spéciaux attribués à la nouvelle ouverture de session. Sujet : ID de sécurité : AUTORITE NT\Système Nom du compte : Système Domaine du compte : AUTORITE NT ID d'ouverture de session : 0x3e7 Privilèges : SeAssignPrimaryTokenPrivilege SeTcbPrivilege SeSecurityPrivilege SeTakeOwnershipPrivilege SeLoadDriverPrivilege SeBackupPrivilege SeRestorePrivilege SeDebugPrivilege SeAuditPrivilege SeSystemEnvironmentPrivilege SeImpersonatePrivilege"
``` data emulation above ```

I still have a question about your conversion to XML. Do you mean that you use an external tool to convert the raw text into XML before ingesting it into Splunk? If you have this option, why not convert the raw text into JSON, for which Splunk has better support?
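If you do want to try automatic search-time extraction rather than calling kv in every search, a delimiter-based transform might work - an untested sketch, assuming a sourcetype name of my:security:text (hypothetical) and that DELIMS copes acceptably with the mixed separators:

# transforms.conf
[kv_mixed_delims]
DELIMS = " ", "=:"

# props.conf
[my:security:text]
REPORT-kv_mixed = kv_mixed_delims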
This is the new link to the documentation. https://docs.splunk.com/Documentation/Splunk/latest/DistSearch/Configuredistributedsearch#Use_the_CLI
I want to display total transactions without the where condition in the result, alongside other fields which have a specific where condition, e.g.:

| eval totalResponseTime=round(requestTimeinSec*1000)
| convert num("requestTimeinSec")
| rangemap field="totalResponseTime" "totalResponseTime"=0-3000
| rename range as RangetotalResponseTime
| eval totalResponseTimeabv3sec=round(requestTimeinSec*1000)
| rangemap field="totalResponseTimeabv3sec" "totalResponseTimeabv3sec"=3001-60000
| rename range as RangetotalResponseTimeabv3sec
| eval Product=case((like(proxyUri,"URI1") AND like(methodName,"POST")) OR (like(proxyUri,"URI2") AND like(methodName,"GET")) OR (like(proxyUri,"URI3") AND like(methodName,"GET")),"ABC")
| bin span=5m _time
| stats count(totalResponseTime) as TotalTrans count(eval(RangetotalResponseTime="totalResponseTime")) as TS<3S count(eval(RangetotalResponseTimeabv3sec="totalResponseTimeabv3sec")) as TS>3SS by Product URI methodName _time
| eval TS<XS=case(Product="ABC",'TS<3S')
| eval TS>3S = 'TotalTrans'-'TS<XS'
| eval SLI=case(Product="ABC",round('TS<3S'/TotalTrans*100,4))
| rename methodName AS Method
| where (Product="ABC") and (SLI<99)
| stats sum(TS>3S) As AvgImpact count(URI) as DataOutage by Product URI Method
| fields Product URI Method TotalTrans SLI AvgImpact DataOutage
| sort Product URI Method
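A likely reason TotalTrans ends up empty: the final stats keeps only its own aggregates plus the by fields, so TotalTrans and SLI no longer exist by the time the fields command asks for them. One way to carry them through (a sketch, assuming max() is acceptable because the values are constant within each group) is to re-aggregate them in that same stats call:

| stats sum(TS>3S) as AvgImpact count(URI) as DataOutage max(TotalTrans) as TotalTrans max(SLI) as SLI by Product URI Method
| fields Product URI Method TotalTrans SLI AvgImpact DataOutage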
I just had this same problem with 6.4.0 and found a workaround. The Tenable docs for the Add-on describe a setting "Verify SSL Certificate" that, for whatever reason, is not visible from the UI. https://docs.tenable.com/integrations/Splunk/Content/Splunk2/ConfigureTenablescCertificatesS2.htm

Modify /opt/splunk/etc/apps/TA-tenable/bin/tenable_consts.py, changing True to False where applicable to your deployment:

verify_ssl_for_ot = False
verify_ssl_for_sc_cert = False
verify_ssl_for_sc_api_key = False
verify_ssl_for_sc_creds = False

Save and close the file. No restart required.