All Posts

Thanks for the quick reply. That has helped in that it's extracted the "msg data" section from the headers, but I'm still unsure how to parse each individual value ("meteoTemp" or "meteolunarPercent", for example) into separate objects so they can be represented by separate and different "widgets" on a dashboard. Sticking with those same two examples, I ultimately want to plot temperature on a line chart, but show lunarPercent as a single value. Thanks.
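A minimal sketch of one way to get there, assuming the extracted JSON lives in a field named msg_data (that field name is an assumption; the key names come from the question). spath pulls each key into its own field, after which each panel's search can aggregate whichever field it needs:

| spath input=msg_data ``` extract every JSON key into its own field ```
| timechart avg(meteoTemp) ``` temperature as a line chart over time ```

and, in the single-value panel's search:

| spath input=msg_data
| stats latest(meteolunarPercent) AS lunarPercent ``` most recent value for a single-value viz ```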
I have a Splunk table that has 3 rows and a count for each row. How do I make each value in the table go to a different URL? This is what I have, but every row I click goes to that same link. I want each row to go to a different link.

    "type": "splunk.table",
    "dataSources": {
        "primary": "ds_5ds4f5"
    },
    "title": "Device Inventory",
    "eventHandlers": [
        {
            "type": "drilldown.customUrl",
            "options": {
                "url": "https://device.com",
                "newTab": true
            }
        }
    ],
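One hedged sketch of a fix, assuming Dashboard Studio: put the target address in a field of the table itself (the field name url below is hypothetical, e.g. built with an eval in the search) and reference it with a dynamic token of the form $row.<field>.value$, so the clicked row supplies its own destination:

    "eventHandlers": [
        {
            "type": "drilldown.customUrl",
            "options": {
                "url": "$row.url.value$",
                "newTab": true
            }
        }
    ],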
What is it and how does it work? I've got it installed but there is no documentation that I can find... 
Hi @kare.peng, That link seems to be broken, and it also points to an old version of AppDynamics. Can you confirm what Controller version number you are using? Here is the most recent documentation for this feature: https://docs.appdynamics.com/appd/24.x/24.7/en/end-user-monitoring/mobile-real-user-monitoring
Hi @Anees Ur.Rahman, Did you get a chance to see the reply from @Xiangning.Mao? If the reply helped, please click the 'Accept as Solution' button on their reply. If not, keep the conversation going by replying to this thread. 
Yeah, I was starting to consider that afterwards.  I appreciate the assistance. 
So there you have it: if you run the first instance, you'll overwrite the earlier gathered data. True, the subsequent three runs will append to your lookup, but only after the fourth run will you have the full 24h-long result set. I'd rather consider summary indexing instead of building a lookup.
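For reference, a minimal sketch of the summary-indexing alternative (the index name, source, and stats line are placeholders, not the poster's actual search). Each scheduled 6-hour slice writes its results into a summary index with collect, and the alert reads the whole day back out, so nothing is ever overwritten:

index=your_index earliest=-6h@h latest=@h
| stats count BY host
| collect index=summary source="daily_chunks"

and the alert itself:

index=summary source="daily_chunks" earliest=-24h@h
| stats sum(count) AS count BY host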
While it should technically be possible to do with @gcusello 's way of chaining subsearches, it's a very bad idea. Subsearches have their limitations, so your result can be completely wrong. Unfortunately, if you really need to do a full-text search, it's not possible to use the techniques typically used in similar cases, since they rely on common fields. Be aware, though, that regardless of subsearch use, searching through unparsed data can also be very performance-intensive.
Each query would be offset in its scheduling:
queryA would run at midnight, looking back from the previous midnight to the previous 0600
queryB would run a bit later, looking back from the previous 0600 to the previous 1200
queryC would run a bit later, looking back from the previous 1200 to the previous 1800
queryD would run a bit later, looking back from the previous 1800 to the previous 0000

The purpose is to avoid creating so much resource utilization. I essentially want to piecemeal the 4 outputs into 1 lookup, read that lookup, enrich it, and schedule that as the alert itself. Then I want it to do it all over again, but I do not want the lookup to keep appending after a 24hr cycle. TL;DR I want a solution to break up a 24hr alert into chunks and bring it back together.
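A minimal SPL sketch of that scheme, assuming a lookup named chunks.csv and a placeholder base search. The midnight run overwrites the lookup and the three later runs append, so the file never holds more than one 24-hour cycle:

queryA (runs at midnight, covering the -24h to -18h slice):
index=your_index earliest=-24h@h latest=-18h@h
| stats count BY host
| outputlookup chunks.csv

queryB/C/D (each offset a bit later, each covering the next 6-hour slice, e.g. for queryB):
index=your_index earliest=-18h@h latest=-12h@h
| stats count BY host
| outputlookup append=true chunks.csv

The alert then reads the assembled day back and enriches it (enrichment_lookup is hypothetical):

| inputlookup chunks.csv
| lookup enrichment_lookup host OUTPUT owner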
It does not... I gave it a shot, but thank you for the idea!
Hi Community, We have the "Splunk Add-on for Microsoft Office 365" installed. We've created "Inputs" for "Audit.AzureActiveDirectory", "Audit.Exchange", and "Audit.SharePoint". As a result, we are getting all the Azure AD, Exchange, and SharePoint audit log events loaded into Splunk! Perfect!

Now we want to add the "Teams" audit log events as well. But we don't see an "Audit.Teams" entry in the "Content Type" picklist on the "Add Management Activity" screen. We only see the entries listed above. The only option we see relative to Teams is on the "Create New Input" list, and that only loads aggregate Usage Report data on calling. Unfortunately, that is useless for us.

Has anyone figured out how to load/ingest all the Teams-related Azure audit log events the way the above AzureAD, Exchange, and SharePoint events are loaded? Thanks in advance for any advice!
Make two separate correlation searches. These are two separate conditions.
That's probably one of the quirks of MSI - sometimes it calls for an installation package even when you're uninstalling a program.
Do your strings have spaces? Try using the trim function:

| eval match = tonumber(trim(SequenceNumber_Comment)) - tonumber(trim(SequenceNumber_Withdrawal))
You can either start from the beginning, adding subsequent commands to see when your results stop being what you wanted them to be, or from the end, removing commands one by one until your intermediate results start making sense.
So I have the fields that I want to subtract. One is SequenceNumber_Comment (ex 211) and the other is SequenceNumber_Withdrawal (ex 210). I want to subtract the values and put the result in the variable match. Below is the SPL I have, but I get an empty value.

| eval match = tonumber(SequenceNumber_Comment) - tonumber(SequenceNumber_Withdrawal)

What do I do? Thank you!
No value is displayed in the TotalTrans field when I run the given query.
Ok. This is a Windows event. The normal approach to this kind of event would be to ingest them as XML, using the renderXml=true setting in your input(s), and use TA_windows to parse them.
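A minimal inputs.conf sketch of that setting, assuming the standard Security channel stanza (adjust the channel to whichever event logs you actually collect):

[WinEventLog://Security]
renderXml = true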
Yes. The data is organized in KV pairs. What is different is that it uses two different connectors, "=" and ":". It also does not quote the value. So, I am not sure if automatic extraction is feasible. But at search time, you can simply do

| kv pairdelim=" " kvdelim="=:"

Your sample data will give the following fields:

ComputerName = sacreblue
Domaine_du_compte = AUTORITE NT
EventCode = 4672
EventType = 0
ID de sécurité = AUTORITE NT\Système
ID_d_ouverture_de_session = 0x3e7
Keywords = Succès de l'audit
LogName = Security
Message = Privilèges spéciaux attribués à la nouvelle ouverture de session.
Nom_du_compte = Système
OpCode = Informations
Privilèges = SeAssignPrimaryTokenPrivilege
RecordNumber = 2746
SourceName = Microsoft Windows security auditing.
TaskCategory = Ouverture de session spéciale
Type = Information

Here is an emulation that you can play with and compare with real data:

| makeresults
| eval _raw = "04/29/2014 02:50:23 PM LogName=Security SourceName=Microsoft Windows security auditing. EventCode=4672 EventType=0 Type=Information ComputerName=sacreblue TaskCategory=Ouverture de session spéciale OpCode=Informations RecordNumber=2746 Keywords=Succès de l'audit Message=Privilèges spéciaux attribués à la nouvelle ouverture de session. Sujet : ID de sécurité : AUTORITE NT\Système Nom du compte : Système Domaine du compte : AUTORITE NT ID d'ouverture de session : 0x3e7 Privilèges : SeAssignPrimaryTokenPrivilege SeTcbPrivilege SeSecurityPrivilege SeTakeOwnershipPrivilege SeLoadDriverPrivilege SeBackupPrivilege SeRestorePrivilege SeDebugPrivilege SeAuditPrivilege SeSystemEnvironmentPrivilege SeImpersonatePrivilege"
``` data emulation above ```

I still have a question about your conversion to XML. Do you mean that you use an external tool to convert that raw text into XML before ingesting it into Splunk? If you have this option, why not convert the raw text into JSON, for which Splunk has better support?