All Posts

@leykmekoo A tip for the future:

| inputlookup your_lookup | eval your_wildcard_field=your_wildcard_field."*" | outputlookup your_lookup
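One caveat, offered as an assumption about how the lookup is defined: appending "*" to the values only helps if the lookup definition actually enables wildcard matching, e.g. in transforms.conf (stanza and file names below are placeholders):

[your_lookup]
filename = your_lookup.csv
match_type = WILDCARD(your_wildcard_field)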
Extract and test for the day of the week similar to how date_hour was done:

index=winsec source=WinEventLog:Security EventCode=6272
| eval date_hour = strftime(_time, "%H"), date_wday = strftime(_time, "%A")
| where date_hour >= 19 OR date_hour <= 06 OR date_wday = "Saturday" OR date_wday = "Sunday"
| timechart count(src_user)
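If that still returns no results, one thing worth checking is that strftime produces a string; a hedged variant that casts the hour to a number first (tonumber is a standard eval function, and the rest simply mirrors the search above):

index=winsec source=WinEventLog:Security EventCode=6272
| eval hour = tonumber(strftime(_time, "%H")), date_wday = strftime(_time, "%A")
| where hour >= 19 OR hour <= 6 OR date_wday = "Saturday" OR date_wday = "Sunday"
| timechart count(src_user)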
Hello, my current search is:

index=winsec source=WinEventLog:Security EventCode=6272
| eval date_hour = strftime(_time, "%H")
| where date_hour >= 19 OR date_hour <= 06
| timechart count(src_user)

This provides me with a graph of logins made after hours. I want to expand the acceptable items to include the entire days of Saturday/Sunday as well. When I attempt to add this, I get "no results". What would be the best way to include that?
We want to discuss this with technical support. At this point it seems we are acting as QA when what we need is a fix.
Hi, I know this post is quite old, but anyway, here is my stanza, which is working fine:

[WinEventLog://Microsoft-Windows-BitLocker/BitLocker Management]
index = windows
disabled = 0
renderXml = 1
evt_resolve_ad_obj = 1
start_from = oldest
current_only = 0
checkpointInterval = 5
sourcetype = XmlWinEventLog

Did you check the splunkd.log on your UF on start? Maybe the user running the Splunk Forwarder service is not able to access the logs. Or there are just no logs available? On my site that channel is not written to very frequently. The Splunk_TA_windows is also required on the UF.
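A quick way to check for collection errors, sketched under the assumption that the UF's internal logs reach the _internal index (the host value is a placeholder):

index=_internal sourcetype=splunkd host=<your_uf_host> (log_level=ERROR OR log_level=WARN) "BitLocker"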
Thank you for the reply and example! Greatly appreciated. Ken
Hi @Vishnu Teja.Katta, Thanks for asking your question on the Community. Since it's been a few days with no reply, did you happen to find any new information you could share? If you are still looking for help, you can contact Cisco AppDynamics Support. AppDynamics is migrating our Support case handling system to Cisco Support Case Manager (SCM). Read on to learn how to manage your cases.
Hi @Roberto.Barnes, Thanks for asking your question on the community. It seems no one was able to offer any info. I think it would be helpful to reach out to your AppD rep for more information on this, or reach out via AppD's Call a Consultant: https://community.appdynamics.com/t5/Knowledge-Base/A-guide-to-AppDynamics-help-resources/ta-p/42353#call-a-consultant
Hello @Kamal.Manchanda, Since it's been a few days and the community did not jump in, did you happen to find a solution yourself that you can share? If you still need help, you can learn more about contacting Cisco AppDynamics Support here: AppDynamics is migrating our Support case handling system to Cisco Support Case Manager (SCM). Read on to learn how to manage your cases.
The message appears because httpout is not configured.  The outputs.conf file shown defines tcpout, not httpout.  Since the [httpout] stanza is optional, these INFO messages can be ignored.
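For reference, a minimal outputs.conf sketch under the assumption of a standard tcpout setup (host names, port, and token are placeholders; the [httpout] settings shown follow the outputs.conf spec in recent Splunk versions, so verify against your version):

[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997

[httpout]
httpEventCollectorToken = <your_hec_token>
uri = https://idx1.example.com:8088

Leaving [httpout] out entirely is also fine; as noted above, that simply produces the INFO messages.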
Hello @Surendra.Maddullapalli, It's been a few days with no reply from the community. Have you discovered a solution, or do you have any further information you can share? If you are still looking for help, you can contact Cisco AppDynamics Support. AppDynamics is migrating our Support case handling system to Cisco Support Case Manager (SCM). Read on to learn how to manage your cases.
Hello @Lidiane.Wiesner, @Terence.Chen had some follow-up questions for you to help find a solution to your question. If this is still an issue, reply with that info to keep the conversation going.
I am using Splunk Enterprise version 9.2.1 and installed IT Essentials Learn, but I am getting an "error fetching use case families" message. Is ITSI a prerequisite for ITE? I installed the app using the GUI.
Joining the two searches would require some common field to join on. Since none exists in your example, you'll need to either add an identifier to all related logs at the source, or get creative with a time-based solution that could get finicky.

For example, in your sample data you have 3 events. The transaction ID in event 1 occurs 2 seconds before the error log. If there can be more than one concurrent transaction, then there doesn't appear to be a way to be certain that the correct transaction ID will be found that corresponds to the error. e.g.:

240614 04:35:50 Algorithm: Al10: <=== Recv'd TRN: AAA (TQ_HOST -> TQ_HOST)
240614 04:35:51 Algorithm: Al10: <=== Recv'd TRN: BBB (TQ_HOST -> TQ_HOST)
240614 04:35:52 Algorithm: TSXXX hs_handle_base_rqst_msg: Error Executing CompareRBSrules Procedure.
240614 04:35:52 Algorithm: TSXXX hs_handle_base_rqst_msg: Details of ABC error ReSubResult:-1,FinalStatus:H,ErrorCode:-1,chLogMsg:SQL CODE IS -1 AND SQLERRM IS ORA-00001: unique constraint (INSTANCE.IDX_TS_UAT_ABC_ROW_ID) violated,LogDiscription:

In this case, does the error belong to transaction AAA or BBB?

The second issue is how much time can elapse between the "Recv'd TRN" log and any possible error. Without a field linking these logs, you'll have to use some fixed time range to try to bring logs together. Too short, and you'll fail to find the transaction ID; too long, and you might find multiple IDs (leading to the issue mentioned above).

If you can assume that logs are synchronous, and there is no interleaving of transactions, then something like this should work:

index=test_index source=/test/instance
| sort _time
| rex field=_raw "<=== Recv'd TRN:\s+(?<transaction_id>\w+)"
| eval failure=if(like(_raw, "%ORA-00001%"), 1, 0)
| filldown transaction_id
| where failure=1
| table transaction_id, failure, _raw
Adding a wildcard to a 1000+ row lookup table was a pain, but that seems to resolve the issue I was having. It's a good lesson as well. Thank you and everyone for your recommendations!
In fact, https://dev.splunk.com/enterprise/docs/developapps/manageknowledge/custominputs/ states that:

"In a distributed deployment, the location where a user installs a custom data input depends on their Splunk Cloud Platform Experience (Classic or Victoria). In Classic Experience, custom data inputs run on the Inputs Data Manager (IDM). If you deploy an app with a custom data input to the search head or indexer, the input does not run on these components. In Victoria Experience, custom data inputs run on the search head and don't require the IDM."
Thank you! This pointed me in the right direction! It turned out that the issue was that the token was somehow picking up the nat_source_address field as well.    
Thank you very much for your comment and for sharing the source code! It has helped me out. I am not very well versed in XML, HTML, web design, etc., but this is bringing back some memories and I'm starting to get more accustomed to it again. Ken
Thanks @gcusello for your response. From the doc, I read that for the Splunk Classic Experience it is recommended to install the TA on the IDM, whereas in the case of Splunk Victoria it is recommended to install the TA on the search head. I like the second approach: I might as well strip the KV store logic out of the TA and place it in the app, so that whether it is on-prem or cloud there shouldn't be an issue in updating KV store data, since the app is installed on the search head and that would take care of updating the KV store. Does this sound reasonable?
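For the search-head approach, updating a KV-store-backed lookup is plain SPL; a minimal sketch, assuming a lookup definition named my_kvstore_lookup already points at the collection (both names are placeholders):

| inputlookup my_kvstore_lookup
| eval last_updated = now()
| outputlookup my_kvstore_lookup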
Your data that's already ingested needs to be made CIM compliant. It might be worth spending some time getting your head around the CIM concepts; after that you can look at developing correlation rules.

https://lantern.splunk.com/Splunk_Platform/Product_Tips/Data_Management/Complying_with_the_Splunk_Common_Information_model
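To make that concrete, a rough props.conf sketch of what CIM mapping can look like, assuming a hypothetical sourcetype my:custom:log with raw fields login_name, client_ip, and result (none of these names come from the thread):

[my:custom:log]
FIELDALIAS-cim_user = login_name AS user
FIELDALIAS-cim_src = client_ip AS src
EVAL-action = if(result=="OK", "success", "failure")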