All Posts



Hi @bmanikya please confirm if my understanding is correct: You want to match the "user" field from the first screenshot with the "user" field from the bd_users_hierarchy.csv lookup, and the "app_id" field from the third screenshot?
If I'm reading this right, you have data with events for pods and their phases. In your example query, you appear to be using decimal values to create your ranges, but can we assume that the actual pod states fall on specific integers? Something like this might work:

| makeresults count=25
| eval phase=(random()%5)+1
``` Everything above here is just to create sample data ```
``` The following statement groups and counts phases ```
| stats count by phase
``` The following statement maps phases to a string equivalent ```
| eval label=case(phase=1,"A (Pending)", phase=2,"B (Running)", phase=3,"C (Succeeded)", phase=4,"D (Failed)", phase=5,"E (Stopping?)", 1=1,"Unknown")

If the phase values are not discrete, and the range you mention is necessary, then you can use a case statement like this:

| makeresults count=25
| eval phase=((random()%50)/10)+1
| eval phase_group=case(phase<1.5,1, phase<2.5,2, phase<3.5,3, phase<4.5,4, phase<5.5,5)
| stats count by phase_group
| eval label=case(phase_group=1,"A (Pending)", phase_group=2,"B (Running)", phase_group=3,"C (Succeeded)", phase_group=4,"D (Failed)", phase_group=5,"E (Stopping?)", 1=1,"Unknown")
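As a side note, the range bucketing in the second query is essentially nearest-integer grouping. A minimal Python sketch of the same logic (function and label names are illustrative, not part of any Splunk API):

```python
def phase_group(phase: float):
    """Bucket a continuous phase value the way the SPL case()
    statement does: <1.5 -> 1, <2.5 -> 2, ... <5.5 -> 5."""
    for upper, group in [(1.5, 1), (2.5, 2), (3.5, 3), (4.5, 4), (5.5, 5)]:
        if phase < upper:
            return group
    return None  # values >= 5.5 fall through, as in the SPL case()

# String equivalents, matching the eval label=case(...) mapping
LABELS = {1: "A (Pending)", 2: "B (Running)", 3: "C (Succeeded)",
          4: "D (Failed)", 5: "E (Stopping?)"}

def label(phase: float) -> str:
    return LABELS.get(phase_group(phase), "Unknown")
```

Note that values at or above 5.5 get no group in the SPL case() either, so the "Unknown" default catches them.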
Could you please share a screenshot?
Assume for the moment that these work individually:

Outputs1

[tcpout]
defaultGroup = primary_indexers
forceTimebasedAutoLB = true
forwardedindex.2.whitelist = (_audit|_introspection|_internal)
useSSL = true

[indexer_discovery:company]
pass4SymmKey = passhere
manager_uri = https://clustermanager:8089

[tcpout:primary_indexers]
indexerDiscovery = company
sslCertPath = $SPLUNK_HOME/etc/apps/allforwarders_outputs/local/cert.pem
sslRootCAPath = $SPLUNK_HOME/etc/apps/allforwarders_outputs/local/cacert.pem

Outputs2

[tcpout]
defaultGroup = heavy_forwarders
forceTimebasedAutoLB = true
forwardedindex.2.whitelist = (_audit|_introspection|_internal)
useSSL = true

[tcpout:primary_heavy_forwarders]
server = y.y.y.y:9997
sslCertPath = $SPLUNK_HOME/etc/apps/uf_outputs/local/othercert.pem
sslRootCAPath = $SPLUNK_HOME/etc/apps/uf_outputs/local/othercacert.pem

If I understand the documentation correctly, all we would need to do is this:

[tcpout]
defaultGroup = primary_indexers, primary_heavy_forwarders
forceTimebasedAutoLB = true
forwardedindex.2.whitelist = (_audit|_introspection|_internal)
useSSL = true

[indexer_discovery:company]
pass4SymmKey = passhere
manager_uri = https://clustermanager:8089

[tcpout:primary_indexers]
indexerDiscovery = company
sslCertPath = $SPLUNK_HOME/etc/apps/allforwarders_outputs/local/cert.pem
sslRootCAPath = $SPLUNK_HOME/etc/apps/allforwarders_outputs/local/cacert.pem

[tcpout:primary_heavy_forwarders]
server = y.y.y.y:9997
sslCertPath = $SPLUNK_HOME/etc/apps/uf_outputs/local/othercert.pem
sslRootCAPath = $SPLUNK_HOME/etc/apps/uf_outputs/local/othercacert.pem

Is this correct? In this configuration, the exact same data would be flowing to both destinations? There would be no issues binding the certificates to different stanzas? I appreciate the responses.
@leykmekoo A tip for the future  | inputlookup your_lookup | eval your_wildcard_field=your_wildcard_field."*" | outputlookup your_lookup  
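For anyone who would rather do the same one-off migration outside Splunk, the inputlookup/eval/outputlookup pattern above amounts to appending a "*" to one column of the lookup CSV. A minimal Python sketch, assuming a simple CSV with a header row (field names are illustrative):

```python
import csv
import io

def add_wildcard(csv_text: str, field: str) -> str:
    """Append '*' to one column of a lookup CSV, mirroring
    | inputlookup ... | eval field=field."*" | outputlookup ..."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    for row in rows:
        row[field] = row[field] + "*"
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=rows[0].keys())
    writer.writeheader()
    writer.writerows(rows)
    return out.getvalue()
```

The in-Splunk version has the advantage of not needing file access to the search head, but scripting it can be easier to review before overwriting a large lookup.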
Extract and test for the day of the week similar to how date_hour was done.

index=winsec source=WinEventLog:Security EventCode=6272
| eval date_hour = strftime(_time, "%H"), date_wday = strftime(_time, "%A")
| where date_hour >= 19 OR date_hour <= 6 OR date_wday = "Saturday" OR date_wday = "Sunday"
| timechart count(src_user)
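The combined predicate can be sanity-checked outside Splunk. A small Python sketch of the same "after hours or weekend" test (the function name is illustrative):

```python
from datetime import datetime

def is_after_hours(ts: datetime) -> bool:
    """Mirror the SPL where clause: hour >= 19, hour <= 6,
    or the event falls on a Saturday or Sunday."""
    hour = ts.hour                # SPL: strftime(_time, "%H")
    wday = ts.strftime("%A")      # SPL: strftime(_time, "%A")
    return hour >= 19 or hour <= 6 or wday in ("Saturday", "Sunday")
```

For example, a Friday 8 PM event and a Saturday noon event both match, while a Friday noon event does not.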
Hello, my current search is

index=winsec source=WinEventLog:Security EventCode=6272
| eval date_hour = strftime(_time, "%H")
| where date_hour >= 19 OR date_hour <= 6
| timechart count(src_user)

This provides me with a graph of logins made after hours. I want to expand the acceptable items to include the entire days of Saturday/Sunday as well. When I attempt to add this, I get "no results". What would be the best way to include that?
We want to discuss this with technical support. As of now, it seems we are acting as QA when what we need is a fix.
Hi, I know this post is quite old, but anyway, here is my stanza which is working fine:

[WinEventLog://Microsoft-Windows-BitLocker/BitLocker Management]
index = windows
disabled = 0
renderXml = 1
evt_resolve_ad_obj = 1
start_from = oldest
current_only = 0
checkpointInterval = 5
sourcetype = XmlWinEventLog

Did you check splunkd.log on your UF at startup? Maybe the user running the Splunk Forwarder service is not able to access the logs. Or there are just no logs available? On my site this log is not written very frequently. The Splunk_TA_windows add-on is also required on the UF.
Thank you for the reply and example! Greatly appreciated. Ken
Hi @Vishnu Teja.Katta, Thanks for asking your question on the Community. Since it's been a few days with no reply, did you happen to find any new information you could share? If you are still looking for help, you can contact Cisco AppDynamics Support. AppDynamics is migrating its Support case handling system to Cisco Support Case Manager (SCM). Read on to learn how to manage your cases.
Hi @Roberto.Barnes, Thanks for asking your question on the community. It seems no one was able to offer any info. I think it would be helpful to reach out to your AppD Rep for more information on this, or reach out via AppD's Call a Consultant option: https://community.appdynamics.com/t5/Knowledge-Base/A-guide-to-AppDynamics-help-resources/ta-p/42353#call-a-consultant
Hello @Kamal.Manchanda, Since it's been a few days and the community did not jump in, did you happen to find a solution yourself you can share? If you still need help, you can learn more about contacting Cisco AppDynamics Support here: AppDynamics is migrating our Support case handling system to Cisco Support Case Manager (SCM). Read on to learn how to manage your cases. 
The message appears because httpout is not configured.  The outputs.conf file shown defines tcpout, not httpout.  Since the [httpout] stanza is optional, these INFO messages can be ignored.
Hello @Surendra.Maddullapalli, It's been a few days with no reply from the community. Have you discovered a solution or have any further information you can share? If you are still looking for help, you can contact Cisco AppDynamics Support. AppDynamics is migrating our Support case handling system to Cisco Support Case Manager (SCM). Read on to learn how to manage your cases. 
Hello @Lidiane.Wiesner, @Terence.Chen had some follow-up questions to help find a solution to your question. If this is still an issue, reply with that info to keep the conversation going.
I am using Splunk Enterprise Version 9.2.1 and installed IT Essentials Learn, but I am getting an error fetching use case families. Is ITSI a prerequisite for ITE? I installed the app using the GUI.
Joining the two searches would require some common field to join on. Since none exists in your example, you'll need to either add an identifier to all related logs at the source, or get creative with a time-based solution that could get finicky.

For example, in your sample data, you have 3 events. The transaction ID in event 1 occurs 2 seconds before the error log. If there can be more than one concurrent transaction, then there doesn't appear to be a way to be certain that the correct transaction ID will be found that corresponds to the error. e.g.

240614 04:35:50 Algorithm: Al10: <=== Recv'd TRN: AAA (TQ_HOST -> TQ_HOST)
240614 04:35:51 Algorithm: Al10: <=== Recv'd TRN: BBB (TQ_HOST -> TQ_HOST)
240614 04:35:52 Algorithm: TSXXX hs_handle_base_rqst_msg: Error Executing CompareRBSrules Procedure.
240614 04:35:52 Algorithm: TSXXX hs_handle_base_rqst_msg: Details of ABC error ReSubResult:-1,FinalStatus:H,ErrorCode:-1,chLogMsg:SQL CODE IS -1 AND SQLERRM IS ORA-00001: unique constraint (INSTANCE.IDX_TS_UAT_ABC_ROW_ID) violated,LogDiscription:

In this case, does the error belong to transaction AAA or BBB?

The second issue is how much time can elapse between the "Recv'd TRN" log and any possible error. Without a field linking these logs, you'll have to use some fixed time range to try to bring logs together. Too short, and you'll fail to find the transaction ID; too long, and you might find multiple IDs (leading to the issue mentioned above).

If you can assume that logs are synchronous, and there is no interleaving of transactions, then something like this should work:

index=test_index source=/test/instance
| sort _time
| rex field=_raw "<=== Recv'd TRN:\s+(?<transaction_id>\w+)"
| eval failure=if(like(_raw, "%ORA-00001%"), 1, 0)
| filldown transaction_id
| where failure=1
| table transaction_id, failure, _raw
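The filldown approach above can be sketched procedurally: carry the most recently seen transaction ID forward and attach it to any error line. A minimal Python illustration of that logic (function name and log lines are illustrative):

```python
import re

# Pattern matching the "Recv'd TRN: <id>" marker in the sample logs
RECV_RE = re.compile(r"<=== Recv'd TRN:\s+(\w+)")

def attribute_errors(lines):
    """Carry the last-seen transaction ID forward (like SPL filldown)
    and pair it with each ORA-00001 error line."""
    current_txn = None
    failures = []
    for line in lines:
        m = RECV_RE.search(line)
        if m:
            current_txn = m.group(1)   # filldown: remember latest TRN
        if "ORA-00001" in line:
            failures.append((current_txn, line))
    return failures
```

Run against the sample above, the error would be attributed to BBB, the most recent transaction, which illustrates the ambiguity: with interleaved transactions, "most recent" is only a guess.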
Adding a wildcard to a 1000+ row lookup table was a pain, but that seems to resolve the issue I was having. It's a good lesson as well. Thank you and everyone for your recommendations!
In fact, https://dev.splunk.com/enterprise/docs/developapps/manageknowledge/custominputs/ states: "In a distributed deployment, the location where a user installs a custom data input depends on their Splunk Cloud Platform Experience (Classic or Victoria). In Classic Experience, custom data inputs run on the Inputs Data Manager (IDM). If you deploy an app with a custom data input to the search head or indexer, the input does not run on these components. In Victoria Experience, custom data inputs run on the search head and don't require the IDM."