All Posts

Hello team, I am working with Dovecot logs (mail logs). I managed to integrate them with Splunk through syslog, and it gives me the logs in this format (attached screenshot). Now, I want to create a new field that holds the to/receiver value. In the screenshot, the to/receiver value is inside lda(value). NOTE: in the screenshot below I don't have to/receiver values, I just have from/sender and subject. Help me please!
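Assuming the recipient really does appear in the raw event as lda(value), e.g. lda(user@example.com) (the screenshot isn't visible here, so the exact pattern may need adjusting), a search-time extraction could be sketched as:

| rex "lda\((?<receiver>[^)]+)\)"
| table _time receiver

Here receiver is a hypothetical field name; once the pattern is confirmed against the raw events, the same regex could be made permanent as an EXTRACT- entry in props.conf.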
No, I am not using that attribute in props.conf.
At first glance it looks relatively ok. Are you using indexed extractions?
@PickleRick  Props.conf settings:

KV_MODE = xml
NO_BINARY_CHECK = true
CHARSET = UTF-8
LINE_BREAKER = <\/eqtext:EquipmentEvent>()
SHOULD_LINEMERGE = false
MAX_TIMESTAMP_LOOKAHEAD = 650
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3QZ
TIME_PREFIX = ((?<!ReceiverFmInstanceName>))<eqtext:EventTime>

User time preference setting
And your current settings are...?  
  Hello Splunkers!! Please help me to fix this time zone issue. Thanks in advance!!
Hi @gcusello, I tried to clone it, but there is no option to clone a Dashboard Studio dashboard to a classic dashboard; Dashboard Studio dashboards can only be cloned to Dashboard Studio.
Hello Anees Ur.Rahman, Thanks for posting your question to the community. You could create either a service unit file or an init script, depending on your Linux OS. Currently, we don't provide a direct way to run the EUM as a service on Linux by default. If you need help creating either a service unit file or an init script, please reach out to your AppDynamics account manager and engage someone from the professional services team, who are responsible for implementing customers' environments, to help you with your requirements. Best regards, Xiangning
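As a rough sketch of the service-unit approach on a systemd-based distribution (the paths, the service user, and the eum.sh start/stop script are placeholders; adjust all of them to your actual EUM install):

# /etc/systemd/system/appd-eum.service  (hypothetical unit file)
[Unit]
Description=AppDynamics EUM Server
After=network.target

[Service]
Type=forking
User=appdynamics
ExecStart=/opt/appdynamics/EUM/eum.sh start
ExecStop=/opt/appdynamics/EUM/eum.sh stop
Restart=on-failure

[Install]
WantedBy=multi-user.target

Once saved, it could be enabled with systemctl daemon-reload followed by systemctl enable --now appd-eum.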
The upper one (|eval ~) works! But when I refresh the page, start_time and bubble_size come out wrong. For example, this is the original data, but when I refresh the page, it shows like this. The code is:

| eval start_time = starttime_data/1000
| eval duration = floor(duration_data/1000)
| eval start_time_bucket = 5 * floor(start_time/5)
| stats count by start_time_bucket, duration
| eval bubble_size = count
| table start_time_bucket, duration, bubble_size
| rename start_time_bucket as "Start time" duration as "Duration"

Is this just a server problem, or a problem with my code?
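If the drift comes from the manual bucketing step, one alternative sketch (assuming the same field names as in the search above) is to let bin do the 5-unit bucketing in a single step:

| eval start_time = starttime_data/1000
| eval duration = floor(duration_data/1000)
| bin span=5 start_time as start_time_bucket
| stats count as bubble_size by start_time_bucket, duration

bin with span=5 groups numeric start_time values into fixed 5-unit buckets, which is deterministic for a given input; if the chart still changes on refresh, it would point to the underlying search results differing between runs rather than the bucketing logic.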
Hi @Anonymous, I recreated the issue you described and managed to have both the Sender and Receiver Application processes shown on my AppDynamics dashboard. I didn't add the Message Queue entry points mentioned above, but I did these steps below to resolve the issue of the Receiver info not being shown.

Setting MSMQ Backends Monitoring for .NET: to enable downstream correlation for MSMQ, you must configure the agent. Documentation: MSMQ Backends for .NET.

What to do:
1. Go to "Tiers and Nodes" in the Controller UI: choose your Receiver Application node. At the top right, select "Action" > "Configure App Server Agent".
2. Configure the App Server Agent: in the "App Server Agent Configuration" modal window, click on "Apps" in the tree on the right. Select ".NET Agent Configuration" on the right panel, then click "+".
3. Define the MSMQ correlation field:
   Name: msmq-correlation-field
   Description: Your description here
   Type: String
   Value: Label

By following these steps, I was able to see the MQ details and transaction snapshots for both the sender and receiver applications in the AppDynamics controller. Hope this helps. Regards, Martina
Hi @richgalloway, I also tried it with the eval command, but it did not work. When I tried your query, it worked. Thank you.
Hello @Gustavo.Marconi, Sure, I'm checking on running the curl command to check the connectivity issue. I remember asking you to update the controller URL to https instead of http. Have you made this change, and are you still facing the same issue? Also, can you please confirm whether you have any proxy in between? Best Regards, Rajesh Ganapavarapu
Use the eval command to replace the Error_Code value with the desired text.

index=testindex source=application.logs
| rex "ErrorCode:\[(?<Error_Code>\d+)\]"
| search Error_Code IN (200, 500, 400, 505)
| stats count by Error_Code
| eval Error_Code = "Application received with errorcode " . Error_Code
| where count > 5
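For completeness, a minimal end-to-end sketch of just the replacement step on synthetic data (using makeresults, so it can be tried without access to the real index):

| makeresults count=1
| eval Error_Code="500", count=100
| eval Error_Code = "Application received with errorcode " . Error_Code
| table Error_Code count

The same eval line, dropped into the real search after the stats, rewrites every row of the stats output in place.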
Could you post the sanitized search from that panel? It likely has a broken reference to a lookup and/or an eventtype.
Ok, as for why your joins wouldn't work - you tried to join on a field host but your subsearch returned fields Host_Name and IP - there was no way to match that result set because it didn't have the host field.
Sorry, I am too used to Linux. I believe the equivalent btool command on Windows is:

%SPLUNK_HOME%\bin\splunk.exe btool web list settings | findstr max_upload
Let's start with 3 - we don't know your data and, frankly, since there are better ways to do it, people aren't very eager to troubleshoot join searches. OK, with that out of the way...

1. There is a huge difference between a lookup and doing a join on an inputlookup-generated search. lookup is a distributable streaming command, which means that in a bigger environment, as long as it appears early enough in your search pipeline, it can be performed by each indexer separately, parallelizing the work across search peers. If you do a join, Splunk has to first run the subsearch completely, return its results to the search head running your main search, and then perform the join on the whole result set your main search has returned up to that point. It might not make a big difference in this particular case, but in general there is a huge conceptual difference in how these two commands work, and this can have a very big performance impact on your search.

2. If you want to do a lookup on either of two fields, you simply do two lookups:

| lookup whatever.csv field1 output something as something1
| lookup whatever.csv field2 output something as something2

This way you get two fields you can do whatever you want with. You can coalesce them, or append one to the other in case both return results. You can also use this technique to filter events so that you only get those with matching entries in the lookup:

<your base search>
| lookup yourlookup.csv IP output IP
| lookup yourlookup.csv Host_Name output Host_Name
| search IP=* OR Host_Name=*

This search also uses a neat trick of overwriting a field with its own contents as matched from the lookup. If a match is found, the field's value is retained; otherwise the field is cleared. This lets you filter easily in the last step so you only retain events that have a value in either of those fields. If you don't want this field clearing to happen, return the fields under other names.
For example:

| lookup yourlookup.csv IP output IP as matched_IP

And adjust the last search command accordingly.
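Putting the renamed-output variant together with coalesce, one sketch of the whole pattern (file and field names as in the example above, with host_by_ip, host_by_name, and matched_host being hypothetical names):

<your base search>
| lookup yourlookup.csv IP output Host_Name as host_by_ip
| lookup yourlookup.csv Host_Name output Host_Name as host_by_name
| eval matched_host = coalesce(host_by_ip, host_by_name)
| where isnotnull(matched_host)

This keeps the original IP and Host_Name fields intact while still filtering down to events that matched the lookup on either field.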
index=testindex source=application.logs
| rex "ErrorCode\:\[?<Error_Code>\d+]"
| search Error_Code IN (200, 500, 400, 505, 500)
| stats count by Error_Code
| where count > 5

Output:

Error_Code  count
200         20
500         100
400         40
505         45
500         32

Instead of error codes we want to display custom text as shown below. How can we do this?

Expected output:

Error_Code                               count
Application received with errorcode 200  20
Application received with errorcode 500  100
Application received with errorcode 400  40
Application received with errorcode 505  45
Application received with errorcode 500  32
To make your app CIM-compliant, you should do the following:
1. Use event types to apply tags to the events so they end up in the correct data model, e.g. tag "network" and "communicate" to put them in the Network Traffic data model.
2. Add field extractions, calculated fields, lookups, etc., to get values for the fields listed in the CIM model.
The CIM Vladiator app is useful for this purpose: https://splunkbase.splunk.com/app/2968
Ref: https://docs.splunk.com/Documentation/CIM/5.3.2/User/NetworkTraffic
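As an illustration of step 1, a minimal event-type/tag pair might look like this in an app's default directory (the stanza name and the sourcetype are placeholders for your own):

# eventtypes.conf
[my_app_network_traffic]
search = sourcetype=my:app:traffic

# tags.conf
[eventtype=my_app_network_traffic]
network = enabled
communicate = enabled

With both tags applied, matching events become candidates for the Network Traffic data model, provided the required CIM fields from step 2 are also populated.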
small note to add, since v9.x the password complexity is enforced in the user-seed.conf file as well.  So be sure the new password is at least 8ch long or whatever your complexity requirements are.  ... See more...
A small note to add: since v9.x, password complexity is enforced for the user-seed.conf file as well, so be sure the new password is at least 8 characters long, or whatever your complexity requirements are. If the new etc/passwd file is not created, check the splunkd.log file for the failure reason.
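For reference, the user-seed.conf mechanism being discussed looks like this, placed in $SPLUNK_HOME/etc/system/local before Splunk first starts (the username and password shown are only placeholders):

[user_info]
USERNAME = admin
PASSWORD = Chang3MePlease!

Splunk consumes this file on startup to create etc/passwd, which is why a password that fails the complexity check surfaces in splunkd.log rather than in the UI.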