@Kim.Frazier, Thanks for asking your question on the community. Since it's been a few days with no reply, did you find a solution or any new discoveries you could share? If you are still looking for help, you can contact Cisco AppDynamics Support. How do I submit a Support ticket? An FAQ 
Hi @Justin.Matthew, Thanks for asking your question on the community. It's been a few days, have you discovered anything worth sharing? If not, you can contact Cisco AppDynamics Support for more help. How do I submit a Support ticket? An FAQ 
There are 3 members in the cluster; it has not updated since I made the change yesterday, even on the instance where I made the change.
Haven't tried yet, but wanted to confirm if it works for POC. Thanks.
Just using the local Splunk authentication (username and password), nothing external.
Hi @Anthony.Dahanne, Thanks for asking your question on the community. Since it's been a few days, have you discovered a solution or anything worth sharing? 
Hi @iam_ironman, does it work this way? Ciao. Giuseppe
Hi @Haleb, where did you run the Monitoring Console - on the SH? It's better to run it on the Cluster Manager or (better) on a dedicated server. Anyway, if this is a lab, you have to configure your Monitoring Console so that it accesses all the systems in your infrastructure as search peers. In other words, go to [Settings > Distributed Search > Add Peer] and add the Cluster Manager as a search peer (on port 8089), and you'll see it in the Monitoring Console. I made the same mistake some years ago! Ciao. Giuseppe
How do I add a dummy row to a table in a Splunk dashboard? We receive 2 files every day, 4 times a day, in the windows 6-7:30AM, 11-12:30 PM, 6-7:30PM, and 9-10:05PM. I need output like below: if only one file is received, the table has to show the other file as missing. Using the | makeresults command we can create a row, but it is only applicable while calculating the timings.

Input:
File    Date
TI7L    03-06-2024 06:52
TI7L    03-06-2024 06:55
TI8L    03-06-2024 11:51
TI8L    03-06-2024 11:50
TI9L    03-06-2024 19:06
TI9L    03-06-2024 19:10
TI5L    03-06-2024 22:16
TI5L    03-06-2024 22:20

Output:
File    Date
TI7L    03-06-2024 06:52
Missing file
Missing file
TI8L    03-06-2024 11:50
TI9L    03-06-2024 19:06
Missing file
TI5L    03-06-2024 22:16
Missing file
This article is the continuation of the “Combine multiline logs into a single event with SOCK - a step-by-step guide for newbies” blog, where we went through multiline processing for the default Kubernetes logs pipeline. Let's take a closer look at how the multilineConfigs option functions, so that you can easily customize it using the standard OTel configuration to fit your specific needs. To fully understand it, we'll go through the operators and break the default SOCK filelog configuration down into its basic parts.

Operators overview

The filelog receiver is the critical part of the Splunk OTel Collector for Kubernetes log collection mechanism. The receiver is already heavily configured in the Helm chart - if you're interested in how this section looks for your version, run:

kubectl describe cm/<helm-app-name>-otel-agent

and look for the filelog section. The config itself is very long, so today we'll focus only on the operators section. Thanks to this capability we can create mini pipelines within the receiver itself and process logs correctly based on certain criteria. It might look like a bunch of complicated, impenetrable statements at first. Don't worry, we'll break it down one by one, starting with…

Routers!

Look at the first of SOCK's filelog receiver operators: depending on the type of log, we want to parse it accordingly. In the Kubernetes world, the format depends on the container runtime; SOCK supports docker, cri-o, and containerd. The routes section is a simple substitute for the switch statement, well known in the programming world. In this example, if the log body matches the regex ^\\{, the log is passed to another operator with the id parser-docker; for ^[^ Z]+ it is parser-crio, and ^[^ ]+ goes to parser-containerd.

Let's see one of them, parser-containerd: it is a regex_parser that parses logs to gather fields like time and logtag. And that's all - the filelog receiver then executes operators one by one according to their sequence. But if you'd like to pass the log to another operator of your choice, you only need to specify the output field.

Parsers

As shown in the previous example, regex_parser parses the string-type field configured in parse_from with a regex created by the user. Thanks to that we can extract multiple attributes from one string in one operation. It's one of many parsers available for filelog operators; another example is json_parser, which SOCK uses to parse the docker logs' timestamp. In every parser there are two options, parse_from and parse_to. The default value of parse_from is body - which means the plain log line - and for parse_to it is an array of attributes. In case your logs follow some other popular format, check out other parsing operators like the syslog parser or the CSV parser. All of them are described here.

Recombine

Finally, we reached the main point of our operator journey - the recombine operator is very powerful whenever you want to combine consecutive logs into single events based on simple expression rules. Take one of the examples from SOCK's config: we combine logs whenever we encounter attributes.logtag set to F. The logtag attribute was extracted from the log body earlier, in parser-containerd - you can go back to that example above.
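To make this concrete, here is a rough sketch of what such an operator chain can look like in standard OTel filelog operator syntax. The ids, regexes, and routes follow the description above rather than being copied from a specific chart version, so treat it as an illustration, not the exact SOCK config:

    operators:
      # Route each log line to the right parser based on the container runtime format
      - type: router
        id: get-format
        routes:
          - output: parser-docker       # log body starts with "{" -> docker json logs
            expr: 'body matches "^\\{"'
          - output: parser-crio         # matches the cri-o line format
            expr: 'body matches "^[^ Z]+ "'
          - output: parser-containerd   # matches the containerd (CRI) line format
            expr: 'body matches "^[^ ]+ "'
      # Extract time, stream, logtag and the log payload from the containerd line format
      - type: regex_parser
        id: parser-containerd
        regex: '^(?P<time>[^ ]+) (?P<stream>stdout|stderr) (?P<logtag>[^ ]*) ?(?P<log>.*)$'
        output: containerd-recombine
      # Rebuild lines the runtime split: "P" marks a partial line, "F" the final one
      - type: recombine
        id: containerd-recombine
        combine_field: attributes.log
        combine_with: ""
        source_identifier: attributes["log.file.path"]
        is_last_entry: attributes.logtag == 'F'

Each operator hands the log to the next one in the list unless an output (or a route) says otherwise, which is exactly the switch-like behaviour described above.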
Alternatively to is_last_entry you can configure is_first_entry - which one to use depends on whether it's easier to define the beginning or the end of the multiline block. source_identifier tells which field should be used to separate one source of logs for combining purposes; in this example, we do it based on the log's file path.

Multiline config for advanced users

The multilineConfigs setting is fairly easy to use and doesn't require knowledge about operators, but the drawback is that you can use it only on Kubernetes logs from the default logs pipeline. If you want to set up multiline processing in a pipeline you wrote yourself using the extraFileLogs option, you need to configure the operators yourself. Let's take a look at what the final filelog config looks like after applying a multilineConfigs rule, as we'll need to do something similar manually. We'll use a Java example here: the values.yaml config produces an operators snippet, which can also be presented as a diagram.

In both operators you can see the clean-up-log-record operation, which moves attributes.log to body. This is necessary in the case of SOCK because of the processing it does at the beginning with parser-containerd, parser-docker, or parser-crio. The config will be even less complicated if you don't use such a mechanism.

extraFileLogs configuration

Let's start with the setup of a bare logs pipeline. NOTE: the preserve_leading_whitespaces option is necessary whenever your processing rule is based on leading whitespace; if it is not set, OTel will automatically trim the whitespace. Again we use the same Java log file, which produces such a result in Splunk. This time we need to apply the multiline config as part of the filelog operators.

Scenario 1 - only recombine

In case we're 100% sure our multiline config applies to all the logs collected by the pipeline, we can use one recombine operator without any complicated logic. For a directory composed of Java log files, we can apply such a config: source_identifier is attributes["log.file.path"], as this is the only differentiator we have at this point. Applying this config results in correct log processing.

Scenario 2 - recombine and a router

This time let's combine two cases - Java logs and logs with a timestamp. Such a config requires defining a router so that we don't unnecessarily run all the logs through the same operators. Additionally, in some cases running a regex on a log that doesn't match might result in runtime errors, which in turn means nothing is sent to Splunk. After applying this config, both files were processed correctly. An important thing to remember here is that operators are executed one by one, so we have to define noop (the operator that does nothing) as an exit condition.

General notes

There are a few things to remember when creating your filelog pipeline:
- Every regex pattern must be double-escaped (every backslash doubled). When the multilineConfigs option is used, this is done automatically by the Helm mechanism; remember to do it manually otherwise, or you might end up with issues like: failed to compile is_first_entry: invalid char escape (1:32)
- It is important to set the output and default fields in routers. For example, if we didn't do it in the previous example and a log matched the first expression, it would be passed to the newline-processor, but then the timestamp-processor would be triggered as well - because it is next in the sequence.
- There are better ways to create expression patterns than attributes["log.file.path"] matches ".*java.*"; we should generally avoid greedy regexes whenever possible. You can learn more about the expression language to find a better way that suits your needs.
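Putting the Scenario 2 pieces together, a values.yaml sketch for such an extraFileLogs pipeline could look roughly like the one below. The pipeline name, file paths, and expressions (filelog/custom-apps, /var/log/apps/*.log, the "java"/"timestamped" matching) are made-up placeholders, and option names can vary between chart versions, so check them against your chart before using this:

    logsCollection:
      extraFileLogs:
        filelog/custom-apps:                  # hypothetical pipeline name
          include:
            - /var/log/apps/*.log             # hypothetical path
          start_at: beginning
          include_file_path: true
          preserve_leading_whitespaces: true  # keep indentation for the Java stack-trace rule
          operators:
            # Route each source to its own recombine; default keeps unmatched logs moving
            - type: router
              default: noop
              routes:
                - output: java-recombine
                  expr: 'attributes["log.file.path"] contains "java"'
                - output: timestamp-recombine
                  expr: 'attributes["log.file.path"] contains "timestamped"'
            # Java: a new event starts with a non-whitespace character (stack traces are indented)
            - type: recombine
              id: java-recombine
              combine_field: body
              is_first_entry: 'body matches "^[^\\s]"'
              source_identifier: attributes["log.file.path"]
              output: noop
            # Timestamped logs: a new event starts with an ISO-like date
            - type: recombine
              id: timestamp-recombine
              combine_field: body
              is_first_entry: 'body matches "^\\d{4}-\\d{2}-\\d{2}"'
              source_identifier: attributes["log.file.path"]
              output: noop
            # Exit point, so one branch's recombine is not run on the other branch's logs
            - type: noop

Note that every regex is double-escaped, each operator sets an explicit output, and the file-path check uses contains instead of a greedy .*java.* pattern, in line with the notes above.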
Mel Hall is a Splunk employee and he likes to create videos on his own time.  I try to implement his videos.  One of his videos is on a previous version of Splunk.  It did not work for me.  Attending a Udemy course will not give me a solution to the problem.   Udemy will greatly increase my knowledge but it will not necessarily answer questions that may arise when trying to implement other people's videos.
Good day. Regarding the message below about adding the IP to the Server Settings: do the Server Settings sit in PowerBI or in Splunk? And where do I find List Management? I have exactly the same error when trying to connect to Splunk Cloud from PowerBI. Any help would be appreciated - thanks.
Awesome, thanks for the help, much appreciated. This worked for me. Thanks again, Tom  
Hi, I tried to build a Splunk environment with 1 SH and an indexer cluster with 2 peers + a manager node. When I go to Monitoring Console -> Settings -> General Setup, it shows me only my SH and peers, without the manager node. But when I go to Distributed environment, I can see my indexer manager configured. Did I do something wrong, or should it not be displayed in the General Setup menu?
index="intau" host="server1" sourcetype="services_status.out.log" service="HTTP/1.1" status=* | eval status=if(status=502,200,status) | chart count by status
I am trying to import data into the Observability platform, but I can't follow your documentation. This page, https://docs.splunk.com/observability/en/admin/authentication/authentication-tokens/org-tokens.html#admin-org-tokens, says Settings - Access Tokens exists, but it doesn't. (My home page is https://prd-p-a9b9x.splunkcloud.com/en-US/manager/splunk_app_for_splunk_o11y_cloud/authentication/users). Settings - Tokens exists, but it doesn't create tokens with scopes. I don't know if that's a documentation error or an application error. I then tried running the code at https://docs.splunk.com/observability/en/gdi/other-ingestion-methods/rest-APIs-for-datapoints.html#start-sending-data-using-the-api, which says I need a realm, and that a realm can be found at "your profile page in the user interface". But it's not in User Settings and it's not in Settings - User Interface. Your documentation doesn't seem to match your application. Am I on the wrong page, or are your docs years out of date? Please help.
Yes, that's exactly what that is for. Still, consider what @gcusello already said - multiplying indexes is not always a good practice. There are different mechanisms for data "separation" depending on your use case. Unless you need
- different access permissions
- different retention periods
or you have significantly different data characteristics (cardinality, volume and "sparsity"), you should leave the data in the same index and limit your searches by adding conditions.
What I meant by "dynamic" is that the value for the index should be whatever the regex finds and uses for FORMAT. I know I can use a static value, but I wanted to confirm whether it is possible to use a regex to dynamically pick the correct index, which is part of the source. Example sources: phone-1234, tablet-23456, pc-45623, pc-79954

[new_index]
SOURCE_KEY = MetaData:Source
REGEX = (\w+)\-\d+
FORMAT = $1                # This needs to be phone, tablet, pc, etc. - I don't want to make it static
DEST_KEY = _MetaData:Index
WRITE_META = true
Hi KendallW,

This is the search:

index=_internal (host=`sim_indexer_url` OR host=`sim_si_url`) sourcetype=splunkd group=per_Index_thruput series!=_*
| timechart minspan=30s per_second(kb) as kb by series

Then I selected 30 days on the time picker. I also selected visualization. I have attached another screenshot. I hope it helps.
I managed to solve it by looking at the Splunk docs and noticing I was using the wrong flags.

# configuring Splunk
msiexec.exe /i "C:\Installs\SplunkInstallation\splunkforwarder-9.2.0.1-d8ae995bf219-x64-release.msi" SPLUNKUSERNAME=admin SPLUNKPASSWORD=**** DEPLOYMENT_SERVER="********:8089" AGREETOLICENSE=Yes /quiet