All Topics

Hi, can anyone please suggest where I can submit a bug report for dashboard visualisations? Thanks
The Splunk Distribution of the OpenTelemetry (OTel) Collector is a product that provides a way to ingest metrics, traces, and logs into the Splunk platform using HTTP Event Collector (HEC). If you are a DevOps Engineer or SRE, you may already be familiar with the OTel Collector's flexibility, but for those less experienced, this blog post will serve as an introduction to routing logs.

The idea of OpenTelemetry as a whole is to unify the data so it is suitable for every input and output, with processors in between that make it possible to perform operations on the data (such as transforming and filtering). One of the biggest advantages of the OTel Collector is this flexibility - but figuring out how to use it in practice can be a challenge.

One of the most common cases in log processing is setting up the event's index. If you're familiar with the Splunk HEC receiver, you might recall the index field in the exporter configuration snippet: it indicates that every event handled by that exporter will be sent to the logs index.

Because the index is specific to an exporter, the intuition is to create as many splunk_hec exporters as you need, and additionally create multiple filelog receivers, so that you can control which files go to which index. Imagine a scenario where all the logs go to the ordinary logs index, but some should only be visible to people with higher permission levels. These logs are gathered by a filelog/security receiver and routed through a separate pipeline to their own index. But is that really the best solution? Let's consider a few questions:

- The splunk_hec exporter config is essentially the same each time; the only difference is the index field. Does it make sense to copy the configuration over and over?
- The filelog receiver gives you a way to configure where logs are gathered. What about other filtering options, such as ones based on severity or a specific phrase in the log's body?
- Every time we create a new pipeline, a new process comes to life - doesn't this consume too many resources?

The solution: dynamic index routing

Today I'll show you how to create a pipeline with dynamic index routing - meaning the index is based on the incoming logs rather than statically set - using a transform processor and the Splunk OpenTelemetry Collector for Kubernetes (SOCK). The idea is based on an attribute described in the Splunk HEC exporter documentation: we can specify com.splunk.index as a resource attribute on a log, and it will overwrite the default index. Let's go through a few examples of how to do this in SOCK.

Viewing the pipelines configured in SOCK

Before we cover how to overwrite your pipelines, let's start with how to view them. The final config is the result of your configuration in values.yaml combined with the default configuration delivered by SOCK, and its yaml lives in the pod's configmap. Since logs are handled by the agent, look at the agent's config with kubectl describe configmap, where my-splunk-otel-collector-otel-agent is the configmap's name - it might differ in your case, especially if you chose a release name different from the one in the Getting Started docs. You can list the configmaps in your namespace with kubectl get configmaps. After running the describe command, scroll all the way down until you see the pipelines section; for logs, it lists the receivers, processors, and exporters the pipeline is made of. Now you know what components your logs pipeline is made of!

Easy scenarios

Now let's get our hands dirty!
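To make the starting point concrete, here is a minimal sketch of the two pieces discussed above: a splunk_hec exporter with a statically configured index, and the com.splunk.index resource attribute that overrides it per log. The endpoint, token, and exporter name are placeholders, not values from the original post.

# Sketch only: a splunk_hec exporter whose "index" field statically routes every
# event it exports to the "logs" index. Endpoint and token are placeholders.
exporters:
  splunk_hec/platform_logs:
    endpoint: "https://splunk.example.com:8088/services/collector"
    token: "00000000-0000-0000-0000-000000000000"
    index: "logs"
# The dynamic alternative used in the rest of this post: if a log record carries the
# resource attribute com.splunk.index, the exporter sends it to that index instead.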
Let's look at the easy examples of index routing based on real scenarios.

Scenario based on a log attribute

The scenario: say we want to pass all events whose log.iostream attribute is stderr to error_index. This captures events emitted to the error stream and sends them to their own index.

The solution requires two things: overwriting the default agent config with a new transform processor and adding it to the logs pipeline, and setting up the new processor's statement to specify the correct target index. Every transform processor consists of a set of statements, so we need to write one that matches our use case. The logical statement here is: set the com.splunk.index value to error_index for every log in the pipeline whose log.iostream attribute is set to stderr. That statement is then written in the transform processor's syntax, which is described in the processor's documentation.

Next, we need to append the processor to the logs pipeline. To do that, copy the current processors list under the agent.config section and insert our processor at the end. (A hedged sketch of what the resulting config can look like is included after the conclusion of this post.) After applying the config, the stderr events appear in error_index.

Scenario based on specific log text

The scenario: pass an event to a different index when something specific appears in the body of the log - for example, every log that contains [WARNING].

The solution: all the keywords used here come from the transform processor documentation. We can use the transform processor again, this time matching on the log body; the OpenTelemetry Transformation Language (OTTL) documentation and its grammar are good places to learn more. Then we repeat the steps described in the previous solution, and the result shows up in Splunk Enterprise under the new index.

How do I know what attributes I can use?

At this point you might think, "Right, that looks easy, but how would I know which attributes to use?" Logs in the transform processor can use all the elements described in the OTTL log context, but the most useful ones are:

- body - the log body
- attributes - attributes specific to a single log
- resource.attributes - attributes shared by multiple logs from the same source

You can see them in the Splunk Enterprise event preview. However, the preview gives no indication as to which dimensions are attributes and which are resource.attributes. You can see the real structure by running your OTel agent with a config that dumps the logs it processes; this prints the full log structure and shows which attributes really are resource.attributes. In the example output, only logtag and log.iostream are attributes - all the rest are part of resource.attributes. The transform processor has many options aside from the ones described above; check out its documentation.

Complex scenarios

Let's go even deeper and operate on two variables instead of one.

Scenario based on annotations setting the index

You may want to annotate a whole namespace with one splunk.com/index, but redirect specific pods from that namespace somewhere else. You can do this by adding an extra annotation to the pod of your choice and using a transform processor. Let's say the annotation is second_index; it is visible in kubectl describe of the pod.

Transforming the annotation into a resource attribute

The first step in redirecting logs from these pods according to the second_index annotation is to convert the annotation into a resource attribute.
This can be done with the extraAttributes.fromAnnotations config. The tag_name field is the identifier of the element in resource.attributes, and it is optional: if you don't configure it, the attribute name follows the default output format k8s.pod.annotations.<key>. With tag_name you can decide what the attribute is called; in this example it is the same as the key.

Make OTel pass logs to the index

Now that we have the second_index resource attribute set up, we can set the index destination for the logs. We will use the transform processor for this purpose: we replace the com.splunk.index resource attribute with the value of the second_index attribute, but only when the second_index attribute is present - so it doesn't affect logs from other pods.

Delete unnecessary attributes

Once the attribute has been copied into the log's index, we can get rid of it. This requires adding another statement to the transform processor.

Scenario based on labels setting the index

This works exactly the same as the annotation example from the previous section; the only difference is in how we transform the label into a resource attribute. With a second_index label on a pod, we can make it visible to the OTel Collector with the corresponding extraAttributes config snippet.

Conclusion

In this article, I showed you how to route logs to different indexes. It is a commonly used feature and it can be applied in many scenarios, as the examples show. We will expand on other SOCK features in later articles, so stay tuned!
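For reference, here is a hedged sketch of what the values.yaml override for the first scenario (routing stderr events to error_index) can look like. The processor list below only mirrors a typical default SOCK logs pipeline - your defaults may differ, so copy the processors from your own agent configmap as described above; the processor name, index name, and OTTL statement are illustrative, not the article's exact config.

agent:
  config:
    processors:
      # new processor: route stderr logs to error_index via com.splunk.index
      transform/route_stderr:
        log_statements:
          - context: log
            statements:
              - set(resource.attributes["com.splunk.index"], "error_index") where attributes["log.iostream"] == "stderr"
    service:
      pipelines:
        logs:
          processors:
            # copy your existing processor list here, then append the new one at the end
            - memory_limiter
            - k8sattributes
            - filter/logs
            - batch
            - resourcedetection
            - resource
            - transform/route_stderr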
I need to report hosts that are configured to receive app.log details and also report the ones that are missing. For this, I use the query:

index=application sourcetype="application:server:log" | stats values(host) as hostnames by customer_name

This retrieves the hostnames for each customer_name from the sourcetype. I get a result like:

customer_name   hostnames
customer1       server1
customer2       server2, server3

Then I join the result on the customer_name field with the second part of the query:

[| inputlookup server_info.csv | rename customer as customer_name | stats values(host) as hostnames_lookup by customer_name]

which retrieves the hostnames for each customer_name from the server_info.csv lookup table. Here I get a result like:

customer_name   hostnames_lookup
customer1       server1, server100
customer2       server2, server3, server101

Later, I expand both multivalue fields and evaluate them against each other to mark each host as configured or not configured. The evaluation looks like this:

| mvexpand hostnames
| mvexpand hostnames_lookup
| eval not_configured = if(hostnames == hostnames_lookup, hostnames, null())
| eval configured = if(hostnames != hostnames_lookup, hostnames, null())
| fields customer_name, hostnames, hostnames_lookup, configured, not_configured

My final query looks like this:

(index=application sourcetype="application:server:log")
| stats values(host) as hostnames by customer_name
| join customer_name
    [| inputlookup server_info.csv | rename customer as customer_name | stats values(host) as hostnames_lookup by customer_name]
| mvexpand hostnames
| mvexpand hostnames_lookup
| eval not_configured = if(hostnames == hostnames_lookup, hostnames, null())
| eval configured = if(hostnames != hostnames_lookup, hostnames, null())
| fields customer_name, hostnames, hostnames_lookup, configured, not_configured

However, the results are not as expected: the matching logic doesn't work and the output is incorrect. There are no values in the not_configured column, and the configured column only returns the values in hostnames. I'd expect the configured field to show all the servers configured to receive app.log, and not_configured to show hostnames that are present in the lookup but are still not configured to receive logs.

Expected output:

customer_name   hostnames           hostnames_lookup               configured          not_configured
customer1       server1             server1, server100             server1             server100
customer2       server2, server3    server2, server3, server101    server2, server3    server101

Current output:

customer_name   hostnames           hostnames_lookup               configured          not_configured
customer1       server1             server1, server100             server1
customer2       server2, server3    server2, server3, server101    server2, server3

Essentially, customer1 should display server1 as configured and server100 as not_configured, and likewise for customer2, as shown in the expected output table. This would mean that server100 and server101 are part of the lookup but are not configured to receive app.log.

How can I evaluate this differently so that the comparison works as expected? Is it possible to compare the values in this fashion? Is there anything wrong with the current comparison logic? Should I not use mvexpand on the extracted fields so that they are compared as expected?
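Not an answer from the thread, just a sketch of one way the comparison could be done without mvexpand, assuming the mvmap and mvfind eval functions are available (Splunk 8.x and later); field and lookup names are taken from the question, and the hostnames are assumed not to contain regex special characters.

(index=application sourcetype="application:server:log")
| stats values(host) as hostnames by customer_name
| join type=left customer_name
    [| inputlookup server_info.csv
     | rename customer as customer_name
     | stats values(host) as hostnames_lookup by customer_name]
| eval configured=mvmap(hostnames_lookup, if(isnotnull(mvfind(hostnames, "^".hostnames_lookup."$")), hostnames_lookup, null()))
| eval not_configured=mvmap(hostnames_lookup, if(isnull(mvfind(hostnames, "^".hostnames_lookup."$")), hostnames_lookup, null()))
| fields customer_name, hostnames, hostnames_lookup, configured, not_configured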
We have an issue with long JSON log events that are longer than the console width limit - they are split into 2 separate events, and neither of them is valid JSON on its own. How can this be handled correctly? Is it possible to restore broken messages on the Splunk side, or do we need to reach out to the logger owners to learn the width limitation and chunk messages in a proper way? How to handle large JSON events?
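Just a hedged sketch, not a confirmed fix for this case: if the two halves arrive as consecutive lines in the same file or stream, event breaking on the indexer or heavy forwarder can sometimes stitch them back together by only breaking before a line that starts a new JSON object. The sourcetype name is a placeholder, and this will not help if the halves arrive as entirely separate events from the source - in that case the logger side usually has to be fixed.

# props.conf on the indexer or heavy forwarder (sourcetype name is a placeholder)
[my_json_sourcetype]
SHOULD_LINEMERGE = false
# break events only before a line that begins a new JSON object
LINE_BREAKER = ([\r\n]+)(?=\{)
# lift the per-event size limit so long JSON is not cut off (0 = no limit)
TRUNCATE = 0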
Hello! I would like to run a search which would display all information regarding entities and services. For example, for Entities where could I find information stored for: Entity Description, Entity Information Field, Entity Title. For Services, where could I find information stored for: Service Description, Service Title, Service Tags What type of search query could I run to find this information? Thanks,
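A possible starting point, offered as a sketch rather than a definitive answer: ITSI keeps service and entity definitions in its own KV store, which is exposed through the itoa_interface REST endpoints. The endpoint paths come from the ITSI REST API, but the exact field names (description, informational fields, entity rules, and so on) vary by version, so treat the searches below as assumptions to verify in your environment.

| rest /servicesNS/nobody/SA-ITOA/itoa_interface/entity
| table title description informational*

| rest /servicesNS/nobody/SA-ITOA/itoa_interface/service
| table title description entity_rules*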
Thanks in advance. I have four inputs - Time, Environment, Application Name and Interface Name - and two panels, one for finance and one for bank. Both panels have different application names and interface names, so I tried to use depends and rejects in the inputs. If I switch from one panel to the other, the inputs (dropdown and text box) remain the same, but their values need to change as per the panel.

<row>
  <panel id="panel_layout">
    <input id="input_link_split_by" type="link" token="tokSplit" searchWhenChanged="true">
      <label></label>
      <choice value="Finance">OVERVIEW</choice>
      <choice value="BankIntegrations">BANKS</choice>
      <default>OVERVIEW</default>
      <initialValue>OVERVIEW</initialValue>
      <change>
        <condition label="Finance">
          <set token="Finance">true</set>
          <unset token="BankIntegrations"></unset>
        </condition>
        <condition label="BankIntegrations">
          <set token="BankIntegrations">true</set>
          <unset token="Finance"></unset>
        </condition>
<row>
  <panel>
    <input type="time" token="time" searchWhenChanged="true">
      <label>Time Interval</label>
      <default>
        <earliest>-15m</earliest>
        <latest>now</latest>
      </default>
    </input>
    <input type="dropdown" token="env" searchWhenChanged="true">
      <label>Environment</label>
      <choice value="*">ALL</choice>
      <choice value="DEV">DEV</choice>
      <choice value="TEST">TEST</choice>
      <choice value="PRD">PRD</choice>
      <default>*</default>
      <initialValue>*</initialValue>
    </input>
    <input type="dropdown" token="applicationName" searchWhenChanged="true" depends="$Finance$" rejects="$BankIntegrations$">
      <label>ApplicationName</label>
      <choice value="*">ALL</choice>
      <choice value="p-wd-finance-api">p-wd-finance-api</choice>
      <default>"p-wd-finance-api</default>
      <initialValue>p-oracle-fin-processor","p-oracle-fin-processor-2","p-wd-finance-api</initialValue>
      <fieldForLabel>ApplicationName</fieldForLabel>
      <fieldForValue>ApplicationName</fieldForValue>
    </input>
    <input type="text" token="InterfaceName" searchWhenChanged="true" depends="$Finance$" rejects="$BankIntegrations$">
      <label>InterfaceName</label>
      <default></default>
      <initialValue></initialValue>
    </input>
    <input type="dropdown" token="applicationName" searchWhenChanged="true" depends="$BankIntegrations$" rejects="$Finance$">
      <label>ApplicationName</label>
      <choice value="p-wd-finance-api">p-wd-finance-api</choice>
      <default>p-oracle-fin-processor","p-oracle-fin-processor-2","p-wd-finance-api</default>
      <initialValue>p-oracle-fin-processor","p-oracle-fin-processor-2","p-wd-finance-api</initialValue>
      <fieldForLabel>ApplicationName</fieldForLabel>
      <fieldForValue>ApplicationName</fieldForValue>
    </input>
    <input type="text" token="InterfaceName" searchWhenChanged="true" depends="$BankIntegrations$" rejects="$Finance$">
      <label>InterfaceName</label>
      <default></default>
      <initialValue></initialValue>
    </input>
  </panel>
</row>
Hi, I have raw data/events as below. Splunk gets the raw data every 2 hours, and only 4 times a day: the job runs at 11:36, 13:36, 15:36 and 17:36, and I am getting ~2.5K events per day. The DATETIME field tells what time the job ran.

2024-04-15 21:36:58.960, DATETIME="2024-04-15 17:36:02", REGION="India", APPLICATION="webApp", CLIENT_CODE="ind", MARKET_CODE="SEBI", TRADE_COUNT="1"

What I am looking for: when I run the dashboard, I want to monitor the trade count by MARKET_CODE for the latest DATETIME only. For instance, if I run the dashboard at 14:00, the DATETIME field might have 11:36 (~600 events) and 13:36 (~600 events); I want to see only the 13:36 events, and the metric would be TRADE_COUNT by MARKET_CODE. Thanks, Selvam.
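One possible approach, sketched under the assumption that DATETIME is a string in YYYY-MM-DD HH:MM:SS format (so its lexical maximum is also the most recent run); the index and sourcetype are placeholders for your own.

index=your_index sourcetype=your_sourcetype
| eventstats max(DATETIME) as latest_run
| where DATETIME=latest_run
| stats sum(TRADE_COUNT) as trade_count by MARKET_CODE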
Hello Team, I am trying for a solution using a multiselect input filter where the index token is passed to panels. With the code below, I currently see the filter values "Stack1", "Stack2" and "Stack3". But I face an issue: the value passed is taken from the label. I need index_tkn to hold the values aws_stack02_p, aws_stack01_p, aws_stack01_n.

<input type="multiselect" token="index_tkn" searchWhenChanged="false">
  <label>Select Stack</label>
  <valuePrefix>index="</valuePrefix>
  <valueSuffix>"</valueSuffix>
  <delimiter> OR </delimiter>
  <fieldForLabel>index</fieldForLabel>
  <fieldForValue>label</fieldForValue>
  <search>
    <query>index IN ({aws_stack02_p,aws_stack01_p,aws_stack01_n}) | eval label = case(index == "aws_stack02_p", "Stack1", index == "aws_stack01_p", "Stack2", index == "aws_stack01_n", "Stack3") | stats count by label</query>
    <earliest>$time_tkn.earliest$</earliest>
    <latest>$time_tkn.latest$</latest>
  </search>
</input>
I am using Splunk Cloud and the design is UF > HF > Splunk Cloud. On the HFs we have an outputs.conf file like below. The sslPassword is the same for all HFs when I am using multiple heavy forwarders.

root@hostname:/opt/splunk/etc/apps/100_stackname_splunkcloud/local # cat outputs.conf

[tcpout]
sslPassword = 27adhjwgde2y67dvff3tegd36scyctefd73******************
channelReapLowater = 10
channelTTL = 300000
dnsResolutionInterval = 300
negotiateNewProtocol = true
socksResolveDNS = false
useClientSSLCompression = true
negotiateProtocolLevel = 0
channelReapInterval = 60000
tcpSendBufSz = 5120000
useACK = false

[tcpout:splunkcloud]
useClientSSLCompression = true
maxQueueSize = 250MB
autoLBFrequency = 300
My raw data looks like below:

_raw = {"id":"0","severity":"Information","message":"CPW Total= 844961,SEQ Total =244881, EAS Total=1248892, VRS Total=238, CPW Remaining=74572, SEQ Remaining=22, EAS Remaining =62751, VRS Remaining =0, InvetoryDate =4/15/2024 6:16:07 AM"}

I want to extract fields from message so the result looks like below. I tried this through regex but I am unable to extract the fields. Please help to create the extraction for:

CPW Total   SEQ Total   EAS Total   VRS Total   CPW Remaining   SEQ Remaining   EAS Remaining   VRS Remaining   InvetoryDate
844961      244881      1248892     238         74572           22              62751           0               4/15/2024 6:16:07 AM
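Not a confirmed answer, just one possible extraction written against the sample event above, assuming the message field is already extracted (for example by automatic JSON field extraction or spath). Field names use underscores because Splunk field names cannot contain spaces, and the InvetoryDate spelling is kept as it appears in the data.

| rex field=message "CPW Total\s*=\s*(?<CPW_Total>\d+)\s*,\s*SEQ Total\s*=\s*(?<SEQ_Total>\d+)\s*,\s*EAS Total\s*=\s*(?<EAS_Total>\d+)\s*,\s*VRS Total\s*=\s*(?<VRS_Total>\d+)\s*,\s*CPW Remaining\s*=\s*(?<CPW_Remaining>\d+)\s*,\s*SEQ Remaining\s*=\s*(?<SEQ_Remaining>\d+)\s*,\s*EAS Remaining\s*=\s*(?<EAS_Remaining>\d+)\s*,\s*VRS Remaining\s*=\s*(?<VRS_Remaining>\d+)\s*,\s*InvetoryDate\s*=\s*(?<InvetoryDate>[^\"}]+)"
| table CPW_Total SEQ_Total EAS_Total VRS_Total CPW_Remaining SEQ_Remaining EAS_Remaining VRS_Remaining InvetoryDate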
I am trying to build some modular documentation as a Splunk app on a site with an indexer cluster and a search head cluster. Part of the reasoning behind this is that I spend quite some time researching existing configuration when I'm about to make new changes, so I would like to be able to create views showing me details from props, transforms and indexes on the search heads. My question is: do you see any potential pitfalls in having the configuration on the search heads as well as the indexers? Or are there other solutions for viewing the configuration of the indexer peers from the search heads? Cheers!
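As a sketch of one alternative, under the assumption that the indexers are attached to the search head as search peers and your role has the rest capability: the configuration REST endpoints can often be queried from a search head without copying the .conf files there at all.

| rest /services/configs/conf-props splunk_server=*
| table splunk_server title eai:acl.app TRANSFORMS* REPORT* EXTRACT*

| rest /services/configs/conf-indexes splunk_server=*
| table splunk_server title homePath coldPath frozenTimePeriodInSecs maxTotalDataSizeMB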
@all When I'm trying to install and configure the #otel collector to send data from agent mode to a gateway collector in #Splunk Observability Cloud, I'm facing many challenges - I'm not able to get the agent to send data to the gateway. Can anyone guide me on how to solve this issue?
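Hard to diagnose without more detail, but as a generic, hedged sketch of the agent-to-gateway pattern (hostname, port, and receivers are assumptions - 4317 is the usual OTLP gRPC port): the agent exports OTLP to the gateway, and the gateway forwards on to Splunk Observability Cloud. Comparing this shape against your agent config and checking that the gateway port is reachable from the agent host is usually the first step.

# Agent side (sketch): forward collected data to the gateway over OTLP gRPC.
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: "0.0.0.0:4317"
  hostmetrics:
    collection_interval: 10s
    scrapers:
      cpu:
      memory:
exporters:
  otlp:
    endpoint: "gateway.example.internal:4317"   # placeholder gateway host:port
    tls:
      insecure: true                            # assumption: plain gRPC inside your network
service:
  pipelines:
    metrics:
      receivers: [hostmetrics]
      exporters: [otlp]
    traces:
      receivers: [otlp]
      exporters: [otlp]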
I'm sure someone here has worked on a PowerShell script to install Splunk on different Windows hosts remotely. Can I get help with that? My PowerShell skills are really weak.
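Only a rough sketch to start from, not a hardened deployment script: it assumes PowerShell remoting (WinRM) is enabled on the targets, that the universal forwarder MSI is reachable from them, and that you run it with an account allowed to install software. AGREETOLICENSE and DEPLOYMENT_SERVER are documented Splunk UF MSI properties; paths, hostnames, and the deployment server are placeholders.

# Push the Splunk Universal Forwarder MSI to a list of Windows hosts via PowerShell remoting.
$targets = Get-Content -Path .\hosts.txt                      # one hostname per line (placeholder)
$msiPath = '\\fileserver\share\splunkforwarder-x64.msi'       # placeholder UNC path to the MSI

Invoke-Command -ComputerName $targets -ScriptBlock {
    param($msi)
    # Note: reaching a UNC path from a remote session may need CredSSP/delegation (second hop).
    $msiArgs = "/i `"$msi`" AGREETOLICENSE=Yes DEPLOYMENT_SERVER=`"deploy.example.com:8089`" /quiet /norestart"
    Start-Process -FilePath msiexec.exe -ArgumentList $msiArgs -Wait
} -ArgumentList $msiPath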
Hello,
1. Is there an option (built in or manually built) for a container to view the history of older containers with the same artifacts and details? It would make an analyst's work easier to see the notes and how the older case was solved.
2. When enabling "logging" for a playbook, where under /opt are the logs stored so they can be accessed later (besides viewing the debugging in the UI)?
Thank you in advance!
Hi, I finished upgrading Splunk ES to 7.3.0 on 1 of 2 non-clustered Search Heads and I receive this error on the Search Head Post Install Configuration wizard menu: "Error in 'essinstall' command: Automatic SSL enablement is not permitted on the deployer". Splunk support recommended changing the setting in web.conf to "splunkdConnectionTimeout = 3000", which I added to the system file before restarting splunkd. Unfortunately this timeout setting does not fix this "known issue". I selected the Enable SSL option in the post-config process because I know that SSL is enabled in both the Deployer and SH web configs. If anyone has a workaround for this, or can suggest how I can enable SSL after the post configuration of Splunk ES on both the SH and Deployer, it would be appreciated. Thanks
index=app-logs sourcetype=app-data source=*app.logs* host=appdatajs01 OR host=appdatajs02 OR host=appdatajs03 OR host=appdatajs04
| stats count by host
| where count < 100
| bin span=1m _time

We have an alert with the above query. The alert triggers when the count for a host is less than 100, but it does not trigger when the count for any host is zero. How can we make the alert trigger even when the count is 0?
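A common pattern for this, sketched with the host list hard-coded from the question: a host that sends nothing produces no row at all, so the expected hosts have to be injected with zero counts before the threshold check.

index=app-logs sourcetype=app-data source=*app.logs* host IN (appdatajs01, appdatajs02, appdatajs03, appdatajs04)
| stats count by host
| append
    [| makeresults
     | eval host=split("appdatajs01,appdatajs02,appdatajs03,appdatajs04", ",")
     | mvexpand host
     | eval count=0
     | fields host count]
| stats sum(count) as count by host
| where count < 100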
I have two logs below. Log a is throughout the environment and would be shown for all users; log b is limited to specific users. I only need times for users in log b.

log a: There is a file has been received with the name test2.txt
log b: The file has been found at the second destination C://user/test2.txt

I am trying to write a query that captures the time between log a and log b without doing a subsearch. So far I have:

index=a, env=a, account=a ("There is a file" OR "The file has been found")
| field filename from log b
| field filename2
| eval Endtime = _time

Here is where I am lost. I was hoping to use if/match/like/eval to capture the start time where log b's filename can be found in log a. I have this so far:

| eval Starttime = if(match(filename,"There is%".filename2."%"),_time,0)

I am not getting any 1s, just 0s. I am pretty sure the problem is "There is%".filename2."%" - how do I correct it?
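One way this is often approached without a subsearch, offered only as a sketch: extract the file name from each kind of event with its own rex (the patterns below are guesses based on the two sample lines), then group the two events per file with stats and take the difference, which avoids comparing two different fields on the same row altogether.

index=a env=a account=a ("There is a file" OR "The file has been found")
| rex "received with the name (?<fname>\S+)"
| rex "found at the second destination \S*/(?<fname>[^/\s]+)$"
| stats earliest(_time) as Starttime latest(_time) as Endtime by fname
| eval duration=Endtime-Starttime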
The event.url field stores all the URLs found in the logs. I want to create a new field called url_domain that only captures the domain of the URLs stored in event.url. Temporarily, what I do from the search is write the following: | rex field=event.url "^(?:https?:\/\/)?(?:www[0-9]*\.)?(?)(?<url_domain>[^\n:\/]+)" What should I add in props.conf so that this extraction is permanent for the sourcetype "sec-web"?
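Not verified against this environment, just a sketch of the usual approach: a search-time EXTRACT entry in props.conf on the search head, scoped to the sourcetype. The app path is a placeholder, and the stray (?) group from the inline rex is dropped here because it is not valid PCRE - keep your working regex if it behaves differently for you.

# $SPLUNK_HOME/etc/apps/<your_app>/local/props.conf on the search head (path is a placeholder)
[sec-web]
EXTRACT-url_domain = ^(?:https?:\/\/)?(?:www[0-9]*\.)?(?<url_domain>[^\n:\/]+) in event.url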
So, I created a saved search and it was working fine. But I had to change the SPL for it, and when I tried it again, it is still showing the old results rather than the results of the new SPL. Why? Do I have to wait for the changes to take effect?
Hello Fellow Splunkers, I'm fairly new to ITSI and was wondering if this could be achieved. I'm looking to create a report that lists all the Services I have in ITSI along with their associated entities, as well as associated alerts or severity. Is there a query that could achieve this? Any pointers are very much appreciated! Also, any pointers on where I could find the data and bring it together in a search would be very helpful too. Thanks!