All Posts

Please create a new question instead of continuing with an accepted answer that is several years old.
Quite possibly the time format is missing from your props.conf. For that reason, Splunk has to guess between the mm/dd/yyyy and dd/mm/yyyy formats.
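A minimal props.conf sketch that pins the format explicitly, assuming day-first timestamps such as 05/03/2024 10:15:30; the sourcetype name is a placeholder:

```
[your_sourcetype]
TIME_FORMAT = %d/%m/%Y %H:%M:%S
```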
Maybe this helps you find the blocking spot: https://conf.splunk.com/files/2019/slides/FN1570.pdf
Attention: this is an AI-generated answer and it is wrong. - Moderator

@LearningGuy Let's delve into the differences between | summaryindex and | collect in Splunk:

| summaryindex:
Purpose: | summaryindex is primarily used for creating and managing summary indexes. A summary index is a pre-aggregated index that stores summarized data from your original events. It is useful for speeding up searches and reducing the load on your search infrastructure.
How it works: When you use | summaryindex, it generates summary data based on existing reports. This means you can create a summary index only from scheduled reports.
Example usage: If you have a scheduled report that summarizes data, you can pipe its results into the summary index.

| collect:
Purpose: | collect is a versatile command that allows you to push data to a new index. Unlike | summaryindex, it is not limited to existing reports.
How it works: You can use | collect to send specific data to an index of your choice. This is particularly useful when you want to extract relevant information from your search results and store it in a separate index. The index argument names the summary index where the events are added; the index must exist before the events are added, as it is not created automatically.
Example usage: Suppose you want to use a custom index called "test_summary" to store specific data. You can use | collect index=test_summary to achieve this. Setting testmode=false ensures that the data is actually indexed.

In summary, while both commands involve indexing data, | summaryindex is tied to scheduled reports, whereas | collect provides more flexibility for pushing data to custom indexes regardless of report schedules. Remember that creating the summary index (whether through | summaryindex or | collect) requires defining the index in indexes.conf beforehand. Happy Splunking!

https://docs.splunk.com/Splexicon:Summaryindex
https://docs.splunk.com/Documentation/Splunk/9.2.0/Knowledge/Usesummaryindexing
https://docs.splunk.com/Documentation/Splunk/9.2.0/Knowledge/Managesummaryindexgapsandoverlaps
https://docs.splunk.com/Documentation/SplunkCloud/9.1.2308/SearchReference/Collect
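For reference, a minimal SPL sketch of the | collect usage described above, assuming a summary index named test_summary already exists in indexes.conf; the web index, sourcetype, and fields are hypothetical:

```
index=web sourcetype=access_combined status>=500
| stats count AS error_count BY host
| collect index=test_summary testmode=false
```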
Great! In that case, you can update the existing values.yaml file with the following and redeploy the Helm chart:

- To enable the gateway, set enabled to true in the gateway section and adjust replicaCount and the other gateway-related configuration - https://github.com/signalfx/splunk-otel-collector-chart/blob/main/helm-charts/splunk-otel-collector/values.yaml#L1056
- Enable the agent logs via - https://github.com/signalfx/splunk-otel-collector-chart/blob/main/helm-charts/splunk-otel-collector/values.yaml#L572

Once you redeploy your Helm chart with the above changes, you will have the gateway running as part of the Helm chart, and the OTel agent logs will be collected via the DaemonSet and sent to your backend.
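A minimal values.yaml sketch of the gateway change described in the first bullet above; the replicaCount value is just a placeholder, so size it for your own traffic:

```yaml
gateway:
  enabled: true      # turn on the standalone gateway deployment
  replicaCount: 3    # placeholder; adjust to your expected load
```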
Make sure the props.conf settings for that sourcetype have the correct time settings.  Specifically, check the TIME_PREFIX, TIME_FORMAT, and MAX_TIMESTAMP_LOOKAHEAD values. Confirm the data source is sending the right events.
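A hedged props.conf sketch of those three settings, assuming events that begin with a timestamp like 2024-02-29 13:05:42,123; the sourcetype name and format string are assumptions, so adjust them to your data:

```
[my_custom_sourcetype]
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S,%3N
MAX_TIMESTAMP_LOOKAHEAD = 23
```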
The current query will fetch all data from the index and then look up the Server_name field. To fetch only the hosts that are in the lookup file, use a subsearch:

index=Nagio sourcetype=nagios:core:hard
    [ | inputlookup Win_inventory.CSV
      | fields Server_name
      | rename Server_name as host_name ]
Yes, the Splunk OTel Helm chart is already on our K8s cluster and is already collecting logs from all agents.
Hi @uagraw01, yes: I have found many times that a stop in internal log forwarding is caused by a queue issue on the Splunk side. Ciao. Giuseppe
@FPERVIL https://www.tekstream.com/blog/route-data-to-multiple-destinations/
https://docs.splunk.com/Documentation/Splunk/9.2.0/Forwarding/Routeandfilterdatad#Filter_and_route_event_data_to_target_groups
You can watch this video if you get stuck anywhere: https://www.youtube.com/watch?v=AxHetwfLC0Y
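A hedged sketch of the routing setup those links describe, with two indexer groups on a forwarder; every hostname, group name, source path, and regex below is a placeholder to adapt:

```
# outputs.conf
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = indexer1.example.com:9997

[tcpout:secondary_indexers]
server = indexer2.example.com:9997

# props.conf
[source::/var/log/app/*.log]
TRANSFORMS-routing = route_errors

# transforms.conf
[route_errors]
REGEX = ERROR
DEST_KEY = _TCP_ROUTING
FORMAT = secondary_indexers
```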
Sorry, I don't have a values.yaml file tailored for your use case. The values.yaml file you shared will only set up a gateway on the K8s cluster, which will not solve the problem you are trying to solve. Let me ask you this: have you already installed the Splunk OTel Helm chart on your K8s cluster, and is it already running a DaemonSet and collecting logs?
@gcusello A queue issue from the Splunk side?
@isoutamo Up until the 27th we received all the internal index logs.
I'm afraid that this kind of information is not available unless your data already contains it.
Even if you are using Kafka as a transport method, you should still have your Splunk infrastructure's internal logs there. What kind of setup do you have?
Hi @uagraw01, I haven't used Kafka, but when only the internal indexes stop arriving it's usually a queue issue. Check your queues. Ciao. Giuseppe
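A hedged SPL sketch for inspecting queue fill from metrics.log; since your _internal forwarding has stopped, you may need to run this directly on the indexers or in the Monitoring Console rather than from the search head:

```
index=_internal source=*metrics.log* group=queue
| eval fill_pct = round(current_size_kb / max_size_kb * 100, 1)
| timechart span=5m max(fill_pct) BY name
```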
@gcusello Yes, we are receiving data from the other indexes in Splunk. We are not using any UF; we are using Kafka, and it sends data to different indexes. Does having no data in the internal index cause any issue? I want to see the internal logs for the last 24 hours, but nothing is registered there.
I've taken a few stares at that search, and I honestly think you are trying to do way too much in one search. There are several giveaways:

a) There's different binning/bucketing for one of them - though I think the entire thing could be binned to 1s intervals with no actual detriment.
b) Results are joined together using 'join', which is rarely the right command to use because Splunk Is Not A Relational DB.
c) You sort of *have* to use join here simply because of how convoluted it would be to pile all these results into one table.

But it's not necessarily the worst solution if (and only if!) there's only a single or only a few uri/fi's to actually do this for. E.g. if it's a 1-1 mapping for a few dozen items, well, this is still inefficient but it's not going to be the end of the world. A slight improvement would be to use append to get all the outputs into one table, then a final "| stats ... by fi" (or uri?) to get the three sets of results back into a single row per fi/uri.

Next best might be to rework the entire thing into one much bigger pile o' data, with a more complex stats that does all this work at once. You'd likely have to do some conditional counting inside the stats, and though it may look a little odd, it's actually still going to be about a hundred times more efficient than your existing search, especially as it scales up to lots of uri/fi's. https://docs.splunk.com/Documentation/Splunk/latest/Search/Usestatswithevalexpressionsandfunctions

But I think best would be three simple searches with their results placed on a dashboard. You'll have simpler searches that are much faster, far more easily edited/adjusted as things change, and likely way more robust as well. You'll also have a LOT more ways you could display the results.

So a few other suggestions - I'll bet you are a DBA, are you not? This looks similar to what someone familiar with any common RDBMS dropped into Splunk-land would do. That's not a complaint or any slight, it's where lots of folks start. In that case, you might want to start here: https://docs.splunk.com/Documentation/Splunk/latest/SearchReference/SQLtoSplunk

But even in the docs, Splunk still suggests join in Splunk to handle the left join from SQL, and that I think is a disservice. See Nick Mealy's talk https://conf.splunk.com/watch/conf-online.html?search=mealy (and even better if you can find the 2016-2018 version of that talk, which I think is even better and more detailed).
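A hedged sketch of that "one bigger search with conditional counting inside stats" idea; the index, field names, and threshold are all placeholders for whatever your real data uses:

```
index=app_logs (uri=* OR fi=*)
| bin _time span=1s
| stats count AS total
        count(eval(status>=500)) AS errors
        avg(response_time) AS avg_resp
        BY _time uri
```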
What are the inbound and outbound rules that need to be set for the EC2 instance (with the forwarder) and the Splunk server/indexer (to receive data from the forwarder)?
Not sure if you have seen it, but I posted an update; I'm unsure how to make it more visible. Anyway, if you are using Puppet, just ensure that the splunk user is created prior to installation. Then it should work fine. But yes, it would be nice if the module were updated as well.
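A hedged Puppet sketch of "create the splunk user before installation"; the package resource name and home directory are assumptions, so match them to however your manifest actually installs Splunk:

```puppet
# Create the service account and group before Splunk gets installed.
group { 'splunk':
  ensure => present,
}

user { 'splunk':
  ensure  => present,
  gid     => 'splunk',
  home    => '/opt/splunk',        # assumed install path
  shell   => '/bin/bash',
  require => Group['splunk'],
  before  => Package['splunk'],    # hypothetical package resource; adjust to your module
}
```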