All Posts


Make sure the props.conf settings for that sourcetype have the correct time settings.  Specifically, check the TIME_PREFIX, TIME_FORMAT, and MAX_TIMESTAMP_LOOKAHEAD values. Confirm the data source is sending the right events.
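As a sketch of what those settings might look like (the sourcetype name and timestamp format below are hypothetical — adjust them to match your actual events):

```ini
# props.conf -- hypothetical sourcetype and timestamp layout
[my_custom_sourcetype]
# Regex matching the text immediately before the timestamp in each event
TIME_PREFIX = ^timestamp=
# strptime-style format of the timestamp itself
TIME_FORMAT = %Y-%m-%d %H:%M:%S.%3N %z
# How many characters past TIME_PREFIX Splunk scans for the timestamp
MAX_TIMESTAMP_LOOKAHEAD = 30
```

If TIME_PREFIX doesn't match or the lookahead is too short, Splunk falls back to other timestamp extraction heuristics, which can produce the wrong event times.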
The current query will fetch all data from the index and then look up the Server_name field. To fetch only the hosts in the lookup file from the index, use a subsearch:

index=Nagio sourcetype=nagios:core:hard
    [ | inputlookup Win_inventory.CSV | fields Server_name | rename Server_name as host_name ]
Yes, the Splunk OTel Helm chart is already on our K8s cluster and is already collecting logs from all agents.
Hi @uagraw01, yes: I have found many times that a stop in internal log forwarding is usually caused by a queue issue on the Splunk side. Ciao. Giuseppe
@FPERVIL  https://www.tekstream.com/blog/route-data-to-multiple-destinations/  https://docs.splunk.com/Documentation/Splunk/9.2.0/Forwarding/Routeandfilterdatad#Filter_and_route_event_data_to_target_groups  You can watch this video if you get stuck anywhere: https://www.youtube.com/watch?v=AxHetwfLC0Y
Sorry, I don't have a values.yaml file tailored for your use case. The values.yaml file you shared will only set up a gateway on the K8s cluster, which will not solve the problem you are looking to solve. Let me ask you this: have you already installed the Splunk OTel Helm chart on your K8s cluster, and is it already running a DaemonSet and collecting logs?
@gcusello Queue issue from the Splunk side ??
@isoutamo Until the 27th we received all the internal index logs.
I'm afraid that this kind of information isn't there unless your data contains it.
Even if you are using Kafka as a transport method, you should still have your Splunk infrastructure's internal logs there. What kind of setup do you have?
Hi @uagraw01, I haven't used Kafka, but when only the internal indexes stop arriving it's usually a queue issue. Check your queues. Ciao. Giuseppe
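As a hedged starting point for checking queues (run on or against the indexers, over a window where metrics.log data is still present), the standard queue metrics in _internal can be charted like this:

index=_internal source=*metrics.log* group=queue
| timechart avg(current_size_kb) by name

A queue whose current_size_kb sits at its ceiling, or events with blocked=true in those metrics.log lines, usually points to the blocked stage of the pipeline.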
@gcusello Yes, we are receiving the data from other indexes in Splunk. We are not using any UF; we are using Kafka and it sends data to different indexes. Doesn't missing data in the internal index create any issue? I want to see internal logs for the last 24 hours, but nothing is registered there.
I've taken a few stares at that search, and I honestly think you are trying to do way too much in one search. There are several giveaways:

a) There's different binning/bucketing for one of them, though I think the entire thing could be binned to 1s intervals to no actual detriment.
b) Results are joined together using 'join', which is rarely the right command to use because Splunk Is Not A Relational DB.
c) But you sort of *have* to use join simply because of how convoluted it would be to pile all these results into one table.

It's not necessarily the worst solution if (and only if!) there's only a single or only a few uri/fi's to actually do this for. E.g. if it's a 1-1 mapping for a few dozen items, well, this is still inefficient but it's not going to be the end of the world.

A slight improvement would be to use append to get all the outputs into one table, then a final "| stats ... by fi" (or uri?) to get the three sets of results back into a single row per fi/uri.

Next best might be to rework the entire thing into one much bigger pile o' data, with a more complex stats that does all this work at once. You'd likely have to do some conditional counting inside the stats, but though it may look a little odd, it's actually still going to be about a hundred times more efficient than your existing search, especially as it scales up to lots of uri/fi's. https://docs.splunk.com/Documentation/Splunk/latest/Search/Usestatswithevalexpressionsandfunctions

But I think best would be three simple searches with their results placed on a dashboard. You'll have simpler searches that are much faster, far more easily edited/adjusted as things change, and likely way more robust as well. You'll also have a LOT more ways you could display the results.

So a few other suggestions - I'll bet you are a DBA, are you not? This looks similar to what someone familiar with any common RDBMS dropped into Splunk-land would do.
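To sketch the conditional-counting idea (the index, sourcetype, and field names here are made up for illustration — substitute your own):

index=web sourcetype=access
| stats count(eval(status==200)) AS ok_count
        count(eval(status>=500)) AS error_count
        avg(response_time) AS avg_rt
  by uri

One pass over the data produces all three result sets as columns of a single row per uri, which is the shape the join-based version was assembling piecemeal.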
That's not a complaint or any slight; it's where lots of folks start. In that case, you might want to start here: https://docs.splunk.com/Documentation/Splunk/latest/SearchReference/SQLtoSplunk But even in the docs, Splunk still suggests join in Splunk to handle the left join from SQL, and that, I think, is a disservice. See Nick Mealy's talk https://conf.splunk.com/watch/conf-online.html?search=mealy (and even better if you can find the 2016-2018 version of that talk, which I think is even better and more detailed).
What are the inbound and outbound rules that need to be set for the EC2 (with the forwarder) and the splunk server/indexer (to receive data from forwarder)?
Not sure if you have seen it, but I posted an update; unsure how to make it more visible. Anyway, if you are using Puppet, just ensure that the splunk user is created prior to installation. Then it should work fine. But yes, it would be nice if the module was updated as well.
Hi, I need some help. I have this format type, but it seems the word 'up' is not matching for whatever reason. There are no spaces or anything in the field value. The field value is extracted using 'rex'. I have this working in other fields, but this one has got me stuck. Any help will be appreciated.

<format type="color" field="state">
  <colorPalette type="expression">if (value == "up","#Green", "#Yellow")</colorPalette>
</format>
Can you provide an example values.yaml with what you have in mind? Are you saying it should be the code below? If so, this yaml seems to output gateway logs, but they are not getting picked up and sent through to Splunk.

clusterName: CHANGEME
splunkObservability:
  realm: CHANGEME
  accessToken: CHANGEME
gateway:
  enabled: true
  replicaCount: 1
  resources:
    limits:
      cpu: 2
      memory: 4Gi
agent:
  enabled: false
clusterReceiver:
  enabled: false
logsCollection:
  containers:
    excludeAgentLogs: false
Hi @FPERVIL, see this document, which answers your requirement: https://docs.splunk.com/Documentation/Splunk/9.2.0/Forwarding/Routeandfilterdatad#Filter_and_route_event_data_to_target_groups Ciao. Giuseppe
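The linked doc's approach boils down to defining target groups in outputs.conf and steering events to them via props/transforms. A minimal sketch (the group names, servers, and sourcetype below are placeholders, not your actual configuration):

```ini
# outputs.conf -- two hypothetical target groups
[tcpout:group_app]
server = app-indexer.example.com:9997

[tcpout:group_os]
server = os-indexer.example.com:9997

# props.conf -- attach a routing transform to a placeholder sourcetype
[my_sourcetype]
TRANSFORMS-routing = route_to_app

# transforms.conf -- send every matching event to the app group
[route_to_app]
REGEX = .
DEST_KEY = _TCP_ROUTING
FORMAT = group_app
```

Events whose transform sets _TCP_ROUTING go only to the named group; everything else follows the default tcpout settings.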
Hi all, I'm ingesting data using HEC in a distributed infrastructure, using a load balancer to distribute traffic from many senders between our heavy forwarders. Now I need to identify the sender of each event. Is there metadata that identifies the hostname and IP address of each sender? I didn't find it in the HEC documentation. Thank you for your support. Ciao. Giuseppe
I have a few servers with universal forwarders that need to be updated so I can send the application data to one Splunk environment and the OS logs to another environment. I believe this is possible, but I just want to know how to get this done. I'm assuming the inputs.conf and outputs.conf need to be updated. Just looking for guidance.