All Posts

Thanks, @PickleRick. I understand the linear-search nature of lookups. I was hoping there were perhaps some new commands on the horizon in the most recent (or future) versions of Splunk Enterprise, or that someone might have experience with MLTK, or another Splunk product, to handle this use case. Thanks and God bless, Genesius
Hi @CHAUHAN812, In that case, in the indexes.conf file, you just need to adjust the frozenTimePeriodInSecs parameter in the two index stanzas:

[index01]
frozenTimePeriodInSecs = 34187400

[index02]
frozenTimePeriodInSecs = 34187400

Restart Splunk after that.
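As a quick check after the restart (a minimal sketch; index01 and index02 are just the placeholder stanza names used above), btool shows the value Splunk actually resolves for each index:

# Verify the effective setting on the indexer
$SPLUNK_HOME/bin/splunk btool indexes list index01 --debug | grep frozenTimePeriodInSecs
$SPLUNK_HOME/bin/splunk btool indexes list index02 --debug | grep frozenTimePeriodInSecs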
Hello, thank you very much for all of the details; that did the trick, and I can finally move on to the next task. Thanks again, Tom
Yes, I have an individual indexer which is installed on a Linux machine, and I need to increase frozenTimePeriodInSecs for only 2 of the indexes. So, to increase the frozen time period from 12 months to 13 months, I just need to update the frozenTimePeriodInSecs values in the indexes.conf file on the indexer server, right?
If you have individual indexers, then that is the correct place. After the change, do a reload for it. If you have an indexer cluster, you must make this change on the CM: edit the correct indexes.conf file somewhere under master-apps or manager-apps, then apply the cluster bundle and wait until it has been distributed to the search peers.
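For the clustered case, the usual CLI sequence on the cluster manager looks roughly like this (a sketch; run it after editing the bundle under manager-apps):

# Validate the edited bundle, push it, then watch the distribution status
$SPLUNK_HOME/bin/splunk validate cluster-bundle --check-restart
$SPLUNK_HOME/bin/splunk apply cluster-bundle
$SPLUNK_HOME/bin/splunk show cluster-bundle-status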
I want to increase the frozen time period of one of my indexes from 12 months to 13 months. I have increased the Max Size of Entire Index from the Splunk indexer's Settings, but I know this is not enough, as the index's frozen time period is still set to 12 months. So where should I update this value? Do I need to update the indexes.conf file for the required indexes on the indexer server itself, which is installed on a Linux machine? What do I need to take care of while updating this frozen time period?
Hi, I used my personal email. It's been more than a week and I haven't received the email yet; I have checked spam as well, but nothing is in there.
  @samy335  How did you register for the Splunk Cloud free trial account? Did you use your business email or personal email? If you used a business email, check with your IT team to see if they are blocking any external emails.
I am assuming that, due to the way the query is evaluated, it doesn't just take the latest value; because it uses max, it returns the largest value stored for that field over however long you retain data in analytics. You could try changing max to min, which should always return the lowest value, but it would be better to append the following clause to the query so that only the last 5 minutes of data is used to get the value: SINCE 5 minutes
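Purely for illustration (the event type and field name below are hypothetical; adapt them to your own analytics query), the suggestion amounts to something like:

SELECT max(daysToExpiration) FROM certificateChecks SINCE 5 minutes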
@refahiati  Are you experiencing high resource usage on the Splunk Heavy Forwarder? If so, I suggest configuring syslog-ng or rsyslog on the Heavy Forwarder to collect logs and store them in a separate directory. You can then monitor that directory to forward the events to Splunk indexers. Additionally, review the queues in the metrics.log file for any potential issues. 
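If you go the rsyslog route, a minimal sketch could look like the following (the port, directory, index, and sourcetype are placeholders; adjust to your environment):

# /etc/rsyslog.d/10-remote.conf on the heavy forwarder: receive syslog and write one file per sending host
module(load="imudp")
template(name="PerHostFile" type="string" string="/var/log/remote/%HOSTNAME%/syslog.log")
ruleset(name="remote") {
    action(type="omfile" dynaFile="PerHostFile")
}
input(type="imudp" port="514" ruleset="remote")

# inputs.conf on the same heavy forwarder: monitor the spool directory and forward to the indexers
[monitor:///var/log/remote]
index = network
sourcetype = syslog
host_segment = 4
disabled = 0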
We are feeding data every 5 minutes, and if you look at the data it is 229 all the time in the metric graph, whereas when we execute the query we get a different value, 219.
16/12/2024 23:55:00,46,0,229,229,5
17/12/2024 23:55:00,46,0,229,229,5
18/12/2024 23:55:00,46,0,229,229,5
Hi Mario, thanks for the response. When I added the query as a metric, I am getting an old value. For example, the expiration is 219 days, but it shows 229 days, which was the value on the day I created the metric. Why is it not showing the current value? The value is not changing.
It is better to contact a partner first: https://www.splunk.com/en_us/partners.html (use the Find a Partner button), or contact Splunk directly: https://www.splunk.com/en_us/about-splunk/contact-us.html
What exactly are you trying to achieve, and how are you doing it? What you've shown is an event from the Windows Security event log, which is apparently an audit entry informing you that a process has been spawned on a machine. As far as I remember, it doesn't capture the command's output.
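For reference, a search along these lines lists such process-creation audit events (a sketch; the index name is a placeholder and the field names assume the standard Splunk Add-on for Microsoft Windows extractions):

index=wineventlog source="WinEventLog:Security" EventCode=4688
| table _time host New_Process_Name Process_Command_Line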
Thanks a lot for your help and answers. By the way, how do I contact a local Splunk partner, or Splunk directly?
Can you describe your question a little bit more for us? I'm not sure what you are asking. Splunk can ingest more than a PB per day; it just depends on how the environment is built and what its capacity is. Data is stored in buckets on local disks or in object storage, e.g. an S3 bucket in AWS, or equivalent services on GCP, Azure, or on-prem. All of this is described on docs.splunk.com. If needed, you can contact your local Splunk Partner or Splunk directly, and they can present it to you. There are lots of videos, .conf presentations, etc. that tell more about Splunk.
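As a concrete illustration of the remote-storage option (a minimal sketch only; the bucket, paths, and index name are placeholders), a SmartStore-style indexes.conf can point an index at S3 like this:

# indexes.conf - SmartStore volume backed by S3 (placeholder values)
[volume:remote_store]
storageType = remote
path = s3://example-smartstore-bucket/indexes

[example_index]
homePath   = $SPLUNK_DB/example_index/db
coldPath   = $SPLUNK_DB/example_index/colddb
thawedPath = $SPLUNK_DB/example_index/thaweddb
remotePath = volume:remote_store/$_index_name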
How high can the incoming data volume for monitoring be? Where is the data stored?
Can you share the JavaScript you generated and added to the app? Did you enable SPA monitoring in the configuration within AppDynamics, etc.?
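For comparison, the injected browser-agent snippet typically looks roughly like this (a sketch; the app key and the CDN/beacon URLs are placeholders and depend on your controller region):

<script charset="UTF-8">
  window["adrum-start-time"] = new Date().getTime();
  (function (config) {
    config.appKey = "AD-AAB-XXX-XXX";                            // placeholder EUM app key
    config.adrumExtUrlHttp = "http://cdn.appdynamics.com";
    config.adrumExtUrlHttps = "https://cdn.appdynamics.com";
    config.beaconUrlHttp = "http://pdx-col.eum-appdynamics.com";  // placeholder beacon URL
    config.beaconUrlHttps = "https://pdx-col.eum-appdynamics.com";
    config.spa = { "spa2": true };                               // enables SPA2 monitoring
  })(window["adrum-config"] || (window["adrum-config"] = {}));
</script>
<script src="//cdn.appdynamics.com/adrum/adrum-latest.js"></script>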
"There's an app for that" Thank you!
Hello everyone, I'm currently exploring Splunk Observability Cloud for sending log data. From the portal, it appears there are only two ways to send logs: via Splunk Enterprise or Splunk Cloud. I'm curious whether there is an alternative method to send logs using the Splunk HTTP Event Collector (HEC) exporter. According to the documentation here, the Splunk HEC exporter allows the OpenTelemetry Collector to send traces, metrics, and logs to Splunk HEC endpoints. Is it also possible to use fluentforward, otlphttp, signalfx, or anything else for this purpose? Additionally, I have an EC2 instance running the splunk-otel-collector service, which successfully sends infrastructure metrics to Splunk Observability Cloud. Can this service also facilitate sending logs to Splunk Observability Cloud? According to the agent_config.yaml file provided by the splunk-otel-collector service, there are several pre-configured service settings related to logs, including logs/signalfx, logs/entities, and logs. These configurations use different exporters such as splunk_hec, splunk_hec/profiling, otlphttp/entities, and signalfx. Could you explain what each of these configurations is intended to do?

service:
  extensions: [health_check, http_forwarder, zpages, smartagent]
  pipelines:
    traces:
      receivers: [jaeger, otlp, zipkin]
      processors:
        - memory_limiter
        - batch
        - resourcedetection
        #- resource/add_environment
      exporters: [otlphttp, signalfx]
      # Use instead when sending to gateway
      #exporters: [otlp/gateway, signalfx]
    metrics:
      receivers: [hostmetrics, signalfx, statsd]
      processors: [memory_limiter, batch, resourcedetection]
      exporters: [signalfx, statsd]
      # Use instead when sending to gateway
      #exporters: [otlp/gateway]
    metrics/internal:
      receivers: [prometheus/internal]
      processors: [memory_limiter, batch, resourcedetection, resource/add_mode]
      # When sending to gateway, at least one metrics pipeline needs
      # to use signalfx exporter so host metadata gets emitted
      exporters: [signalfx]
    logs/signalfx:
      receivers: [signalfx, smartagent/processlist]
      processors: [memory_limiter, batch, resourcedetection]
      exporters: [signalfx]
    logs/entities:
      # Receivers are dynamically added if discovery mode is enabled
      receivers: [nop]
      processors: [memory_limiter, batch, resourcedetection]
      exporters: [otlphttp/entities]
    logs:
      receivers: [fluentforward, otlp]
      processors:
        - memory_limiter
        - batch
        - resourcedetection
        #- resource/add_environment
      exporters: [splunk_hec, splunk_hec/profiling]
      # Use instead when sending to gateway
      #exporters: [otlp/gateway]

Thanks!
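For reference, a minimal sketch of pointing the logs pipeline at a custom splunk_hec exporter in the collector config (the token variable, endpoint URL, index, source, and sourcetype below are placeholders; whether a given endpoint accepts the data depends on your backend):

exporters:
  splunk_hec/custom:
    token: "${SPLUNK_HEC_TOKEN}"                                       # placeholder token
    endpoint: "https://example-hec-endpoint:8088/services/collector"   # placeholder endpoint
    source: "otel"
    sourcetype: "otel"
    index: "main"

service:
  pipelines:
    logs:
      receivers: [fluentforward, otlp]
      processors: [memory_limiter, batch, resourcedetection]
      exporters: [splunk_hec/custom]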