All Posts

Hey! Yeah, the OTel file exporter can be used: https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/exporter/fileexporter

You could also configure the collector to filter the data you want and send it to a file and to HEC simultaneously: https://github.com/signalfx/splunk-otel-collector/tree/main/examples/otel-logs-routing

The Splunk OTel distribution ships these components: https://github.com/signalfx/splunk-otel-collector/blob/main/docs/components.md
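To make that concrete, here is a minimal collector config sketch along those lines; the paths, token, and endpoint below are placeholder assumptions, not defaults:

# Sketch: read logs from a file, then export to both a local file and
# Splunk HEC from the same pipeline.
receivers:
  filelog:
    include: [/var/log/myapp/*.log]

processors:
  batch:

exporters:
  file:
    path: /var/log/otel/exported-logs.json
  splunk_hec:
    token: "YOUR-HEC-TOKEN"
    endpoint: "https://splunk.example.com:8088/services/collector"
    index: "main"

service:
  pipelines:
    logs:
      receivers: [filelog]
      processors: [batch]
      exporters: [file, splunk_hec]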
Hi @scelikok, I have done everything you described in this thread, but it is still not sending logs. Could it be an issue with time or with the cluster, or is there any other configuration we have to do in the Helm chart? Your help will be appreciated. Thanks!
As far as I know, this isn't a feature that's available out of the box. However, you can build dashboards for it using the notable index. https://lantern.splunk.com/?title=Security%2FUCE%2FPrioritized_Actions%2FCyber_frameworks%2FAssessing_and_expanding_MITRE_ATT%26CK_coverage_in_Splunk_Enterprise_Security outlines some searches that would help you gather that information.
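As a rough starting point, a search like this over the notable index can count notables per ATT&CK technique; the annotations.mitre_attack field name is an assumption and may differ by ES version:

index=notable
| rename annotations.mitre_attack as technique
| mvexpand technique
| stats count by technique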
My organization has a handful of heavy forwarders that were configured to listen to syslog sources through udp://514. This was set up by a 3rd party, and now we are trying to understand the configuration. Searching the heavy forwarders' /etc/* recursively for "514", "tcp", "udp", "syslog", or "SC4S" returns no relevant results.

We know syslog is working, because we have multiple sources that are pointed at the heavy forwarders using udp over port 514 and their data is being indexed. Curiously, when a new syslog source is pointed at the HFs, a new index with a random name pops up in our LastChanceIndex.

We have no idea how any of this is configured - the index selection, or the syslog listener. We usually create an index that matches the name given, since we've never been able to find the config to set it manually. Any suggestions on how syslog might be set up, or what else I could try searching for?
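One thing worth trying, if you haven't already: btool prints the merged Splunk config along with the file each setting comes from, so a native UDP input defined in any app would show up. A sketch, assuming a standard $SPLUNK_HOME (note that SC4S itself typically runs as a container outside $SPLUNK_HOME, so also check for a syslog-ng or sc4s service on the hosts):

$SPLUNK_HOME/bin/splunk btool inputs list --debug | grep -i 514

# A native Splunk UDP listener would look like this in some inputs.conf:
[udp://514]
index = <target_index>
sourcetype = syslog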
Hi, I'm trying to run the sample code below to send a message to Splunk, but I'm getting the error "Host not found". Am I doing this right? I'm able to ping the Splunk server (171.134.154.114) from my dev Linux server. I'm also able to use a curl command successfully and see my message in the Splunk dashboard.

Sample: doc/html/boost_asio/example/http/client/sync_client.cpp - 1.47.0

./sync_client 171.134.154.114 /services/collector
arg[1]:171.134.154.114
Exception: resolve: Host not found (authoritative) [asio.netdb:1]
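For comparison, the documented curl form for HEC looks roughly like this (the token is a placeholder); note that HEC defaults to port 8088 over HTTPS, while the Boost 1.47 sync_client example resolves the plain "http" service (port 80), which may be part of the problem:

curl -k "https://171.134.154.114:8088/services/collector" \
  -H "Authorization: Splunk <your-hec-token>" \
  -d '{"event": "hello from curl"}'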
I am having trouble creating a proper drilldown action between two custom ITSI entity dashboards. They both work fine when called by clicking the entity name in the Service Analyzer. The two entity dashboards show data from two custom entity types with some relation to each other, and I want to create a navigation between the two dashboards.

I created a normal drilldown action to call the related dashboard. This works somehow, but the token is not handled correctly. For example, I defined the token parameter host = $click.value2$, and in the target dashboard I see |search host=$click.value2$ instead of the real value that should have been handed over in the token. When I use the dashboards outside of ITSI, the drilldown action works fine.

It looks to me like ITSI uses some scripts, and the handover is not made directly to the other entity dashboard but somehow goes through the entity (_key) and the defined entity type. Great if somebody could shed some insights on that!
Thanks. I used coalesce and transaction to get the data.
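For anyone landing here later, a minimal sketch of that pattern; the field names here are hypothetical:

... | eval id=coalesce(session_id, request_id)
| transaction id maxspan=5m
| table id, duration, eventcount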
Hello, I have a Splunk distributed deployment (ca. 20 servers + ca. 100 UFs). On the servers, I configured SSL encryption of management traffic and TLS certificate host name validation in server.conf:

[sslConfig]
enableSplunkdSSL = true
serverCert = <path_to_the_server_certificate>
sslVerifyServerCert = true
sslVerifyServerName = true
sslRootCAPath = <path_to_the_CA_certificate>

Everything is working well - the servers communicate with each other. But my question is: I use the deployment server to push config to the UFs, and I am a little surprised that management traffic between the UFs and the deployment server is still flowing (I see all UFs phoning home, and I can push config) even though I did not configure encryption or hostname validation on any UF. Is this OK? Does it mean that hostname validation for management traffic cannot be configured on a UF? Or is there a way to configure hostname validation on UFs? I have only found how to configure hostname validation on a UF in outputs.conf for sending collected data to the indexers, but nothing about management traffic. Thank you for any hint. Best regards, Lukas Mecir
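Not a confirmed answer, but recent UF versions do expose TLS validation settings for the deployment-client connection in deploymentclient.conf. A sketch, worth verifying against deploymentclient.conf.spec for your UF version; the hostname and paths are placeholders:

# $SPLUNK_HOME/etc/system/local/deploymentclient.conf on the UF
[target-broker:deploymentServer]
targetUri = ds.example.com:8089
sslVerifyServerCert = true
sslCommonNameToCheck = ds.example.com
caCertFile = <path_to_the_CA_certificate>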
Hi @Ivan.Tamayo, I was doing some digging and found this documentation I wanted to share: https://docs.appdynamics.com/appd/22.x/latest/en/application-monitoring/install-app-server-agents/net-agent/install-the-net-agent-for-windows/unattended-installation-for-net If I find anything else, I'll be sure to share it with you.
I have a query that returns 2 values:

. . . | stats max(gb) as GB by metric_name

metric_name        GB
storage_current    99
storage_limit      100

Now I want to be able to reference the current and limit values in a radial gauge. How can I convert that table into key-value pairs so I can say that the value of the radial is "storage_current"? Something like:

| eval {metric_name}={GB}
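You're close with the {metric_name} idea; one possible approach (a sketch against your sample rows) is to create dynamically named fields and then collapse the two rows into one:

... | stats max(gb) as GB by metric_name
| eval {metric_name}=GB
| stats max(storage_current) as storage_current, max(storage_limit) as storage_limit

The radial gauge can then use storage_current as its value and storage_limit for the range.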
Last I checked, it is still the case that all apps have access to the secret store, and you have to use controls to limit access. This came up when working with 1Password on their app. I will poke around and see if any hardening or rework is in the cards this year.

The dev docs seem to have been updated: https://dev.splunk.com/enterprise/docs/developapps/manageknowledge/secretstorage/secretstoragerbac/#Configure-role-based-access-to-secret-storage

If you still think more is needed, hit the feedback link on that doc; the dev team has heard this before. Glad they have added the docs, but if you still need more, that's a good place to start.
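My understanding of the control described there (worth verifying against the linked doc) is that it comes down to ACLing the passwords stanzas in your app's metadata, along these lines:

# metadata/default.meta in your app
[passwords]
access = read : [ admin ], write : [ admin ]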
Hi @Eduardo.Rosa, Since the Community has not jumped in, I wanted to share this AppD documentation that goes into great detail about BTs: https://docs.appdynamics.com/appd/22.x/latest/en/application-monitoring/business-transactions You may need to adjust the documentation version to match your Controller version; this can be done at the top right of the page.
Hi @mjohnson_rq - I’m a Community Moderator in the Splunk Community. This question was posted 4 years ago, so it might not get the attention you need for your question to be answered. We recommend that you post a new question so that your issue can get the visibility it deserves. To increase your chances of getting help from the community, follow these guidelines in the Splunk Answers User Manual when creating your post. Thank you!
Maybe try checking that you have defined the `$SPLUNK_HOME` environment variable correctly, or just point it to the absolute path? It doesn't seem to like your "bucket path".
This would heavily depend on what your events look like, since you would simply extract the fields that represent "remote" and "local". Got an example event?
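For example, if the raw events carried the addresses in key=value form (a purely hypothetical layout), the extraction could be as simple as:

... | rex field=_raw "local=(?<local_addr>\S+)\s+remote=(?<remote_addr>\S+)"
| table local_addr, remote_addr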
So the issue you are having is the index it lands in, correct? It is likely that the SC4S HEC client is sending a default index. HEC client settings in the payload override the settings you put on the token; think of the token settings as "used if not set by the HEC client". https://docs.splunk.com/Documentation/SplunkCloud/latest/Data/FormateventsforHTTPEventCollector#Event_metadata

I would check the SC4S docs on setting indexes: https://splunk.github.io/splunk-connect-for-syslog/main/configuration/#log-path-overrides-of-index-or-metadata
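If it helps, the usual SC4S way to pin the index is a row in splunk_metadata.csv; a sketch, where the key depends on which vendor_product your source is classified as:

# /opt/sc4s/local/context/splunk_metadata.csv
cisco_asa,index,netfw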
Hi Fellow Splunkers, A more general question: what are the best practices for upgrades around security patching and deployment in a distributed production environment? We have SHs and indexers clustered. I can clarify if this is unclear. Appreciate any advice and shared experiences.
Long shot given how old this post is, but I'm in a similar situation with 2016 servers. Did you figure out if the hub transport TA worked for newer versions?
"should I do in chunk"? - Yes, use the date ranges to reduce your date range and restore in multiple chunks.  No it will not "reindex it" - https://docs.splunk.com/Documentation/SplunkCloud/9.1.23... See more...
"should I do in chunk"? - Yes, use the date ranges to reduce your date range and restore in multiple chunks.  No it will not "reindex it" - https://docs.splunk.com/Documentation/SplunkCloud/9.1.2308/Admin/DataArchiver#Restore_archived_data_to_Splunk_Cloud_Platform You can use the "check size" button to make sure your span is under your entitlement. Remember Dynamic Data Active Archive (DDAA)  it is 10% of your Dynamic Data Active Searchable (DDAS), NOT your daily ingest entitlement. Check "cloud monitoring console> license usage > storage summary" Span too wide! too many buckets!: shorten the span, now i can restore!:   reduce your chunk size to under your limit, restore that data, search it, then in the table below you can clear it and restore you next chunk.  Data quality matters here, as if your timestamps are all over the place it can be suprizing how many buckets you have to restore to bring back any give date.  it will not take multiple days to restore this. if you just shrink your window you can do it in steps.  restore > search (tip use collect command to help move what you want to another index) > clear restore > repeat
The LDAP app is compatible with Splunk Cloud so you should be able to install it.  You will need the app on the Cloud SHs for your existing searches to work.  Keep in mind, however, that it requires access to your Active Directory system, which means your AD team must be willing to allow access from your Splunk Cloud search heads (the Internet).  Many AD teams won't allow that.
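Once the app is installed and a domain is configured, a typical SA-ldapsearch query looks something like this; the attributes are just examples:

| ldapsearch domain=default search="(&(objectClass=user))" attrs="sAMAccountName,mail"
| table sAMAccountName, mail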