All Posts



I am having trouble creating a proper drilldown action between two custom ITSI entity dashboards. They both work fine when called by clicking the entity name in the Service Analyzer. The two entity dashboards show data from two custom entity types that are related to each other, and I want to create navigation between the two dashboards. I created a normal drilldown action to call the related dashboard. This works to some extent, but the token is not handled correctly. For example, I defined the token parameter host = $click.value2$, and in the target dashboard I see |search host=$click.value2$ instead of the real value that should have been handed over in the token. When I use the dashboards outside of ITSI, the drilldown action works fine. It looks to me as if ITSI uses some scripts, so the handover is not made directly to the other entity dashboard but somehow goes through the entity (_key) and the defined entity type. It would be great if somebody could shed some light on that!
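For comparison, a plain Simple XML drilldown that hands the clicked value to another dashboard usually looks like the sketch below; the app, dashboard name, and form.host parameter are illustrative placeholders, not taken from the post:

```xml
<drilldown>
  <!-- pass the clicked cell value into the target dashboard's host input -->
  <link target="_blank">/app/search/target_entity_dashboard?form.host=$click.value2$</link>
</drilldown>
```

If a sketch like this works standalone but not inside ITSI, that would support the theory that ITSI's entity navigation rewrites the link before the token is substituted.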
Thanks. I used coalesce and transaction to get the data.
Hello, I have a Splunk distributed deployment (approx. 20 servers + approx. 100 UFs). On the servers, I configured SSL encryption of management traffic and TLS certificate host name validation in server.conf:

[sslConfig]
enableSplunkdSSL = true
serverCert = <path_to_the_server_certificate>
sslVerifyServerCert = true
sslVerifyServerName = true
sslRootCAPath = <path_to_the_CA_certificate>

Everything is working well - the servers communicate with each other. But my question is: I use a deployment server for pushing config to the UFs, and I am a little surprised that management traffic between the UFs and the deployment server is still flowing (I see all UFs phoning home, and I can push config) even though I did not configure encryption or hostname validation on any UF. Is that OK? Does it mean that hostname validation for management traffic cannot be configured on a UF? Or is there a way to configure hostname validation on UFs? I only found how to configure hostname validation on a UF in outputs.conf for sending collected data to the indexers, but nothing about management traffic. Thank you for any hint. Best regards, Lukas Mecir
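For what it's worth, the same [sslConfig] stanza lives in server.conf on a UF as well, so a UF-side attempt might look like the fragment below. Treat this as an assumption to verify against the docs for your UF version, not a confirmed recipe - whether these settings are honored for deployment-client traffic is exactly the open question:

```ini
# server.conf on the UF (sketch; verify support in your UF version)
[sslConfig]
sslRootCAPath = <path_to_the_CA_certificate>
# verify the deployment server's certificate and host name
# on outbound management connections (phone home)
sslVerifyServerCert = true
sslVerifyServerName = true
```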
Hi @Ivan.Tamayo, I was doing some digging and found this Documentation I wanted to share: https://docs.appdynamics.com/appd/22.x/latest/en/application-monitoring/install-app-server-agents/net-agent/install-the-net-agent-for-windows/unattended-installation-for-net If I find anything else, I'll be sure to share it with you. 
I have a query that returns 2 values:

. . . | stats max(gb) as GB by metric_name

metric_name      GB
storage_current  99
storage_limit    100

Now I want to be able to reference the current and limit values in a radial gauge. How can I convert that table into key-value pairs so I can say that the value of the radial is "storage_current"? Something like |eval {metric_name}={GB}
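One way to do this (an untested sketch built on the field names above) is to turn each metric_name row into its own field and then feed the gauge:

```
. . . | stats max(gb) as GB by metric_name
| eval {metric_name}=GB
| stats max(storage_current) as storage_current max(storage_limit) as storage_limit
| gauge storage_current 0 storage_limit
```

The eval {metric_name}=GB step creates a field named after each row's metric_name value; the second stats collapses the two rows into one, and gauge sets the radial's range from 0 up to the limit.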
Last I checked, it is still the case that all apps have access to the secret store and you have to use access controls to limit it. This came up when working with 1Password on their app. I will poke around and see if any hardening or rework is in the cards this year. The dev docs seem to have been updated: https://dev.splunk.com/enterprise/docs/developapps/manageknowledge/secretstorage/secretstoragerbac/#Configure-role-based-access-to-secret-storage If you still think more is needed, hit the feedback link on that doc - the dev team has heard this before. Glad they have added the docs, but if you still need more, that is a good place to start.
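The RBAC approach in that doc comes down to granting or withholding the relevant capabilities per role. As a rough sketch (the role name is made up, and the exact capability set should be checked against the linked doc):

```ini
# authorize.conf (sketch) - only members of this role may list stored credentials
[role_secret_reader]
list_storage_passwords = enabled
```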
Hi @Eduardo.Rosa, Since the Community has not jumped in, I wanted to share this AppD Documentation that goes into great detail about BTs. https://docs.appdynamics.com/appd/22.x/latest/en/application-monitoring/business-transactions You may need to adjust the documentation for your Controller version number. This can be done at the top right of the page.
Hi @mjohnson_rq - I’m a Community Moderator in the Splunk Community.  This question was posted 4 years ago, so it might not get the attention you need for your question to be answered. We recommend that you post a new question so that your issue can get the  visibility it deserves. To increase your chances of getting help from the community, follow these guidelines in the Splunk Answers User Manual when creating your post. Thank you! 
Maybe check that you have defined the `$SPLUNK_HOME` environment variable correctly, or just point it to the absolute path? It doesn't seem to like your "bucket path".
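As a quick sanity check - the install path below is only an example, not from the post - you can set the variable explicitly and confirm how a bucket path would resolve under it:

```shell
# Example path; substitute your real Splunk install directory
export SPLUNK_HOME=/opt/splunk
# A bucket path should then resolve under the install tree
echo "$SPLUNK_HOME/var/lib/splunk"
```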
This would heavily depend on what your events look like, as you would simply extract fields that represent the "remote" and "local" values. Got an example event?
So the issue you are having is the index the data lands in, correct? It is likely that the SC4S HEC client is sending a default index. HEC client settings in the payload override the settings you put on the token - think of the token settings as "used only if not set by the HEC client". https://docs.splunk.com/Documentation/SplunkCloud/latest/Data/FormateventsforHTTPEventCollector#Event_metadata I would check the SC4S docs on setting indexes: https://splunk.github.io/splunk-connect-for-syslog/main/configuration/#log-path-overrides-of-index-or-metadata
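To illustrate the override behaviour: a HEC event payload can carry its own metadata, and when it does, those fields win over the token defaults. The values below are made-up examples, not from the post:

```json
{
  "time": 1706012412,
  "index": "netfw",
  "sourcetype": "cisco:asa",
  "event": "raw syslog message here"
}
```

Because SC4S sets "index" in the payload like this, changing the token's default index has no effect; the fix has to happen in the SC4S index/metadata configuration linked above.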
Hi Fellow Splunkers, a more general question: what are the best practices for upgrades around security patching and deployment in a distributed production environment? We have SHs and indexers clustered. I can clarify if this is unclear. I appreciate any advice and shared experiences.
Long shot given how old this post is, but I'm in a similar situation with 2016 servers. Did you figure out if the hub transport TA worked for newer versions?
"Should I do it in chunks?" - Yes, use the date ranges to reduce your restore window and restore in multiple chunks. No, it will not "reindex it" - https://docs.splunk.com/Documentation/SplunkCloud/9.1.2308/Admin/DataArchiver#Restore_archived_data_to_Splunk_Cloud_Platform

You can use the "check size" button to make sure your span is under your entitlement. Remember that Dynamic Data Active Archive (DDAA) restore capacity is 10% of your Dynamic Data Active Searchable (DDAS), NOT your daily ingest entitlement. Check "cloud monitoring console > license usage > storage summary".

If the span is too wide, you hit too many buckets; shorten the span and you can restore. Reduce your chunk size to under your limit, restore that data, search it, and then in the table below you can clear it and restore your next chunk. Data quality matters here: if your timestamps are all over the place, it can be surprising how many buckets you have to restore to bring back any given date. It will not take multiple days to restore this - if you just shrink your window, you can do it in steps: restore > search (tip: use the collect command to help move what you want to another index) > clear restore > repeat.
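The restore > search > clear loop can be sketched in SPL like this; the index names and time window are placeholders, and the destination index must already exist:

```
index=my_restored_index earliest=-30d@d latest=-23d@d
| collect index=my_keep_index
```

collect writes the matching results into the destination index, so they stay searchable after you clear the restored chunk and move on to the next date range.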
The LDAP app is compatible with Splunk Cloud so you should be able to install it.  You will need the app on the Cloud SHs for your existing searches to work.  Keep in mind, however, that it requires access to your Active Directory system, which means your AD team must be willing to allow access from your Splunk Cloud search heads (the Internet).  Many AD teams won't allow that.
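For reference, searches that use this app typically start with its ldapsearch generating command, along these lines (the domain, filter, and attribute list are examples only):

```
| ldapsearch domain=default search="(objectClass=user)" attrs="sAMAccountName,mail"
| table sAMAccountName mail
```

Each such search runs an LDAP query from the search head itself, which is why the Cloud SHs need network access to your AD.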
@isoutamo: Yes, it is not getting sent like I want. I want all the email recipients to be in the "To" field, with my email ID in Cc. There are around 100 email addresses returned from the search. If the emails are sent separately, my inbox will be bombarded with 100 emails, which also makes it difficult to follow up. So I want to send one email. This is my requirement. Regards, PNV
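One common pattern for this (a sketch; the field names are assumptions, not from the post) is to join the addresses into a single comma-separated field and reference it from the alert configuration with a $result.…$ token:

```
... | stats values(email_address) as rcpt
| eval rcpt=mvjoin(rcpt, ",")
```

Then set the alert's "To" field to $result.rcpt$ and put your own address in Cc, so a single email goes to all recipients.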
Hi @kate, have you tried using the Monitoring Console? It already has all the dashboards you need. For more information, see https://docs.splunk.com/Documentation/Splunk/9.1.2/DMC/DMCoverview Ciao. Giuseppe
Hi @sankardevarajan, could you describe your question in more detail? It isn't clear: are you speaking of Splunk Enterprise or Splunk Enterprise Security? If you want to know which alerts were triggered, you could use something like this:

index=_audit action=alert_fired ss_app=*
| eval ttl=expiration-now()
| search ttl>0
| convert ctime(trigger_time)
| table trigger_time ss_name severity
| rename trigger_time AS "Alert Time" ss_name AS "Alert Name" severity AS "Severity"

but it depends on the retention of your _audit index. Ciao. Giuseppe
Hello Everyone, we are trying to restore DDSS data stored in an S3 bucket to our Splunk Enterprise. We followed the steps here: https://docs.splunk.com/Documentation/SplunkCloud/9.0.2303/Admin/DataSelfStorage#Restore_indexed_data_from_an_AWS_S3_bucket But we are facing an error like the one below. Any thoughts on what the root cause might be? We did upload the data to the instructed directory, but we keep facing this error when rebuilding.
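For context, the rebuild step in that procedure is normally the splunk rebuild CLI run on the indexer against each thawed bucket; the bucket path and index name below are examples, not taken from the post:

```
# run on the Splunk Enterprise indexer; substitute your real thawed bucket path
$SPLUNK_HOME/bin/splunk rebuild $SPLUNK_HOME/var/lib/splunk/myindex/thaweddb/db_1706012504_1705475848_10 myindex
```

If rebuild fails, it is worth checking that the directory copied from S3 contains the raw journal (e.g. rawdata/journal.gz) and that the bucket directory name follows the db_<newest>_<oldest>_<id> convention.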
Can you clarify what "webmethod" means to you? Splunk can ingest logs from S3 in a variety of ways. We have add-ons like the AWS TA that can poll the data from S3 using SQS: https://docs.splunk.com/Documentation/AddOns/released/AWS/SQS-basedS3 We also have options like Splunk Cloud Data Manager or AWS Firehose, as well as 3rd-party collectors that can pick up S3 objects and then send them to Splunk over HTTP Event Collector: https://docs.splunk.com/Documentation/DM/1.8.2/User/AWSPrerequisites#AWS_S3_data_source_prerequisites Hopefully this helps get you pointed in the right direction!