All Posts

@isoutamo: Yes, it is not getting sent the way I want. I want all the email recipients to be in the "To" field, with my email ID in Cc. There are around 100 email addresses returned by the search. If the emails are sent separately, my inbox will be bombarded with 100 emails, which also makes it difficult for me to follow up. So I want to send one email. This is my requirement. Regards, PNV
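One hedged way to meet a requirement like this (a sketch, not the poster's confirmed setup; the field name email_address is an assumption about what the search returns): collapse all addresses into one comma-separated field, then reference it from the alert's email action with a $result.fieldname$ token.

... base search returning one email_address per result ...
| stats values(email_address) AS all_recipients
| eval all_recipients=mvjoin(all_recipients, ",")

In the alert's email action, set To to $result.all_recipients$ and Cc to your own address, so each trigger sends a single email.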
Hi @kate, did you try using the Monitoring Console? You already have all the dashboards you need there. For more information, see https://docs.splunk.com/Documentation/Splunk/9.1.2/DMC/DMCoverview Ciao. Giuseppe
Hi @sankardevarajan, could you describe your question in more detail? It isn't very clear: are you speaking of Splunk Enterprise or Splunk Enterprise Security? If you want to see triggered alerts, you could use something like this:

index=_audit action=alert_fired ss_app=*
| eval ttl=expiration-now()
| search ttl>0
| convert ctime(trigger_time)
| table trigger_time ss_name severity
| rename trigger_time AS "Alert Time" ss_name AS "Alert Name" severity AS "Severity"

but it depends on the retention of your _audit index. Ciao. Giuseppe
Hello Everyone, we are trying to restore DDSS data stored in an S3 bucket to our Splunk Enterprise instance. We followed the steps at https://docs.splunk.com/Documentation/SplunkCloud/9.0.2303/Admin/DataSelfStorage#Restore_indexed_data_from_an_AWS_S3_bucket but we are facing the error below. We did upload the data to the instructed directory, but we keep hitting this error when rebuilding. Any thoughts on what the root cause might be?
Can you clarify what "webmethod" means to you? Splunk can ingest logs from S3 in a variety of ways. We have add-ons like the AWS TA that can poll the data from S3 using SQS: https://docs.splunk.com/Documentation/AddOns/released/AWS/SQS-basedS3 We also have options like Splunk Cloud Data Manager or AWS Firehose, as well as third-party collectors that can pick up S3 data and send it to Splunk over the HTTP Event Collector: https://docs.splunk.com/Documentation/DM/1.8.2/User/AWSPrerequisites#AWS_S3_data_source_prerequisites Hopefully this helps get you pointed in the right direction!
How can I calculate the CPU usage of the Splunk server as a percentage from the data in the _internal index? The data in the _internal index, where source=/opt/splunk/var/log/splunk/metrics.log, looks like this:

01-25-2024 15:47:42.528 +0000 INFO Metrics - group=pipeline, name=dev-null, processor=nullqueue, cpu_seconds=0.001, executes=4445, cumulative_hits=9717713
01-25-2024 15:47:42.527 +0000 INFO Metrics - group=workload_management, name=workload-statistics, workload_pool=standard_perf, mem_limit_in_bytes=71715885056, cpu_shares=358
01-25-2024 15:47:42.525 +0000 INFO Metrics - group=conf, action=acquire_mutex, count=20, wallclock_ms_total=0, wallclock_ms_max=0, cpu_total=0.000, cpu_max=0.000
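A hedged sketch of one way to approximate this from metrics.log (assuming the default 30-second metrics interval; cpu_seconds covers splunkd's pipeline processors, measured in units of one core, so this is pipeline CPU, not whole-host CPU):

index=_internal source=*metrics.log* group=pipeline host=your_splunk_server
| timechart span=30s sum(cpu_seconds) AS cpu_busy_seconds
| eval cpu_pct=round(cpu_busy_seconds/30*100, 2)

For host-level percentages, the _introspection index (sourcetype=splunk_resource_usage, component=Hostwide) reports fields such as data.cpu_system_pct and data.cpu_user_pct directly, which may be the easier route.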
Cloud monitoring console should provide a great start on analyzing your storage needs: https://docs.splunk.com/Documentation/SplunkCloud/9.1.2308/Admin/MonitoringLicenseUsage#Monitor_the_Storage_Summary_dashboard The key concept to be familiar with is the Splunk bucket lifecycle: buckets are the smallest unit of storage in Splunk, and the lifecycle largely determines how and when your buckets move from active searchable to active archive. I wouldn't overcomplicate it with compression. While Splunk does compress data, your entitlements are based on raw data ingested, so closely analyze the daily ingest in your biggest indexes and poke around with the `dbinspect` command and the monitoring console to ensure your bucket health is good. Data onboarding and data quality are key: make sure bad timestamps don't pollute your buckets with events far in the past or future, because a bucket can only migrate to archive when ALL EVENTS in the bucket meet the time/size criteria. https://docs.splunk.com/Documentation/SplunkCloud/9.1.2308/Admin/MonitoringHealth#Health_indicator_information_and_additional_resources https://docs.splunk.com/Documentation/SplunkCloud/9.1.2308/Admin/MonitoringIndexing#Verify_data_quality Also, going back and reading the Splunk Enterprise docs on SmartStore will provide you with some good background, or work with your account team to go through it and ensure you have a good handle on it.
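For example, a hedged `dbinspect` sketch of a bucket-health check (the index name is a placeholder; state, startEpoch, endEpoch, and sizeOnDiskMB are standard dbinspect output fields):

| dbinspect index=your_index
| eval span_days=round((endEpoch-startEpoch)/86400, 1)
| stats count AS buckets avg(sizeOnDiskMB) AS avg_mb max(span_days) AS widest_bucket_days by state

A bucket whose event timespan covers months is usually the timestamp-quality problem described above.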
1. Is there a way to directly install custom apps / add-ons (that were originally built for Splunk Enterprise) in Splunk Cloud? We were thinking about compatibility issues, and whether the apps would work the same way.

Yes, see "Installing private apps" on Splunk Cloud Platform: https://docs.splunk.com/Documentation/SplunkCloud/9.1.2308/Admin/PrivateApps

2. Is there a way to gauge whether or not the quantity of data that we want to send from external sources would require us to install a Heavy / Universal Forwarder? (We are trying to avoid additional costs by taking Splunk Cloud, so we were wondering if we could do without them.)

As long as your cloud deployment is sized correctly around how much ingest and search you plan to do, you can absolutely use Cloud without the need for HFs or on-premises infra. It's always an option when needed. The main place to get familiar with is the Splunk Cloud Service Description. It lays out the service and any limits or recommendations we have. For example, from https://docs.splunk.com/Documentation/SplunkCloud/9.1.2308/Service/SplunkCloudservice#Experience_designations on modular and scripted inputs:

Victoria: Modular and scripted inputs can run directly on the search tier without the additional overhead of a separate IDM instance. Review the pull-based service limits: up to 500 GB/day for an entitlement of less than 166 SVC or 1 TB; up to 1.5 TB/day for more than 166 SVC or 1 TB.
Classic: Modular and scripted inputs must run on a separate IDM instance or a customer-managed heavy forwarder.

Victoria runs the inputs on the SH tier to allow self-service; Classic runs the "HF"s for you as "IDM"s, but with far less self-service. So it depends on what you value more. The free cloud trial instances won't be what you want for actual testing, etc. Have your Sales Engineer spin up a demo stack internally so you can play with it, or do a full-blown POC. How much ingest do you plan to do? Check out the Splunk Cloud Migration Assessment app for help translating requirements: https://splunkbase.splunk.com/app/4974 Hope that helps! Feel free to join others on Splunk Cloud in the splunk_cloud room on community Slack too! splk.it/slack
I have this vulnerability on all our instances on the latest version of splunkforwarder: The version of OpenSSL installed on the remote host is prior to 1.0.2zf. It is, therefore, affected by a vulnerability as referenced in the 1.0.2zf advisory, identified in CVE-2022-1292, in the OpenSSL rehash command line tool. Fixed in OpenSSL 3.0.4 (affected 3.0.0-3.0.3). Fixed in OpenSSL 1.1.1p (affected 1.1.1-1.1.1o). Fixed in OpenSSL 1.0.2zf (affected 1.0.2-1.0.2ze). (CVE-2022-2068) Any recommendations here?
It's been so long since I last did this that I can't be 100% sure it works this way. But at least https://docs.splunk.com/Documentation/Splunk/latest/Indexer/ConfiguresearchheadwithCLI doesn't mention anything else you need to do for individual peers. Also, https://docs.splunk.com/Documentation/Splunk/latest/DMC/Addinstancesassearchpeers says: "Do not add clustered indexers. If you are monitoring an indexer cluster and you are hosting the monitoring console on an instance other than the cluster manager, you must add the cluster manager as a search peer and configure the monitoring console instance as a search-head in that cluster."
The way to go is the OpenTelemetry Helm chart. I wrote a lil quickstart here: https://github.com/matthewmodestino/otel-quickstart/blob/main/kubernetes/0-quickstart-home.md#kubernetes-otel-quickstart-home See the docs and validated architectures for more: https://docs.splunk.com/Documentation/Splunk/9.1.2/Data/OtelCollectorKubernetes https://docs.splunk.com/Documentation/SVA/current/Architectures/OTelKubernetes If you run into issues, reach out to your SE (we have workshops) or jump into the community Slack at splk.it/slack and holler at me in the kubernetes or opentelemetry channels (mattymo).
That's interesting, because I added all the search heads to the new MC, plus the current cluster master, and I don't see the indexers listed in distributed mode. I guess they may appear after I've completed the setup of distributed mode, but according to the documentation I need to make the new instance a search head first, so I'll start there.
Hi @splunksumman - I'm a Community Moderator in the Splunk Community. This question was posted 1 year ago, so it might not get the attention needed for your question to be answered. We recommend that you post a new question so that your issue can get the visibility it deserves. To increase your chances of getting help from the community, follow these guidelines in the Splunk Answers User Manual when creating your post. Thank you!
Hi all, we are moving to Splunk Cloud and want to keep the LDAP searches in the cloud as well. Today we have the app installed on a search head, and the commands work. I know how to forward data to Splunk Cloud from an HF, but what about the ldap commands, like ldapgroup etc.? Do we need to install the app in Cloud as well to get the commands to work? //Jan
I have an enterprise network and we have a Splunk Enterprise license. Question: while troubleshooting by source type or host, the dashboard needs to show the past history of a particular user or source. Past history meaning, for example, how many alerts the same user triggered and the details of those alerts; clicking the link must show the past troubleshooting history.
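A hedged sketch of one way to back such a dashboard panel, assuming a dashboard token $user$ for the selected user (alert-trigger history lives in the _audit index, as in the alert_fired search earlier in this feed):

index=_audit action=alert_fired user=$user$
| stats count AS triggers max(trigger_time) AS last_epoch by ss_name
| eval last_triggered=strftime(last_epoch, "%F %T")
| sort -triggers

A drilldown on ss_name could then open a detail view filtered to that alert.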
Hi Nasser, I am taking the same course. I tried multiple queries and nothing worked; can you help me?

source="3--المصدر-الداعم-الثالثسجل-الملفات.csv" host="Ghaidas-MBP" index="main" sourcetype="stc_logs" action="blocked"

I used this query as well, to count by action:

source="3--المصدر-الداعم-الثالثسجل-الملفات.csv" host="Ghaidas-MBP" index="main" sourcetype="stc_logs"
| stats count by action

but neither query has yielded any results.
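A hedged debugging step (not from the course material): drop the restrictive filters and check which field values actually exist, since a single mismatched source or host value is enough to return zero results:

index="main" sourcetype="stc_logs"
| stats count by source, host, action

If even this returns nothing over All time, the CSV was probably never indexed into main, so the upload step is where to look.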
Yes, you said that you want to use it to modify a report, but you didn't define how it should modify it. Basically, add normal SPL after your report and use the token with $data$ however you want to use it.
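For illustration, a hedged sketch of that pattern. The report name checkpoint1 comes up later in this thread; the token name $data$ and the status field it filters on are assumptions:

| savedsearch "checkpoint1"
| search status=$data$

The dashboard substitutes $data$ before the search runs, so the saved report itself stays unchanged.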
I'm not sure if I was unclear, but anyway, I wanted to use the token to change, if possible, the data reported by a search that is saved as a report.
Hi, you don't say how you want to use that token. Without that information we cannot tell you directly! You can find out how to use tokens in dashboards at https://docs.splunk.com/Documentation/Splunk/latest/Viz/tokens. I'm not sure whether you are trying to use the token in that report (checkpoint1) or not? Unfortunately, you cannot use tokens anywhere other than the dashboard itself or its output links/target dashboards. One app which you should install and use when you are developing dashboards with tokens is https://splunkbase.splunk.com/app/1603. With it you can automatically see which tokens you have defined and what values they have. r. Ismo
Any ideas how I can use foreach to "collect" all changes (using mvappend)? My current attempt works only if I restrict foreach to one specific field (e.g. "a"), and even then it shows just one change per id:

| foreach a [ eval changed = if( previous_<<FIELD>> != <<FIELD>>, mvappend(changed, "<<FIELD>>"), 0) ]
| search changed!=0
| stats values(changed) values(id) by _time
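A hedged sketch of one possible fix (field names a, b, c are placeholders): the else branch above resets changed to 0 for every field that did not change, wiping the appends made in earlier foreach iterations; carrying changed through unchanged instead keeps every change per event:

| foreach a b c [ eval changed = if( 'previous_<<FIELD>>' != '<<FIELD>>', mvappend(changed, "<<FIELD>>"), changed ) ]
| where isnotnull(changed)
| stats values(changed) AS changed_fields values(id) AS ids by _time

mvappend ignores a null first argument, so the first detected change starts the list cleanly.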