All Posts


Which of those steps failed when you tried them? What error messages did you get?
Hi @of, you could follow the Splunk Search Tutorial (https://docs.splunk.com/Documentation/SplunkCloud/latest/SearchTutorial/WelcometotheSearchTutorial) and/or the Splunk Free Courses (https://www.splunk.com/en_us/training/free-courses/overview.html?locale=en_us) or the videos on the Splunk YouTube channel (https://www.youtube.com/@Splunkofficial). In addition, there are many other courses available. Ciao. Giuseppe
Hi, I see some saved searches and knowledge objects created under a user's local profile, like /opt/splunk/etc/users/username/search/local/savedsearches. Can I append the contents of that "savedsearches" file to the "savedsearches" file under the app folder, i.e. /opt/splunk/etc/apps/search/local/? As we are migrating our Splunk infra to a new one, I am trying to clean things up, and this effort is part of the migration. Not sure if this makes sense, but I would want all the saved searches in one location, which is /opt/splunk/etc/apps/. If this is possible, how can I implement it, and will there be any impact?
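The append itself can be sketched as below (all paths and stanza names are made up for illustration; back up both files first, and note that duplicate stanza names between the two files have to be reconciled by hand, since two stanzas with the same name in one file do not merge cleanly):

```shell
# Demo in a throwaway directory standing in for the real Splunk paths.
mkdir -p /tmp/ss_demo && cd /tmp/ss_demo
# Stand-in for /opt/splunk/etc/apps/search/local/savedsearches.conf
printf '[app_search]\nsearch = index=main | head 1\n' > app_savedsearches.conf
# Stand-in for /opt/splunk/etc/users/username/search/local/savedsearches.conf
printf '[user_search]\nsearch = index=main | stats count\n' > user_savedsearches.conf
# Append the user-level file to the app-level file, separated by a blank line:
{ echo; cat user_savedsearches.conf; } >> app_savedsearches.conf
# Count stanza headers in the merged file:
grep -c '^\[' app_savedsearches.conf   # → 2
```

After merging on a real instance, a restart (or debug/refresh) is needed for the app-level copies to be picked up, and the permissions/ownership of the moved searches change from private to app-level.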
Have you seen this guide for integrating Vectra with Splunk?  https://support.vectra.ai/s/article/KB-VS-1585  
Hi, I need help generating search queries using SPL, especially in my new role as a SOC Analyst. I would like to know if you can guide me towards any other training programs on SPL. While I did take some training from the Splunk website, it still fell short of my expectations. I would appreciate any advice you could give me. Thank you for your time and support. I wish you a wonderful holiday season and a happy new year. Best regards, Osama Faheem
Hi @syaseensplunk, as I said, to my knowledge in props.conf you can only match on sourcetype, source, or host, not on a Kubernetes namespace. The stanza syntax is the following:
sourcetype: [mysourcetype]
source: [source::my_source]
host: [host::my_host]
Ciao. Giuseppe
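As a sketch of how those three stanza types look in a real props.conf (the stanza values and transform name below are invented for illustration), each one attaches the same routing transform but matches on a different attribute:

```ini
# props.conf — stanza headers can match on sourcetype, source, or host only

[my_sourcetype]
TRANSFORMS-route = my_routing_transform

[source::/var/log/my_app/*.log]
TRANSFORMS-route = my_routing_transform

[host::my_host*]
TRANSFORMS-route = my_routing_transform
```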
There are no heavy forwarders. Below is a summary of the setup for your understanding. I've successfully configured Splunk Connect for Kubernetes and am ingesting data into the "events" index. I'd like to redirect this data into more meaningful indexes based on specific field values, such as the "namespace" field. I've been able to achieve rerouting using sourcetype configurations in props.conf and transforms.conf, but when using other fields like "namespace" in props.conf and transforms.conf, the log data is not redirected to the other, more meaningful indexes.
Hi @syaseensplunk, as I supposed, the issue is probably the location of the conf files: they must be on the first full Splunk instance the events pass through. In other words, on the Heavy Forwarder (if present) used to extract logs from Kubernetes, or on the Indexers; they do not belong on the Cluster Master itself. If you have to install them on the Indexers, you have to use the Cluster Master to deploy them to the Indexers, but instead of installing them in the folder you mentioned, copy them into $SPLUNK_HOME/etc/manager-apps and deploy them as described at https://docs.splunk.com/Documentation/Splunk/9.1.2/Indexer/Manageappdeployment . Ciao. Giuseppe
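The deployment step can be sketched as the following commands, run on the Cluster Master / manager node (the app name is a placeholder; the cluster-bundle commands are the standard way to push configuration to peer indexers):

```shell
# copy the app containing props.conf/transforms.conf into manager-apps
cp -r my_routing_app "$SPLUNK_HOME/etc/manager-apps/"
# validate, then push the bundle to the peer indexers
"$SPLUNK_HOME/bin/splunk" validate cluster-bundle
"$SPLUNK_HOME/bin/splunk" apply cluster-bundle
# check that all peers picked up the new bundle
"$SPLUNK_HOME/bin/splunk" show cluster-bundle-status
```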
Ismo, Thanks for the guidance. The systemd worked.   V/r, Hector
My files are located on the indexers/cluster-master under "/opt/splunk/etc/apps/appName/local". Yes, it works with sourcetype. However, it seems the sourcetype spec doesn't accept wildcards: [kube:container:*] - is there a way I can make it work? I need every event with sourcetype "kube:container:<container_name>" to be matched in props.conf. Secondly, in my transforms.conf I want to route any event with namespace="drnt0-retail-sabbnetservices" to an already existing index, created separately to receive this event data. Please help me with this. Regards, Yaseen.
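One community-known workaround for wildcard sourcetype matching is prefixing the stanza with (?::){0}, which makes props.conf treat the stanza name as a pattern; combined with a raw-event regex on the namespace value, the routing could be sketched like this (index name, transform name, and the exact regex are assumptions to adapt to your data):

```ini
# props.conf (sketch; (?::){0} is a workaround to allow a wildcard in a sourcetype stanza)
[(?::){0}kube:container:*]
TRANSFORMS-route_by_namespace = route_drnt0_retail

# transforms.conf (sketch; the regex must match how "namespace" appears in the raw event)
[route_drnt0_retail]
REGEX = namespace["=:\s]+drnt0-retail-sabbnetservices
DEST_KEY = _MetaData:Index
FORMAT = my_existing_index
```

Note that index-time routing like this can only key off what is in the raw event (or indexed fields), not off search-time extracted fields.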
@im_bharath - I think this is being done by your email security system rather than Splunk.   I hope this helps!!!
I want to send custom logs to Splunk Enterprise from Apigee API proxy. I have installed the trial version of Splunk Enterprise. I am following the method with HEC token explained in this article: https://community.splunk.com/t5/Getting-Data-In/How-to-connect-Apigee-Edge-to-Splunk/m-p/546923. However, I am unable to send logs to Splunk. Any help in this regard will be appreciated.
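As a minimal sketch of the client side, the request an Apigee policy (or any HTTP client) needs to produce for HEC looks like the following; the host, port, token, index, and sourcetype values are placeholders, while /services/collector/event is Splunk's standard HEC event endpoint:

```python
import json

# All values below are placeholders for illustration.
hec_url = "https://your-splunk-host:8088/services/collector/event"  # 8088 is the default HEC port
hec_token = "00000000-0000-0000-0000-000000000000"

# HEC expects the token in an Authorization header with the "Splunk " prefix.
headers = {"Authorization": f"Splunk {hec_token}"}

# The event payload is JSON; "event" carries the actual log data.
payload = {
    "event": {"proxy": "my-apigee-proxy", "message": "hello from Apigee"},
    "sourcetype": "apigee:proxy",
}
body = json.dumps(payload)
print(body)
```

A common first debugging step is to POST the same body with curl from the Apigee side (with -k if the trial instance uses a self-signed certificate) to verify the token, port, and network path before blaming the proxy policy.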
@raghunandan1 - Please search in the index=_internal to see if you have errors related to file monitoring from those hosts.   I hope this helps!!!
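A sketch of such a search (the component names are common splunkd file-monitoring components; adjust the host filter to your forwarders):

```
index=_internal sourcetype=splunkd (log_level=ERROR OR log_level=WARN)
    (component=TailReader OR component=TailingProcessor OR component=WatchedFile)
```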
@madhav_dholakia - Did this resolve your query? If yes then please mark the answer as "Accepted" for other community users.
@Bisho-Fouad - The short answer is to create the same index on the HF. (It will not be used for storing data, the purpose is just that you will see the index name while configuring the inputs.)   I hope this helps!!! Kindly upvote if it does!!!
Hi @jbv, yes: identify the technologies to ingest, choose the correct Add-On and (reading the documentation) assign the correct sourcetype to the inputs. Ciao. Giuseppe
With those old datasets you must use "earliest=1" in all searches, or select the "All time" time range.
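Concretely, since the BOTS datasets carry old timestamps that fall outside the default time range, a search over all time would look like:

```
index=botsv1 earliest=1
```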
Hi, when you create a new index on the Cluster Master, it is only created on the cluster peers, not anywhere else. If/when you want that index definition on a HF as well, you must add it there manually. Depending on your environment, there are a couple of things to remember and modify when you add indexes on a HF and/or your SH(s):
- if you are using volumes, you must update the definition to point to the correct volume, and/or add a "dummy" volume definition on the other hosts
- on nodes outside the cluster you should update repFactor from auto to 0
My proposal is to keep separate definitions for volumes and repFactor as default values for the cluster and the other nodes, and then a separate file/TA/SA for the real index definitions. That way you can use the same index definition files on all nodes instead of updating them every time. Just store them in git and deploy them from there. And of course you must deploy the app/files containing the node-specific definitions once before the real index definitions can be deployed. If you forget this, your nodes won't start, as they won't have valid index definitions. r. Ismo
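The split described above can be sketched as two indexes.conf fragments (volume path, index name, and app layout are placeholders); the first app differs per node type, the second is identical everywhere:

```ini
# app A (per-node defaults; this is the cluster-peer variant)
# indexes.conf
[volume:primary]
path = /opt/splunk/var/lib/splunk

[default]
repFactor = auto    # on non-clustered nodes (HF/SH) this app would set repFactor = 0

# app B (shared index definitions, deployed unchanged to every node)
# indexes.conf
[my_index]
homePath   = volume:primary/my_index/db
coldPath   = volume:primary/my_index/colddb
thawedPath = $SPLUNK_DB/my_index/thaweddb
```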
Hi, I have the botsv1 dataset uploaded in a Splunk simulated environment, but when I search "index=botsv1", it returns 0 events, although I have seen the dataset in the apps folder and it is also visible under Indexes in the Settings section. Nothing can be searched using the keyword botsv1. I have tried various search options, but all failed. Please help me. Thanks in advance.
Ensure that you have set Sharing to Global for that app and the other needed knowledge objects (KOs). Then it should work like any other visualisation.