All Posts

Hi, I have the following transforms.conf:

[REPLACEMENT_COST]
CLEAN_KEYS = 0
FORMAT = $1"REPLACEMENT_COST2":"$2$s"$3
REGEX = (.*)"REPLACEMENT_COST":([^,]+)(.*)
#SOURCE_KEY = REPLACEMENT_COST
DEST_KEY = _raw

I had to write "s" in the FORMAT field right after "$2", since otherwise it does nothing. Is there any option to escape the dollar sign in this field? The relevant props.conf is:

[json_multiline]
DATETIME_CONFIG =
INDEXED_EXTRACTIONS = json
LINE_BREAKER = ([\r\n]+)
MAX_DAYS_AGO = 10000
NO_BINARY_CHECK = true
TIMESTAMP_FIELDS = LAST_UPDATE
TIME_FORMAT = %m/%e/%y %H:%M
category = Custom
pulldown_type = 1
disabled = false
KV_MODE = none
EVAL-DESCRIPTION = replace(DESCRIPTION, "([A-Z])", " \1")
EVAL-SPECIAL_FEATURES = split(replace(SPECIAL_FEATURES, "([A-Z])", " \1"), ",")
LOOKUP-LANGUAGE = LANGUAGE.csv LANGUAGE_ID
TRANSFORMS-REPLACEMENT = REPLACEMENT_COST

Thanks
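In case it helps frame the question: if the goal is simply a raw-text substitution at index time, one commonly used alternative to a REGEX/FORMAT transform is SEDCMD in props.conf, whose replacement side uses \1-style backreferences instead of $1, so a dollar sign in the output is just a literal character. This is only a sketch under that assumption; the class name is illustrative:

```ini
# props.conf -- hypothetical SEDCMD alternative; assumes a plain index-time
# substitution on _raw is the goal. Backreferences use \1, so "$" needs no
# escaping in the replacement text.
[json_multiline]
SEDCMD-replacement_cost = s/"REPLACEMENT_COST":([^,]+)/"REPLACEMENT_COST2":"\1"/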
Hi, as @VatsalJagani said, the syntax ab.cd+something@foo.bar is used by some mail systems to reroute addresses to the correct recipient based on that "+something". So you must contact your email provider to add that new domain to the trusted ones. r. Ismo
Hi, you should try something like this:

index=abc sourcetype=123 (source="allocation" TERM("1=1") OR TERM("2=2") TERM("3=C") Sender=aaa TERM("4=region")) OR (source=*block* TERM("1=1") OR TERM("2=2"))
| dedup source ExecId
| stats count

Just test whether dedup is correct for your case. r. Ismo
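As a sketch only (assuming ExecId is the unit being counted, and making the boolean grouping explicit, which the one-liner above leaves to SPL's implicit AND/OR precedence), the combined count can also be cross-checked with a distinct count. Note this counts each ExecId once overall, whereas dedup on source plus ExecId keeps one event per pair, so the two numbers can legitimately differ:

```spl
index=abc sourcetype=123
    ((source="allocation" (TERM("1=1") OR TERM("2=2")) TERM("3=C") Sender=aaa TERM("4=region"))
     OR (source=*block* (TERM("1=1") OR TERM("2=2"))))
| stats dc(ExecId) AS distinct_execs
```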
Hi, I have the SPL queries below and am trying to combine them. Could you please suggest how? The expected count is 13919.

SPL 1:
index=abc sourcetype=123 source="allocation" TERM("1=1") OR TERM("2=2") TERM("3=C") Sender=aaa TERM("4=region") | dedup ExecId | stats count
## Results Count = 4698

SPL 2:
index=abc sourcetype=123 source=*block* TERM("1=1") OR TERM("2=2") | dedup ExecId | stats count
## Results Count = 9221
When users change the permissions on their knowledge objects from private to app-level sharing, Splunk will move the object to the selected app and change the metadata files appropriately. Splunk will also make sure there are no duplicate KO names in the same app. What you suggest will work (using a custom app rather than search), but I recommend letting Splunk (and your users) do the work.
Hi, if/when you have enough capability (like the admin role), you could move those to another app and also set their permissions to app or even global level. You could try Settings -> All Configurations, then push "Reassign Knowledge Objects". Just select the correct one and reassign it as you want. There are also some Python scripts you could use for this, like https://github.com/harsmarvania57/splunk-ko-change. r. Ismo
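For reference, reassigning and re-sharing via the UI ultimately rewrites the app's metadata files; a hand-edited equivalent (a sketch only — the saved search name and roles below are hypothetical, and stanza names URL-encode spaces) would look like this in the target app's metadata/local.meta:

```ini
# metadata/local.meta of the target app -- hypothetical stanza; the saved
# search name and the roles are illustrative. "export = system" means global
# sharing; omit it for app-level sharing.
[savedsearches/My%20Saved%20Search]
owner = nobody
export = system
access = read : [ * ], write : [ admin, power ]
```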
Which of those steps failed when you tried them?  What error messages do you get?
Hi @of, you could follow the Splunk Search Tutorial (https://docs.splunk.com/Documentation/SplunkCloud/latest/SearchTutorial/WelcometotheSearchTutorial) and/or the Splunk Free Courses (https://www.splunk.com/en_us/training/free-courses/overview.html?locale=en_us) or the videos on the Splunk YouTube channel (https://www.youtube.com/@Splunkofficial). In addition, there are many other courses. Ciao. Giuseppe
Hi, I see some saved searches and knowledge objects created under users' local profiles, like below:

/opt/splunk/etc/users/username/search/local/savedsearches

Can I append the above "savedsearches" file to the "savedsearches" file under the app folder, like /opt/splunk/etc/apps/search/local/? As we are migrating our Splunk infrastructure to a new one, I am trying to clean things up, and this effort is part of the migration. Not sure if this makes sense, but I would want all the saved searches in one location, which is /opt/splunk/etc/apps/. If this is possible, how can I implement it, and will there be any impact?
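To illustrate what the merge amounts to (a sketch with a made-up stanza): each stanza from a user-level savedsearches.conf can be copied into the app-level file, provided no stanza name collides, since Splunk treats the stanza name as the object name. Ownership and sharing then come from the app's metadata rather than the user directory:

```ini
# /opt/splunk/etc/apps/search/local/savedsearches.conf
# Hypothetical stanza copied from a user's
# etc/users/<username>/search/local/ savedsearches file
[Daily Error Summary]
search = index=main sourcetype=app_errors | stats count by host
cron_schedule = 0 6 * * *
enableSched = 1
```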
Have you seen this guide for integrating Vectra with Splunk?  https://support.vectra.ai/s/article/KB-VS-1585  
Hi, I need help generating search queries using SPL, especially in my new role as a SOC Analyst. I would like to know if you can guide me towards any other training programs on SPL. While I did take some training from the Splunk website, it still did not meet my expectations. I would appreciate any advice you could give me. Thank you for your time and support. I wish you a wonderful holiday season and a happy new year. Best regards, Osama Faheem
Hi @syaseensplunk, as I said, to my knowledge in props.conf you can use only sourcetype, source, or host, not a Kubernetes namespace. The syntax is the following:

sourcetype: [mysourcetype]
source: [source::my_source]
host: [host::my_host]

Ciao. Giuseppe
There are no heavy forwarders. Below is a summary for your understanding. I've successfully configured Splunk Connect for Kubernetes and am ingesting data into the "events" index. I'd like to redirect this data into more meaningful indexes based on specific field values, such as the "namespace" field. I've been able to achieve rerouting using sourcetype configurations in props.conf and transforms.conf. But when using other fields like "namespace" in the props.conf and transforms.conf configuration, the log data is not redirected to the other, more meaningful indexes.
Hi @syaseensplunk, as I supposed, the issue is probably the location of the conf files: they must be on the first full Splunk instance the data passes through. In other words, on the Heavy Forwarder (if present) used to extract logs from Kubernetes, or on the Indexers, not on the Cluster Master. If you have to install them on the Indexers, you have to use the Cluster Master to deploy them to the Indexers, but not by installing them in the folder you said: you have to copy them into $SPLUNK_HOME/etc/manager-apps and deploy them as described at https://docs.splunk.com/Documentation/Splunk/9.1.2/Indexer/Manageappdeployment. Ciao. Giuseppe
Ismo, Thanks for the guidance. The systemd worked.   V/r, Hector
My files are located on the indexer/indexers/cluster master under "/opt/splunk/etc/apps/appName/local". Yes, it works with sourcetype. However, it seems the sourcetype spec doesn't accept wildcards: [kube:container:*]. Is there a way I can make it work? I need every source with "kube:container:<container_name>" to be accepted in props.conf. Secondly, in my transforms.conf, I want to route any event with namespace="drnt0-retail-sabbnetservices" to an already existing index, created separately to receive this events data. Please help me with this. Regards, Yaseen.
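Not an authoritative answer, but a commonly shared pattern covering both parts is sketched below: plain props.conf sourcetype stanzas don't glob, yet a stanza beginning with (?::){0} is a widely used workaround that turns the stanza name into a pattern match on the sourcetype; and index routing is done by overwriting the _MetaData:Index key in a transform. The transform name, regex, and target index name here are assumptions to adapt:

```ini
# props.conf -- the "(?::){0}" prefix is a common trick to get wildcard
# matching on sourcetypes, which a plain [kube:container:*] stanza lacks
[(?::){0}kube:container:*]
TRANSFORMS-route_ns = route_sabbnetservices

# transforms.conf -- send matching events to an existing index
# (regex and index name are placeholders; adjust to your raw event format)
[route_sabbnetservices]
REGEX = namespace"?\s*[=:]\s*"?drnt0-retail-sabbnetservices
DEST_KEY = _MetaData:Index
FORMAT = your_existing_index
```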
@im_bharath - I think this is being done by your email security system rather than Splunk.   I hope this helps!!!
I want to send custom logs to Splunk Enterprise from Apigee API proxy. I have installed the trial version of Splunk Enterprise. I am following the method with HEC token explained in this article: https://community.splunk.com/t5/Getting-Data-In/How-to-connect-Apigee-Edge-to-Splunk/m-p/546923. However, I am unable to send logs to Splunk. Any help in this regard will be appreciated.
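As a baseline check (a sketch; the token name, index, and sourcetype are placeholders), the HEC side on the Splunk Enterprise instance boils down to an inputs.conf like the following. Confirming these values, and that port 8088 is reachable from Apigee with a matching http/https scheme, is usually the first debugging step:

```ini
# inputs.conf -- typically $SPLUNK_HOME/etc/apps/splunk_httpinput/local/
# when the token was created via the UI
[http]
disabled = 0
port = 8088
# HEC defaults to SSL; the Apigee callout URL must use the matching scheme

[http://apigee_proxy_logs]
token = <the-generated-token-guid>
index = main
sourcetype = apigee:proxy
```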
@raghunandan1 - Please search in the index=_internal to see if you have errors related to file monitoring from those hosts.   I hope this helps!!!
@madhav_dholakia - Did this resolve your query? If yes then please mark the answer as "Accepted" for other community users.