All Posts

Your info token equates to a short code, yet your search converts the code to a friendlier term before you search on the token value. Could this be why your search is not working?

| eval "Info Transaction CI HUB"=case(AddtionalOrgnl == "O 123", "Normal Transaction", AddtionalOrgnl == "O 70", "Velocity Transaction", AddtionalOrgnl == "O 71", "Gambling RFI", AddtionalOrgnl == "O 72", "Gambling OFI", AddtionalOrgnl == "O 73", "DTTOT Transaction", true(), "Other")
| rename EndtoendIdOrgnl as "End To End Id"
| search "Info Transaction CI HUB"="$info$"
Oh yes, sorry, I gave the wrong search. This is the search:

| inputlookup lkp-all-findings
| lookup lkp-findings-blacklist.csv blfinding as finding OUTPUTNEW blfinding
| lookup lkp-asset-list-master "IP Adresse" as ip OUTPUTNEW Asset_Gruppe Scan-Company Scanner Scan-Location Location "DNS Name" as dns_name Betriebssystem as "Operation System"
| lookup lkp-GlobalIpRange.csv 3-Letter-Code as Location OUTPUTNEW "Company Code"
| eval is_solved=if(lastchecked>lastfound OR lastchecked == 1,1,0), blacklisted=if(isnull(blfinding),0,1), timeval=strftime(lastchecked,"%Y-%m-%d")
| fillnull value="NA" "Company Code", Scan-Location
| search is_solved=0 blacklisted=0 Scan-Location="*" "Company Code"="*" severity="high"
| fields "Company Code" timeval ip dns "Operation System" severity pluginname timeval Scan-Location is_solved blacklisted
| sort severity
Hi @uagraw01, what about using only * instead of *.csv? Also, did you try the whitelist option instead of putting the file pattern in the monitor stanza path? Ciao. Giuseppe
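Purely as a sketch of what that whitelist approach could look like in inputs.conf (the directory comes from the question, but the whitelist regex, sourcetype and index are assumptions to adapt):

[monitor://E:\var\log\Bapto\BaptoEventsLog\SZC]
# monitor the whole directory and filter filenames with a regex whitelist
whitelist = \.csv$
sourcetype = bapto_events
index = main
disabled = false

With this, Splunk watches the directory itself and picks up any new file whose full path matches the whitelist, so the 200+ generated CSV files no longer need individual stanzas.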
Dear Splunkers! I am facing an issue with Splunk file monitoring configuration. When I define the complete absolute path in inputs.conf, Splunk successfully monitors the files. Below are two examples of working stanza configurations:

Working configurations:
[monitor://E:\var\log\Bapto\BaptoEventsLog\SZC\0000000002783979-2025-03-27T07-39-33-128Z-SZC.VIT.BaptoEvents.50301.csv]
[monitor://E:\var\log\Bapto\BaptoEventsLog\SZC\0000000002783446-2025-03-27T05-09-20-566Z-SZC.VIT.BaptoEvents.50296.csv]

However, since more than 200 files are generated, specifying an absolute path for each file is not feasible. To automate this, I attempted to use a wildcard pattern in the stanza, as shown below:

Non-working configuration:
[monitor://E:\var\log\Bapto\BaptoEventsLog\SZC\*.csv]

Unfortunately, this approach does not ingest any files into Splunk. I would appreciate your guidance on resolving this issue. Looking forward to your insights.
Hi Morelz, Any news / progress on this?
Hi @Leonardo1998

In order to index this as a lowercase field, we need to establish how it's derived. Checking the app's props/transforms, there are a number of REGEX entries which extract "subscription_id" from various fields, such as the ones below. However, as you mentioned, these produce subscription_id, not subscriptionId!

[mscs_extract_subscription_id_and_resource_group]
SOURCE_KEY = AzureResourceId
REGEX = (?i:subscriptions)\/([^\/]+)(?:\/(?i:resourceGroups)\/([^\/]+))?
FORMAT = subscription_id::$1 resource_group::$2

[mscs_extract_subscription_id_and_resource_group_from_id]
SOURCE_KEY = id
REGEX = (?i:subscriptions)\/([^\/]+)(?:\/(?i:resourceGroups)\/([^\/]+))?
FORMAT = subscription_id::$1 resource_group::$2

However, I did find this:

[azure_data_share_extract_from_properties]
SOURCE_KEY = properties
REGEX = \"(\w+)\":\"({.*}|.*?)\"
FORMAT = $1::$2

This extracts key-value pairs from properties, and I *think* subscriptionId and subscriptionid get extracted from there, based on this:

coalesce('subscriptionId', 'properties.subscriptionId', 'properties.subscriptionid', SUBSCRIPTIONS)

It looks like the source data contains differently cased fields... not ideal!

Anyway, if you let me know the sourcetype you are looking at, I can try to help put together index-time props/transforms to index this. Alternatively, you could use an eval field to coalesce them at search time so you have a consistent value. You might actually find that "vendor_account" already does this, but if not you could do this:

[yourSourcetype]
EVAL-subscriptionId=COALESCE(subscriptionId,subscriptionid)

However, you would need to check the order of execution for the EVAL, or just see if it works.

Please let me know how you get on and consider adding karma to this or any other answer if it has helped.

Regards
Will
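Building on that search-time option, a minimal sketch that both coalesces and lowercases the field might look like this (the sourcetype name is a placeholder, and the usual caveat about calculated-field execution order still applies):

[yourSourcetype]
# hypothetical: merge the differently cased fields and normalise to lowercase at search time
EVAL-subscriptionId = lower(coalesce(subscriptionId, subscriptionid))

That would give a single, consistently lowercase subscriptionId to filter on in queries, without touching the data at index time.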
This result doesn't look like the output of the search you shared. The values aggregation function already does a dedup, i.e. you should only have unique values in the field, and the fields listed in the by clause of the stats command would appear first. Please clarify what your search was and the output you got from it.
Hi @goji

Having checked the Python code within this app, it looks like it is forcing SSL verification when connecting to the OpenCTI endpoint:

response = helper.send_http_request(url, method, parameters=None, payload=None, headers=None, cookies=None, verify=True, cert=None, timeout=None, use_proxy=True)

This means that you would need to provide an OpenCTI URL on a DNS name with a valid SSL certificate. When you tried to connect using curl, did you need to pass a parameter like "-k" to skip SSL verification? Are you able to use a DNS name and add a valid SSL certificate to the OpenCTI server?

If not, then I think the only other option would be to modify the script to turn off SSL verification (it's a shame the app author hasn't provided this option). The issue with this is it can leave you with a fragile environment: if you upgrade the app in the future, the upgrade will overwrite your changes.

If you want to test this approach then you can try making the following modifications, but remember the caveats (this is obviously sub-optimal!).

TA-opencti-add-on/bin/input_module_opencti_indicators.py - lines 224-226:

response = helper.send_http_request(url, method, parameters=None, payload=None, headers=None, cookies=None, verify=True, cert=None, timeout=None, use_proxy=True)

Change verify=True to verify=False.

And the modalert, TA-opencti-add-on/bin/ta_opencti_add_on/alert_actions_base.py - line 108:

def send_http_request(self, url, method, parameters=None, payload=None, headers=None, cookies=None, verify=True, cert=None, timeout=None, use_proxy=True):

Again, change verify=True to verify=False.

Please let me know how you get on and consider adding karma to this or any other answer if it has helped.

Regards
Will
Hi @cbiraris

Check out https://lantern.splunk.com/Splunk_Platform/Product_Tips/Data_Management/Setting_data_retention_rules_in_Splunk_Cloud_Platform#:~:text=kinds%20of%20problems.-,Solution,data%2035%20months%20too%20early. for some guidance on best practices for retention:

"When you consider which index should collect a data source, remember that you set retention policies by index. If you have two data sources, one that you need to keep for 3 years and one that you can discard after 30 days, send them to separate indexes. Otherwise, you will be paying to store 35 months of data you don't really want, or discarding data 35 months too early."

Essentially, you should split your data into different indexes when the data has different retention, permissions or use case/category. It's a good idea to use a naming convention so you can easily distinguish between different types, such as adding a _nonprod or _prod suffix for non-production/production data, which might have different RBAC/users.

Please let me know how you get on and consider adding karma to this or any other answer if it has helped.

Regards
Will
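To make the per-index retention concrete, here is a rough indexes.conf sketch (index names and periods are invented; on Splunk Cloud you would set retention per index through the UI or support rather than editing indexes.conf yourself):

[xyz_4m]
# roughly 4 months (120 days) of searchable retention
frozenTimePeriodInSecs = 10368000
homePath = $SPLUNK_DB/xyz_4m/db
coldPath = $SPLUNK_DB/xyz_4m/colddb
thawedPath = $SPLUNK_DB/xyz_4m/thaweddb

[xyz_8m]
# roughly 8 months (240 days) of searchable retention
frozenTimePeriodInSecs = 20736000
homePath = $SPLUNK_DB/xyz_8m/db
coldPath = $SPLUNK_DB/xyz_8m/colddb
thawedPath = $SPLUNK_DB/xyz_8m/thaweddb

Routing the 8-month sourcetype to its own index like this is what makes the longer retention possible, since frozenTimePeriodInSecs applies to the whole index.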
Hi @SN1,

you could use mvdedup:

| inputlookup lkp-all-findings
| lookup lkp-findings-blacklist.csv blfinding as finding OUTPUTNEW blfinding
| lookup lkp-asset-list-master "IP Adresse" as ip OUTPUTNEW Asset_Gruppe Scan-Company Scanner Scan-Location Location "DNS Name" as dns_name Betriebssystem as "Operation System"
| lookup lkp-GlobalIpRange.csv 3-Letter-Code as Location OUTPUTNEW "Company Code"
| dedup finding, dns_name, ip
| stats values("Company Code") as "Company Code" by finding, dns_name, ip, Asset_Gruppe, Scan-Company, Scanner, Scan-Location, Location, Betriebssystem
| eval "Company Code"=mvdedup('Company Code')

Ciao.
Giuseppe
Hi @Sultan77,

if you want the drilldown to open only the events related to the results of the correlation search, you have to insert into the drilldown search a subsearch containing the correlation search.

In other words, if your correlation search lists some hosts, you should use a drilldown search like the following:

<the_same_search_conditions_of_the_correlation_search> [ search <the_full_correlation_search> | fields host ]

Ciao.
Giuseppe
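As a purely illustrative sketch of that pattern (the index, EventCode and threshold are invented, not taken from your correlation search), the drilldown could look like:

index=wineventlog EventCode=4625
    [ search index=wineventlog EventCode=4625
      | stats count BY host
      | where count > 10
      | fields host ]

The subsearch re-runs the correlation logic and returns only the host values it flagged, so the outer search opens just the raw events for those hosts.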
Hi @cbiraris, in Splunk retention is defined only at the index level, so the only way is to store the longer-retention sourcetype in a different index. Ciao. Giuseppe
Hi @cbiraris, unfortunately retention time can only be applied to indexes.  
Hi team, I have an index with 4 sourcetypes. The index has a searchable retention of 4 months. Is there any way we can keep the same retention for 3 sourcetypes and increase 1 sourcetype to 8 months?

For example, index=xyz:
sourcetype = 1, searchable retention 4 months
sourcetype = 2, searchable retention 4 months
sourcetype = 3, searchable retention 4 months
sourcetype = 4, searchable retention 8 months
Hi, I just want to ingest the OpenCTI feed from OpenCTI into Splunk. I followed the installation instructions: https://splunkbase.splunk.com/app/7485

But there is an error in the _internal index as follows:

2025-03-27 16:50:02,889 ERROR pid=17581 tid=MainThread file=base_modinput.py:log_error:309 | Error in ListenStream loop, exit, reason: HTTPSConnectionPool(host='192.168.0.15', port=8080): Max retries exceeded with url: /stream/2cfe507d-1345-402d-82c7-eb8939228bf0?recover=2025-03-27T07:50:02Z (Caused by SSLError(SSLError(1, '[SSL: UNKNOWN_PROTOCOL] unknown protocol (_ssl.c:1106)')))

I was able to access the OpenCTI feeds using curl in the Splunk environment and from a browser as well, but I can't access the OpenCTI stream using the stream ID from Splunk to fetch the data. I think SSL is one of the issues. Please tell me if you know how to fetch OpenCTI data into Splunk.
Dear @livehybrid  You're right, that's exactly what I'm attempting to do. As for limiting the events returned, I'm working on specifying something distinctive, like the host that triggered the Event ID or the user involved.
Hello, I have this search:

| inputlookup lkp-all-findings
| lookup lkp-findings-blacklist.csv blfinding as finding OUTPUTNEW blfinding
| lookup lkp-asset-list-master "IP Adresse" as ip OUTPUTNEW Asset_Gruppe Scan-Company Scanner Scan-Location Location "DNS Name" as dns_name Betriebssystem as "Operation System"
| lookup lkp-GlobalIpRange.csv 3-Letter-Code as Location OUTPUTNEW "Company Code"
| dedup finding, dns_name, ip
| stats values("Company Code") as "Company Code" by finding, dns_name, ip, Asset_Gruppe, Scan-Company, Scanner, Scan-Location, Location, Betriebssystem

This is the result. I have tried mvexpand and stats as well, but it gives multiple values. The problem is that, for example, for the Nessus host "slo-svenessus01.emea.durr.int" there are, let's say, 20 Nessus hosts with this name, and it is now duplicating the "Company Code" (HHDE) 20 times in every single field for each Nessus host with this name, and the same happens for the others as well.
Hi @feichinger,

your solution has the limit of 50,000 results in the subsearch, so I suggest reversing your searches:

index=perfmon counter="% Processor Time"
| stats count BY host
| append [ | inputlookup domaincontrollers.csv | rename Name AS host | eval count=0 | fields host count ]
| stats sum(count) AS total BY host
| where total=0

Ciao.
Giuseppe
I do have a solution for this, but I just wonder if there is a more straightforward approach, to get a better understanding of multi-search scenarios. I want to monitor which Windows forwarders have broken performance counters or are just not sending in performance counters for whatever reason. There's a CSV lookup file with the server names I want to monitor, and my idea was to have the search give me a table of all the servers in that lookup file which come back with 0 results for a given search.

My working solution is this:

| inputlookup domaincontrollers.csv
| table Name
| eval count=0
| append [search index=perfmon counter="% Processor Time" | rename host as Name | stats count by Name]
| stats sum(count) by Name
| rename "sum(count)" as activity
| where activity=0

I had played with appendcols, but found that it would only merge the servers with results in the subsearch, and not list the others in the results. Is there any search method I should read up on for a scenario like this? Thanks
Hi everyone,

I'm using Splunk Cloud with the Splunk Add-on for Microsoft Cloud Services to manage two Azure subscriptions. As a result, I have duplicated inputs, and I need a way to reference each subscription within my queries.

I noticed that the subscriptionId field exists, but it contains four variations: two in lowercase and two in uppercase. I'd like to normalize this field to lowercase at ingest time, so I don't have to handle it manually in every query. I checked the Field Transformations, but I couldn't find any mention of subscriptionId (I only see subscription_id).

Has anyone dealt with a similar issue, or can anyone suggest the best approach? Thanks in advance for your help!

(P.S. I'm relatively new to Splunk and Splunk Cloud, so any guidance is greatly appreciated!)