All Posts


This result doesn't look like the output of the search you shared: the values aggregation function already deduplicates, i.e. you should only have unique values in the field, and the fields listed in the by clause of the stats command would appear first. Please clarify what your search was and the output you got from it.
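A quick, self-contained way to see that dedup behaviour (no indexed data needed; the field names are made up for the demo):

| makeresults count=4
| streamstats count as n
| eval x=if(n%2==0, "a", "b")
| stats values(x) as x

The result is a single multivalue field containing just "a" and "b", even though each value occurred twice.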
Hi @goji
Having checked the Python code within this app, it looks like it is forcing SSL verification when connecting to the OpenCTI endpoint:

response = helper.send_http_request(url, method, parameters=None, payload=None, headers=None, cookies=None, verify=True, cert=None, timeout=None, use_proxy=True)

This means that you would need to provide an OpenCTI URL on a DNS name with a valid SSL certificate.
When you tried to connect using curl, did you need to pass a parameter like "-k" to skip SSL verification? Are you able to use a DNS name and add a valid SSL certificate to the OpenCTI server?
If not, then I think the only other option would be to modify the script to turn off SSL verification (it's a shame the app author hasn't provided this option). The issue with this is that it can leave you with a fragile environment: if you upgrade the app in the future, the upgrade will overwrite your changes.
If you want to test this approach, you can try making the following modifications - but remember the caveats (this is obviously sub-optimal!).

TA-opencti-add-on/bin/input_module_opencti_indicators.py - lines 224-226:

response = helper.send_http_request(url, method, parameters=None, payload=None, headers=None, cookies=None, verify=True, cert=None, timeout=None, use_proxy=True)

Change verify=True to verify=False.

And the modalert, TA-opencti-add-on/bin/ta_opencti_add_on/alert_actions_base.py - line 108:

def send_http_request(self, url, method, parameters=None, payload=None, headers=None, cookies=None, verify=True, cert=None, timeout=None, use_proxy=True):

Again, change verify=True to verify=False.

Please let me know how you get on and consider adding karma to this or any other answer if it has helped.
Regards
Will
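For illustration, a minimal sketch of what the edited call might look like - only the verify argument changes, everything else stays exactly as the app wrote it:

# Sketch only: disables certificate validation, so treat it as a stopgap.
# Any future app upgrade will overwrite this change.
response = helper.send_http_request(
    url, method,
    parameters=None, payload=None, headers=None, cookies=None,
    verify=False,   # was verify=True
    cert=None, timeout=None, use_proxy=True,
)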
Hi @cbiraris
Check out https://lantern.splunk.com/Splunk_Platform/Product_Tips/Data_Management/Setting_data_retention_rules_in_Splunk_Cloud_Platform for some guidance on best practices for retention.
When you consider which index should collect a data source, remember that you set retention policies by index. If you have two data sources, one that you need to keep for 3 years and one that you can discard after 30 days, send them to separate indexes. Otherwise, you will be paying to store 35 months of data you don't really want, or discarding data 35 months too early.
Essentially, you should split your data into different indexes when you have different retention, permissions, or use case/category requirements. It's a good idea to use a naming convention so you can easily distinguish between the different types, such as adding a _nonprod or _prod suffix for non-production/production data, which might have different RBAC/users.
Please let me know how you get on and consider adding karma to this or any other answer if it has helped.
Regards
Will
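To make the per-index mechanics concrete, here is a sketch of what the retention settings might look like in indexes.conf on Splunk Enterprise (the index names are made up; on Splunk Cloud you would set the equivalent searchable retention through the Indexes management UI rather than editing the file):

# indexes.conf - one retention period per index
[xyz_short]
frozenTimePeriodInSecs = 10368000   # ~4 months (120 days), then data is frozen

[xyz_long]
frozenTimePeriodInSecs = 20736000   # ~8 months (240 days) for the longer-lived data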
Hi @SN1 ,
you could use mvdedup:

| inputlookup lkp-all-findings
| lookup lkp-findings-blacklist.csv blfinding as finding OUTPUTNEW blfinding
| lookup lkp-asset-list-master "IP Adresse" as ip OUTPUTNEW Asset_Gruppe Scan-Company Scanner Scan-Location Location "DNS Name" as dns_name Betriebssystem as "Operation System"
| lookup lkp-GlobalIpRange.csv 3-Letter-Code as Location OUTPUTNEW "Company Code"
| dedup finding, dns_name, ip
| stats values("Company Code") as "Company Code" by finding, dns_name, ip, Asset_Gruppe, Scan-Company, Scanner, Scan-Location, Location, Betriebssystem
| eval "Company Code"=mvdedup('Company Code')

(Note the single quotes around 'Company Code' on the right-hand side of eval - that is how a field name containing a space is referenced; double quotes there would be a string literal.)

Ciao.
Giuseppe
Hi @Sultan77 ,
if you want the drilldown to open only the events related to the results of the correlation search, you have to insert into the drilldown search a subsearch containing the correlation search.
In other words, if your correlation search lists some hosts, you should use a drilldown search like the following:

<the_same_search_conditions_of_the_correlation_search> [ search <the_full_correlation_search> | fields host ]

Ciao.
Giuseppe
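To make that pattern concrete, a hypothetical sketch (the index, EventCode, and threshold are invented for illustration, not taken from this thread):

index=wineventlog EventCode=4625
    [ search index=wineventlog EventCode=4625
      | stats count BY host
      | where count > 10
      | fields host ]

The subsearch returns only the hosts that matched the correlation logic, and Splunk expands them into an OR condition, so the outer search shows just those hosts' raw events.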
Hi @cbiraris ,
in Splunk, retention is defined only at the index level, so the only way is to store the longer-retention sourcetype in a different index.
Ciao.
Giuseppe
Hi @cbiraris, unfortunately retention time can only be applied to indexes.  
Hi team,
I have an index with 4 sourcetypes. The index has a searchable retention of 4 months. Is there any way we can keep the same retention for 3 sourcetypes while increasing 1 sourcetype to 8 months?
For example:

index=xyz
sourcetype=1 - searchable retention 4 months
sourcetype=2 - searchable retention 4 months
sourcetype=3 - searchable retention 4 months
sourcetype=4 - searchable retention 8 months
Hi,
I just want to ingest the OpenCTI feed from OpenCTI into Splunk. I followed the installation instructions: https://splunkbase.splunk.com/app/7485
But there is an error in the _internal index as follows:

2025-03-27 16:50:02,889 ERROR pid=17581 tid=MainThread file=base_modinput.py:log_error:309 | Error in ListenStream loop, exit, reason: HTTPSConnectionPool(host='192.168.0.15', port=8080): Max retries exceeded with url: /stream/2cfe507d-1345-402d-82c7-eb8939228bf0?recover=2025-03-27T07:50:02Z (Caused by SSLError(SSLError(1, '[SSL: UNKNOWN_PROTOCOL] unknown protocol (_ssl.c:1106)')))

I was able to access OpenCTI feeds using curl in the Splunk environment and from a browser as well, but I can't access the OpenCTI stream using the StreamID from Splunk to fetch the data. I think SSL is one of the issues.
Please tell me if you know how to fetch the OpenCTI data into Splunk.
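For context, an UNKNOWN_PROTOCOL SSL error usually means an HTTPS client is talking to a port that serves plain HTTP. A quick way to check, reusing the host, port, and stream ID from the error above:

# If this succeeds while the https:// variant fails, the server speaks plain HTTP on 8080
curl -v http://192.168.0.15:8080/stream/2cfe507d-1345-402d-82c7-eb8939228bf0

# Inspect the TLS handshake directly; a handshake failure confirms no TLS on this port
openssl s_client -connect 192.168.0.15:8080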
Dear @livehybrid  You're right, that's exactly what I'm attempting to do. As for limiting the events returned, I'm working on specifying something distinctive, like the host that triggered the Event ID or the user involved.
Hello, I have this search:

| inputlookup lkp-all-findings
| lookup lkp-findings-blacklist.csv blfinding as finding OUTPUTNEW blfinding
| lookup lkp-asset-list-master "IP Adresse" as ip OUTPUTNEW Asset_Gruppe Scan-Company Scanner Scan-Location Location "DNS Name" as dns_name Betriebssystem as "Operation System"
| lookup lkp-GlobalIpRange.csv 3-Letter-Code as Location OUTPUTNEW "Company Code"
| dedup finding, dns_name, ip
| stats values("Company Code") as "Company Code" by finding, dns_name, ip, Asset_Gruppe, Scan-Company, Scanner, Scan-Location, Location, Betriebssystem

This is the result. I have tried mvexpand and stats as well, but it gives multiple values. The problem is: say for NessusHost "slo-svenessus01.emea.durr.int" there are 20 Nessus hosts with this name; the search then duplicates the "Company Code" (HHDE) 20 times in every single field for each NessusHost with this name, and the same happens for the others as well.
Hi @feichinger ,
your solution has the limit of 50,000 results in the subsearch, so I suggest reversing your searches:

index=perfmon counter="% Processor Time"
| stats count BY host
| append
    [ | inputlookup domaincontrollers.csv
      | rename Name AS host
      | eval count=0
      | fields host count ]
| stats sum(count) AS total BY host
| where total=0

Ciao.
Giuseppe
I do have a solution for this, but I just wonder if there is a more straightforward approach, to get a better understanding of multi-search scenarios. I want to monitor which Windows forwarders have broken performance counters or are just not sending in performance counters for whatever reason. There's a CSV lookup file with the server names I want to monitor, and my idea was to have the search give me a table of all the servers in that lookup file which come back with 0 results for a given search. My working solution is this:

| inputlookup domaincontrollers.csv
| table Name
| eval count=0
| append
    [ search index=perfmon counter="% Processor Time"
      | rename host as Name
      | stats count by Name ]
| stats sum(count) by Name
| rename "sum(count)" as activity
| where activity=0

I had played with appendcols, but found that it would only merge the servers with results in the subsearch, and not list the others in the results. Is there any search method I should read up on for a scenario like this? Thanks
Hi everyone,
I'm using Splunk Cloud with the Splunk Add-on for Microsoft Cloud Services to manage two Azure subscriptions. As a result, I have duplicated inputs, and I need a way to reference each subscription within my queries. I noticed that the subscriptionId field exists, but it contains four variations: two in lowercase and two in uppercase. I'd like to normalize this field to lowercase at ingest time, so I don't have to handle it manually in every query. I checked the Field Transformations, but I couldn't find any mention of subscriptionId (I only see subscription_id). Has anyone dealt with a similar issue, or can anyone suggest the best approach?
Thanks in advance for your help!
(P.S. I'm relatively new to Splunk and Splunk Cloud, so any guidance is greatly appreciated!)
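One low-friction alternative to a true ingest-time change is a search-time calculated field, which normalizes the value everywhere without touching indexed data. A sketch, assuming the events arrive under a sourcetype such as mscs:azure:audit (substitute your actual sourcetype):

# props.conf - deployed in a small app; on Splunk Cloud this can go in a private app
[mscs:azure:audit]
# Overwrite subscriptionId with its lowercase form at search time
EVAL-subscriptionId = lower(subscriptionId)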
@kiran_panchavat But every time I make changes through the web GUI it reloads the serverclass, and all the other serverclasses lose their bundle before I can do a full server reload.
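If it helps, the deployment server CLI can reload a single serverclass rather than all of them; a sketch, with a placeholder serverclass name:

# Reload only the serverclass you changed, leaving the others untouched
splunk reload deploy-server -class my_serverclass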
Yes. There is this mark and select approach, but it requires Splunk not only to scan all events from the initial search timerange, it also requires it to hold them as intermediate results for the purpose of reversing. So it's not really a practical solution. But yes, it can be done this way.
Yup. That shouldn't have had anything to do with already indexed data - it's on "the other end" of Splunk. There is also another possibility - especially if there are more people involved in your environment. While the immediate change might have been in one place (inputs.conf), there could have been some changes made earlier in the config files but not committed to the runtime configuration. And when you restarted your Splunk instance, the new config file versions were read and applied. Anyway, if you're on a fairly modern Splunk version, you can check the _configtracker index to see what changes were made to your environment around the time you edited inputs.conf.
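For instance, something along these lines (the field names follow the usual layout of _configtracker events, but verify them against your own data):

index=_configtracker sourcetype=splunk_configuration_change data.path="*inputs.conf*"
| table _time data.path data.action data.changes{}.properties{}.name data.changes{}.properties{}.new_value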
@Alan_Chan
You could change the From email address in Mail Server Settings under Email Settings: https://docs.splunk.com/Documentation/SplunkCloud/latest/Alert/Emailnotification
If you want to send each mail from a different "from" address, then probably the sendemail command: https://docs.splunk.com/Documentation/SplunkCloud/9.3.2411/SearchReference/Sendemail
First, check if you can modify the "Send emails as" field under Email Settings in your Splunk Cloud instance. If you can't, or if the change doesn't take effect (e.g., due to domain restrictions), then yes, you should raise a support ticket.
Refer:
https://docs.splunk.com/Documentation/SplunkCloud/latest/Alert/Emailnotification#Steps_for_Splunk_Cloud_Platform (Email notification action - Splunk Documentation)
https://community.splunk.com/t5/Splunk-Enterprise-Security/How-to-change-the-quot-From-quot-address-when-an-alert-email-is/m-p/479230
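A sketch of the sendemail approach (the addresses are placeholders, and whether a custom from address survives delivery depends on your mail relay's domain policy):

| makeresults
| eval message="custom sender test"
| sendemail to="ops@example.com" from="alerts@example.com" subject="Custom sender test" sendresults=true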
Sorry for being vague. I am trying to build the app using the Splunk Add-on Builder with a REST API call. The problem I am having is that the logs are coming in as one big blob, and I have tried multiple line_breaker options and tested them in regex101.
With respect to the streaming mode: I checked all the .py files associated with the app and could not find any instances of <streaming_mode>xml</streaming_mode> or <streaming_mode>simple</streaming_mode> in any of them. Is it one of the cases where I have to add it? Does Splunk default to XML?
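For reference, a line-breaking sketch for the common case where a REST response is a JSON array written as one blob (the sourcetype name is a placeholder, and the regex assumes objects separated by commas inside square brackets):

# props.conf
[my_rest_sourcetype]
SHOULD_LINEMERGE = false
# Break before each top-level JSON object; the comma or opening bracket
# captured in group 1 is discarded at the event boundary
LINE_BREAKER = ([,\[]\s*)(?=\{)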
We received all alerts from Splunk Cloud with the sender alerts@splunkcloud.com. Can we change the sender to another domain, e.g. xxx@xxx.abc?
Do we need to raise a support ticket to have a change request on it?