All Posts
Hi @rahulkumar , check if the fields you used in json_extract are correct (they should be): you can do this in Splunk Search. Ciao. Giuseppe
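For illustration, here is a minimal sketch of that check in Search, reusing the status_adverb object from the proposed code (the "Success"/"Failure" keys and the Status field are taken from that earlier answer):

| makeresults
| eval Status = "Failure"
| eval status_adverb = json_object("Success", "succeeded to ", "Failure", "failed to ")
| eval check = json_extract(status_adverb, Status)
| table Status status_adverb check

If check comes back empty, the key passed to json_extract does not match any key in the JSON object.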
Hi @tscroggins - thanks for the pointer - I removed datasources { ... } from this defaults section and kept only tokens { ... } - and it worked. 
Hi @danielbb , I don't think it's possible with that ProofPoint, due to a problem at the source. I have integrated many ProofPoint instances, but honestly I couldn't tell you what version or type of PP they were. Ciao. Giuseppe
Hi @rpfutrell  If possible, run a btool ($SPLUNK_HOME/bin/splunk btool inputs list --debug) on your UF, which should give you an output of all inputs configured on that host. Have a look through the output to see if you can find any references to the logs you're looking for. By applying --debug to the command you will also see, on the left, which file/folder each configuration came from - this should help you track down the app responsible for these inputs and allow you to update it accordingly.

If the app is controlled by your DS then you can head over to the DS ($SPLUNK_HOME/etc/deployment-apps/<appName>) and update the configuration there.

Please let me know how you get on and consider accepting this answer or adding karma to this answer if it has helped. Regards Will
Hi @zksvc  Try adding ` | addinfo` to the end of your search; this will add the info_* fields to the results and should let you use them within your drilldown. Please let me know how you get on and consider accepting this answer or adding karma to this answer if it has helped. Regards Will
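As a minimal sketch of what addinfo exposes (the base search below is only a placeholder - substitute the correlation search's own search string):

index=_internal sourcetype=splunkd log_level=ERROR
| stats count by component
| addinfo
| table component count info_min_time info_max_time info_search_time

info_min_time and info_max_time reflect the search's earliest and latest time bounds, which is what the $info_min_time$ / $info_max_time$ drilldown tokens expect to find in the results.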
First, thank you for illustrating sample events and clearly stating the desired output and the logic. Before I foray into action, I'm deeply curious: who is asking for this transformation in Splunk? Your boss? Are you your own boss? Homework? If it's your boss, ask for a raise, because semantic transformation is best done with real language transformers such as DeepSeek. If it's homework, tell them they are insane.

This said, I have done a lot of limited-vocabulary, limited-grammar transformations to satisfy myself. The key to the solution is to study the elements (both vocabulary and concepts) and the linguistic constraints. Most limited-vocabulary, limited-grammar problems can be solved with lookups. In my code below, I use JSON structures for this purpose, but lookups are easier to maintain and result in more readable code. (Using inline JSON has the advantage of reducing the number of lookups, as you will see.)

| fillnull Status value=Success ``` deal with lack of Status in Logout; this can be refined if blanket success is unwarranted ```
| eval status_adverb = json_object("Success", "succeeded to ", "Failure", "failed to ")
| eval action_verb = json_object("Login", "login from " . IPAddress . " (" . Location . ")", "Logout", "logout", "ProfileUpdate", "update " . lower(ElementUpdated), "ItemPurchase", "buy " . ItemName . " for " . Amount)
| eval EventDescription = mvappend("User " . json_extract(status_adverb, Status) . json_extract(action_verb, ActionType), if(isnull(FailureReason), null(), "(" . FailureReason . ")"))
| table _time SessionId ActionType EventDescription

Output from your sample data is

_time                SessionId  ActionType     EventDescription
2025-02-10 01:09:00  123abc     Logout         User succeeded to logout
2025-02-10 01:08:00  123abc     ItemPurchase   User failed to buy Item2 for 200.00 (Not enough funds)
2025-02-10 01:07:00  123abc     ItemPurchase   User succeeded to buy Item1 for 500.00
2025-02-10 01:06:00  123abc     ProfileUpdate  User failed to update password (Password too short)
2025-02-10 01:05:00  123abc     ProfileUpdate  User succeeded to update email
2025-02-10 01:04:00  123abc     Login          User succeeded to login from 10.99.99.99 (California)

Here, instead of jumping between infinitive and adverb forms, I stick to the infinitive for both success and failure.

Note: if the sample events are as you have shown, you shouldn't need to extract any more fields. Splunk should have extracted everything I referred to in the code. Here is an emulation of the samples. Play with it and compare with real data. (Also note that you misplaced the purchase failure onto the success event. The emulation below corrects that.)
| makeresults
| fields - _time
| eval data = mvappend(
    "2025-02-10 01:09:00, EventId=\"6\", SessionId=\"123abc\", ActionType=\"Logout\"",
    "2025-02-10 01:08:00, EventId=\"5\", SessionId=\"123abc\", ActionType=\"ItemPurchase\", ItemName=\"Item2\", Amount=\"200.00\", Status=\"Failure\", FailureReason=\"Not enough funds\"",
    "2025-02-10 01:07:00, EventId=\"4\", SessionId=\"123abc\", ActionType=\"ItemPurchase\", ItemName=\"Item1\", Amount=\"500.00\", Status=\"Success\"",
    "2025-02-10 01:06:00, EventId=\"3\", SessionId=\"123abc\" ActionType=\"ProfileUpdate\", ElementUpdated=\"Password\", NewValue=\"*******\", OldValue=\"***********\", Status=\"Failure\", FailureReason=\"Password too short\"",
    "2025-02-10 01:05:00, EventId=\"2\", SessionId=\"123abc\" ActionType=\"ProfileUpdate\", ElementUpdated=\"Email\", NewValue=\"NewEmail@somenewdomain.com\", OldValue=\"OldEmail@someolddomain.com\", Status=\"Success\"",
    "2025-02-10 01:04:00, EventId=\"1\", SessionId=\"123abc\", ActionType=\"Login\", IPAddress=\"10.99.99.99\", Location=\"California\", Status=\"Success\"")
| mvexpand data
| rename data as _raw
| extract
| rex "^(?<_time>[^,]+)"
``` data emulation above ```
@dataisbeautiful I tried the below query as well, but no luck:

searchquery_blocking = '''search index=sample source="*sample*" AND host="v*lu*" OR host="s*mple*" | search httpcode="500" '''

Still not getting any results. It's strange. I have been stuck on this for three days.
Hi everyone, in the default correlation search named "Excessive Failed Logins", my drilldown cannot resolve $info_min_time$ and $info_max_time$, so when I click the drilldown it searches over All Time. If every other correlation search drilldown matches the time range of when the correlation search triggered, why does this one search in All Time mode?
My apologies if my explanation is confusing. You are right, the CSR has been signed, so right now it's a certificate in .pem format. The root CA certificate, however, is in .cer format, whereas in my testing environment the root CA certificate is in .pem format. My next step is to try converting it, but I'm unsure whether it will work.
How can I efficiently unfreeze (thaw) data if cluster data has been frozen?
Hi @livehybrid , Apologies for the late reply. Here's a copy of the code I'm using to generate the result from the API - maybe you can help if there's an issue in my code, thank you!

# encoding = utf-8
import requests
import json
import time
from datetime import datetime


def validate_input(helper, definition):
    """Validate input stanza configurations in Splunk Add-on Builder."""
    organization_id = definition.parameters.get('organization_id')
    api_key = definition.parameters.get('api_key')
    if not organization_id or not api_key:
        raise ValueError("Both 'organization_id' and 'api_key' are required.")


def fetch_data(helper, start, organization_id, api_key):
    """Fetch data from the API with pagination while handling errors properly."""
    url = f"https://xxx/xxx/xx/xxxxx/{organization_id}/xxxxx/availabilities?startingAfter={start}&perPage=1000"
    headers = {'API-Key-xxx': api_key, 'Content-Type': 'application/json'}
    try:
        helper.log_info(f"Fetching data with startingAfter: {start}")
        response = requests.get(url, headers=headers, timeout=10)  # Set timeout for API call
        response.raise_for_status()
        data = response.json()
        helper.log_debug(f"Response Data: {json.dumps(data)[:500]}...")  # Log partial data
        return data
    except requests.exceptions.Timeout:
        helper.log_error("Request timed out, stopping further requests to avoid infinite loops.")
        return None
    except requests.exceptions.RequestException as e:
        helper.log_error(f"Error during API request: {e}")
        return None


def collect_events(helper, ew):
    """Collect events and send to Splunk Cloud while ensuring AppInspect compatibility."""
    organization_id = helper.get_arg('organization_id')
    api_key = helper.get_arg('api_key')
    last_serial = "0000-0000-0000"
    results = []

    while True:
        result = fetch_data(helper, last_serial, organization_id, api_key)

        if result and isinstance(result, list):
            current_date = datetime.utcnow().strftime('%Y-%m-%d %H:%M:%S')
            for item in result:
                item['current_date'] = current_date

            for item in result:
                event = helper.new_event(
                    json.dumps(item),
                    time=None,
                    host="xxx",
                    index=helper.get_output_index(),
                    source=helper.get_input_type(),
                    sourcetype="xxxxx"
                )
                ew.write_event(event)

            if len(result) > 0 and 'serial' in result[-1]:
                last_serial = result[-1]['serial']
            else:
                helper.log_info("No more data available, stopping collection.")
                break
        else:
            helper.log_warning("Empty response or error encountered, stopping.")
            break

        time.sleep(1)  # Avoid hitting API rate limits

    helper.log_info("Data collection completed.")
There has been a problem implementing a requirement: previously, using map resulted in the loss of statistical results. Is there a better solution?

For example, if the start date is T0, the end date is TD, the cycle is N days, and the trigger threshold is M days, the system should calculate whether each user has continuously accessed the same sensitive account more than M times within T0 to T0+N days, then calculate the number of visits for T0+1 to T0+1+N days, T0+2 to T0+2+N days, ... up to T0+D to T0+D+N days (multiple accesses by the same user to the same sensitive account within a single day are recorded as 1, and counts are not accumulated across different users). How can this be implemented in SPL?
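As a rough illustration only, one way to express this kind of sliding-window count in SPL without map is to collapse each user/account/day to a single record and let streamstats apply the N-day window. The field names user and sensitive_account, the index, the 5-day window, and the threshold of 3 are all placeholder assumptions:

index=your_access_index ``` placeholder search for the access events ```
| bin _time span=1d
| stats count AS daily_hits BY _time, user, sensitive_account ``` one row per user/account/day, so a day counts once ```
| sort 0 -_time ``` streamstats time_window needs time-ordered events ```
| streamstats time_window=5d count AS days_accessed BY user, sensitive_account ``` N=5 days, placeholder ```
| where days_accessed > 3 ``` M=3, placeholder threshold ```

Each surviving row marks an N-day window in which the user touched the same sensitive account on more than M distinct days.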
Will the Splunk DB connection task stop when the index is full?
Hi, I have Splunk servers (a full deployment with an indexer cluster and SH cluster) running on Red Hat 9. Now we want to harden the servers following the CIS standard. Will this have any impact on the Splunk application? Do any exceptions need to be made? Thanks
I'm trying to discover the source inputs.conf file that is responsible for pulling in the WinEventLogs. Our original implementation was back in 2019 and was completed by another SME who has since moved on. When we implemented Splunk Cloud, many other onsite components were implemented, including an IDM server. Since moving to the Victoria Experience we no longer utilize an IDM server, but we have the rest of the resources in place as shown in my attachment. That said, I'm just trying to confirm where to filter my oswin logs from, but I'm not convinced I have identified the source. While I found the inputs.conf file under Splunk_TA_windows (where I'd expect it to be) on the deployment server, I'm not confident it's responsible for this data input, because all my entries in the stanzas specific to WinEventLog ... have a disable = 1. So while I want to believe, I cannot. I've looked over multiple configurations; more importantly, where are my WinEventLogs truly being sourced from (which inputs.conf)? I've reviewed my resources on the Deployment Server, DMZ Forwarder and Syslog UFW Server and am not finding anything else that would be responsible, nor anything installed regarding Splunk_TA_windows. However, I am indeed getting plenty of data, and I'm trying to be more efficient with our ingest and looking to filter some of these types of logs out. TIA
I am having the same issue.  I have tried all the recommendations above.  Thank you in advance for any assistance.
I'm wondering if anyone could advise on how to best standardize a log of events with different fields. Basically, I have a log with about 50 transaction types (same source and sourcetype), and each event can have up to 20 different fields based on a specific field, ActionType. Here are a few sample events with some sample/generated data:

2025-02-10 01:09:00, EventId="6", SessionId="123abc", ActionType="Logout"
2025-02-10 01:08:00, EventId="5", SessionId="123abc", ActionType="ItemPurchase", ItemName="Item2", Amount="200.00", Status="Failure"
2025-02-10 01:07:00, EventId="4", SessionId="123abc", ActionType="ItemPurchase", ItemName="Item1", Amount="500.00", Status="Success", FailureReason="Not enough funds"
2025-02-10 01:06:00, EventId="3", SessionId="123abc" ActionType="ProfileUpdate", ElementUpdated="Password", NewValue="*******", OldValue="***********", Status="Failure", FailureReason="Password too short"
2025-02-10 01:05:00, EventId="2", SessionId="123abc" ActionType="ProfileUpdate", ElementUpdated="Email", NewValue="NewEmail@somenewdomain.com", OldValue="OldEmail@someolddomain.com", Status="Success"
2025-02-10 01:04:00, EventId="1", SessionId="123abc", ActionType="Login", IPAddress="10.99.99.99", Location="California", Status="Success"

I'd like to put together a table with a user-friendly EventDescription, like below:

Time                 SessionId  Action         EventDescription
2025-02-10 01:04:00  123abc     LogIn          User successfully logged in from IP 10.99.99.99 (California).
2025-02-10 01:05:00  123abc     ProfileUpdate  User failed to update password (Password too short)
2025-02-10 01:06:00  123abc     ProfileUpdate  User successfully updated email from NewEmail@somenewdomain.com to OldEmail@someolddomain.com
2025-02-10 01:07:00  123abc     ItemPurchase   User successfully purchased item1 for $500.00
2025-02-10 01:08:00  123abc     ItemPurchase   User failed to purchase item2 for $200.00 (insufficient funds)
2025-02-10 01:09:00  123abc     LogOut         User logged out successfully

Given that each action has different fields, what's the best way to approach this, considering there could be about 50 different events (possibly more in the future)? I was initially thinking this could be done using a series of case statements, like the one below. However, this approach doesn't seem too scalable or maintainable given the number of events and the possible fields for each one:

eval EventDescription=case(EventId="LogIn", case(Status="Success", "User successfully logged in from IP ".IpAddress." (Location)", 1=1, "User failed to login"), EventId="Logout......etc

I was also thinking of using a macro to extract the fields and compose an EventDescription, which would be easier to maintain since the code for each Action would be isolated, but I don't think executing 50 macros in one search is the best way to go. Is there a better way to do this? Thanks!
Try like this - note that your search needs to use the $app_name_choice$ token, not $app_name$

<input type="multiselect" token="app_name">
  <label>Application Name</label>
  <choice value="All">All</choice>
  <default>All</default>
  <initialValue>*</initialValue>
  <fieldForLabel>app_name</fieldForLabel>
  <fieldForValue>app_name</fieldForValue>
  <search base="base_search">
    <query>|stats count by app_name</query>
  </search>
  <valuePrefix>app_name="</valuePrefix>
  <valueSuffix>"</valueSuffix>
  <delimiter> OR </delimiter>
  <change>
    <eval token="form.app_name">case(mvcount('form.app_name')=0,"All",mvcount('form.app_name')&gt;1 AND mvfind('form.app_name',"All")&gt;0,"All",mvcount('form.app_name')&gt;1 AND mvfind('form.app_name',"All")=0,mvfilter('form.app_name'!="All"),1==1,'form.app_name')</eval>
    <eval token="app_name_choice">if('form.app_name'=="All","app_name=\"*\"",'app_name')</eval>
  </change>
</input>
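For context, a hypothetical panel search consuming the token might look like this (the index name is a placeholder):

index=my_app_index $app_name_choice$
| stats count by app_name

When "All" is selected the token expands to app_name="*"; otherwise it carries the selected values wrapped by the valuePrefix/valueSuffix/delimiter settings.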
Yes, there is at least one firewall between the client network and the intermediate forwarder network. I did a quick and dirty test like you did by making a PowerShell script that ran on the client subnet and simply opened as many connections to the IF as it could. I created a corresponding server script to listen on a port. As expected, the server maxed out at 16000 connections. This confirms that there is not a networking device between the client network and the IF network that would limit the total number of connections.

The inputs and outputs that you have are effectively the same as what I have. I am not doing anything special with them, and it is just about as basic as it comes. The next hop from the IF to the indexers needs to go through a NAT, as my IF is a private address and the indexers are public. I don't suspect that the IF server would refuse connections beyond 1k just because the upstream is limiting connections, but I don't have an easy way to verify this. I don't control the indexers, so I can't do a similar end-to-end connection test with a lot of ports. I am still scratching my head on this, and like I said, I am not satisfied with the suggestion of just building more IF servers and limiting them to 1k clients each.
That's a warning, not an error.  The file will be ingested, but while Splunk is busy with it other monitored files are ignored. Consider standing up a separate UF on that server just for the large files. Also, make sure maxKBps in limits.conf is set to 0 or the largest value the network can support.
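For reference, the thruput cap lives in the [thruput] stanza of limits.conf on the forwarder, so removing it would look something like this (0 means unlimited; the local path shown is just one possible place to set it):

# $SPLUNK_HOME/etc/system/local/limits.conf (or an app's local/limits.conf)
[thruput]
maxKBps = 0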