All Posts


@PickleRick Yes, I have Web UI access as well. When I run the query in Splunk Web, I get results; when I execute the same query through the Splunk REST API via a Python script, I get no results. I don't know why.
I was using the Microsoft 365 App for Splunk and all of a sudden it stopped working and stopped receiving any events or logs. I have tried everything and backtracked through all the installation steps; everything seems to be in order, but I still do not receive any new information.
@dataisbeautiful This is not needed. The string is defined with triple single quotes as a long string, so the double quotes inside it do not need to be escaped. @BalajiRaju  If your base search returns values and your filtering part causes it to return no events at all, that would mean you're filtering it wrong. There can be several reasons; the most obvious would be that the httpcode field isn't properly extracted from the events (or your data simply doesn't have any 500 results). Do you have any Web UI access, or is REST the only way you're accessing your Splunk installation?
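To see why no escaping is needed, a quick Python illustration (the index and field names are just placeholders):

# Triple single quotes delimit the string, so embedded double quotes
# pass through without escaping:
query = '''search index="sample" | search httpcode="500"'''

# Escaping is only needed when the inner quotes match the delimiter:
escaped = "search index=\"sample\" | search httpcode=\"500\""

assert query == escaped  # both produce the identical string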
Proofpoint Essentials is - as far as I remember - a simplified Proofpoint on Demand service. Proofpoint Enterprise can be deployed either as the Proofpoint-managed Proofpoint on Demand service or as an on-premise Proofpoint Protection Server installation. As I understand it, you're using Essentials, so you're not interested in an on-premise installation. So your only way to get the detailed email flow info would be to upgrade to Enterprise and license the Remote Syslog Forwarding feature. Then you can set up your own TLS-secured "syslog" receiver and push the events from your PoD instance. Essentials is a simplified service for small businesses and therefore doesn't have all the bells and whistles that the "full" Enterprise setup has. But it is way cheaper, as I remember.
@BalajiRaju  When wrapping your query in quotes, do you escape the ones contained inside? For example query=" index=\"name\" "  
Thank you again for the support @gcusello  I currently don't have visibility into the _audit index in Splunk. Do you happen to know if it is also possible to filter the data based on the user type, for example user=admin? What other users with administrative privileges would exist in Splunk? Are there any standard fields in the _audit index that you think are enough to archive while still delivering the important details of the audit event? I would really appreciate any help!
Hi @rahulkumar , check if the fields you used in json_extract are correct (they should be): you can do this in Splunk Search. Ciao. Giuseppe
Hi @tscroggins - thanks for the pointer - I removed datasources { ... } from this defaults section and kept only tokens { ... } - and it worked. 
Hi @danielbb , I don't think it's possible with that ProofPoint, due to a problem at its source. I have integrated many ProofPoints, but honestly I couldn't tell you what version or type of PP there was. Ciao. Giuseppe
Hi @rpfutrell  If possible, run a btool ($SPLUNK_HOME/bin/splunk btool inputs list --debug) on your UF, which should give you an output of all inputs configured on that host. Have a look through the output to see if you can find any references to the logs you're looking for. By applying --debug to the command you will also see, on the left, which file/folder each setting came from - this should help you track down the app responsible for these inputs and allow you to update it accordingly. If the app is controlled by your DS, then you can head over to the DS ($SPLUNK_HOME/etc/deployment-apps/<appName>) and update the configuration there. Please let me know how you get on and consider accepting this answer or adding karma to this answer if it has helped. Regards Will
Hi @zksvc  Try adding `| addinfo` to the end of your search; this will add the info_* fields to the results and should let you use them within your drilldown. Please let me know how you get on and consider accepting this answer or adding karma to this answer if it has helped. Regards Will
First, thank you for illustrating sample events and clearly stating the desired output and the logic. Before I foray into action, I'm deeply curious: who is asking for this transformation in Splunk? Your boss? Are you your own boss? Homework? If it's your boss, ask for a raise, because semantic transformation is best done with real language transformers such as DeepSeek. If it's homework, tell them they are insane.

This said, I have done a lot of limited-vocabulary, limited-grammar transformations to satisfy myself. The key to the solution is to study the elements (both vocabulary and concepts) and the linguistic constraints. Most limited-vocabulary, limited-grammar problems can be solved with lookups. In my code below, I use a JSON structure for this purpose, but lookups are easier to maintain and result in more readable code. (Using inline JSON has the advantage of reducing the number of lookups, as you will see.)

| fillnull Status value=Success ``` deal with lack of Status in Logout; this can be refined if blanket success is unwarranted ```
| eval status_adverb = json_object("Success", "succeeded to ", "Failure", "failed to ")
| eval action_verb = json_object("Login", "login from " . IPAddress . " (" . Location . ")",
    "Logout", "logout",
    "ProfileUpdate", "update " . lower(ElementUpdated),
    "ItemPurchase", "buy " . ItemName . " for " . Amount)
| eval EventDescription = mvappend("User " . json_extract(status_adverb, Status) . json_extract(action_verb, ActionType),
    if(isnull(FailureReason), null(), "(" . FailureReason . ")"))
| table _time SessionId ActionType EventDescription

Output from your sample data is

_time                SessionId  ActionType     EventDescription
2025-02-10 01:09:00  123abc     Logout         User succeeded to logout
2025-02-10 01:08:00  123abc     ItemPurchase   User failed to buy Item2 for 200.00 (Not enough funds)
2025-02-10 01:07:00  123abc     ItemPurchase   User succeeded to buy Item1 for 500.00
2025-02-10 01:06:00  123abc     ProfileUpdate  User failed to update password (Password too short)
2025-02-10 01:05:00  123abc     ProfileUpdate  User succeeded to update email
2025-02-10 01:04:00  123abc     Login          User succeeded to login from 10.99.99.99 (California)

Here, instead of jumping between infinitive and adverb forms, I adhere to the infinitive for both success and failure.

Note: If the sample events are as you have shown, you shouldn't need to extract any more fields. Splunk should have extracted everything I referred to in the code. Here is an emulation of the samples. Play with it and compare with real data. (Also note that you misplaced the purchase failure onto the success event. The emulation below corrects that.)
| makeresults
| fields - _time
| eval data = mvappend("2025-02-10 01:09:00, EventId=\"6\", SessionId=\"123abc\", ActionType=\"Logout\"",
    "2025-02-10 01:08:00, EventId=\"5\", SessionId=\"123abc\", ActionType=\"ItemPurchase\", ItemName=\"Item2\", Amount=\"200.00\", Status=\"Failure\", FailureReason=\"Not enough funds\"",
    "2025-02-10 01:07:00, EventId=\"4\", SessionId=\"123abc\", ActionType=\"ItemPurchase\", ItemName=\"Item1\", Amount=\"500.00\", Status=\"Success\"",
    "2025-02-10 01:06:00, EventId=\"3\", SessionId=\"123abc\" ActionType=\"ProfileUpdate\", ElementUpdated=\"Password\", NewValue=\"*******\", OldValue=\"***********\", Status=\"Failure\", FailureReason=\"Password too short\"",
    "2025-02-10 01:05:00, EventId=\"2\", SessionId=\"123abc\" ActionType=\"ProfileUpdate\", ElementUpdated=\"Email\", NewValue=\"NewEmail@somenewdomain.com\", OldValue=\"OldEmail@someolddomain.com\", Status=\"Success\"",
    "2025-02-10 01:04:00, EventId=\"1\", SessionId=\"123abc\", ActionType=\"Login\", IPAddress=\"10.99.99.99\", Location=\"California\", Status=\"Success\"")
| mvexpand data
| rename data as _raw
| extract
| rex "^(?<_time>[^,]+)"
``` data emulation above ```
@dataisbeautiful I tried the query below as well, but no luck:

searchquery_blocking = '''search index=sample source="*sample*" AND host="v*lu*" OR host="s*mple*" | search httpcode="500" '''

Still not getting any results. It's strange. I have been stuck on this for three days.
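Two things commonly make a REST search behave differently from Splunk Web and may be worth ruling out here: the time range (the UI applies the time picker, while a REST job only honors an explicit earliest_time/latest_time) and the app context (knowledge objects such as an httpcode field extraction may only apply when the job is created in the right namespace via servicesNS). A minimal sketch of a blocking search job with both set explicitly - the host, credentials, and owner/app values are placeholders:

import requests

BASE = "https://splunk.example.com:8089"  # placeholder management host/port
AUTH = ("admin", "changeme")              # placeholder credentials

query = '''search index=sample source="*sample*" host="v*lu*" | search httpcode="500"'''

# Create the job in an explicit owner/app namespace so app-scoped
# field extractions apply; exec_mode=blocking returns once it finishes.
job = requests.post(
    f"{BASE}/servicesNS/admin/search/search/jobs",
    auth=AUTH,
    verify=False,  # lab/self-signed certificates only
    data={
        "search": query,
        "exec_mode": "blocking",
        "earliest_time": "-24h",  # mirror the UI time picker explicitly
        "latest_time": "now",
        "output_mode": "json",
    },
)
job.raise_for_status()
sid = job.json()["sid"]

# Fetch all results for the finished job.
results = requests.get(
    f"{BASE}/services/search/jobs/{sid}/results",
    auth=AUTH,
    verify=False,
    params={"output_mode": "json", "count": 0},
)
results.raise_for_status()
print(results.json().get("results", []))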
Hi everyone, in the default correlation search named "Excessive Failed Logins", my drilldown cannot resolve $info_min_time$ and $info_max_time$, so clicking the drilldown searches over All Time. If every other correlation search's drilldown matches the time when the correlation search triggered, why does this one search in All Time mode?
My apologies if my explanation is confusing. You are right, the CSR has been signed, so right now it's a certificate in .pem format. The root CA certificate, however, is in .cer format, and in my testing environment the root CA certificate is in .pem format. My next step is to try converting it, but I'm unsure whether it will work.
How can I efficiently unfreeze (thaw) data if cluster data has been frozen?
Hi @livehybrid , Apologies for the late reply. Here's a copy of the code I'm using to generate the result from the API; maybe you can help if there's an issue in my code. Thank you!

# encoding = utf-8
import requests
import json
import time
from datetime import datetime


def validate_input(helper, definition):
    """Validate input stanza configurations in Splunk Add-on Builder."""
    organization_id = definition.parameters.get('organization_id')
    api_key = definition.parameters.get('api_key')
    if not organization_id or not api_key:
        raise ValueError("Both 'organization_id' and 'api_key' are required.")


def fetch_data(helper, start, organization_id, api_key):
    """Fetch data from the API with pagination while handling errors properly."""
    url = f"https://xxx/xxx/xx/xxxxx/{organization_id}/xxxxx/availabilities?startingAfter={start}&perPage=1000"
    headers = {'API-Key-xxx': api_key, 'Content-Type': 'application/json'}
    try:
        helper.log_info(f"Fetching data with startingAfter: {start}")
        response = requests.get(url, headers=headers, timeout=10)  # Set timeout for API call
        response.raise_for_status()
        data = response.json()
        helper.log_debug(f"Response Data: {json.dumps(data)[:500]}...")  # Log partial data
        return data
    except requests.exceptions.Timeout:
        helper.log_error("Request timed out, stopping further requests to avoid infinite loops.")
        return None
    except requests.exceptions.RequestException as e:
        helper.log_error(f"Error during API request: {e}")
        return None


def collect_events(helper, ew):
    """Collect events and send to Splunk Cloud while ensuring AppInspect compatibility."""
    organization_id = helper.get_arg('organization_id')
    api_key = helper.get_arg('api_key')
    last_serial = "0000-0000-0000"
    results = []  # note: this list is never used below
    while True:
        result = fetch_data(helper, last_serial, organization_id, api_key)
        if result and isinstance(result, list):
            current_date = datetime.utcnow().strftime('%Y-%m-%d %H:%M:%S')
            for item in result:
                item['current_date'] = current_date
            for item in result:
                event = helper.new_event(
                    json.dumps(item),
                    time=None,
                    host="xxx",
                    index=helper.get_output_index(),
                    source=helper.get_input_type(),
                    sourcetype="xxxxx"
                )
                ew.write_event(event)
            if len(result) > 0 and 'serial' in result[-1]:
                last_serial = result[-1]['serial']
            else:
                helper.log_info("No more data available, stopping collection.")
                break
        else:
            helper.log_warning("Empty response or error encountered, stopping.")
            break
        time.sleep(1)  # Avoid hitting API rate limits
    helper.log_info("Data collection completed.")
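One observation, offered as a suggestion rather than a definitive diagnosis: last_serial is reset to "0000-0000-0000" at the start of every run, so each scheduled invocation re-collects from the beginning. If this is an Add-on Builder input, its helper's checkpoint functions could persist the cursor between runs; a minimal sketch (the key name is made up, and the save_check_point/get_check_point API is assumed to be available from the Add-on Builder helper):

# Hypothetical checkpointing around the existing loop, assuming the
# Add-on Builder helper exposes get_check_point/save_check_point.
CHECKPOINT_KEY = "availabilities_last_serial"  # made-up key name

def collect_events(helper, ew):
    # Resume from the serial recorded by a previous run, if any.
    last_serial = helper.get_check_point(CHECKPOINT_KEY) or "0000-0000-0000"
    ...
    # After each page of events is written successfully:
    helper.save_check_point(CHECKPOINT_KEY, last_serial)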
There has been a problem with the implementation of a requirement. Previously, using map resulted in the loss of statistical results. Is there a better solution? For example, if the start date is T0, the end date is TD, the cycle is N days, and the trigger threshold is M times, the system should calculate whether each user has accessed the same sensitive account more than M times continuously within T0 to T0+N days, and then calculate the number of visits from T0+1 to T0+1+N days, T0+2 to T0+2+N days, ... T0+D to T0+D+N days (each user who accesses the same sensitive account multiple times in a day is counted once, and counts do not accumulate between different users). How can this be implemented in SPL?
Will the Splunk DB connection task stop when the index is full?
Hi, I have Splunk servers (a full deployment with an indexer cluster and a SH cluster) running on RedHat 9. Now we want to harden the servers following the CIS standard. Will this have any impact on the Splunk application? Do any exceptions need to be made? Thanks