All Posts



@livehybrid  Able to break down the events, but still can't extract the date-time information; getting an error.
Ah sorry about that! Leave it with me, just working on it locally to check.
I will keep it short: we found a solution to the errors. Simply restarting the indexer or search head that throws the errors, or re-adding it as a search peer, won't help. We shut down the whole Splunk farm: indexers, search heads, licence server, deployment server, etc. Once all servers are off, you can start them again. Everything resumed working fine without errors.
@livehybrid  Now it all came in as a single event.
Hi @Praz_123  Under Advanced, try setting a LINE_BREAKER to "predictions"\s*:\s*\[|}\s*,\s*{|}\s*\]?
Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
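For reference, that setting would typically live in a props.conf stanza on the parsing tier. A minimal sketch, assuming a placeholder sourcetype name and that the timestamp should come from the ds field in the sample data (note that LINE_BREAKER requires at least one capturing group, whose match is discarded as the event boundary):

```
# props.conf -- [predictions_json] is a placeholder sourcetype name
[predictions_json]
SHOULD_LINEMERGE = false
# Capturing group wraps the whole delimiter so Splunk discards it at the break
LINE_BREAKER = ("predictions"\s*:\s*\[|}\s*,\s*{|}\s*\]?)
# Hedged guess: pull the timestamp from the "ds" field of each object
TIME_PREFIX = "ds"\s*:\s*"
TIME_FORMAT = %Y-%m-%dT%H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 30
```

The TIME_PREFIX/TIME_FORMAT pair addresses the date-time extraction question earlier in the thread; verify both against your actual events before deploying.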
Thanks, Kiran, for the support.
Need to know: while I am adding the data in Splunk I am getting the below error. The data looks like this: { "version": "200", "predictions": [ { "ds": "2023-01-01T01:00:00", "y": 25727, "yhat_lower": 23595.643771045987, "yhat_upper": 26531.786203915904, "marginal_upper": 26838.980030149163, "marginal_lower": 23183.715141246714, "anomaly": false }, { "ds": "2023-01-01T02:00:00", "y": 24710, "yhat_lower": 21984.478022195697, "yhat_upper": 24966.416390280523, "marginal_upper": 25457.020250925423, "marginal_lower": 21744.743048120385, "anomaly": false }, { "ds": "2023-01-01T03:00:00", "y": 23908, "yhat_lower": 21181.498740796877, "yhat_upper": 24172.09825724038, "marginal_upper": 24449.705257711226, "marginal_lower": 20726.645610860345, "anomaly": false },
Thank you for the response. I tried your solution but still have results for only one day. I wonder whether this line may cause the unwanted one-day results: status latest(test) as tests latest(_time) as _time. Maybe I shouldn't use the 'latest' aggregation function for 'test' and '_time'? But I don't know how else to pass these values to the 'timechart' function.
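If the goal is one value per day rather than only the single most recent value, a hedged SPL sketch (the index is a placeholder; field names test and status are taken from the snippet above and may need adapting):

```
index=your_index
| timechart span=1d latest(test) AS tests BY status
```

Moving latest() inside timechart computes it per day-bucket, so the output is no longer collapsed to the one most recent day, which is what latest(_time) in a plain stats tends to cause.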
Hi @Karthickb2308  As others have mentioned, there aren't currently any Splunkbase apps that write back to ManageEngine ITSM from Splunk for CMDB synchronization and automated ticket creation from Enterprise Security alerts. However, you can achieve this in a couple of ways:
1. Custom app - use the ManageEngine API (https://www.manageengine.com/products/service-desk/sdpod-v3-api/SDPOD-V3-API.html) to build a custom app with the Splunk UCC Framework. UCC is a great way to start building inputs (to import your CMDB data) and also to create modular alert actions (to raise incidents from Enterprise Security). Also see https://dev.splunk.com/enterprise/docs/devtools/python/sdk-python/howtousesplunkpython/howtocreatemodpy/ for more background on creating inputs.
2. Use the REST API Modular Input add-on app to call the same ManageEngine API from within SPL. Scheduled searches can use the app's "curl" command against ManageEngine's REST API to fetch CMDB data, and you could create a macro that writes incidents using the same command, calling it at the end of searches where you would normally fire an alert action. Note: the curl command doesn't actually use curl, so not every parameter is supported; it uses Python requests under the hood (see https://www.baboonbones.com/php/markdown.php?document=rest/README.md).
Hopefully one of these two options helps you move forward with your ManageEngine integration - please let me know if you have any questions.
Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
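As a rough illustration of the custom-app route above, a payload builder for a ServiceDesk Plus V3 "create request" call might look like the sketch below. The input_data form field and the top-level "request" object follow the V3 API convention, but the exact field schema, URL, and auth header are assumptions to verify against your own ManageEngine instance:

```python
import json

def build_sdp_request(subject, description, requester="splunk-alerts"):
    """Build the form payload for a ServiceDesk Plus V3 'create request' call.

    The V3 API expects a single 'input_data' form field containing JSON with
    a top-level 'request' object; this is a sketch, not a tested integration,
    so verify the schema against your instance's API docs.
    """
    return {
        "input_data": json.dumps({
            "request": {
                "subject": subject,
                "description": description,
                "requester": {"name": requester},
            }
        })
    }

# Usage sketch (URL and token are placeholders -- do not treat as real):
# import requests
# requests.post(
#     "https://sdp.example.com/api/v3/requests",
#     headers={"authtoken": "YOUR_API_KEY"},
#     data=build_sdp_request("ES alert: brute force", "Raised by Splunk ES"),
# )
```

Wrapping this in a UCC-generated modular alert action would let Enterprise Security call it directly when an alert fires.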
To clarify, there are two distinct aspects to your requirements:
1. If the date of the event matches one in the lookup, do not send an alert, no matter what the search result is.
2. Only on days that do not match any date in the lookup, send an alert if the search result is 0 or greater than 1.
If this is true, the event count must be computed before, or together with, the date match.
index=xxxxxx | eval HDate=strftime(_time,"%Y-%m-%d") | lookup Date_Test.csv HDate output HDate as match | stats count values(match) as match by HDate | where isnull(match) AND count != 1
The by HDate clause is there to validate the event date in case the search crosses calendar dates.
@Karthickb2308  There is no one-click integration for CMDB or ticketing, but the REST API and Splunk alert actions make it achievable. Use the ServiceDeskPlus Splunk app for supported ticket actions (if you have Splunk SOAR), or build your own with Python/REST. For CMDB, use exports or the API to sync data into Splunk for enrichment and correlation. A simple alternative: if you can't use the API, configure Splunk to send alert emails to ManageEngine's ticket-creation email address (less flexible, but simple).
Thanks @PrewinThomas. Do you have a sample custom response handler which outputs both the status code and the body?
@smuderasi  Splunk’s REST Modular Input allows you to ingest data from REST APIs. By default, only the response body (e.g., JSON) is indexed. To also capture the HTTP status code, you need a custom response handler—a Python class that processes the HTTP response and outputs both the status code and the body.
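A minimal sketch of such a handler is below. The __call__ signature is modelled on the handler pattern used by the REST API Modular Input's responsehandlers.py, but it may differ between versions, so verify it against your installed copy; the real app also emits events via its own streaming helper rather than a plain print(). The merge step is the key idea:

```python
import json

def wrap_status_and_body(status_code, raw_body):
    """Merge the HTTP status code with the response body into one event.

    Non-JSON bodies are kept as raw text so nothing is dropped.
    """
    try:
        body = json.loads(raw_body)
    except (ValueError, TypeError):
        body = raw_body
    return {"http_status": status_code, "body": body}

class StatusAndBodyResponseHandler:
    """Custom response handler sketch for the REST API Modular Input.

    Signature modelled on the app's handler pattern; verify against the
    responsehandlers.py shipped with your version before using.
    """

    def __init__(self, **args):
        self.args = args

    def __call__(self, response_object, raw_response_output, response_type,
                 req_args, endpoint):
        event = wrap_status_and_body(response_object.status_code,
                                     raw_response_output)
        # The installed app would stream this via its XML helper instead
        print(json.dumps(event))
```

Each indexed event then carries an http_status field alongside the parsed body, so you can search on failures (http_status>=400) as well as content.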
@Benny87  Some dashboards, saved searches, or macros reference the wineventlog_security eventtype globally, even if your current search is for non-Windows data like firewalls or switches. If the event type is missing, disabled, or its permissions are not set to "global", Splunk throws this error regardless of the actual index being searched. This can also happen after app upgrades, permission changes, or if the Splunk_TA_windows is not deployed on all relevant search heads and indexers.
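One quick way to check whether the eventtype exists and which app supplies it is btool, run on the search head (path assumes a default install location):

```
# Show the eventtype definition and the file/app it comes from
$SPLUNK_HOME/bin/splunk btool eventtypes list wineventlog_security --debug
```

If it resolves to an app whose objects are not shared globally, adjusting the sharing in that app's metadata (or via the UI permissions dialog) is the usual fix.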
@Karthickb2308  There is no out-of-the-box feature that lets you do this. However, if you have a script that can create tickets in ManageEngine ServiceDesk, you can have your Splunk alert call that Python script when the alert triggers: https://help.servicedeskplus.com/api/rest-api.html  ManageEngine ServiceDesk Plus supports ticket creation via its REST API (endpoint: /api/v3/requests).
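A hedged sketch of such a script is below. Splunk passes a JSON payload to custom alert actions on stdin when invoked with --execute; 'search_name' and 'result' are keys in that documented payload, but the mapping to ticket fields (and the eventual POST to /api/v3/requests) is an assumption to adapt to your environment:

```python
import json
import sys

def extract_ticket_fields(alert_payload):
    """Turn the JSON payload Splunk passes to a custom alert action
    (on stdin, with --execute) into ticket fields.

    'search_name' and 'result' are keys in the documented alert-action
    payload; adapt the mapping to your own alert's fields.
    """
    conf = json.loads(alert_payload)
    return {
        "subject": "Splunk alert: " + conf.get("search_name", "unknown"),
        "description": json.dumps(conf.get("result", {})),
    }

if __name__ == "__main__" and "--execute" in sys.argv:
    fields = extract_ticket_fields(sys.stdin.read())
    # Hypothetical next step: POST these fields to the ServiceDesk Plus
    # /api/v3/requests endpoint (see the linked REST API docs).
    print(json.dumps(fields))
```

Registering the script as an alert action (alert_actions.conf) then lets any saved search or ES correlation search raise a ticket on trigger.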
@Karthickb2308  To integrate ManageEngine ServiceDesk Plus CMDB with Splunk, the goal is typically to sync asset and configuration item (CI) data between the two systems for better incident context and correlation. Since no direct Splunk app exists for ManageEngine CMDB, log forwarding from ManageEngine to Splunk is one option:
https://www.manageengine.com/products/self-service-password/adselfservice-plus-integrations.html
https://www.manageengine.com/products/ad-manager/help/admin-settings/third-party-integrations/splunk.html
Hi Team, I need help with ManageEngine ticketing tool integration with Splunk. I have researched on Google and did not find any exact document; please share your inputs if anyone has integrated these. Goals: 1) CMDB integration 2) Automatically create a ticket for each Splunk Enterprise Security alert.
Thank you for your answer. We are using HAProxy as a load balancer because we want to have two Heavy Forwarders, so if one fails, the other remains active. I have researched and found that the PROXY protocol in HAProxy adds a header containing the client's IP address. However, it seems that Splunk Heavy Forwarder does not natively support or understand this header. As you mentioned, does this mean there is no reliable way to use HAProxy as a load balancer and still have access to the original client IP in the Splunk Heavy Forwarder? Also, I have one more question: Is it true that the log format of each client (when HAProxy is acting as a middle-man sending logs to the HF) may be different, depending on the client source? Thank you very much for your help.
Facing the same issue. Was this resolved?
You're very welcome for the help. I believe both the Splunkbase app and the log format you referred to relate to HAProxy's internal logs. However, what I'm looking for is a method to capture the IP addresses of external clients connecting through HAProxy. In fact, we have an HAProxy server that receives logs from various clients and forwards them to a Splunk Heavy Forwarder. Each client has its own log format. The problem is that HAProxy replaces the client's IP address with its own (in TCP). The question is: how can we get the client's IP address for each log in the Splunk Heavy Forwarder?
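For context on the source-IP problem discussed in this thread: since Splunk's plain TCP input does not parse the PROXY protocol header, the other layer-4 option HAProxy offers is transparent binding, where outgoing connections to the backend reuse the original client address. This is a hedged sketch only; it requires HAProxy built with TPROXY support, matching kernel routing/iptables rules on the proxy host, and return traffic from the Heavy Forwarders routed back through the proxy (addresses and ports are placeholders):

```
# haproxy.cfg -- transparent-mode sketch, not a drop-in config
frontend syslog_in
    bind :5140
    mode tcp
    default_backend splunk_hf

backend splunk_hf
    mode tcp
    # Reuse the original client IP as the source of the backend connection
    source 0.0.0.0 usesrc clientip
    server hf1 10.0.0.11:5140 check
    server hf2 10.0.0.12:5140 check backup
```

With this in place the Heavy Forwarder sees each connection as coming from the real client IP, so host assignment based on the sender works again; without TPROXY support, the usual fallback is relying on hostnames/IPs embedded in the syslog payload itself.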