All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Thank you for the response. I tried your solution but still get results for only one day. I wonder whether this line may be causing the unwanted one-day results: status latest(test) as tests latest(_time) as _time. Maybe I shouldn't use the 'latest' aggregation function for 'test' and '_time'? But I don't know how else to pass these values to the 'timechart' command.
Hi @Karthickb2308 

As others have mentioned, there aren't currently any Splunkbase apps to write back to ManageEngine ITSM with Splunk for CMDB synchronization and automated ticket creation from Enterprise Security alerts. However, you can achieve this in a couple of ways:

1. Custom app - You could use the ManageEngine API (https://www.manageengine.com/products/service-desk/sdpod-v3-api/SDPOD-V3-API.html) to build a custom app using the Splunk UCC Framework. UCC is a great way to start building inputs (to import your CMDB data) and also create modular alert actions (to raise incidents from Enterprise Security). Also see https://dev.splunk.com/enterprise/docs/devtools/python/sdk-python/howtousesplunkpython/howtocreatemodpy/ for more background on creating inputs.
2. Use the REST API Modular Input add-on app to call the same ManageEngine API from within SPL. You can use scheduled searches to run the app's "curl" command against ManageEngine's REST API to fetch CMDB data, and you could create a macro that writes incidents with the same command and call it at the end of searches where you would normally create an alert action. Note: the curl command doesn't actually use curl, so not every parameter is supported; it uses Python requests under the hood (see https://www.baboonbones.com/php/markdown.php?document=rest/README.md).

Hopefully one of these two options helps you move forward with your integration of ManageEngine into Splunk - please let me know if you have any questions.

Did this answer help you? If so, please consider: Adding karma to show it was useful. Marking it as the solution if it resolved your issue. Commenting if you need any clarification. Your feedback encourages the volunteers in this community to continue contributing.
To clarify, there are two distinct aspects to your requirements:

1. If the date of the event matches a date in the lookup, do not send an alert no matter what the search result is.
2. Only on days that do not match any date in the lookup, send an alert if the search result is 0 or greater than 1.

If this is true, the event count must be computed before the date match, or together with it:

index=xxxxxx
| eval HDate=strftime(_time,"%Y-%m-%d")
| lookup Date_Test.csv HDate output HDate as match
| stats count values(match) as match by HDate
| where isnull(match) AND count != 1

The by HDate clause is there to validate the event date in case the search crosses calendar dates.
@Karthickb2308 

No one-click integration exists for CMDB or ticketing, but the REST API and Splunk alert actions make it achievable. Use the ServiceDeskPlus Splunk app for supported ticket actions (if you have Splunk SOAR), or build your own with Python/REST. For CMDB, use exports or the API to sync data into Splunk for enrichment and correlation. A simple alternative: if you can't use the API, configure Splunk to send alert emails to ManageEngine's ticket creation email address (less flexible, but simple).
Thanks @PrewinThomas. Do you have a sample custom response handler which outputs both the status code and the body?
@smuderasi  Splunk’s REST Modular Input allows you to ingest data from REST APIs. By default, only the response body (e.g., JSON) is indexed. To also capture the HTTP status code, you need a custom response handler—a Python class that processes the HTTP response and outputs both the status code and the body.
@Benny87 

Some dashboards, saved searches, or macros reference the wineventlog_security eventtype globally, even if your current search is for non-Windows data like firewalls or switches. If the eventtype is missing, disabled, or its permissions are not set to "Global", Splunk throws this error regardless of the actual index being searched. This can also happen after app upgrades, permission changes, or if Splunk_TA_windows is not deployed on all relevant search heads and indexers.
@Karthickb2308 

There is no out-of-the-box feature that lets you do this. However, if you have a script that can create tickets in ManageEngine ServiceDesk, you can have your Splunk alert call that Python script when the alert triggers: https://help.servicedeskplus.com/api/rest-api.html. ManageEngine ServiceDesk Plus supports ticket creation via its REST API (endpoint: /api/v3/requests).
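As a rough illustration of such a script: the v3 API takes a single input_data form field containing the request as JSON. The host, credential header name, and payload field names below are assumptions for the sketch; verify them against the ServiceDesk Plus REST API documentation linked above before using this:

```python
import json
import urllib.parse
import urllib.request

SDP_URL = "https://sdp.example.com/api/v3/requests"  # hypothetical host
TECHNICIAN_KEY = "YOUR_TECHNICIAN_KEY"               # placeholder credential


def build_input_data(subject, description):
    """Build the form-encoded body: one `input_data` field holding JSON.
    Field names are illustrative -- confirm against the API docs."""
    payload = {"request": {"subject": subject, "description": description}}
    return urllib.parse.urlencode({"input_data": json.dumps(payload)})


def create_ticket(subject, description):
    """POST a new request to ServiceDesk Plus (network call, not run here)."""
    req = urllib.request.Request(
        SDP_URL,
        data=build_input_data(subject, description).encode(),
        headers={"technician_key": TECHNICIAN_KEY},  # header name is an assumption
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())


if __name__ == "__main__":
    # A real alert script would read the alert details Splunk passes in;
    # hard-coded strings keep this sketch self-contained.
    print(build_input_data("ES notable triggered", "Fired by a Splunk ES alert"))
```

Wire this up as a custom alert action (or legacy alert script) so it runs whenever the Enterprise Security alert fires.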
@Karthickb2308 

To integrate the ManageEngine ServiceDesk Plus CMDB with Splunk, the goal is typically to sync asset and configuration item (CI) data between the two systems for better incident context and correlation. Since no direct Splunk app exists for the ManageEngine CMDB, log forwarding from ManageEngine to Splunk is one option:

https://www.manageengine.com/products/self-service-password/adselfservice-plus-integrations.html
https://www.manageengine.com/products/ad-manager/help/admin-settings/third-party-integrations/splunk.html
Hi Team,

I need help with ManageEngine ticketing tool integration with Splunk. I have researched on Google but did not find any exact document; please provide your inputs if anyone has integrated these.

Goal:
1) CMDB integration
2) Automatically create a ticket for each Splunk Enterprise Security alert
Thank you for your answer. We are using HAProxy as a load balancer because we want to have two Heavy Forwarders, so if one fails, the other remains active. I have researched and found that the PROXY protocol in HAProxy adds a header containing the client's IP address. However, it seems that Splunk Heavy Forwarder does not natively support or understand this header. As you mentioned, does this mean there is no reliable way to use HAProxy as a load balancer and still have access to the original client IP in the Splunk Heavy Forwarder? Also, I have one more question: Is it true that the log format of each client (when HAProxy is acting as a middle-man sending logs to the HF) may be different, depending on the client source? Thank you very much for your help.
Facing the same issue. Was this resolved?
You're very welcome for the help. I believe both the Splunkbase app and the log format you referred to relate to HAProxy's internal logs. However, what I'm looking for is a method to capture the IP addresses of external clients connecting through HAProxy. In fact, we have an HAProxy server that receives logs from various clients and forwards them to a Splunk Heavy Forwarder. Each client has its own log format. The problem is that HAProxy replaces the client's IP address with its own (in TCP). The question is: how can we get the client's IP address for each log in the Splunk Heavy Forwarder?
As you are using Dashboard Studio instead of Classic, there is no "depends" attribute on those panels. Take a look at https://docs.splunk.com/Documentation/SplunkCloud/9.3.2408/DashStudio/showHide for how to do it in Dashboard Studio.
If you refer to the value of the host field assigned to an event when connection_host=ip (and only if it's not overwritten later by transforms), then no - you cannot do that directly within Splunk. As HAProxy works as a "middle-man" - it is the originator of all your logging TCP connections. It receives events from the remote hosts and then sends all events to your HF within a connection initiated by itself. So obviously the origin of the event is lost. This is one of the reasons why you should _not_ receive syslogs directly on Splunk. Ideally, you should replace your haproxy with a syslog receiver which would track the source addresses and either write events to files to be picked up by a forwarder or forward them to HEC.
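To illustrate that last idea, here is a minimal sketch (not a production syslog server) of a UDP receiver that records each sender's address as it terminates the connection, then forwards the line to HEC with that address as the event host. The HEC URL and token are placeholders, and there is no batching, TLS verification handling, or retry logic:

```python
import json
import socketserver
import urllib.request

HEC_URL = "https://splunk.example.com:8088/services/collector/event"  # placeholder
HEC_TOKEN = "00000000-0000-0000-0000-000000000000"                    # placeholder


def build_hec_event(message, client_ip):
    """Wrap one raw syslog line in a HEC event, keeping the true origin as host."""
    return json.dumps({
        "event": message,
        "host": client_ip,          # the sender's address, not the proxy's
        "sourcetype": "syslog",
    })


class SyslogHandler(socketserver.BaseRequestHandler):
    def handle(self):
        data = self.request[0].decode(errors="replace").strip()
        client_ip = self.client_address[0]  # known here because we terminate the connection
        req = urllib.request.Request(
            HEC_URL,
            data=build_hec_event(data, client_ip).encode(),
            headers={"Authorization": "Splunk " + HEC_TOKEN},
        )
        urllib.request.urlopen(req)         # network call; no retry logic in this sketch


if __name__ == "__main__":
    # To actually listen on UDP 5140:
    #   socketserver.UDPServer(("0.0.0.0", 5140), SyslogHandler).serve_forever()
    print(build_hec_event("<134> test message", "203.0.113.7"))
```

In practice a dedicated receiver such as syslog-ng or rsyslog (with an HEC or file output) does this far more robustly; the sketch just shows why receiving the connection yourself preserves the client IP.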
Hi @Benny87 

What kind of architecture do you have? Do you have multiple indexers? Please could you run btool to check the eventtypes on the SH and an indexer:

$SPLUNK_HOME/bin/splunk btool eventtypes list --debug wineventlog_security

I'm also wondering if something that matches another eventtype for your firewall data is also referencing wineventlog_security... if you run the same btool command as above without the final "wineventlog_security", do you see "wineventlog_security" within any other stanzas?
Hi @Ashish0405 

Just add another <format> under the existing one, such as:

<format type="color" field="Severity">
  <colorPalette type="map">{"Critical":#D93F3C,"Informational":#31A35F}</colorPalette>
</format>

Full example:

<dashboard version="1.1">
  <label>Demo</label>
  <row>
    <panel>
      <table>
        <search>
          <query>|makeresults | eval Status="failed", Severity="Critical" | append [makeresults | eval Status="finished", Severity="Informational"]</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="drilldown">none</option>
        <option name="refresh.display">progressbar</option>
        <format type="color" field="Status">
          <colorPalette type="map">{"failed":#D93F3C,"finished":#31A35F,"Critical":#D93F3C,"Informational":#31A35F}</colorPalette>
        </format>
        <format type="color" field="Severity">
          <colorPalette type="map">{"Critical":#D93F3C,"Informational":#31A35F}</colorPalette>
        </format>
      </table>
    </panel>
  </row>
</dashboard>

Edit: Sorry, I just saw the other replies which I hadn't noticed before; not meaning to step on others' toes!
Hi @hv64 

Is this a dashboard that you have created yourself with a custom visualisation, or a visualisation from another Splunkbase app, such as the Process Flow Diagram App? If so, please could you let us know which version of any custom viz app you are using, along with your Splunk Enterprise/Cloud version? Are you able to share the dashboard XML?
Hi @Nrsch 

Are you talking about the "host" field in Splunk? It is typical for this field to be the device which is sending the logs. Instead, you would want to extract a field called something like "src_ip" or "client_ip", which would be the IP address of the client system making the web request. If you're able to share a few sample/redacted events then I'd be happy to help create the relevant extractions you need. There is also a Splunkbase app for HAProxy (https://splunkbase.splunk.com/app/3135) which is designed to take a syslog input; however, the field extractions could well be the same if you're sending to a file and then forwarding with a Splunk forwarder. Alternatively, you could set a custom HAProxy log format (since you wouldn't be using the off-the-shelf add-on) and then emit key=value pairs for the log event components, e.g. client_ip=%ci for the client IP. See https://www.haproxy.com/blog/haproxy-log-customization for more info on that.
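For reference, a custom log format might look roughly like the fragment below. %ci and %cp are HAProxy's client IP/port log-format variables; the frontend name, port, and the other fields are purely illustrative, so check your HAProxy version's log-format documentation for the full variable list:

```
# haproxy.cfg (illustrative fragment, not a complete config)
frontend web_in
    bind :80
    log global
    # key=value pairs let Splunk's automatic field extraction pick these up
    log-format "client_ip=%ci client_port=%cp frontend=%ft backend=%b"
    default_backend app_servers
```

With this in place the client_ip field should extract automatically at search time.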
Hi @SN1 

Try the following:

require([
    'splunkjs/mvc',
    'splunkjs/mvc/searchmanager',
    'splunkjs/mvc/tableview',
    'splunkjs/mvc/simplexml/ready!',
    'jquery'
], function(mvc, SearchManager, TableView, ignored, $) {
    // Define a simple cell renderer with a button
    var ActionButtonRenderer = TableView.BaseCellRenderer.extend({
        canRender: function(cell) {
            return cell.field === 'rowKey';
        },
        render: function($td, cell) {
            $td.addClass('button-cell');
            var rowKey = cell.value;
            var $btn = $('<button>').text('Unsolved');
            $btn.on('click', function(e) {
                e.preventDefault();
                e.stopPropagation();
                var searchQuery = `| inputlookup sbc_warning.csv | eval rowKey=tostring(rowKey) | eval solved=if(rowKey="${rowKey}", "1", solved) | outputlookup sbc_warning.csv`;
                var writeSearch = new SearchManager({
                    id: "writeSearch_" + Math.floor(Math.random() * 100000),
                    search: searchQuery,
                    autostart: true
                });
                writeSearch.on('search:done', function() {
                    $btn.text('Solved');
                    var panelSearch = mvc.Components.get('panel_search_id');
                    if (panelSearch) {
                        panelSearch.startSearch();
                    }
                });
            });
            $td.append($btn);
        }
    });
    // Apply the renderer to the specified table
    var tableComponent = mvc.Components.get('sbc_warning_table');
    if (tableComponent) {
        tableComponent.getVisualization(function(tableView) {
            tableView.table.addCellRenderer(new ActionButtonRenderer());
            tableView.table.render();
        });
    }
});

The button is created with $('<button>').text('Unsolved'). When clicked, after the lookup update (search:done), the button label is changed using $btn.text('Solved'). This only changes the text for that row's button; repeat clicks won't revert it. Note: if you want the initial state ("Unsolved"/"Solved") to reflect actual data, you must pass the current "solved" value for each row and set the initial button text accordingly. Do you have a field indicating whether a row is already solved that you could use to set the initial button text?