All Posts
1. Notable creation as a ServiceNow incident: The reverse integration between ServiceNow and Splunk for incident management can be achieved with an out-of-the-box method. To send specific notable events from the Enterprise Security Incident Review page for investigation, an add-on called the ServiceNow Security Operations Add-on is available. This add-on allows Splunk ES analysts to create security-related incidents and events in ServiceNow, either on demand (a single event or incident) or from scheduled alerts (single or multiple events and incidents).

Another approach is to customize the Splunk Add-on for ServiceNow by modifying the /opt/splunk/etc/apps/Splunk_TA_snow/local/alert_actions.conf file with the following configuration, which should be applied on your deployer and pushed to your Search Head Cluster (SHC):

[snow_incident]
param._cam = {\
"category": ["others"],\
"task": ["others"],\
"subject": ["others"],\
"technology": [{"vendor": "unknown", "product": "unknown"}],\
"supports_adhoc": true\
}
param.state = 1
param.correlation_id = $job.sid$
param.configuration_item = splunk
param.contact_type =
param.assignment_group =
param.category =
param.subcategory =
param.account = splunk_integration
param.short_description =

All of the param.* fields can be hardcoded in this configuration file to prepopulate the ad hoc invocation, if that is your preference. If you need any further assistance, please let me know. Note: using both add-ons makes it possible to send notables from the ES Incident Review page to ServiceNow.

2. Notable closure: updating Splunk notables when incidents are opened or closed in ServiceNow (configured on the ServiceNow side).

Step 1: Create an outbound REST message in ServiceNow
Navigate to System Web Services > Outbound > REST Message in your ServiceNow instance. Click New to create a new REST message. Name the message and specify the endpoint, which should be the URL of your Splunk instance.

Step 2: Define HTTP methods
In the new REST message, go to the HTTP Methods related list. Create a new record and select the appropriate HTTP method (usually POST). In the Endpoint field, add the specific API endpoint for updating notables.

Step 3: Define headers and parameters
If your Splunk instance requires specific headers or parameters, define them in this step. For example, you may need to set authentication headers or other required parameters.

Step 4: Create a business rule
Navigate to System Definition > Business Rules in ServiceNow and create a new business rule:
- Set the table to Incident.
- Define the conditions to trigger the rule, typically "After" an insert or update when the incident state changes to "Closed".
- In the Advanced tab, write a script to send the REST message when the specified conditions are met.
Here’s a sample script:

// Sample business rule script to send the REST message
var restMessage = new sn_ws.RESTMessageV2();
restMessage.setHttpMethod('POST'); // or 'PUT'
restMessage.setEndpoint('https://your-splunk-instance/api/update_notables'); // update with your endpoint
restMessage.setRequestHeader('Content-Type', 'application/json');
restMessage.setRequestHeader('Authorization', 'Bearer your_api_token'); // if required

var requestBody = {
    "incident_id": current.sys_id,
    "state": current.state
    // add other relevant fields here
};
restMessage.setRequestBody(JSON.stringify(requestBody));

var response = restMessage.execute();
var responseBody = response.getBody();
var httpStatus = response.getStatusCode();
// Handle the response as needed

Step 5: Test the integration
Close an incident in ServiceNow and verify that the corresponding notable is also closed in Splunk. Ensure that you replace 'Your REST Message' and 'Your HTTP Method' with the actual names you provided when creating the REST message, and adjust parameters and headers as required by your Splunk instance's API.

Additional configuration
To properly configure the REST call for updating notables in Splunk, ensure you pass the necessary parameters and headers, particularly the rule ID, as described in the Notable Event API reference for /services/notable_update.

Splunk notable update endpoint
Endpoint URL: https://<host>:<mPort>/services/notable_update
HTTP method: POST

If this reply is helpful, karma would be appreciated.
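To wire the business rule to the notable_update endpoint specifically, the request would look roughly like the sketch below. This is a minimal, hedged sketch, not a tested implementation: it assumes the ServiceNow incident stores the notable's rule UID in its correlation ID field (the alert_actions.conf above populates that with $job.sid$, so verify what your setup actually stores there), and that status 5 maps to "Closed" in your ES instance (status IDs vary per environment). The host, port and service account credentials are placeholders.

// Hedged sketch: close the originating notable via /services/notable_update.
// Assumes current.correlation_id holds the notable's rule UID and that
// status 5 means "Closed" in your ES install - verify both before use.
var restMessage = new sn_ws.RESTMessageV2();
restMessage.setHttpMethod('POST');
restMessage.setEndpoint('https://your-splunk-instance:8089/services/notable_update');
restMessage.setBasicAuth('svc_servicenow', 'your_password'); // or token auth, per your setup
// notable_update expects form-encoded parameters rather than a JSON body
restMessage.setRequestHeader('Content-Type', 'application/x-www-form-urlencoded');
var body = 'ruleUIDs=' + encodeURIComponent(current.correlation_id) +
    '&status=5' +
    '&comment=' + encodeURIComponent('Closed from ServiceNow incident ' + current.number);
restMessage.setRequestBody(body);
var response = restMessage.execute();
gs.info('notable_update returned HTTP ' + response.getStatusCode());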
You might be better off using eventstats to add the average to all the events, then use the where command to keep the events you want to delete, then remove the average field (with the fields command) before deleting the events.
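Applied to the humidity case, that pipeline might look like the sketch below. This is only a sketch: the index and sourcetype names are placeholders, and the outlier test (humidity above 100 percent or more than double the average) is an assumption to illustrate the shape of the pipeline. Run it without the final delete first to confirm exactly which events would be removed.

index=your_index sourcetype=your_sourcetype
| eventstats avg(humidity) AS avg_humidity
| where humidity > 100 OR humidity > 2 * avg_humidity
| fields - avg_humidity
| delete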
Hi, delete is not a must-have... excluding the faulty results from the search is another option... My logic: timechart avg > get the avg min and avg max from this timechart > exclude events outside the min/max avg > new timechart
Hi @CMEOGNAD , at first, I suppose that you know that you must have the can_delete role associated with your user. Then, I suppose that you know that this is a logical, not a physical, removal: in other words, removed events are marked as deleted but not removed from the buckets until the end of the bucket life cycle, so the removal gains you nothing in terms of storage or license (the events are already indexed). Anyway, I'm not sure that it's possible to apply the delete command after a streaming command: you should select the events to delete in the main search and apply the delete command after it, as in the sketch below. Ciao. Giuseppe
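A minimal sketch of that pattern, assuming placeholder index/sourcetype names and that anything above 100 percent humidity is invalid:

index=your_index sourcetype=your_sourcetype humidity>100
| delete

Run the first line on its own first to verify that it matches only the events you really want to mark as deleted.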
Hi Community, I have a data source that sometimes submits faulty humidity data like 3302.4 percent. To clean up / delete these outlier events, I built a timechart avg to get the real humidity curve, and from this curve I get the max and min with stats, to get the upper and bottom bounds of the curve. ...but my search won't work, and I need your help. Here is a makeresults sample:

| makeresults format=json data="[{\"_time\":\"1729115947\", \"humidity\":70.7},{\"_time\":\"1729115887\", \"humidity\":70.6},{\"_time\":\"1729115827\", \"humidity\":70.5},{\"_time\":\"1729115762\", \"humidity\":30.9},{\"_time\":\"1729115707\", \"humidity\":70.6}]"
    [ search
      | timechart eval(round(avg(humidity),1)) AS avg_humidity
      | stats min(avg_humidity) as min_avg_humidity ]
| where humidity < min_avg_humidity
```| delete ```
Hi @PickleRick, Thank you for your suggestions.

After following your suggestions, the configurations are now working correctly for my use case. Here are the changes I made to the [route_to_teamid_index] stanza in transforms.conf:

1) Set FORMAT = $1
2) Updated SOURCE_KEY = MetaData:Source

Current working configs for my use case:

----- props -----
#custom-props-for-starflow-logs
[source::.../starflow-app-logs...]
TRANSFORMS-set_new_sourcetype = new_sourcetype
TRANSFORMS-set_route_to_teamid_index = route_to_teamid_index

----- transforms -----
#custom-transforms-for-starflow-logs
[new_sourcetype]
REGEX = .*
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::aws:kinesis:starflow
WRITE_META = true

[route_to_teamid_index]
REGEX = .*\/starflow-app-logs(?:-[a-z]+)?\/([a-zA-Z0-9]+)\/
SOURCE_KEY = MetaData:Source
FORMAT = $1
DEST_KEY = _MetaData:Index
WRITE_META = true

Previously, the configuration had SOURCE_KEY = source, which was causing issues. The SOURCE_KEY = <field> setting tells Splunk which KEY the regex should be applied to. In my configuration it was set to "source", so Splunk was likely not applying the regex to the source metadata at index time (index-time KEYs are case-sensitive and must be written exactly as listed, e.g. MetaData:Source). After spending time reading through transforms.conf.spec, I noticed a specific mention of this under the global settings:

SOURCE_KEY = <string>
* NOTE: This setting is valid for both index-time and search-time field extractions.
* Optional. Defines the KEY that Splunk software applies the REGEX to.
* For search time extractions, you can use this setting to extract one or more values from the values of another field. You can use any field that is available at the time of the execution of this field extraction.
* For index-time extractions use the KEYs described at the bottom of this file.
* KEYs are case-sensitive, and should be used exactly as they appear in the KEYs list at the bottom of this file. (For example, you would say SOURCE_KEY = MetaData:Host, *not* SOURCE_KEY = metadata:host.)

KEYs
MetaData:Source : The source associated with the event.

Thank you sincerely for all of your genuine help!
Hi @Real_captain , you could try using area charts, possibly colouring the min area white so that only the area between min and max appears coloured, as in the sketch below. Ciao. Giuseppe
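A minimal sketch of that trick, assuming made-up field names (value stands in for whatever you are charting): compute the min and the max-minus-min band, stack the two as areas, and give the bottom series the background colour so only the band shows.

| timechart avg(value) AS CurrentWeek min(value) AS Min max(value) AS Max
| eval Band = Max - Min
| fields _time CurrentWeek Min Band

In the chart formatting, set Min and Band to stacked area, keep CurrentWeek as a line overlay, and set the colour of Min to white (or fully transparent). Band then renders as the filled region between the min and max lines.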
Hi @tbayer82 , the order of filters isn't relevant, but if you have OR operators I'd prefer to use parentheses:

index=* (dstip="192.168.1.0/24" OR srcip="192.168.1.0/24") action=deny

and you don't need the AND operator, as it is the default. Ciao. Giuseppe
Hi @Strangertinz , check the results in the rising column field again: usually the issue is there. You get results when executing the SQL query in DB Connect, but the input extracts only the records with rising column values greater than the checkpoint, so if the rising column isn't correct, or there are duplicated values in it, you risk losing records. You can check for duplicates directly in the database, as in the sketch below. Ciao. Giuseppe
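A quick sketch of that duplicate check (table and column names are placeholders for your actual table and rising column): if this returns any rows, a checkpoint taken on one of those values can silently skip the other records sharing it.

SELECT your_rising_column, COUNT(*) AS dup_count
FROM your_table
GROUP BY your_rising_column
HAVING COUNT(*) > 1
ORDER BY your_rising_column DESC;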
Hi Team, I am fetching unique "ITEM" values with a first SQL query running on one database, then passing those values to another SQL query to fetch the corresponding values from a second database.

First SQL query:

select distinct a.item
from price a, skus b, deps c, supp_country s
where zone_id in (5, 25)
and a.item = b.sku
and b.dept = c.dept
and a.item = s.item
and s.primary_supp_ind = 'Y'
and s.primary_pack_ind = 'Y'
and b.dept in (7106, 1666, 1650, 1651, 1654, 1058, 4158, 4159, 489, 491, 492, 493, 495, 496, 497, 498, 499, 501, 7003, 502, 503, 7004, 450, 451, 464, 465, 455, 457, 458, 459, 460, 461, 467, 494, 7013, 448, 462, 310, 339, 7012, 7096, 200, 303, 304, 1950, 1951, 1952, 1970, 1976, 1201, 1206, 1207, 1273, 1352, 1274, 1969, 1987, 342, 343, 7107, 7098, 7095, 7104, 2101, 2117, 7107, 7098, 1990, 477, 162, 604, 900, 901, 902, 903, 904, 905, 906, 908, 910, 912, 916, 918, 7032, 919, 7110, 7093, 7101, 913, 915, 118, 119, 2701, 917)
and b.js_status in ('CO');

Second SQL query:

WITH RankedData AS (
    SELECT Product_Id, BusinessUnit_Id, Price, LastUpdated,
           ROW_NUMBER() OVER (PARTITION BY Product_Id, BusinessUnit_Id ORDER BY LastUpdated DESC) AS RowNum
    FROM RETAIL.DBO.CAT_PRICE(nolock)
    WHERE BusinessUnit_Id IN ('zone_5', 'zone_25')
    AND Product_Id IN ($ITEM$)
)
SELECT Product_Id, BusinessUnit_Id, Price, LastUpdated
FROM RankedData
WHERE RowNum = 1;

When I use the map command as shown below, the expected results are fetched, but only 10k records, per the map command's limitations. I want to fetch all the records (around 30K).

Splunk query:

| dbxquery query="First SQL query" connection="ABC"
| eval comma="'"
| eval ITEM='comma' + 'ITEM' + 'comma' + ","
| mvcombine ITEM
| nomv ITEM
| fields - comma
| eval ITEM=rtrim(tostring(ITEM),",")
| map search="| dbxquery query=\"Second SQL query\" connection=\"XYZ\""

But when I use the join command as shown below to get all the results (more than 10K), I am not getting the desired output: the output only contains results from the first query. I tried replacing the column name Product_Id in the second SQL with ITEM at all places, but still no luck.

| dbxquery query="First SQL query" connection="ABC"
| fields ITEM
| join type=outer ITEM [ search | dbxquery query="Second SQL query" connection="XYZ" ]

Could someone help me understand what is going wrong and how I can get all the matching results from the second query?
Hi @Erendouille , the only way is to tune the Correlation Search, filtering out events with "unknown" or "NULL", as in the sketch below. One hint: don't modify the Correlation Searches themselves; clone them and modify the clones in a custom app (called e.g. "SA-SOC"). Ciao. Giuseppe
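A minimal sketch of such a filter, appended to the cloned correlation search's SPL (the field name user is a placeholder for whichever field carries the "unknown"/"NULL" values in your events):

... existing correlation search ...
| where NOT (user IN ("unknown", "NULL") OR isnull(user))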
Hello everyone, and thanks in advance for your help. I'm very new to this subject, so if anything is unclear, I'll try to explain my problem in more detail. I'm using Splunk 9.2.1, and I'm trying to generate a PDF from one of my dashboards over the last 24 hours, using a Splunk API call. I'm using a POST request to the ".../services/pdfgen/render" endpoint. First, I couldn't find any documentation on this matter. Furthermore, even when looking at $SPLUNK_HOME/lib/python3.7/site-packages/splunk/pdf/pdfgen_*.py (endpoint, views, search, utils), I couldn't really understand which arguments to use to ask for the last 24 hours of data. I know it should be possible, because it is doable in the Splunk GUI, where you can choose a time range and render according to it. I saw something looking like time range args, et and lt, which should be earliest time and latest time, but I don't know what type of time value they expect, and trying random things didn't get me anywhere. If you know anything on this subject, please help. Thank you!
It's usually nice to actually ask a question after reporting the current state. Typically, if the search is properly defined and scheduled but is not being run, the issue is with resources. Are you sure your SH(C) is not overloaded and you have no delayed/skipped searches? Did you check the scheduler's logs?
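A quick sketch for checking the scheduler's logs (substitute the name of your saved search): the status and reason fields show whether runs are being skipped and why.

index=_internal sourcetype=scheduler savedsearch_name="your_search_name"
| stats count BY status, reason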
Hi, I want to know if it is possible to have a line chart with the area between the max and min values filled with color. Example: for the chart below, we will be adding 2 more lines (Max and Min), and we would like to have color filled in the area between the Max and Min lines. Current query to generate the 3 lines:

| table Start_Time CurrentWeek "CurrentWeek-1" "CurrentWeek-2"

2 more lines (Max and Min) need to be added to the above line chart, with the color filled between Max and Min.
Up
Well. This is a fairly generic question, and to answer it you have to look into your own data. The Endpoint datamodel definition is fairly well known, and you can browse through its details any time in the GUI. You know which indexes the datamodel pulls the events from. So you must check the data quality in your indexes and check whether the sourcetypes have proper extractions and whether your sources provide you with relevant data. If there is no data in your events, what is Splunk supposed to do? Guess? It's not about repairing a datamodel, because the datamodel is just an abstract definition. It's about repairing your data or its parsing rules so that the necessary fields are extracted from your events. That's what CIM compliance means. If you have a TA for a specific technology which tells you it's CIM-compliant, you can expect the fields to be filled properly (and you could file a bug report if they aren't ;-)). But sometimes TAs require you to configure your source in a specific way, because otherwise not all relevant data is sent in the events. So it all boils down to having data and knowing your data.
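A quick sketch for measuring that field coverage with tstats (Endpoint.Processes and its process_name field are just one example dataset; substitute whichever dataset and field your use case depends on):

| tstats count AS total_events, count(Processes.process_name) AS events_with_process_name FROM datamodel=Endpoint.Processes

A large gap between the two numbers means the events are mapped into the datamodel but the extractions (or the source's own logging) are not filling the field.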
OK. I assume you're talking about a DBConnect app installed on a HF in your on-prem environment, right? If you're getting other logs from that HF (_internal, some other inputs), that means that the HF is sending the data. It's the dbconnect input that's not pulling the data properly from the source database. (the dbconnect doesn't "send" anything on its own; it just gets the data from the source and lets Splunk handle it like any other input). So check your _internal for anything related to that input.
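A starting point for that check (a sketch; the exact source names vary a little between DB Connect versions, so loosen the filter if it returns nothing):

index=_internal source=*splunk_app_db_connect* ERROR
| stats count BY source

Any recurring errors for the input's connection or query (bad credentials, an unreachable database, a broken rising column checkpoint) should show up here.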
1. Most people don't speak Japanese here.
2. 7.3 is a relatively old version. Are you sure you meant that one? Not 9.3?
3. Regardless, if you can connect to localhost on port 8000, it seems that your Splunk instance is running. If you cannot connect remotely, it means that either splunkd.exe is listening on the loopback interface only (which you can verify with netstat -an -p tcp) or you are unable to reach the server on a network level (which, depending on your network setup, means either connections being filtered by Windows Firewall or problems with routing or filtering on your router).
Yeah, I know the problem was quite specific. Sorry for the late answer, and thanks for your help. I was able to determine what failed: the GET was actually supposed to be a POST request. I don't really know why, but one Splunk error message said that GET is deprecated for pdfgen. Anyway, thanks again.
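For anyone landing here later, a hedged sketch of what such a POST might look like with curl (the parameter names input-dashboard, namespace, et and lt come from reading the pdfgen_*.py sources mentioned above, and et/lt appear to accept epoch times or relative time modifiers like -24h; the host, port, credentials and dashboard name are placeholders, so verify all of this against your own instance):

curl -k -u admin:yourpassword https://your-splunk-instance:8089/services/pdfgen/render \
    -d input-dashboard=your_dashboard_id \
    -d namespace=search \
    -d et=-24h \
    -d lt=now \
    -o dashboard.pdf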
Dear all, I'm trying to search for denied actions in a subnet, regardless of whether it is the source or the destination. I tried these without success; maybe you can help me out. Thank you!

index=* AND src="192.168.1.0/24" OR dst="192.168.1.0/24" AND action=deny

index=* action=deny AND src_ip=192.168.1.0/24 OR dst_ip=192.168.1.0/24

Just found it:

index=* dstip="192.168.1.0/24" OR srcip="192.168.1.0/24" action=deny