All Posts


Hi, if you cannot connect to the license manager (LM) from a peer, you have 72 hours to fix the situation. After that you cannot run normal searches until it is fixed, and there is no automatic timeout after which it starts working again! Check that you have a connection from your peer to the LM; it usually uses port 8089. You must also have the same pass4SymmKey in the [general] stanza on both the peer and the LM for the connection to work. If the physical connection between the hosts is working, then just look at the LM's _internal log for the reason why it didn't accept the peer's connection. r. Ismo
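A minimal sketch of those two checks, assuming the default management port and with placeholder values you would replace with your own:

# server.conf on both the peer and the license manager (the keys must match)
[general]
pass4SymmKey = <your_shared_key>

# from the peer, verify you can reach the LM's management port
curl -k https://<license-manager-host>:8089/services/server/info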
Hi, here are some good starting points for your journey with Splunk: https://lantern.splunk.com/Splunk_Platform/Getting_Started https://docs.splunk.com/Documentation/Splunk/9.1.2/Data/WhatSplunkcanmonitor Happy Splunking! r. Ismo
I suppose you have found a solution to this by now. But if not, here is how I solved it, using the itsi_group_id field from index=itsi_grouped_alerts: https://<your_splunk_instance>/en-GB/app/itsi/itsi_event_management?earliest=-24h&episodeid=$result.itsi_group_id$ I used this to make a link from ServiceNow directly to the episode in ITSI Alerts and Episodes. In the Configure Action part of the Create/Update ServiceNow Incident action in the NEAP, I put the following in Custom Fields to make the link: comments=[code]<a href="https://<your_splunk_instance>/en-GB/app/itsi/itsi_event_management?earliest=-24h&episodeid=$result.itsi_group_id$" target="_blank">Link to Splunk ITSI Alerts and Episodes<br></a>[/code]
Hi, it's probably as @richgalloway said: you have exceeded your license quota too many times and need a reset key, and you probably also need to increase your current license. You can check the situation under Settings -> Licensing, where you can see whether you have a valid license. For statistics, click the "Usage Report" button. It opens a new dashboard where you can check "Pool Usage Warnings", which tells you whether you have exceeded your license quota too many times. You must do this on your license server. If you have a distributed environment, then do it on the MC node with MC -> Indexing -> License Usage. r. Ismo
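If you prefer to check from a search instead of the dashboard, here is a minimal sketch against the internal license usage log (run it on the license server; it assumes the default _internal retention covers the period you care about):

index=_internal source=*license_usage.log type=Usage
| timechart span=1d sum(b) as bytes_used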
Thanks @shoaibalimir - Hi @richgalloway, Shoaib is my colleague, and these are the limitations due to which we are not using the add-on. Can you please suggest some alternatives? Thank you.
Hello, I can successfully disable/enable tokens using the web interface, but the curl command fails when trying to disable a token using the REST API. Executing the GET method works OK:

curl -k -u USER1:USER_PASW -X GET https://localhost:8089/services/authorization/tokens -d id=80e7402b9940a7ac761f259d1e3e49bad1417394924ad0909c8edfd8eb92800e

But PUT fails with no clear error message:

curl -k -u USER1:USER_PASW -X PUT https://localhost:8089/services/authorization/token/ron -d id=80e7402b9940a7ac761f259d1e3e49bad1417394924ad0909c8edfd8eb92800e -d status=disabled

The result is:

<?xml version="1.0" encoding="UTF-8"?>
<response>
<messages>
<msg type="ERROR">Not Found</msg>
</messages>
</response>

I tried switching the username between ron and david. What's wrong, and how can I get a more informative problem description? Thanks in advance, David
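One likely cause, offered as an assumption since the error is only "Not Found": the endpoint name is the plural collection (tokens, not token), and status changes are sent with POST rather than PUT. A minimal sketch of the disable call under that assumption:

curl -k -u USER1:USER_PASW -X POST https://localhost:8089/services/authorization/tokens/ron -d id=80e7402b9940a7ac761f259d1e3e49bad1417394924ad0909c8edfd8eb92800e -d status=disabled

Appending ?output_mode=json to the URL may also return a more descriptive error than the XML response.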
Hi @vijreddy30, you need a Cluster Manager only if you have an Indexer Cluster, so at least two indexers. Anyway, you cannot use a Deployment Server as a Cluster Manager, and you cannot use the same server for these roles: the Cluster Manager must have a dedicated server, while the Deployment Server can be on a shared server (though not a Search Head, Indexer, or Cluster Manager) only if it has to manage fewer than 50 clients; otherwise a dedicated server is required. Ciao. Giuseppe
Hi, based on this document https://docs.splunk.com/Documentation/Splunk/9.1.2/Deploy/Manageyourdeployment you cannot combine the DS and CM on the same server instance. r. Ismo
If you do not need the values, you can simplify to

| spath input=response path=errors{} output=errors
| mvexpand errors
| rex field=errors mode=sed "s/(\bwith (sub|\w+ id)) (\S+)/\1 */ s/(and \w+ datetime) (\S+ \S+)/\1 */ s/\band test (\w+ id) (\S+)/and test \1 */"
| stats count by errors

Note: as @dtburrows3 points out, you probably do not want to count by individual date_time. errors{} is an array, so you need mvexpand to handle possible multiple values. When substituting multiple values, sed mode is more readable than nested replace. Using the sample input you illustrated, this is the output (errors, with count in parentheses):

- Message: Payment failed. Reason: Hi, we attempted to process the transaction but it seems there was an error. Please check your information and try again. If the problem persists please contact your bank. (1)
- Unable to retrieve User Profile with sub * as it does not exist (3)
- Unallocated LRW seat not found with product id * and start datetime * and test location id * (2)

Here is an emulation you can play with and compare with real data:

| makeresults
| eval response = split("{\"errors\": [\"Message: Payment failed. Reason: Hi, we attempted to process the transaction but it seems there was an error. Please check your information and try again. If the problem persists please contact your bank.\"]} {\"errors\": [\"Unable to retrieve User Profile with sub '2415d' as it does not exist\"]} {\"errors\": [\"Unable to retrieve User Profile with sub 'dfadf' as it does not exist\"]} {\"errors\": [\"Unable to retrieve User Profile with sub 'fdsgad' as it does not exist\"]} {\"errors\": [\"Unallocated LRW seat not found with product id fdafdsaddsfa and start datetime utc 2024-01-06T05:30:00+00:00 and test location id dfafdfa\"]} {\"errors\": [\"Unallocated LRW seat not found with product id sfgdfa and start datetime utc 2024-01-06T05:30:00+00:00 and test location id dsfadfsa\"]}", " ")
| mvexpand response
``` data emulation above ```
Hi @dania_abujuma, I'm not an expert in iOS, but there's this add-on https://splunkbase.splunk.com/app/6561 that could guide you in iPad log ingestion. Read all the documentation, because your iPad generates a large amount of logs and you have to set up your inputs correctly. Ciao. Giuseppe
Hello Splunkers! Is there a way to collect iPad logs? I saw the Mint iOS SDK documentation, but I don't find it clear.
Hi @Poojitha, I think this looks like the msg token issue. Could you please try this:

| makeresults
| eval custommessage="sample sendemail message body"
| eval dest="mailid@domain.com"
| sendemail to="$result.dest$" message="$result.custommessage$"

Then check if the message body appears as you are expecting. More details:
https://docs.splunk.com/Documentation/Splunk/9.1.2/SearchReference/sendemail
https://docs.splunk.com/Documentation/Splunk/9.1.2/Alert/EmailNotificationTokens

Upvotes / karma points are appreciated, thanks.
Alright, I figured you would want the fields extracted with their intended field names instead of any-and-all matches being contained in a single multivalue field, so here is SPL to do that.

<base_search>
``` this SPL requires a field named "data" containing a raw string as its value ```
``` this can be macroed by replacing the input field "data" and lookup name "test_regex_lookup.csv" ```
``` example: | `extract_regex_from_lookup(data, test_regex_lookup.csv)` ```
``` pull in all regex patterns as an array of json objects into the parent search as a new field ```
| join type=left
    [| inputlookup test_regex_lookup.csv
    | tojson str(pattern_type) str(regex) output_field=regex_json
    | stats values(regex_json) as regex_json
    | eval regex_array=mv_to_json_array(regex_json)
    | fields + regex_array]
``` parse array of json objects into a multivalued field of json objects ```
| eval regex_json=json_array_to_mv(regex_array)
``` remove array (no longer needed) ```
| fields - regex_array
``` search the raw text of field "data" for matches against any of the regex patterns contained in the regex_json multivalue field ```
| eval regex_match_json=case(
    mvcount(regex_json)==1, if(match(data, spath(regex_json, "regex")), json_set(regex_json, "matches", replace(data, ".*(".spath(regex_json, "regex").").*", "\1")), null()),
    mvcount(regex_json)>1, mvmap(regex_json, if(match(data, spath(regex_json, "regex")), json_set(regex_json, "matches", replace(data, ".*(".spath(regex_json, "regex").").*", "\1")), null()))
    )
``` remove regex_json (no longer needed) ```
| fields - regex_json
``` (optional) multivalued field containing all pattern matches ```
| eval all_regex_matches=mvmap(regex_match_json, spath(regex_match_json, "matches"))
``` create temporary json object to hold key/value pairs for pattern_type attribution ```
| eval tmp_json=json_object()
``` loop through the regex_match_json multivalue field and assign a key/value entry to "tmp_json" for the (pattern_type: matches) ```
| foreach mode=multivalue regex_match_json
    [| eval tmp_json=json_set(tmp_json, spath('<<ITEM>>', "pattern_type"), spath('<<ITEM>>', "matches"))]
``` full spath against tmp_json to get field extractions for all matches against the pattern_types ```
| spath input=tmp_json
``` remove temporary json object (no longer needed) ```
| fields - tmp_json
``` (optional) remove regex_match_json field ```
| fields - regex_match_json
``` end of `extract_regex_from_lookup(2)` macro ```
``` table all extracted fields derived from "data" field and regex stored in lookup "test_regex_lookup.csv" ```
| table _time, data, all_regex_matches, *

I am pretty happy with how this turned out, but there may be an easier way of doing it. I would be glad to hear anybody else chime in on an easier way of accomplishing this. I have just always had problems with piping data from a lookup into a parent search as executable SPL, other than pulling it into an eval of some sort.

Reference screenshot of sample output

So the current SPL will assign the match to its corresponding row's pattern_type value from the lookup as a field name. In this example it is SSN, date, and name.
Hi, after downloading the Python agent from Downloads and extracting the .tar file, we get a jar file that is related to Java. There were no .py files. I just wanted to know how to install the Python agent from AppDynamics Downloads. Thanks, Anusha
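In case it helps, the Python agent is normally installed from PyPI rather than from a downloaded archive. A minimal sketch, assuming the package is named appdynamics on PyPI and your app is a WSGI app served by gunicorn (both assumptions; adjust to your setup):

# install the agent into your application's Python environment
pip install appdynamics

# run your app through the agent's proxy launcher, pointing it at a config file
pyagent run -c /path/to/appdynamics.cfg -- gunicorn myapp:app

The appdynamics.cfg file carries the controller host, account, and access key settings for your environment.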
thank you very much
Ask your Splunk account team for a Reset license.
Hi, I'm trying to bring all of the Datadog metrics, logs, and telemetry data into Splunk. I tried the Datadog Add-on for Splunk, but it has metric limitations: only system metrics and some other limited metrics are coming into Splunk, not all of them. Also, the data points aren't lining up as they should when compared to Datadog. Metric fetching seems restricted to limited system metrics, but I have various plug-ins in Datadog supporting applications like MongoDB, each with their out-of-the-box (OOTB) metrics streaming into Datadog, and I need those ingested into Splunk as well. Please guide me on how to achieve this.
Not the prettiest solution, but it shouldn't require updating, since it loops through all regexes from the lookup using the mvmap() function.

| makeresults
| eval data="very personal info on John Doe: Birthday: 04/12/1973 and SSN: 123-45-6789"
| join type=left
    [| inputlookup test_regex_lookup.csv
    | tojson str(pattern_type) str(regex) output_field=regex_json
    | stats values(regex_json) as regex_json
    | eval regex_array=mv_to_json_array(regex_json)
    | fields + regex_array]
| eval regex_json=json_array_to_mv(regex_array)
| fields - regex_array
| eval regex_patterns=case(
    mvcount(regex_json)==1, spath(regex_json, "regex"),
    mvcount(regex_json)>1, mvmap(regex_json, spath(regex_json, "regex"))
    )
| eval regex_match_json=case(
    mvcount(regex_json)==1, if(match(data, spath(regex_json, "regex")), json_set(regex_json, "matches", replace(data, ".*(".spath(regex_json, "regex").").*", "\1")), null()),
    mvcount(regex_json)>1, mvmap(regex_json, if(match(data, spath(regex_json, "regex")), json_set(regex_json, "matches", replace(data, ".*(".spath(regex_json, "regex").").*", "\1")), null()))
    )
| fields - regex_json, regex_patterns
| eval all_regex_matches=mvmap(regex_match_json, spath(regex_match_json, "matches"))

It also makes another field (regex_match_json) that maps each extraction back to the pattern that matched it, for reference.
That is a pretty good solution. But I was looking for something that wouldn't require updating the query if another regex is added to the list.