All Posts

Thanks @shoaibalimir. Hi @richgalloway, Shoaib is my colleague, and these are the limitations due to which we are not using the add-on. Can you please suggest alternatives? Thank you.
Hello, I can successfully disable/enable tokens using the web interface, but the curl command fails when I try to disable a token using the REST API. Executing the GET method works OK:

curl -k -u USER1:USER_PASW -X GET https://localhost:8089/services/authorization/tokens -d id=80e7402b9940a7ac761f259d1e3e49bad1417394924ad0909c8edfd8eb92800e

But the PUT fails with no clear error message:

curl -k -u USER1:USER_PASW -X PUT https://localhost:8089/services/authorization/token/ron -d id=80e7402b9940a7ac761f259d1e3e49bad1417394924ad0909c8edfd8eb92800e -d status=disabled

The result is:

<?xml version="1.0" encoding="UTF-8"?> <response> <messages> <msg type="ERROR">Not Found</msg> </messages> </response>

I tried switching the username between ron and david. What's wrong, and how can I get a more informative problem description? Thanks in advance, David
Hi @vijreddy30, you need a Cluster Manager only if you have an Indexer Cluster, so at least two Indexers. Anyway, you cannot use a Deployment Server as Cluster Manager, and you cannot use the same server for both roles: the Cluster Manager must have a dedicated server, while the Deployment Server can run on a shared server (though not with a Search Head, Indexer, or Cluster Manager) only if it has to manage fewer than 50 clients; otherwise a dedicated server is required. Ciao. Giuseppe
Hi, based on this document https://docs.splunk.com/Documentation/Splunk/9.1.2/Deploy/Manageyourdeployment you cannot combine the DS and CM on the same server instance. r. Ismo
If you do not need the values, you can simplify to

| spath input=response path=errors{} output=errors
| mvexpand errors
| rex field=errors mode=sed "s/(\bwith (sub|\w+ id)) (\S+)/\1 */ s/(and \w+ datetime) (\S+ \S+)/\1 */ s/\band test (\w+ id) (\S+)/and test \1 */"
| stats count by errors

Note: As @dtburrows3 points out, you probably do not want to count by individual date_time. errors{} is an array, so you need mvexpand to handle possible multiple values. When substituting multiple values, sed mode is more readable than nested replace.

Using the sample input you illustrated, this is the output:

errors | count
Message: Payment failed. Reason: Hi, we attempted to process the transaction but it seems there was an error. Please check your information and try again. If the problem persists please contact your bank. | 1
Unable to retrieve User Profile with sub * as it does not exist | 3
Unallocated LRW seat not found with product id * and start datetime * and test location id * | 2

Here is an emulation you can play with and compare with real data:

| makeresults
| eval response = split("{\"errors\": [\"Message: Payment failed. Reason: Hi, we attempted to process the transaction but it seems there was an error. Please check your information and try again. If the problem persists please contact your bank.\"]} {\"errors\": [\"Unable to retrieve User Profile with sub '2415d' as it does not exist\"]} {\"errors\": [\"Unable to retrieve User Profile with sub 'dfadf' as it does not exist\"]} {\"errors\": [\"Unable to retrieve User Profile with sub 'fdsgad' as it does not exist\"]} {\"errors\": [\"Unallocated LRW seat not found with product id fdafdsaddsfa and start datetime utc 2024-01-06T05:30:00+00:00 and test location id dfafdfa\"]} {\"errors\": [\"Unallocated LRW seat not found with product id sfgdfa and start datetime utc 2024-01-06T05:30:00+00:00 and test location id dsfadfsa\"]}", " ")
| mvexpand response
``` data emulation above ```
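The masking step can also be illustrated outside Splunk. Below is a minimal Python sketch of the same idea: normalize the variable identifiers to * before grouping, mirroring the sed-mode substitutions in the rex command above (the sample messages are abbreviated from the emulation):

```python
import re
from collections import Counter

errors = [
    "Unable to retrieve User Profile with sub '2415d' as it does not exist",
    "Unable to retrieve User Profile with sub 'dfadf' as it does not exist",
    "Unallocated LRW seat not found with product id sfgdfa and start datetime utc 2024-01-06T05:30:00+00:00 and test location id dsfadfsa",
]

def normalize(msg):
    # Mask variable identifiers so identical error shapes group together,
    # mirroring the three sed-mode substitutions in the rex command.
    msg = re.sub(r"(\bwith (?:sub|\w+ id)) (\S+)", r"\1 *", msg)
    msg = re.sub(r"(and \w+ datetime) (\S+ \S+)", r"\1 *", msg)
    msg = re.sub(r"\band test (\w+ id) (\S+)", r"and test \1 *", msg)
    return msg

# Analogous to "| stats count by errors" after the rex.
counts = Counter(normalize(e) for e in errors)
```

With these three samples, the two "User Profile" messages collapse into one normalized shape with a count of 2, and the seat message becomes one shape with a count of 1.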
Hi @dania_abujuma, I'm not an expert in iOS, but there's this add-on https://splunkbase.splunk.com/app/6561 that could guide you in iPad log ingestion. Read all the documentation, because your iPad generates a large amount of logs and you have to correctly set up your inputs. Ciao. Giuseppe
Hello Splunkers! Is there a way to collect iPad logs? I saw the Mint iOS SDK documentation, but I don't find it clear.
Hi @Poojitha, I think this looks like the msg token issue. Could you please try this:

| makeresults
| eval custommessage="sample sendemail message body"
| eval dest="mailid@domain.com"
| sendemail to="$result.dest$" message="$result.custommessage$"

then check if the message body appears as you are expecting. More details:
https://docs.splunk.com/Documentation/Splunk/9.1.2/SearchReference/sendemail
https://docs.splunk.com/Documentation/Splunk/9.1.2/Alert/EmailNotificationTokens

Upvotes / karma points are appreciated, thanks.
Alright, I figured you would want the fields extracted with their intended fieldnames instead of any-and-all matches being contained in a single multivalue field, so here is SPL to do that.

<base_search>
``` this SPL requires a field named "data" containing a raw string as its value ```
``` this can be macroed by replacing the input field "data" and lookup name "test_regex_lookup.csv" ```
``` example: | `extract_regex_from_lookup(data, test_regex_lookup.csv)` ```
``` pull in all regex patterns as an array of json objects into the parent search as a new field ```
| join type=left
    [ | inputlookup test_regex_lookup.csv
    | tojson str(pattern_type) str(regex) output_field=regex_json
    | stats values(regex_json) as regex_json
    | eval regex_array=mv_to_json_array(regex_json)
    | fields + regex_array ]
``` parse array of json objects into a multivalued field of json objects ```
| eval regex_json=json_array_to_mv(regex_array)
``` remove array (no longer needed) ```
| fields - regex_array
``` search the raw text of field "data" for matches against any of the regex patterns contained in the regex_json multivalue field ```
| eval regex_match_json=case(
    mvcount(regex_json)==1, if(match(data, spath(regex_json, "regex")), json_set(regex_json, "matches", replace(data, ".*(".spath(regex_json, "regex").").*", "\1")), null()),
    mvcount(regex_json)>1, mvmap(regex_json, if(match(data, spath(regex_json, "regex")), json_set(regex_json, "matches", replace(data, ".*(".spath(regex_json, "regex").").*", "\1")), null()))
    )
``` remove regex_json (no longer needed) ```
| fields - regex_json
``` (optional) multivalued field containing all pattern matches ```
| eval all_regex_matches=mvmap(regex_match_json, spath(regex_match_json, "matches"))
``` create temporary json object to hold key/value pairs for pattern_type attribution ```
| eval tmp_json=json_object()
``` loop through the regex_match_json multivalue field and assign a key/value entry to "tmp_json" for the (pattern_type: matches) ```
| foreach mode=multivalue regex_match_json
    [ | eval tmp_json=json_set(tmp_json, spath('<<ITEM>>', "pattern_type"), spath('<<ITEM>>', "matches")) ]
``` full spath against tmp_json to get field extractions for all matches against the pattern_types ```
| spath input=tmp_json
``` remove temporary json object (no longer needed) ```
| fields - tmp_json
``` (optional) remove regex_match_json field ```
| fields - regex_match_json
``` end of `extract_regex_from_lookup(2)` macro ```
``` table all extracted fields derived from "data" field and regex stored in lookup "test_regex_lookup.csv" ```
| table _time, data, all_regex_matches, *

I am pretty happy with how this turned out, but there may be an easier way of doing it; I'd be glad to have anybody else chime in on an easier way of accomplishing this. I have just always had problems with piping data from a lookup into a parent search as executable SPL, other than pulling it into an eval of some sort.

Reference screenshot of sample output.

So the current SPL will assign the match to its corresponding row's pattern_type value from the lookup as a fieldname. In this example it is SSN, date, and name.
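The core pattern here, independent of SPL, is: read (pattern_type, regex) pairs from a lookup, test each regex against the raw text, and store each match under its pattern_type as a field name. A minimal Python sketch of that logic (the in-memory rows stand in for the test_regex_lookup.csv lookup):

```python
import re

# Hypothetical stand-in for the test_regex_lookup.csv lookup:
# each row maps a pattern_type to its regex.
lookup_rows = [
    {"pattern_type": "SSN", "regex": r"\d{3}-\d{2}-\d{4}"},
    {"pattern_type": "date", "regex": r"\d{2}/\d{2}/\d{4}"},
]

def extract_fields(data, rows):
    # For each lookup row, keep the first match (if any) under its
    # pattern_type key -- the same shape the foreach/json_set loop builds.
    extracted = {}
    for row in rows:
        m = re.search(row["regex"], data)
        if m:
            extracted[row["pattern_type"]] = m.group(0)
    return extracted

fields = extract_fields(
    "very personal info on John Doe: Birthday: 04/12/1973 and SSN: 123-45-6789",
    lookup_rows,
)
```

Adding a new row to the lookup adds a new extracted field without changing the loop, which is the property the question was after.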
Hi, after downloading the Python agent and extracting the .tar file, we get a .jar file, which is related to Java. There were no .py files. Just wanted to know how to install the Python agent from AppDynamics Downloads. Thanks, Anusha
thank you very much
Ask your Splunk account team for a Reset license.
Hi, I'm actually trying to bring all the Datadog metrics, logs, and other telemetry data into Splunk. I tried the Datadog Add-on for Splunk, but it has limitations on metrics: only system metrics and some other limited metrics are coming into Splunk, not all of them. Also, the data points aren't lining up as they should when compared to Datadog. In terms of metrics fetching, it seems they're restricted to a limited set of system metrics, but I have different plug-ins in Datadog supporting various applications like MongoDB, which have their out-of-the-box (OOTB) metrics streaming into Datadog, and I need those ingested into Splunk as well. Please guide me on how to achieve this.
Not the prettiest solution, but it shouldn't require updating since it's looping through all regexes from the lookup using the mvmap() function.

| makeresults
| eval data="very personal info on John Doe: Birthday: 04/12/1973 and SSN: 123-45-6789"
| join type=left
    [ | inputlookup test_regex_lookup.csv
    | tojson str(pattern_type) str(regex) output_field=regex_json
    | stats values(regex_json) as regex_json
    | eval regex_array=mv_to_json_array(regex_json)
    | fields + regex_array ]
| eval regex_json=json_array_to_mv(regex_array)
| fields - regex_array
| eval regex_patterns=case(
    mvcount(regex_json)==1, spath(regex_json, "regex"),
    mvcount(regex_json)>1, mvmap(regex_json, spath(regex_json, "regex"))
    )
| eval regex_match_json=case(
    mvcount(regex_json)==1, if(match(data, spath(regex_json, "regex")), json_set(regex_json, "matches", replace(data, ".*(".spath(regex_json, "regex").").*", "\1")), null()),
    mvcount(regex_json)>1, mvmap(regex_json, if(match(data, spath(regex_json, "regex")), json_set(regex_json, "matches", replace(data, ".*(".spath(regex_json, "regex").").*", "\1")), null()))
    )
| fields - regex_json, regex_patterns
| eval all_regex_matches=mvmap(regex_match_json, spath(regex_match_json, "matches"))

It also makes another field (regex_match_json) to map the match back to the pattern that produced it, for reference.
That is a pretty good solution. But I was looking for something that wouldn't require updating the query if another regex is added to the list. 
I can get it working to an extent; not sure if this method will exactly fit your use case, but I will leave it here for you. So with a lookup named "test_regex_lookup.csv":

pattern_type | regex
date | \d{2}\/\d{2}\/\d{4}
SSN | \d{3}\-\d{2}\-\d{4}

We are able to pull these regex patterns into a parent search via eval and then use those patterns in another eval to extract data. Example:

| makeresults
| eval data="very personal info on John Doe: Birthday: 04/12/1973 and SSN: 123-45-6789"
``` pull in regex patterns from lookup ```
| eval ssn_regex=[ | inputlookup test_regex_lookup.csv where pattern_type="SSN" | fields + regex | eval regex="\"".'regex'."\"" | return $regex ],
    bday_regex=[ | inputlookup test_regex_lookup.csv where pattern_type="date" | fields + regex | eval regex="\"".'regex'."\"" | return $regex ]
``` use regex pattern fields to extract matches from another field "data" ```
| eval ssn=replace(data, ".*(".'ssn_regex'.").*", "\1"),
    bday=replace(data, ".*(".'bday_regex'.").*", "\1")

The resulting dataset looks something like the screenshot. I'm sure there are other methods that can work, or we can build upon this method further. I am curious about different ways of doing this as well, so I will leave updates if I figure out any other methods.

Update: I was able to shorten the SPL into a single eval by using the nifty lookup() function:

| makeresults
| eval data="very personal info on John Doe: Birthday: 04/12/1973 and SSN: 123-45-6789"
``` get regex pattern from lookup and utilize against raw data in another field to extract data into net-new field ```
| eval ssn=replace(data, ".*(".spath(lookup("test_regex_lookup.csv", json_object("pattern_type", "SSN"), json_array("regex")), "regex").").*", "\1"),
    bday=replace(data, ".*(".spath(lookup("test_regex_lookup.csv", json_object("pattern_type", "date"), json_array("regex")), "regex").").*", "\1")
Is it possible to store regex patterns in a lookup table so that they can be used in a search? For example, let's say I have the following regexes: "(?<regex1>hello)" and "(?<regex2>world)". My actual regexes are not simple word matches. I want to write another query that basically runs a bunch of regexes like

| rex field=data "regex1"
| rex field=data "regex2"
etc.

Is it possible to use a subsearch to extract the regexes and then use them as commands in the main query? I was trying something like

| makeresults 1
| eval data="Hello world"
[| inputlookup regex.csv
| streamstats count
| strcat "| rex field=data \"" regex "\"" as regexstring
| table regexstring
| mvcombine regexstring]

so that the subsearch outputs the following:

| rex field=data "(?<regex1>hello)"
| rex field=data "(?<regex2>world)"
>>> I am running this query against a large data set. Do the foreach loop and json functions have any limitations in that case, like results getting truncated?

1) May we know how large the data set is, so that we can suggest something better?
2) foreach does not have any such limitation, whereas spath has a 5,000-character limitation. But the docs give this overriding method: "By default, the spath command extracts all the fields from the first 5,000 characters in the input field. If your events are longer than 5,000 characters and you want to extract all of the fields, you can override the extraction character limit for all searches that use the spath command. To change this character limit for all spath searches, change the extraction_cutoff setting in the limits.conf file to a larger value."

https://docs.splunk.com/Documentation/Splunk/9.1.2/SearchReference/Spath
https://docs.splunk.com/Documentation/Splunk/9.1.2/SearchReference/Foreach

PS - Upvotes / like / karma points are appreciated, thanks.
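For reference, the override described in the docs is a stanza in limits.conf; a sketch of what it could look like (the value 10000 is illustrative — pick one larger than your longest events, and restart/reload as appropriate for your deployment):

```
[spath]
# default is 5000 characters; raise to cover the longest expected events
extraction_cutoff = 10000
```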
| ldapsearch domain="default" search="(&(samAccountType=000000000) (|(sAMAccountName=*)))" attrs="sAMAccountName, distinguishedName, userAccountControl, whenCreated, personalTitle, displayName, givenName, sn, mail, telephoneNumber, mobile, manager, department, co, l, st, accountExpires, memberOf"
| rex field=memberOf "CN=(?<memberOf_parsed>[^,]+)"
| eval memberOf=lower(replace(mvjoin(memberOf_parsed, "|"), " ", "_"))
| rex max_match=5 field=distinguishedName "OU=(?<dn_parsed>[^,]+)"
| eval category=lower(replace(mvjoin(dn_parsed, "|"), " ", "_"))
| eval priority=case(match(category, "domain_admin|disabled|hold|executive") OR match(memberOf, "domain_admins|enterprise_admins|schema_admins|administrators"), "critical", match(category, "contractor|service_account|external"), "high", match(category, "employees|training|user_accounts|users|administration"), "medium", 1==1, "unknown")
| eval watchlist=case(match(category,"disabled|hold"), "true", 1==1, "false")
| eval startDate=strftime(strptime(whenCreated,"%Y%m%d%H%M"), "%m/%d/%Y %H:%M")
| eval endDate=strftime(strptime(accountExpires,"%Y-%m-%dT%H:%M:%S%Z"), "%m/%d/%Y %H:%M")
| eval work_city=mvjoin(mvappend(l, st), ", ")
| rename sAMAccountName as identity, personalTitle as prefix, displayName as nick, givenName as first, sn as last, mail as email, telephoneNumber as phone, mobile as phone2, manager AS managedBy, department as bunit, co AS work_country
| fillnull value="unknown" category, priority, bunit
| table identity,prefix,nick,first,last,suffix,email,phone,phone2,managedBy,priority,bunit,category,watchlist,startDate,endDate,work_city,work_country,work_lat,work_long
| outputcsv xyz.csv

This is the search being used to generate a CSV file, and yes, it's the same add-on as you mentioned. I believe you're right that they're writing to a directory (on the same host as the HF) and ingesting it using an inputs.conf file, because in Splunk Cloud we cannot monitor directories directly from the cloud instance. Correct me if I'm wrong? Thanks.
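The priority assignment in that search is a first-match-wins case() over the category and memberOf strings. A minimal Python sketch of the same logic (function name and sample inputs are illustrative, the regex alternations come from the SPL above):

```python
import re

def assign_priority(category, member_of):
    # Mirrors the case() logic in the SPL: evaluate conditions in order,
    # first match wins, fall through to "unknown".
    if re.search(r"domain_admin|disabled|hold|executive", category) or \
       re.search(r"domain_admins|enterprise_admins|schema_admins|administrators", member_of):
        return "critical"
    if re.search(r"contractor|service_account|external", category):
        return "high"
    if re.search(r"employees|training|user_accounts|users|administration", category):
        return "medium"
    return "unknown"
```

Writing it this way makes the ordering visible: an account in a "disabled" OU is flagged critical even if it would also match a lower tier, exactly as the SPL case() behaves.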
Hello Everyone, We have a Splunk server installed and working. However, a month ago our license expired, and we just renewed it. Unfortunately, it didn't go without issues: we renewed our license, but now the system isn't working, and we are getting the following error message - "Error in 'litsearch' command: Your Splunk license expired, or you have exceeded your license limit too many times." We found the article below; however, we are unable to access it as it requires a Salesforce login. Error in 'litsearch' command: Your Splunk license expired or you have exceeded your license limit too many times. | Splunk (site.com) Can anyone help us figure out how to fix this issue? Thank you, Richard