All Posts


Hi @inventsekar,

There's a much simpler solution! The regular expression \X token will match any Unicode grapheme. Combined with a lookahead to match only non-whitespace characters, we can extract and count each grapheme:

| makeresults
| eval _raw="இடும்பைக்கு இடும்பை படுப்பர் இடும்பைக்கு இடும்பை படாஅ தவர்"
| rex max_match=0 "(?<char>(?=\\S)\\X)"
| eval length=mvcount(char)

length = 31

| makeresults
| eval _raw="இடும்பைக்கு"
| rex max_match=0 "(?<char>(?=\\S)\\X)"
| eval length=mvcount(char)

length = 6

We can condense that to a single eval expression:

| makeresults
| eval _raw="இடும்பைக்கு இடும்பை படுப்பர் இடும்பைக்கு இடும்பை படாஅ தவர்"
| eval length=len(replace(replace(_raw, "(?=\\S)\\X", "x"), "\\s", ""))

length = 31

You can then use the eval expression in a macro definition and call the macro directly:

| makeresults
| eval _raw="இடும்பைக்கு இடும்பை படுப்பர் இடும்பைக்கு இடும்பை படாஅ தவர்"
| eval length=`num_graphemes(_raw)`

To count whitespace characters as well, remove (?=\S) from the regular expression:

| makeresults
| eval _raw="இடும்பைக்கு இடும்பை படுப்பர் இடும்பைக்கு இடும்பை படாஅ தவர்"
| eval length=len(replace(_raw, "\\X", "x"))

length = 37

Your new macro would then count every Unicode grapheme, including whitespace characters.
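To make the num_graphemes macro concrete, here is a minimal macros.conf sketch under my own assumptions: the stanza name, argument name, and backslash escaping are not from the post above, so adjust them to match how your search parses the definition.

[num_graphemes(1)]
args = text
definition = len(replace(replace($text$, "(?=\\S)\\X", "x"), "\\s", ""))

The macro is expanded textually into the eval expression before the search is parsed, so `num_graphemes(_raw)` in the example above becomes the same len(replace(...)) expression with _raw substituted for $text$.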
Sure @tscroggins .. I spoke with my account manager and also wrote to a Splunk account manager (or sales manager, I'm not sure), and he said he would look into it and reply within a day.. three days have passed and I am still waiting, waiting, and waiting. Let's see; thanks a lot for your help. (As you can see on my YouTube channel "siemnewbies", I have been working on this for more than half a year.. but it has been good learning, actually.)
Excellent @tscroggins .. (If the community allowed it, I would have added more than one upvote. Thanks a ton!) (I should start focusing on Python more; Python really solves "big issues", just like that.)
Hi @inventsekar,

The PDF appears to have modified the code points! I prefer to use SPL because it doesn't usually require elevated privileges; however, it might be simpler to use an external lookup script. The lookup command treats fields containing only whitespace as empty/null, so the lookup will only identify non-whitespace characters. We'll need to create a script and a transform, which I've encapsulated in an app:

$SPLUNK_HOME/etc/apps/TA-ucd/bin/ucd_category_lookup.py (this file should be readable and executable by the Splunk user, i.e. have at least mode 0500)

#!/usr/bin/env python

import csv
import unicodedata
import sys

def main():
    if len(sys.argv) != 3:
        print("Usage: python category_lookup.py [char] [category]")
        sys.exit(1)

    charfield = sys.argv[1]
    categoryfield = sys.argv[2]

    infile = sys.stdin
    outfile = sys.stdout

    r = csv.DictReader(infile)
    header = r.fieldnames
    w = csv.DictWriter(outfile, fieldnames=r.fieldnames)
    w.writeheader()

    for result in r:
        if result[charfield]:
            result[categoryfield] = unicodedata.category(result[charfield])
            w.writerow(result)

main()

$SPLUNK_HOME/etc/apps/TA-ucd/default/transforms.conf

[ucd_category_lookup]
external_cmd = ucd_category_lookup.py char category
fields_list = char, category
python.version = python3

$SPLUNK_HOME/etc/apps/TA-ucd/metadata/default.meta

[]
access = read : [ * ], write : [ admin, power ]
export = system

With the app in place, we count 31 non-whitespace characters using the lookup:

| makeresults
| eval _raw="இடும்பைக்கு இடும்பை படுப்பர் இடும்பைக்கு இடும்பை படாஅ தவர்"
| rex max_match=0 "(?<char>.)"
| lookup ucd_category_lookup char output category
| eval length=mvcount(mvfilter(NOT match(category, "^M")))

Since this doesn't depend on a language-specific lookup, it should work with text from the Kural or any other source with characters or glyphs represented by Unicode code points.

We can add any logic we'd like to an external lookup script, including counting characters of specific categories directly:

| makeresults
| eval _raw="இடும்பைக்கு இடும்பை படுப்பர் இடும்பைக்கு இடும்பை படாஅ தவர்"
| lookup ucd_count_chars_lookup _raw output count

If you'd like to try this approach, I can help with the script, but you may enjoy exploring it yourself first.
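If anyone wants a starting point for that second lookup before writing their own, here is a minimal, untested sketch of what ucd_count_chars_lookup.py might look like. The field names (_raw, count), the script name, and the counting rule (skip whitespace and combining marks, Unicode category M*) are my assumptions based on the discussion above, not a confirmed implementation; I also haven't verified that _raw can be passed directly as a lookup input field, so you may need to copy it to a regular field first (e.g. | eval text=_raw).

#!/usr/bin/env python
# ucd_count_chars_lookup.py -- hypothetical sketch, adapt before use.
# Reads CSV rows from stdin, counts the code points in the text field that are
# neither whitespace nor combining marks (category M*), and writes the count back.

import csv
import sys
import unicodedata

def main():
    if len(sys.argv) != 3:
        print("Usage: python ucd_count_chars_lookup.py [textfield] [countfield]")
        sys.exit(1)

    textfield = sys.argv[1]
    countfield = sys.argv[2]

    r = csv.DictReader(sys.stdin)
    w = csv.DictWriter(sys.stdout, fieldnames=r.fieldnames)
    w.writeheader()

    for result in r:
        text = result.get(textfield) or ""
        result[countfield] = sum(
            1 for c in text
            if not c.isspace() and not unicodedata.category(c).startswith("M")
        )
        w.writerow(result)

main()

A matching transforms.conf stanza (again, names are assumptions) would follow the same pattern as ucd_category_lookup above:

[ucd_count_chars_lookup]
external_cmd = ucd_count_chars_lookup.py _raw count
fields_list = _raw, count
python.version = python3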
Hi @erikhill

That doc is for Splunk Cloud (CLI access there is handled by the Splunk Cloud Support team), and you cannot delete default indexes from the GUI page. For Splunk Enterprise, I tried it on my lab setup:

C:\Program Files\Splunk\bin>.\splunk.exe remove index main
WARNING: Server Certificate Warning - ignore this
cannot remove idx=main, is internal

C:\Program Files\Splunk\bin>

So no, we cannot remove the default index(es). Thanks.
Maybe you don't have permission to do summary indexing, but that option is under Searches, Reports, and Alerts.
Don't immediately jump down the indexed fields route - there are uses, but most of the time search-time extraction is sufficient. Adding index-time extractions will increase storage requirements, because the raw data AND the extractions are both stored. With recent developments in Splunk, the use of TERM(XX) in searches can hugely improve search times: Splunk does not have to look at the raw data to find hits; instead it looks at the tsidx files.
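As a rough illustration (the index, sourcetype, field, and IP value below are made up for the example), compare a plain search with a TERM()-directed one:

index=web sourcetype=access_combined clientip=10.1.2.3
| stats count by status

index=web sourcetype=access_combined TERM(10.1.2.3)
| stats count by status

The TERM() form should match the whole IP as a single indexed term from the tsidx files rather than scanning events that merely contain the segments 10, 1, 2, and 3; it works here because the IP is bounded by major breakers (spaces) and only contains minor breakers (periods).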
See this app for an example of making tabs using the Splunk linked list input type: https://splunkbase.splunk.com/app/5256 You can use simple HTML links in an <html> panel to do whatever you need; however, you cannot directly put a link in one of the tabs using the technique above.
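For reference, a minimal Simple XML sketch of an <html> panel carrying a plain link; the dashboard label, target path, and link text are placeholders, not taken from the app above.

<dashboard>
  <label>Link panel example</label>
  <row>
    <panel>
      <html>
        <!-- Plain HTML works here; point the href at your own dashboard or URL -->
        <a href="/app/search/my_other_dashboard">Open the other dashboard</a>
      </html>
    </panel>
  </row>
</dashboard>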
This page states: "You can't delete default indexes and third-party indexes from the Indexes page." Can I still delete default indexes through the CLI?
Hi @jenniferhao .. I am not really sure whether it needs two windows or one, but let's do some troubleshooting... Could you please update us on what output you get? From the results, you can decide on the next steps. Thanks.

| mstats count(os.cpu.pct.used) as c where index=cpet-os-metrics host_ip IN (10.0.0.1,10.0.0.2) by host_ip
| join host type=left
    [| mstats avg(ps_metric.pctMEM) as avg_mem_java avg(ps_metric.pctCPU) as avg_cpu_java count(ps_metric.pctMEM) as ct_java_proc where index=cpet-os-metrics host_ip IN (10.0.0.1,10.0.0.2) sourcetype=ps_metric COMMAND=java by host host_ip COMMAND USER ]
```| fields - c  (the fields command is commented out for testing)```
| eval is_java_running = if(ct_java_proc>0, 1, 0)
Hi @usej

On Splunkbase I see no apps for Backbase and/or fintech, so no community-developed apps/add-ons are available yet. Creating your own app and onboarding the logs is a fairly simple task; it may require some Splunk development experience, but we can help you with that. Let us know more details, thanks.
Hi @venugoski .. Out of the 23 events, some (as shown by the 3rd event in the table output) may not have that particular "log_processed.message" field. Let's double-check - please try this one; since the table command prints _raw as well, you can verify on the same screen:

index="sample" "log_processed.env"=prod "log_processed.app"=sample "log_processed.traceId"=90cf115a05ebb87b2
| table _time log_processed.message _raw
Hi @cmg

As I remember, and as the docs confirm, Phantom / Splunk SOAR supports running a playbook in both situations (manual and automatic).

https://docs.splunk.com/Documentation/SOAR/current/Playbook/Overview

After you create and save a playbook in Splunk SOAR (Cloud), you can run playbooks when performing these tasks in Splunk SOAR (Cloud):
Triaging or investigating cases as an analyst
Creating or adding a case to Investigation
Configuring playbooks to run automatically directly from the playbook editor

PS - If this or any reply helped you, please upvote. If it resolves your query, please accept it as the solution so your question will move from unanswered to answered. Thanks.
Hi @tscroggins and all,

I tried to download that tamil_unicode_block.csv, but gave up after spending 20 minutes on it. From your PDF file I created the tamil_unicode_block.csv myself and uploaded it to Splunk, but the rex counting still does not work as I expected. Could you please help me with counting the characters? Thanks.

Sample event:
இடும்பைக்கு இடும்பை படுப்பர் இடும்பைக்கு இடும்பை படாஅ தவர்

Background details:
My idea is to use Splunk on the Tamil-language Thirukkural and do some analytics. Each event will be two lines containing exactly seven words. Onboarding details are available in a YouTube video (channel name @siemnewbies; I should not post the YouTube link here as it may look like marketing). I take care of this YouTube channel, focusing only on Splunk and SIEM newbies.

Best Regards,
Sekar
Is it possible to run a playbook on demand, meaning a manual trigger by an analyst such as clicking a playbook during a workbook step? I have a use case where I want to run a playbook, but only from user initiation. I could implement some logic for user interaction at the container, but I'd prefer not to have something waiting for input until a user can get to it.
When a container is created that contains multiple artifacts from a forwarded Splunk event, I noticed playbooks are running against every artifact that has been added, causing duplicate actions. Reading through the boards here a bit, a possible solution was adding logic to check for a container tag on run: use a decision block to see if a tag exists; if so, simply end; otherwise continue and add a tag when complete. My problem is that this appears to work when testing against existing containers (debugging against an existing container ID and all artifacts), but when a new container is created it seems to ignore this and run multiple times. My guess is the playbook is being run concurrently for each of the artifacts instead of one at a time.

1. What is causing the problem?
2. What is the best practice to prevent this from occurring?
Hi @Ajith.Kumar,

Please check out this information:

https://docs.appdynamics.com/appd/onprem/latest/en/end-user-monitoring/browser-monitoring/browser-real-user-monitoring/configure-the-javascript-agent/add-custom-user-data-to-a-page-browser-snapshot
https://community.appdynamics.com/t5/Knowledge-Base/Troubleshooting-EUM-custom-user-data/ta-p/26267
I see that the Splunk query

index="sample" "log_processed.env"=prod "log_processed.app"=sample "log_processed.traceId"=90cf115a05ebb87b2
| table _time, log_processed.message

is displaying empty messages in the table cells, yet I can see the event in the raw format. Is there a limit on how much of the message can be shown in a table cell?
Hello community members, Has anyone successfully integrated the Backbase fintech product with Splunk for logging and monitoring purposes? If so, could you share your insights, experiences, and any tips on how to effectively set up and maintain this integration? Thank you in advance for your help!
Since you just want to display the percentage of 200 responses and the total count of all StatusCode values in each minute, I think a search like this should work:

index=<index> sourcetype=<sourcetype> sc_status=*
| bucket span=1m _time
| stats count as Totalcount, count(eval('sc_status'==200)) as Count200 by _time
| eval Percent200=round(('Count200'/'Totalcount')*100, 2)
| fields + _time, Percent200, Totalcount

Example output: