Don't immediately jump down the indexed fields route. It has its uses, but most of the time search-time extraction is sufficient. Adding indexed extractions increases storage requirements, because the raw data AND the extractions are both stored. With recent developments in Splunk, using TERM() in searches can hugely improve search times: Splunk does not have to scan the raw data to find hits, but instead looks at the "tsidx" files.
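As a hedged illustration (the index name and value here are made up), wrapping an indexed token in TERM() lets Splunk answer the match from the tsidx files alone:

```
index=web_logs TERM(10.0.0.1)
| stats count by sourcetype
```

Note that TERM() only helps when the value is a single indexed token, i.e. it contains no major breakers such as spaces; minor breakers like the dots in an IP address are fine.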
See this app for an example of making tabs using the Splunk linked list input type: https://splunkbase.splunk.com/app/5256. You can use simple HTML links in an <html> panel to do whatever you need, but you cannot directly put a link in one of the tabs using the technique above.
Hi @jenniferhao, I am not sure whether you need two windows or one, but let's do some troubleshooting. Could you please tell us what output you get from this? From the results, you can decide on the next steps. Thanks.
| mstats count(os.cpu.pct.used) as c where index=cpet-os-metrics host_ip IN (10.0.0.1,10.0.0.2) by host_ip
| join host type=left
[| mstats avg(ps_metric.pctMEM) as avg_mem_java avg(ps_metric.pctCPU) as avg_cpu_java count(ps_metric.pctMEM) as ct_java_proc where index=cpet-os-metrics host_ip IN (10.0.0.1,10.0.0.2) sourcetype=ps_metric COMMAND=java by host host_ip COMMAND USER ]
```| fields - c
commenting out the above fields command for testing```
| eval is_java_running = if(ct_java_proc>0, 1, 0)
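One way to avoid a second window is to filter to the failing state and let alert throttling handle the "only one alert until resolved" part. A sketch, assuming the search above is saved as the alert, appends:

```
| where is_java_running=0
```

and sets throttling on the saved alert, e.g. in savedsearches.conf (the values here are assumptions to illustrate the idea):

```
alert.suppress = 1
alert.suppress.period = 15m
alert.suppress.fields = host_ip
```

This suppresses repeat alerts per host_ip for the suppression period instead of requiring two search windows.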
Hi @usej, on Splunkbase I see no apps for Backbase and/or fintech, so no community-developed apps/add-ons are available yet. Creating your own app and onboarding logs is a simple task. It may require some Splunk dev experience, but we can help you with that. Let us know more details, thanks.
Hi @venugoski, out of the 23 events, some (as shown in the 3rd event in the table output) may not have that particular "log_processed.message" field. Let's double-check; please try this one. Since the table command prints _raw as well, you can verify on the same screen:
index="sample" "log_processed.env"=prod "log_processed.app"=sample "log_processed.traceId"=90cf115a05ebb87b2
| table _time log_processed.message _raw
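If the empty cells come from the nested JSON field not being auto-extracted for some events, a hedged alternative is to pull the field explicitly with spath (assuming the events are JSON in _raw):

```
index="sample" "log_processed.env"=prod "log_processed.app"=sample "log_processed.traceId"=90cf115a05ebb87b2
| spath input=_raw path=log_processed.message output=message
| table _time message _raw
```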
Hi @cmg, as I remember and as the docs confirm, Phantom / Splunk SOAR supports running a playbook in both situations (manual and automatic). https://docs.splunk.com/Documentation/SOAR/current/Playbook/Overview
After you create and save a playbook in Splunk SOAR (Cloud), you can run playbooks when performing these tasks in Splunk SOAR (Cloud):
- Triaging or investigating cases as an analyst
- Creating or adding a case to Investigation
- Configuring playbooks to run automatically directly from the playbook editor
PS: if this or any reply helped you, please upvote. If this or any reply resolves your query, please accept it as the solution, so your question moves from unanswered to answered. Thanks.
Hi @tscroggins and all, I tried to download that tamil_unicode_block.csv; after spending 20 minutes I gave up. From your PDF file I created tamil_unicode_block.csv myself and uploaded it to Splunk, but the rex counting still does not work as I expected. Could you please help me count the characters? Thanks.
Sample event: இடும்பைக்கு இடும்பை படுப்பர் இடும்பைக்கு இடும்பை படாஅ தவர்
Background details: my idea is to use Splunk on the Tamil-language Thirukkural and do some analytics. Each event is two lines containing exactly seven words. Onboarding details are available in a YouTube video (channel name @siemnewbies; I should not post the YouTube link here as it may look like marketing). I take care of this YouTube channel, focusing only on Splunk and SIEM newbies.
Best Regards, Sekar
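For reference, here is a sketch that counts characters in the Tamil Unicode block (U+0B80 to U+0BFF) with eval instead of a lookup; the index name is an assumption, and note that len() counts code points, so combining vowel signs are counted separately from their base consonants, which may differ from the letter counts traditional Thirukkural metrics expect:

```
index="thirukkural"
| eval tamil_only=replace(_raw, "[^\x{0B80}-\x{0BFF}]", "")
| eval char_count=len(tamil_only)
| eval word_count=mvcount(split(trim(replace(_raw, "\s+", " ")), " "))
| table _raw char_count word_count
```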
Is it possible to run a playbook on demand, meaning a manual trigger by an analyst, such as clicking a playbook during a workbook step? I have a use case where I want to run a playbook, but only on user initiation. I could implement some logic for user interaction at the container level, but I'd prefer not to have something waiting for input until a user can get to it.
When a container is created that contains multiple artifacts from a forwarded Splunk event, I noticed playbooks run against every artifact that has been added, causing duplicate actions. Reading through the boards here a bit, a possible solution was adding logic to check for a container tag on run: use a decision block to see if the tag exists; if so, simply end; otherwise continue and add the tag when complete. My problem is that this appears to work when testing against existing containers (debug against an existing container ID and all artifacts), but when a new container is created it seems to be ignored and the playbook runs multiple times. My guess is that the playbook is being run concurrently for each of the artifacts instead of one at a time.
1. What is causing the problem?
2. What is the best practice to prevent this from occurring?
Hi @Ajith.Kumar,
Please check out this information
https://docs.appdynamics.com/appd/onprem/latest/en/end-user-monitoring/browser-monitoring/browser-real-user-monitoring/configure-the-javascript-agent/add-custom-user-data-to-a-page-browser-snapshot
https://community.appdynamics.com/t5/Knowledge-Base/Troubleshooting-EUM-custom-user-data/ta-p/26267
I see that the Splunk query index="sample" "log_processed.env"=prod "log_processed.app"=sample "log_processed.traceId"=90cf115a05ebb87b2 | table _time, log_processed.message displays empty messages in the table cells, but I can see the event in the raw format. Is there some limit that prevents the whole message from showing in the table box?
Hello community members, Has anyone successfully integrated the Backbase fintech product with Splunk for logging and monitoring purposes? If so, could you share your insights, experiences, and any tips on how to effectively set up and maintain this integration? Thank you in advance for your help!
Since you just want to display the percentage of 200s and the total count of all status codes in each minute, I think a search like this should work:
index=<index> sourcetype=<sourcetype> sc_status=*
| bucket span=1m _time
| stats
count as Totalcount,
count(eval('sc_status'==200)) as Count200
by _time
| eval
Percent200=round(('Count200'/'Totalcount')*100, 2)
| fields + _time, Percent200, Totalcount
We need to set up an alert when a server has no Java process for 15 minutes, with only one alert sent until the issue is resolved. Do we need to create two windows for this?
| mstats count(os.cpu.pct.used) as c where index=cpet-os-metrics host_ip IN (10.0.0.1,10.0.0.2) by host_ip
| join host type=left
[| mstats avg(ps_metric.pctMEM) as avg_mem_java avg(ps_metric.pctCPU) as avg_cpu_java count(ps_metric.pctMEM) as ct_java_proc where index=cpet-os-metrics host_ip IN (10.0.0.1,10.0.0.2) sourcetype=ps_metric COMMAND=java by host host_ip COMMAND USER ]
| fields - c
| eval is_java_running = if(ct_java_proc>0, 1, 0)
Your link leads to the wrong documentation (though for some strange reason Google seems to favour it over the proper SPL documentation). There are two different search languages: SPL and SPL2. SPL is used within Splunk Enterprise (and Splunk Cloud); SPL2 is used here and there (I think its most notable use is the Edge Processor), but it's not as widely used as SPL. I know it's confusing. Anyway, you need the docs for SPL, not SPL2: https://docs.splunk.com/Documentation/Splunk/latest/SearchReference/WhatsInThisManual
OK. So modifications to the Domain Admins group are reflected with events other than 4743, which means you seem to have a different problem than just losing one particular event ID.
OK, I've been looking at this. I did not realize values(*) by * meant "look at everything"; I had just seen it in other examples! I see now that you can do something like this to get just the rows you want:
index="my_data" resourceId="enum*"
| stats values(sourcenumber) as sourcenumber values(destinationnumber) as destinationnumber by guid
Never did I imagine this would be FASTER, because I guess I just thought the first line meant it would fetch all the data anyway. This is a huge help, and I'm going to play with it for a while to get a feel for it (it also appears to remove my need for mvdedup).
Reading about indexed extractions, the docs note that you can extract at search time (which we are doing above) or "add custom fields at index time", and the latter is what we're talking about doing. At first glance I don't see the performance bump from just extracting, because I will still have two separate streams (what I called DS1 and DS2). What I think would be a boon for performance is consolidating the two streams into a new stream with "just the 12 fields I need", but that feels like a different thread!
This is GREAT INFORMATION and THANK YOU SO MUCH!!!!