All Posts

After a recent upgrade to Splunk ES 8.0.2, we have observed that none of the drill-downs for detection-based searches are available in the Mission Control screen anymore. We don't see any errors that might hint at an abnormality. Has anyone come across a similar issue? How can this be debugged?
You should have the raw data from the source. Then do the needed extractions, or use spath if it's JSON. The best option is to ingest the data into your test/dev environment (like your workstation) and, when it works, copy those configurations into your SCP environment. You could/should create app(s) for those knowledge objects (KOs) to manage them. As you have SCP in use, you could order a dev/test license from Splunk to use in your test environment.
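For the spath route, here is a minimal example you can paste into the search bar (the JSON payload and field names are invented for illustration):

| makeresults
| eval _raw="{\"user\": {\"name\": \"alice\", \"role\": \"admin\"}}"
| spath input=_raw path=user.name output=user_name
| table user_name

Once that pattern works against your real events, move the spath/extractions into props.conf or saved searches inside the app mentioned above.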
The best option is to ask whether it is mandatory in that phase or not. I'm not sure if this is strictly required.
If they don't tell you, then you should ask, or make an assumption.
@kiran_panchavat  @shabamichae  This is not strictly true. According to the documentation, "Participants then perform a mock deployment according to requirements which adhere to Splunk Deployment Methodology and best-practices." SSL falls into the best-practice category here, for both compression (of data transfer) and security benefits. Whilst not including SSL between Splunk servers might not result in failing the lab, there is a non-zero chance that it will deduct marks, which could affect the final score/outcome. Remember that this is one of the prerequisites to the Core Consulting certification, and at this point it is expected that candidates will apply the configuration that is most suitable for the customer. Splunk Lantern (great for some best-practice guidance) has a good page on SSL: https://lantern.splunk.com/Splunk_Platform/Product_Tips/Administration/Securing_the_Splunk_platform_with_TLS You can also see more info on enabling SSL at https://docs.splunk.com/Documentation/Splunk/9.4.0/Security/StepstosecuringSplunkwithTLS Please let me know how you get on and consider adding karma to this or any other answer if it has helped. Regards Will
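As a rough illustration only (the certificate paths, stanza name, and indexer hostname below are placeholders, not taken from the docs above), forwarder-to-indexer TLS generally boils down to a pair of settings like:

# inputs.conf on the indexer
[splunktcp-ssl:9997]
disabled = 0

[SSL]
serverCert = /opt/splunk/etc/auth/mycerts/indexer_cert.pem
sslPassword = <certificate_password>

# outputs.conf on the forwarder
[tcpout:primary_indexers]
server = idx1.example.com:9997
clientCert = /opt/splunkforwarder/etc/auth/mycerts/forwarder_cert.pem
sslPassword = <certificate_password>
useClientSSLCompression = true

See the Lantern and docs links above for the full certificate-generation and hardening steps.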
Your eventstats isn't doing anything since the responseTime field is no longer available after the stats command. Try something like this

| eval identifier=coalesce('queryParams.identifier', 'event.queryStringParameters.identifier')
| eval responseTime=coalesce(responseTime, null())
| where isnotnull(identifier) and isnotnull(responseTime)
| stats avg(responseTime) as avg_response_time by identifier
| eval SLA_response_time=300
| eval met_SLA=if(avg_response_time <= SLA_response_time, 1, 0)
| stats count sum(met_SLA) as count_within_SLA
| eval percentage_met_SLA=100 * count_within_SLA / count

This assumes that your SLA has a static value of 300. If you want to use a different SLA value, you need to define how that is set or calculated.
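If the SLA differs per identifier, one option is a lookup. Note that sla_targets and sla_ms below are hypothetical names you would create yourself, not existing objects:

| lookup sla_targets identifier OUTPUT sla_ms
| eval met_SLA=if(avg_response_time <= sla_ms, 1, 0)

These two lines would replace the | eval SLA_response_time=300 and | eval met_SLA=... lines above.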
Hi @gersplunk  When you search for the data, do you have src_ip or dest_ip in the field list on the left? You could also add | table *_ip to your search to see if the src/dest IP is already an extracted field from the JSON. If you can post a screenshot and/or sample data then it might help us work with you to get to the bottom of this. Please let me know how you get on and consider adding karma to this or any other answer if it has helped. Regards Will
Hi @dmoberg  Using SignalFlow you will end up with multiple rows of output, because you can only publish a single field, and multiple published MTS are not grouped. As you're using a Table output you should have the option to select a "Group By", as per the example I put together below; however, it is currently only possible to group by a single field, which might not be what you are looking for. You may be able to get around this by putting together a dashboard with a table for each METHOD you are interested in, with the method filtered and a single group-by on route. Or use a single dashboard with a filter first to select a method and then do the same group-by on route. Sorry, this might not be the answer you hoped for! Please let me know how you get on and consider adding karma to this or any other answer if it has helped. Regards Will
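For reference, a minimal sketch of the single-group-by approach (metric name and namespace filter copied from your SignalFlow; 'route' is the one dimension the Table chart would group by):

A = data('http_requests_total', filter=filter('k8s.namespace.name', 'customer-service-pages-prd')).count(by=['route'])
A.publish(label='Count by route')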
Hi @ITWhisperer @livehybrid  I was able to get the avg response time by identifier. Now, as the next step, I want to set a %Passed SLA (percentage of service requests that passed service level agreement parameters, including response time and uptime). How do I set the SLA?

index=* source IN ("") *response*
| eval identifier=coalesce('queryParams.identifier', 'event.queryStringParameters.identifier')
| eval responseTime=coalesce(responseTime, null())
| where isnotnull(identifier) and isnotnull(responseTime)
| stats avg(responseTime) as avg_response_time by identifier
| eventstats avg(responseTime) as overall_avg_response_time

I get the total number of requests separately with

index=* source IN ("*") *data*
| eval identifier=coalesce('queryParams.identifier', 'event.queryStringParameters.identifier')
| eval msg=coalesce(msg, null())
| where isnotnull(identifier) and isnotnull(msg)
| stats count
@splunklearner  Please verify the prerequisites. It's a Java issue; you need to make sure Splunk can access Java. I have seen some solutions where users were able to see the data inputs after adding the following config to inputs.conf:

[TA-Akamai_SIEM]
# disable the running introspection
run_introspection = false
I am not able to find it.
Your response is a solution for Splunk Core/Search, not for SignalFlow in Splunk APM.
@dmoberg  The query below correctly aligns Percentage, Count, route, and method on the same rows, addressing your original issue.

| makeresults count=10
| streamstats count AS row_number
| eval route=case(row_number=1, "*.html", row_number=2, "*.html", row_number=3, "*.css", row_number=4, "*.js", row_number=5, "*", row_number=6, "*.html", row_number=7, "*.html", row_number=8, "*.html", row_number=9, "*", row_number=10, "*"),
       method=case(row_number=1, "GET", row_number=2, "HEAD", row_number=3, "GET", row_number=4, "GET", row_number=5, "GET", row_number=6, "POST", row_number=7, "OPTIONS", row_number=8, "POST", row_number=9, "POST", row_number=10, "GET"),
       Count=case(row_number=1, 50, row_number=2, 30, row_number=3, 30, row_number=4, 30, row_number=5, 15, row_number=6, 12, row_number=7, 10, row_number=8, 5, row_number=9, 6, row_number=10, 6)
| eventstats sum(Count) AS Total
| eval Percentage = round((Count / Total) * 100, 2)
| table Percentage, Count, route, method
| sort - Percentage
@splunklearner  Please check the prerequisites: https://techdocs.akamai.com/siem-integration/docs/siem-splunk-connector
@Nrsch If you're using Splunk ES version 8.x, navigate to the Splunk ES App, then go to Mission Control, where you'll find the "Analyst Queue." This serves the same function as "Incident Review."
@Nrsch  In Splunk Enterprise Security (ES), when a saved search like "Malware - Total Infection Count" is triggered, the results typically manifest as notable events. These notable events are designed to alert security analysts to potential issues and are centralized in specific dashboards within ES.

Incident Review dashboard: the main place to view triggered notable events from security saved searches, including something like "Malware - Total Infection Count."

How to access: log into Splunk ES, navigate to Security > Incident Review in the ES menu, and look for notable events tied to the "Malware - Total Infection Count" search. You can filter by search name, urgency (e.g., critical, high), or time range to locate the specific event.

Security Posture dashboard: provides a high-level overview of notable event activity across your environment.

https://docs.splunk.com/Documentation/ES/7.3.3/User/IncidentReviewdashboard 
https://docs.splunk.com/Documentation/ES/7.3.3/User/IncidentReviewdashboard#How_Splunk_Enterprise_Security_identifies_notable_events 
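You can also verify from the search bar that notables are being generated; a minimal sketch, assuming the default notable index and that the correlation search name matches exactly:

index=notable search_name="Malware - Total Infection Count"
| table _time, search_name, urgency, status

If this returns events but nothing shows in Incident Review, check the dashboard's time range and any status/owner filters.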
@splunklearner  Go to Settings > Data Inputs, where you will find the Akamai data input.
Thanks. Have the fields already been extracted from these events? For 1, do you just want a count of these events? For 2, do you just want the total response time for all the events?
Hi, there are some security saved searches and key indicators in ES. If I activate these searches and they trigger, in which dashboard in ES can I see the results? For example, if the search "Malware - Total Infection Count" triggers, in which dashboard in ES can I see the result? #ES #enterprise security
I am getting the data extracted and published to a dashboard, but the problem is that the "Count" is published on separate rows, not merged in with the other rows. I want the count (from which the percentage is calculated) to end up as an additional column together with the Percentage, route and method. This is the SignalFlow I currently use:

B = data('http_requests_total', filter=filter('k8s.namespace.name', 'customer-service-pages-prd')).count()
A = data('http_requests_total', filter=filter('k8s.namespace.name', 'customer-service-pages-prd')).count(by=['route', 'method'])
Percentage = (A/B * 100)
Percentage.publish(label='Percentage')
A.publish('Count')

And this is how it looks: [screenshot: Count appears on separate rows from Percentage] Any ideas on how to merge the data so that Count is also on the same rows as the Percentage?