All Posts



@SN1  This error typically indicates that the indexer is unable to communicate with the license master, or that a licensing issue is affecting its operation. You can run this query on the license master to find the host name/IP of the indexers (license slaves) connecting to it:

| rest /services/licenser/slaves splunk_server=local
| table title label
| rename title as GUID label as Indexer

You can also correlate against the incoming connections logged in _internal:

index=_internal component=Metrics group=tcpin_connections
    [| rest /services/licenser/slaves splunk_server=local
     | table title
     | rename title as guid ]
| dedup sourceHost sourceIp
| table sourceHost sourceIp hostname guid version os
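It can also help to check the indexer side to see which license master it is pointed at. A minimal sketch, assuming a hypothetical hostname and the default management port; on newer Splunk versions the setting may be named manager_uri instead:

```
# server.conf on the indexer (hostname and port are examples)
[license]
master_uri = https://license-master.example.com:8089
```

If this value is wrong or missing, correcting it and restarting splunkd should let the indexer re-establish contact with the license master.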
@muhammadfahimma  Please review the following documentation, and consider raising a Splunk support ticket: Investigate findings using drilldown searches and dashboards in Splunk Enterprise Security - Splunk Documentation
@Nrsch    By default, Key Indicator Searches like “Access - Total Access Attempts,” “Malware - Total Infection Count,” and “Risk - Median Risk Score By Other” do not directly change the “Aggregated User Risk” value on the Risk Analysis dashboard. They are designed to display metrics, not update risk scores. However, if they feed into correlation searches that assign risk scores, they could have an indirect effect.   https://lantern.splunk.com/Security/Product_Tips/Enterprise_Security/Customizing_Enterprise_Security_dashboards_to_improve_security_monitoring     
One of my 5 indexers is getting this error:

[MSE-SVSPLUNKI01] restricting search to internal indexes only (reason: [DISABLED_DUE_TO_GRACE_PERIOD,0])

I have some questions:
1. How do I check whether my indexer is connected to the license master or not?
2. If it is NOT, how can I connect them again?
3. And if the connection has been good from the start, what do I do next?
I am running a Splunk indexer on Docker in an EC2 instance. I use the following Compose file to start the service. However, every time I restart the EC2 instance, the contents of inputs.conf get reset.

version: "3.6"

networks:
  splunknet:
    driver: bridge
    attachable: true

volumes:
  splunk-var:
    external: true
  splunk-etc:
    external: true

services:
  splunk:
    networks:
      splunknet:
        aliases:
          - splunk
    image: xxxxxx.dkr.ecr.ap-northeast-1.amazonaws.com/splunk/splunk:latest
    container_name: splunk
    restart: always
    environment:
      - SPLUNK_START_ARGS=--accept-license
      - SPLUNK_PASSWORD=password
    ports:
      - "80:8000"
      - "9997:9997"
    volumes:
      - splunk-var:/opt/splunk/var
      - splunk-etc:/opt/splunk/etc

The following is my inputs.conf:

[splunktcp-ssl:9997]
disabled = 0

[SSL]
serverCert = /opt/splunk/etc/auth/mycerts/myCombinedServerCertificate.pem
sslPassword = password
requireClientCert = false
Thank you for the reply, it's very useful. Let me explain my question further: I have some Key Indicator Searches like "Access - Total Access Attempts", "Malware - Total Infection Count", and "Risk - Median Risk Score By Other". You said that when they trigger, I can see their related notable events in Incident Review. That's OK, but my main question is: do these searches have any effect on any value in some dashboard in ES? For example, perhaps they change the value of "aggregated user risk" under ES -> Security Intelligence -> Risk Analysis -> aggregated user risk. Thank you very much for your reply.
`search_on_index_time("`$input_macro$`", $span$)`
| fields _time source id
| bin _time AS earliest_time span=$span$
| eval latest_time=earliest_time+$span$
| stats values(id) AS ids, values(source) AS sources BY earliest_time latest_time
| eval ids="\"".mvjoin(ids, "\",\"")."\"", sources="\"".mvjoin(sources, "\",\"")."\""
| `fillnull(value="", fields="earliest_time latest_time input_macro summarize_macro sources ids")`
| map maxsearches=20000 search="search earliest=$earliest_time$ latest=$latest_time$ `$input_macro$(\"$sources$\",\"$ids$\")` | `$summarize_macro$($earliest_time$, $latest_time$)` | eval _time=$earliest_time$"
| appendpipe [| where source="route" | collect index=$index$ source="route" | where false()]
| appendpipe [| where source="system" | collect index=$index$ source="system" | where false()]

I am using a macro in one of my saved searches and encountering the error below in Splunk. Based on the known issue, what changes should I make to the macro to resolve this error and eliminate the message?

ERROR TimeParser [24352 SchedulerThread] - Invalid value "$latest_time$" for time term 'latest'

@isoutamo @livehybrid
After a recent upgrade to Splunk ES 8.0.2, we have observed that none of the drilldowns for detection-based searches are available in the Mission Control screen anymore. We don't see any errors that might hint at an abnormality. Has anyone come across a similar issue? How can it be debugged?
You should have raw data from the source. Then do the needed extractions, or use spath if it's JSON. The best option is to ingest the data into your test/dev environment (like your workstation) and, once it works, copy those configurations into your SCP environment. You could/should create app(s) for those knowledge objects (KOs) to manage them. As you have SCP in use, you could order a dev/test license from Splunk to use in your test environment.
The best option is to ask whether it is mandatory or not in that phase. I'm not sure if it is strictly required.
If they don't tell you, then you should ask, or make an assumption.
@kiran_panchavat  @shabamichae  This is not strictly true. According to the documentation, "Participants then perform a mock deployment according to requirements which adhere to Splunk Deployment Methodology and best-practices." SSL falls into the best-practice category here, for both compression (for data transfer) and security benefits. Whilst not including SSL between Splunk servers might not result in failing the lab, there is a non-zero chance that it will deduct marks, which could affect the final score/outcome. Remember that this is one of the prerequisites to the Core Consulting certification, and at this point it is expected that candidates will apply the configuration that is most suitable for the customer. Splunk Lantern (great for some best-practice guidance) has a good page on SSL: https://lantern.splunk.com/Splunk_Platform/Product_Tips/Administration/Securing_the_Splunk_platform_with_TLS You can also see more info on enabling SSL at https://docs.splunk.com/Documentation/Splunk/9.4.0/Security/StepstosecuringSplunkwithTLS Please let me know how you get on, and consider adding karma to this or any other answer if it has helped. Regards Will
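As a rough illustration of what SSL between a forwarder and an indexer involves, here is a minimal sketch; the certificate paths, stanza name, and hostname are placeholders, not values from this thread, and the full certificate-preparation steps are in the docs linked above:

```
# inputs.conf on the indexer -- listen for SSL forwarder traffic
[splunktcp-ssl:9997]
disabled = 0

[SSL]
serverCert = /opt/splunk/etc/auth/mycerts/myServerCert.pem
sslPassword = <certificate password>
requireClientCert = false

# outputs.conf on the forwarder
[tcpout:ssl_indexers]
server = indexer.example.com:9997
clientCert = /opt/splunk/etc/auth/mycerts/myClientCert.pem
sslPassword = <certificate password>
```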
Your eventstats isn't doing anything, since the responseTime field is no longer available after the stats command. Try something like this:

| eval identifier=coalesce('queryParams.identifier', 'event.queryStringParameters.identifier')
| where isnotnull(identifier) AND isnotnull(responseTime)
| stats avg(responseTime) as avg_response_time by identifier
| eval SLA_response_time=300
| eval met_SLA=if(avg_response_time <= SLA_response_time, 1, 0)
| stats count sum(met_SLA) as count_within_SLA
| eval percentage_met_SLA=100 * count_within_SLA / count

This assumes that your SLA has a static value of 300. If you want to use a different SLA value, you need to define how that is set or calculated.
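To make the SLA arithmetic concrete, here is the same calculation in plain Python. The identifiers, response-time samples, and the 300 ms threshold are illustrative assumptions, not data from this thread:

```python
# Sketch of the SLA calculation performed by the SPL above.
from statistics import mean

SLA_RESPONSE_TIME = 300  # static SLA threshold, as assumed in the SPL

# responseTime samples grouped by identifier (hypothetical data)
samples = {
    "api-a": [120, 250, 310],
    "api-b": [400, 450],
    "api-c": [90, 110],
}

# stats avg(responseTime) by identifier
avg_response = {ident: mean(times) for ident, times in samples.items()}

# met_SLA = 1 if the identifier's average is within the threshold
met_sla = {ident: avg <= SLA_RESPONSE_TIME for ident, avg in avg_response.items()}

# percentage of identifiers meeting the SLA
percentage_met = 100 * sum(met_sla.values()) / len(met_sla)
print(round(percentage_met, 1))
```

Here two of the three identifiers average under 300 ms, so the script prints 66.7.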
Hi @gersplunk  When you search for the data, do you have src_ip or dest_ip in the field list on the left? You could also add | table *_ip to your search to see if src/dest IP is already an extracted field from the JSON. If you can post a screenshot and/or sample data, it might help us work with you to get to the bottom of this. Please let me know how you get on and consider adding karma to this or any other answer if it has helped. Regards Will
Hi @dmoberg  Using SignalFlow you will end up with multiple rows of output, because you can only publish a single field, and multiple published MTS are not grouped. As you're using a Table output, you should have the option to select a "Group By", as per the example I put together below; however, it is currently only possible to group by a single field, which might not be what you are looking for. You may be able to get around this by putting together a dashboard with a table for each method you are interested in, with the method filtered and a single group-by on route. Or use a single dashboard with a filter first to select a method, and then do the same group-by on route. Sorry, this might not be the answer you hoped for! Please let me know how you get on and consider adding karma to this or any other answer if it has helped. Regards Will
Hi @ITWhisperer @livehybrid  I was able to get the average response time by identifier. Now, as the next step, I want to calculate %Passed SLA (the percentage of service requests that passed service-level agreement parameters, including response time and uptime). How do I set the SLA?

index=* source IN ("") *response*
| eval identifier=coalesce('queryParams.identifier', 'event.queryStringParameters.identifier')
| eval responseTime=coalesce(responseTime, null)
| where isnotnull(identifier) and isnotnull(responseTime)
| stats avg(responseTime) as avg_response_time by identifier
| eventstats avg(responseTime) as overall_avg_response_time

I get the total number of requests separately with:

index=* source IN ("*") *data*
| eval identifier=coalesce('queryParams.identifier', 'event.queryStringParameters.identifier')
| eval msg=coalesce(msg, null)
| where isnotnull(identifier) and isnotnull(msg)
| stats count
@splunklearner  Please verify the prerequisites. It's a Java issue; you need to make sure Splunk can access Java. I have seen solutions where people were able to see the data inputs after adding the following config to inputs.conf:

[TA-Akamai_SIEM]
# disable the running introspection
run_introspection = false
Not able to find.
Your response is a solution for Splunk Core/Search, not for SignalFlow in Splunk APM.
@dmoberg  The query correctly aligns Percentage, Count, route, and method on the same rows, addressing your original issue.

| makeresults count=10
| streamstats count AS row_number
| eval route=case(row_number=1, "*.html", row_number=2, "*.html", row_number=3, "*.css", row_number=4, "*.js", row_number=5, "*", row_number=6, "*.html", row_number=7, "*.html", row_number=8, "*.html", row_number=9, "*", row_number=10, "*"),
       method=case(row_number=1, "GET", row_number=2, "HEAD", row_number=3, "GET", row_number=4, "GET", row_number=5, "GET", row_number=6, "POST", row_number=7, "OPTIONS", row_number=8, "POST", row_number=9, "POST", row_number=10, "GET"),
       Count=case(row_number=1, 50, row_number=2, 30, row_number=3, 30, row_number=4, 30, row_number=5, 15, row_number=6, 12, row_number=7, 10, row_number=8, 5, row_number=9, 6, row_number=10, 6)
| eventstats sum(Count) AS Total
| eval Percentage = round((Count / Total) * 100, 2)
| table Percentage, Count, route, method
| sort - Percentage
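The percentage arithmetic in the query above (eventstats sum, then round to two decimals) can be sanity-checked outside Splunk. A small Python sketch using the same sample Counts as the makeresults block:

```python
# Plain-Python check of the percentage math in the SPL above.
counts = [50, 30, 30, 30, 15, 12, 10, 5, 6, 6]

# eventstats sum(Count) AS Total
total = sum(counts)

# eval Percentage = round((Count / Total) * 100, 2)
percentages = [round(100 * c / total, 2) for c in counts]
print(total, percentages[0])
```

With these sample values the total is 194, so the first row's percentage comes out to 25.77.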