All Topics

Hello, we are on Splunk version 9.0.0 and are using v1.2 of the browscap add-on, but searches that use it are failing. The add-on has been installed on both the search heads and the indexers via the cluster manager. When running a search, we get the error: "[indexer1, indexer2, indexer3] Streamed search execute failed because: Error in 'lookup' command: Script execution failed for external search command <search head>/apps/TA-browscap/bin/browscap_lookup.py". There are no other errors in the _internal index or in any of the Splunk logs on either the search heads or the indexers. Any ideas?
Hi,

[12:30:13 INF 0ceafa153290582e1f1faec3f98d84ac] Gateway API|Request...
[12:30:15 INF 0ceafa153290582e1f1faec3f98d84ac] Gateway API|Response...

These are sample request/response structures that we log. There are scenarios where a request might not have a response, and I'd like to write a query to find such correlation ids (highlighted). This is what I tried, but it fetches all the matches:

index=pcf sourcetype="gateway-api" "Gateway API|Request"
| rex field=msg "INF ?(?P<correlationId>[a-zA-Z0-9-_, ]*)]"
| rex field=msg "INF ?(?P<correlationIdReq>[a-zA-Z0-9-_, ]*)]"
| table correlationId, correlationIdReq
| join type="left" correlationId
    [search index=pcf sourcetype="gateway-api" "Gateway API|Response"
    | rex field=msg "INF ?(?P<correlationId>[a-zA-Z0-9-_, ]*)]"
    | rex field=msg "INF ?(?P<correlationIdRes>[a-zA-Z0-9-_, ]*)]"
    | table correlationId, correlationIdReq, correlationIdRes]

Thanks, Arun
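A join-free sketch of the same idea, assuming the correlation id is the hex token after INF and the raw text lives in msg as in the post: search both event types at once, classify each event, then keep ids that only ever appear as requests.

```spl
index=pcf sourcetype="gateway-api" ("Gateway API|Request" OR "Gateway API|Response")
| rex field=msg "INF (?<correlationId>[a-f0-9]+)\]"
| eval kind=if(like(msg, "%Gateway API|Request%"), "request", "response")
| stats values(kind) AS kinds BY correlationId
| where mvcount(kinds)=1 AND kinds="request"
```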
Hi Splunkers, in our environment we onboarded Forcepoint Cloud logs following this guide. In a nutshell, we use a script that regularly pulls data from the cloud and places it on a HF as .csv files; the data is then sent to Splunk Cloud. We have the following problem: logs always come in 2 types, the correct ones and the "empty" ones. As you can see, the empty ones have only the field labels, not the values. Working with support, we discovered that a possible root cause is in the add-on, which sometimes extracts the .csv header and treats it as data. So, if we want to solve the problem without changing the script, we could fix it in the props.conf of the add-on used for parsing, which is the Splunk Add-on for Forcepoint Web Security. The question is: if I have to tell props.conf "Hey, don't extract the .csv headers", what syntax do I use?
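For reference, one common pattern for discarding CSV header lines at parse time is a nullQueue transform. The stanza name and header regex below are placeholders; they would need to match the add-on's actual sourcetype and the literal header line of the .csv files, and the change must live where parsing happens (the HF in this setup):

```ini
# props.conf  (stanza name is a placeholder: use the add-on's real sourcetype)
[forcepoint:web:csv]
TRANSFORMS-drop_csv_header = forcepoint_drop_csv_header

# transforms.conf
[forcepoint_drop_csv_header]
# placeholder regex: match the literal header line of the .csv
REGEX = ^Date,Time,User,URL
DEST_KEY = queue
FORMAT = nullQueue
```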
Hi, I am looking for a Splunk query that returns a table of values whose count is greater than 5 in the last 24 hours. If the log count for my name is greater than 5 in the last 24 hours for a specific search condition, it should appear in the table; if tomorrow the count for my name is again greater than 5 in the last 24 hours, my name should appear again in the table over the last two days' time range. Below is my query for the last 24 hours:

EXTERNAL_AUTH_COMPLETE deviceType=AnixisPPCProvider AND wsModel != "Microsoft Corporation / Virtual Machine" earliest=-24h@h latest=now
| rex field=machineUserName "[A-Za-z-]+(?<empNo>\d+)"
| rex field=machineUserName "(?<eMail>.*@.*)"
| lookup WorkdayData.csv empNum AS empNo OUTPUTNEW country OCGRP OCSGRP name email
| lookup WorkdayData.csv email AS eMail OUTPUTNEW country OCGRP OCSGRP name email
| eval country = if (country == "Korea, Republic of","South Korea",country)
| eval country = if (country == "United States of America","United States",country)
| eval empType = if (like(email,"%@contractor.amat.com%"),"Contractor","RFT")
| rename OCGRP as Department OCSGRP as BusinessUnit name as Name email as Email country as Country empType as EmployeeType
| search Department = "*" AND Country="*"
| stats count by Name Email Country Department BusinessUnit EmployeeType
| where count > 5

Please provide a query that returns the table where the log count is greater than 5 on a daily basis. Thanks, Abhineet Kumar
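A minimal sketch of the per-day variant of a threshold like this: widen the time range, bucket events by day, and apply the threshold within each day (Name stands in for whatever stats-by fields are needed):

```spl
... earliest=-2d@d latest=now
| bin _time span=1d
| stats count by _time Name
| where count > 5
```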
I have a dashboard that shows the APIs called over a particular period and computes the success rate based on the response status, but I also need the success rate of each API over the past two weeks (last week and two weeks ago) on the exact same date, and I've been having issues getting that. This is my current search; any help on how to get this will be appreciated.

index="Main"
| stats count(eval(in(response_status, "200", "201", "202","203","204","205",....,"307","308", "0") and severity="Audit")) AS Success_count,
        count(eval(in(response_status,"400",...,"451"))) AS Backend_4XX,
        count(eval(in(response_status, "0") and severity="Exception")) AS L7_Error,
        count(eval(in(response_status, "500",...,"511"))) AS Backend_5XX BY API
| eval Total = Success_count + (Backend_4XX + Backend_5XX + L7_Error), Success_Rate=round(Success_count/Total*100,2)
| table API Total Success_count L7_Error Backend_4XX Backend_5XX Success_Rate
| sort API
| search Success_Rate=*
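A hedged sketch of one way to line up week-over-week numbers: build a daily success-rate series with timechart, then fold it by week with timewrap. The status lists here are abbreviated placeholders, and since timewrap operates on a timechart result, a per-API view would mean filtering to one API first:

```spl
index="Main" API="some-api"
| timechart span=1d count(eval(in(response_status, "200", "201", "202"))) AS success count AS total
| eval Success_Rate=round(success/total*100,2)
| fields _time Success_Rate
| timewrap 1week
```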
Hi. I need help understanding how this can be done. The application's log has a multivalue field like this:

<somedata> [field1=A,B,C] <someotherdata>
<somedata> [field1=A,C] <someotherdata>
<somedata> [field1=E,F] <someotherdata>

And I need to find correlations between these values. I'm looking to produce something like:

field1mv  inConjunctionWith  count
A         <all>              2
A         B                  1
A         C                  2
B         <all>              1
B         A                  1
C         <all>              2
C         A                  2
C         B                  1
E         <all>              1
E         F                  1
F         <all>              1
F         E                  1

This way it will be possible to identify that A+C and E+F have the same occurrences and are probably always together; it will also show which values are the most common. I feel I should be able to pull this off with mvmap, but I can't work out the actual process.
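A sketch of one way to get that co-occurrence table without mvmap: duplicate the multivalue field, expand both copies to enumerate all ordered pairs, and treat the diagonal (a value paired with itself) as the <all> row. The rex pattern is an assumption based on the sample lines above.

```spl
| rex field=_raw "\[field1=(?<field1mv>[^\]]+)\]"
| makemv delim="," field1mv
| eval inConjunctionWith=field1mv
| mvexpand field1mv
| mvexpand inConjunctionWith
| eval inConjunctionWith=if(field1mv=inConjunctionWith, "<all>", inConjunctionWith)
| stats count by field1mv inConjunctionWith
| sort field1mv inConjunctionWith
```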
I have just configured Splunk and I have an alert running for locked accounts. It keeps generating multiple entries per locked-out account, so I want to generate one message per account lockout.

index=wineventlog source="WinEventLog:Security" sourcetype=WinEventLog action=failure Account_Name="*" user="*" AND "taskcategory=Account Lockout"
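One hedged sketch for collapsing the noise: aggregate to a single row per account before the alert fires, and combine this with alert throttling on the user field in the alert's settings (field names follow the search above):

```spl
index=wineventlog source="WinEventLog:Security" sourcetype=WinEventLog action=failure "Account Lockout"
| stats count earliest(_time) AS firstLockout latest(_time) AS lastLockout BY user
```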
It looks like Splunk's token-name parser contains a bug; otherwise, why would it expect to find a token in this eval expression, which I have simplified for clarity?

eval x=if(match(url,"test$"),if(match(url,"onemoretest$"),"",""),"","")

I know that if I change it to a double $$ it will work OK:

eval x=if(match(url,"test$$"),if(match(url,"onemoretest$"),"",""),"","")

Because of this alleged bug, my dashboard doesn't work properly. It seems the token parser doesn't recognize any stop characters (like ", ) and even space) outside the allowed range of characters for token names. Any help would be greatly appreciated!
Hi community, I am trying to identify where all the settings defining an alert/notable are stored on the backend. savedsearches.conf contains the alerts, but I'm not sure where the cron schedule and the other settings defined for an alert/notable via the UI are stored on the Splunk backend. Thank you!
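For reference, the schedule and alert settings configured in the UI are also written to savedsearches.conf alongside the search itself. An illustrative stanza (the stanza name and all values here are made up):

```ini
# savedsearches.conf (illustrative values)
[My Example Alert]
search = index=main error | stats count
cron_schedule = */15 * * * *
dispatch.earliest_time = -15m
dispatch.latest_time = now
enableSched = 1
alert.track = 1
counttype = number of events
relation = greater than
quantity = 0
```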
Hi, my correlation search is not raising any alerts. When I search index=_internal sourcetype=scheduler "xyz- CS" log_level=INFO I see:

07-14-2023 12:50:00.552 +0000 INFO SavedSplunker - savedsearch_id="nobody;SplunkEnterpriseSecuritySuite;Audit - CP - PDM Uploads Non Colgate instances - Rule", search_type="scheduled", user="abc", app="SplunkEnterpriseSecuritySuite", savedsearch_name="xyz- CS - Rule", priority=default, status=continued, reason="Could not get next runtime", scheduled_time=0, window_time=-1
Hello, is it possible to add a role capability so that a user can see the list of all active users, without granting any other capabilities? If not, is there another way to get such a list, with a search option so the user can look up a specific user in it? Thanks
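For context, the user list is exposed over Splunk's REST API, so a search-based sketch like the one below can present it as a filterable table. Note that the endpoint itself still requires sufficient permissions for the role running the search, so it may not fully avoid the capability question:

```spl
| rest /services/authentication/users splunk_server=local
| table title realname roles email
| search title="*"
```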
I configured the Splunk Infrastructure Monitoring add-on with Splunk Observability Cloud in order to receive infrastructure metrics from Splunk Observability. The connection was successful, as confirmed by the add-on's Connection Status test. However, when I try to search for any data using the sim flow command, I receive the following error: Error in "sim" command: Error executing SignalFlow program. error_msg=[SSL: TLSV1_ALERT_INTERNAL_ERROR] tlsv1 alert internal error (_ssl.c:1106)". Query used to test:

| sim flow query="data('cpu.utilization', filter=filter('host', '*') and (not filter('cloud.provider', '*')) and (not filter('AWSUniqueId', '*')) and (not filter('gcp_id', '*')) and (not filter('azure_resource_id', '*')) and (not filter('kubernetes_node', '*')), extrapolation='last_value', maxExtrapolations=2).mean(by=['host']).count().publish()"

I have done this kind of configuration several times, but I have never run into such an error. I even used the same query on another configuration to cross-check, and it works fine there. Could it be a connection issue? Perhaps the search head is blocking some outbound connection? Or is my environment using a different SSL package? In any case, something seems to be preventing data from coming in. I'm also sharing the OS type and version of the instance, and the OpenSSL version (see attachments). Does anyone have any suggestions, tips, or ideas? Thanks!
I need help removing these open and close brackets in the token; please see the dashboard code below:

<form>
  <label>token eval drilldown</label>
  <fieldset submitButton="false">
    <input type="dropdown" token="manager" searchWhenChanged="true">
      <label>Account Managers</label>
      <choice value="*">All</choice>
      <choice value="123&quot;,&quot;456&quot;,&quot;789&quot;,&quot;431C&quot;,&quot;343">XYZ</choice>
      <choice value="786&quot;,&quot;274&quot;,&quot;245&quot;,&quot;237&quot;,&quot;2523&quot;,&quot;245&quot;,&quot;257">ABC</choice>
      <choice value="463&quot;,&quot;234&quot;,&quot;234&quot;,&quot;3543">DEF</choice>
      <default>*</default>
      <initialValue>*</initialValue>
      <prefix>| search u_user_type IN (</prefix>
      <suffix>)</suffix>
      <!--<valuePrefix>"</valuePrefix>
      <valueSuffix>"</valueSuffix>
      <delimiter>, </delimiter>-->
      <change>
        <condition>
          <eval token="tok_manager">replace($manager$,"(.\| search u_user_type IN \()","")</eval>
        </condition>
      </change>
    </input>
  </fieldset>
  <row>
    <panel>
      <html>
        <p>$manager$</p>
        <p>$tok_manager$ &lt;-- i need to remove these open &amp; close brackets</p>
      </html>
    </panel>
  </row>
</form>
Hi, the dashboard table panels contain 4 columns (Host, user, A, B). How do I write the query to compute the percentage of the values and change the panel values to orange or red when the criteria below are met, per user and host? If A > 80% of B, color A orange; if A > 90% of B, color it red. Regards, Satheesh
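A hedged sketch of the computation side: derive the percentage and a label field in SPL, then drive the table cell coloring from that field in the dashboard's formatting options. Column names A and B are taken from the description above; pctA and range are placeholder names:

```spl
... | eval pctA=round(A/B*100, 1)
| eval range=case(pctA > 90, "red", pctA > 80, "orange", true(), "none")
```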
Hi, I'm trying to exclude a list of sites from my search using a lookup table, but it's not working as expected:

base search NOT ( [| inputlookup instances.csv | fields instance_id | return 1000 instance_id])

When I use the subsearch below in my main search, it returns no events at all. What could be the reason? Do we need to modify the subsearch?

| inputlookup instances.csv | fields instance_id | return 1000 instance_id

Output:

instance_id search
(instance_id="xyz") OR (instance_id="abc.com") OR (instance_id="cpl.com") OR (instance_id="ipl.com") OR (instance_id="bcci.com") OR (instance_id="pca.com") OR (instance_id="eca.com") OR (instance_id="aca.com") OR (instance_id="nca.com") OR (instance_id="ica.com") OR (instance_id="bca.com")
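A common sketch for this exclusion pattern, assuming instance_id is an extracted field in the base search's events: let the format command build the OR expression and drop the hand-written wrapping parentheses (index and sourcetype here are placeholders):

```spl
index=main sourcetype=web NOT [| inputlookup instances.csv | fields instance_id | format]
```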
Hello, in the Splunk AWS Add-on there is an option called Autodiscovered IAM Role, which is set to No by default. When should we use this option (set it to Yes), and how would we configure it? Any recommendation would be highly appreciated. Thank you!
Hi, I have a table of 3 columns: event name, time (when the event happened) and source (file name). I need to create a flow chart (similar to the attached picture) where the X-axis represents time and the Y-axis is binary (event happened or not). I need a line for each event name, with a different color per line. I also need to filter by time range (like the chart in the picture, with the option to look at different time intervals), and to click on a specific point and get its raw data (to know which file it was taken from). Can I do this in Splunk? How? I tried to create a timeline, but it doesn't look good:

| eval myTime=TimeStamp/10000000 - 11644473600
| eval WinTimeStamp2=strftime(myTime, "%Y-%m-%dT%H:%M:%S.%Q")
| bin WinTimeStamp2 span=1d
| stats count by WinTimeStamp2, Name

Example timestamp: 133265876804261336. Thanks, Maayan
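A sketch that assigns the converted value to _time and uses timechart instead of a string timestamp, so the built-in time-range picker and chart drilldown apply (same Windows FILETIME-to-epoch conversion as in the query above):

```spl
| eval _time=TimeStamp/10000000 - 11644473600
| timechart span=1d count by Name
```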
I need to create a baseline for what is common in an environment before creating a rule. The rule can be as simple as:

search index=x sourcetype=y NOT [search index=x sourcetype=y earliest=-14d | table user]

The issue is doing a historical search using a simple search. I've looked at a few commands, including transaction and streamstats, but did not manage to find a way to run this search recursively. The basic idea is to find a rare value in a specific field that is seen fewer than a set threshold of times (e.g. 10 events) during a 14-day window.
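A sketch of the rare-value baseline itself, assuming the field of interest is user as in the rule above: count occurrences per value over the window and keep only values under the threshold:

```spl
index=x sourcetype=y earliest=-14d
| stats count earliest(_time) AS firstSeen BY user
| where count < 10
```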
Hi Community, we have configured a HF and UFs forwarding data to a Splunk Cloud instance. The problem is that we installed the SCP credential package on the HF but not on the UFs. The UFs are sending data to the HF, but the HF is not forwarding the UF data to Splunk Cloud, even though the HF's own internal logs do reach Splunk Cloud. Is it necessary to install the credential package on the UFs to see UF data in Splunk Cloud through the HF, or are modifications required on the HF to forward the UF data? Please suggest! Regards, Eshwar
Hi there, every time I restart my indexers I get what you see in the attachment, and this happens on all 12 of my indexers. How can this be fixed, and should the fix be applied from the manager node or on each indexer? Can anyone help, please? Thanks in advance.