Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Posts

Hi @sabari80, Performance will vary with the size of the result set, but you can use eventstats and where to remove outlier events based on percentile:

| eventstats p90(pp_user_action_response) as p90_pp_user_action_response
| where pp_user_action_response<=p90_pp_user_action_response

The placement of the commands depends on how you want to calculate the average response time. To calculate the average response time for all requests at or below the 90th percentile, try this (untested):

index="dynatrace" sourcetype="dynatrace:usersession"
| spath output=user_actions path="userActions{}"
| spath output=pp_user_action_application input=user_actions path=application
| where pp_user_action_application="******"
| spath output=user_action_name input=user_actions path=name
| spath output=pp_user_action_response input=user_actions path=visuallyCompleteTime
| eval proper_user_action=substr(user_action_name, 0, 150) ``` did you mean to extract the "proper" user action here? ```
| eventstats p90(pp_user_action_response) as p90_pp_user_action_response by proper_user_action
| where pp_user_action_response<=p90_pp_user_action_response
| stats count as total_calls avg(pp_user_action_response) as avg_pp_user_action_response values(p90_pp_user_action_response) as p90_pp_user_action_response by proper_user_action
| rex field=logs "\|(?<msg>.+)$"
| stats sum(eval(case(msg=="**Starting**",1,msg=="Shutting down",-1))) as bad count(eval(case(msg=="**Starting**",1))) as starts
| eval good=starts-bad
Hi @gbam, Splunk provides an eval function, json_array_to_mv, to convert JSON-like array values to multivalued field values. After conversion, you can use the lookup command just as you would for any other field:

| makeresults
| eval id="[\"123\", \"321\", \"456\"]"
| eval id=json_array_to_mv(id, false())
| lookup gbam_lookup.csv id

_time                 id    x      y
2023-11-10 16:14:53   123   Data   Data2
                      321   Data   Data2
                      456   Data3  Data3

Index 0 of multivalued field id corresponds to index 0 of multivalued fields x and y, index 1 corresponds to index 1, etc.
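If you would rather have one row per id than parallel multivalued fields, mvexpand after the conversion is another option. A minimal sketch, reusing the same hypothetical gbam_lookup.csv lookup from above:

| makeresults
| eval id="[\"123\", \"321\", \"456\"]"
| eval id=json_array_to_mv(id, false())
| mvexpand id
| lookup gbam_lookup.csv id

Each of the three resulting rows then carries its own scalar id, x, and y values, which can be easier to filter or join on downstream.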
Which IP address? Did you find out whether you have any events in that index? What timeframe did you search over?
Looking for help removing outliers (values greater than the 90th percentile response). For example:

Response Time
--------------------
1 Second
2 Seconds
3 Seconds
4 Seconds
5 Seconds
6 Seconds
7 Seconds
8 Seconds
9 Seconds
10 Seconds

The 90th percentile for the above values is 9 seconds. I want to remove the outlier, 10 seconds, and get the average response for the remaining values. My expected average response (after removing the outlier) = 5 seconds.
====================================================
My query is:

index="dynatrace" sourcetype="dynatrace:usersession"
| spath output=user_actions path="userActions{}"
| stats count by user_actions
| spath output=pp_user_action_application input=user_actions path=application
| where pp_user_action_application="******"
| spath output=User_Action_Name input=user_actions path=name
| spath output=pp_user_action_response input=user_actions path=visuallyCompleteTime
| eval User_Action_Name=substr(User_Action_Name,0,150)
| eventstats avg(pp_user_action_response) AS "Avg_Response" by Proper_User_Action
| stats count(pp_user_action_response) As "Total_Calls",perc90(pp_user_action_response) AS "Perc90_Response" by User_Action_Name Avg_Response
| eval Perc90_Response=round(Perc90_Response,0)/1000
| eval Avg_Response=round(Avg_Response,0)/1000
| table Proper_User_Action,Total_Calls,Perc90_Response
I have created an app for a team that I work with, and have set up mapping from our SAML auth so that the people on the team get a role that has access to the app. I would like for these folks, when they log in (they only have this one role, no other roles -- not even the default user role), to land on the home page for the app. As I understand it, that's supposed to be accomplished with the default_namespace parameter, set in $SPLUNK_HOME/etc/apps/user-prefs/local/user-prefs.conf.

In a regular browser window, now, when they log in, they get a 404 page for the app's home page (en-US/app/<appname>/search). If they do it in an incognito/private browsing window, they land on the Launcher app and can then navigate to the app, where everything works just fine. The app's home page exists and is absolutely NOT a 404; after logging in in incognito, the URL they reach when they manually navigate to the app is identical to the link they're landed on when logging in without incognito. (Ideally, I don't want these users to have access to the Launcher app at all. But for now, they have to, in order to work around this.)

We have a distributed environment (multiple indexers, multiple load-balanced search heads with a VIP). This is the first time I've worked in a distributed environment, so I'm assuming it's something to do with that. Any tips on what I'm doing wrong?
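For anyone comparing notes, here is a minimal sketch of the setting being described, as I read the user-prefs.conf documentation -- the stanza name and the <appname> placeholder are assumptions on my part, not a confirmed fix for the 404, and in a load-balanced setup the file presumably needs to be present on every search head:

# $SPLUNK_HOME/etc/apps/user-prefs/local/user-prefs.conf
[general_default]
# default_namespace should be the app's directory name (app ID), not its display label
default_namespace = <appname>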
See if this helps.  It groups results by host, node_name, node_id, active, and type.  If there are 2 in a group then it's a match; otherwise, it isn't.

index="postgresql" sourcetype="postgres" host=FLSM-ZEUS-PSQL-*
| fields host, node_name, node_id, active, type
| where NOT isnull(node_name)
| stats count by host, node_name, node_id, active, type
| eval match = if(count=2, "Yes", "No")
| fields - count
Hi, when using the jdk8+ javaagent version 22.12.0, I see the error below:

$ java -javaagent:/cache/javaagent.jar -version
Unable to locate appagent version to use - Java agent disabled
openjdk version "1.8.0_382"
OpenJDK Runtime Environment (Zulu 8.72.0.17-CA-linux64) (build 1.8.0_382-b05)
OpenJDK 64-Bit Server VM (Zulu 8.72.0.17-CA-linux64) (build 25.382-b05, mixed mode)

What is the compatible javaagent version for the above Java version?
Match = when the same field on both hosts has the same value. In the example below, both server1 and server2 have a value of "1" in Field_a; that constitutes a match.

Server1 - Field_a=1
Server2 - Field_a=1

I wish to verify that the values in each of the four fields on server1 match the values in each of the four fields on server2:

Server1        Server2
node_name  =   node_name
node_id    =   node_id
active     =   active
type       =   type
Example logs:

2022-08-19 08:10:53.0593|**Starting**
2022-08-19 08:10:53.5905|fff
2022-08-19 08:10:53.6061|dd
2022-08-19 08:10:53.6218|Shutting down
2022-08-19 08:10:53.6218|**Starting**
2022-08-19 08:10:53.6374|fffff
2022-08-19 08:10:53.6686|ddd
2022-08-19 08:10:53.6843|**Starting**
2022-08-19 08:10:54.1530|aa
2022-08-19 08:10:54.1530|vv

From this I have created three columns: Devicenumber, _time, and Description. If a **Starting** message is followed by "Shutting down", it should be classified as good; if a **Starting** message is not followed by "Shutting down", it should be classified as bad. From the above example, there should be 2 bad and 1 good. If there is only one row, containing only Starting and no Shutting down recorded, it should also be classified as bad.
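One way to sketch this (untested; it assumes the extracted field holding the text after the pipe is called Description, matching the column named above, and that each device's events should be evaluated separately):

| transaction Devicenumber keepevicted=true startswith=eval(Description=="**Starting**") endswith=eval(Description=="Shutting down")
| eval status=if(closed_txn==1, "good", "bad")
| stats count by status

transaction marks complete Starting-to-Shutting-down pairs with closed_txn=1, and keepevicted=true retains Starting events that never saw a shutdown so they can be counted as bad.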
Hi @ch_payroc, The diff search command can quickly identify differences between fields, just as the diff program does for files:

| stats values(dst) as dst by _time
``` convert multivalued dst field to multiline field for diff comparison ```
| eval dst=mvjoin(dst, urldecode("%0a"))
| diff attribute=dst

2023-11-07 07:25:43.208
10.240.0.0/30
10.241.0.0/30
10.242.0.0/30
@@ -1,3 +1,3 @@
 10.240.0.0/30
-10.241.0.0/30
-10.242.0.0/30
+10.241.0.0/31
+10.245.0.0/30
6

Using diff context=true will provide slightly different output:

2023-11-07 07:25:43.208
10.240.0.0/30
10.241.0.0/30
10.242.0.0/30
*** 1,3 ****
  10.240.0.0/30
! 10.241.0.0/30
! 10.242.0.0/30
--- 1,3 ----
  10.240.0.0/30
! 10.241.0.0/31
! 10.245.0.0/30
8
There you go again using the word "match" without explaining what constitutes a match in this use case.   How would a human know if there is a match or not?  Once we know that then we can try to figure out how to get Splunk to make the same determination.
IMO, syslog should be the onboarding choice of last resort.  There are too many syslog "standards", and issues always arise (like yours). Since you're building your own ingestion program, consider sending the data to Splunk using the HTTP Event Collector (HEC).  See "To Add Data Directly to an Index" at https://dev.splunk.com/enterprise/docs/devtools/python/sdk-python/howtousesplunkpython/howtogetdatapython
I have 2 servers, FLSM-ZEUS-PSQL-01 and FLSM-ZEUS-PSQL-02. Both servers are part of a SQL cluster, and they both have identical records on them. The fields on both servers are node_name, node_id, active, and type. What I wish to do is come up with a search that makes sure the fields on both servers match. Some of them are multivalue fields. The reason for this is that if the cluster isn't communicating correctly, the records may become out of sync. If this happens, I'll create an alert letting me know.
I'm trying to run a lookup against a list of values in an array. I have a CSV which looks as follows:

id    x      y
123   Data   Data2
321   Data   Data2
456   Data3  Data3

The field from the search is an array which looks as follows: ["123", "321", "456"]

I want to map the lookup values. Do I need to iterate over the field, can I use a lookup, or what is the best option?
Hi @sjringo, Your original search may work as expected with the transaction keepevicted option, which will retain transactions without a closing event:

index=anIndex sourcetype=aSourcetype aJobName AND ("START of script" OR "COMPLETED OK")
| rex "(?<event_name>(START of script)|(COMPLETED OK))"
| eval event_name=CASE(event_name="START of script", "script_start", event_name="COMPLETED OK", "script_complete")
| rex field=_raw "Batch::(?<batchJobName>[^\s]*)"
| transaction keepevicted=true host batchJobName startswith=(event_name="script_start") endswith=(event_name="script_complete")
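If it also helps to flag which jobs never logged a completion, the closed_txn field that transaction adds (1 for complete, 0 for evicted) can be evaluated afterward. A small, untested sketch continuing the pipeline above:

| eval run_status=if(closed_txn==1, "completed", "no completion seen")
| table _time host batchJobName run_status duration

duration is also added by transaction, so completed jobs get their elapsed time alongside the status.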
great answer, was very useful, thanks.
Have you tried sorting your data after doing the append? Per the transaction command docs, the data needs to be in descending time-order for the command to work correctly:

| sort 0 -_time

When you do an append, you might be tacking on "earlier" timestamps that are not seen as the transaction command works on the stream of data.
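As a rough sketch of where that sort would sit (the index, sourcetype, and transaction fields below are placeholders, since the actual searches aren't shown in this thread):

index=main sourcetype=primary_source
| append [ search index=main sourcetype=secondary_source ]
| sort 0 -_time
| transaction host startswith="START" endswith="COMPLETE"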
Hi @Roynsky, With your sample data represented by the following events:

2023-11-10 17:00:10 Result=YES
2023-11-10 17:00:07 Result=NO
2023-11-10 17:00:05 Result=NO
2023-11-10 17:00:00 Result=YES

and sorted by _time descending (the default event sort order), here are two options:

1.
| streamstats reset_before="("Result==\"YES\"")" max(_time) as end_time
| eval duration=end_time-_time
| stats max(duration) as duration by end_time

=> end_time,duration
1699635600,0
1699635610,5

The delta between 17:00:05 and 17:00:10 is 5 seconds ending at 17:00:10.

2.
source="Roynsky_time_delta.txt" host="splunk" sourcetype="roynsky_time_delta"
| transaction endswith=eval(Result=="YES") ``` or | transaction endswith=Result=YES for an exact term match ```
| table _time duration

_time,duration
1699635605,5
1699635600,0

The delta between 17:00:05 and 17:00:10 is 5 seconds starting at 17:00:05.

I don't have Symantec Endpoint Protection sample data available, but if actions have correlation identifiers associated with each sequence of quarantine events, you might also use stats:

| stats range(_time) as duration by correlation_id ``` or whatever the field is called ```
What have you tried so far? These two eval commands

| eval group = if(match(cs2, "^Secret Server"), cs2, null())
| eval user = if(match(cs2, "^Secret Server"), null(), cs2)

become these two EVAL statements in props.conf:

EVAL-group = if(match(cs2, "^Secret Server"), cs2, null())
EVAL-user = if(match(cs2, "^Secret Server"), null(), cs2)

Assuming, that is, the cs2 field is already extracted. See https://docs.splunk.com/Documentation/Splunk/9.1.1/Admin/Propsconf#Field_extraction_configuration