All Posts

I have created an app for a team that I work with, and have set up mapping from our SAML auth so that the people on the team get a role that has access to the app. I would like these folks, when they log in (they have only this one role, no other roles -- not even the default user role), to land on the home page for the app. As I understand it, that's supposed to be accomplished with the default_namespace parameter, set in $SPLUNK_HOME/etc/apps/user-prefs/local/user-prefs.conf.

In a regular browser window, now, when they log in, they get a 404 page for the app's home page (en-US/app/<appname>/search). If they do it in an incognito/private browsing window, they land on the Launcher app and can then navigate to the app, and it works just fine. The app's home page exists and is absolutely NOT a 404; after logging in in incognito, the URL they reach when they manually navigate to the app is identical to the link they land on when logging in without incognito. (Ideally, I don't want these users to have access to the Launcher app at all. But for now, they have to, in order to work around this.)

We have a distributed environment (multiple indexers, multiple load-balanced search heads behind a VIP). This is the first time I've worked in a distributed environment, so I'm assuming it's something to do with that. Any tips on what I'm doing wrong?
See if this helps. It groups results by host, node_name, node_id, active, and type. If there are 2 in a group then it's a match; otherwise, it isn't.

index="postgresql" sourcetype="postgres" host=FLSM-ZEUS-PSQL-*
| fields host, node_name, node_id, active, type
| where NOT isnull(node_name)
| stats count by host, node_name, node_id, active, type
| eval match = if(count=2, "Yes", "No")
| fields - count
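For readers less familiar with SPL, the count-of-2 idea above can be sketched in plain Python. This is a minimal illustration, not a Splunk feature: the records below are hypothetical, and the grouping is done on the four comparison fields (ignoring host), so a group seen on both hosts yields a count of 2.

```python
from collections import Counter

# Hypothetical records pulled from the two hosts; field names mirror the post.
records = [
    {"host": "FLSM-ZEUS-PSQL-01", "node_name": "n1", "node_id": 1, "active": True, "type": "primary"},
    {"host": "FLSM-ZEUS-PSQL-02", "node_name": "n1", "node_id": 1, "active": True, "type": "primary"},
    {"host": "FLSM-ZEUS-PSQL-01", "node_name": "n2", "node_id": 2, "active": True, "type": "replica"},
    {"host": "FLSM-ZEUS-PSQL-02", "node_name": "n2", "node_id": 2, "active": False, "type": "replica"},
]

# Analogue of `stats count by node_name, node_id, active, type`:
# a group present on both hosts counts 2 and is therefore a match.
groups = Counter(
    (r["node_name"], r["node_id"], r["active"], r["type"]) for r in records
)
matches = {key: ("Yes" if count == 2 else "No") for key, count in groups.items()}
```

In this sketch the n1 record matches (both hosts agree on all four fields), while the n2 records disagree on active and so fall into two separate groups, each with a count of 1.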
Hi,

When using JDK 8 with javaagent version 22.12.0, I see the error below:

$ java -javaagent:/cache/javaagent.jar -version
Unable to locate appagent version to use - Java agent disabled
openjdk version "1.8.0_382"
OpenJDK Runtime Environment (Zulu 8.72.0.17-CA-linux64) (build 1.8.0_382-b05)
OpenJDK 64-Bit Server VM (Zulu 8.72.0.17-CA-linux64) (build 25.382-b05, mixed mode)

What is the compatible javaagent version for the above Java version?
Match = when the same field on both hosts has the same value. In the example below, both Server1 and Server2 have a value of "1" in Field_a; that constitutes a match.

Server1 - Field_a=1
Server2 - Field_a=1

I wish to verify that the values in each of the four fields on Server1 match the values in each of the four fields on Server2:

Server1        Server2
node_name  =   node_name
node_id    =   node_id
active     =   active
type       =   type
Example logs:

2022-08-19 08:10:53.0593|**Starting**
2022-08-19 08:10:53.5905|fff
2022-08-19 08:10:53.6061|dd
2022-08-19 08:10:53.6218|Shutting down
2022-08-19 08:10:53.6218|**Starting**
2022-08-19 08:10:53.6374|fffff
2022-08-19 08:10:53.6686|ddd
2022-08-19 08:10:53.6843|**Starting**
2022-08-19 08:10:54.1530|aa
2022-08-19 08:10:54.1530|vv

From this I have created three columns: Devicenumber, _time, Description.

If a **Starting** message is followed by "Shutting down", it should be classified as good; if a **Starting** message is not followed by "Shutting down", it should be classified as bad. From the above example, there should be 2 bad and one good. If there is only one row, containing a Starting with no Shutting down recorded, that case should also be classified as bad.
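The good/bad rule described in the question can be stated procedurally: a **Starting** entry is "good" only if a "Shutting down" entry appears before the next **Starting** (or before the end of the log). Here is a minimal Python sketch of that rule, run against the sample above (this is illustration only, not the SPL solution; variable names are made up):

```python
# Classify each **Starting** session as good (shut down cleanly) or bad.
log = """2022-08-19 08:10:53.0593|**Starting**
2022-08-19 08:10:53.5905|fff
2022-08-19 08:10:53.6061|dd
2022-08-19 08:10:53.6218|Shutting down
2022-08-19 08:10:53.6218|**Starting**
2022-08-19 08:10:53.6374|fffff
2022-08-19 08:10:53.6686|ddd
2022-08-19 08:10:53.6843|**Starting**
2022-08-19 08:10:54.1530|aa
2022-08-19 08:10:54.1530|vv"""

statuses = []
open_session = False
for line in log.splitlines():
    message = line.split("|", 1)[1]
    if message == "**Starting**":
        if open_session:
            statuses.append("bad")   # previous session never shut down
        open_session = True
    elif message == "Shutting down" and open_session:
        statuses.append("good")
        open_session = False
if open_session:
    statuses.append("bad")           # trailing session with no shutdown
```

Run against the sample log, this yields one good session and two bad ones, matching the expected result stated in the question.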
Hi @ch_payroc,

The diff search command can quickly identify differences between fields, just as the diff program does for files:

| stats values(dst) as dst by _time
``` convert multivalued dst field to multiline field for diff comparison ```
| eval dst=mvjoin(dst, urldecode("%0a"))
| diff attribute=dst

2023-11-07 07:25:43.208
10.240.0.0/30
10.241.0.0/30
10.242.0.0/30

@@ -1,3 +1,3 @@
 10.240.0.0/30
-10.241.0.0/30
-10.242.0.0/30
+10.241.0.0/31
+10.245.0.0/30

Using diff context=true will provide slightly different output:

2023-11-07 07:25:43.208
10.240.0.0/30
10.241.0.0/30
10.242.0.0/30

*** 1,3 ****
  10.240.0.0/30
! 10.241.0.0/30
! 10.242.0.0/30
--- 1,3 ----
  10.240.0.0/30
! 10.241.0.0/31
! 10.245.0.0/30
There you go again using the word "match" without explaining what constitutes a match in this use case.   How would a human know if there is a match or not?  Once we know that then we can try to figu... See more...
There you go again using the word "match" without explaining what constitutes a match in this use case.   How would a human know if there is a match or not?  Once we know that then we can try to figure out how to get Splunk to make the same determination.
IMO, syslog should be the onboarding choice of last resort. There are too many syslog "standards", and issues always arise (like yours). Since you're building your own ingestion program, consider sending the data to Splunk using the HTTP Event Collector (HEC). See "To Add Data Directly to an Index" at https://dev.splunk.com/enterprise/docs/devtools/python/sdk-python/howtousesplunkpython/howtogetdatapython
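As a rough sketch of what sending to HEC looks like from a custom ingestion script (the hostname, port, index, sourcetype, and token below are placeholders, not real values; the actual send is left commented out):

```python
import json
import urllib.request

# Placeholder endpoint and token - substitute your own HEC URL and token.
HEC_URL = "https://splunk.example.com:8088/services/collector/event"
HEC_TOKEN = "00000000-0000-0000-0000-000000000000"

def build_hec_request(event, sourcetype="myapp:json", index="main"):
    """Build an HEC POST request; kept separate so the payload can be inspected."""
    payload = json.dumps({"event": event, "sourcetype": sourcetype, "index": index})
    return urllib.request.Request(
        HEC_URL,
        data=payload.encode("utf-8"),
        headers={
            "Authorization": f"Splunk {HEC_TOKEN}",
            "Content-Type": "application/json",
        },
    )

req = build_hec_request({"message": "hello from the ingestion script"})
# urllib.request.urlopen(req)  # uncomment to send against a real HEC endpoint
```

The key parts are the /services/collector/event endpoint and the "Splunk <token>" Authorization header; the event itself can be any JSON value under the "event" key.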
I have 2 servers, FLSM-ZEUS-PSQL-01 and FLSM-ZEUS-PSQL-02. Both servers are part of a SQL cluster, and they have identical records on them. The fields on both servers are node_name, node_id, active, and type. What I wish to do is come up with a search that makes sure the fields on both servers match. Some of them are multivalue fields. The reason for this is that if the cluster isn't communicating correctly, the records may become out of sync; if that happens, I'll have an alert let me know.
I'm trying to run a lookup against a list of values in an array. I have a CSV which looks as follows:

id,x,y
123,Data,Data2
321,Data,Data2
456,Data3,Data3

The field from the search is an array which looks as follows: ["123", "321", "456"]

I want to map the lookup values. Do I need to iterate over the field, or can I use a lookup, or what is the best option?
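Outside Splunk, the mapping being asked for amounts to a keyed dictionary lookup: build a dict from the CSV keyed on id, then look each array element up in it. A minimal Python sketch using the sample rows (purely illustrative; the variable names are made up):

```python
import csv
import io
import json

# The CSV from the question, keyed on id for O(1) lookups.
csv_text = """id,x,y
123,Data,Data2
321,Data,Data2
456,Data3,Data3"""

lookup = {row["id"]: row for row in csv.DictReader(io.StringIO(csv_text))}

# The JSON-style array field from the search, mapped through the lookup.
ids = json.loads('["123", "321", "456"]')
mapped = [lookup.get(i) for i in ids]  # None for ids missing from the CSV
```

Within Splunk itself, a common approach to the same problem is to expand the multivalue field first (for example with mvexpand) and then apply the lookup to each expanded value.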
Hi @sjringo,

Your original search may work as expected with the transaction keepevicted option, which will retain transactions without a closing event:

index=anIndex sourcetype=aSourcetype aJobName AND ("START of script" OR "COMPLETED OK")
| rex "(?<event_name>(START of script)|(COMPLETED OK))"
| eval event_name=CASE(event_name="START of script", "script_start", event_name="COMPLETED OK", "script_complete")
| rex field=_raw "Batch::(?<batchJobName>[^\s]*)"
| transaction keepevicted=true host batchJobName startswith=(event_name="script_start") endswith=(event_name="script_complete")
great answer, was very useful, thanks.
Have you tried sorting your data after doing the append? Per the transaction command docs, the data needs to be in descending time order for the command to work correctly:

| sort 0 -_time

When you do an append, you might be tacking on "earlier" timestamps that the transaction command does not handle as it works through the stream of data.
Hi @Roynsky,

With your sample data represented by the following events:

2023-11-10 17:00:10 Result=YES
2023-11-10 17:00:07 Result=NO
2023-11-10 17:00:05 Result=NO
2023-11-10 17:00:00 Result=YES

and sorted by _time descending (the default event sort order), here are two options:

1.

| streamstats reset_before="("Result==\"YES\"")" max(_time) as end_time
| eval duration=end_time-_time
| stats max(duration) as duration by end_time

=>

end_time,duration
1699635600,0
1699635610,5

The delta between 17:00:05 and 17:00:10 is 5 seconds ending at 17:00:10.

2.

source="Roynsky_time_delta.txt" host="splunk" sourcetype="roynsky_time_delta"
| transaction endswith=eval(Result=="YES") ``` or | transaction endswith=Result=YES for an exact term match ```
| table _time duration

=>

_time,duration
1699635605,5
1699635600,0

The delta between 17:00:05 and 17:00:10 is 5 seconds starting at 17:00:05.

I don't have Symantec Endpoint Protection sample data available, but if actions have correlation identifiers associated with each sequence of quarantine events, you might also use stats:

| stats range(_time) as duration by correlation_id ``` or whatever the field is called ```
What have you tried so far? These two eval commands:

| eval group = if(match(cs2, "^Secret Server"), cs2, null())
| eval user = if(match(cs2, "^Secret Server"), null(), cs2)

become these two EVAL statements in props.conf:

EVAL-group = if(match(cs2, "^Secret Server"), cs2, null())
EVAL-user = if(match(cs2, "^Secret Server"), null(), cs2)

Assuming, that is, that the cs2 field is already extracted. See https://docs.splunk.com/Documentation/Splunk/9.1.1/Admin/Propsconf#Field_extraction_configuration
Do I need to include the IP address?
I have a working query that uses transaction to find the starting/ending log events. I am trying to make some changes, but transaction is not working as I expected.

In my current working example I am looking for a 'job name' and then the starting and ending log events, using one query:

index=anIndex sourcetype=aSourcetype aJobName AND ("START of script" OR "COMPLETED OK")

This works fine when there are no issues, but if a job fails there will be multiple "START of script" events and only one "COMPLETED OK" event. So, I tried reworking my query as follows, to get only the most recent of the start/completed log events:

index=anIndex sourcetype=aSourcetype aJobName AND "START of script"
| head 1
| append [ index=anIndex sourcetype=aSourcetype aJobName AND "COMPLETED OK" | head 1 ]

But when I get to the part that creates a transaction, the transaction only has the starting log event:

| rex "(?<event_name>(START of script)|(COMPLETED OK))"
| eval event_name=CASE(event_name="START of script", "script_start", event_name="COMPLETED OK", "script_complete")
| eval event_time=strftime(_time, "%Y-%m-%d %H:%M:%S")
| eval {event_name}_time=_time
| rex field=_raw "Batch::(?<batchJobName>[^\s]*)"
| transaction keeporphans=true host batchJobName startswith=(event_name="script_start") endswith=(event_name="script_complete")

Is the use of | append [...] the cause? If append cannot be used with transaction, what other way can I get the data I'm looking for?
Hi,

How can we fix this issue on the ES search head?

Health Check: msg="A script exited abnormally with exit status: 1" input=".$SPLUNK_HOME/etc/apps/splunk-dashboard-studio/bin/save_image_and_icon_on_install.py" stanza="default"

Thanks.
@richgalloway, how can we modify this for props.conf? Thanks.
This topic is covered pretty well via the props/transforms settings, as such:

transforms.conf

[mv_extract]
REGEX = \*\*\sRABAX\:\s(?<ABAPRABAX>.*)
MV_ADD = true
REPEAT_MATCH = true

Reference: https://community.splunk.com/t5/Getting-Data-In/Multi-value-field-extraction-props-conf-transforms-conf/m-p/210426